The core data structure, the Chain, is basically just a function. Combining chains is function composition: literally f(g(x)), except it's incompatible with _your_ plain f's and g's unless you wrap them in an adapter.
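To make the point concrete, here's the "chain" idea in plain Python. This is a sketch, not LangChain code; `summarize` and `translate` are made-up stand-ins for any two LLM-calling steps:

```python
# A "chain" is just a function; combining chains is composition.
# summarize/translate are hypothetical placeholders for LLM calls.

def summarize(text: str) -> str:
    # imagine an LLM call that summarizes
    return f"summary({text})"

def translate(text: str) -> str:
    # imagine an LLM call that translates
    return f"translate({text})"

def compose(f, g):
    """Sequential 'chaining' is nothing more than f(g(x))."""
    return lambda x: f(g(x))

pipeline = compose(translate, summarize)
print(pipeline("some document"))  # translate(summary(some document))
```

No adapter layer needed: any callable composes with any other callable whose output type matches its input.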
They build all these adapters and integrations and make it seem like they're helping you piece together a solution, but in how many cases is the middleman actually necessary? I really don't need a wrapper around the OpenAI client, and for the more complex stuff like Agents: if this is a production app, isn't the agent loop the most critical part, the part you'd want to own? And for notebooks, is it any better than coding directly against your LLM API? You probably won't swap LLM backends, and if the notebook is for education or documentation, I'd rather show the actual OpenAI API calls.
OpenAI's official documentation gives you the code for a tool-running agent anyway. Taking that and editing it can probably be done faster than pip-installing langchain and navigating its docs.
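The tool-running loop that agent frameworks abstract away fits in a few lines. Here's a sketch of that loop with the model call stubbed out so it runs offline; `fake_model`, `get_weather`, and the message shapes are assumptions for illustration, not the real OpenAI client API (in real code you'd call `client.chat.completions.create(...)` with a `tools` schema and parse `tool_calls` from the response):

```python
import json

def get_weather(city: str) -> str:
    # a trivial example tool
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    # Stub standing in for the LLM: asks for a tool on the first turn,
    # then answers once it sees the tool result in the transcript.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Oslo"})}}
    return {"content": "It's sunny in Oslo."}

def run_agent(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = fake_model(messages)
        if "tool_call" in reply:
            # Model requested a tool: run it, feed the result back.
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "content": result})
        else:
            # Model produced a final answer: we're done.
            return reply["content"]

print(run_agent("Weather in Oslo?"))  # It's sunny in Oslo.
```

The whole "agent" is a while loop with a dict dispatch, which is the part you'd probably want to read and own in a production app anyway.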
LangChain doesn't support basic things, like splitting a list result into single items, processing them one by one, and accumulating the results further down the pipeline.
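That fan-out/accumulate pattern is a couple of lines in plain Python. In this sketch, `extract_topics` and `expand_topic` are hypothetical stand-ins for LLM calls:

```python
# Split a list result into items, process each, accumulate downstream.
# extract_topics/expand_topic are made-up placeholders for LLM calls.

def extract_topics(text: str) -> list[str]:
    return text.split(", ")          # imagine an LLM returning a list

def expand_topic(topic: str) -> str:
    return f"notes on {topic}"       # imagine a per-item LLM call

def process(text: str) -> str:
    items = extract_topics(text)                 # split into single items
    expanded = [expand_topic(t) for t in items]  # process one by one
    return "\n".join(expanded)                   # accumulate the results

print(process("caching, retries"))
# notes on caching
# notes on retries
```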
By the time you've built custom chains, custom prompts, and custom agents to support all that, you're basically using their interface and not their code. At that point it's pointless. It's great for demos, I'll give you that, but every time I tried to coerce it into a product, it fell short.
Read this page and mentally swap "chain" for "function": https://python.langchain.com/docs/modules/chains/foundationa...