🚨🤖New Agent Release🤖🚨
We can use @OpenAI's new `functions` parameter to create a new type of agent (`openai-functions`), now available in Python and JS
Links to documentation and a thread on what went into it below 👇
First, if you just want to jump right to trying it out:
Python Docs: github.com
JS Docs: js.langchain.com
Under the hood, we are heavily utilizing the new `functions` parameter available in the chat model
First, we convert the LangChain tool spec to the function spec the API expects
We then call the language model with those parameters
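As a minimal sketch of those two steps (the helper name and tool are hypothetical; the real LangChain conversion utility may differ), a tool is turned into the JSON-schema dict the `functions` parameter takes, and that list is passed along with the messages:

```python
import json

def format_tool_to_openai_function(name: str, description: str, parameters: dict) -> dict:
    """Convert a tool definition into the JSON-schema shape the `functions` parameter expects."""
    return {
        "name": name,
        "description": description,
        "parameters": parameters,  # JSON schema describing the tool's arguments
    }

# A hypothetical search tool, converted to a function spec
search_function = format_tool_to_openai_function(
    name="search",
    description="Search the web for a query",
    parameters={
        "type": "object",
        "properties": {"query": {"type": "string", "description": "the search query"}},
        "required": ["query"],
    },
)

# The converted specs then go into the chat completion call, roughly:
# response = client.chat.completions.create(
#     model=..., messages=messages, functions=[search_function]
# )
print(json.dumps(search_function, indent=2))
```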
Because of this new API, the prompt is a lot simpler - we don't waste any tokens telling the LLM what tools it has, how to structure output, etc
That's very nice!
We now get back a result from the LLM
If it contains the `function_call` argument, we parse that into a tool invocation
If it doesn't, then we just return to the user
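That branching step can be sketched like this, using plain dicts for the model's message (the parser name is made up for illustration; note the `arguments` field comes back as a JSON string and has to be parsed):

```python
import json

def parse_ai_message(message: dict) -> dict:
    """Decide whether the model wants to call a tool or respond to the user."""
    if "function_call" in message:
        call = message["function_call"]
        # `arguments` is a JSON-encoded string, so parse it into a dict
        return {"tool": call["name"], "tool_input": json.loads(call["arguments"])}
    # No function_call: the model is answering the user directly
    return {"output": message["content"]}

# A response that requests a tool invocation
msg = {
    "role": "assistant",
    "content": None,
    "function_call": {"name": "search", "arguments": '{"query": "weather in SF"}'},
}
action = parse_ai_message(msg)
print(action)
```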
We then go and execute the tool as usual
We then introduce a new type of message - a FunctionMessage - to pass the result of calling the tool back to the LLM
This message has `role=function` and is something new OpenAI introduced
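In raw chat-completion terms, that message is just a dict with `role="function"`, the tool's name, and the tool's output as content — a sketch (the helper is hypothetical):

```python
def make_function_message(name: str, result: str) -> dict:
    """Package a tool's result as a role=function message for the next model call."""
    return {"role": "function", "name": name, "content": result}

observation = make_function_message("search", "It is 65F and sunny in SF.")
print(observation)
```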
We then continue until `function_call` is not returned from the LLM, meaning it's safe to return to the user!
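Putting the whole loop together — call the model, execute any requested function, append the `role=function` result, and stop once there's no `function_call` — a self-contained sketch with a stubbed model (the real agent executor handles far more, e.g. callbacks and limits):

```python
import json

def run_agent(call_model, tools: dict, user_input: str, max_iterations: int = 10) -> str:
    """Loop: call the model, execute any requested function, feed the result
    back as a role=function message, and return when no function_call appears."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_iterations):
        message = call_model(messages)
        if "function_call" not in message:
            return message["content"]  # safe to return to the user
        call = message["function_call"]
        result = tools[call["name"]](json.loads(call["arguments"]))
        messages.append(message)
        messages.append({"role": "function", "name": call["name"], "content": result})
    raise RuntimeError("agent did not finish within max_iterations")

# Demo with a stubbed model: first requests the tool, then answers directly
responses = iter([
    {"role": "assistant", "content": None,
     "function_call": {"name": "search", "arguments": '{"query": "SF weather"}'}},
    {"role": "assistant", "content": "It is sunny in SF."},
])
answer = run_agent(
    call_model=lambda msgs: next(responses),
    tools={"search": lambda args: "sunny"},
    user_input="What's the weather in SF?",
)
print(answer)  # It is sunny in SF.
```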
Along the way, the current agent executor framework handles any errors (either in parsing or when calling the tool itself)
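One way that error handling can be pictured (this is a simplified sketch, not the executor's actual code): failures in argument parsing or in the tool itself become observation strings the LLM sees on the next turn, so it can retry:

```python
import json

def safe_tool_call(tool, raw_arguments: str) -> str:
    """Run a tool, turning parsing or execution errors into an observation
    string the LLM can see and recover from on its next step."""
    try:
        args = json.loads(raw_arguments)   # parsing can fail on malformed JSON
    except json.JSONDecodeError as e:
        return f"Could not parse arguments: {e}"
    try:
        return tool(args)                  # the tool itself can raise
    except Exception as e:
        return f"Tool raised an error: {e}"

print(safe_tool_call(lambda a: a["query"].upper(), '{"query": "hi"}'))  # HI
print(safe_tool_call(lambda a: a["query"], 'not json'))
```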
Overall, this fit quite nicely into the existing agent framework
Next up is to make this agent type (`openai-functions`) usable when you're using a SQL agent, a CSV agent, etc
But that might wait for tomorrow :)