(Run this example in Google Colab here)
Prompts, chaining, and prompt engineering are important. However, you might not always know which chain or prompts you need to execute before receiving user input or new data. This is where automation and agents can help. Agents are an active area of development, but some very useful tooling is already available.
In the following, we will explore using LangChain agents with Prediction Guard LLMs to decide on and automate LLM actions.
We will use LangChain again, but this time we will also use a Google search API called SerpAPI. You can get a free API key for SerpAPI here.
To set up an agent that will search the internet on-the-fly and use the LLM to generate a response:
This will verbosely log the agent's activity until it reaches a final answer and generates the response: