Agents wrap a model and give it access to a set of tools. These tools may access additional data sources, APIs, or functionality. The model is used to determine which of the tools to use to complete a task.
The agent you will create can chat about movies and search YouTube for movie trailers, using the YouTubeSearchTool to perform the search.
Movie trailer agent
Open the 2-llm-rag-python-langchain\chat_agent.py file and review the program before running it.
import os
from dotenv import load_dotenv
load_dotenv()
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import Tool
from langchain import hub
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain.schema import StrOutputParser
from langchain_neo4j import Neo4jChatMessageHistory, Neo4jGraph
from uuid import uuid4
SESSION_ID = str(uuid4())
print(f"Session ID: {SESSION_ID}")
llm = ChatOpenAI(openai_api_key=os.getenv('OPENAI_API_KEY'))
graph = Neo4jGraph(
    url=os.getenv('NEO4J_URI'),
    username=os.getenv('NEO4J_USERNAME'),
    password=os.getenv('NEO4J_PASSWORD'),
)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a movie expert. You find movies from a genre or plot.",
        ),
        ("human", "{input}"),
    ]
)

movie_chat = prompt | llm | StrOutputParser()

def get_memory(session_id):
    return Neo4jChatMessageHistory(session_id=session_id, graph=graph)

tools = [
    Tool.from_function(
        name="Movie Chat",
        description="For when you need to chat about movies. The question will be a string. Return a string.",
        func=movie_chat.invoke,
    )
]

agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)  # use verbose=True to see the agent workflow

chat_agent = RunnableWithMessageHistory(
    agent_executor,
    get_memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)

while True:
    q = input("> ")

    response = chat_agent.invoke(
        {
            "input": q
        },
        {"configurable": {"session_id": SESSION_ID}},
    )

    print(response["output"])
You should be able to identify the following:

- A chat model is being used to have a conversation about movies
- The prompt which sets the context for the LLM and the input variables
- That memory is used to store the conversation history in a Neo4j database
In addition to the above, the following is new:

- A tool is created using the chain:

  tools = [
      Tool.from_function(
          name="Movie Chat",
          description="For when you need to chat about movies. The question will be a string. Return a string.",
          func=movie_chat.invoke,
      )
  ]

- An agent is created that uses the tool:

  agent_prompt = hub.pull("hwchase17/react-chat")
  agent = create_react_agent(llm, tools, agent_prompt)
  agent_executor = AgentExecutor(agent=agent, tools=tools)

- The agent is wrapped in a RunnableWithMessageHistory chain that allows it to interact with the memory:

  chat_agent = RunnableWithMessageHistory(
      agent_executor,
      get_memory,
      input_messages_key="input",
      history_messages_key="chat_history",
  )
Tools are interfaces that an agent can interact with. The name and description help the LLM select which tool to use when presented with a question. Agents support multiple tools, so you pass them to the agent as a list (tools).
Run the agent and ask it movie-related questions.
LangSmith API Warning

The agent uses a prompt hosted on the LangChain Hub. The agent will still run successfully, but you will receive a LangSmithMissingAPIKeyWarning if you don't have a LangSmith API key. You can create a LangSmith Personal Access Token API key and assign it to the LANGSMITH_API_KEY environment variable to remove the warning.
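For example, if you store your other keys in a .env file, you could add the LangSmith key there so that load_dotenv() picks it up. The value below is only a placeholder:

LANGSMITH_API_KEY=your-langsmith-api-key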
Learn more about agents
The following code creates the agent:
agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
There are different types of agents that you can create. This example creates a ReAct (Reasoning and Acting) agent.
An agent requires a prompt. You could create your own prompt (a sketch of what that might look like follows the parameter list below), but in this example, the program pulls a pre-existing prompt from the LangSmith Hub.
The hwchase17/react-chat prompt instructs the model to provide an answer using the tools available in a specific format.
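If you want to see exactly what the model is being instructed to do, you can print the pulled prompt. This is a minimal sketch, assuming the hub returns a string-based PromptTemplate with template and input_variables attributes:

agent_prompt = hub.pull("hwchase17/react-chat")

# The variables the prompt expects, e.g. input, chat_history, agent_scratchpad
print(agent_prompt.input_variables)

# The ReAct instructions the model will follow
print(agent_prompt.template)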
The create_react_agent function creates the agent and expects the following parameters:

- The llm that will manage the interactions and decide which tool to use
- The tools that the agent can use
- The prompt that the agent will use
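As mentioned above, you could write your own prompt instead of pulling one from the hub. The sketch below is illustrative only and is not the hwchase17/react-chat prompt itself; a ReAct prompt needs the tools, tool_names, and agent_scratchpad variables (plus input and, here, chat_history for the memory) so that create_react_agent and the ReAct output parser can work with it:

from langchain_core.prompts import PromptTemplate

# Illustrative hand-written ReAct prompt (not the hub prompt)
custom_agent_prompt = PromptTemplate.from_template("""
You are a movie assistant. You have access to the following tools:

{tools}

Use the following format:

Thought: Do I need to use a tool? Yes or No
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat)
Thought: Do I need to use a tool? No
Final Answer: the final response to the user

Previous conversation history:
{chat_history}

New input: {input}
{agent_scratchpad}
""")

# It could then be passed to create_react_agent in place of the hub prompt:
# agent = create_react_agent(llm, tools, custom_agent_prompt)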
The AgentExecutor class runs the agent. It expects the following parameters:

- The agent to run
- The tools that the agent can use
You may find the following additional parameters useful when initializing an agent:

- max_iterations - the maximum number of iterations to run the LLM for. This is useful in preventing the LLM from running for too long or entering an infinite loop.
- verbose - if True, the agent will print out the LLM output and the tool output.
- handle_parsing_errors - if True, the agent will handle parsing errors and return a message to the user.
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    max_iterations=3,
    verbose=True,
    handle_parsing_errors=True
)
Multiple tools
A key advantage of using agents is that they can use multiple tools.
You can extend this example to allow it to search YouTube for movie trailers by adding the YouTubeSearchTool to the tools list.

Import the YouTubeSearchTool and create a new tool.
from langchain_community.tools import YouTubeSearchTool
youtube = YouTubeSearchTool()
The YouTubeSearchTool tool expects a search term and the number of results passed as a comma-separated string.
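For example, you could call the tool directly; the movie title here is only an illustration, and the result is a string containing a list of YouTube links:

# Direct call: "<search term>,<number of results>"
links = youtube.run("Toy Story trailer,2")
print(links)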
The agent may pass queries containing commas, so create a function to strip the commas from the query and pass the query to the YouTubeSearchTool.
def call_trailer_search(input):
    input = input.replace(",", " ")
    return youtube.run(input)
Finally, add the call_trailer_search function to the tools list.
tools = [
    Tool.from_function(
        name="Movie Chat",
        description="For when you need to chat about movies. The question will be a string. Return a string.",
        func=movie_chat.invoke,
    ),
    Tool.from_function(
        name="Movie Trailer Search",
        description="Use when needing to find a movie trailer. The question will include the word trailer. Return a link to a YouTube video.",
        func=call_trailer_search,
    ),
]
The complete program is shown below:
import os
from dotenv import load_dotenv
load_dotenv()
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import Tool
from langchain import hub
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain.schema import StrOutputParser
from langchain_community.tools import YouTubeSearchTool
from langchain_neo4j import Neo4jChatMessageHistory, Neo4jGraph
from uuid import uuid4
SESSION_ID = str(uuid4())
print(f"Session ID: {SESSION_ID}")
llm = ChatOpenAI(openai_api_key=os.getenv('OPENAI_API_KEY'))
graph = Neo4jGraph(
    url=os.getenv('NEO4J_URI'),
    username=os.getenv('NEO4J_USERNAME'),
    password=os.getenv('NEO4J_PASSWORD'),
)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a movie expert. You find movies from a genre or plot.",
        ),
        ("human", "{input}"),
    ]
)

movie_chat = prompt | llm | StrOutputParser()

youtube = YouTubeSearchTool()

def get_memory(session_id):
    return Neo4jChatMessageHistory(session_id=session_id, graph=graph)

def call_trailer_search(input):
    input = input.replace(",", " ")
    return youtube.run(input)

tools = [
    Tool.from_function(
        name="Movie Chat",
        description="For when you need to chat about movies. The question will be a string. Return a string.",
        func=movie_chat.invoke,
    ),
    Tool.from_function(
        name="Movie Trailer Search",
        description="Use when needing to find a movie trailer. The question will include the word trailer. Return a link to a YouTube video.",
        func=call_trailer_search,
    ),
]

agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

chat_agent = RunnableWithMessageHistory(
    agent_executor,
    get_memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)

while True:
    q = input("> ")

    response = chat_agent.invoke(
        {
            "input": q
        },
        {"configurable": {"session_id": SESSION_ID}},
    )

    print(response["output"])
The model will then use the name and description for each tool to decide which tool to use. When prompted to find a movie trailer, the model should use the YouTubeSearchTool.
[user] Find the movie trailer for the Matrix.
[agent] Here are the movie trailers for "The Matrix":
The Matrix - Official Trailer #1 - https://www.youtube.com/watch?v=vKQi3bBA1y8&pp=ygUKVGhlIE1hdHJpeA%3D%3D
The Matrix - Official Trailer #2 - https://www.youtube.com/watch?v=xrYg_qKX-aI&pp=ygUKVGhlIE1hdHJpeA%3D%3D
However, when asked about movies, genres, or plots, the model will use the Movie Chat tool.
[user] Find a movie about the meaning of life
[agent] Certainly! One movie that explores the meaning of life is "The Tree of Life" directed by Terrence Malick. It follows the journey of a young boy as he grows up in the 1950s and reflects on his experiences and the meaning of existence. It's a visually stunning and thought-provoking film that delves into existential questions.
As the agent also uses the conversation memory, you can refer back to the previous questions, such as finding a trailer for a movie it has recommended:
[user] Can you find the trailer
[agent] Here are two links to the trailer for "The Tree of Life":
Link 1 - https://www.youtube.com/watch?v=RrAz1YLh8nY&pp=ygUQVGhlIFRyZWUgb2YgTGlmZQ%3D%3D
Link 2 - https://www.youtube.com/watch?v=cv-dH5gHi1c&pp=ygUQVGhlIFRyZWUgb2YgTGlmZQ%3D%3D
Agents and tools allow you to create more adaptable and flexible LLM applications that can perform multiple tasks.
Lesson Summary
You learned about agents and how they use multiple tools to perform tasks.
Next you will learn about retrievers.