In the previous challenge, you created a tool that used the Neo4jVector store and a retriever chain to identify movies with similar plots to the user’s input.
This approach is relatively easy to set up but, as you learned in the Vectors & Semantic Search module of the Neo4j & LLM Fundamentals course, it can have drawbacks.
Semantic Search using Vector Similarity relies on relative proximity in vector space, which may not provide a precise match.
Graph-enhanced semantic search combines the nuanced understanding of data from vector search with the contextual insights provided by graph features. By considering the relationships and hierarchies between entities in a broader knowledge network, it produces search results with greater depth and relevance.
In this challenge, you will create a tool that uses the structure of the graph to generate a Cypher statement to answer a question.
To complete this challenge, you must:

- Create a Graph Cypher QA Chain
- Register the Chain as a Tool
Creating a Graph Cypher QA Chain
To create a QA Chain that generates Cypher, you must import the GraphCypherQAChain.

Open the tools/cypher.py file.
import streamlit as st
from llm import llm
from graph import graph
# Create the Cypher QA chain
The streamlit library and the llm and graph objects you created are already imported.
Create the Cypher QA Chain:
from langchain_community.chains.graph_qa.cypher import GraphCypherQAChain

cypher_qa = GraphCypherQAChain.from_llm(
    llm,
    graph=graph,
    verbose=True
)
The GraphCypherQAChain provides a static .from_llm() method for creating a new instance.

The chain uses the schema provided by the Neo4jGraph class to write a Cypher statement and execute it against the graph database.
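Before registering the chain with your agent, you can optionally invoke it on its own to check that it works. A minimal sketch; the question below is only an example, and any movie-related question will do:

# Optional check: the chain's input key is "query" and the answer is
# returned under "result". With verbose=True the generated Cypher is
# also printed to the console.
response = cypher_qa.invoke({"query": "What movies has Tom Hanks acted in?"})
print(response["result"])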
View the complete code
import streamlit as st
from llm import llm
from graph import graph
from langchain_community.chains.graph_qa.cypher import GraphCypherQAChain

cypher_qa = GraphCypherQAChain.from_llm(
    llm,
    graph=graph,
    verbose=True
)
Registering the Graph Cypher QA Chain as a Tool
You can add the Graph Cypher QA Chain as a tool to your agent.
Open the agent.py file, import the cypher_qa chain, and register it as a tool.
from tools.cypher import cypher_qa

tools = [
    Tool.from_function(
        name="General Chat",
        description="For general movie chat not covered by other tools",
        func=movie_chat.invoke,
    ),
    Tool.from_function(
        name="Movie Plot Search",
        description="For when you need to find information about movies based on a plot",
        func=get_movie_plot,
    ),
    Tool.from_function(
        name="Movie information",
        description="Provide information about movies using Cypher",
        func=cypher_qa
    )
]
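Passing the chain itself as func means the tool returns the chain’s full output dictionary, including the original query. If you would prefer the tool to return only the answer text, one option (a sketch only; the run_cypher_qa helper is not part of the course code) is to wrap the chain in a small function:

def run_cypher_qa(question: str) -> str:
    # Hypothetical helper: invoke the chain and return only the answer text
    result = cypher_qa.invoke({"query": question})
    return result["result"]

You would then pass func=run_cypher_qa when registering the "Movie information" tool.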
View the complete code
from llm import llm
from graph import graph
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts import PromptTemplate
from langchain.schema import StrOutputParser
from langchain.tools import Tool
from langchain_community.chat_message_histories import Neo4jChatMessageHistory
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain import hub
from utils import get_session_id
from tools.vector import get_movie_plot
from tools.cypher import cypher_qa
chat_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a movie expert providing information about movies."),
        ("human", "{input}"),
    ]
)
movie_chat = chat_prompt | llm | StrOutputParser()
tools = [
    Tool.from_function(
        name="General Chat",
        description="For general movie chat not covered by other tools",
        func=movie_chat.invoke,
    ),
    Tool.from_function(
        name="Movie Plot Search",
        description="For when you need to find information about movies based on a plot",
        func=get_movie_plot,
    ),
    Tool.from_function(
        name="Movie information",
        description="Provide information about movies using Cypher",
        func=cypher_qa
    )
]

def get_memory(session_id):
    return Neo4jChatMessageHistory(session_id=session_id, graph=graph)
agent_prompt = PromptTemplate.from_template("""
You are a movie expert providing information about movies.
Be as helpful as possible and return as much information as possible.
Do not answer any questions that do not relate to movies, actors or directors.
Do not answer any questions using your pre-trained knowledge, only use the information provided in the context.
TOOLS:
------
You have access to the following tools:
{tools}
To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```
Begin!
Previous conversation history:
{chat_history}
New input: {input}
{agent_scratchpad}
""")
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True
)

chat_agent = RunnableWithMessageHistory(
    agent_executor,
    get_memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)

def generate_response(user_input):
    """
    Create a handler that calls the Conversational agent
    and returns a response to be rendered in the UI
    """

    response = chat_agent.invoke(
        {"input": user_input},
        {"configurable": {"session_id": get_session_id()}},
    )

    return response['output']
Testing the Tool
You can test the new Cypher generation tool by asking the bot a question about a movie. For example, you could ask "What movies has Tom Hanks acted in?".
You can check that the agent used the Cypher QA tool in the console output.
Console Output
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Movie information
Action Input: {"actor": "Tom Hanks"}

> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor {name: "Tom Hanks"})-[:ACTED_IN]->(m:Movie) RETURN m.title
Full Context:
[{'m.title': 'Punchline'}, {'m.title': 'Catch Me If You Can'}, {'m.title': 'Dragnet'}, {'m.title': 'Saving Mr. Banks'}, {'m.title': 'Bachelor Party'}, {'m.title': 'Volunteers'}, {'m.title': 'Man with One Red Shoe, The'}, {'m.title': 'Splash'}, {'m.title': 'Big'}, {'m.title': 'Nothing in Common'}]

> Finished chain.
{'query': '{"actor": "Tom Hanks"}', 'result': "Tom Hanks has acted in the films 'Punchline', 'Catch Me If You Can', 'Dragnet', 'Saving Mr. Banks', 'Bachelor Party', 'Volunteers', 'The Man with One Red Shoe', 'Splash', 'Big', and 'Nothing in Common'."}
Do I need to use a tool? No
Final Answer: Tom Hanks has acted in the films 'Punchline', 'Catch Me If You Can', 'Dragnet', 'Saving Mr. Banks', 'Bachelor Party', 'Volunteers', 'The Man with One Red Shoe', 'Splash', 'Big', and 'Nothing in Common'.

> Finished chain.
Inconsistent Results
The LLM doesn’t return consistent results; its objective is to produce an answer, not the same response every time. The response may be incorrect, the generated Cypher may be invalid and cause an error, or the query may return more data than the LLM can process.
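When an answer looks wrong, it can help to inspect the Cypher the chain generated. A sketch of one way to do this, assuming you recreate the chain with return_intermediate_steps enabled (optional, and not part of the course code):

# Optional debugging aid: expose the generated Cypher alongside the answer.
debug_cypher_qa = GraphCypherQAChain.from_llm(
    llm,
    graph=graph,
    verbose=True,
    return_intermediate_steps=True
)

response = debug_cypher_qa.invoke({"query": "What movies has Tom Hanks acted in?"})
print(response["intermediate_steps"][0]["query"])  # the Cypher the LLM generated
print(response["result"])                          # the answer produced from the results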
In the following two lessons, you will learn how to provide additional context and instructions to the LLM to generate better and more consistent results.
Did it work for you? Once you have completed the steps, click the button below to mark the lesson as completed.
Summary
In this lesson, you created a tool capable of generating a Cypher statement to answer a specific question and executing it against the database. However, the Cypher it generates isn’t perfect.
In the next lesson, you will learn how to handle edge cases by fine-tuning the prompt used to generate the Cypher statement.