Now that the agent is ready, you can hook it into the front end.
Inside modules/agent/index.ts, you will find a call() function. This function is called by the route handler when the chat form in the UI is submitted.
export async function call(input: string, sessionId: string): Promise<string> {
  // TODO: Replace this code with an agent
  await sleep(2000);
  return input;
}
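For context, the sleep() helper is likely a simple Promise wrapper around setTimeout. The exact implementation is not shown in this lesson, but it probably resembles the following sketch:

```typescript
// Hypothetical implementation of the sleep() helper used by the
// placeholder code: resolves a Promise after `ms` milliseconds.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```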
As you can see, the function waits two seconds before returning the input back to the user.
Create a new instance of your agent and return the result of the .invoke() method to complete this challenge.
Calling the agent
Inside the call() function, start by creating the objects that the initAgent() function expects.
The agent requires an LLM.
const llm = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  // Note: only provide a baseURL when using the GraphAcademy Proxy
  configuration: {
    baseURL: process.env.OPENAI_API_BASE,
  },
});
The retrieval tool requires an embedding model.
const embeddings = new OpenAIEmbeddings({
  openAIApiKey: process.env.OPENAI_API_KEY,
  configuration: {
    baseURL: process.env.OPENAI_API_BASE,
  },
});
To interact with the graph, the agent should use the singleton instance created in the Initializing Neo4j lesson.
// Get Graph Singleton
const graph = await initGraph();
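The singleton pattern behind initGraph() was covered in the Initializing Neo4j lesson. As a rough illustration (not the course code), a module-level variable caches the first connection so that repeated calls reuse it rather than opening a new one:

```typescript
// Illustrative sketch of a singleton initializer. The `Graph` type and
// `connect()` function here are placeholders, not the real course code.
type Graph = { query: (cypher: string) => Promise<unknown[]> };

let graph: Graph | undefined;

async function connect(): Promise<Graph> {
  // Placeholder for the real Neo4j connection logic
  return { query: async () => [] };
}

async function initGraph(): Promise<Graph> {
  // Only connect on the first call; all later calls reuse the instance
  if (!graph) {
    graph = await connect();
  }
  return graph;
}
```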
Use the initAgent() function to create a new agent instance.
Use the .invoke() method to send the input argument to the agent, passing the sessionId in the configurable object.
Because .invoke() resolves to a string, you can return the result directly.
const agent = await initAgent(llm, embeddings, graph);
const res = await agent.invoke({ input }, { configurable: { sessionId } });
return res;
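To illustrate why sessionId is passed as a configurable: a chain or agent can use it to look up conversation history for that particular session, keeping different users' chats separate. The sketch below (not the course code) shows the idea with a simple in-memory map:

```typescript
// Illustration only: per-session history keyed by sessionId.
// In the course, history is stored in Neo4j rather than in memory.
const histories = new Map<string, string[]>();

function getHistory(sessionId: string): string[] {
  if (!histories.has(sessionId)) {
    histories.set(sessionId, []);
  }
  return histories.get(sessionId)!;
}

// Messages appended for one session are invisible to another
getHistory("session-1").push("user: hello");
```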
Completed function
If you have followed the instructions correctly, your code should resemble the following:
export async function call(input: string, sessionId: string): Promise<string> {
  const llm = new ChatOpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
    // Note: only provide a baseURL when using the GraphAcademy Proxy
    configuration: {
      baseURL: process.env.OPENAI_API_BASE,
    },
  });

  const embeddings = new OpenAIEmbeddings({
    openAIApiKey: process.env.OPENAI_API_KEY,
    configuration: {
      baseURL: process.env.OPENAI_API_BASE,
    },
  });

  // Get Graph Singleton
  const graph = await initGraph();

  const agent = await initAgent(llm, embeddings, graph);
  const res = await agent.invoke({ input }, { configurable: { sessionId } });

  return res;
}
Testing your changes
Run the npm run dev command to start the application in development mode. You should see the agent thinking and then responding to your questions.
npm run dev
Try asking the chatbot "Who acted in the movie Neo4j - Into the Graph?"
It works!
Once you’re happy with the response from the chatbot, hit the button below to mark the lesson as completed.
Summary
Congratulations! You should now have a working chatbot.
However, you may have noticed that the agent will respond to any question, no matter how obscene.
In the next optional challenge, you will learn how to modify the agent prompt.