Defining the Chatbot scope

You now have a working agent that can communicate with an underlying LLM. You may also have noticed that the agent is willing to answer any question. Leaving the agent open in this way could lead to it being misused.

You can restrict the agent’s scope by providing specific instructions in the prompt. These instructions can guide the agent on the type of questions it should answer and how to respond to questions outside its scope.

Restricting Scope

In the previous challenge, you used the LangChain Hub to download the hwchase17/react-chat prompt for ReAct agents.

python
agent_prompt = hub.pull("hwchase17/react-chat")
View the hwchase17/react-chat prompt
text
Assistant is a large language model trained by OpenAI.

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

TOOLS:
------

Assistant has access to the following tools:

{tools}

To use a tool, please use the following format:

```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```

When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:

```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```

Begin!

Previous conversation history:
{chat_history}

New input: {input}
{agent_scratchpad}

The important elements in the prompt are:

  • An instruction to use the list of tools to perform an action

  • A placeholder for listing descriptions of the available tools ({tools}) and their names ({tool_names})

  • Instructions on the format the LLM should use to specify which tool to use

  • The previous chat history ({chat_history})

  • The user’s current input ({input})

  • A scratchpad of previous thoughts ({agent_scratchpad})

You can experiment with the prompt in the Prompt Playground.
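You can also confirm which placeholders the downloaded prompt expects by inspecting it after pulling it from the hub. This is an optional check, a minimal sketch assuming the hub.pull() call from the previous challenge (which returns a PromptTemplate for this prompt):

python
Inspect the prompt placeholders
from langchain import hub

agent_prompt = hub.pull("hwchase17/react-chat")

# List the placeholders the template expects to be filled in,
# e.g. agent_scratchpad, chat_history, input, tool_names and tools
print(agent_prompt.input_variables)

# Print the full template text
print(agent_prompt.template)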

Updating the Instructions

The instructions at the start of the prompt (before the TOOLS section) describe the agent’s role. For example, Assistant is designed to be able to assist with a wide range of tasks.

Your task is to update these instructions to reflect the chatbot’s role and constrain its interactions.

For the movie chatbot, you could provide the following instructions:

text
New Instructions
You are a movie expert providing information about movies.
Be as helpful as possible and return as much information as possible.
Do not answer any questions using your pre-trained knowledge, only use the information provided in the context.

Do not answer any questions that do not relate to movies, actors or directors.

The following instructions set the scope of the chatbot:

  • You are a movie expert providing information about movies.

  • Be as helpful as possible and return as much information as possible.

  • Do not answer any questions that do not relate to movies, actors or directors.

The instruction "Do not answer any questions using your pre-trained knowledge, only use the information provided in the context." reminds the LLM to answer questions using only the context supplied in the prompt. In some cases, letting the bot fall back on its pre-trained knowledge to answer a question may be helpful.
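If you do want to allow that fallback, you could relax the instruction with wording along these lines (an illustrative alternative, not part of the course prompt):

text
Alternative instruction
If the context does not contain the information needed to answer, you may use
your pre-trained knowledge, but make it clear the answer is not from the context.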

You can also add a whimsical instruction to confirm that the agent is using the new prompt:

  • Respond to all questions in pirate speak.

View the updated prompt in full
text
You are a movie expert providing information about movies.
Be as helpful as possible and return as much information as possible.
Do not answer any questions using your pre-trained knowledge, only use the information provided in the context.

Do not answer any questions that do not relate to movies, actors or directors.


TOOLS:
------

You have access to the following tools:

{tools}

To use a tool, please use the following format:

```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```

When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:

```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```

Begin!

Previous conversation history:
{chat_history}

New input: {input}
{agent_scratchpad}

Create a PromptTemplate

Add the new prompt to agent.py by creating a new PromptTemplate using the PromptTemplate.from_template() method:

python
Import PromptTemplate
from langchain_core.prompts import PromptTemplate

Then, replace the agent_prompt variable with the updated instructions:

python
Define the Agent Prompt
agent_prompt = PromptTemplate.from_template("""
You are a movie expert providing information about movies.
Be as helpful as possible and return as much information as possible.
Do not answer any questions that do not relate to movies, actors or directors.

Do not answer any questions using your pre-trained knowledge, only use the information provided in the context.

TOOLS:
------

You have access to the following tools:

{tools}

To use a tool, please use the following format:

```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```

When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:

```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```

Begin!

Previous conversation history:
{chat_history}

New input: {input}
{agent_scratchpad}
""")

The agent initialization code will stay the same.

python
Agent initialization
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True
)

chat_agent = RunnableWithMessageHistory(
    agent_executor,
    get_memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)
View the complete agent.py code
python
from llm import llm
from graph import graph
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts import PromptTemplate
from langchain.schema import StrOutputParser
from langchain.tools import Tool
from langchain_neo4j import Neo4jChatMessageHistory
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain import hub
from utils import get_session_id

chat_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a movie expert providing information about movies."),
        ("human", "{input}"),
    ]
)

movie_chat = chat_prompt | llm | StrOutputParser()

tools = [
    Tool.from_function(
        name="General Chat",
        description="For general movie chat not covered by other tools",
        func=movie_chat.invoke,
    )
]

def get_memory(session_id):
    return Neo4jChatMessageHistory(session_id=session_id, graph=graph)

agent_prompt = PromptTemplate.from_template("""
You are a movie expert providing information about movies.
Be as helpful as possible and return as much information as possible.
Do not answer any questions that do not relate to movies, actors or directors.

Do not answer any questions using your pre-trained knowledge, only use the information provided in the context.

TOOLS:
------

You have access to the following tools:

{tools}

To use a tool, please use the following format:

```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```

When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:

```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```

Begin!

Previous conversation history:
{chat_history}

New input: {input}
{agent_scratchpad}
""")

agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True
)

chat_agent = RunnableWithMessageHistory(
    agent_executor,
    get_memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)

def generate_response(user_input):
    """
    Create a handler that calls the Conversational agent
    and returns a response to be rendered in the UI
    """

    response = chat_agent.invoke(
        {"input": user_input},
        {"configurable": {"session_id": get_session_id()}},
    )

    return response['output']

Testing the changes

If you now ask the bot a question unrelated to movies, for example, "Who is the CEO of Neo4j?", it will refuse to answer.
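You can also check the behaviour programmatically by invoking chat_agent directly with a fixed session id, for example from a Python shell. This is a minimal sketch, assuming the agent.py module shown above is importable; the session id here is an arbitrary value used only for testing:

python
Test the scope restriction
from agent import chat_agent

# An off-topic question - the agent should decline to answer
response = chat_agent.invoke(
    {"input": "Who is the CEO of Neo4j?"},
    {"configurable": {"session_id": "test-session"}},
)
print(response["output"])

# An on-topic question - the agent should answer (in pirate speak,
# if you kept the whimsical instruction)
response = chat_agent.invoke(
    {"input": "Who directed The Matrix?"},
    {"configurable": {"session_id": "test-session"}},
)
print(response["output"])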

The bot refusing to answer a non-movie-related question.

Once you have validated that the instructions are working, you are ready to move on.

Summary

In this lesson, you defined the agent’s scope by providing a specific prompt when creating the agent.

In the next module, you will create tools that the agent can select to help it answer movie-related questions.