Let’s start by getting the project up and running.
If you plan to complete the course using the online IDE, you can skip to Exploring bot.py.
Local install
To run the project locally, you need to install the project dependencies.
Online Labs
If you prefer, you can use the buttons on each lesson to complete the code in an online IDE provided by Gitpod. To use Gitpod, you will need to register with a GitHub, GitLab, or Bitbucket account.
Python
To run the project, you will need Python set up locally. We have designed the project to work with Python v3.11.
If you do not have Python installed, you can follow the installation instructions on Python.org.
Python v3.12+
As of writing, LangChain doesn’t support Python v3.12 or above. You can download Python v3.11 from python.org/downloads. You can verify your Python version by running the following command.
python --version
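If you have more than one version of Python installed, one option is to create a virtual environment pinned to v3.11 before installing the dependencies. This is a minimal sketch, assuming the python3.11 executable is on your PATH:
python3.11 -m venv venv
source venv/bin/activate
python --version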
Setting up the Project
The project repository is hosted on GitHub.
You can use the GitHub UI or CLI to clone the repository or download a ZIP archive. We recommend forking the repository so you have a personal copy for future reference.
To clone the repository using the git CLI, you can run the following command.
git clone https://github.com/neo4j-graphacademy/llm-chatbot-python
Installing Dependencies
The project has four dependencies: streamlit, langchain, openai and neo4j-driver.
To install the dependencies, you can run the pip install command.
pip install -r requirements.txt
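For reference, the requirements.txt file is what declares those packages. Conceptually it looks something like the following; this listing is illustrative only, and the file in the repository may pin specific versions.
streamlit
langchain
openai
neo4j-driver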
Starting the App
The bot.py file in the root folder contains the main application code.
To start the app, run the streamlit run command.
streamlit run bot.py
This command will start a server on http://localhost:8501. Once you have run the command, you should see the chat interface in your browser.
When you send a message, it will be rendered next to a red icon representing a user message. The app will wait for one second, then render the same message next to an orange robot icon representing an assistant message.
Exploring bot.py
We have purposely kept the code simple, so you can focus on the LLM integration.
If you are interested in how to build the Chat interface from scratch, check out the Build conversational apps documentation.
The majority of the code is contained in the bot.py file.
Let’s take a look at bot.py in more detail.
View the contents of bot.py
import streamlit as st
from utils import write_message

# tag::setup[]
# Page Config
st.set_page_config("Ebert", page_icon=":movie_camera:")
# end::setup[]

# tag::session[]
# Set up Session State
if "messages" not in st.session_state:
    st.session_state.messages = [
        {"role": "assistant", "content": "Hi, I'm the GraphAcademy Chatbot! How can I help you?"},
    ]
# end::session[]

# tag::submit[]
# Submit handler
def handle_submit(message):
    """
    Submit handler:

    You will modify this method to talk with an LLM and provide
    context using data from Neo4j.
    """
    # Handle the response
    with st.spinner('Thinking...'):
        # TODO: Replace this with a call to your LLM
        from time import sleep
        sleep(1)
        write_message('assistant', message)
# end::submit[]

# tag::chat[]
# Display messages in Session State
for message in st.session_state.messages:
    write_message(message['role'], message['content'], save=False)

# Handle any user input
if prompt := st.chat_input("What is up?"):
    # Display user message in chat message container
    write_message('user', prompt)

    # Generate a response
    handle_submit(prompt)
# end::chat[]
Page Config
The code starts by calling st.set_page_config() to configure the title and icon used on the page.
# Page Config
st.set_page_config("Ebert", page_icon=":movie_camera:")
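st.set_page_config() also accepts optional keyword arguments for further customization. The course code does not use them, but as a sketch using Streamlit’s documented page options:
# A sketch only; optional page options not used by the course code
st.set_page_config(
    page_title="Ebert",
    page_icon=":movie_camera:",
    layout="wide",                     # use the full browser width
    initial_sidebar_state="collapsed"  # start with the sidebar hidden
)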
App Session State
The next block of code checks the session state for the current user. The session is used to save a list of messages between the user and the LLM.
If an array of messages has not already been set, a list containing a default greeting from the assistant is set.
# Set up Session State
if "messages" not in st.session_state:
    st.session_state.messages = [
        {"role": "assistant", "content": "Hi, I'm the GraphAcademy Chatbot! How can I help you?"},
    ]
The session state will persist for as long as the user keeps their browser tab open.
As the app state changes, certain sections of the UI may be re-rendered. Storing a list of messages to the session state ensures that no information is lost during the re-rendering.
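To see this pattern in isolation, here is a minimal, self-contained sketch, separate from the course code, showing a value in st.session_state surviving Streamlit’s reruns:
import streamlit as st

# Streamlit re-runs the entire script on every interaction; values stored
# in st.session_state survive those reruns, just like the messages list.
if "count" not in st.session_state:
    st.session_state.count = 0

if st.button("Increment"):
    st.session_state.count += 1

st.write(f"Button clicked {st.session_state.count} times")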
Chat Messages
Within a container, any messages held in the session state are written to the screen using the write_message() helper function.
# Display messages in Session State
for message in st.session_state.messages:
    write_message(message['role'], message['content'], save=False)
For brevity, the write_message() helper function has been abstracted into the utils.py file.
import streamlit as st

def write_message(role, content, save=True):
    """
    This is a helper function that saves a message to the
    session state and then writes a message to the UI
    """
    # Append to session state
    if save:
        st.session_state.messages.append({"role": role, "content": content})

    # Write to UI
    with st.chat_message(role):
        st.markdown(content)
The function accepts two positional arguments: the role of the author, either user or assistant, and the message.
An additional save parameter can be passed to instruct the function to append the message to the session state.
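For example, these hypothetical calls show both forms; pass save=False when re-rendering messages that are already stored:
# Render a new message and append it to the session state (the default)
write_message('assistant', 'Hello!')

# Render a message without saving it, e.g. when replaying stored history
write_message('assistant', 'Hello again!', save=False)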
The block concludes by setting a prompt variable that will contain the user input.
When the user sends their message, the write_message() function is used to save the message to the session state and render the message in the UI.
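The corresponding code from the listing above:
# Handle any user input
if prompt := st.chat_input("What is up?"):
    # Display user message in chat message container
    write_message('user', prompt)

    # Generate a response
    handle_submit(prompt)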
Handling Submissions
The handle_submit() function mocks an interaction by calling the sleep() function to pause for one second before repeating the user’s input.
# Submit handler
def handle_submit(message):
    """
    Submit handler:

    You will modify this method to talk with an LLM and provide
    context using data from Neo4j.
    """
    # Handle the response
    with st.spinner('Thinking...'):
        # TODO: Replace this with a call to your LLM
        from time import sleep
        sleep(1)
        write_message('assistant', message)
You will modify this function to add interactions with the LLM.
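As a rough sketch of where this is heading, and not the course’s final implementation, the mock will eventually be replaced with a real call. Here, call_llm is a hypothetical placeholder for the LLM integration you will build in the following modules:
# Sketch only; call_llm is a hypothetical placeholder, not a real API
def handle_submit(message):
    # Handle the response
    with st.spinner('Thinking...'):
        response = call_llm(message)   # replaces the sleep() mock
        write_message('assistant', response)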
Check Your Understanding
Server Address
What Local URL would you use to view the Streamlit app in your browser?
- ❏ http://localhost:1234
- ❏ http://localhost:7474
- ❏ http://localhost:7687
- ✓ http://localhost:8501
Hint
After running the streamlit run bot.py command, you should see an output similar to the following:
You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.4.20:8501
The answer to this question is the Local URL written to the console.
Solution
The answer is http://localhost:8501.
Summary
In this lesson, you obtained a copy of the course code, installed the dependencies, and used the streamlit run command to start the app.
In the next module, you will start writing the code to interact with the LLM.