In the Initializing the LLM lesson of Neo4j & LLM Fundamentals, you learned how to initialize an LLM class and generate a response from an LLM.
In this lesson, you will need to put this learning into practice by creating an LLM instance to communicate with a GPT model using OpenAI.
You will need to:

- Obtain an API key from platform.openai.com
- Create a secrets file to save the API key
- Initialize an instance of the ChatOpenAI class
- Create an instance of the OpenAIEmbeddings model
Setting Streamlit Secrets
To keep it secure, you will store the API key in the Streamlit secrets.toml file.
Create a new file, .streamlit/secrets.toml, and copy the following text into it, adding your OpenAI API key.
OPENAI_API_KEY = "sk-..."
OPENAI_MODEL = "gpt-4"
gpt-4 yields the best results, but other models may work better for your scenario.

You can access values stored in the secrets.toml file using st.secrets:
import streamlit as st
openai_api_key = st.secrets['OPENAI_API_KEY']
openai_model = st.secrets['OPENAI_MODEL']
Keep your secrets safe
The Streamlit documentation outlines four approaches to handling secrets and credentials in your application.
Ensure you do not share your API keys or include them in a git commit.

The .gitignore file includes the .streamlit/secrets.toml file, so git won't push the API key to GitHub.
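If you are setting up a project of your own, the relevant entry looks like this (a minimal sketch; the course project's .gitignore may contain additional entries):

```
# Keep Streamlit secrets out of version control
.streamlit/secrets.toml
```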
Initializing an OpenAI LLM
As you will use the LLM across the application, you should include the LLM instance in a module that you can import.
Open the llm.py file in the project root and create a new llm instance of the ChatOpenAI class:
# Create the LLM
import streamlit as st
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_key=st.secrets["OPENAI_API_KEY"],
    model=st.secrets["OPENAI_MODEL"],
)
The LLM is initialized with the openai_api_key and model values stored in the secrets.toml file.
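As a rough sketch of how the llm object is used elsewhere in the application, the snippet below uses a hypothetical stand-in class (StubChatModel) in place of ChatOpenAI so it runs without an OpenAI API key; a real ChatOpenAI call has the same shape, with the reply text available on response.content:

```python
# StubChatModel is a hypothetical stand-in for ChatOpenAI so this snippet
# runs without an API key; the invoke(...) -> .content pattern is the same.

class StubResponse:
    """Stands in for the message object returned by a chat model."""
    def __init__(self, content):
        self.content = content

class StubChatModel:
    """Hypothetical stand-in for ChatOpenAI."""
    def invoke(self, prompt):
        # A real ChatOpenAI instance would send the prompt to the OpenAI API
        return StubResponse(f"(model reply to: {prompt})")

llm = StubChatModel()
response = llm.invoke("What is Neo4j?")
print(response.content)  # → (model reply to: What is Neo4j?)
```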
Initializing an Embedding Model
To use the Vector Search Index, you must create an instance of the OpenAIEmbeddings model.
LangChain will use this model when creating embeddings to find documents similar to the user's input using Neo4j's vector index.
# Create the Embedding model
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    openai_api_key=st.secrets["OPENAI_API_KEY"]
)
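To illustrate why embeddings make "find similar documents" possible: embed_query on the embeddings object returns a list of floats, and a vector index ranks stored documents by their similarity to the query vector, commonly cosine similarity. The tiny 4-dimensional vectors below are invented placeholders for real embeddings, which have far more dimensions:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vector = [0.1, 0.9, 0.2, 0.0]      # stand-in for embeddings.embed_query(user_input)
doc_vector = [0.1, 0.8, 0.3, 0.0]        # stand-in for a stored document embedding
unrelated_vector = [0.9, 0.0, 0.0, 0.4]  # stand-in for an unrelated document

print(cosine_similarity(query_vector, doc_vector))        # close to 1.0
print(cosine_similarity(query_vector, unrelated_vector))  # much lower
```

A Neo4j vector index performs this comparison at scale, so the application never computes similarities by hand; this sketch only shows the idea behind the ranking.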
Using the LLM
Once you have completed the steps, you can import the llm and embeddings objects into other modules within the project:
from llm import llm, embeddings
That’s it!
Once you have completed the steps above, click the button to mark the lesson as completed.
Summary
In this lesson, you created the instances required to interact with OpenAI's LLMs.
In the next lesson, you will create the classes required to connect to Neo4j.