Although connecting GPT to a Gradio interface may seem trivial, beginners often run into difficulties, so we describe the whole process of creating the simplest chatbot in as much detail as possible.

Gradio is an open-source library. Its advantage is that you can quickly build web applications for ML models and generate APIs for them. You don't need any JavaScript or CSS skills – everything is written in Python.

Gradio is a whole ecosystem of Python and JavaScript libraries. Inside the library you can work with JS and, for example, access an app deployed on Hugging Face by name.

Gradio is most often used to demo models – to test a neural network with a decent interface. But the tool is not limited to two windows for requests and answers. The library also has classes for flexible customization of the appearance: with the gr.Blocks class you can build complex data flows and control how components are displayed.

But today we will limit ourselves to simple functionality.

The task is simple: create a chatbot backed by the most popular LLM – GPT.

We will work with the gr.ChatInterface class: calling gr.ChatInterface(predict).launch() creates and starts a basic chat interface for the model.

predict is the prediction function responsible for the input data that OpenAI receives to generate the response. It takes the history of all messages (for context) and the user's current request.

To begin, we import Gradio and the OpenAI API client, having first installed both packages via pip.

!pip install openai gradio

Additionally, we import getpass (from the standard library) to handle the key securely.

from openai import OpenAI  # client for the OpenAI API
import gradio as gr
import getpass  # secure password/key input

We use the getpass.getpass() function to enter the OpenAI API key securely, then immediately create the client.

# Request the OpenAI API key
api_key = getpass.getpass("Enter OpenAI API Key:")

# Create the OpenAI API client
client = OpenAI(api_key=api_key)

You can obtain the key in your profile after registering on OpenAI's website.

Everything is ready to create our chatbot. Now we need to define the predict function, which will accept the input data.

For our GPT to take into account not only the user's current message, we need to pass in the message history. That way GPT will generate answers with the context in mind.

Accordingly, the predict function will accept two parameters: message (the user’s current message) and history (the history of previous messages in the chat).

def predict(message, history):

Inside the function we will write the logic that builds up the LLM's "context" from the data accumulated in the history argument.

For our history to work, it must first be formatted for the OpenAI ecosystem. To do this, inside the function we create an empty list, history_openai_format, which the messages will go into in processed form.

    history_openai_format = []  # list of messages in a format suitable for OpenAI

Next, you need to separate user requests from GPT responses. Each message is a dictionary with a role and a content field:

content – the text of the message, whether from the model or the person;

user – the role for user requests;

assistant – the role for the chatbot's answers.

Using a for loop, the program sequentially records all user requests and GPT responses from the message history, filling our empty formatted list.

    for human, assistant in history:  # iterate over all messages
        # Assign a role to each message
        history_openai_format.append({"role": "user", "content": human})  # our requests
        history_openai_format.append({"role": "assistant", "content": assistant})  # the chatbot's answers

Important: the model should take into account not only the past history but also the current request we have just typed. Therefore, we append the message argument to the context/message history.

    # Our latest message for the chatbot
    history_openai_format.append({"role": "user", "content": message})

Here is how it looks in practice. Say we have the raw history:

history = [
    ("Hello, who are you?", "I am a virtual assistant."),
    ("Can you help me with my homework?", "Of course, what do you need help with?")
]

and the current message message = "I need help with my math assignment."

Then the formatted history, written out by roles, will be:

history_openai_format = [
    {"role": "user", "content": "Hello, who are you?"},
    {"role": "assistant", "content": "I am a virtual assistant."},
    {"role": "user", "content": "Can you help me with my homework?"},
    {"role": "assistant", "content": "Of course, what do you need help with?"},
    {"role": "user", "content": "I need help with my math assignment."}
]
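The transformation above can be condensed into a small standalone helper (a sketch; the name format_history is ours, not part of Gradio or OpenAI):

```python
def format_history(history, message):
    """Convert Gradio's (user, assistant) tuple history into the
    OpenAI chat-completions message format."""
    formatted = []
    for human, assistant in history:
        formatted.append({"role": "user", "content": human})
        formatted.append({"role": "assistant", "content": assistant})
    # Append the current request last, so the model answers it in context
    formatted.append({"role": "user", "content": message})
    return formatted

history = [
    ("Hello, who are you?", "I am a virtual assistant."),
    ("Can you help me with my homework?", "Of course, what do you need help with?"),
]
messages = format_history(history, "I need help with my math assignment.")
```

Two history pairs plus the current message yield the five-entry list shown above.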

Now we form the request to the model itself via response, passing the entire conversation history, the chosen model, and the temperature (which controls how random the answers are).

    # Form the request to the chatbot with the full conversation history
    response = client.chat.completions.create(
        model='gpt-4o',  # the model to use
        messages=history_openai_format,  # the list of formatted messages with roles
        temperature=1.0  # controls the randomness of the answers
    )

Return the result of the prediction.

    return response.choices[0].message.content

And launch the interface.

# Launch the chatbot interface with the prediction function
gr.ChatInterface(predict).launch()

The whole code looks like this:

from openai import OpenAI  # client for the OpenAI API
import gradio as gr
import getpass  # secure password/key input


# Request the OpenAI API key
api_key = getpass.getpass("Enter OpenAI API Key:")


# Create the OpenAI API client
client = OpenAI(api_key=api_key)


# Prediction function with two parameters:
# message - the current message
# history - the history of messages with the chatbot
def predict(message, history):
    history_openai_format = []  # list of messages in a format suitable for OpenAI
    for human, assistant in history:  # iterate over all messages
        # Assign a role to each message
        history_openai_format.append({"role": "user", "content": human})  # our requests
        history_openai_format.append({"role": "assistant", "content": assistant})  # the chatbot's answers
    # Our latest message for the chatbot
    history_openai_format.append({"role": "user", "content": message})

    # Form the request to the chatbot with the full conversation history
    response = client.chat.completions.create(
        model='gpt-4o',  # the model to use
        messages=history_openai_format,  # the list of formatted messages with roles
        temperature=1.0  # controls the randomness of the answers
    )

    return response.choices[0].message.content  # return the prediction result


# Launch the chatbot interface with the prediction function
gr.ChatInterface(predict).launch()

We believe that even a complete beginner can handle connecting GPT this way.