Creating a React Frontend for an AI Chatbot
An interactive React chatbot UI with real-time streaming, markdown rendering and auto-scrolling
September 13, 2024
In our previous post, “Building an AI Chatbot Powered by Your Data”, we explored how to implement Retrieval-Augmented Generation (RAG) to create a production-ready AI chatbot that can answer questions about new technology trends, powered by the latest reports from leading institutions like the World Bank, World Economic Forum, McKinsey, Deloitte and the OECD.
However, a very important piece was still missing from this full-stack chatbot application: the user interface. In this post, we are building a React frontend for our technology trends AI chatbot.
This frontend is designed to be flexible and can be easily adapted for any other AI chatbot application, like a customer support bot or a coding assistant. All you need is to connect it to an API that provides two specific chat endpoints (or adjust it to your own API endpoints).
The frontend will be a simple React application without additional frameworks, using Tailwind CSS for styling and Vite as the build tool and bundler. By building the frontend from the ground up, you'll discover that creating a functional chatbot interface doesn't require much code, and the code involved is quite simple.
Our frontend implementation will include features like streaming responses with Server-Sent Events, markdown support for chat responses, a mobile-friendly responsive design and error handling during chat interactions. And you can extend it with more advanced features like authentication, chat history and even chat sharing to build your own ChatGPT-like application tailored to your specific needs.
By the end of this post, you'll have a fully functional, customizable AI chatbot frontend that looks like this:
You can access a live version of the chatbot app here. And the complete source code for this project (frontend and backend) is available on this GitHub repository.
#Frontend Project Structure
This is the core project structure of our React application:
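Something along these lines (the file names below are inferred from the components and hooks discussed in this post; check the GitHub repository for the exact layout):

```
src/
├── components/
│   ├── Chatbot.jsx
│   ├── ChatMessages.jsx
│   ├── ChatInput.jsx
│   └── Spinner.jsx
├── hooks/
│   ├── useAutoScroll.js
│   └── useAutosize.js
├── api.js
├── utils.js
├── App.jsx
└── main.jsx
```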
#Main Chatbot Component
The core component of the chatbot frontend app is the `Chatbot` component. It contains the main application state and renders the required subcomponents. Let's take a look at a slightly simplified version of the component code:
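A sketch of what the component could look like (the welcome copy, prop names and import paths here are assumptions; see the GitHub repository for the actual code):

```jsx
import { useState } from 'react';
import { useImmer } from 'use-immer';
import ChatMessages from './ChatMessages';
import ChatInput from './ChatInput';
import api from '../api';
import { parseSSEStream } from '../utils';

function Chatbot() {
  const [chatId, setChatId] = useState(null);
  const [messages, setMessages] = useImmer([]);
  const [newMessage, setNewMessage] = useState('');

  // The last message is the assistant placeholder while a response streams in.
  const isLoading = messages.length > 0 && messages[messages.length - 1].loading;

  async function submitNewMessage() {
    // Covered in detail in the next section.
  }

  return (
    <div>
      {messages.length === 0 && (
        <p>Hi! Ask me anything about the latest technology trends.</p>
      )}
      <ChatMessages messages={messages} isLoading={isLoading} />
      <ChatInput
        newMessage={newMessage}
        isLoading={isLoading}
        setNewMessage={setNewMessage}
        submitNewMessage={submitNewMessage}
      />
    </div>
  );
}

export default Chatbot;
```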
⚠️In general, we are going to be focusing on the structure and core logic of the frontend components. To keep the code snippets concise and easier to read, the Tailwind CSS styling classes will be omitted, as they aren't crucial for understanding the chatbot's functionality. But you can check out the full project code in the GitHub repository.
As you can see at the top of the component, our application has three main state variables:
- `chatId`: Stores the current chat session id.
- `messages`: Holds all the messages in the current chat. Each message contains a `role` (user or assistant), `content`, `loading` and `error` property.
- `newMessage`: Stores the current text in the chat input (before it gets submitted).
Note also how we are using both `useState` and `useImmer` for state management. If you have experience with React, you will know that state must never be updated directly and all state updates must be performed immutably (creating a new object or modifying a deep copy of the object). This can lead to verbose and error-prone code, especially in nested data structures like the `messages` array.
Immer is a tiny and very convenient library that simplifies state updates, and provides the `useImmer` hook for that purpose. Immer allows you to write more concise and intuitive code by applying all updates to a temporary draft object, and takes care of creating the next state immutably for you. For instance, you can conveniently update the last object in the `messages` array like this:
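Here is a runnable sketch of that pattern. The hypothetical `produce` helper below mimics Immer's draft mechanism with `structuredClone` so the snippet runs outside React; with `useImmer` you would pass the same recipe directly to `setMessages`:

```js
// With useImmer you would write:
//   setMessages(draft => { draft[draft.length - 1].content += chunk; });
// The `produce` helper here is a stand-in that imitates what Immer does.
function produce(base, recipe) {
  const draft = structuredClone(base);
  recipe(draft);
  return draft;
}

const messages = [
  { role: 'user', content: 'Hi!', loading: false, error: false },
  { role: 'assistant', content: '', loading: true, error: false },
];

// "Mutate" the draft; the original state is left untouched.
const next = produce(messages, draft => {
  draft[draft.length - 1].content += 'Hello';
  draft[draft.length - 1].loading = false;
});

console.log(next[1].content);     // content of the updated last message
console.log(messages[1].content); // original state is unchanged
```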
The JSX structure of the Chatbot component is straightforward. It renders three elements:
- An initial welcome message (displayed if there are no messages yet).
- The `ChatMessages` component to display the chat conversation.
- The `ChatInput` component for user input.
#Submitting New Messages & Parsing the Response
The core functionality of our chatbot lies in the `submitNewMessage` function:
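A hedged sketch of the function, written against the state and helpers introduced above (the `{ id }` shape of `api.createChat`'s response is an assumption; see the repository for the actual code):

```jsx
async function submitNewMessage() {
  const trimmedMessage = newMessage.trim();
  if (!trimmedMessage || isLoading) return;

  // Add the user's message plus a loading placeholder for the assistant.
  setMessages(draft => [
    ...draft,
    { role: 'user', content: trimmedMessage },
    { role: 'assistant', content: '', loading: true },
  ]);
  setNewMessage('');

  try {
    // Lazily create the chat session on the first message.
    let currentChatId = chatId;
    if (!currentChatId) {
      const { id } = await api.createChat(); // assumed response shape
      setChatId(id);
      currentChatId = id;
    }

    const stream = await api.sendChatMessage(currentChatId, trimmedMessage);

    // Append each streamed chunk to the assistant message as it arrives.
    for await (const textChunk of parseSSEStream(stream)) {
      setMessages(draft => {
        draft[draft.length - 1].content += textChunk;
      });
    }

    setMessages(draft => {
      draft[draft.length - 1].loading = false;
    });
  } catch (err) {
    console.error(err);
    setMessages(draft => {
      draft[draft.length - 1].loading = false;
      draft[draft.length - 1].error = true;
    });
  }
}
```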
Let's break down what is happening:
- We make sure that the input message is not empty and that a response isn't already loading before proceeding.
- We add the user's message to the chat and create a placeholder assistant message with the `loading` property set to true (useful to display the spinner while it loads).
- If there is no existing chat session, we create a new one using the `api.createChat` function.
- We then use `api.sendChatMessage` to send the user's message to the backend, which returns a stream as the response.
- We use the `parseSSEStream` utility function to convert the SSE stream into an async iterator of text chunks. For each new text chunk received, we update the assistant message, creating a real-time streaming effect.
- Once the response finishes streaming, we set the assistant message's `loading` property to false.
- If there are any errors in the process, we set the assistant message's `error` property to true to display an error message in the chat interface.
The `api.js` file contains the two functions (`createChat` and `sendChatMessage`) that interact with the backend API endpoints:
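A sketch of what these functions could look like (the base URL and endpoint paths are assumptions; adjust them to match your backend):

```js
const BASE_URL = '/api'; // assumption: point this at your backend

async function createChat() {
  const res = await fetch(`${BASE_URL}/chats`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
  });
  if (!res.ok) throw new Error(`HTTP error: ${res.status}`);
  return res.json(); // JSON response containing the new chat id
}

async function sendChatMessage(chatId, message) {
  const res = await fetch(`${BASE_URL}/chats/${chatId}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`HTTP error: ${res.status}`);
  return res.body; // ReadableStream of SSE bytes, handled by the caller
}

// In the real module these would be exported, e.g.:
//   export default { createChat, sendChatMessage };
const api = { createChat, sendChatMessage };
```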
As you can see in the code, they both use the native Fetch API. The `createChat` function returns a JSON response with the new chat id, while `sendChatMessage` returns the response body directly, as it's a streaming response that we need to handle differently.
Finally, let's take a look at the utility function that parses the SSE stream, taking advantage of the eventsource-parser library to simplify the SSE data extraction:
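Since the real implementation depends on that library, here is a dependency-free sketch that approximates it by parsing the `data:` lines by hand (the actual code pipes the stream through eventsource-parser's `EventSourceParserStream` instead):

```js
// Approximate, dependency-free version of parseSSEStream.
// It decodes the byte stream to text and yields the data of each
// server-sent event (events are separated by a blank line).
async function* parseSSEStream(stream) {
  let buffer = '';
  // TextDecoderStream converts the incoming bytes into text chunks.
  for await (const chunk of stream.pipeThrough(new TextDecoderStream())) {
    buffer += chunk;
    const events = buffer.split('\n\n');
    buffer = events.pop(); // keep the last, possibly incomplete event
    for (const event of events) {
      for (const line of event.split('\n')) {
        if (line.startsWith('data: ')) yield line.slice(6);
      }
    }
  }
}
```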
The function applies two transformations to the input stream: `TextDecoderStream()` converts the incoming bytes into text, and `EventSourceParserStream()` parses the individual server-sent events. It then iterates through the events and yields each event's data (which contains a text chunk of the assistant's response).
Notice how the function is an async generator, which is why we can iterate over the text chunks with a simple `for await...of` loop in the `submitNewMessage` function.
#Displaying Chat Messages
The `ChatMessages` component is responsible for rendering the message history. Let's take a look at a simplified version of the code (excluding the CSS styling classes):
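A sketch along the lines described below (the icon asset path and error copy are assumptions):

```jsx
import Markdown from 'react-markdown';
import Spinner from './Spinner';
import useAutoScroll from '../hooks/useAutoScroll';
import userIcon from '../assets/user.svg'; // hypothetical asset path

function ChatMessages({ messages, isLoading }) {
  // Ref used by the auto-scroll hook to observe the messages container.
  const scrollContentRef = useAutoScroll(isLoading);

  return (
    <div ref={scrollContentRef}>
      {messages.map(({ role, content, loading, error }, idx) => (
        <div key={idx}>
          {role === 'user' && <img src={userIcon} alt='user avatar' />}
          {role === 'assistant' ? (
            <Markdown>{content}</Markdown>
          ) : (
            <div>{content}</div>
          )}
          {loading && !content && <Spinner />}
          {error && <p>Something went wrong. Please try again.</p>}
        </div>
      ))}
    </div>
  );
}

export default ChatMessages;
```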
The component handles different types of message visualizations:
- User messages are displayed with a user icon.
- Assistant messages are rendered using the `Markdown` component provided by the react-markdown library. This is very useful, as LLM responses are often formatted in Markdown with rich text formatting, paragraphs, lists and other elements.
- When an assistant message is loading and has no content yet, a `Spinner` component is displayed.
- If there are any errors processing an assistant response, an error icon and message are displayed below.
To improve the user experience, we also implement auto-scrolling when new assistant messages are streamed, with a custom `useAutoScroll` hook. If you are curious about the details, you can check out the full code here. This is a brief breakdown of how the hook performs auto-scrolling:
- It defines and returns a `scrollContentRef` that we add to the chat messages container element. Using this ref and the Resize Observer Web API, we can monitor the chat messages container for changes in size (when new content is added) and automatically scroll to the bottom if the scrollbar isn't at the bottom.
- It also includes a smart disable feature, common in AI chat applications: if the user manually scrolls up while an assistant message is being streamed, it temporarily disables auto-scrolling. This allows the user to read any part of the conversation history without being interrupted by auto-scrolling.
- Auto-scrolling is re-enabled when the user scrolls back to the bottom, or when a new assistant message starts streaming.
- An important detail to note is that this hook assumes the entire document (the `html` element) is the scrollable container, and therefore uses `document.documentElement` for scroll measurements and scrolling operations. If your application uses a different scrollable container (e.g., a div with `overflow: scroll`), you would need to modify the hook to use a ref for that specific container.
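Putting that breakdown together, the hook might be sketched roughly like this (a simplified approximation of the linked implementation; the scroll-position checks and smooth-scroll behavior are assumptions):

```jsx
import { useRef, useEffect } from 'react';

function useAutoScroll(active) {
  const scrollContentRef = useRef(null);
  const isDisabled = useRef(false);
  const prevScrollTop = useRef(0);

  useEffect(() => {
    function scrollToBottom() {
      const el = document.documentElement;
      if (!isDisabled.current && el.scrollHeight - el.clientHeight > el.scrollTop) {
        el.scrollTo({ top: el.scrollHeight, behavior: 'smooth' });
      }
    }

    // Re-scroll whenever the observed chat content changes size.
    const observer = new ResizeObserver(scrollToBottom);
    if (scrollContentRef.current) observer.observe(scrollContentRef.current);

    function handleScroll() {
      const el = document.documentElement;
      const atBottom = el.scrollHeight - el.clientHeight <= el.scrollTop + 1;
      // Scrolling up during streaming disables auto-scroll;
      // reaching the bottom again re-enables it.
      if (active && el.scrollTop < prevScrollTop.current) {
        isDisabled.current = true;
      } else if (atBottom) {
        isDisabled.current = false;
      }
      prevScrollTop.current = el.scrollTop;
    }
    window.addEventListener('scroll', handleScroll);

    return () => {
      observer.disconnect();
      window.removeEventListener('scroll', handleScroll);
    };
  }, [active]);

  // Re-enable auto-scroll when a new assistant message starts streaming.
  useEffect(() => {
    if (active) isDisabled.current = false;
  }, [active]);

  return scrollContentRef;
}

export default useAutoScroll;
```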
#User Input Interface
The final piece of our chatbot frontend is the user input interface. This is implemented in the `ChatInput` component, which allows users to type and submit their messages. This is a simplified version of the code:
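A hedged sketch of the component (the placeholder text and the exact way `useAutosize` is wired are assumptions):

```jsx
import useAutosize from '../hooks/useAutosize';

function ChatInput({ newMessage, isLoading, setNewMessage, submitNewMessage }) {
  // The hook resizes the textarea whenever its content changes.
  const textareaRef = useAutosize(newMessage);

  function handleKeyDown(e) {
    // Enter submits; Shift+Enter falls through and inserts a newline.
    if (e.key === 'Enter' && !e.shiftKey && !isLoading) {
      e.preventDefault();
      submitNewMessage();
    }
  }

  return (
    <div>
      <textarea
        ref={textareaRef}
        rows='1'
        value={newMessage}
        onChange={e => setNewMessage(e.target.value)}
        onKeyDown={handleKeyDown}
        placeholder='Ask me anything...'
      />
      <button onClick={submitNewMessage} disabled={isLoading}>
        Send
      </button>
    </div>
  );
}

export default ChatInput;
```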
The `ChatInput` component includes a textarea element for typing messages and a send button to submit them. The textarea also includes auto-resizing functionality via a custom `useAutosize` hook. This hook dynamically adjusts the height of the textarea based on its content, allowing it to grow as the user types and shrink when content is deleted. You can see the hook code here.
The component also includes a `handleKeyDown` function that allows users to submit messages by simply pressing Enter (without Shift). At the same time, it preserves the textarea's native behavior of adding newlines with Shift+Enter, giving users the flexibility to format longer messages or add line breaks for clarity.
We have now completed the implementation of a full-stack AI chatbot. You can use this project as a starting point to build upon and customize for your specific needs.
All the techniques that we have covered in this post and the previous one (Retrieval-Augmented Generation, vector databases, semantic search, asynchronous programming, structured outputs, real-time SSE streaming, markdown rendering, auto-scrolling) provide a practical framework that can help you create your own AI chatbot applications.
I hope this was helpful, and I look forward to seeing what you build next with it!