
Creating a React Frontend for an AI Chatbot

An interactive React chatbot UI with real-time streaming, markdown rendering and auto-scrolling

September 13, 2024

In our previous post, “Building an AI Chatbot Powered by Your Data”, we explored how to implement Retrieval-Augmented Generation (RAG) to create a production-ready AI chatbot that can answer questions about new technology trends, powered by the latest reports from leading institutions like the World Bank, World Economic Forum, McKinsey, Deloitte and the OECD.

However, there was a very important part missing to complete this full-stack chatbot application: the user interface. In this post, we are building a React frontend for our technology trends AI chatbot.

This frontend is designed to be flexible and can be easily adapted for any other AI chatbot application, like a customer support bot or a coding assistant. All you need is to connect it to an API that provides two specific chat endpoints (or adjust it to your own API endpoints).

The frontend will be a simple React application without additional frameworks, using Tailwind CSS for styling and Vite as a build tool and bundler. By building the frontend from the ground up, you'll discover that creating a functional chatbot interface doesn't require much code, and the code itself is quite simple.

Our frontend implementation will include features like streaming responses with Server-Sent Events (SSE), markdown support for chat responses, a mobile-friendly responsive design, and error handling during chat interactions. And you can extend it with more advanced features like authentication, past chat history, and even chat sharing to build your own ChatGPT-like application tailored to your specific needs.

By the end of this post, you'll have a fully functional, customizable AI chatbot frontend that looks like this:

AI chatbot user interface

You can access a live version of the chatbot app here. And the complete source code for this project (frontend and backend) is available on this GitHub repository.

#Frontend Project Structure

This is the core project structure of our React application:

frontend/
├── public/                  # Public static assets
├── src/
│   ├── assets/              # Images and icons
│   ├── components/
│   │   ├── Chatbot.jsx      # Main chatbot component
│   │   ├── ChatInput.jsx    # User input component
│   │   ├── ChatMessages.jsx # Chat messages component
│   │   └── Spinner.jsx      # Loading spinner component
│   ├── hooks/               # Custom React hooks
│   │   ├── useAutoScroll.js
│   │   └── useAutosize.js
│   ├── api.js               # Functions for backend API communication
│   ├── App.jsx              # Root App component
│   ├── index.css            # Global styles
│   ├── main.jsx             # Entry point for the React app
│   └── utils.js             # Utility functions (parseSSEStream)
├── index.html               # HTML template
├── tailwind.config.js       # Tailwind CSS configuration
└── vite.config.js           # Vite configuration

#Main Chatbot Component

The core component of the chatbot frontend app is the Chatbot component. It contains the main application state and renders the required subcomponents. Let's take a look at a slightly simplified version of the component code:

import { useState } from 'react';
import { useImmer } from 'use-immer';
import ChatMessages from '@/components/ChatMessages';
import ChatInput from '@/components/ChatInput';

function Chatbot() {
  const [chatId, setChatId] = useState(null);
  const [messages, setMessages] = useImmer([]);
  const [newMessage, setNewMessage] = useState('');

  const isLoading = messages.length && messages[messages.length - 1].loading;

  async function submitNewMessage() {
    // Implemented in the next section
  }

  return (
    <div>
      {messages.length === 0 && (
        <div>{/* Chatbot welcome message */}</div>
      )}
      <ChatMessages
        messages={messages}
        isLoading={isLoading}
      />
      <ChatInput
        newMessage={newMessage}
        isLoading={isLoading}
        setNewMessage={setNewMessage}
        submitNewMessage={submitNewMessage}
      />
    </div>
  );
}

export default Chatbot;

⚠️In general, we are going to be focusing on the structure and core logic of the frontend components. To keep the code snippets concise and easier to read, the Tailwind CSS styling classes will be omitted, as they aren't crucial for understanding the chatbot's functionality. But you can check out the full project code in the GitHub repository.

As you can see at the top of the component, our application has three main state variables:

  - chatId: the id of the current chat session, returned by the backend (null until the first message is submitted).
  - messages: the array of chat messages, where each message holds its role ('user' or 'assistant'), its content, and status flags like loading and error.
  - newMessage: the text the user is currently typing in the input box.

Note also how we are using both useState and useImmer for state management. If you have experience with React, you will know that state must never be updated directly and all state updates must be performed immutably (creating a new object or modifying a deep copy of the object). This can lead to verbose and error-prone code, especially in nested data structures like the messages array.

Immer is a tiny and very convenient library that simplifies state updates, and provides the useImmer hook for that purpose. Immer allows you to write more concise and intuitive code by applying all updates to a temporary draft object, and takes care of creating the next state immutably for you. For instance, you can conveniently update the last object in the messages array like this:

setMessages(draft => {
  draft[draft.length - 1].loading = false;
});
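For comparison, here is the same kind of update done manually, without Immer, where every touched level of nesting must be copied immutably (the data below is illustrative):

```javascript
// Without Immer: to change one property of the last message, we must
// create a new array and a new object for the modified element.
const messages = [
  { role: 'user', content: 'Hello', loading: false },
  { role: 'assistant', content: 'Hi there!', loading: true },
];

// Mark the last message as no longer loading, immutably.
const next = messages.map((msg, i) =>
  i === messages.length - 1 ? { ...msg, loading: false } : msg
);

console.log(next[next.length - 1].loading); // false
console.log(messages[1].loading);           // true (original state untouched)
```

With deeper nesting this copying quickly becomes verbose and error-prone, which is exactly what Immer's draft-based updates avoid.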

The JSX structure of the Chatbot component is straightforward. It renders three elements:

  - A welcome message, displayed only while the chat is still empty.
  - The ChatMessages component, which renders the conversation history.
  - The ChatInput component, which handles typing and submitting new messages.

#Submitting New Messages & Parsing the Response

The core functionality of our chatbot lies in the submitNewMessage function:

async function submitNewMessage() {
  const trimmedMessage = newMessage.trim();
  if (!trimmedMessage || isLoading) return;

  setMessages(draft => [...draft,
    { role: 'user', content: trimmedMessage },
    { role: 'assistant', content: '', sources: [], loading: true }
  ]);
  setNewMessage('');

  let chatIdOrNew = chatId;
  try {
    if (!chatId) {
      const { id } = await api.createChat();
      setChatId(id);
      chatIdOrNew = id;
    }

    const stream = await api.sendChatMessage(chatIdOrNew, trimmedMessage);
    for await (const textChunk of parseSSEStream(stream)) {
      setMessages(draft => {
        draft[draft.length - 1].content += textChunk;
      });
    }
    setMessages(draft => {
      draft[draft.length - 1].loading = false;
    });
  } catch (err) {
    console.log(err);
    setMessages(draft => {
      draft[draft.length - 1].loading = false;
      draft[draft.length - 1].error = true;
    });
  }
}

Let's break down what is happening:

  1. We make sure that the input message is not empty and that a response is not already loading before proceeding.
  2. We add the user's message to the chat and create a placeholder assistant message with the loading property set to true (useful to display the spinner while the response loads).
  3. If there is no existing chat session, we create a new one using the api.createChat function.
  4. We then use api.sendChatMessage to send the user's message to the backend, which returns a stream as the response.
  5. We use the parseSSEStream utility function to convert the SSE stream into an async iterator of text chunks. For each new text chunk received, we update the assistant message, creating a real-time streaming effect.
  6. Once the response finishes streaming, we set the assistant message's loading property to false.
  7. If there are any errors in the process, we set the assistant message's error property to true to display an error message in the chat interface.

The api.js file contains the two functions (createChat and sendChatMessage) that interact with the backend API endpoints:

const BASE_URL = import.meta.env.VITE_API_URL;

async function createChat() {
  const res = await fetch(BASE_URL + '/chats', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  });
  const data = await res.json();
  if (!res.ok) {
    return Promise.reject({ status: res.status, data });
  }
  return data;
}

async function sendChatMessage(chatId, message) {
  const res = await fetch(BASE_URL + `/chats/${chatId}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message })
  });
  if (!res.ok) {
    return Promise.reject({ status: res.status, data: await res.json() });
  }
  return res.body;
}

As you can see in the code, they both use the native Fetch API. The createChat function returns a JSON response with the new chat id, while sendChatMessage returns the body directly as it's a streaming response that we need to handle differently.

Finally, let's take a look at the utility function that parses the SSE stream, taking advantage of the eventsource-parser library to simplify the SSE data extraction:

import { EventSourceParserStream } from 'eventsource-parser/stream';

export async function* parseSSEStream(stream) {
  const sseStream = stream
    .pipeThrough(new TextDecoderStream())
    .pipeThrough(new EventSourceParserStream());

  for await (const chunk of sseStream) {
    if (chunk.type === 'event') {
      yield chunk.data;
    }
  }
}

The function applies two transformations to the input stream: TextDecoderStream() converts the incoming bytes into text, and EventSourceParserStream() parses the individual server-sent events. It then iterates through the events and yields each event's data (which contains a text chunk of the assistant's response).

Notice how the function is an async generator, which is why we can iterate over the text chunks with a simple for await...of loop in the submitNewMessage function.
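To see what is happening under the hood, here is a library-free sketch of the same idea: an async generator that buffers incoming bytes, splits them into SSE events on blank lines, and yields each data payload. This is a simplified illustration, not the eventsource-parser implementation (it ignores event names, comments, and multi-line data fields):

```javascript
// Simplified SSE parsing sketch. Accepts any async iterable of byte chunks
// (a fetch ReadableStream also satisfies this in modern runtimes).
async function* parseSSE(byteStream) {
  const decoder = new TextDecoder();
  let buffer = '';
  for await (const bytes of byteStream) {
    buffer += decoder.decode(bytes, { stream: true });
    // Complete SSE events are terminated by a blank line.
    const events = buffer.split('\n\n');
    buffer = events.pop(); // keep any trailing partial event for the next chunk
    for (const event of events) {
      for (const line of event.split('\n')) {
        if (line.startsWith('data:')) yield line.slice(5).trimStart();
      }
    }
  }
}

// Simulate a byte stream where an event is split across network chunks.
async function* fakeByteStream() {
  const enc = new TextEncoder();
  yield enc.encode('data: Hel');
  yield enc.encode('lo\n\ndata: world\n\n');
}

async function collect() {
  const chunks = [];
  for await (const chunk of parseSSE(fakeByteStream())) chunks.push(chunk);
  return chunks;
}

collect().then(chunks => console.log(chunks)); // [ 'Hello', 'world' ]
```

Note how buffering handles events that arrive split across chunks, something the real eventsource-parser library also takes care of for us.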

#Displaying Chat Messages

The ChatMessages component is responsible for rendering the message history. Let's take a look at a simplified version of the code (excluding the CSS styling classes):

import Markdown from 'react-markdown';
import useAutoScroll from '@/hooks/useAutoScroll';
import Spinner from '@/components/Spinner';
import userIcon from '@/assets/images/user.svg';
import errorIcon from '@/assets/images/error.svg';

function ChatMessages({ messages, isLoading }) {
  const scrollContentRef = useAutoScroll(isLoading);

  return (
    <div ref={scrollContentRef}>
      {messages.map(({ role, content, loading, error }, idx) => (
        <div key={idx}>
          {role === 'user' && (
            <img src={userIcon} alt='user icon' />
          )}
          <div>
            <div>
              {(loading && !content) ? <Spinner />
                : (role === 'assistant')
                  ? <Markdown>{content}</Markdown>
                  : <div>{content}</div>
              }
            </div>
            {error && (
              <div>
                <img src={errorIcon} alt='error icon' />
                <span>Error generating the response</span>
              </div>
            )}
          </div>
        </div>
      ))}
    </div>
  );
}

export default ChatMessages;

The component handles the different types of message visualizations:

  - User messages are displayed as plain text next to a user icon.
  - Assistant messages are rendered with markdown support via the react-markdown library.
  - While the assistant message is loading and no content has arrived yet, a Spinner is shown in its place.
  - If a message has its error flag set, an error icon and message are displayed below the content.

To improve the user experience, we also implement auto-scrolling while new assistant messages are streamed in, using a custom useAutoScroll hook. In essence, the hook keeps the chat view pinned to the bottom as new content arrives, but stops forcing the scroll position when the user scrolls up to read earlier messages. If you are curious about the details, you can check out the full code here.
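The core decision behind this kind of hook, namely scrolling only while the user is already near the bottom, can be sketched as a pure function. The names and threshold below are illustrative assumptions, not the repository's actual code:

```javascript
// Hypothetical sketch of the auto-scroll decision used by a hook like
// useAutoScroll: stick to the bottom only if the user hasn't scrolled up.
function shouldAutoScroll({ scrollTop, scrollHeight, clientHeight }, threshold = 10) {
  // Distance between the bottom of the visible area and the end of the content.
  const distanceFromBottom = scrollHeight - (scrollTop + clientHeight);
  return distanceFromBottom <= threshold;
}

// At the bottom: keep auto-scrolling as new chunks stream in.
console.log(shouldAutoScroll({ scrollTop: 900, scrollHeight: 1500, clientHeight: 600 })); // true

// User scrolled up to read: stop forcing the view down.
console.log(shouldAutoScroll({ scrollTop: 200, scrollHeight: 1500, clientHeight: 600 })); // false
```

A real hook would combine a check like this with a ResizeObserver or effect that scrolls the container to the bottom whenever the streamed content grows.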

#User Input Interface

The final piece of our chatbot frontend is the user input interface. This is implemented in the ChatInput component, which allows users to type and submit their messages. This is a simplified version of the code:

import useAutosize from '@/hooks/useAutosize';
import sendIcon from '@/assets/images/send.svg';

function ChatInput({ newMessage, isLoading, setNewMessage, submitNewMessage }) {
  const textareaRef = useAutosize(newMessage);

  function handleKeyDown(e) {
    if (e.key === 'Enter' && !e.shiftKey && !isLoading) {
      e.preventDefault();
      submitNewMessage();
    }
  }

  return (
    <div>
      <textarea
        ref={textareaRef}
        rows='1'
        value={newMessage}
        onChange={e => setNewMessage(e.target.value)}
        onKeyDown={handleKeyDown}
      />
      <button onClick={submitNewMessage}>
        <img src={sendIcon} alt='send' />
      </button>
    </div>
  );
}

export default ChatInput;

The ChatInput component includes a textarea element for typing messages and a send button to submit them. The textarea also includes auto-resizing functionality via a custom useAutosize hook. This hook dynamically adjusts the height of the textarea based on its content, allowing it to grow as the user types and shrink when content is deleted. You can see the hook code here.
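The usual trick behind textarea auto-resizing is to collapse the element's height and then grow it to match its scrollHeight. The helper below is a hypothetical sketch of that step, not the repository's actual hook:

```javascript
// Hypothetical sketch of the resize step a hook like useAutosize performs
// whenever the textarea's value changes.
function autosize(textarea) {
  textarea.style.height = 'auto';                       // collapse to natural height first
  textarea.style.height = `${textarea.scrollHeight}px`; // then grow to fit the content
}

// A fake textarea object is enough to illustrate the effect outside the DOM.
const fakeTextarea = { style: { height: '24px' }, scrollHeight: 72 };
autosize(fakeTextarea);
console.log(fakeTextarea.style.height); // '72px'
```

In the real hook, this logic runs inside an effect keyed on the input value, with the element accessed through a ref.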

The component also includes a handleKeyDown function that allows users to submit messages by simply pressing Enter (without Shift). At the same time, it preserves the textarea's native behavior of adding newlines with Shift+Enter, giving users the flexibility to format longer messages or add line breaks for clarity.
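The Enter/Shift+Enter decision can also be isolated as a small pure predicate, shown here as a hypothetical helper to make the three conditions explicit:

```javascript
// Hypothetical predicate mirroring handleKeyDown: submit on Enter,
// let Shift+Enter insert a newline, and ignore Enter while a response loads.
function shouldSubmitOnKey(event, isLoading) {
  return event.key === 'Enter' && !event.shiftKey && !isLoading;
}

console.log(shouldSubmitOnKey({ key: 'Enter', shiftKey: false }, false)); // true
console.log(shouldSubmitOnKey({ key: 'Enter', shiftKey: true }, false));  // false (newline)
console.log(shouldSubmitOnKey({ key: 'Enter', shiftKey: false }, true));  // false (loading)
```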


We have now completed the implementation of a full-stack AI chatbot. You can use this project as a starting point to build upon and customize for your specific needs.

All the techniques that we have covered in this post and the previous one - Retrieval-Augmented Generation, vector databases, semantic search, asynchronous programming, structured outputs, real-time SSE streaming, markdown rendering, auto-scrolling - provide a practical framework that can help you create your own AI chatbot applications.

I hope this was helpful, and I look forward to seeing what you build next with it!