Building an AI Chatbot UI with React and LangChain

Team · 6 min read

#react #langchain #ai #frontend #tutorial

Introduction

Creating an AI-powered chat interface is a common goal for modern web apps. By combining a clean React UI with LangChain on the server, you can orchestrate prompts, manage conversation memory, and switch LLMs without rewriting frontend logic. This post walks you through a practical setup: a React frontend that talks to a lightweight Node.js backend powered by LangChain.

Why LangChain with React

  • Centralized prompt orchestration: build prompt templates, chains, and memory once and reuse them across UIs (illustrated in the sketch after this list).
  • Pluggable LLM backends: swap models (OpenAI, local models, or others) with minimal code changes.
  • Memory and context: preserve conversation history to enable more natural interactions.
  • Clear separation of concerns: keep the UI focused on rendering and UX, while LangChain handles the AI reasoning.
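
To make the first two points concrete, here is a minimal sketch of a server-side prompt template wired to a swappable chat model. It assumes the same langchain package installed for the backend below; the template text and the tone/question variables are illustrative, not from a particular app:

// prompt-sketch.js (illustrative; run as an ES module)
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { PromptTemplate } from 'langchain/prompts';
import { LLMChain } from 'langchain/chains';

// The prompt lives on the server, so every UI reuses the same template.
const prompt = PromptTemplate.fromTemplate(
  'You are a {tone} assistant. Answer the question: {question}'
);

// Swapping providers means changing this one constructor; the frontend
// never needs to know which model sits behind the API.
const model = new ChatOpenAI({ temperature: 0.7 });

const chain = new LLMChain({ llm: model, prompt });
const result = await chain.call({ tone: 'concise', question: 'What is LangChain?' });
console.log(result.text);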

Prerequisites

  • Node.js and npm (or pnpm/yarn)
  • Basic React knowledge
  • OpenAI API key (or another supported LLM provider)
  • Familiarity with REST endpoints (for the backend API)

Quick project outline

  • Frontend: React app that renders a chat window and sends user messages to the backend.
  • Backend: Node.js server using LangChain to run a chat chain with memory, exposed via /api/chat.
  • Deployment: host the backend separately or alongside the frontend (depending on your stack).

Project structure

  • backend/
    • server.js (LangChain-backed chat API)
    • .env (OPENAI_API_KEY)
  • frontend/
    • src/App.jsx (or main component)
    • src/App.css (basic styling)
    • package.json (scripts)

Code blocks below show minimal, working scaffolding for both sides.

Backend: LangChain-powered chat API

This minimal Express server exposes a single endpoint that receives a user message and returns the AI’s reply. It uses LangChain’s ChatOpenAI model with a ConversationChain and memory.

// backend/server.js
import express from 'express';
import cors from 'cors';
import dotenv from 'dotenv';
// Note: these import paths match the classic `langchain` package used here;
// newer LangChain releases move ChatOpenAI into the @langchain/openai package.
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { ConversationChain } from 'langchain/chains';
import { BufferMemory } from 'langchain/memory';

dotenv.config();

const app = express();
app.use(cors());
app.use(express.json());

// Set up memory so the AI can remember the chat history
const memory = new BufferMemory({ memoryKey: 'chat_history' });

// Use a chat-oriented LLM
const model = new ChatOpenAI({
  temperature: 0.7,
  openAIApiKey: process.env.OPENAI_API_KEY
});

// Create a conversation chain with memory
const chain = new ConversationChain({ llm: model, memory });

app.post('/api/chat', async (req, res) => {
  const { message } = req.body;
  if (!message) {
    return res.status(400).json({ error: 'message is required' });
  }
  try {
    // chain.call resolves to an object like { response: '...' }, so unwrap
    // it before sending JSON back to the client.
    const { response } = await chain.call({ input: message });
    res.json({ text: response });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Failed to generate response' });
  }
});

const PORT = process.env.PORT || 3001;
app.listen(PORT, () => {
  console.log(`API listening on http://localhost:${PORT}`);
});

Notes:

  • Install: npm i express cors dotenv langchain
  • Environment: create a .env file with OPENAI_API_KEY=your_key
  • This example keeps a single in-process BufferMemory shared by every client; for multi-user apps, use a per-session or per-user memory store (a sketch follows).
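
As a minimal sketch of the per-session idea, the shared chain can be replaced with one chain per session, keyed by an identifier the client sends along. The sessionId field and getChain helper are hypothetical, not part of LangChain:

// backend/server.js (variation): hypothetical per-session memory.
// Each session gets its own ConversationChain and BufferMemory.
const sessions = new Map();

function getChain(sessionId) {
  if (!sessions.has(sessionId)) {
    sessions.set(sessionId, new ConversationChain({
      llm: model,
      memory: new BufferMemory({ memoryKey: 'chat_history' })
    }));
  }
  return sessions.get(sessionId);
}

app.post('/api/chat', async (req, res) => {
  // The frontend would include a sessionId with each message.
  const { message, sessionId = 'default' } = req.body;
  if (!message) {
    return res.status(400).json({ error: 'message is required' });
  }
  try {
    const { response } = await getChain(sessionId).call({ input: message });
    res.json({ text: response });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Failed to generate response' });
  }
});

Sessions still live in process memory here; swapping the Map for Redis or a database is the natural next step once you run multiple server instances.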

Frontend: React chat UI

A simple React app that renders a chat interface and calls the backend API to fetch responses.

// frontend/src/App.jsx
import React, { useState, useEffect, useRef } from 'react';
import './App.css';

function App() {
  const [messages, setMessages] = useState([
    { from: 'bot', text: 'Hello! I am your AI assistant. Ask me anything about React and LangChain.' }
  ]);
  const [input, setInput] = useState('');
  const bottomRef = useRef(null);

  useEffect(() => {
    bottomRef.current?.scrollIntoView({ behavior: 'smooth' });
  }, [messages]);

  async function sendMessage() {
    const text = input.trim();
    if (!text) return;

    // User message
    setMessages(m => [...m, { from: 'user', text }]);
    setInput('');

    try {
      // Call backend
      const res = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: text })
      });
      const data = await res.json();
      if (!res.ok) throw new Error(data.error || 'Request failed');
      setMessages(m => [...m, { from: 'bot', text: data.text || '' }]);
    } catch (err) {
      setMessages(m => [...m, { from: 'bot', text: 'Sorry, something went wrong.' }]);
    }
  }

  return (
    <div className="chat-container">
      <div className="chat-window" aria-label="Chat window">
        {messages.map((m, idx) => (
          <div key={idx} className={`message ${m.from}`}>
            <div className="bubble">{m.text}</div>
          </div>
        ))}
        <div ref={bottomRef} />
      </div>
      <div className="input-area">
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          onKeyDown={e => { if (e.key === 'Enter') sendMessage(); }}
          placeholder="Type your message..."
        />
        <button onClick={sendMessage}>Send</button>
      </div>
    </div>
  );
}

export default App;

Styling (basic, optional):

/* frontend/src/App.css */
.chat-container {
  display: flex;
  flex-direction: column;
  height: 100vh;
  max-width: 720px;
  margin: 0 auto;
  border: 1px solid #ddd;
  border-radius: 8px;
  overflow: hidden;
  background: #fff;
}

.chat-window {
  flex: 1;
  padding: 16px;
  overflow-y: auto;
  background: #f7f7f7;
}

.message {
  display: flex;
  margin: 8px 0;
}
.message.user { justify-content: flex-end; }
.message.bot { justify-content: flex-start; }

.bubble {
  max-width: 70%;
  padding: 10px 14px;
  border-radius: 14px;
  background: #e0e0e0;
  color: #333;
}
.message.user .bubble { background: #4f8bd3; color: #fff; }

.input-area {
  display: flex;
  padding: 10px;
  border-top: 1px solid #ddd;
  background: #fafafa;
}
.input-area input {
  flex: 1;
  padding: 12px;
  border: 1px solid #ccc;
  border-radius: 8px;
}
.input-area button {
  margin-left: 8px;
  padding: 0 16px;
  border: none;
  border-radius: 8px;
  background: #4f8bd3;
  color: white;
  cursor: pointer;
}

Running the project

  • Backend

    • Navigate to backend
    • npm init -y (then set "type": "module" in package.json, since server.js uses ES module imports)
    • npm i express cors dotenv langchain
    • Create server.js as shown above
    • Create .env with OPENAI_API_KEY
    • Run: node server.js (or use nodemon for development)
  • Frontend

    • Navigate to frontend (created with create-react-app or your preferred setup)
    • npm i
    • Ensure the frontend can reach the backend URL; adjust it if the backend runs on a different host or port (see the proxy sketch below)
    • Run: npm start
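
If the frontend dev server and the backend run on different ports, a development proxy lets the relative /api/chat fetch keep working. Below is a minimal sketch assuming a Vite-based frontend; with Create React App, a "proxy": "http://localhost:3001" field in frontend/package.json serves the same purpose:

// frontend/vite.config.js (a sketch, assuming a Vite setup)
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    // Forward /api requests during development to the backend on port 3001.
    proxy: {
      '/api': 'http://localhost:3001'
    }
  }
});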

Tips:

  • If hosting frontend separately from backend, replace the fetch URL with the full backend URL (e.g., https://api.yourdomain.com/api/chat).
  • For production, consider rate limiting, authentication, and per-user memory storage.

End-to-end flow recap

  1. User types a message in the React UI.
  2. Frontend sends a POST to /api/chat with the user input.
  3. Backend LangChain chain processes the input, consults memory, and generates a reply.
  4. Backend returns the answer to the frontend.
  5. UI renders the bot response; memory preserves the conversation for later turns.

Next steps and enhancements

  • Add memory tuning: switch from in-memory to a per-user store (e.g., Redis) to support multi-user chats.
  • Prompt engineering: tailor prompts for specific domains (e.g., code assistance, learning resources).
  • Real-time streaming: stream responses token by token using LangChain callbacks for a more dynamic UI (see the sketch after this list).
  • Retrieval augmented generation: integrate a knowledge base so the bot can fetch relevant facts before answering.
  • Authentication and security: protect the API, rate limit, and manage API keys securely.
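
For the streaming idea above, here is a hedged sketch built on the same Express setup, using the token callback available in the LangChain releases matching the imports used earlier. The /api/chat/stream route name is illustrative:

// backend/server.js (addition): illustrative streaming endpoint.
app.post('/api/chat/stream', async (req, res) => {
  const { message } = req.body;
  if (!message) {
    return res.status(400).json({ error: 'message is required' });
  }
  res.setHeader('Content-Type', 'text/plain; charset=utf-8');

  const streamingModel = new ChatOpenAI({
    streaming: true,
    openAIApiKey: process.env.OPENAI_API_KEY,
    callbacks: [{
      // Called once per generated token; forward it to the open response.
      handleLLMNewToken(token) {
        res.write(token);
      }
    }]
  });

  const streamingChain = new ConversationChain({ llm: streamingModel, memory });
  await streamingChain.call({ input: message });
  res.end();
});

On the React side, the response body can be read incrementally with res.body.getReader() and appended to the last bot message as chunks arrive.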

Conclusion

With a React frontend and a LangChain-powered backend, you get a clean separation of concerns and a scalable path to richer AI chat experiences. This setup makes it straightforward to experiment with different LLMs, memory strategies, and prompt designs while keeping the UI lightweight and responsive.