Before diving into implementation, it's essential to understand the key components of an OpenAI-powered chatbot:

- A chat interface where users type messages and read responses
- A server-side API route that keeps your OpenAI API key off the client
- A service layer that calls the OpenAI Chat Completions API
- State management that tracks the conversation history
To get started, we need to set up a Next.js project with the necessary dependencies:
npx create-next-app@latest openai-chatbot
cd openai-chatbot
npm install openai
Create a .env.local file in the root directory to store your OpenAI API key securely:

OPENAI_API_KEY=your_api_key_here

Important: Never commit your API key to version control. Make sure to add .env.local to your .gitignore file.
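Since a missing key only surfaces later as a confusing API error, it can help to fail fast at startup. The `requireApiKey` helper below is a hypothetical sketch (not part of the OpenAI SDK) showing one way to do that:

```typescript
// Hypothetical helper: fail fast if the key is missing or blank,
// instead of letting the first API call fail with an opaque error.
export function requireApiKey(
  env: Record<string, string | undefined>
): string {
  const key = env.OPENAI_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error("OPENAI_API_KEY is not set; add it to .env.local");
  }
  return key;
}
```

You would call `requireApiKey(process.env)` once when constructing the OpenAI client.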
Our application follows a modular structure:

/src
  /components   # React components for UI elements
  /hooks        # Custom React hooks for state management
  /pages        # Next.js pages including API routes
  /services     # OpenAI service integration
  /styles       # Global and component-specific styles
  /utils        # Helper functions
First, let's implement the service that communicates with the OpenAI API:
// src/services/openai.ts
import { OpenAI } from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

type ChatMessage = {
  role: "user" | "assistant" | "system";
  content: string;
};

export async function generateChatResponse(messages: ChatMessage[]) {
  try {
    const response = await openai.chat.completions.create({
      model: "gpt-4-turbo",
      messages,
      temperature: 0.7,
      max_tokens: 1000,
    });
    return response.choices[0].message;
  } catch (error) {
    console.error("Error calling OpenAI:", error);
    throw error;
  }
}
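OpenAI calls can fail transiently (rate limits, network blips), so wrapping the service call in a retry is a common hardening step. The `withRetry` helper below is a generic sketch of exponential backoff, not part of the OpenAI SDK; you would wrap the `openai.chat.completions.create` call with it:

```typescript
// Sketch: retry a flaky async call with exponential backoff.
// withRetry is a hypothetical helper, not an OpenAI SDK function.
export async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        // Wait 500ms, 1000ms, 2000ms, ... before the next attempt.
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Usage would look like `withRetry(() => generateChatResponse(messages))`. For production you may also want to retry only on retryable status codes (429, 5xx).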
Next, we'll create an API route to handle chat requests:
// src/pages/api/chat.ts
import type { NextApiRequest, NextApiResponse } from 'next';
import { generateChatResponse } from '../../services/openai';

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  try {
    const { messages } = req.body;

    if (!messages || !Array.isArray(messages)) {
      return res.status(400).json({ error: 'Messages are required and must be an array' });
    }

    const response = await generateChatResponse(messages);
    return res.status(200).json({ response });
  } catch (error) {
    console.error('Chat API error:', error);
    return res.status(500).json({ error: 'Failed to generate response' });
  }
}
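The route above only checks that `messages` is an array; since `req.body` is untyped, you might also validate each element's shape before forwarding it to OpenAI. The `isValidMessages` type guard below is a hypothetical sketch of that stricter check:

```typescript
// Sketch: runtime validation for the chat request body.
// isValidMessages is a hypothetical helper, not part of Next.js.
type Message = {
  role: "user" | "assistant" | "system";
  content: string;
};

export function isValidMessages(value: unknown): value is Message[] {
  return (
    Array.isArray(value) &&
    value.every(
      (m) =>
        m !== null &&
        typeof m === "object" &&
        ["user", "assistant", "system"].includes(m.role) &&
        typeof m.content === "string"
    )
  );
}
```

In the handler you would replace the `Array.isArray` check with `if (!isValidMessages(messages))` to reject malformed entries, not just non-arrays.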
Now, let's build the chat interface:
// src/components/ChatMessage.tsx
import React from 'react';

type MessageProps = {
  content: string;
  role: 'user' | 'assistant' | 'system';
};

export default function ChatMessage({ content, role }: MessageProps) {
  return (
    <div className={`chat-message ${role}`}>
      {content}
    </div>
  );
}
// src/components/ChatInput.tsx
import React, { useState } from 'react';

type ChatInputProps = {
  onSendMessage: (message: string) => void;
  isLoading: boolean;
};

export default function ChatInput({ onSendMessage, isLoading }: ChatInputProps) {
  const [input, setInput] = useState('');

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim() && !isLoading) {
      onSendMessage(input);
      setInput('');
    }
  };

  return (
    <form onSubmit={handleSubmit} className="chat-input">
      <input
        type="text"
        value={input}
        onChange={(e) => setInput(e.target.value)}
        placeholder="Type your message..."
        disabled={isLoading}
      />
      <button type="submit" disabled={isLoading}>
        Send
      </button>
    </form>
  );
}
// src/hooks/useChat.ts
import { useState } from 'react';

type Message = {
  role: 'user' | 'assistant' | 'system';
  content: string;
};

export function useChat(initialSystemMessage: string) {
  const [messages, setMessages] = useState<Message[]>([
    { role: 'system', content: initialSystemMessage },
  ]);
  const [isLoading, setIsLoading] = useState(false);

  async function sendMessage(content: string) {
    const userMessage: Message = { role: 'user', content };
    setMessages(prev => [...prev, userMessage]);
    setIsLoading(true);

    try {
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ messages: [...messages, userMessage] }),
      });

      if (!response.ok) {
        throw new Error('Failed to get response');
      }

      const data = await response.json();
      const assistantMessage: Message = data.response;
      setMessages(prev => [...prev, assistantMessage]);
    } catch (error) {
      console.error('Error sending message:', error);
      // Surface an error message to the user here
    } finally {
      setIsLoading(false);
    }
  }

  return {
    messages,
    isLoading,
    sendMessage,
  };
}
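One thing the hook does not yet handle is conversation length: every request resends the full history, which eventually exceeds the model's context window. A simple mitigation is to trim the history before sending it. `trimHistory` below is a hypothetical sketch that keeps the system prompt plus the most recent messages; `maxTurns` is a message count, not a token-accurate budget:

```typescript
// Sketch: cap the history sent to the API, keeping the system prompt
// plus the last maxTurns messages. trimHistory is a hypothetical helper.
type Message = {
  role: "user" | "assistant" | "system";
  content: string;
};

export function trimHistory(messages: Message[], maxTurns = 10): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-maxTurns)];
}
```

You would apply it inside `sendMessage`, e.g. `JSON.stringify({ messages: trimHistory([...messages, userMessage]) })`.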
// src/pages/index.tsx
import { useChat } from '../hooks/useChat';
import ChatMessage from '../components/ChatMessage';
import ChatInput from '../components/ChatInput';

export default function Home() {
  const systemMessage =
    "You are a helpful AI assistant. Answer the user's questions concisely and accurately.";
  const { messages, isLoading, sendMessage } = useChat(systemMessage);

  // Filter out system messages for display
  const displayMessages = messages.filter(msg => msg.role !== 'system');

  return (
    <main className="chat-container">
      <h1>OpenAI Chatbot</h1>
      <div className="chat-messages">
        {displayMessages.length === 0 ? (
          <p>Start a conversation with the AI assistant!</p>
        ) : (
          displayMessages.map((msg, i) => (
            <ChatMessage key={i} role={msg.role} content={msg.content} />
          ))
        )}
        {isLoading && <p className="loading">Thinking...</p>}
      </div>
      <ChatInput onSendMessage={sendMessage} isLoading={isLoading} />
    </main>
  );
}
Before deployment, thoroughly test your chatbot for:

- Error handling: API failures, rate limits, and network interruptions
- Edge cases: empty input, very long messages, and rapid repeated submissions
- Responsiveness across devices and screen sizes
To improve your chatbot's performance:

- Trim long conversation histories to stay within token limits
- Stream responses so users see output as it is generated
- Cache responses for repeated or identical prompts
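One common performance tactic is caching identical requests so you don't pay for the same completion twice. The sketch below illustrates the idea with a naive in-memory `Map`; `cachedAnswer` is a hypothetical helper, and in production you would likely use a shared store such as Redis instead:

```typescript
// Sketch: a naive in-memory cache keyed by a stringified prompt.
// Illustrative only; a real deployment needs eviction and a shared store.
const cache = new Map<string, string>();

export function cachedAnswer(
  messagesKey: string,
  compute: () => string
): string {
  const hit = cache.get(messagesKey);
  if (hit !== undefined) return hit;
  const answer = compute();
  cache.set(messagesKey, answer);
  return answer;
}
```

A reasonable key is `JSON.stringify(messages)`; note that even small wording differences produce distinct keys, so this only helps with exact repeats.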
Vercel is an excellent platform for deploying Next.js applications:
# Install Vercel CLI
npm install -g vercel
# Login to Vercel
vercel login
# Deploy your app
vercel
During deployment, you'll need to configure your environment variables (including your OpenAI API key) in the Vercel dashboard.
OPENAI_API_KEY=your_api_key_here
Security Note: Always use environment variables for sensitive information like API keys. Never hardcode these values in your application.
After deployment, continue improving your chatbot by:

- Adding streaming responses for a more interactive feel
- Persisting conversation history, for example in a database
- Monitoring usage and costs through the OpenAI dashboard
Building an OpenAI-powered chatbot with Next.js is a powerful way to create interactive, intelligent applications. By combining OpenAI's advanced language models with a responsive front-end, you can create engaging conversational experiences for your users.
Remember these key points as you continue developing:

- Keep your API key server-side and out of version control
- Validate input and handle API errors gracefully
- Manage conversation length to control token usage and cost
Happy building!