

How to Create a Full-Stack GenAI App Using SingleStore, OpenAI and Next.js

Yaroslav Demenskyi

Growth Engineer

We believe the future of web apps will include an AI conversational interface that not only returns text but also interactive, data-rich widgets. This would lead to truly 1:1 personalized web experiences.


In this article, we will guide you through creating this kind of gen AI app using SingleStore, OpenAI and Next.js. This step-by-step tutorial will help you build and test a micro gen AI app that lets you chat with GPT-4o, retrieve random products and render them in custom React components.

We will demonstrate the approach we used to build our gen AI eStore app. This app loads the dashboard in under two seconds by executing five parallel queries that simultaneously read over 100 million rows. It includes a text-to-SQL chat experience over the dashboard data, performs a hybrid (vector + exact keyword match) search to help you find the perfect clothing products for your needs and showcases SingleStore's performance analytics.

Additionally, it performs multiple function calls while maintaining efficient, responsive performance. The conversational interface also returns agentic widgets that let users purchase and rate a product without leaving the conversation.

Let’s get started!

Create database

First, we need to create a database. You can either create a free-for-life Shared Tier database or create a dedicated workspace and get started with some free credits. Once you are logged into the Portal, the newly created database should be ready to use.

If a standard workspace is needed, click on the “+ Group” button in the “Cloud” sidebar section, select a standard workspace variant and fill out the form. When the workspace is ready, create a database and add your IP address to the firewall. To do this, go to your workspace, switch to the “Firewall” tab and click on the “Edit” button to open the IP configuration modal. In the modal, make sure the “Only allow access from specified IP addresses” option is selected, then click on the “Add My Current IP Address” button.

Create Next.js project

To create a Next.js project, open a terminal window and run the following command:

npx create-next-app genai-app --typescript --tailwind --eslint

After creating the new project, open it in your code editor, create a .env.local file in the root of the project and add the following variables:

# ./.env.local
DB_HOST=""
DB_USER=""
DB_PASSWORD=""
DB_NAME=""
OPENAI_API_KEY=""

To get these values, go to the SingleStore Portal, open your database group, click on the “Connect” button, then click on the “Your app” menu item and select the “Node.js” option. Copy the displayed connection values into .env.local.
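
For reference, a filled-out .env.local looks something like this. The values below are hypothetical placeholders — copy your real ones from the “Connect” dialog:

# ./.env.local — hypothetical example values
DB_HOST="svc-xxxxxxxx-dml.aws-virginia-x.svc.singlestore.com"
DB_USER="admin"
DB_PASSWORD="your-password"
DB_NAME="genai_app_db"
OPENAI_API_KEY="sk-..."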

Install dependencies

Let’s install the SingleStore Elegance SDK to work with the database by running:

npm i @singlestore/elegance-sdk

Then install the shadcn/ui components by running:

npx shadcn-ui init
npx shadcn-ui add button form input card

To manage the global state of the application, we need to install jotai by running:

npm install jotai

Install the Vercel AI SDK to stream React components to the UI by running:

npm i ai@3.0.32

Install the OpenAI SDK by running:

npm i openai

And install the rest of the dependencies by running:

npm i nanoid react-markdown zod-to-json-schema && npm i -D eslint-config-next eslint-config-prettier eslint-plugin-import eslint-plugin-prettier prettier-plugin-tailwindcss dotenv-cli

Establish database connection

Once the required dependencies are installed, we can create a database client. For this, we will use the Elegance SDK, which provides pre-built components and hooks for gen AI and SingleStore CRUD use cases. Create a db.ts file in the ./lib directory with the following code:

// ./lib/db.ts
import { createEleganceServerClient } from "@singlestore/elegance-sdk/server";

export const db = createEleganceServerClient("mysql", {
  connection: {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    multipleStatements: true,
  },
  ai: {
    openai: {
      apiKey: process.env.OPENAI_API_KEY,
    },
  },
});

The database connection is now ready for use.
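
If you want to verify the connection before moving on, here is a minimal, optional sanity-check script. The file name and query are our own illustration — this script is not part of the tutorial's app; it just reuses the db.controllers.query helper we'll see again later in llm-tools.tsx:

// ./ping-db.ts — hypothetical connection check, safe to delete afterwards
import { db } from "@/lib/db";

(async () => {
  // Runs a raw SQL query through the Elegance SDK and prints the result
  const result = await db.controllers.query<{ now: string }[]>({
    query: "SELECT NOW() AS now",
  });
  console.log(result);
  process.exit(0);
})();

Run it with npx dotenv -e .env.local -- npx tsx ping-db.ts — the same pattern we use for setup-db.ts below.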

Insert data into the database

Our intelligent eStore uses a large dataset with 100M+ rows. However, for this app, we prepared a smaller dataset with 2.5k products. Download the dataset from the GitHub repository, and move it to the ./data directory.

Then create a setup-db.ts file in the root directory with the content below. Notice that we are also creating a column called title_v of the VECTOR type. We will use it for semantic search later.

// ./setup-db.ts
import { readFile } from "fs/promises";
import path from "path";
import { db } from "@/lib/db";

(async () => {
  try {
    await Promise.all(
      ["chat_messages", "products"].map((tableName) => {
        return db.connection.query(`DROP TABLE IF EXISTS ${tableName}`);
      }),
    );

    await Promise.all([
      db.connection.query(`
        CREATE TABLE IF NOT EXISTS products (
          id BIGINT AUTO_INCREMENT PRIMARY KEY,
          created_at DATETIME,
          title TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
          description TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
          image VARCHAR(256),
          price DECIMAL(9,2),
          gender VARCHAR(64),
          type_id BIGINT,
          title_v VECTOR(1536),
          description_v VECTOR(1536),
          FULLTEXT KEY(title, description)
        )
      `),
      db.connection.query(`
        CREATE TABLE IF NOT EXISTS chat_messages (
          id BIGINT AUTO_INCREMENT PRIMARY KEY,
          created_at BIGINT,
          chat_id BIGINT,
          user_id BIGINT,
          role VARCHAR(64),
          content JSON
        )
      `),
    ]);

    const dataPath = path.join(process.cwd(), "data");
    const fileContent = await readFile(path.join(dataPath, "products-1.json"), "utf-8");
    const values = JSON.parse(fileContent);
    await db.controllers.insertMany({ collection: "products", values });

    console.log("Database is ready");
    process.exit(0);
  } catch (error) {
    console.error(error);
    process.exit(1);
  }
})();

Then run the following command to create the tables and insert the data into the database:

npx dotenv -e .env.local -- npx tsx setup-db.ts
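
The dataset ships with pre-computed title_v and description_v embeddings. As a preview of what these VECTOR columns enable (the eStore app's hybrid search builds on this idea), here is a hedged sketch of a semantic search over product titles. It assumes the stored vectors came from an OpenAI 1536-dimension embedding model and uses SingleStore's <*> dot-product operator (available for the VECTOR type in SingleStore 8.5+). This helper is illustrative only and is not used in the rest of the tutorial; it also imports the llm client we create in the next step:

// ./lib/search-products.ts — hypothetical semantic-search sketch
import { db } from "@/lib/db";
import { llm } from "@/lib/llm";

export async function searchProductsByTitle(prompt: string) {
  // Embed the prompt with the same dimensionality as title_v (1536)
  const response = await llm.embeddings.create({
    model: "text-embedding-ada-002",
    input: prompt,
  });
  const vector = JSON.stringify(response.data[0].embedding);

  // Rank products by dot-product similarity between the prompt and title embeddings
  return db.controllers.query({
    query: `
      SELECT id, title, title_v <*> ('${vector}' :> VECTOR(1536)) AS similarity
      FROM products
      ORDER BY similarity DESC
      LIMIT 5
    `,
  });
}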

Set up OpenAI

To execute a custom SQL query, retrieve the results from the database and provide them to the LLM as context — whether to generate a chat completion or render a React component — we need to use function calling. It is important to note that we are not leaving it up to the LLM to guess how to query the database; rather, we are prescriptive, giving it the exact SQL needed to fetch the data accurately. Create a llm.ts file in the ./lib directory and assign the llm variable:

// ./lib/llm.ts
import OpenAI from "openai";
export const llm = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

This LLM will be used to respond to a user request and call a function, the result of which will be rendered on the client.
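
To see the mechanism in isolation before wiring it into the app, here is a minimal, standalone function-calling round trip. The tool definition below is a hand-written JSON-schema stand-in — the app itself generates it from zod schemas with zod-to-json-schema in submit-chat-message.tsx later on:

// Hypothetical standalone example — not part of the app's code
import { llm } from "@/lib/llm";

async function demo() {
  const completion = await llm.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Show me 3 random products" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_random_products",
          description: "Useful when you need to get random products",
          parameters: {
            type: "object",
            properties: {
              limit: { type: "number", description: "Number of products to get" },
            },
          },
        },
      },
    ],
  });

  // If the model decides a tool is needed, the call (name + JSON args) appears here
  console.log(completion.choices[0].message.tool_calls);
}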

Create chat

Now we are all set to implement a basic chat experience with random product retrieval (gen AI). At a high level, the chat flow works like this: the user submits a prompt, a server action sends it to the LLM along with the available tools, and the LLM either streams back a text response or calls a tool whose result is streamed to the client as a React component.

Create types

In the root of the project create a types.ts file with the following content:

// ./types.ts
import { StreamableValue } from "ai/rsc";
import { ElementType, ReactNode } from "react";
import { z } from "zod";

export type ChatMessage = {
  id: string;
  createdAt: Date;
  role: "user" | "assistant" | "system" | "function";
  content: string | StreamableValue<string>;
  node?: ReactNode;
};

export type Product = {
  id: number;
  created_at: string;
  title: string;
  description: string;
  image: string;
  price: number;
  gender: string;
  type_id?: number;
  title_v: string;
  description_v: string;
};

export type LLMTool = {
  name: string;
  description: string;
  schema: z.AnyZodObject;
  node: ElementType;
  call: (...args: any[]) => Promise<{ name: string; props: any }>;
};

Create LLM tools

Create a custom map of the LLM tools used in the LLM chat completion API. Based on these tools and the user's prompt, the LLM decides which function to call. These tools are also used when parsing the LLM response and when retrieving chat messages from the database to display the chat history. This is the primary file where a user's prompt and functions are linked, based on the tool description and schema fields.

Create a llm-tools.tsx file in the ./lib directory with the following code:

// ./lib/llm-tools.tsx
import { z } from "zod";
import { LLMTool, Product } from "@/types";
import { db } from "@/lib/db";
import { ProductList } from "@/components/product-list";

// Map of tools that can be called by the LLM.
export const llmTools: Record<string, LLMTool> = {
  get_random_products: {
    name: "get_random_products",
    description: "Useful when you need to get random products",
    schema: z.object({
      limit: z.number().min(1).optional().describe("Number of products to get"),
    }),
    node: (props) => (
      <ProductList
        {...props}
        className="px-4"
      />
    ),
    call: async ({ limit = 5 }) => {
      const products = await db.controllers.query<
        Pick<Product, "id" | "title" | "price" | "image">[]
      >({
        query: `SELECT id, title, price, image FROM products ORDER BY RAND() LIMIT ${limit}`,
      });

      return { name: "get_random_products", props: { products } };
    },
  },
};
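
Because each tool's call function is plain TypeScript, you can exercise it without the LLM at all. For example, a quick hypothetical check (again, not part of the app):

// Hypothetical manual test of the tool — run like setup-db.ts
import { llmTools } from "@/lib/llm-tools";

(async () => {
  const result = await llmTools.get_random_products.call({ limit: 3 });
  // Expected shape: { name: "get_random_products", props: { products: [...] } }
  console.log(JSON.stringify(result, null, 2));
})();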

Create store provider

We need a global store to work with the data on the client side. Create a store-provider.tsx file in the ./components directory with the following content:

// ./components/store-provider.tsx
import { Provider } from "jotai";
import { ReactNode } from "react";
import { ChatMessage } from "@/types";
import { db } from "@/lib/db";
import { llmTools } from "@/lib/llm-tools";
import { StoreHydrate } from "@/components/store-hydrate";

export async function StoreProvider({ children }: { children?: ReactNode }) {
  const chatMessageRows = await db.controllers.findMany<
    (Pick<ChatMessage, "id" | "role"> & { created_at: number; content: string })[]
  >({ collection: "chat_messages", extra: "ORDER BY created_at DESC" });

  const chatMessages = chatMessageRows.map((message) => {
    let node: ChatMessage["node"] | undefined = undefined;
    let content;

    // Message content is stored as JSON; fall back to the raw string
    try {
      content = JSON.parse(message.content);
    } catch (error) {
      content = message.content;
    }

    // A tool result is stored as { name, props } and is rendered
    // with the matching tool's React component
    const isToolResult = typeof content === "object" && "name" in content && "props" in content;

    if (isToolResult) {
      const tool = llmTools[content.name];
      if (tool.node) node = <tool.node {...content.props} />;
    }

    return {
      id: String(message.id),
      createdAt: new Date(message.created_at),
      role: message.role,
      content: typeof content === "string" ? content : "",
      node,
    } satisfies ChatMessage;
  });

  return (
    <Provider>
      <StoreHydrate chatMessages={chatMessages}>{children}</StoreHydrate>
    </Provider>
  );
}

This component fetches messages from the database, converts them into React components and adds them to the global client state.

To store and display chat messages on the client side, create an atoms directory in the root of the project and add a chat-messages.ts file to it with the following content:

// ./atoms/chat-messages.ts
import { atom } from "jotai";
import { ChatMessage } from "@/types";
export const chatMessagesAtom = atom<ChatMessage[]>([]);

To pre-set the chatMessagesAtom value, we need to fetch messages from the database in the server component and hydrate the atom. In the ./components directory, create a store-hydrate.tsx component that accepts the chatMessages variable and hydrates the atom value:

// ./components/store-hydrate.tsx
"use client";

import { useHydrateAtoms } from "jotai/utils";
import { chatMessagesAtom } from "@/atoms/chat-messages";
import { ChatMessage } from "@/types";
import { ReactNode } from "react";

export function StoreHydrate({
  children,
  chatMessages,
}: {
  children?: ReactNode;
  chatMessages: ChatMessage[];
}) {
  useHydrateAtoms([[chatMessagesAtom, chatMessages]]);
  return children;
}

Update the layout component

Update the layout.tsx file in the ./app directory to wrap the children nodes with the StoreProvider component, providing the store context:

// ./app/layout.tsx
import { Inter } from "next/font/google";
import type { Metadata } from "next";
import { StoreProvider } from "@/components/store-provider";
import "./globals.css";

const inter = Inter({ subsets: ["latin"] });

export const metadata: Metadata = {
  title: "GenAI App Tutorial",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={inter.className}>
        <StoreProvider>{children}</StoreProvider>
      </body>
    </html>
  );
}

Update the page component

In the ./app directory, update the page.tsx file with the following content:

// ./app/page.tsx
import { ChatInput } from "@/components/chat-input";
import { ChatMessageList } from "@/components/chat-message-list";

export default function Home() {
  return (
    <main className="flex min-h-screen flex-col items-center justify-between gap-4 p-4">
      <ChatMessageList />
      <ChatInput />
    </main>
  );
}

This page component renders the ChatMessageList component, which displays the list of messages from the state, and the ChatInput component, which submits a prompt.

Create chat message card component

In the ./components directory create a chat-message-card.tsx file with the following content:

// ./components/chat-message-card.tsx
"use client";

import { readStreamableValue } from "ai/rsc";
import { useState, useEffect } from "react";
import Markdown from "react-markdown";
import { ChatMessage } from "@/types";
import { cn } from "@/lib/utils";
import { Card } from "@/components/ui/card";

export function ChatMessageCard({ content, role, createdAt, node }: ChatMessage) {
  const [activeContent, setActiveContent] = useState<string>(
    typeof content === "string" ? content : "",
  );

  useEffect(() => {
    (async () => {
      if (typeof content === "object") {
        let value = "";
        for await (const token of readStreamableValue(content)) {
          setActiveContent((value += token));
        }
      }
    })();
  }, [content]);

  return (
    <Card className={cn("max-w-[75%] py-2", role === "user" ? "ml-auto" : "mr-auto")}>
      <div className="flex items-center justify-between gap-2 px-4">
        <h4 className="font-medium first-letter:uppercase">{role}</h4>
        <time
          className="ml-auto text-right text-xs text-muted-foreground"
          dateTime={createdAt.toLocaleString()}
        >
          {createdAt.toLocaleTimeString("en-US", { hour12: false })}
        </time>
      </div>
      {activeContent && (
        <div className="w-full max-w-full px-4 [&_pre]:overflow-auto">
          <Markdown>{activeContent}</Markdown>
        </div>
      )}
      {node && <div className="mt-2">{node}</div>}
    </Card>
  );
}

This component gets a message object, parses the message content and renders a chat message card. If the content is a stream, readStreamableValue is used to read it and render the message content as it arrives.

Create chat message list component

In the ./components directory create a chat-message-list.tsx file with the following content:

// ./components/chat-message-list.tsx
"use client";

import { useAtomValue } from "jotai";
import { cn } from "@/lib/utils";
import { ChatMessageCard } from "@/components/chat-message-card";
import { Card } from "@/components/ui/card";
import { chatMessagesAtom } from "@/atoms/chat-messages";

export function ChatMessageList() {
  const messages = useAtomValue(chatMessagesAtom);

  return (
    <Card className="relative w-full flex-1">
      <div className="absolute left-0 top-0 flex h-full w-full flex-col-reverse overflow-y-auto overflow-x-hidden p-4">
        <ul className="relative flex w-full flex-col-reverse">
          {messages.map((message, i, arr) => {
            const isSameRole = message.role === arr[i + 1]?.role;

            return (
              <li
                key={message.id}
                className={cn(isSameRole ? "mt-2" : "mt-8", "flex w-full max-w-full last:mt-0")}
              >
                <ChatMessageCard {...message} />
              </li>
            );
          })}
        </ul>
      </div>
    </Card>
  );
}

This component reads messages from the state using chatMessagesAtom and renders them as a list.

Create chat input component

In the same directory, create a chat-input.tsx file that displays a form for submitting a prompt:

// ./components/chat-input.tsx
"use client";

import { zodResolver } from "@hookform/resolvers/zod";
import { useSetAtom } from "jotai";
import { SendHorizonal } from "lucide-react";
import { nanoid } from "nanoid";
import { useTransition } from "react";
import { SubmitHandler, useForm } from "react-hook-form";
import { z } from "zod";
import { ChatMessage } from "@/types";
import { submitChatMessage } from "@/actions/submit-chat-message";
import { Button } from "@/components/ui/button";
import { Card } from "@/components/ui/card";
import { FormField, FormItem, FormControl, Form } from "@/components/ui/form";
import { Input } from "@/components/ui/input";
import { chatMessagesAtom } from "@/atoms/chat-messages";

const chatInputFormSchema = z.object({
  content: z.string().min(1).max(1024),
});

export type ChatInputFormSchema = z.infer<typeof chatInputFormSchema>;

export function ChatInput() {
  const setMessages = useSetAtom(chatMessagesAtom);
  const [isPending, startTransition] = useTransition();

  const form = useForm<ChatInputFormSchema>({
    resolver: zodResolver(chatInputFormSchema),
    defaultValues: { content: "" },
  });

  const handleSubmit: SubmitHandler<ChatInputFormSchema> = async ({ content }) => {
    try {
      setMessages((i) => [
        { id: nanoid(), role: "user", content, createdAt: new Date() } satisfies ChatMessage,
        ...i,
      ]);

      const message = await (() => {
        return new Promise<ChatMessage>((resolve) => {
          startTransition(async () => {
            resolve(await submitChatMessage(content));
          });
        });
      })();

      setMessages((i) => [message, ...i]);
      form.reset();
    } catch (error) {
      console.error(error);
    }
  };

  return (
    <Card className="relative w-full">
      <Form {...form}>
        <form onSubmit={form.handleSubmit(handleSubmit)}>
          <FormField
            control={form.control}
            name="content"
            disabled={isPending}
            render={({ field }) => (
              <FormItem>
                <FormControl>
                  <Input
                    className="pr-12"
                    placeholder="Message"
                    autoFocus
                    {...field}
                  />
                </FormControl>
              </FormItem>
            )}
          />
          <Button
            type="submit"
            className="absolute right-0 top-1/2 -translate-y-1/2 rounded-bl-none rounded-tl-none"
            disabled={isPending}
          >
            <SendHorizonal className="w-[1em]" />
          </Button>
        </form>
      </Form>
    </Card>
  );
}

This component displays the prompt input and handles the form submission event. After the user enters a prompt and submits the form, a new user chat message is created and added to chatMessagesAtom. The submitChatMessage action is then called on the server side to get a response from the LLM, and the LLM's message is added to chatMessagesAtom as well.

Create product list component

In the ./components directory create a product-list.tsx file with the following content:

// ./components/product-list.tsx
import { Product } from "@/types";
import { cn } from "@/lib/utils";
import { Card } from "@/components/ui/card";

export function ProductList({
  className,
  products,
}: {
  className?: string;
  products: Pick<Product, "id" | "title" | "price" | "image">[];
}) {
  return (
    <ul className={cn("flex gap-4 overflow-x-auto overflow-y-hidden", className)}>
      {products.map((product) => (
        <li
          key={product.id}
          className="flex-1"
        >
          <Card className="w-full overflow-hidden">
            <div className="relative h-0 w-full overflow-hidden border-b pt-[100%]">
              <img
                className="absolute left-0 top-0 h-full w-full object-cover"
                src={product.image}
                alt={product.title}
              />
            </div>
            <div className="flex items-center justify-between gap-3 px-4 py-2">
              <h4 className="line-clamp-2 font-medium capitalize">{product.title}</h4>
              <p className="font-medium">${product.price}</p>
            </div>
          </Card>
        </li>
      ))}
    </ul>
  );
}

This component is used in the llmTools map as the node field value. This is the gen AI part: when a user submits the chat input form, the LLM receives the prompt and the tool map, and decides whether a tool needs to be called. If so, the tool's call function is invoked, and the result of the function call is passed to this component as props and rendered on the client using createStreamableUI.

Create a submitChatMessage server action

This app is built on server actions, which provide a new approach to data fetching. The chat flow requires a submitChatMessage action that takes the user's prompt, creates text and node streams, and calls the LLM to get either a streamed text response or a function call result.

After receiving the result from the LLM, the createStreamableValue and createStreamableUI functions from the ai/rsc package are used to stream the text response or React component to the client. Create a submit-chat-message.tsx file in the ./actions directory with the following content:


// ./actions/submit-chat-message.tsx
"use server";

import { createStreamableUI, createStreamableValue } from "ai/rsc";
import { nanoid } from "nanoid";
import { createElement } from "react";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatMessage } from "@/types";
import { db } from "@/lib/db";
import { llm } from "@/lib/llm";
import { llmTools } from "@/lib/llm-tools";

// Helper function to insert a message into the db
function insertChatMessage({ role, content }: Pick<ChatMessage, "role" | "content">) {
  return db.controllers.insertOne({
    collection: "chat_messages",
    value: { role, content: JSON.stringify(content), created_at: new Date().getTime() },
  });
}

export async function submitChatMessage(content: string) {
  // The text stream that is used to stream the LLM text response to the client
  const textStream: ReturnType<typeof createStreamableValue<string>> = createStreamableValue("");

  // The initial LLM response message
  const message: ChatMessage = {
    id: nanoid(),
    role: "assistant",
    content: textStream.value,
    createdAt: new Date(),
  };

  // The node stream that is used to stream React nodes to the client
  const nodeStream = createStreamableUI();

  (async () => {
    try {
      const [completionStream] = await Promise.all([
        // The create chat completion call with tools that returns a completion stream
        llm.chat.completions.create({
          model: "gpt-4o",
          temperature: 0,
          stream: true,
          messages: [
            { role: "system", content: "You are an assistant" },
            { role: "user", content },
          ],
          // Normalizing the tools into the format the LLM accepts
          tools: Object.values(llmTools).map(({ name, description, schema }) => ({
            type: "function",
            function: { name, description, parameters: zodToJsonSchema(schema) },
          })),
        }),
        // Inserting the user message
        insertChatMessage({ role: "user", content }),
      ]);

      // The parsed tool name that should be called
      let completionToolName = "";
      // The parsed tool args that should be provided to a tool call function
      let completionToolArgs = "";
      // The simple text response
      let completionContent = "";

      // Processing the completion stream chunk by chunk
      for await (const chunk of completionStream) {
        const tool = chunk.choices[0].delta.tool_calls?.[0]?.function;
        const textToken = chunk.choices[0].delta.content || "";

        // Assigning tool-related data
        if (tool) {
          if (tool?.name) completionToolName = tool.name;
          if (tool?.arguments) completionToolArgs += tool.arguments;
        }

        // Updating the textStream on the new text response
        if (textToken) {
          completionContent += textToken;
          textStream.update(textToken);
        }
      }

      await Promise.all([
        // Inserting a message with the completion content into the db
        (async () => {
          if (!completionContent) return;
          return insertChatMessage({
            role: "assistant",
            content: JSON.stringify(completionContent),
          });
        })(),
        // Calling the tool chosen by the LLM and updating the nodeStream with the new React node
        (async () => {
          if (!completionToolName) return;
          const tool = llmTools[completionToolName as keyof typeof llmTools];
          if (!tool) return;
          const args = JSON.parse(completionToolArgs);
          const result = await tool.call(args);
          const node = result.props ? createElement(tool.node, result.props) : undefined;

          await Promise.all([
            nodeStream.update(node),
            insertChatMessage({
              role: "function",
              content: JSON.stringify(result),
            }),
          ]);
        })(),
      ]);
    } catch (error) {
      console.error(error);
    }

    textStream.done();
    nodeStream.done();
  })();

  return { ...message, node: nodeStream.value };
}

Run the app

Now you can start the application by running the following command:

npm run build && npm run start
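
During development, you can skip the production build and use the hot-reloading dev server that create-next-app sets up:

npm run dev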

Go to http://localhost:3000 and ask the chat for any number of random products.

Congratulations! You've successfully built a gen AI app using SingleStore, OpenAI and Next.js. This tutorial guided you through creating a micro gen AI app that lets you chat with GPT-4o, retrieve random products and render them in React components.

We also showcased the eStore app, highlighting its efficient handling of over 100 million rows, swift execution of parallel queries and features like text-to-SQL chat and hybrid search.

We hope this tutorial has empowered you to create and scale your own gen AI applications. Ready to try it out? Start your free SingleStore trial today.

