How We Built a High-Performance Hybrid Search Using SingleStore in a Full-Stack Gen AI App

4 min read

Aug 26, 2024

In a previous article, we described how to use SingleStore with OpenAI function calling in a Next.js gen AI application.


In this article, we will explain how we built a high-performance hybrid search using SingleStore in our gen AI eStore app. The app loads its dashboard in under two seconds by executing five parallel queries that together read over 100 million rows. It includes a text-to-SQL chat experience over the dashboard data, performs a hybrid (vector + exact keyword match) search to help you find the right clothing products for your needs, and showcases SingleStore's performance analytics.
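The parallel dashboard load can be sketched as follows. This is an illustrative assumption, not the app's actual code: the query names are made up, and runQuery stands in for a real database call.

```typescript
// Sketch of the parallel dashboard load. `runQuery` stands in for a real
// database call over the MySQL protocol; the query names are hypothetical.

type QueryResult = { name: string; rows: unknown[] };

async function runQuery(name: string): Promise<QueryResult> {
  // A real implementation would execute SQL against SingleStore here.
  return { name, rows: [] };
}

async function loadDashboard(): Promise<QueryResult[]> {
  // Promise.all starts all five queries at once, so total latency is bounded
  // by the slowest query rather than the sum of all five.
  return Promise.all([
    runQuery("sales_over_time"),
    runQuery("top_products"),
    runQuery("orders_by_region"),
    runQuery("stock_levels"),
    runQuery("revenue_summary"),
  ]);
}
```

Firing the queries concurrently rather than sequentially is what keeps the dashboard under the two-second budget.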

Additionally, it performs multiple function calls while remaining fast and responsive. The conversational interface also returns agentic widgets that allow users to purchase and rate a product.

Requirements

In our eStore app, we aim to provide users with a new shopping experience powered by a high-performance database and AI. Users can find products by simply describing their needs, get relevant results in milliseconds and purchase their favorite products. Additionally, users can click on a product to get additional information about its sales, stock and description within a second.

In most cases, we would need two different databases: one to store product data and another to store product vector data for hybrid search. Fortunately, SingleStore fully meets our requirements, allowing us to achieve all of this within a single database.

Once we create the database, we can create the "products" table.

Products table

We generated a dataset with 10,000 unique products, inserting them into the products table with the following schema:

CREATE TABLE IF NOT EXISTS products (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  created_at DATETIME,
  title TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
  description TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
  image VARCHAR(256),
  price DECIMAL(9,2),
  gender VARCHAR(64),
  type_id BIGINT,
  title_v VECTOR(1536),
  description_v VECTOR(1536),
  FULLTEXT KEY(title, description)
);

You’ll notice that aside from the general product data, this table contains two vector columns (title_v, description_v) and a full-text key over the title and description columns. Both are used in the hybrid search query. With the schema in place, we built the UI and started implementing the search functionality.
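For illustration, here is a minimal sketch of how a row with its two embeddings could be inserted. The toVectorLiteral helper is our own; SingleStore accepts a vector as a JSON-array string cast to VECTOR(1536).

```typescript
// Format a number[] embedding as the '[0.1,0.2,...]' string form that can be
// cast to VECTOR(1536) on insert. The helper name is our own invention.
function toVectorLiteral(embedding: number[]): string {
  return `[${embedding.join(",")}]`;
}

// With a MySQL-protocol client (e.g. mysql2), the insert could look like this;
// values are bound as placeholders rather than interpolated into the SQL.
const insertSql = `
  INSERT INTO products
    (created_at, title, description, image, price, gender, type_id, title_v, description_v)
  VALUES
    (NOW(), ?, ?, ?, ?, ?, ?, ? :> VECTOR(1536), ? :> VECTOR(1536))`;

// connection.execute(insertSql, [title, description, image, price, gender,
//   typeId, toVectorLiteral(titleEmbedding), toVectorLiteral(descEmbedding)]);
```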

Function calling

However, we couldn't simply take a user prompt and execute a SQL query, because we first needed to parse the prompt into filter values (like color, minimum and maximum price, size, etc.) that we could use in a hybrid search query. The best solution for this is OpenAI function calling.

We created the submitChatMessage server action, which runs once a user submits a prompt. This action forwards the user prompt to the OpenAI chat.completions.create function, along with a list of tools the LLM can execute. One of these tools is find_product, whose definition describes its purpose and the argument schema that the findProducts function accepts to perform a hybrid search query.

This means that if the user prompt is relevant to a product search, the LLM provides the required and available arguments to the find_product tool and triggers it. The findProducts function, which performs the hybrid search query, is then executed.
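As a sketch, the find_product tool definition could look like the following. The parameter names (color, min_price, max_price, size) are assumptions based on the filters described above, not necessarily the app's exact schema.

```typescript
// Hedged sketch of the find_product tool, passed to the chat completion call:
// openai.chat.completions.create({ model, messages, tools: [findProductTool] })
const findProductTool = {
  type: "function" as const,
  function: {
    name: "find_product",
    description:
      "Search the product catalog with a hybrid vector + keyword query.",
    parameters: {
      type: "object",
      properties: {
        prompt: { type: "string", description: "The user's search request" },
        color: { type: "string", description: "Exact color keyword to match" },
        min_price: { type: "number" },
        max_price: { type: "number" },
        size: { type: "string", enum: ["xs", "s", "m", "l", "xl"] },
      },
      required: ["prompt"],
    },
  },
};
```

When the LLM decides the prompt is a product search, it returns a tool call with arguments matching this schema, which the server action then passes to findProducts.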

Hybrid search query

Following the requirements, a hybrid search query should accept several optional parameters such as color, minimum price, maximum price and size. In addition to these parameters, it should also receive a user prompt to perform a vector search over the title_v and description_v columns. Let’s take a look at the final hybrid search query:

SET @promptEmbedding = '[0.1,0.2,...]' :> vector(1536) :> blob;

SELECT
  products.id,
  MATCH(products.description) AGAINST ('blue') AS ft_score_color,
  title_v <*> @promptEmbedding AS v_score_title,
  description_v <*> @promptEmbedding AS v_score_description,
  v_score_title + v_score_description AS score
FROM products
JOIN product_sku ON products.id = product_sku.product_id
JOIN product_sizes ON product_sku.product_size_id = product_sizes.id
  AND product_sizes.value = 'xs'
WHERE ft_score_color
  AND (v_score_title >= 0.75 OR v_score_description >= 0.75)
  AND price BETWEEN 100 AND 1000
GROUP BY products.id
ORDER BY score DESC
LIMIT 5;

This query first sets the @promptEmbedding variable, which holds the embedding of the user prompt and is used in the vector search over the title_v and description_v columns. The recommended syntax for using an embedding in a vector search is:

SET @promptEmbedding = '[0.1,0.2,...]' :> vector(1536) :> blob;

In the SELECT statement, the product ID is retrieved, and a full-text search over the description column is performed to find products with the color blue:

MATCH(products.description) AGAINST ('blue') AS ft_score_color,

Then, the statement performs two vector searches over the title_v and description_v columns and sums the two similarity scores into a combined relevance score to find products close to the user prompt:

title_v <*> @promptEmbedding AS v_score_title,
description_v <*> @promptEmbedding AS v_score_description,
v_score_title + v_score_description AS score

And finally, it joins the product_sku and product_sizes tables to filter results by size and includes a condition to filter the results by the price range:

JOIN product_sku ON products.id = product_sku.product_id
JOIN product_sizes ON product_sku.product_size_id = product_sizes.id AND
product_sizes.value = 'xs'
WHERE ft_score_color AND (v_score_title >= 0.75 OR v_score_description >= 0.75)
AND price BETWEEN 100 AND 1000

As a result, the query returns the five products most relevant to the user prompt, ordered by relevance. The found products are then displayed to the user as a product list using the gen AI approach, which you can read more about in this article.
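Putting the clauses together, the findProducts function can assemble the query from whichever filters the LLM extracted. The sketch below is an assumption about how that composition might look; for clarity it interpolates values directly, while production code should bind them as parameters.

```typescript
// Hypothetical composition of the hybrid search SQL from optional filters.
interface FindProductsParams {
  color?: string;
  size?: string;
  minPrice?: number;
  maxPrice?: number;
}

const nonNull = (parts: Array<string | null>): string[] =>
  parts.filter((s): s is string => s !== null);

function buildHybridSearchQuery(p: FindProductsParams): string {
  // The full-text score column only exists when a color filter was extracted.
  const select = nonNull([
    "products.id",
    p.color
      ? `MATCH(products.description) AGAINST ('${p.color}') AS ft_score_color`
      : null,
    "title_v <*> @promptEmbedding AS v_score_title",
    "description_v <*> @promptEmbedding AS v_score_description",
    "v_score_title + v_score_description AS score",
  ]);

  // Join the SKU and size tables only when filtering by size.
  const joins = p.size
    ? `JOIN product_sku ON products.id = product_sku.product_id
JOIN product_sizes ON product_sku.product_size_id = product_sizes.id AND product_sizes.value = '${p.size}'`
    : "";

  const where = nonNull([
    p.color ? "ft_score_color" : null,
    "(v_score_title >= 0.75 OR v_score_description >= 0.75)",
    p.minPrice != null && p.maxPrice != null
      ? `price BETWEEN ${p.minPrice} AND ${p.maxPrice}`
      : null,
  ]).join(" AND ");

  return `SELECT ${select.join(", ")}
FROM products
${joins}
WHERE ${where}
GROUP BY products.id
ORDER BY score DESC
LIMIT 5;`;
}
```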

In this article, we described how we implemented hybrid search in our full-stack gen AI application, eStore. We recommend cloning the application and exploring its features.

