
Real Time Recommendation Engine
Notebook

Note
This notebook can be run on a Free Starter Workspace. To create a Free Starter Workspace navigate to Start using the left nav. You can also use your existing Standard or Premium workspace with this Notebook.
How to build a real-time recommendation engine with SingleStore & Vercel
We will demonstrate how to build a modern real-time AI application for free using a Shared Tier Database, SingleStore Notebooks, and Job Service.
A Free SingleStore Starter Workspace enables you to execute hybrid search, real-time analytics, and point reads, writes, and updates in a single database. With SingleStore Notebooks and our Job Service, you can easily bring in data from various sources (APIs, MySQL / Mongo endpoints) in real time. You can also execute Python-based transforms, such as adding embeddings, ensuring that real-time data is readily available for your downstream LLMs and applications.
We will showcase the seamless transition from a prototype to an end application using SingleStore. The final application will be hosted on Vercel. You can see the app we've built by following this notebook here.
Architecture:
Scenario:
We will build a recommendation engine that suggests which LLM you should use for your use case, bringing together semantic search and real-time analytics on each model's performance to make the recommendations.
Here are the requirements we've set out for this recommendation engine:
Pull data from the Hugging Face Leaderboard on various open-source LLM models and their scores. Pull updated scores for these models every hour.
For each of these models, pull data from Twitter and GitHub on what developers are saying about them and how they are being used in active projects. Pull this data every hour.
Provide an easy 'search' interface where users can describe their use case. When users describe their use case, perform a hybrid search (vector + full-text search) across the descriptions of these models, what users are saying about them on Twitter, and which GitHub repos are using these LLMs (see the sketch after this list).
Combine the results of the semantic search with analytics on the public benchmarks and the # of likes and downloads of these models.
Power the app entirely on a single SingleStore Free Shared Tier Workspace.
Ensure that all of the latest posts / scores are reflected in the app. Power this entirely with SingleStore Notebooks and the Job Service.
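To make the hybrid search requirement concrete, here is a rough sketch of the kind of query the final app runs. This is illustrative only: the 0.7/0.3 weights are arbitrary, and it assumes a FULLTEXT index on clean_text, which the schema built in Step 4 does not create.

# Illustrative hybrid search sketch; the weights and the FULLTEXT index are assumptions.
hybrid_search_sql = '''
    SELECT model_repo_id,
           DOT_PRODUCT(embedding, JSON_ARRAY_PACK(%s)) AS semantic_score,
           MATCH(clean_text) AGAINST ('customer support chatbot') AS keyword_score
    FROM model_readmes
    ORDER BY 0.7 * semantic_score + 0.3 * keyword_score DESC
    LIMIT 5
'''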
Contents
Step 1: Creating a Starter Workspace
Step 2: Installing & Importing required libraries
Step 3: Setting Key Variables
Step 4: Designing your table schemas on SingleStore
Step 5: Creating Helper Functions to load data into SingleStore
Step 6: Loading data with embeddings into SingleStore
Step 7: Building the Recommendation Engine Algorithm on Vercel
Step 1. Create a Starter Workspace
Create a new Workspace Group and select a Starter Workspace. If you do not have this enabled, email pm@singlestore.com.
Step 2. Install and import required libraries
In [1]:
%pip install singlestoredb openai tiktoken beautifulsoup4 pandas python-dotenv Markdown praw tweepy --quiet

import re
import json
import openai
import tiktoken
import requests
import getpass
import pandas as pd
import singlestoredb as s2
import tweepy
import praw
from bs4 import BeautifulSoup
from markdown import markdown
from datetime import datetime
from time import time, sleep
Step 3. Setting environment variables
3.1. Set the app common variables. Do not change these
In [2]:
MODELS_LIMIT = 100
MODELS_TABLE_NAME = 'models'
MODEL_READMES_TABLE_NAME = 'model_readmes'
MODEL_TWITTER_POSTS_TABLE_NAME = 'model_twitter_posts'
MODEL_REDDIT_POSTS_TABLE_NAME = 'model_reddit_posts'
MODEL_GITHUB_REPOS_TABLE_NAME = 'model_github_repos'
LEADERBOARD_DATASET_URL = 'https://llm-recommender.vercel.app/datasets/leaderboard.json'
TOKENS_LIMIT = 2047
TOKENS_THRESHOLD_LIMIT = TOKENS_LIMIT - 128
3.2. Set the OpenAI variables
We will be using OpenAI's embedding models to create vectors representing our data. The vectors will be stored in the SingleStore Starter Workspace as a column in the relevant tables.
Using OpenAI's LLMs, we will also generate output text after we complete the retrieval-augmented generation steps (a sketch of this step follows the key setup below).
Create a new key
Copy the key and paste it into the OPENAI_API_KEY variable
In [3]:
OPENAI_API_KEY = getpass.getpass("enter OpenAI API key here")
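The generation step itself happens later in the app, but for orientation, here is a minimal sketch of what it could look like with the Chat Completions API. The model choice and the retrieved_context variable are placeholders, not part of this notebook:

# Hypothetical sketch of the generation step; `retrieved_context` stands in
# for the hybrid search results and is not defined in this notebook.
openai.api_key = OPENAI_API_KEY

def generate_recommendation(use_case: str, retrieved_context: str) -> str:
    response = openai.chat.completions.create(
        model='gpt-3.5-turbo',
        messages=[
            {'role': 'system', 'content': 'Recommend an open-source LLM for the given use case based on the context.'},
            {'role': 'user', 'content': f'Use case: {use_case}\n\nContext:\n{retrieved_context}'},
        ],
    )
    return response.choices[0].message.content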
3.3. Set the HuggingFace variables
We will be pulling data from Hugging Face about the different models, the usage of these models, and how they score on several evaluation metrics.
Create a new token
Copy the key and paste it into the HF_TOKEN variable
In [4]:
HF_TOKEN = getpass.getpass("enter Hugging Face API key here")
3.4. Set the Twitter variables
We will be pulling data from Twitter about what users might be saying about these models. Since the quality of these models may change over time, we want to capture the sentiment of what people are talking about and using on Twitter.
Add a new app
Fill the form
Generate a Bearer Token and paste it into the TWITTER_BEARER_TOKEN variable
In [5]:
TWITTER_BEARER_TOKEN = getpass.getpass("enter Twitter Bearer Token here")
3.5 Set the GitHub variables
We will also be pulling data from various GitHub repos to see which models are being referenced and used in which scenarios.
Fill the form
Get an access token and paste it into the GITHUB_ACCESS_TOKEN variable
In [6]:
GITHUB_ACCESS_TOKEN = getpass.getpass("enter GitHub Access Token here")
Step 4. Designing and creating your table schemas in SingleStore
We will be storing all of this data in a single Free Shared Tier Database. Through this database, you can write hybrid search queries, run analytics on the models' performance, and get real-time reads/updates.
connection - database connection to execute queries
create_tables - function that creates empty tables in the database
drop_table - helper function to drop a table
get_models - helper function to get models from the models table
db_get_last_created_at - helper function to get the last created_at value from a table
The create_tables function creates the following tables:
models_table - table with all models data from the Open LLM Leaderboard
readmes_table - table with model readme texts from the Hugging Face model pages (used in semantic search)
twitter_posts - table with tweets related to models (used in semantic search)
github_repos - table with GitHub readme texts related to models (used in semantic search)
Action Required
Make sure to select a database from the drop-down menu at the top of this notebook. It updates the connection_url to connect to that database.
In [7]:
connection = s2.connect(connection_url)


def create_tables():
    def create_models_table():
        with connection.cursor() as cursor:
            cursor.execute(f'''
                CREATE TABLE IF NOT EXISTS {MODELS_TABLE_NAME} (
                    id INT AUTO_INCREMENT PRIMARY KEY,
                    name VARCHAR(512) NOT NULL,
                    author VARCHAR(512) NOT NULL,
                    repo_id VARCHAR(1024) NOT NULL,
                    score DECIMAL(5, 2) NOT NULL,
                    arc DECIMAL(5, 2) NOT NULL,
                    hellaswag DECIMAL(5, 2) NOT NULL,
                    mmlu DECIMAL(5, 2) NOT NULL,
                    truthfulqa DECIMAL(5, 2) NOT NULL,
                    winogrande DECIMAL(5, 2) NOT NULL,
                    gsm8k DECIMAL(5, 2) NOT NULL,
                    link VARCHAR(255) NOT NULL,
                    downloads INT,
                    likes INT,
                    still_on_hub BOOLEAN NOT NULL,
                    created_at TIMESTAMP,
                    embedding BLOB
                )
            ''')

    def create_model_readmes_table():
        with connection.cursor() as cursor:
            cursor.execute(f'''
                CREATE TABLE IF NOT EXISTS {MODEL_READMES_TABLE_NAME} (
                    id INT AUTO_INCREMENT PRIMARY KEY,
                    model_repo_id VARCHAR(512),
                    text LONGTEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
                    clean_text LONGTEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
                    created_at TIMESTAMP,
                    embedding BLOB
                )
            ''')

    def create_model_twitter_posts_table():
        with connection.cursor() as cursor:
            cursor.execute(f'''
                CREATE TABLE IF NOT EXISTS {MODEL_TWITTER_POSTS_TABLE_NAME} (
                    id INT AUTO_INCREMENT PRIMARY KEY,
                    model_repo_id VARCHAR(512),
                    post_id VARCHAR(256),
                    clean_text LONGTEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
                    created_at TIMESTAMP,
                    embedding BLOB
                )
            ''')

    def create_model_github_repos_table():
        with connection.cursor() as cursor:
            cursor.execute(f'''
                CREATE TABLE IF NOT EXISTS {MODEL_GITHUB_REPOS_TABLE_NAME} (
                    id INT AUTO_INCREMENT PRIMARY KEY,
                    model_repo_id VARCHAR(512),
                    repo_id INT,
                    name VARCHAR(512) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
                    description TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
                    clean_text LONGTEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
                    link VARCHAR(256),
                    created_at TIMESTAMP,
                    embedding BLOB
                )
            ''')

    create_models_table()
    create_model_readmes_table()
    create_model_twitter_posts_table()
    create_model_github_repos_table()


def drop_table(table_name: str):
    with connection.cursor() as cursor:
        cursor.execute(f'DROP TABLE IF EXISTS {table_name}')


def get_models(select='*', query='', as_dict=True):
    with connection.cursor() as cursor:
        _query = f'SELECT {select} FROM {MODELS_TABLE_NAME}'

        if query:
            _query += f' {query}'

        cursor.execute(_query)

        if as_dict:
            columns = [desc[0] for desc in cursor.description]
            return [dict(zip(columns, row)) for row in cursor.fetchall()]

        return cursor.fetchall()


def db_get_last_created_at(table, repo_id, to_string=False):
    with connection.cursor() as cursor:
        # Parameterize repo_id so quotes in model ids cannot break the query
        cursor.execute(f"""
            SELECT UNIX_TIMESTAMP(created_at) FROM {table}
            WHERE model_repo_id = %s
            ORDER BY created_at DESC
            LIMIT 1
        """, (repo_id,))

        row = cursor.fetchone()
        created_at = float(row[0]) if row and row[0] else None

        if created_at and to_string:
            created_at = datetime.fromtimestamp(created_at)
            created_at = created_at.strftime('%Y-%m-%dT%H:%M:%SZ')

        return created_at
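If you run this code outside the SingleStore notebook environment, connection_url is not injected for you. A minimal sketch, assuming a standard SingleStore connection string with placeholder credentials:

# Placeholders only -- substitute your workspace host, user, password, and database.
connection_url = 'admin:<password>@<workspace-host>:3306/<database>'
connection = s2.connect(connection_url)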
Step 5. Creating helper functions to load data into SingleStore
5.1. Setting up the openai.api_key
In [8]:
1openai.api_key = OPENAI_API_KEY
5.2. Create the create_embedding function
This function creates an embedding for whatever input it receives. We will apply it to all data pulled from GitHub, Hugging Face, and Twitter. The resulting vector embeddings are stored in the same SingleStore table as a separate column.
In [9]:
def count_tokens(text: str):
    enc = tiktoken.get_encoding('cl100k_base')
    return len(enc.encode(text, disallowed_special={}))


def create_embedding(input):
    try:
        data = openai.embeddings.create(input=input, model='text-embedding-ada-002').data
        return data[0].embedding
    except Exception as e:
        print(e)
        # Return an empty vector on failure
        return []
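As a quick optional sanity check, text-embedding-ada-002 produces 1536-dimensional vectors:

vector = create_embedding('What is the best LLM for code generation?')
print(len(vector))  # 1536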
5.3. Create utility functions to help parse the data ingested from the various sources
These functions ensure the JSON is in the right format and can be stored in SingleStore as a JSON column. In your Free Shared Tier workspace, you can bring in data of various formats (JSON, geospatial, vector) and interact with it using SQL and the MongoDB API.
In [10]:
class JSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime):
            return obj.strftime('%Y-%m-%d %H:%M:%S')
        return super().default(obj)


def list_into_chunks(lst, chunk_size=100):
    return [lst[i:i + chunk_size] for i in range(0, len(lst), chunk_size)]


def string_into_chunks(string: str, max_tokens=TOKENS_LIMIT):
    if count_tokens(string) <= max_tokens:
        return [string]

    delimiter = ' '
    words = string.split(delimiter)
    chunks = []
    current_chunk = []

    for word in words:
        if count_tokens(delimiter.join(current_chunk + [word])) <= max_tokens:
            current_chunk.append(word)
        else:
            chunks.append(delimiter.join(current_chunk))
            current_chunk = [word]

    if current_chunk:
        chunks.append(delimiter.join(current_chunk))

    return chunks


def clean_string(string: str):
    def strip_html_elements(string: str):
        html = markdown(string)
        soup = BeautifulSoup(html, "html.parser")
        text = soup.get_text()
        return text.strip()

    def remove_unicode_escapes(string: str):
        return re.sub(r'[^\x00-\x7F]+', '', string)

    def remove_string_spaces(string: str):
        new_string = re.sub(r'\n+', '\n', string)
        new_string = re.sub(r'\s+', ' ', new_string)
        return new_string

    def remove_links(string: str):
        url_pattern = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
        return re.sub(url_pattern, '', string)

    new_string = strip_html_elements(string)
    new_string = remove_unicode_escapes(new_string)
    new_string = remove_string_spaces(new_string)
    new_string = re.sub(r'\*\*+', '*', new_string)
    new_string = re.sub(r'--+', '-', new_string)
    new_string = re.sub(r'====+', '=', new_string)
    new_string = remove_links(new_string)

    return new_string
Step 6. Loading Data into SingleStore
6.1. Load Data on all Open-Source LLM Models from the Hugging Face Leaderboard
This function loads a pre-generated Open LLM Leaderboard dataset. Based on this dataset, all model data is created and inserted into the database. We will also create embeddings for all of the pulled data using the OpenAI embedding model.
In [11]:
def leaderboard_get_df():
    response = requests.get(LEADERBOARD_DATASET_URL)

    if response.status_code == 200:
        data = json.loads(response.text)
        df = pd.DataFrame(data).head(MODELS_LIMIT)
        return df
    else:
        print("Failed to retrieve JSON file")


def leaderboard_insert_model(model):
    try:
        _model = {key: value for key, value in model.items() if key != 'readme'}
        to_embedding = json.dumps(_model, cls=JSONEncoder)
        embedding = str(create_embedding(to_embedding))
        # Use a string key; a bare `embedding` variable as the dict key was a bug
        model_to_insert = {**_model, 'embedding': embedding}
        readmes_to_insert = []

        if model['readme']:
            readme = {
                'model_repo_id': model['repo_id'],
                'text': model['readme'],
                'created_at': time()
            }

            if count_tokens(readme['text']) <= TOKENS_THRESHOLD_LIMIT:
                readme['clean_text'] = clean_string(readme['text'])
                to_embedding = json.dumps({
                    'model_repo_id': readme['model_repo_id'],
                    'clean_text': readme['clean_text'],
                })
                readme['embedding'] = str(create_embedding(to_embedding))
                readmes_to_insert.append(readme)
            else:
                # Readme is too long for one embedding call: split it into chunks
                for i, chunk in enumerate(string_into_chunks(readme['text'])):
                    _readme = {
                        **readme,
                        'text': chunk,
                        'created_at': time()
                    }

                    _readme['clean_text'] = clean_string(chunk)
                    to_embedding = json.dumps({
                        'model_repo_id': _readme['model_repo_id'],
                        'clean_text': chunk,
                    })
                    _readme['embedding'] = str(create_embedding(to_embedding))
                    readmes_to_insert.append(_readme)

        with connection.cursor() as cursor:
            cursor.execute(f'''
                INSERT INTO {MODELS_TABLE_NAME} (name, author, repo_id, score, link, still_on_hub, arc, hellaswag, mmlu, truthfulqa, winogrande, gsm8k, downloads, likes, created_at, embedding)
                VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, FROM_UNIXTIME(%s), JSON_ARRAY_PACK(%s))
            ''', tuple(model_to_insert.values()))

        for chunk in list_into_chunks([tuple(readme.values()) for readme in readmes_to_insert]):
            with connection.cursor() as cursor:
                cursor.executemany(f'''
                    INSERT INTO {MODEL_READMES_TABLE_NAME} (model_repo_id, text, created_at, clean_text, embedding)
                    VALUES (%s, %s, FROM_UNIXTIME(%s), %s, JSON_ARRAY_PACK(%s))
                ''', chunk)
    except Exception as e:
        print('Error leaderboard_insert_model: ', e)


def leaderboard_process_models():
    print('Processing models')

    existed_model_repo_ids = [i[0] for i in get_models('repo_id', as_dict=False)]
    leaderboard_df = leaderboard_get_df()

    for i, row in leaderboard_df.iterrows():
        if not row['repo_id'] in existed_model_repo_ids:
            leaderboard_insert_model(row.to_dict())
6.2. Loading Data from GitHub about model usage
We will search the GitHub API by keyword, based on the model names we have above, to find their usage across repos. We will then pull data from the READMEs of the repos that reference a particular model and create an embedding for it.
This allows us to see in which kinds of scenarios developers are using a particular LLM and incorporate that as part of our recommendation.
In the first step, we search for the model using the GitHub API.
In [12]:
def github_search_repos(keyword: str, last_created_at):
    repos = []
    headers = {'Authorization': f'token {GITHUB_ACCESS_TOKEN}'}
    query = f'"{keyword}" in:name,description,readme'

    if last_created_at:
        query += f' created:>{last_created_at}'

    try:
        repos_response = requests.get(
            "https://api.github.com/search/repositories",
            headers=headers,
            params={'q': query}
        )

        if repos_response.status_code == 403:
            # Handle rate limiting: wait until the reset time, then retry
            rate_limit = repos_response.headers.get('X-RateLimit-Reset')
            if not rate_limit:
                return repos

            sleep_time = int(rate_limit) - int(time())
            if sleep_time > 0:
                print(f"Rate limit exceeded. Retrying in {sleep_time} seconds.")
                sleep(sleep_time)
            return github_search_repos(keyword, last_created_at)

        if repos_response.status_code != 200:
            return repos

        for repo in repos_response.json().get('items', []):
            try:
                readme_response = requests.get(repo['contents_url'].replace('{+path}', 'README.md'), headers=headers)
                # Check the readme response, not the search response
                if readme_response.status_code != 200:
                    continue

                readme_file = readme_response.json()
                if readme_file['size'] > 7000:
                    continue

                readme_text = requests.get(readme_file['download_url']).text
                if not readme_text:
                    continue

                repos.append({
                    'repo_id': repo['id'],
                    'name': repo['name'],
                    'link': repo['html_url'],
                    'created_at': datetime.strptime(repo['created_at'], '%Y-%m-%dT%H:%M:%SZ').timestamp(),
                    'description': repo.get('description', ''),
                    'readme': readme_text,
                })
            except Exception:
                continue
    except Exception:
        return repos

    return repos
After we conduct this search, we insert the results into another table in the database, along with their embeddings.
In [13]:
def github_insert_model_repos(model_repo_id, repos):
    for repo in repos:
        try:
            values = []
            value = {
                'model_repo_id': model_repo_id,
                'repo_id': repo['repo_id'],
                'name': repo['name'],
                'description': repo['description'],
                'clean_text': clean_string(repo['readme']),
                'link': repo['link'],
                'created_at': repo['created_at'],
            }

            to_embedding = {
                'model_repo_id': model_repo_id,
                'name': value['name'],
                'description': value['description'],
                'clean_text': value['clean_text']
            }

            if count_tokens(value['clean_text']) <= TOKENS_THRESHOLD_LIMIT:
                embedding = str(create_embedding(json.dumps(to_embedding)))
                values.append({**value, 'embedding': embedding})
            else:
                # Readme too long for one embedding call: embed per chunk
                for chunk in string_into_chunks(value['clean_text']):
                    embedding = str(create_embedding(json.dumps({
                        **to_embedding,
                        'clean_text': chunk
                    })))
                    values.append({**value, 'clean_text': chunk, 'embedding': embedding})

            for chunk in list_into_chunks([list(value.values()) for value in values]):
                with connection.cursor() as cursor:
                    cursor.executemany(f'''
                        INSERT INTO {MODEL_GITHUB_REPOS_TABLE_NAME} (model_repo_id, repo_id, name, description, clean_text, link, created_at, embedding)
                        VALUES (%s, %s, %s, %s, %s, %s, FROM_UNIXTIME(%s), JSON_ARRAY_PACK(%s))
                    ''', chunk)
        except Exception as e:
            print('Error github_insert_model_repos: ', e)


def github_process_models_repos(existed_models):
    print('Processing GitHub posts')

    for model in existed_models:
        try:
            repo_id = model['repo_id']
            last_created_at = db_get_last_created_at(MODEL_GITHUB_REPOS_TABLE_NAME, repo_id, True)
            keyword = model['name'] if re.search(r'\d', model['name']) else repo_id
            found_repos = github_search_repos(keyword, last_created_at)

            if len(found_repos):
                github_insert_model_repos(repo_id, found_repos)
        except Exception as e:
            print('Error github_process_models_repos: ', e)
6.3. Load Data from Twitter about these models.
First, we will search Twitter based on the model names we have using the API.
In [14]:
twitter = tweepy.Client(TWITTER_BEARER_TOKEN)


def twitter_search_posts(keyword, last_created_at):
    posts = []

    try:
        tweets = twitter.search_recent_tweets(
            query=f'{keyword} -is:retweet',
            tweet_fields=['id', 'text', 'created_at'],
            start_time=last_created_at,
            max_results=100
        )

        # tweets.data is None when there are no results
        for tweet in tweets.data or []:
            posts.append({
                'post_id': tweet.id,
                'text': tweet.text,
                'created_at': tweet.created_at,
            })
    except Exception:
        return posts

    return posts
Next, we will add the text from the posts per model into another table. This table will also have embeddings associated with it.
In [15]:
def twitter_insert_model_posts(model_repo_id, posts):
    for post in posts:
        try:
            values = []

            value = {
                'model_repo_id': model_repo_id,
                'post_id': post['post_id'],
                'clean_text': clean_string(post['text']),
                'created_at': post['created_at'],
            }

            to_embedding = {
                'model_repo_id': value['model_repo_id'],
                'clean_text': value['clean_text']
            }

            embedding = str(create_embedding(json.dumps(to_embedding)))
            values.append({**value, 'embedding': embedding})

            for chunk in list_into_chunks([list(value.values()) for value in values]):
                with connection.cursor() as cursor:
                    cursor.executemany(f'''
                        INSERT INTO {MODEL_TWITTER_POSTS_TABLE_NAME} (model_repo_id, post_id, clean_text, created_at, embedding)
                        VALUES (%s, %s, %s, %s, JSON_ARRAY_PACK(%s))
                    ''', chunk)
        except Exception as e:
            print('Error twitter_insert_model_posts: ', e)


def twitter_process_models_posts(existed_models):
    print('Processing Twitter posts')

    for model in existed_models:
        try:
            repo_id = model['repo_id']
            last_created_at = db_get_last_created_at(MODEL_TWITTER_POSTS_TABLE_NAME, repo_id, True)
            keyword = model['name'] if re.search(r'\d', model['name']) else repo_id
            found_posts = twitter_search_posts(keyword, last_created_at)

            if len(found_posts):
                twitter_insert_model_posts(repo_id, found_posts)
        except Exception as e:
            print('Error twitter_process_models_posts: ', e)
6.4. Run the functions we've created above to load the data into SingleStore
First, the notebook creates tables in the database if they don't exist.
Next, the notebook retrieves the specified number of models from the Open LLM Leaderboard dataset, creates embeddings, and inserts the data into the models and model_readmes tables.
Next, it executes a query to retrieve all the models in the database. Based on these models, Twitter posts and GitHub repositories are searched, converted into embeddings, and inserted into their tables.
Finally, we get a ready set of data for finding the most appropriate model for any use case using semantic search.
In [16]:
create_tables()
leaderboard_process_models()
existed_models = get_models('repo_id, name', f'ORDER BY score DESC LIMIT {MODELS_LIMIT}')
twitter_process_models_posts(existed_models)
github_process_models_repos(existed_models)
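With the data loaded, you can sanity-check the semantic search directly from this notebook. A minimal sketch: the use-case text is an arbitrary example, and the production app combines this similarity score with the analytics described above.

# Sketch of a semantic search over the readme embeddings.
use_case = 'summarizing long legal documents'
query_embedding = str(create_embedding(use_case))

with connection.cursor() as cursor:
    cursor.execute(f'''
        SELECT model_repo_id, DOT_PRODUCT(embedding, JSON_ARRAY_PACK(%s)) AS similarity
        FROM {MODEL_READMES_TABLE_NAME}
        ORDER BY similarity DESC
        LIMIT 5
    ''', (query_embedding,))

    for repo_id, similarity in cursor.fetchall():
        print(repo_id, similarity)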
(Optional) 6.5 Run this notebook every hour using our built-in Job Service
By scheduling this notebook to run every hour, the latest data from Hugging Face will be pulled on new models, their scores, and their likes/downloads. This ensures that you capture developers' latest sentiment and usage signals from Twitter and GitHub.
SingleStore Notebooks + Job Service make it really easy to bring real-time data to your vector-based searches and downstream AI/ML models. You can ensure that the data is in the right format and apply Python-based transformations, like creating embeddings, to the most recently ingested data. Previously, this would have required a combination of several serverless technologies alongside your database, as we have written about before.
(Optional) Step 7: Host the app with Vercel
Follow our GitHub repo, where we showcase how to write the front-end code of the app that performs the vector similarity search to provide the results. The front end is built with our Elegance SDK and hosted on Vercel.
See our guide on our Vercel integration with SingleStore. We have a public version of the app running for free here.

Details
About this Template
We demonstrate how to build and host a real-time recommendation engine for free with SingleStore. The notebook also leverages our new SingleStore Job Service to ensure that the latest data is ingested and used in providing recommendations.
This Notebook can be run in Shared Tier, Standard and Enterprise deployments.
License
This Notebook has been released under the Apache 2.0 open source license.