Formula 1 (F1) represents the pinnacle of motorsport technology, where success is measured in milliseconds.

In modern F1, each car is essentially a data center on wheels, generating massive amounts of real-time telemetry data during every session — practice, qualifying and the race. Here, victory is crafted not just on the asphalt, but in the digital realm where every millisecond tells a story.
Each F1 car is equipped with 300 sensors generating an astounding 1.1 million telemetry data points per second. During a single grand prix weekend, teams process approximately 160 terabytes of data — enough to fill 800 million pages of text. This torrent of information captures everything from the microscopic variations in tire temperature to the precise angle of the steering wheel, painting a digital portrait of performance that's as complex as it is crucial. This digital revolution has become so integral that teams now employ AI-powered simulations to model billions of potential race parameters, predicting how variables might affect performance before cars even hit the track.
The modern F1 car isn't just a marvel of mechanical engineering — it's a rolling supercomputer that broadcasts its vital signs with extraordinary precision. From the moment the lights go out, teams rely on this continuous stream of telemetry to make split-second decisions that mean the difference between victory and defeat. With average latencies in the low milliseconds from data transmission to insight delivery — even under the challenging network conditions typical at race tracks — this real-time analytics capability is as crucial to winning as engine power and driver skill. In this blog, we break down how this data is generated, then simulate those conditions and model the analytics behavior in SingleStore.
Each F1 car is equipped with hundreds of sensors that capture data points every millisecond. These sensors monitor everything from engine performance, tire temperatures and brake temperatures to driver inputs like throttle position, brake pressure and steering angle — and generate terabytes of additional data across parameters like the following (a sample record is sketched after this list):
- Speed and acceleration metrics
- Throttle and brake application patterns
- DRS (Drag Reduction System) usage
- Gear selection across different track sections
- Weather conditions and their impact on performance
- Tire degradation patterns
- Position changes and lap times
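To make this concrete, here is roughly what a single telemetry sample might look like once it lands as a record. This is only a sketch: the field names and units are assumptions for illustration, not the exact schema of any team's feed or of the public telemetry API.

```python
# One illustrative telemetry sample; field names and units are assumptions for this sketch.
telemetry_point = {
    "date": "2024-05-26T13:03:21.456Z",  # capture timestamp (UTC)
    "driver_number": 1,                  # car/driver identifier
    "speed": 312,                        # km/h
    "throttle": 100,                     # percent of pedal travel
    "brake": 0,                          # percent (some feeds only expose on/off)
    "n_gear": 8,                         # selected gear
    "rpm": 11500,                        # engine speed
    "drs": 1,                            # DRS open/closed flag
}
```

Multiply records like this by hundreds of channels and millisecond sampling across every car on the grid, and the terabyte figures above start to make sense.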
When distilled into concrete insights, this raw information helps teams and drivers improve operational performance across several criteria. Some of these include understanding tire degradation patterns for optimal pit stop timing, analyzing driver performance in different track sections, deciding car setup based on historical performance data and determining race strategies that account for weather forecasts and track conditions. Put together, this data delivers sound strategy in a sport where a split second makes the difference between a win and a loss.
As someone deeply invested in the sport, having access to this data transforms the viewing experience from passive observation to active engagement. When you see your favorite driver pushing their tires to the limit with 15 laps remaining, or that they're within striking distance of an overtake in the next few corners, it creates an emotional investment in every strategic decision.
This combination of real-time and historical analysis transforms F1 from just a racing spectacle into a data-rich analytical experience — where fans can truly understand the complex factors that determine race outcomes.
The evolution from batch to real-time analytics
Fans are no longer satisfied with knowing only who the fastest driver is — they want to know who performs best in particular sections of the track, along with more complex analytical insights. Outside of F1, this shift has exposed the limitations of traditional batch processing approaches, particularly in scenarios where immediacy is crucial, like ride-share pricing, fraud detection or supply chain monitoring.
Why do these traditional approaches fall short?
- Single-node databases — like MySQL, PostgreSQL and MongoDB® — hit performance walls when scaling real-time analytics to terabytes of data
- Traditional data warehouses, while excellent for BI reporting, aren't optimized for low-latency analytics or high concurrent user loads
- Batch processing and ETL workflows create unnecessary delays that modern applications can't afford
SingleStore as a real-time solution
SingleStore is specifically engineered for real-time analytics, addressing key requirements:
- Speed. Our three-tiered storage architecture delivers millisecond response times for complex analytical queries
- Streaming capability. SingleStore supports high-throughput, parallel streaming ingest of millions of events per second
- Unified architecture. We uniquely combine transactional and analytical workloads in a single engine, eliminating data movement delays
- Scalability. SingleStore handles high concurrency with support for millions of real-time queries across thousands of users
Now for the fun part — let’s show it!
For this simulation, the real-time component leverages Confluent Kafka hosted on AWS EC2, paired with SingleStore as our operational database. Confluent Kafka serves as our message broker since it excels at handling high-throughput, real-time data streams with minimal latency.
Formula 1 telemetry generates thousands of data points per second across multiple cars for metrics like speed and throttle, so Kafka's publish-subscribe model suits this use case perfectly. The platform's partitioning and fault tolerance ensure we never miss critical race data. The producer writes to four topics that are consumed by SingleStore pipelines, since SingleStore is ideal for processing time-series telemetry data while maintaining sub-second query response times for our dashboards. It also serves as both an OLTP and OLAP database in one, enabling high-fidelity ingestion and analysis over millions of rows with millisecond latency.
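As a minimal sketch of the producer side, the snippet below publishes a single JSON-encoded telemetry record to the "topic_1" topic used later in this walkthrough. The connection settings are placeholders for your Confluent Cloud cluster, and the record fields are illustrative rather than the notebook's exact payload.

```python
import json
from confluent_kafka import Producer

# Placeholder Confluent Cloud connection settings; substitute your cluster's values.
producer = Producer({
    "bootstrap.servers": "<bootstrap-server>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",
    "sasl.password": "<api-secret>",
})

def publish(record: dict, topic: str = "topic_1") -> None:
    # Serialize the telemetry record as JSON and queue it for delivery to Kafka.
    producer.produce(topic, value=json.dumps(record).encode("utf-8"))

# Illustrative sample; the repository's producer notebook builds these records from the F1 API.
publish({"date": "2024-05-26T13:03:21.456Z", "driver_number": 1, "speed": 312, "throttle": 100})
producer.flush()  # block until queued messages have been delivered
```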
Process flow
Let’s walk you through a simulation involving three notebooks, each representing an actor in this environment.
- Producer. Simulates an F1 data producer that retrieves telemetry data from the API and sends it to Kafka topics, emulating the real-time ingestion process
- Consumer. Focuses on setting up and consuming data with Confluent Kafka and SingleStore pipelines, including error handling and monitoring of the pipeline status
- Trigger. Shows how to trigger a SingleStore pipeline by fetching F1 telemetry data (like lap start times) from an API, generating a Grafana dashboard URL to visualize the data

Running the simulation
Head to this GitHub repository and clone it locally. Make sure you have valid access credentials for a free SingleStore account, Confluent Cloud and the F1 telemetry API.
First, set the session parameters and run the trigger notebook, which retrieves the start time for a specific lap from the F1 API and generates a Grafana URL for visualization. Running this notebook first confirms that your connection to the telemetry API is working and that you can capture and format the telemetry data correctly.
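The core of that trigger logic looks roughly like the sketch below: fetch a lap's start time, then build a Grafana URL whose time range brackets that lap. The endpoint, query parameters and response fields here are placeholders (the real ones live in the repository's trigger notebook); only the 'from'/'to' epoch-millisecond parameters are standard Grafana URL behavior.

```python
from datetime import datetime, timedelta
import requests

# Placeholder endpoint and parameters; the actual API URL and fields come from the trigger notebook.
LAPS_URL = "https://<f1-telemetry-api>/laps"
resp = requests.get(LAPS_URL, params={"session": "<session-id>", "driver_number": 1, "lap_number": 20})
resp.raise_for_status()
lap = resp.json()[0]                                   # first matching lap record (assumed shape)
lap_start = datetime.fromisoformat(lap["start_time"])  # assumed ISO-8601 start-time field

# Grafana dashboards accept explicit time ranges as epoch milliseconds via 'from' and 'to'.
start_ms = int(lap_start.timestamp() * 1000)
end_ms = int((lap_start + timedelta(minutes=2)).timestamp() * 1000)
dashboard_url = f"https://<grafana-host>/d/<dashboard-uid>/f1-telemetry?from={start_ms}&to={end_ms}"
print(dashboard_url)
```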
Next, update the SingleStore connection parameters at the top of the consumer notebook. This notebook is responsible for configuring and triggering the SingleStore pipeline that consumes data from your Kafka topic. It uses the provided connection information to create the necessary table and pipeline in SingleStore, monitor the pipeline's status and report any issues. Before you run this notebook, be aware that some SingleStore deployments might require additional configuration to make the information schema views (like pipelines_summary) available. Adjust your configuration if needed.
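Under the hood, the setup boils down to DDL along these lines. This is a minimal sketch, not the repository's exact code: the pipeline name, column list and CONFIG/CREDENTIALS keys are assumptions, while the table name "f1_car_data" and topic "topic_1" match the ones used later in this walkthrough.

```python
import singlestoredb as s2

# Placeholder connection parameters for your SingleStore workspace.
conn = s2.connect(host="<host>", port=3306, user="<user>", password="<password>", database="f1")
cur = conn.cursor()

# Destination table for the telemetry stream (column list is illustrative).
cur.execute("""
    CREATE TABLE IF NOT EXISTS f1_car_data (
        `date` DATETIME(6),
        driver_number INT,
        speed INT,
        throttle INT,
        brake INT,
        n_gear INT,
        rpm INT,
        drs INT,
        SORT KEY (`date`)
    )
""")

# Kafka pipeline into that table; exact CONFIG/CREDENTIALS keys depend on your Confluent Cloud setup.
cur.execute("""
    CREATE OR REPLACE PIPELINE f1_telemetry_pipeline AS
    LOAD DATA KAFKA '<bootstrap-server>/topic_1'
    CONFIG '{"security.protocol": "SASL_SSL", "sasl.mechanism": "PLAIN"}'
    CREDENTIALS '{"sasl.username": "<api-key>", "sasl.password": "<api-secret>"}'
    INTO TABLE f1_car_data
    FORMAT JSON
    (`date` <- date, driver_number <- driver_number, speed <- speed, throttle <- throttle,
     brake <- brake, n_gear <- n_gear, rpm <- rpm, drs <- drs)
""")
cur.execute("START PIPELINE f1_telemetry_pipeline")

# Confirm the pipeline is registered and running.
cur.execute("SELECT pipeline_name, state FROM information_schema.pipelines")
print(cur.fetchall())
```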
Finally, for the actual simulations:
- Run the consumer notebook to set up the pipeline and prepare for ingestion. Run it only once, since it starts the pipeline, which persists until it is explicitly stopped with a STOP command in the producer notebook.
- Check the pipeline status in the SingleStore portal pipelines section
- Ensure the pipeline is in running status
- Make sure the Kafka pipeline to SingleStore is set up, so any data pushed to the Kafka topic, “topic_1”, flows through the SingleStore pipeline and is written to the table “f1_car_data”
- Run the producer notebook, which simulates a live data stream. This notebook fetches telemetry data for a specific lap from the F1 API and sends the records to your Kafka topic
- The producer notebook's purpose is to publish this data to the Kafka topic “topic_1”
- To verify the producer's output, set the log level to INFO to see the data record by record within your notebook in real time
- Now, you can query the SingleStore table “f1_car_data” to see the records being populated in real time (see the query sketch after this list)
- (Optional) Grafana dashboard setup and linking
- We have provided a notebook that links to a Grafana dashboard visualizing the ingested data
- Once you’ve set up the dashboard correctly, you can see real-time driver updates
- Invoke the trigger notebook to get the URL for the Grafana dashboard you set up, then log in with the generated URL to view the real-time updates
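To verify the “f1_car_data” table mentioned a few bullets up, a query sketch like the one below (reusing the connection and the hypothetical pipeline name from the earlier pipeline sketch) is enough to confirm rows are arriving and to stop ingestion once the simulation is finished.

```python
# Confirm records are landing in the table as the producer publishes them.
cur.execute("SELECT COUNT(*) AS rows_ingested FROM f1_car_data")
print(cur.fetchone())

# Peek at the most recent telemetry rows.
cur.execute("SELECT * FROM f1_car_data ORDER BY `date` DESC LIMIT 5")
for row in cur.fetchall():
    print(row)

# When you're done, stop ingestion explicitly (the walkthrough does this in the producer notebook).
cur.execute("STOP PIPELINE f1_telemetry_pipeline")
```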

Final thoughts
We have just explored real-time ingestion using SingleStore pipelines; in the next blog in this series, we look forward to using the full set of historically available data to build dashboards. We found setting up Grafana to be the hardest step in this simulation. A one-second data refresh is currently not possible in Tableau, since it operates on Extracts, though live connections (a paid product offering) might overcome this to some extent.
We also explored Streamlit as one of the free alternatives for setting up these dashboards. Meanwhile, our engineering teams are working on dash apps that you can look forward to trying out for free in the coming weeks.
Curious to see how fast you can ingest telemetry data? Start free with SingleStore.