Our data is changing faster than our data centers, making it harder and harder to keep up with the influx of information, let alone make use of it. IT teams still tolerate overnight batch processing. Scaling legacy solutions remains cost prohibitive. And many promised solutions force a complete departure from the past.
If this sounds familiar, you are not alone. Far too many innovative companies struggle to build applications for the future on infrastructure of the past. It’s time for a new approach.
In its report, “Hybrid Transaction/Analytical Processing Will Foster Opportunities for Dramatic Business Innovation,” Gartner identifies four major drawbacks of traditional database management systems and explains how a new approach, hybrid transactional/analytical processing, can solve them.
A Brief Overview of HTAP
Hybrid transactional/analytical processing (HTAP) merges two formerly distinct categories of data management: operational databases that process transactions and data warehouses that process analytics. Combining these functions into a single system inherently eliminates many of the challenges database administrators face today.
How HTAP Remedies the Four Drawbacks of Traditional Systems
ETL
In HTAP, data doesn’t need to move from operational databases to separate data warehouses or data marts to support analytics. Rather, data is processed in a single system of record, effectively eliminating the need to extract, transform, and load (ETL) data. This is welcome relief to data analysts and administrators, as ETL often takes hours (sometimes days) to complete.
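To make the contrast concrete, here is a minimal Python sketch that uses two in-memory SQLite databases purely as stand-ins for an operational store and a warehouse; the table names, schema, and etl() helper are illustrative assumptions, not drawn from the Gartner report or any particular product. The point is simply that the etl() step, and the second copy it feeds, is what a single HTAP system of record removes.

```python
import sqlite3

# Toy illustration only: two in-memory SQLite databases stand in for a
# traditional OLTP database and a separate data warehouse.
oltp = sqlite3.connect(":memory:")       # operational store (transactions)
warehouse = sqlite3.connect(":memory:")  # analytic copy (reporting)

oltp.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
warehouse.execute("CREATE TABLE orders_fact (order_id INTEGER, amount REAL)")

def etl():
    """Extract rows from the operational store and load them into the
    warehouse. Real pipelines add transformations and can run for hours."""
    rows = oltp.execute("SELECT order_id, amount FROM orders").fetchall()  # extract
    warehouse.executemany("INSERT INTO orders_fact VALUES (?, ?)", rows)   # load
    warehouse.commit()

# Traditional flow: write to the OLTP store, run ETL, then query the warehouse.
oltp.execute("INSERT INTO orders (amount) VALUES (19.99)")
oltp.commit()
etl()
print(warehouse.execute("SELECT SUM(amount) FROM orders_fact").fetchone())

# HTAP flow: the same aggregate runs directly on the system of record,
# so the etl() step and the warehouse copy disappear entirely.
print(oltp.execute("SELECT SUM(amount) FROM orders").fetchone())
```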
Analytic Latency
In HTAP, an application’s transactional data is available for analytics as soon as it is created. As a result, HTAP provides an accurate, up-to-the-moment view of the data, allowing businesses to power applications and monitor infrastructure in real time.
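A small sketch of the same idea, again using in-memory SQLite as a stand-in for an HTAP store (the events table and the five-minute window are illustrative assumptions): a row written by the transactional path is visible to a monitoring-style aggregate the moment it is committed, with no batch window in between.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Toy illustration only: a single in-memory table serves both the write path
# and the analytic query, so there is no analytic latency to wait out.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_type TEXT, created_at TEXT)")

# Transactional write: record an event as it happens.
conn.execute(
    "INSERT INTO events VALUES (?, ?)",
    ("page_view", datetime.now(timezone.utc).isoformat()),
)
conn.commit()

# Analytic read, issued immediately: a monitoring-style aggregate over the
# last five minutes already reflects the row written above.
cutoff = (datetime.now(timezone.utc) - timedelta(minutes=5)).isoformat()
recent = conn.execute(
    "SELECT event_type, COUNT(*) FROM events WHERE created_at >= ? GROUP BY event_type",
    (cutoff,),
).fetchall()
print(recent)  # [('page_view', 1)]
```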
Synchronization
In HTAP, drill-down from analytic aggregates always points to fresh application data. Contrast that with a traditional architecture, where analytical and transactional data sit in separate silos, and building a system that synchronizes those stores quickly and accurately is cumbersome. On top of that, the “analytics copy” of the data is likely to be stale and misrepresent its current state.
Copies of Data
In HTAP, the need to create multiple copies of the same data is eliminated (or at least reduced). Compared to traditional architectures, where copies of data must be managed and monitored for consistency, HTAP reduces the inaccuracies and timing differences associated with duplicating data. The result is a simplified system architecture that lowers both data management complexity and hardware costs.
Why HTAP and Why Now?
One of the reasons we segmented workloads in the past was to optimize for specific hardware, especially disk drives. To meet performance needs, systems designed for transactions were tuned one way (many small, random writes) and systems designed for queries another (large, sequential scans). Merging both workloads onto the same set of disk drives would have been impossible from a performance perspective.
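One classic example of that divergence is row-oriented versus column-oriented layout. The toy Python sketch below illustrates the two access patterns; it is only a sketch of the general idea and is not meant to describe how any particular product stores data.

```python
# Toy illustration only: a row layout favors the transactional pattern
# (touch one whole record), a column layout favors the analytic pattern
# (scan one field across all records). On spinning disks, no single layout
# served both patterns well, which is one reason the systems were kept apart.
rows = [
    {"order_id": 1, "customer": "acme", "amount": 19.99},
    {"order_id": 2, "customer": "zenith", "amount": 42.50},
]

# Row layout: recording a new order appends one contiguous record.
rows.append({"order_id": 3, "customer": "acme", "amount": 7.25})

# Column layout: summing amounts reads one tightly packed array and skips the rest.
columns = {
    "order_id": [r["order_id"] for r in rows],
    "customer": [r["customer"] for r in rows],
    "amount": [r["amount"] for r in rows],
}
print(sum(columns["amount"]))  # analytic scan over a single column
```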
With the advent of low-cost, memory-rich servers, whether in your data center or in the cloud, new in-memory databases can transcend those restrictions, simplifying deployments for existing use cases while opening the door to new data-centric applications.
Want to learn more about in-memory databases and opportunities with HTAP? Take a look at the recent Gartner report here.
If you’re interested in test driving an in-memory database that offers the full benefits of HTAP, give SingleStore a try for 30 days, or give us a ring.