Luke Stock
Sr. Director, Solution Engineering
April 1, 2025

The "Middleman" Of BI Is A Parasite On Your Data


Over my 20-plus years in analytics, I've seen companies struggle with traditional BI tools—both as a practitioner and during my 10-year tenure at Tableau. The biggest challenge? What I call the “Middleman” of BI: the massive technical debt that comes with bespoke compute and storage layers.

Think Tableau's Hyper Extracts, Power BI SSAS Cubes, or legacy OLAP cubes from MicroStrategy, IBM Cognos, SAP BusinessObjects, and Oracle OBIEE. They introduce complexity, slow workflows, and force BI teams and business users into frustrating workarounds. Business users find BI tools too cumbersome and end up relying on spreadsheets instead.

Spreadsheet hell continues

These bespoke storage layers are more than just technical baggage. They actively stifle innovation. Traditional BI tools are trying to solve two very hard problems at once: high-performance query processing and a user-friendly yet analytically rich interface. And they are not so great at either.

BI vendors focus on maintaining their own compute and storage layers instead of improving usability. As a result, business users—especially those comfortable in spreadsheets—find these tools too complex. With BI adoption peaking at around 25% of users in enterprises, most revert to Excel, spending hours analyzing fragmented, disconnected data.

Traditional BI tools are trying to solve two very hard problems at once. And they are not so great at either.

And all that often means double the work for BI & data teams. Whenever data moves into a bespoke compute or storage layer, security policies don’t magically come with it. BI teams are stuck redoing row-level security, column permissions, and access controls just to keep things running. Then, when something breaks (and it always does), business users hit a wall, execs start asking questions, and the BI team is back in firefighting mode.
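To make the duplication concrete, here is a minimal sketch (all names and data are hypothetical, not any vendor's API): once rows are copied into a bespoke extract, the warehouse's row-level rule does not travel with them, so the BI team ends up maintaining a second copy of the same filter—one that can silently drift out of sync.

```python
# Hypothetical sketch: the same row-level security rule ends up
# living in two places once data is copied out of the warehouse.

# Rule as enforced in the warehouse: reps see only their own region.
def warehouse_rls(row, user_region):
    return row["region"] == user_region

warehouse_rows = [
    {"region": "EMEA", "revenue": 120},
    {"region": "AMER", "revenue": 300},
]

# Copying rows into a BI extract strips that policy: every row travels.
extract = list(warehouse_rows)

# So the BI team rebuilds the filter by hand — a second place the
# rule is defined, and a second place it can break or drift.
def extract_rls(row, user_region):
    return row["region"] == user_region

visible = [r for r in extract if extract_rls(r, "EMEA")]
print(visible)  # only the EMEA row
```

Two hand-maintained copies of one policy is exactly the firefighting setup described above: when the rule changes in the warehouse, someone has to remember to change it in the extract too.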

More points of failure, more stale data

Each additional hop between your source system, cloud data warehouse, and bespoke BI storage layer introduces another failure point. Extracts and data copies fail all the time—query timeouts, network disruptions, you name it.

I've seen real-world middleman data workflows that look like this:

Source system → SQL Server → SSAS cubes → Power Pivot query → Manual copy to Excel → Cleanup/export macro → Print to PDF → Email to executives.
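A quick back-of-the-envelope calculation shows why a chain like this produces stale mornings. The 99% per-hop success rate below is an assumed, illustrative number, but the math holds for any rate: reliability multiplies across hops.

```python
# Assumed, illustrative numbers: each hop (extract, cube refresh,
# copy, macro, export, email) succeeds 99% of the time.
hops = 7
per_hop_success = 0.99

# Reliability multiplies across hops, so the chain is much weaker
# than any single link.
end_to_end = per_hop_success ** hops
print(f"End-to-end success rate: {end_to_end:.1%}")  # roughly 93%

# On a daily refresh over ~22 business days, that is more than one
# morning of stale numbers per month.
expected_failures = (1 - end_to_end) * 22
print(f"Expected failed refreshes per month: {expected_failures:.1f}")
```

Seven individually reliable steps still fail end-to-end about once every few weeks—and that is before counting the manual steps, which fail far more often than 1% of the time.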

Business end users wake up in the morning expecting to see fresh numbers—only to find out they’re looking at the same stale data from yesterday (or last week).  

When leadership teams rely on accurate data for critical decisions and find outdated numbers instead, trust erodes fast.

This isn’t just an annoyance. When leadership teams rely on accurate data for critical decisions and find outdated numbers instead, trust erodes fast. Then comes the call—an executive, under pressure, demanding answers. Why don’t the numbers match? Where did the data go wrong? And if you don’t have an airtight explanation, you’re the one on the hook.

Goodbye, “Single Source of Truth”

Bespoke compute and storage layers in legacy BI architectures undermine the very foundation of a data warehouse: a single source of truth. By creating countless copies of data within their own proprietary systems, they fragment the data landscape, making it nearly impossible to maintain data consistency and integrity.  

The cost to the business is substantial. Teams either go on gut instinct or, more likely, pull data from source systems and spend hours stitching it together in spreadsheets: VLOOKUPs, manual spreadsheet gymnastics, and endless frustration.

No native write-back = no actionable insights

Dashboards and reports are not enough. Business users don’t just want to look at data—they want to act on it. Forecasting, data reconciliation, scenario modeling—these all require human input.

Business users don’t just want to look at data—they want to act on it.

But because traditional BI tools copy subsets of data into their own bespoke storage layers, native write-back to the data warehouse is nearly impossible. Instead, users pull data into spreadsheets and work with information that becomes outdated the moment it’s downloaded. BI dashboards become glorified ETL tools.

Paying double for the same compute

And then there’s the cost. 

With bespoke storage and compute layers, enterprises end up paying twice. They burn through cloud data warehouse compute credits to query and copy data out. Then, they pay legacy BI vendors for peak compute capacity on top of that.

It’s like building your dream home (your cloud data warehouse), but instead of living in it, you pitch tents in your neighbor's yard. Why? Because you’re worried about property taxes, water bills, and maintenance costs. But in reality, you’re just making your life harder and more expensive.

No middleman? No limits.

Cloud data warehouses like Snowflake and Databricks have fundamentally changed the game. These platforms offer unmatched speed, scalability, and performance. They can handle massive datasets and thousands of concurrent users without sacrificing query performance.

The dream of a single, scalable source of truth for all enterprise data is now reality.

Imagine a world where business users drill down from a dashboard to row-level details without SQL or Python. Data exploration happens at full fidelity across billions of rows without extracts or refresh failures.

The dream of a single, scalable source of truth for all enterprise data is now reality.

Beyond viewing data, users should be able to write back to the cloud warehouse using a familiar, spreadsheet-like interface. No more outdated extracts, no more scrambling to reconcile stale numbers. Just live, real-time data. Enterprises get faster insights, massive efficiency gains, and analysts and spreadsheet users become 10x more productive.

Your window into the cloud data warehouse is now simple, intuitive, and analytically rich, while storage and compute are handled where they belong: in the cloud data warehouse itself, not in an old and dusty BI tool that predates it.
