Why Your Data Model Isn’t Broken. Your Company Just Can’t Decide.

Explore why your data model may not be wrong but misaligned with business decisions, and learn how to design architecture that supports clarity, accountability, and trust.

TECH TOOLS

Alexander Pau

1/25/2026 · 5 min read

Most teams do not suffer from bad data.

They suffer from too many versions of reasonable truth.

Every dashboard looks defensible. Every metric has logic behind it. Every team can explain how their number was calculated. And yet when leadership asks a simple question, the room goes quiet.

What is our actual revenue number?
Why did conversion drop?
Which KPI should we trust?

The problem is rarely SQL quality or tooling maturity. It is that the data model is solving the wrong problem.

Not wrong technically. Wrong organizationally.

I have seen this firsthand on a client engagement where finding a source of truth became a weekly argument. Finance had one number. Operations had another. Product had a third. None of them were incorrect. They were just answering different questions using different assumptions. The dashboards looked clean. The meetings did not.

That experience forced something uncomfortable to click.

Most teams do not need better data models.
They need clarity on what decisions their data is meant to support.

Data models are opinions, not infrastructure

We talk about data architecture as if it is plumbing. Something neutral. Something objective.

It is not.

Every data model encodes an opinion about what matters.

When you aggregate daily instead of event level, you are saying trends matter more than behavior. When you model by customer instead of transaction, you are saying relationships matter more than volume. When you snapshot monthly, you are saying the pace of change matters less than stability.

Those are not technical choices. They are business ones.
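
To make that concrete, here is a minimal sketch in SQL, assuming a hypothetical events table with user_id, event_type, and occurred_at columns. Both queries are defensible. They just encode different opinions about what matters.

```sql
-- Hypothetical table: events(user_id, event_type, occurred_at)

-- Opinion 1: aggregate daily. Trends matter more than individual behavior.
SELECT
    CAST(occurred_at AS DATE) AS event_date,
    COUNT(*)                  AS event_count
FROM events
GROUP BY CAST(occurred_at AS DATE)
ORDER BY event_date;

-- Opinion 2: keep the event level. Behavior and sequence matter.
SELECT
    user_id,
    event_type,
    occurred_at
FROM events
ORDER BY user_id, occurred_at;
```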

This is why teams can implement modern stacks, adopt best practices, and still feel stuck. They copy architectures without copying the context that made those architectures useful.

The result is familiar. Data that is clean but not trusted. Dashboards that are accurate but unused.

If this feels familiar, it is the same failure you see in execution systems when ownership is fuzzy. I have written about this pattern before when breaking down how bad structure quietly kills outcomes in Your SQL Isn’t Messy, It’s Lying.

Different system. Same root cause.

Why multiple data models exist in the first place

There is a reason the analytics world never settled on one universal model.

Because there is no universal question.

Some teams need traceability. Others need speed. Others need consistency for reporting. Others need flexibility for exploration.

Layered approaches exist because teams kept tripping over each other’s logic. That frustration is exactly what Databricks points to when they talk about separating raw, cleaned, and business-ready data in their breakdown of medallion architecture.

Not because it is elegant.
Because it creates boundaries.

Raw data answers what happened.
Cleaned data answers what reliably happened.
Business data answers what the company agrees happened.
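
In warehouse terms, Databricks calls these layers bronze, silver, and gold. A minimal sketch, with hypothetical table names and an illustrative revenue rule, might look like this:

```sql
-- Silver: a cleaned, deduplicated version of the raw (bronze) load
CREATE VIEW silver_orders AS
SELECT DISTINCT
    order_id,
    customer_id,
    order_total,
    ordered_at
FROM bronze_orders
WHERE order_id IS NOT NULL;

-- Gold: the number the company agrees on, with the business rule spelled out
CREATE VIEW gold_monthly_revenue AS
SELECT
    DATE_TRUNC('month', ordered_at) AS revenue_month,
    SUM(order_total)                AS revenue
FROM silver_orders
GROUP BY DATE_TRUNC('month', ordered_at);
```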

That last part is the one most teams skip.

They think agreement will emerge automatically once the data is clean.

It never does.

The real source of truth problem

The client I worked with did not lack data.

They had too much of it.

Multiple ingestion pipelines. Multiple BI tools. Multiple teams defining KPIs locally because waiting for central approval felt slow.

So people did what operators always do under pressure. They optimized for movement.

The outcome was predictable. Every team shipped dashboards. None of them aligned.

Leadership meetings turned into reconciliation sessions. Time was spent debating numbers instead of decisions.

This is the moment when teams usually say they need a single source of truth.

What they actually need is a single source of accountability.

Without clear ownership, no data model can save you.

This mirrors what happens outside analytics too. When responsibility is shared but accountability is not, progress stalls. The same pattern shows up in execution systems, something I explored more deeply in Power BI + OKRs: How Leaders Turn Dashboards Into Execution.

Dashboards do not create alignment.
Ownership does.

When classic models work well

Traditional dimensional modeling still matters. When teams need consistency more than speed, the Kimball approach continues to hold up, especially for organizations that rely on stable reporting definitions quarter after quarter.

Finance loves it for a reason.

It enforces definitions. It stabilizes metrics. It prevents quiet drift.

If your business needs standardized reporting across months and quarters, dimensional models shine. They trade flexibility for trust.
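
A minimal sketch of the idea, with hypothetical fact and dimension tables. Every report that joins through the same dimensions inherits the same definitions, which is exactly how quiet drift gets prevented.

```sql
-- Hypothetical star schema:
--   fact_sales(date_key, customer_key, revenue)
--   dim_date(date_key, fiscal_quarter)
--   dim_customer(customer_key, segment)

SELECT
    d.fiscal_quarter,
    c.segment,
    SUM(f.revenue) AS revenue
FROM fact_sales   AS f
JOIN dim_date     AS d ON d.date_key     = f.date_key
JOIN dim_customer AS c ON c.customer_key = f.customer_key
GROUP BY d.fiscal_quarter, c.segment
ORDER BY d.fiscal_quarter, c.segment;
```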

The mistake is using them everywhere.

Exploratory teams feel constrained. Product teams feel slowed. Analysts start building shadow models again.

Not because Kimball is outdated.
Because the problem changed.

When modern analytics models shine

Modern transformation tools introduced a different philosophy. Build modular models. Layer logic. Document assumptions. Iterate quickly.

Modern analytics tooling also tried to close the definition gap by introducing semantic layers. The idea is simple: define metrics once so teams stop rebuilding the same logic everywhere. dbt frames this as a way to reduce duplication and disagreement, not eliminate it entirely.
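
The shape of the idea, sketched here in plain SQL rather than any particular semantic layer syntax, using a hypothetical invoices table:

```sql
-- Define the metric once, including the business rule, and nowhere else
CREATE VIEW metric_recognized_revenue AS
SELECT
    DATE_TRUNC('month', recognized_at) AS revenue_month,
    SUM(amount)                        AS recognized_revenue
FROM invoices
WHERE status = 'paid'
GROUP BY DATE_TRUNC('month', recognized_at);

-- Every dashboard reads from the shared definition instead of re-deriving it
SELECT revenue_month, recognized_revenue
FROM metric_recognized_revenue
ORDER BY revenue_month;
```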

In theory.

In practice, semantic layers fail for the same reason dashboards fail.

No one owns the final definition.

If the business cannot agree on what success means, the semantic layer simply becomes another place where disagreement lives.

This is why data intelligence models only work when paired with decision clarity. Otherwise, you just move confusion upstream.

The question most teams never ask

Before choosing architecture, you should be asking one uncomfortable question.

What decision is this number supposed to influence?

Not who consumes it.
Not how often it refreshes.
Not how pretty the dashboard looks.

What decision changes because this metric exists?

If no decision changes, the model does not matter.

This mindset is similar to how I approach tracking systems more broadly. Tools only earn their place when they reduce cognitive load. I wrote about this thinking in The Sharp Starts Tracking Playbook.

Data architecture is no different.

If it increases debate instead of reducing it, it is solving the wrong problem.

What strong teams do differently

Strong teams do not chase perfect models.

They do three things consistently.

First, they explicitly separate exploratory data from decision data.

Second, they assign metric ownership, even when it feels political.

Third, they accept that different models can coexist as long as their purpose is clear.
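
The first two can even be made visible in the warehouse itself. A Postgres-flavored sketch, reusing the hypothetical revenue definition from earlier:

```sql
-- Exploratory and decision data live in different schemas, on purpose
CREATE SCHEMA IF NOT EXISTS sandbox;    -- exploratory: anyone can build here, nothing is promised
CREATE SCHEMA IF NOT EXISTS certified;  -- decision data: every object has a named owner

-- Promote the shared definition into the certified schema
CREATE VIEW certified.monthly_recognized_revenue AS
SELECT revenue_month, recognized_revenue
FROM metric_recognized_revenue;

-- Record ownership where the next analyst will actually see it
COMMENT ON VIEW certified.monthly_recognized_revenue IS
  'Owner: Finance. Definition changes require Finance sign-off.';
```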

This is not about purity. It is about usefulness.

The goal is not one model to rule them all.
The goal is fewer meetings where nobody trusts the numbers on the screen.

Architecture is downstream of leadership

This is the part that rarely gets said out loud.

Data architecture reflects how a company makes decisions.

If decisions are centralized, models trend rigid.
If decisions are decentralized, models fragment.
If accountability is unclear, metrics multiply.

No tool fixes that.

That is why this topic overlaps so heavily with leadership. The same dynamics appear when teams avoid responsibility or default to consensus. I touched on this behavioral layer in Why Servant Leadership Is the Only Kind That Actually Works.

Data just makes the dysfunction visible.

The quiet takeaway

Your data model is probably not wrong.

It is just answering a question nobody agreed on.

Before redesigning pipelines, replatforming tools, or adopting the latest architecture trend, pause.

Ask what decision the business is actually trying to make.

Then design backward from there.

That is how data becomes an asset instead of a debate.

TLDR

  • Most data problems are decision problems wearing technical clothing

  • Different data models exist because different teams need different answers

  • A single source of truth fails when ownership is unclear

  • Clean pipelines do not fix misaligned incentives

  • Good architecture supports decisions; it does not replace them