Data Science Is Easy. Operating It Is Not.
Why analytics struggles once it leaves the notebook.
Another observation from recent Storm King engagements.
Spend time around modern data science education (bootcamps, master's programs, even PhD tracks) and the emphasis is clear. We teach people how to code, how to build models, and how to evaluate performance. Python, SQL, notebooks, pipelines, and increasingly some flavor of AI layered on top.
What we spend far less time on is what happens after that work lands inside an organization.
Not the math.
Not the model.
The day-to-day reality of operating analytics inside real systems.
Most data science training quietly assumes a world where analysts control their data, can refactor freely, and work largely independently. That world exists in coursework and competitions. It does not exist in most enterprises.
In practice, analytics shows up inside platforms like the Microsoft Power Platform, Dataverse for Teams, Power Apps, and Power BI, often embedded directly into operational workflows. These tools are designed for durability and governance first, and experimentation second. That design choice shapes everything that comes after.
The gap becomes obvious the first time a team tries to scale beyond a pilot.
Take Dataverse for Teams as a concrete example. On paper, it’s collaborative and low-code. In reality, teams quickly learn that table structure, relationships, and Power App logic behave like shared infrastructure. There’s no real schema versioning, no safe branching, and no clean way for multiple people to make structural changes at once without stepping on each other.
That’s not a bug. It’s the tradeoff the platform makes to stay accessible.

Most data scientists are trained to refactor constantly - cleaning up and reorganizing data as they learn more. In notebooks and personal projects, this is a strength. In shared enterprise systems, that same instinct can quietly break everything downstream. A renamed field, a split table, or a “small cleanup” can ripple into broken apps, failed automations, and dashboards that no longer reconcile.
Enterprise platforms, especially low-code ones, actively discourage this kind of continuous structural change, not because it’s wrong, but because the blast radius is so large.
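To make that blast radius concrete, here's a minimal, hypothetical sketch in pandas (the table, column names, and data are invented for illustration, not taken from any real system): an upstream "small cleanup" renames a field, and a downstream query written against the old schema quietly breaks.

```python
import pandas as pd

# Upstream: an analyst tidies the schema, renaming a column for clarity.
orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "total_amt": [120.0, 80.0, 45.0],
})
orders = orders.rename(columns={"total_amt": "order_total"})

# Downstream: a dashboard query still expects the old column name.
try:
    revenue = orders["total_amt"].sum()
except KeyError:
    # The rename was invisible to this consumer until the moment it failed.
    revenue = None
```

In a notebook, the fix is a one-line edit. In a shared platform, that same rename has to be found and repaired in every app, automation, and report that referenced the old name, often without any tooling to tell you where those references live.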
What’s striking is how little practical guidance exists to help teams navigate this transition. While there is no shortage of material on modeling techniques or coding patterns, there are far fewer resources that speak to best practices for operating analytics inside constrained enterprise platforms. Most teams are left to learn through trial and error, often after fragile systems have already taken root.

When friction appears, the instinctive diagnosis is predictable. Leaders assume the problem is skill. They look for stronger data scientists, better coders, more advanced modeling. What they often miss is that the system itself is being stressed in ways no amount of technical talent can fix.
Highly capable analysts are placed into environments that punish the very behaviors they were trained to value. The result is churn: schemas change too often, apps become fragile, dashboards lose credibility, and teams slowly become hesitant to touch anything at all.
This isn’t a modeling failure. It’s an operational one.
At its core, the challenge is sustainability. Many analytics efforts produce impressive early results, only to degrade over time as systems become brittle and difficult to evolve. Sustainable system development requires clear ownership, disciplined change, and workflows aligned to tool constraints; without those, data science efforts struggle to deliver lasting value.
What’s missing from most data science education is any serious discussion of how analytics lives inside constrained tools, shared environments, and human workflows. How ownership works and where change needs to slow down. And how stability, paradoxically, is what enables scale.
In the next post, we’ll explore practical approaches teams can use to operate analytics as a system rather than a collection of individual skills, and how doing so dramatically improves durability and impact.
If this tension feels familiar, Storm King Analytics works with organizations to design and operate sustainable analytics systems that maximize the utility of data science investments - without sacrificing governance, trust, or adaptability. If you’re wrestling with fragile pipelines, brittle dashboards, or systems that don’t scale, we’d welcome the conversation.



Point superbly made!
Nailed the core tension here. The Dataverse example illustrates something I've seen repeatedly: low-code accessibility and operational stability pull in opposite directions. Back when I worked with a team migrating from prototype to production, they kept treating schema changes like notebook edits until downstream dependencies broke silently. What made it worse was the organizational assumption that more senior data scientists would solve it, when really what they needed was someone who understood change management. The blast radius concept is spot on.