Hello everyone! It's been quite a while. If you don't remember, I'm Robert, former co-founder of Hyperquery, and now co-founder of Oxygen Intelligence. After Win With Data was acquired by Deepnote, I wanted to take some time to think through the space from first principles again, freed from the bias that took hold of me from having my opinions hitched to a particular product. We're starting this newsletter in the spirit of writing (and building) in a way that attempts to rectify that bias.
Our product, our philosophy, and our newsletter are all being built in a distinctly different way from last time: rather than building a thing, then guzzling the kool-aid in an attempt to sell it with conviction, our existence is now an active exploration in trying to get to the right solution. I'm hoping the best of you will appreciate this, and we can have some interesting dialogues sans the irreverent influencer posts.
With that said, enjoy my first post. :)
At this point in 2025, AI + data marketing narratives have played out in almost every conceivable way. Data companies old and new have been painting sweeping visions of the "future of analytics" for a couple of years now, replete with promises of industry-shocking transformations through AI-powered workspaces, genBI, Agentic Analytics Suite™s, and AI data teams. There's no real criticism of the vernacular shift here: anyone could see the inevitability of an industry-wide pivot. Data has always been an industry where value has largely come from doing things (building dashboards, reports, decision support, ad hoc questions, etc.) that have historically been gatekept by a single technical barrier: SQL proficiency. But out-of-the-box LLMs have toppled that wall. They perform SQL generation so well that people had to make more extreme benchmarks just to keep the benchmark game going. And so the industry had to change, and of course, it all started with a bunch of new landing pages.
But somehow, the actual solutions still feel mostly shitty (great for us, I suppose). I suspect the data world at large has simultaneously overhyped the solution and severely underestimated the amount of work required to get something out the door that meets that promise. And so, in spite of how loud the discussions around AI are, it seems like nothing truly substantial is happening.
I've heard variations on the following, repeatedly:
"Once AI just gets better, we'll be able to use it to replace data teams entirely."
But the world (founders especially, surprisingly) seems to be waiting for this to happen. It's a valid strategy, particularly when LLM advancements have repeatedly invalidated entire LLM-based businesses over the last year. But LLMs have gotten so much better, so why are our problems still not solved?
First, I posit that the sheer intensity of LLM advancement has force-framed the problem in a way that puts far too much focus on things the LLM can do well, rather than the problems we actually need to solve. In particular, it seems like the analytics world is all too focused on the text-to-SQL problem, the vibe-analysis problem, generative BI: the showy technical capabilities that the LLM can solve now and will continue to solve better with each new model release. But technical prowess has never been the main thing that makes data valuable, and I'd argue these capabilities are not the limiting factor keeping LLM value from being unlocked within enterprise data. In the face of something like AI that shatters existing paradigms, we seem to have lost our past convictions entirely. We've let the technology redefine the problem.
And as a direct consequence of this reframing, I feel like there's an intellectual laziness that seems to have swept over the industry. Our collective optimism toward LLMs has stunted progress on the hard work that's actually required to build a robust, AI-native tool. We've relinquished all aspects of the problem to LLMs, and are now sitting around waiting for them to get better. But bare-metal AI lacks the operational properties needed for serious data work. Instead of plausible vibe-coded analyses, we need reliability and determinism. Rather than filling knowledge gaps with reasonable assumptions and best guesses, we need context retrieved against explicit decision boundaries. And where LLMs offer immediate, generated answers, we need trusted responses verified by actual people, or better yet, messy, iterative conversations that reveal what people actually need versus what they think they want.
But I don't think people get that? And so, still, the collective sentiment remains, hands waving: AI will solve it! And in the wake of this sentiment, the world has built weird products. We've fallen into some sort of skeuomorphic phase state, where the solution space has two attractors:
1. Skeuomorphic AI glued to existing products. Preserve the existing product and integrate LLMs on top, so in the case that LLMs improve drastically, we can preserve our initial product offering.

2. Gimmicky auto-pilots. Leave as much as possible to the LLM, so in the case that LLMs improve drastically, all of our product problems will be solved. Of course, none of this works yet, but the key word is "yet".
So, in a world where the latter is barely functional, the former is where folks must turn. But if a thing was made for one purpose, it's unlikely to be the optimal shape for another. At first, it was just chat prompts here and there, AI bolted onto things, and there's certainly utility there. But seeing the limitations that come with the duct-tape chatbot model, everyone in the industry seems to be scrambling to position themselves as the solution, attaching LLMs deeper and deeper into existing systems. And as a result, it's not just the chatbots anymore: even systems that don't directly represent analytics interfaces seem to be succumbing to a desire to be skeuomorphism king. We have semantic layer companies claiming the semantic layer is the key to success. We have metadata companies claiming the same thing. Hell, even some modern notebooks are saying they're the right shape for the ensuing refactor.
This isn't a complaint, really; it's a prediction more than anything else. It all feels very Blockbuster. Data companies can't seem to imagine a world where the store [BI / governance / warehouses / semantic layer / orchestration] isn't at the center of everything. But the optimal algorithm is one in which we start over, repeatedly, until we get it right, no? One where we think from first principles about how we should solve problems and then, only then, build accordingly. And so, maybe the real question isn't how to make our existing tools smarter. Maybe it's whether we need these tools at all. It's horses all the way down, no matter how many engines you attach to them.
A final comment here: obviously, I have my own evolving take on what the final end-state should look like, and I'm sorry discussion to that end has been a bit scant here. Over subsequent posts, I'll pull back the curtain a bit. In truth, it's been really hard to build. The tendency toward e/acc laziness pervades all decisions. I'm excited to share why things have been so difficult, and to offer the insights and solutions we've found along the way.