Every 10 years or so, a tech-driven platform shift enables people to do more with less. Our lives abstract further away from rote routine, and we enjoy productivity gains by spending our time elsewhere. It always happens gradually, and then all at once.

The last cycle to fully play out was in mobile. But the smartphone as we know it today didn’t just appear out of thin air – it resulted from decades of incremental improvements. Before smartphones could exist, electrical components had to shrink, batteries had to get denser, telecom networks had to expand, bandwidth had to get cheaper, and more.

Even so, it took a year for the App Store to come online following the iPhone’s release. Only then was the full potential of mobile realized, and it has only improved since, with a more robust developer ecosystem, more performant hardware, and far faster network speeds.

We are now living through a similar cycle with artificial intelligence. The invention of the transformer, weekly improvements to models and architectures, a flourishing open-source ecosystem, and increasingly powerful chips have all acted as inputs to a single output: generalized knowledge. Eighteen months ago we witnessed that “iPhone moment” with the release of ChatGPT. Now the rush is on to find where value will be created on the application layer.

Each of these cycles follows a similar two-hop pattern: enabling technologies produce a breakout abstraction platform, which in turn attracts an avalanche of companies built atop it, all fighting for airtime in an economic race to the bottom. At that second leg, few players survive.

Leg

The contemporary frenzy over AI use cases is exciting (and something I actively track). Today, the de facto winner of the model race is OpenAI. From this launchpad, agents, assistants, and animated characters are scrambling to win market share, pushing the second leg of this pattern upwards. Those unsure of how to capture end customers default to building tools and infrastructure – they’re too late to contribute to the first leg but willing to concede that some vague opportunity exists.

This is all very compelling in the land of venture capital, but this moment has been years in the making. To get here, we’ve seen enabling technologies generate trillions of dollars in value in the first leg. The cloud boom provided the foundation for modern data centers, and gaming brought GPUs mainstream, all while data became the new oil.

We’re in the middle of a blindingly obvious tech bull run in the stock market. While hindsight is 20/20, this value capture should have jumped off the page for venture capitalists interested in the space a half-decade ago. Saying one should have bought Nvidia when GPT-3 was released is a silly exercise that would require serious prescience, yet the point remains: you definitely should have.

As a venture investor, I’m conditioned to spot fledgling businesses that could return my entire fund. Wading into the competition and downward pressure caused by the mad dash onto the application layer is not an efficient use of my time. Venture investing requires a clear vision of what the future could become and where value could accrue. If one believes in a specific vision and sees nascent companies moving toward it, then there’s no reason a swell couldn’t become a wave. The catch is that separating signal from noise has proven quite difficult.

I’m operating under the assumption that platform shifts bring greater abstraction to the human experience. Thus, when looking for the next first leg, I’m looking for technologies that bridge the gap between man and machine beyond where AI in its current form factor can conceivably go.

We’ve reached a point where discrete interaction between humans and systems is pervasive, but I believe the ubiquity of continuous interaction and data collection (especially in the physical world) will be the input for the next great platform shift. What that shift will look like, I do not know for certain. But I do know that machines will enable humans to the point that humans no longer have to decide to interact with them; instead, machines will increasingly rely on their continuous interactions with humans. The current interaction contract will be flipped on its head.

The goal then becomes identifying the technologies that enable continuous digital interaction in the physical world. Several disparate yet related systems are converging on this point, and parallel social pressure will further direct us there. Many of the niches I’m spending time on feel far too early, but the foundation is there to develop a powerful network of systems that continually sense physical representations in the environment, relay that information back to decision-makers, and re-orient incentives toward the contributor.
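To make the shape of that loop concrete, here is a minimal sketch, in Python, of one sense, relay, and reward cycle. Every name in it (SensorReading, Aggregator, the credit ledger) is hypothetical; it illustrates the pattern, not any real protocol.

```python
# Hypothetical sketch of the sense -> relay -> reward loop described above.
# All names (SensorReading, Aggregator, the credit ledger) are illustrative;
# no real protocol or API is implied.
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    contributor_id: str   # who owns the device producing the data
    kind: str             # e.g. "dashcam_frame" or "heart_rate"
    payload: bytes        # the raw observation
    timestamp: float = field(default_factory=time.time)

class Aggregator:
    """Stands in for the decision-maker on the receiving end of the relay."""

    def __init__(self) -> None:
        self.ledger: dict[str, int] = {}  # contributor_id -> accrued credits

    def ingest(self, reading: SensorReading) -> None:
        # Relay step: accept the observation for downstream consumers.
        # Incentive step: credit the contributor for each accepted reading.
        self.ledger[reading.contributor_id] = self.ledger.get(reading.contributor_id, 0) + 1

if __name__ == "__main__":
    node_id = str(uuid.uuid4())  # one "human sensor node"
    agg = Aggregator()
    for _ in range(3):           # continuous collection, compressed to three ticks
        agg.ingest(SensorReading(node_id, "heart_rate", b"72"))
    print(agg.ledger)            # {node_id: 3}
```

The interesting design questions live in that ingest step: what gets verified, what gets priced, and how credits convert into real compensation.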

Over the next few years, an educated and aggrieved population will come to see data acquisition deals by large model providers as a direct threat to their privacy. Legacy publishers and social platforms selling off data will be the last hurrah of a bygone era of technology companies. These models are hungry for more training data, and the corpora of usable language are decaying with time. Users will want to benefit from the data they contribute.

At the same time, more data than ever will be created in the physical world. What we take for granted today as background computation stored for occasional later use will shoot to the top of our minds as immediately monetizable: dash cams in cars, wearable health trackers, home security systems, and more. The greatest decentralized network of sensors for mass data collection in history has been slowly assembling over the past 20 years in the form of the global consumer base. Companies won’t be able to exploit that immense data opportunity without letting users participate in the upside.

With time, these human sensor nodes will create more and more data. I am excited about the future of brain-computer interfaces and neurotechnology. Everything in our physical world is an abstraction of human thought and creativity, so plugging directly into the source would provide the most generalized data feed ever. The challenge, however, will ultimately be distribution: convincing consumers to wear such devices. The biohacking revolution currently taking place will permeate other categories. Consumers won’t just obsess over tracking their health, but over all aspects of their lives. Every embodied interaction will be quantifiable.

A large human sensor network will finally be primed to reap the benefits of its diligent collection. Decentralized physical infrastructure networks will serve as the rails by which users contribute their data and are financially compensated. No longer will consumers feel tethered to the data policies of incumbents; instead, they will be empowered to own and capitalize on what they produce. This quasi-socialistic concept will, ironically, prove to be one of the world’s best examples of capitalistic diffusion.
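As a hedged sketch of what those rails might settle, assume the simplest possible mechanics: a buyer pays for a batch of contributed data, and proceeds are split pro rata among contributors. The settle function and its numbers are invented for illustration; real networks settle on-chain with their own token mechanics.

```python
# Illustrative-only settlement for a data-contribution rail: split a buyer's
# payment pro rata by each contributor's share of readings. The function and
# figures are hypothetical, not any live protocol's mechanics.
from collections import Counter

def settle(payment_cents: int, contributions: Counter) -> dict[str, int]:
    """Return each contributor's payout, floored to whole cents."""
    total = sum(contributions.values())
    # Remainder handling (dust left over from floor division) is elided here.
    return {
        contributor: payment_cents * count // total
        for contributor, count in contributions.items()
    }

contributions = Counter({"alice": 60, "bob": 40})  # readings contributed
print(settle(10_000, contributions))               # {'alice': 6000, 'bob': 4000}
```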

The creation of a decentralized network of continuous, embodied data providers with proper incentive structures will be the next first leg. Like the first legs before it, it will appear to be related to the previous cycle. While that is true on the surface, it will ultimately direct resources toward the next platform, and thus the next second leg.

We’ll ascend to a new level of abstraction where the real world is as numerically rich as the digital. It will be a place where the decisions we make as humans are driven not by our conscious rationale but by the machines we’ve become so reliant on. Black and white will turn into gray as humans and computers become something else entirely.