Review: Where’s AI Taking Us?
This article shares our thoughts on the conversation between Vinod Khosla and Sam Altman in the panel below:
While there are many great insights in this discussion, the comments should also be understood in light of the speakers’ incentives: one leads OpenAI, the other a major venture capital firm, and it is in both of their best interests to keep excitement high about the future of AI and their roles in it. We summarize the key topics from the discussion, add some context, and then provide our perspective on each.
1. Your Job Is Safer Than You Think (For a Surprising Reason)
The dominant narrative about AI and work is one of replacement. The assumption is that as AI becomes capable of performing complex intellectual tasks, human jobs will inevitably become obsolete. However, Sam Altman argues that even in a world where AI can perform almost all jobs, our innate "biological programming" will preserve a surprising number of human roles, as well as open up entirely new types of jobs.
At the same time, Altman also states, “By 2035 and beyond, AI will be capable of performing at least 80% of most intellectual jobs, including roles like AI doctors, therapists, oncologists, engineers, salespeople, marketing professionals, and accountants.”
While it certainly seems feasible that new jobs and roles will be created as AI advances, the claim that 80% of roles like doctors, therapists, and oncologists will be automated seems to contradict the idea that roles requiring human interaction and social skills will be preserved. Healthcare and life sciences roles like these are arguably 80% about human interaction; ideally, only the administrative components would be automated.
2. Stop Trying to Build the Next OpenAI
When asked directly for advice for venture capital LPs (Limited Partners) looking to back the next big winner, Altman offers a blunt corrective: the greatest financial returns will not come from funding another AI research lab. While the temptation is to chase the last big success story, he argues that "you almost never make a lot of money there."
He draws an analogy to the invention of the transistor. The companies that built the first transistors were important, but the vast majority of value was created by the entirely new industries and technologies the transistor enabled. The real opportunity isn't in replicating the foundational technology but in building the novel applications and companies that are now possible because of it. For capital allocators, the most important question is not "How can I compete with OpenAI?" but "What is now possible that wasn't before?"
To quote, “If I were an LP I would be spending 0% of my time trying to figure out how to invest in another AI research lab and 100% of my time figuring out how to invest in the thing that comes next.”
Our view, however, is that this position comes from a place of self-interest. While many venture capitalists may need to back more downstream organizations, it is also critical that funders look at research labs. Here’s why: it’s true that we should not fund labs that simply try to replicate existing foundation models, but AGI will not be developed by only throwing capital at the problem and following the current transformer architecture, or how things are done today. By investing in research labs, we can create more efficient models - models that use less electricity and require less capital - and advance innovative capabilities in reasoning, reinforcement learning, and beyond. Continuing down the current path is not the only way forward.
If we stop funding AI research and labs capable of building products, the only thing we achieve is reducing OpenAI’s competition. The next OpenAI could, after all, look drastically different.
3. ChatGPT Was an Accident, Not a Master Plan
It’s easy to assume that a global phenomenon like ChatGPT was the result of a meticulous, long-term product strategy. The reality is far more surprising. OpenAI began as a "really well-run research lab" that, in Altman's words, later "bolted on a badly run company" to commercialize its powerful GPT-3 model. The problem was, the model was cool but not yet good enough to build a killer product around.
Out of ideas, the team launched an API to "crowdsource" product development. As Altman recalls, "The whole world figured out exactly one thing to do with it... these copywriting applications."

But a "sleeper hit" was hiding in their internal data: a small group of users spent their time in the developer "playground," simply chatting with the model all day. This led them to build the chat interface, but they almost didn't launch it. Why? The retention was "atrocious." The critical insight, however, was that for the small cohort of users who did stick around, their "usage like increased over time." This was the true signal of product-market fit, teaching Altman a vital lesson: if a product has any retention at all, it's a powerful indicator, because the default for new products is for usage to drop straight to zero.
Our perspective is that research goals can inherently conflict with product and venture goals. However, we would also not have the transformer model or ChatGPT at all without that “well-run research lab” component. Our takeaway is that the combination of the two is what makes OpenAI so powerful, and it is actually a model the life sciences have long operated under. Translating research into products is exactly what healthcare and life sciences organizations do, and harnessing that in new ways with technology can only improve the motions this industry already excels at.
Adoption for healthcare and life sciences products is a bit more complicated than pure technology plays given the human component and regulations, but some of the sentiment applies to the industry as well.
4. The 10-Person, Billion-Dollar Company Is Almost Here
The scale of modern business is about to be radically redefined. Sam Altman confidently predicts that a company with only 10 employees that generates a billion dollars in revenue has likely either already started or will start within the next few years. This isn't a distant sci-fi concept; it's a near-term reality driven by the incredible leverage that AI provides.
To make this tangible, he offers the example of a single person discovering a blockbuster drug with the help of AI and "50,000 GPUs or something." This shift challenges our fundamental understanding of what it takes to create outsized value. The traditional correlation between headcount and revenue is breaking down, paving the way for a future where small, hyper-efficient teams can build businesses on a scale previously reserved for massive global corporations.
This trend is already happening in AI startups, and it may have high potential in the parts of healthcare and life sciences where data (software) is more the product than hardware or tangible outputs. But translating these insights into drugs that people can actually use, as in Altman’s example, still requires clinical trials and a massive human component. Even if drug discovery can proceed at an accelerated pace with technology, which we are seeing more of, these discoveries still have to gain the interest of pharmaceutical companies and go through the rest of the drug development process.
5. The "AI Software Engineer" Is the Most Disruptive Force Today
While the long-term possibilities of AI are vast, Altman points to a very specific, near-term application as the single most disruptive force for businesses today: the "AI software engineer." For most companies, the ability to build and iterate on software is a primary bottleneck—a "current limiting reagent" that dictates the pace of innovation and growth.
By integrating AI deeply into the software development lifecycle, teams can dramatically accelerate their output. Altman predicts that companies that master this will significantly outperform their competitors and that this will be "the big story for the rest of the year." This provides a clear, actionable focus for enterprise leaders trying to navigate the AI landscape: instead of getting lost in abstract concepts about AGI, the most immediate and impactful revolution is happening in how we build the tools that run our world.
From the healthcare and life sciences side, it is not just AI engineers who will drive disruption. Our vision is more inclusive: anyone using AI and other tools to accelerate their work, or to create entirely new work, can drive it. A scientist using AI to get 10x more done could easily drive more disruption than an AI engineer at a technology company. Organizations and people who adopt AI and other technologies will, in general, have an advantage, and that thesis is ultimately why KAMI Think Tank exists in the first place. We want to give that advantage to healthcare and life sciences professionals.
An Uneven and Accelerating Future
The insights from Sam Altman paint a picture of an AI future that is far from straightforward and a bit contradictory. It's a future where our fundamental human nature preserves our value in the workplace (but then 80% of therapy and oncologist jobs will be automated?), where the biggest opportunities lie in the applications built on top of AI, not in the foundational research itself (to reduce competition?), and where innovation emerges from AI engineers (hopefully everyone!). The scale of value creation is being compressed, empowering small teams to achieve unprecedented impact.
For us, as value creation decouples from human headcount and our need for connection resists automation, the most critical question isn't what AI will build, but what we will choose to value and who will contribute to building these tools.