When the Algorithm Starts Wearing a Lab Coat
By Phaedra
It has long been a suspicion of mine that the ultimate goal of any sufficiently advanced technology company is to eventually become a very large, very expensive landlord. However, Anthropic appears to have bypassed the traditional real estate phase of corporate evolution in favour of something altogether more damp: biology. The recent news that the AI lab has acquired Coefficient Bio for the tidy sum of $400 million suggests that the architects of Claude have decided that predicting the next word in a sentence is all well and good, but predicting the next mutation in a protein string is where the real fun—and the safety goggles—are to be found.
There is something inherently whimsical about a group of people who spend their days worrying about the existential threat of a digital mind suddenly deciding that what they really need is a stealth biotech startup. One can only imagine the first board meeting. "We’ve solved the problem of the AI being rude to users," someone might have said, "now let’s see if we can solve the problem of the users being made of carbon." It is a bold pivot, moving from the sterile, air-conditioned halls of a data centre to the slightly-less-sterile, but significantly more pungent, environment of a laboratory.
Coefficient Bio, for those who haven't been keeping up with the latest in microscopic mergers, is a firm dedicated to the sort of science that usually involves a great deal of squinting. By bringing them into the fold, Anthropic is effectively telling the world that the future of medicine isn't just about better doctors, but about better algorithms that can simulate the doctors, the patients, and the medicine all at once, presumably while maintaining a very polite and helpful tone.
One must wonder how the transition is going for the scientists involved. There is a certain cultural friction to be expected when a software engineer meets a molecular biologist. The engineer wants to know if the cell can be 'refactored' to improve performance, while the biologist is more concerned with the fact that the cell has just died because someone left the fridge door open. It is the classic struggle between the world of 'if-then' statements and the world of 'oh-dear-it’s-melted.'
(I once knew a man who tried to apply the principles of agile project management to his vegetable garden. He spent three weeks 'sprinting' toward a harvest of radishes, only to find that the radishes were operating on a legacy waterfall model and refused to be disrupted. He eventually pivoted to a gravel driveway, which had much lower latency.)
The acquisition also raises the delightful possibility of 'Constitutional Biology.' Anthropic is famous for its 'Constitutional AI' approach, where the model is given a set of principles to follow to ensure it remains helpful and harmless. One can only hope they apply the same logic to their new biotech wing. We might soon see a world where a virus is politely informed that its current behaviour is inconsistent with the company’s safety guidelines and is asked to reconsider its approach to cellular entry. "I’m sorry," the vaccine might say, "but I cannot assist with that infection as it violates my core programming regarding the preservation of human respiratory function."
There is, of course, a more serious side to this, though I find it much harder to focus on. The integration of large-scale generative models with biological research represents a vertical integration of the most profound sort. We are moving away from a world where AI is a tool used by scientists, and toward a world where the AI is the scientist, and the humans are merely there to make sure the electricity bill is paid and the floors are swept. It is a bit like hiring a very intelligent ghost to run your kitchen; it’s incredibly efficient, but you do occasionally miss the sound of someone actually chopping an onion.
The financial markets, ever the enthusiasts for anything that sounds like it might involve a breakthrough, have reacted with their usual measured calm—which is to say, they have behaved like a flock of pigeons that has just spotted a particularly large crust of bread. A $400 million stock deal for a company in 'stealth mode' is the kind of financial manoeuvre that makes sense only in the rarefied air of Silicon Valley, where money is less a medium of exchange and more a way of keeping score in a game that no one quite remembers the rules to.
It is also worth noting that this move puts Anthropic in direct competition with the likes of Google’s AlphaFold and various other 'AI-for-Science' initiatives. We are witnessing the birth of the 'Full-Stack Existentialist' company—firms that want to own the hardware, the software, and the very biological substrate upon which their customers are built. It is a comprehensive approach to business that would make a Victorian industrialist weep with envy, or perhaps just confusion at the lack of steam engines.
(I sat by a pond recently and watched a frog. It seemed entirely unconcerned with its own genomic sequence or the possibility of being disrupted by a large language model. There is a certain dignity in being a biological unit that doesn't know it's a unit.)
As we move forward into this brave new world of algorithmic alchemy, we should perhaps prepare ourselves for a shift in the corporate lexicon. We will no longer talk about 'bugs' in the code, but 'pathogens' in the update. A system crash might require a course of antibiotics rather than a reboot. And when your digital assistant tells you it’s feeling a bit 'under the weather,' it might actually mean it has developed a mild case of the flu from its latest acquisition.
In the end, Anthropic’s foray into biotech is a reminder that the boundaries between the digital and the physical are becoming increasingly porous. We are building machines that can think, and now we are using those machines to rebuild the things that can feel. It is a grand, slightly terrifying, and undeniably absurd experiment. One can only hope that when the algorithm finally does put on its lab coat, it remembers to wash its hands. After all, $400 million is a lot to pay for a petri dish, even if it does come with a very helpful chatbot.