Tech billionaires seem to be doom prepping. Should we all be worried?
Mark Zuckerberg is said to have started work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014.
It is set to include a shelter, complete with its own energy and food supplies, though the carpenters and electricians working on the site were banned from talking about it by non-disclosure agreements, according to a report by Wired magazine.
A six-foot wall blocked the project from view of a nearby road.
Asked last year if he was creating a doomsday bunker, the Facebook founder gave a flat "no". The underground space, spanning some 5,000 square feet, is, he explained, "just like a little shelter, it's like a basement".
That hasn't stopped the speculation - likewise about his decision to buy 11 properties in the Crescent Park neighbourhood of Palo Alto in California, apparently adding a 7,000-square-foot underground space beneath them. Though his building permits refer to basements, according to the New York Times, some of his neighbours call it a bunker. Or a billionaire's bat cave.
Then there is the speculation around other tech leaders, some of whom appear to have been busy buying up chunks of land with underground spaces, ripe for conversion into multi-million pound luxury bunkers. Reid Hoffman, the co-founder of LinkedIn, has talked about "apocalypse insurance" - something he has previously claimed about half of the super-wealthy have, with New Zealand a popular destination for homes.
So, could they really be preparing for war, the effects of climate change, or some other catastrophic event the rest of us have yet to know about?
In the last few years, the advancement of artificial intelligence (AI) has only added to that list of potential existential woes. Many are deeply worried by the sheer speed of its progression. Ilya Sutskever, chief scientist and a co-founder of OpenAI, is reported to be one of them.
By mid-2023, the San Francisco-based firm had released ChatGPT - the chatbot now used by hundreds of millions of people across the world - and they were working fast on updates.
But by that summer, Mr Sutskever was becoming increasingly convinced that computer scientists were on the brink of developing artificial general intelligence (AGI) - the point at which machines match human intelligence - according to a book by journalist Karen Hao.
In a meeting, Mr Sutskever suggested to colleagues that they should dig an underground shelter for the company's top scientists before such a powerful technology was released on the world, Ms Hao reports.
"We're definitely going to build a bunker before we release AGI," he is widely reported to have said, though it's unclear who he meant by "we". It sheds light on a strange fact: many leading computer scientists and tech leaders, some of whom are working hard to develop a hugely intelligent form of AI, also seem deeply afraid of what it could one day do.
As for when AGI may arrive, tech leaders have claimed it is imminent. OpenAI boss Sam Altman said in December 2024 that it will come "sooner than most people in the world think". Sir Demis Hassabis, the co-founder of DeepMind, has predicted it will arrive "in the next five to ten years", while Anthropic founder Dario Amodei wrote last year that his preferred term - "powerful AI" - could be with us as early as 2026.
Yet not everyone is convinced. Dame Wendy Hall, professor of computer science at Southampton University, cautions that "they move the goalposts all the time". The scientific community may agree the technology is impressive, she argues, but it is nowhere near human intelligence.
One reason the idea excites some in Silicon Valley is the belief that AGI is a precursor to something beyond it: artificial super intelligence (ASI) - technology that surpasses human intelligence.
Elon Musk has claimed that super-intelligent AI could usher in an era of "universal high income". He has even claimed that AI will become so cheap and widespread that virtually anyone will want their own personal R2-D2 and C-3PO.
But what if AI is hijacked and used as a weapon? Or what if it decides humanity is the root of the world's problems, and acts to destroy it?
Governments are taking some protective measures. In the US, President Biden signed an executive order requiring AI companies to share safety test results with the federal government. Meanwhile in the UK, the AI Safety Institute has been established to better understand the potential risks.
But then there are those super-rich individuals with their own apocalypse insurance plans - which raises the question of whether the tech elite's fears are truly warranted, or simply their own anxieties playing out as preparation for the worst.
For now, the continued investment in underground shelters, set against the rhetoric around AI's advance, points to a widening gap between how society's wealthiest see the future and how the rest of us do. It leaves many wondering whether doom prepping is simple paranoia, or a legitimate response to genuine risk.