
SAN FRANCISCO — On the busy streets outside Anthropic’s offices in downtown San Francisco, Guido Reichstadter is entering his 31st day of a hunger strike against the development of artificial general intelligence (AGI) – a term used to describe AI systems that are as smart as humans – which he believes will lead to human extinction.
Sustaining himself on only water, vitamins and electrolytes, he is determined not to give up.
“I’m going to continue as long as I’m able,” Reichstadter told the Peninsula Press on Sept. 26.

At one point during the interview, he became visibly emotional, his voice breaking, as he spoke of the prospect of AI killing billions of people, including his family and loved ones.
“I’m not willing, and people should not be willing, to sit back as spectators in their own future and wait until a disaster like this happens,” he said.
The goal of the strike is to make Anthropic – the company behind the popular chatbot Claude – stop building AGI and to have its CEO, Dario Amodei, join the “effort to end this threat.”
Following Reichstadter’s lead, a handful of activists around the world have begun their own hunger strikes against AGI.
In London, two activists who reportedly both have backgrounds in AI, Michaël Trazzi and Denys Sheremt, protested outside Google DeepMind’s offices before ending their strikes after 18 and 16 days, respectively. In Bengaluru, India, protester Samuel Shadrach livestreamed an 18-day hunger strike before ending it Oct. 1.
The hunger strikes are part of a small but growing anti-AI movement. The Peninsula Press also met with protesters from the organization Stop AI at their monthly protest outside OpenAI’s offices on Sept. 26.
About a dozen protesters voiced a scattered mix of concerns about AI, from its impact on the environment to autonomous weapons, job losses and the fear of mass extinction.
The fear that AI could pose an existential risk to humanity is not new. But the idea has gotten renewed attention with the newly published book If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute, and Nate Soares, its current president. The book lays out the case for why building a superintelligence would lead to human extinction.
The argument can roughly be summarized as follows: there are no instances in nature of a less intelligent being controlling a more intelligent one, as the more intelligent will inevitably have the upper hand. If humans create machines that are more intelligent than we are, humanity will lose control.
Nobody claims to know exactly how a superintelligent AI would kill all humans – it may be through creating deadly viruses, nuclear holocaust or some novel weapon we haven’t thought of – but proponents argue that extinction would nevertheless be the inevitable outcome. Some liken it to an amateur playing chess against the world’s best player: You cannot predict their exact moves, but you know you will lose the game.

The view that the global AI race poses an existential risk to humanity is shared by many prominent researchers in the AI field, among them Nobel laureate Geoffrey Hinton, Turing Award winner Yoshua Bengio and Stuart Russell, co-founder of the International Association for Safe and Ethical AI.
Interestingly, this view is also shared by many of the CEOs of the AI companies. In 2015, the year OpenAI was founded, Sam Altman said, “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
Dario Amodei, CEO of Anthropic, said last month there’s a 25% chance that the future of AI will go “really, really badly.”
Despite their warnings, all the major AI companies are still racing to develop some form of human-level AI. Google DeepMind and OpenAI use the term AGI, Meta calls it superintelligence, while Anthropic simply refers to it as powerful AI.
“The fact of the matter is none of these companies have any plan for controlling superintelligent systems, which they are all racing to create,” Reichstadter said.

Among the large AI companies, Anthropic is considered by many in the AI safety community to be the most safety-conscious. The company regularly publishes research detailing risks from its models – such as their tendency to act more ethically when they believe they are being tested – and Amodei has on several occasions publicly warned of existential risk from rogue AI systems.
But for Reichstadter, there is no such thing as a responsible AI company pursuing AGI.
“I think Anthropic is doing an incredible public disservice by creating this impression that they are a responsible company, that participating in this global race to superintelligence in any way can be done safely or responsibly.”
Reichstadter previously made headlines in 2022 when he scaled a bridge in Washington, D.C., to protest the overturning of Roe v. Wade.