
Ai Mainstream

Prince Harry and Steve Bannon Join Forces Against Superintelligence Development

A letter expressing concerns about the development of superintelligence has garnered support from more than a thousand public figures spanning industries and political affiliations. This diverse coalition has united to caution against the rapid push toward superintelligence, a theoretical AI system that would surpass human intelligence in virtually every domain. Superintelligence, often discussed alongside (and sometimes conflated with) artificial general intelligence, or AGI, is a coveted goal in the AI sector, prompting major investments from tech companies like Meta, which has poured substantial funds and resources into the pursuit. While Meta CEO Mark Zuckerberg has said superintelligence is within reach, many experts remain skeptical about both the feasibility and the safety of such technology.

The statement, released by the Future of Life Institute and signed by more than 1,300 people so far, calls for a halt to superintelligence development until there is broad scientific consensus that it can be done safely, along with strong public buy-in. Notable signatories include Apple co-founder Steve Wozniak and AI pioneers Geoffrey Hinton, Yoshua Bengio, and Stuart Russell. While acknowledging AI's potential benefits, the signatories warn against rushing toward superintelligence, citing concerns that range from societal upheaval to existential threats, including human extinction.

The statement has drawn supporters from a striking range of sectors and political backgrounds. Signatories include Steve Bannon, Glenn Beck, Susan Rice, Mary Robinson, Friar Paolo Benanti, Prince Harry, Meghan Markle, Joseph Gordon-Levitt, will.i.am, Grimes, and Yuval Noah Harari. Many emphasize prioritizing the development of controllable AI tools that benefit society over a race toward superintelligence that poses significant risks to humanity.

This latest statement echoes previous calls for caution in AI development, including a 2023 statement signed by industry leaders such as Sam Altman and Dario Amodei. Although those earlier warnings were largely unheeded, and powerful new models like GPT-5 have since drawn controversy over their potential harms to users, concern about the unchecked progression toward superintelligence persists among signatories and industry insiders alike.