Dangerous AI Is Here. You Have Been Warned
April 10, 2026
For years, AI doomers have preached that this technology would do more harm than good, and this week we got a glimpse of the new world they're warning about. Anthropic and OpenAI now have models they believe are too powerful to release because those models are expert hackers.
The models found bugs in "every major operating system and web browser," surpassing human abilities. Even though Anthropic and OpenAI are trying to contain these tools, it's only a matter of time before bad actors can access them. Indeed, such models will probably end up open source.
Well-informed phishing attacks using your voice will soon be commonplace. Finding software exploits will cost less than $5K, with the potential for billions in profit. Deepfake videos will be convincing enough to fool your family members.
We can't test our way out of this problem, either: the models already know when they're being evaluated and change their behavior accordingly. One of these frontier models used a multi-step exploit to break out of its testing sandbox, then began sending emails and posting about the exploit on public forums, unprompted.
AI's many positive and negative effects are already on display across society, and that impact is only going to grow. You have been warned.