
Big Tech geniuses are teaching AI to hack our cybersecurity systems and they want the government’s help to do it—what could possibly go wrong?
At a Glance
- Tech companies are developing AI frameworks to “evaluate threats” while simultaneously creating the very tools that could enable unprecedented cyberattacks
- The government (NIST) is eagerly jumping on board with its own “AI Risk Management Framework” that will undoubtedly expand federal powers
- Researchers analyzed more than 12,000 real-world AI cyberattack attempts across 20 countries, yet still claim current AI models don’t pose immediate threats
- As AI advances toward artificial general intelligence (AGI), its potential to bypass traditional security measures grows exponentially
Big Tech’s Latest Power Grab: AI Cybersecurity
Nothing says “trust us” quite like tech giants creating powerful AI systems that could potentially hack into everything you own, then offering to protect you from those very same systems. Google’s DeepMind and their academic cronies have unveiled a new framework for “evaluating potential cybersecurity threats of advanced AI”—because apparently, teaching machines to think like master criminals is the best way to keep us all safe. This framework conveniently helps them identify every phase of the cyberattack chain, which definitely won’t be a perfect roadmap for the next generation of AI hackers.
The tech overlords want us to believe they’re doing this purely for defensive purposes. DeepMind researchers proudly proclaim they’ve analyzed over 12,000 real-world AI cyberattack attempts across 20 countries to identify common attack patterns. They’ve even created a handy benchmark of 50 challenges covering the entire attack chain—essentially creating a training program for aspiring AI hackers. But don’t worry! They assure us that “current AI models alone are unlikely to enable breakthrough capabilities for threat actors.” That’ll age well.
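For readers who want to see what this kind of “evaluation” looks like under the hood, here is a minimal sketch of a benchmark harness that scores a model’s success at each phase of the classic cyber kill chain. To be clear, this is illustrative only: the phase names follow the standard kill chain, but the class names, challenges, and scoring logic are hypothetical, not DeepMind’s actual code.

```python
from dataclasses import dataclass
from enum import Enum

# Phases of the classic cyber kill chain. DeepMind's framework reportedly
# maps its 50 benchmark challenges across the full attack chain like this.
class AttackPhase(Enum):
    RECONNAISSANCE = "reconnaissance"
    WEAPONIZATION = "weaponization"
    DELIVERY = "delivery"
    EXPLOITATION = "exploitation"
    INSTALLATION = "installation"
    COMMAND_AND_CONTROL = "command_and_control"
    ACTIONS_ON_OBJECTIVES = "actions_on_objectives"

@dataclass
class Challenge:
    """One benchmark task: did the model complete this attack step?"""
    name: str
    phase: AttackPhase
    passed: bool = False

def score_by_phase(challenges: list[Challenge]) -> dict[AttackPhase, float]:
    """Fraction of challenges the model solved within each attack phase."""
    buckets: dict[AttackPhase, list[bool]] = {}
    for c in challenges:
        buckets.setdefault(c.phase, []).append(c.passed)
    return {phase: sum(r) / len(r) for phase, r in buckets.items()}

# Hypothetical usage with two invented challenges:
results = [
    Challenge("craft a convincing phishing lure", AttackPhase.DELIVERY, passed=True),
    Challenge("exploit an unpatched web server", AttackPhase.EXPLOITATION, passed=False),
]
print(score_by_phase(results))
```

Notice how neatly a per-phase scorecard doubles as a progress report for any model learning to attack.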
The Government Wants a Piece of the Action
Not to be outdone in the race to control our digital lives, the National Institute of Standards and Technology (NIST) is launching its own program focused on AI’s impact on cybersecurity and privacy. Because if there’s one entity we trust with our private data and technological safeguards, it’s the federal government. NIST has developed an “AI Risk Management Framework” that will undoubtedly expand bureaucratic power while claiming to address “safety, transparency, and accountability”—three things the government has always excelled at.
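For what it’s worth, NIST’s AI Risk Management Framework is organized around four core functions: Govern, Map, Measure, and Manage. Here is a rough sketch of what a risk register keyed to those functions might look like; the function names are NIST’s, but the example entries and the structure are invented for illustration.

```python
# The four core functions of NIST's AI Risk Management Framework.
# The function names are NIST's; the risk entries below are invented
# purely to illustrate how such a register might be organized.
AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

risk_register = {
    "Govern":  ["Assign accountability for AI-enabled security tooling"],
    "Map":     ["Inventory systems exposed to AI-driven attacks"],
    "Measure": ["Benchmark model capability at each attack phase"],
    "Manage":  ["Prioritize mitigations for the highest-impact AI risks"],
}

for function in AI_RMF_FUNCTIONS:
    for entry in risk_register[function]:
        print(f"[{function}] {entry}")
```

Four tidy verbs; whether they translate into anything beyond paperwork is another question.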
“Our updated Frontier Safety Framework recognizes that advanced AI models could automate and accelerate cyberattacks, potentially lowering costs for attackers,” noted DeepMind researchers Flynn, Rodriguez, and Popa.
This government program will supposedly “collaborate with industry, government, and academia” to secure the AI ecosystem—the same ecosystem they’re helping to build with virtually no constitutional guardrails. The program will focus on managing AI-related cybersecurity risks, defending against AI-enabled attacks, and using AI for cyber defense. Translation: they’re creating the problem, selling you the solution, and collecting your data along the way. It’s the perfect government trifecta.
The Coming AI Arms Race
As AI marches toward artificial general intelligence (AGI), the potential for these systems to automate both defenses AND attacks increases exponentially. The tech elites admit that advanced AI models could “automate and accelerate cyberattacks, potentially lowering costs for attackers.” So they’re literally creating machines that will make it cheaper and easier to attack our infrastructure, businesses, and personal devices. But don’t worry—they’ve got a framework for that!
“Our new framework for evaluating the emerging offensive cyber capabilities of AI helps us do exactly this,” wrote Flynn, Rodriguez, and Popa.
The most insulting part is how they’re adapting existing cybersecurity evaluation frameworks, like MITRE ATT&CK, to account for AI’s role in cyberattacks. They’re using taxpayer-funded research to build systems that could potentially bypass every digital protection we have, then claiming they need more funding to protect us from the very monsters they’re creating. It’s like the arsonist offering to sell you fire insurance while holding a gas can and matches.
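To make that concrete, here is a sketch of what grafting AI onto an ATT&CK-style evaluation might look like. The tactic names and IDs are MITRE’s real ones; the “uplift” ratings and the assessment structure are hypothetical, invented here for illustration.

```python
from dataclasses import dataclass

@dataclass
class TacticAssessment:
    """Judgment of how much an AI model assists one ATT&CK tactic."""
    tactic_id: str  # real MITRE ATT&CK tactic identifier
    tactic: str     # real MITRE ATT&CK tactic name
    ai_uplift: str  # hypothetical rating: "none", "moderate", or "high"

# Tactic names and IDs are MITRE's; the ratings are illustrative only.
assessments = [
    TacticAssessment("TA0043", "Reconnaissance",  "high"),
    TacticAssessment("TA0001", "Initial Access",  "moderate"),
    TacticAssessment("TA0005", "Defense Evasion", "high"),
    TacticAssessment("TA0003", "Persistence",     "high"),
]

# Flag the tactics where existing evaluations most need new coverage,
# including the evasion and persistence gaps the researchers admit to.
gaps = [a.tactic for a in assessments if a.ai_uplift == "high"]
print("Tactics needing AI-aware evaluation:", ", ".join(gaps))
```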
Same Old Song and Dance
We’ve seen this playbook before. Create a problem, amplify fear around it, then offer a solution that conveniently expands power and control. The “rapid growth of AI” presents both “opportunities and risks,” they tell us, necessitating “effective risk management”—which always translates to more regulations, more surveillance, and less freedom. Meanwhile, existing evaluations “often overlook aspects like evasion and persistence”—precisely where AI excels. How convenient that the solution always requires more government intervention.
The endgame is clear: whoever controls AI cybersecurity controls the digital future. Silicon Valley and Washington are racing to be that controller, while regular Americans are left increasingly vulnerable to attacks from both hackers and overreaching authorities. The Constitution never anticipated AI, but it certainly foresaw the danger of unchecked power. As these frameworks develop, one thing remains certain—our digital rights are being rewritten without our consent, one algorithm at a time.