In our previous brief, PinnacleOne highlighted the flashpoint risk in the South China Sea between the Philippines, backed by its treaty ally the U.S., and China.
This week, we focus executive attention on the likely future developments of AI’s application to offensive cyber operations.
Please subscribe to read future issues — and forward this newsletter to interested colleagues.
Contact us directly with any comments or questions: pinnacleone-info@sentinelone.com
Insight Focus | AI for Offensive Cyber Operations Isn’t Here…Yet
The hand of AI used in offensive cyber operations won’t have obvious fingerprints. Defenders are unlikely to find a fully autonomous agent on their network hacking away. Not only would attackers risk exposing a (currently) incredibly valuable system to discovery, but such a maneuver lacks something very important to the people executing attacks: control. Governments use many different legal frameworks, organizational structures, and oversight mechanisms to ensure that hacking operations are run intentionally, with acceptable risks, and (sometimes) deniability. Deploying a fully autonomous agent into a hostile environment creates so many unacceptable risks that it may only ever happen if innovations in defense compel it. For now, it’s sufficiently easy to achieve most offensive objectives without AI.
On the other hand, AI is likely already supporting some operations and operators in the background. Both the U.S. and China are working to develop AI for automated vulnerability discovery, exploitation, and patching. Success in this area would increase the speed of both offense and defense. Defenders would gain new tools to run code checks on their development pipelines before pushing to customers, along with continuous monitoring that can detect and block anomalous activity. Attackers could use the same technology to exploit targets that had not kept sufficient pace in applying AI to find and fix flaws in their own software.
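To make the defensive half of that equation concrete, here is a minimal sketch of what LLM-assisted code review in a development pipeline might look like. The `query_llm` helper is a hypothetical placeholder for whatever model endpoint a team has approved, not any specific vendor’s API; treat this as an illustration of the workflow’s shape, not a production scanner.

```python
"""Minimal sketch: LLM-assisted security review of changed files in CI.

`query_llm` is a hypothetical placeholder, not a real vendor API.
"""
import subprocess


def changed_files(base: str = "origin/main") -> list[str]:
    """List Python files modified on this branch relative to `base`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--", "*.py"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def build_review_prompt(path: str) -> str:
    """Frame a security-review request for a single source file."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    return (
        "You are a security reviewer. List any injection, deserialization, "
        "or path-traversal flaws in this file, citing line numbers:\n\n"
        + source
    )


def query_llm(prompt: str) -> str:
    """Placeholder: wire this to whatever model endpoint is approved."""
    raise NotImplementedError


if __name__ == "__main__":
    for path in changed_files():
        print(f"--- {path} ---")
        # Once query_llm is wired up, this prints the model's findings,
        # which a CI gate could parse before allowing the merge.
        print(query_llm(build_review_prompt(path)))
```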
Even if AI agents are not deployed onto the target environment, they may still work in the background, commanding actions via C2 channels to downstream tools or abusing living-off-the-land binaries (LOLbins). Work at Peng Cheng Labs in China suggests that the security services are trying to rapidly recreate targeted networks and use AI to plan attack paths. AI can help choose the attack paths with the lowest likelihood of detection by defenders. By acquiring foreign security products and loading them into the cyber range, operators can rehearse their operations until they are nearly certain they will not be detected. More speculatively, that practice could be used to create training data for an AI that is eventually empowered to run such operations.
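One way to picture the planning step is to model the rehearsed network as a graph whose edges carry estimated detection probabilities, then search for the path most likely to go unnoticed. The sketch below (a toy topology and made-up probabilities of our own invention, not anything from the Peng Cheng work) runs Dijkstra over -log(1 - p) edge costs, so minimizing the summed cost maximizes the probability of traversing the whole path undetected.

```python
"""Toy sketch: attack-path planning as lowest-detection-risk search.

Edge weights are hypothetical detection probabilities. Summing
-log(1 - p) per hop and minimizing it maximizes the product of
per-hop survival probabilities, i.e., the odds of staying unseen.
"""
import heapq
import math

# Hypothetical rehearsal-range topology: node -> [(neighbor, p_detect)]
GRAPH = {
    "phish-host": [("file-server", 0.10), ("dev-workstation", 0.30)],
    "file-server": [("domain-controller", 0.40)],
    "dev-workstation": [("build-server", 0.05)],
    "build-server": [("domain-controller", 0.15)],
}


def stealthiest_path(graph, start, goal):
    """Dijkstra over -log(1 - p) costs; returns (p_undetected, path)."""
    queue = [(0.0, start, [start])]
    best = {start: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return math.exp(-cost), path
        for nbr, p_detect in graph.get(node, []):
            new_cost = cost - math.log(1.0 - p_detect)
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(queue, (new_cost, nbr, path + [nbr]))
    return 0.0, []


p, path = stealthiest_path(GRAPH, "phish-host", "domain-controller")
print(f"{' -> '.join(path)} (undetected probability ~{p:.2f})")
```

The log transform is the key design choice: probabilities multiply along a path, but shortest-path algorithms need additive costs, and -log(1 - p) converts one into the other.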
Attention-Grabbing Papers
Research on AI for offensive hacking operations is full of smart people trying to demonstrate that their systems can do something very difficult. They rarely succeed at the task, but they often make a splash with readers who are only semi-literate in the technology and have more dreams than engineering skills.
Unfortunately, some academic papers have generated headlines claiming that AI can craft exploits for 1-day vulnerabilities. As a friend of SentinelOne notes, it’s precisely because these vulnerabilities are known that exploits can be crafted for them. Live search functions and an LLM’s ability to compile code are neither novel nor game-changing. More rigorous research out of the U.K. (which used private Capture the Flag challenges to ensure the models had not trained on the test data) suggests that four of the largest LLMs perform only moderately well at forensics and lack any skill at cryptographic challenges.
These headlines are capturing the attention of well-meaning but ill-informed legislators in state and federal governments. A proposed bill in California outlines regulatory fines for AI models that result in cyberattacks on critical infrastructure. The bill is utterly useless in the face of critical infrastructure already vulnerable to run-of-the-mill cyber operations, and it is indicative of the hype around AI. SentinelOne’s Alex Stamos wrote an excellent critique of the bill for the Sacramento Bee earlier this month.
Executives are also not immune to focusing on the wrong threats. Most companies’ security posture is not mature enough to defend themselves from moderately capable actors, much less advanced hacking teams augmented by frontier AI models. Allison Nixon, Chief Research Officer at Unit 221B, notes how many executives are inappropriately focused on APTs, stating “Over my whole career, I’ve told panicked executives that their major hacks were carried out by teenage gangs, and as soon as they realize the perpetrators are young, I can see their brains shut down in real time.” While speculating about the impact of emerging threats is important to stay ahead of the curve, most companies still need to focus on foundational elements of a mature business security posture.
AI for Defenders
AI for defenders is already here. Although the cyber domain has long offered defenders structural advantages (e.g., control over the environment in which the attacker has to operate), defenders have largely failed to capitalize on them. Now, defenders can use LLMs with access to centralized logs to baseline typical behavior, triage alerts, flag the critical ones, craft and execute threat hunting queries, and more. SentinelOne’s own Purple AI is an example of one such capability. AI is one of the few new technologies in the cyber domain that so clearly favors defenders from the start. Now it only needs to be adopted.
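As a rough illustration of that workflow (not a description of Purple AI or any specific product), the sketch below shows the shape of LLM-assisted triage: a behavioral baseline and a batch of alerts go in, a ranked assessment comes out. The alert records, baseline text, and the `call_model` helper are all hypothetical placeholders.

```python
"""Hypothetical sketch: LLM-assisted alert triage over centralized logs.

The alerts, baseline, and `call_model` are illustrative stand-ins,
not real SIEM records or a real vendor API.
"""
import json

ALERTS = [  # toy stand-ins for SIEM alert records
    {"id": 1, "host": "hr-laptop-07", "rule": "office_spawns_powershell",
     "detail": "WINWORD.EXE -> powershell.exe -enc ..."},
    {"id": 2, "host": "build-server", "rule": "new_scheduled_task",
     "detail": "schtasks /create /tn updater /tr c:\\tmp\\u.exe"},
]

BASELINE = "PowerShell spawned by Office apps is unseen in 90 days of logs."


def build_triage_prompt(alerts, baseline) -> str:
    """Combine the behavioral baseline and raw alerts into one request."""
    return (
        "Given this behavioral baseline:\n"
        f"{baseline}\n\n"
        "Rank these alerts by likely severity and justify each ranking:\n"
        f"{json.dumps(alerts, indent=2)}"
    )


def call_model(prompt: str) -> str:
    """Placeholder for whatever approved model endpoint is in use."""
    raise NotImplementedError


if __name__ == "__main__":
    # Print the prompt that would be sent; wire call_model to a real
    # endpoint to get the ranked triage back.
    print(build_triage_prompt(ALERTS, BASELINE))
```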
Humans hacking or exploiting poorly protected systems (including critical infrastructure) are still the clear and present danger. Humans assisted by AI tools may amplify and accelerate this danger, but they won’t fundamentally change the decision environment for corporate leaders already facing difficult risk-cost trade-offs. If you aren’t prioritizing security now and integrating available AI-enabled defensive tools, the next wave of AI-enabled attacks will be the least of your worries.