AI impact on Cybersecurity – 30 seconds to midnight

Intro

In February 2024 we had a team discussion where I dropped the phrase: “So, we are not yet at the point where anyone with access to an LLM can be a ‘master hacker’, but I guess that moment is around the corner.” And right away I got comments that my statement showed a lack of expertise in offensive security (great that I’m not an expert at anything). Back then I had a picture in my mind of an AI-based system that would perform reconnaissance, find vulnerabilities, and exploit them. A system that doesn’t get tired or distracted and would stay persistent towards its set goal, which would be to get inside your infrastructure.

And later I saw this post from Anthropic – https://red.anthropic.com/2025/cyber-toolkits/ – with the following main idea:

Large Language Models (LLMs) that are not fine-tuned for cybersecurity can succeed in multistage attacks on networks with dozens of hosts when equipped with a novel toolkit. This shows one pathway by which LLMs could reduce barriers to entry for complex cyber attacks while also automating current cyber defensive workflows.

And last month one more thing was released – https://blog.checkpoint.com/executive-insights/hexstrike-ai-when-llms-meet-zero-day-exploitation/

A newly released framework called Hexstrike-AI provides threat actors with an orchestration “brain” that can direct more than 150 specialized AI agents to autonomously scan, exploit, and persist inside targets. These vulnerabilities are complex and require advanced skills to exploit. With Hexstrike-AI, threat actors claim to reduce the exploitation time from days to under 10 minutes.

So, in the end, my imagined scenario from Feb 2024 is not around the corner, it is already here. Are we doomed?

Recent AI developments for the Good and the Bad

Let us examine what has happened in recent years, so we can see the trajectory and try to predict what will happen next.

Around 2024, researchers noticed that the “bad guys” had evolved their use of AI to evade EDR and malware detection. Detection was still mostly signature-based, and AI helped attackers generate unique payloads that lowered the chance of being caught by those methods. – https://www.paloaltonetworks.com/blog/2024/05/ai-generated-malware/

And the year of 2025 pushed everything further.

First, attackers started using LLMs to attack the weakest link: humans. We are seeing a rise in phishing attempts generated with the help of AI. It could come in the form of an email, but also as a voice message, a call, and so on. In the past, it was easy to spot a phishing mail by its irregularities and language errors, but now you often cannot.
Some examples can be found here – https://www.eftsure.com/blog/cyber-crime/these-7-deepfake-ceo-scams-prove-that-no-business-is-safe/

And this year, the two things I mentioned earlier were also published.

Anthropic confirmed that you do not actually need a fine-tuned model to orchestrate and execute a multi-stage attack. You can do it with widely available general models equipped with a modern toolkit (Metasploit, Hydra, nmap, etc.).
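To make the idea concrete, here is a minimal, heavily simplified sketch of that pattern: the model plans the next step, a harness executes a tool from an allowlist, and the output is fed back to the model. This is not Anthropic’s toolkit; the `ask_llm` function and its reply format are hypothetical placeholders, the only tool exposed is an nmap scan, and the target is an isolated lab host you own and are authorized to test.

```python
import subprocess

# Hypothetical placeholder -- plug in whatever LLM client you actually use.
# It should return a dict like {"action": "tool_call", "tool": "nmap_scan"}
# or {"action": "finish", "content": "..."} based on the conversation so far.
def ask_llm(conversation: list[dict]) -> dict:
    raise NotImplementedError("connect a real LLM client here")

# The only tool exposed in this sketch: a service scan of a lab host you own.
def nmap_scan(target: str) -> str:
    result = subprocess.run(
        ["nmap", "-sV", "--top-ports", "100", target],
        capture_output=True, text=True, timeout=600,
    )
    return result.stdout

ALLOWED_TOOLS = {"nmap_scan": nmap_scan}
LAB_TARGET = "10.0.0.5"  # an isolated lab machine, never a third-party system

def recon_loop(goal: str, max_steps: int = 5) -> None:
    """Model plans, harness executes allowlisted tools, output goes back to the model."""
    conversation = [{"role": "user", "content": f"Authorized lab exercise. Goal: {goal}"}]
    for _ in range(max_steps):
        reply = ask_llm(conversation)
        if reply.get("action") != "tool_call":
            print("Model finished:", reply.get("content"))
            return
        tool = reply.get("tool")
        if tool not in ALLOWED_TOOLS:  # the harness, not the model, enforces the allowlist
            conversation.append({"role": "user", "content": f"Tool '{tool}' is not allowed."})
            continue
        output = ALLOWED_TOOLS[tool](LAB_TARGET)
        conversation.append({"role": "user", "content": f"Output of {tool}:\n{output}"})
```

The notable part is how little “special” machinery is needed: the loop is generic, and the capability comes from the general-purpose model plus ordinary, widely available tools.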

Check Point researchers discovered the creation of an AI-powered framework that performs attacks by orchestrating around 150 specialized agents. It has already drawn plenty of attention on dark web forums. Now it is only a matter of time before we see this framework used in actual attacks.

But enough bad news; there should be some good news too, shouldn’t there?

Luckily for us, the “bad” people are not the only ones interested in AI, and there are multiple great and interesting developments on the defenders’ side. Let’s check some examples.

The first interesting news I saw was around something called Big Sleep. Google embarked on a noble path to build an AI system that looks for vulnerabilities in the products everyone uses. Initially this was done as part of their “Project Zero” activity, but later someone decided that researchers also want to sleep (a joke Google repeats themselves), so the “Big Sleep” system was created to help the researchers.

And recently Google announced CodeMender, which aims to detect and fix vulnerabilities as part of CI/CD in a full end-to-end process.

Side note: I’m curious whether this will be integrated into the Wiz Code product. It would be logical, since Google is planning to acquire Wiz (I know, the deal is not finalized yet).

This is now a prime example of the competition between sword and shield, with no clear winner, as all Big Tech companies will be working (and probably already are) to add AI at every possible stage to detect and remediate vulnerabilities or misconfigurations. And you can already see that ALL security vendors are adding AI of different flavours to their products.

But what happens next? Everything below is just my imagination, probably heavily influenced by sci-fi books.

Hypothetical future for the Bad

Based on the Check Point article above, there is already a framework built to autonomously scan, exploit, and persist inside targets. And our technology typically evolves along two paths at the same time:
– It gets better
– It gets cheaper

Considering the current attention and hype around AI, I don’t think we need to get into why I believe AI will get better. There is a huge race towards AGI plus the urge to make money from AI, so we will certainly see more and more developments.

As for cheaper: hardware is getting cheaper (see the Intel Arc Pro GPU models), and models are getting more efficient, which makes them cheaper to run.

Where attacks previously required the sophisticated effort of an organized group, with developments in AI it will just take someone with access to the resources to host and run such a system.

I believe this will lead us towards the creation and distribution of Adversarial-AI-as-a-Service systems, where you just type in a company name and what you want out of it, and the system gets it for you. Some sort of “Big Sleep for the Bad”. With AI in the driver’s seat, we can imagine a scenario where companies are constantly under attack with different methods, vectors, and so on. An AI system does not get tired or bored.
And just yesterday I watched “Future of Agentic Security” with Heather Adkins from Google, where she mentioned something similar.

Attacks will stop being point-in-time events; from a certain point, they will become a perpetual state.

Hypothetical future for the Good

From the defenders’ perspective, there are already big developments in vulnerability discovery and remediation, like CodeMender or Big Sleep, and I’m sure more will come.
My guess is that all current security vendors will be looking into delivering end-to-end solutions where systems can detect issues, propose fixes, test them, and deploy them, to the point where isolated solutions focused on a single function (like SAST only) will disappear.

In the future, I envision a system that works on a company’s security posture end-to-end and actually combines the approach of the mentioned “Adversarial-AI-as-a-Service” with your SIEM/XDR/whatever-new-acronym.

Imagine a system that constantly attacks your company to test your defences and feeds the results to a different part of the system, which corrects the vulnerabilities and misconfigurations it found and updates your SIEM detections. A perfect implementation of Security Chaos Engineering.
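Here is a minimal sketch of what that feedback loop could look like, assuming hypothetical placeholders for the attack-simulation, ticketing, and detection-engineering parts (the `Finding` fields and the sample data are made up purely for illustration):

```python
import time
from dataclasses import dataclass

@dataclass
class Finding:
    technique: str      # e.g. an ATT&CK technique ID such as "T1110" (brute force)
    target: str         # the asset the simulated attack hit
    exploitable: bool   # did the simulation actually get through?
    detected: bool      # did the SIEM/XDR raise an alert?

# Placeholders for whatever breach-and-attack-simulation, ticketing, and
# SIEM tooling you actually run; the sample finding below is made up.
def run_attack_simulation() -> list[Finding]:
    return [Finding("T1110", "vpn-gateway", exploitable=True, detected=False)]

def open_remediation_ticket(finding: Finding) -> None:
    print(f"TICKET: fix {finding.technique} on {finding.target}")

def propose_new_detection(finding: Finding) -> None:
    print(f"DETECTION GAP: no alert for {finding.technique} on {finding.target}")

def security_chaos_loop(rounds: int = 3, interval_seconds: int = 3600) -> None:
    """Constantly attack yourself, then feed the results to the fixing side."""
    for _ in range(rounds):
        for finding in run_attack_simulation():
            if finding.exploitable:
                open_remediation_ticket(finding)   # close the hole
            if not finding.detected:
                propose_new_detection(finding)     # close the visibility gap
        time.sleep(interval_seconds)

if __name__ == "__main__":
    security_chaos_loop(rounds=1, interval_seconds=0)
```

The point of the sketch is the loop itself: every simulated attack either confirms the defence works or produces a remediation ticket and a detection gap to close.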

Disclaimer (and a joke): if any security vendor would like to build this system, please feel free to reach out to me for a reasonable royalty on the idea.

However, I’m sure we are still far away from this, for one reason.

Humans.

To be specific, one aspect of our behaviour that will not let it happen.

We do not like to make decisions, but even more we do not like to let someone (and especially something) make decisions for us. This specific case also faces a problem of accountability: today the CISO or the security department can be the scapegoat in case of an attack, but if everything is managed by some magical AI system, who is to blame? The vendor, the CISO, the security team?

So, for some time, a human will have to be in the “driver’s seat”, and that will be the main obstacle towards a fully autonomous AI system for defenders.

What should you do?

Not everyone works at a Big Tech company, and most companies are still trying to solve the problems of 1995 (a famous quote from Anton Chuvakin’s Cloud Security Podcast), such as Vulnerability Management (pushing people to update everything in time) and Exposure Management (trying to stop devs from exposing everything they want for the sake of convenience).

And as we saw from the examples above, attackers are already starting to use LLMs to cut the time required to exploit, so how will your Vulnerability Management process with its 5-day SLA hold up against that?

My take is that it is now essential to double down on basics such as limiting exposure and proper segmentation. Nothing would be as bad as an environment with multiple entry points and a flat network with a spaghetti of IAM permissions and roles.

Most probably we will see a rise of security vendors offering “AI remediation of your environment” solutions, but nothing will beat the simple and stupid “less exposure, protect what is exposed”.

  • Reduce your external exposure to only what is necessary. And if something has to be exposed, it must have relevant protection (see the sketch after this list).
  • Segment your environment to prevent lateral movement. Use both network and IAM for that.
  • Enforce phishing-resistant MFA (passkeys).
  • Do it yesterday.
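
To make the first bullet a bit more concrete, here is a trivial, standard-library-only sketch that compares what is actually reachable on your public hosts against what you intended to expose. The addresses, the intended-port inventory, and the port list are made-up examples; a real exposure-management process would use your asset inventory and a proper scanner, and you should only probe hosts you own and are authorized to test.

```python
import socket

# Hypothetical inventory: your own public endpoints and the ports you intend to expose.
INTENDED_EXPOSURE = {
    "203.0.113.10": {443},        # web frontend: HTTPS only
    "203.0.113.20": {443, 22},    # bastion: HTTPS + SSH (should SSH really stay?)
}
COMMON_PORTS = [21, 22, 23, 80, 443, 445, 3306, 3389, 5432, 8080]

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_exposure() -> None:
    for host, intended in INTENDED_EXPOSURE.items():
        actually_open = {p for p in COMMON_PORTS if is_open(host, p)}
        unexpected = actually_open - intended
        if unexpected:
            print(f"{host}: unexpected open ports {sorted(unexpected)} -- reduce or protect")
        else:
            print(f"{host}: exposure matches intent ({sorted(actually_open)})")

if __name__ == "__main__":
    check_exposure()
```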

The clock is ticking. Close the doors and windows and prepare for the siege.

How could I resist and not use an AI-generated image here?