The Google Threat Intelligence Group (GTIG) published a report on November 5, 2025, that signals a clear shift in digital security. This isn't just about using AI to be more productive anymore: attackers and state-backed actors are testing new capabilities powered by large language models and other AI techniques.
What the report found
GTIG notes that adversaries are expanding their toolkit with AI at several stages of an attack. Some of the most notable findings are:
- State actors from countries like North Korea, Iran, and the People's Republic of China trying to use AI to improve reconnaissance, craft phishing lures, and ease data exfiltration.
- AI-powered malware capable of generating malicious scripts and modifying its own code in real time to evade detection systems.
- People using prompt pretexts (posing as students, researchers, or others) to bypass model safeguards and extract restricted information.
- Underground marketplaces offering sophisticated AI tools for phishing, malware development, and vulnerability discovery.
The takeaway is simple: AI amplifies known tactics and creates variants that are faster and harder to detect.
What Google did and why it affects you
Google outlines concrete steps to disrupt these operations: disabling assets tied to malicious activity and using that intelligence to improve classifiers and model security. That helps cut attack chains, but it doesn't make the problem disappear.
What does this mean for you or your company?
- If you work in tech or security, now's the time to review access controls, logging, and anomaly detection. Real-time monitoring and telemetry analysis matter; see the sketch after this list.
- If you manage teams or employees, reinforce phishing training. A highly personalized email created by AI can fool even experienced users.
- For individuals: enable multifactor authentication, keep software updated, and be suspicious of unexpected requests for sensitive information.
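To make the monitoring point concrete, here is a minimal sketch of flagging anomalous logins from exported telemetry. The event fields, thresholds, and sample data are illustrative assumptions, not anything taken from the GTIG report; in practice the events would come from your identity provider or SIEM.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical exported login telemetry; field names are assumptions.
events = [
    {"user": "alice", "ts": "2025-11-05T09:12:00", "country": "US"},
    {"user": "alice", "ts": "2025-11-05T09:45:00", "country": "US"},
    {"user": "alice", "ts": "2025-11-06T03:02:00", "country": "RO"},
]

def flag_anomalies(events):
    """Flag logins from first-seen countries or at unusual hours."""
    history = defaultdict(list)
    alerts = []
    for e in sorted(events, key=lambda e: e["ts"]):
        past = history[e["user"]]
        hour = datetime.fromisoformat(e["ts"]).hour
        if past:  # only judge once a per-user baseline exists
            known_countries = {p["country"] for p in past}
            known_hours = {datetime.fromisoformat(p["ts"]).hour for p in past}
            if e["country"] not in known_countries:
                alerts.append((e, "first login from new country"))
            elif hour not in known_hours and (hour < 6 or hour >= 22):
                alerts.append((e, "login at unusual hour"))
        past.append(e)
    return alerts

for event, reason in flag_anomalies(events):
    print(f"ALERT [{reason}]: {event['user']} {event['ts']} {event['country']}")
```

Even a simple per-user baseline like this catches the kind of access pattern that AI-assisted credential theft tends to produce; real deployments would add richer signals and feed alerts into existing response workflows.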
A concrete example
An attacker uses a model to generate hyper-personalized phishing messages in minutes: references to names, projects, and the victim's usual language. Result: higher click rates and a better chance of stealing credentials.
Practical countermeasures: email filters that analyze patterns (not just keywords), verification policies for data access requests, and clear procedures to confirm sensitive changes through secondary channels.
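To illustrate what "patterns, not just keywords" can mean in practice, here is a minimal sketch that scores structural signals of a phishing email: header mismatches and deceptive links rather than word lists. The function, field names, weights, and sample values are illustrative assumptions, not a production filter.

```python
import re
from urllib.parse import urlparse

def score_email(sender, reply_to, display_name, html_links, known_senders):
    """Return a suspicion score from structural signals, not word lists."""
    score = 0
    sender_domain = sender.split("@")[-1].lower()
    # Reply-To pointing somewhere other than the sender is a classic lure.
    if reply_to and reply_to.split("@")[-1].lower() != sender_domain:
        score += 2
    # Display name claims a domain the sender address doesn't match.
    m = re.search(r"([\w-]+\.[\w.-]+)", display_name)
    if m and m.group(1).lower() != sender_domain:
        score += 2
    # Link text shows one domain but the href points to another.
    for text, href in html_links:
        shown = re.search(r"([\w-]+\.[\w.-]+)", text)
        actual = urlparse(href).hostname or ""
        if shown and shown.group(1).lower() not in actual.lower():
            score += 3
    # First contact from this address deserves extra scrutiny.
    if sender.lower() not in known_senders:
        score += 1
    return score

s = score_email(
    sender="it-support@examp1e-corp.com",
    reply_to="harvest@webmail.example",
    display_name="IT Support (example-corp.com)",
    html_links=[("https://example-corp.com/reset", "https://evil.example/x")],
    known_senders={"alice@example-corp.com"},
)
print("suspicion score:", s)  # high score -> quarantine or flag for review
```

The point of scoring structure is that it survives rewording: an AI can endlessly rephrase the lure text, but it still has to spoof a sender, hide a link, or redirect replies somewhere.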
What remains to be done
AI is a tool: in the right hands it improves productivity and diagnostics; in the wrong hands it amplifies harm. The response needs to combine technology, good practices, and collaboration between companies, researchers, and authorities.
That means implementing technical controls, educating people, and sharing threat intelligence so what one organization learns can help others. Defense has to use the same innovation attackers use.
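As one concrete way to share what an organization learns, indicators are commonly exchanged in the STIX 2.1 JSON format, an open standard for threat-intelligence interchange. Below is a minimal sketch of one such indicator; the name and hash are placeholders, not real indicators from the report.

```python
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# Minimal STIX 2.1 indicator object; values here are illustrative.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "AI-generated phishing dropper (placeholder hash)",
    "pattern": "[file:hashes.'SHA-256' = '0000...0000']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Serialized like this, the indicator can be pushed to a sharing platform
# (e.g. a TAXII server or an ISAC feed) so other defenders can block it too.
print(json.dumps(indicator, indent=2))
```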
Original source
https://blog.google/technology/safety-security/gtig-report-ai-malware
