AI vs Hackers: How Attackers and Defenders Both Use Artificial Intelligence
Artificial intelligence has changed cyber security. Attackers use it to scale scams and find weak points faster. Defenders use it to spot threats earlier and respond in minutes rather than days. Think of it as an arms race where both sides are upgrading at the same time. This article explains how that race works, in clear terms, with practical steps for UK organisations.

What is the AI cyber arms race?
AI shortens the time between idea and action. Attackers generate convincing messages, voice clips and videos, and can test thousands of lures automatically. Defenders analyse huge amounts of logs and emails to find small patterns that humans miss. The side that adapts faster gains the advantage, which is why governance, training and rapid incident response now matter as much as firewalls.
How do hackers use AI?
Criminal groups apply AI to the entire attack chain.
Social engineering at scale. Language models draft convincing emails, chats and texts that mirror a company’s tone and reference recent events. Voice cloning turns a script into a believable call from a manager or supplier. Video deepfakes add extra pressure in payment fraud.
Target selection. Models sift public data to map staff, suppliers and technologies in use. This supports highly targeted phishing and business email compromise.
Payload crafting. Code assistants help modify known malware, obfuscate scripts and adapt exploits to specific systems.
Automation and persistence. Agents run repetitive actions such as password spraying, form filling, account creation and basic reconnaissance around the clock.
Result: more attempts, more believable lures, and less effort per attack.
How does AI improve cyber security?
Defenders apply AI to compress detection and response timelines.
Email and web protection. Models evaluate wording, intent and sender reputation rather than relying only on blocklists. This improves catch rates for new phishing campaigns; a simple scoring sketch follows this list.
Endpoint and identity analytics. Behaviour models flag unusual device actions and sign‑ins, then trigger extra checks such as step‑up MFA.
Triage and response. AI summarises alerts, correlates events, drafts analyst notes and suggests next actions. Teams move from noise to decisions faster; a small correlation sketch follows the table below.
User training in context. Simulated phishing tuned to real threats teaches staff what to report.
Result: fewer missed signals and faster clean‑up when incidents occur.
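
To make the email protection point concrete, here is a minimal sketch of scoring a message on wording and sender signals. It is illustrative only: the keyword lists, weights and threshold are assumptions, and real products use trained classifiers plus sender reputation rather than hand-written rules.

```python
# Illustrative sketch: score an email for phishing likelihood from simple
# wording and sender heuristics. Keywords, weights and the threshold are
# hypothetical; real tools use trained models, not hand-written rules.
import re

URGENCY_PHRASES = ["urgent", "immediately", "verify your account", "payment overdue"]
PAYMENT_PHRASES = ["bank details", "invoice attached", "gift card", "wire transfer"]

def score_message(sender_domain: str, subject: str, body: str,
                  known_domains: set[str]) -> float:
    """Return a 0-1 risk score from a few crude signals."""
    text = f"{subject} {body}".lower()
    score = 0.0
    if sender_domain not in known_domains:
        score += 0.3                                            # unfamiliar sender
    score += 0.2 * sum(p in text for p in URGENCY_PHRASES)      # pressure language
    score += 0.2 * sum(p in text for p in PAYMENT_PHRASES)      # payment lures
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 0.3                                            # raw IP links are suspicious
    return min(score, 1.0)

if __name__ == "__main__":
    risk = score_message(
        sender_domain="example-payments.top",
        subject="URGENT: invoice attached, payment overdue",
        body="Please send bank details immediately to http://203.0.113.5/pay",
        known_domains={"supplier.co.uk", "client.org"},
    )
    print(f"Risk score: {risk:.2f}")  # flag for review above an agreed threshold
```

The point of the sketch is the shift in approach: the decision is driven by language and context signals, not by whether the sender already appears on a blocklist.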
Offence vs defence: what each side uses today
| Stage | Attacker use of AI | Defender use of AI |
| --- | --- | --- |
| Reconnaissance | Scrape public data to profile staff and tech stacks | Surface shadow IT and risky external exposure |
| Initial access | Craft tailored emails, calls, videos and QR codes | Score messages by language and intent, warn users in‑line |
| Execution | Modify malware, adapt scripts to the target | Detect abnormal process, network and identity behaviour |
| Command and control | Automate persistence and lateral movement | Correlate alerts, block tokens, quarantine devices |
| Impact | Auto‑exfiltration and extortion messaging | Rapid restore, legal and client updates, root‑cause mapping |
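
As a rough illustration of the "correlate alerts" row and the triage point above, the sketch below groups raw alerts by user so an analyst sees one prioritised incident instead of many separate lines. The alert fields and the grouping rule are assumptions, not any product's schema; a real SIEM or SOAR platform does this with far richer context.

```python
# Illustrative sketch: correlate raw alerts into per-user incidents so an
# analyst reviews one summary instead of dozens of lines. The alert fields
# and grouping rule are assumptions, not any specific product's schema.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    device: str
    source: str      # e.g. "email", "endpoint", "identity"
    detail: str

def correlate(alerts: list[Alert]) -> dict[str, list[Alert]]:
    """Group alerts by user; hits from multiple sources raise priority."""
    grouped: dict[str, list[Alert]] = defaultdict(list)
    for a in alerts:
        grouped[a.user].append(a)
    return grouped

def summarise(grouped: dict[str, list[Alert]]) -> None:
    for user, items in grouped.items():
        sources = {a.source for a in items}
        priority = "HIGH" if len(sources) > 1 else "normal"
        print(f"[{priority}] {user}: {len(items)} alerts across {sorted(sources)}")
        for a in items:
            print(f"    {a.source}/{a.device}: {a.detail}")

if __name__ == "__main__":
    summarise(correlate([
        Alert("j.smith", "LAPTOP-12", "email", "Phishing link clicked"),
        Alert("j.smith", "LAPTOP-12", "identity", "Sign-in from new country"),
        Alert("a.jones", "DESKTOP-07", "endpoint", "Blocked macro execution"),
    ]))
```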
Why does AI change the economics of attacks?
AI lowers the cost per attempt and increases the hit rate. A small group can run thousands of tailored campaigns and learn which tactics work by analysing replies. That is why SMEs see more convincing emails, more voice scams and more brand spoofing. On the defender side, AI reduces manual review time and helps small teams act like larger ones. The economics now reward whoever builds repeatable processes around their tools.
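
As a purely illustrative calculation, with every figure invented for the example rather than measured, even modest shifts in cost per attempt and hit rate change the return on a campaign dramatically:

```python
# Purely illustrative arithmetic: how cost per attempt and hit rate change
# campaign economics. Every number here is an invented example, not a measurement.
def cost_per_success(attempts: int, cost_per_attempt: float, hit_rate: float) -> float:
    successes = attempts * hit_rate
    return (attempts * cost_per_attempt) / successes

# Hand-written lures: expensive per message, low volume.
manual = cost_per_success(attempts=100, cost_per_attempt=5.00, hit_rate=0.01)
# AI-assisted lures: cheap per message, higher volume, better wording.
assisted = cost_per_success(attempts=10_000, cost_per_attempt=0.05, hit_rate=0.03)

print(f"Manual campaign:      ~£{manual:.0f} per successful compromise")
print(f"AI-assisted campaign: ~£{assisted:.0f} per successful compromise")
```

Under these made-up numbers the cost of each successful compromise falls from hundreds of pounds to a few pounds, which is the economic shift defenders are responding to.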
What should UK SMEs do in the next 30 days?
Keep it practical and measurable.
Enable multi‑factor authentication on email, VPN, finance systems and any admin tools.
Turn on advanced phishing protection and DMARC, then review weekly reports; a quick DMARC check script follows this list.
Run one short staff session on modern phishing signals including voice and video scams.
Set up an easy reporting route for suspicious messages and make response times visible.
Review backups for scope, frequency and restore tests. Include Microsoft 365 or Google Workspace, not only servers.
Centralise device updates and endpoint protection. Quarantine on detection rather than alert‑only.
Agree a simple incident workflow: who isolates devices, who resets credentials, who talks to clients.
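
As a quick way to confirm the DMARC step above, the sketch below looks up a domain's published DMARC record. It assumes the third-party dnspython package (`pip install dnspython`) and uses `example.co.uk` as a placeholder domain.

```python
# Quick check that a domain publishes a DMARC policy. Assumes the dnspython
# package is installed (pip install dnspython); "example.co.uk" is a placeholder.
import dns.resolver

def dmarc_record(domain: str) -> str | None:
    """Return the DMARC TXT record for a domain, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

if __name__ == "__main__":
    record = dmarc_record("example.co.uk")
    if record is None:
        print("No DMARC record found - spoofing protection is incomplete.")
    else:
        print(f"DMARC policy: {record}")  # look for p=quarantine or p=reject
```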
How do we adopt defensive AI without extra risk?
Start with assistive use cases that support your team rather than autonomous actions.
Use AI to summarise alerts, not to approve changes.
Keep model inputs free of sensitive client data unless your contract and data residency cover it.
Log AI‑generated decisions inside your ticketing system; a minimal logging sketch follows this list.
Ask vendors about false‑positive rates, training data, and how they handle your logs.
Pilot one workflow at a time and record before‑and‑after metrics.
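
To illustrate the logging point above, here is a minimal sketch that records each AI suggestion alongside the human decision as a structured entry. The field names and the JSON-lines file are assumptions; in practice the same record would go into your ticketing system via its API or notes field.

```python
# Minimal sketch: record every AI suggestion and the human decision as a
# structured audit entry. Field names and the JSON-lines file are assumptions;
# in practice the same record would go into your ticketing system.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_decisions.jsonl"

def log_ai_decision(ticket_id: str, model: str, suggestion: str,
                    approved_by: str, action_taken: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ticket_id": ticket_id,
        "model": model,
        "suggestion": suggestion,        # what the AI proposed
        "approved_by": approved_by,      # a person signs off; AI never approves
        "action_taken": action_taken,    # what was actually done
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    log_ai_decision(
        ticket_id="INC-1042",
        model="alert-summariser-v1",
        suggestion="Isolate LAPTOP-12 and reset credentials for j.smith",
        approved_by="duty.analyst@example.co.uk",
        action_taken="Device isolated; credential reset scheduled",
    )
```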
What are the main risks of AI on both sides?
Content credibility. Deepfakes and AI‑written messages increase scam success. Counter with verification habits and strong financial controls.
Identity theft. Stolen cookies and tokens bypass passwords. Use device trust and conditional access; a small decision sketch follows this list.
Data privacy. Poor prompts can expose client data. Set an AI use policy and control which tools staff can use.
Alert fatigue. More telemetry without triage creates noise. Use AI to summarise and route, then review tuning monthly.
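
To ground the device trust and conditional access point, the sketch below shows the kind of rule such a policy encodes: allow a sign-in only when the device is managed and the context is expected, otherwise require step-up MFA or block it. The fields and rules are illustrative assumptions, not the configuration of any specific identity product.

```python
# Illustrative sketch of a conditional-access style decision: allow, challenge
# with MFA, or block, based on device trust and sign-in context. The fields and
# rules are assumptions, not any specific identity product's configuration.
from dataclasses import dataclass

@dataclass
class SignIn:
    user: str
    device_managed: bool      # enrolled in your device management
    country: str
    new_location: bool

EXPECTED_COUNTRIES = {"GB", "IE"}   # hypothetical policy scope

def access_decision(signin: SignIn) -> str:
    if not signin.device_managed:
        return "block"                      # unmanaged devices never get in
    if signin.country not in EXPECTED_COUNTRIES or signin.new_location:
        return "require_mfa"                # step-up check for unusual context
    return "allow"

if __name__ == "__main__":
    print(access_decision(SignIn("j.smith", device_managed=True,
                                 country="GB", new_location=False)))   # allow
    print(access_decision(SignIn("j.smith", device_managed=True,
                                 country="US", new_location=True)))    # require_mfa
    print(access_decision(SignIn("j.smith", device_managed=False,
                                 country="GB", new_location=False)))   # block
```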
What will change in the next 12 months?
Expect more voice scams, more QR code phishing, and more attempts to bypass MFA with session hijacking. On the defensive side, expect wider use of identity risk scoring, policy‑driven access and automated isolation of risky sessions. Keep your processes current and revisit content quarterly with new examples from your sector.