“Cybercrime costs the world $18 million every minute… In the Middle East, cybersecurity incidents cost an average of $8.05 million per breach, almost double the global average of $4.45 million.”
— Sheikh Salman Bin Mohammed Al Khalifa, CEO of Bahrain’s National Cyber Security Center
As the GCC region accelerates into a hyper-connected digital future, its cyber threat surface is expanding just as fast. With smart cities, IoT, and AI-driven services surging, attackers now have more vectors than ever to exploit. What’s more, AI is helping them do it faster, cheaper, and smarter.
Cybercriminals are using AI to scale operations, evade detection, and exploit human trust at a level never seen before. The result? A new wave of cyber threats that can mimic CEOs, bypass SOC filters, and hijack trust in seconds.
Both the Dubai Electronic Security Center (DESC) and Saudi Arabia’s National Cybersecurity Authority (NCA) have sounded the alarm on rising AI-fueled cybercrime.
How cybercriminals are using AI: Real-world examples of AI-driven cybercrime
AI has leveled the playing field in cybercrime. Complex cyber tactics are no longer limited to elite hackers. Even low-skilled actors now use generative tools to automate malware development, launch deepfake scams, and bypass traditional security layers.
- Vishing: In 2024, UAE authorities reported a surge in AI-powered voice phishing (vishing) scams where fraudsters impersonated bank officials using cloned voices to trick victims into revealing OTPs and banking credentials. One such case involved a Dubai resident losing over AED 200,000 after receiving a call mimicking his bank’s customer service line. The UAE Cybersecurity Council issued a warning urging the public to stay vigilant against AI-driven social engineering attacks.
- Polymorphic malware at Aramco: Saudi Aramco faced malware strains believed to be AI-generated, capable of evading signature-based detection.
- Deepfake CEO fraud in the UAE: In a widely reported case, a UAE bank manager was duped by an AI-cloned voice impersonating a company director and authorized transfers of roughly $35 million before the fraud was uncovered.
- Dubai Taxi Company breach: A misconfigured MongoDB in 2023 exposed 220,000 records including driver IDs, customer info, and internal logs.
- Kuwait Financial Group incident: As reported by Gulf News, AI-generated spoof emails imitating regulators led to credential theft at two banks.
- Qatar smart grid attack: According to The Peninsula Qatar, attackers used voice cloning to impersonate a grid engineer and disrupt test operations for over 36 hours.
AI for cyber defense: How cybersecurity teams use AI to detect threats
“The UAE faced 71 million cyberattacks in Q1 2024, but showed resilience, using AI for early threat detection. AI is a game‑changer.”
— Dr. Mohammed Hamad Al‑Kuwaiti, Chairman of the UAE Cybersecurity Council, speaking on GITEX Tech Waves Podcast
As AI-powered threats escalate, GCC cybersecurity teams are turning to AI itself to defend against them.

AI is especially useful to cybersecurity teams for threat detection because it excels at:
- Pattern recognition at scale, spotting threats humans may miss
- Real-time anomaly detection across endpoints, networks, and cloud
- Predictive analytics that surface risks before attacks occur
- Automated response to attacks that limits dwell time and reduces manual triage
Security teams in the GCC are now embedding artificial intelligence into every layer of their threat detection stack.
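To make the anomaly-detection point concrete, here is a minimal sketch of an unsupervised outlier model over network-flow data using scikit-learn’s IsolationForest. The flows.csv export and its column names are illustrative assumptions, not a real schema; a production pipeline would add streaming ingestion, richer features, and analyst feedback.

```python
# Minimal sketch: flag anomalous network flows with an unsupervised model.
# Assumes a hypothetical SIEM export "flows.csv" with illustrative columns
# (bytes_out, bytes_in, duration, dst_port) -- adapt to your own telemetry.
import pandas as pd
from sklearn.ensemble import IsolationForest

flows = pd.read_csv("flows.csv")  # hypothetical SIEM export
features = flows[["bytes_out", "bytes_in", "duration", "dst_port"]]

# Train on a recent window of mostly normal traffic; contamination is the
# expected fraction of outliers and should be tuned per environment.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
model.fit(features)

flows["anomaly_score"] = model.decision_function(features)  # lower = more anomalous
flows["is_outlier"] = model.predict(features) == -1         # -1 marks outliers

# Route the most suspicious flows to the SOC queue for triage.
print(flows[flows["is_outlier"]].sort_values("anomaly_score").head(20))
```

Flows flagged as outliers can then feed an analyst queue or an automated response playbook rather than relying on static signatures alone.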
GCC in action: National AI cyber initiatives
- DarkMatter Group (UAE): The firm’s cyber fusion centers use AI to monitor and respond to threats in real time across the UAE’s critical systems.
- Saudi NCA SOC modernization: According to Arab News, the NCA's SOC upgrades have improved incident detection and response time by more than 70%.
- Microsoft Security Copilot: Microsoft's AI security assistant is now in use at GCC banks such as FAB and Emirates NBD.
Risks and challenges of AI in cybercrime
AI may strengthen cyber defenses, but it also introduces new risks that traditional models can’t handle. These include:
- Blurring the line between attacker and automation: Generative AI tools make it easier for low-skilled actors to launch high-impact attacks such as phishing, deepfakes, and malware, all created at scale.
- No clear liability for misuse: Most AI tools are built with open-source components. When weaponized, who’s accountable? The user, the vendor, or no one at all?
- Defensive AI can be deceived: Threat actors are experimenting with data poisoning and adversarial inputs to blind security models or trigger false positives.
- The regulatory vacuum is real: Unlike the EU, the GCC lacks a unified framework to govern AI safety and enforcement, leaving loopholes wide open.
How to protect against AI-driven threats
GCC organizations must move from reactive to proactive, intelligence-led defense models to shield themselves from AI-driven threats.

Key steps include:
- Adopt AI-augmented managed detection and response (MDR) platforms like Microminder Cyber Security's custom GCC MDR stack.
- Build AI awareness in security training: Staff must learn how to spot deepfakes, spoofed audio, and realistic AI phishing.
- Use sovereign cloud providers like Khazna Cloud or STC Cloud to ensure jurisdictional control and compliance.
- Continuously update AI threat models: Even AI-powered defenses need retraining. Models trained on outdated data may misclassify AI-generated attacks, so feed your systems with up-to-date threat intelligence, especially GCC-focused IOCs and TTPs (sketched after this list). This helps maintain detection accuracy against polymorphic malware and zero-day attacks.
- Deploy deception technology (honeypots and decoys): Use decoy environments to trap and analyze AI-enhanced attack strategies before they hit production (sketched after this list). Pair with behavioral AI to track attacker movement in real time.
- Integrate AI into your incident response (IR) playbooks: Apart from detecting threats, AI should also recommend and automate IR workflows, especially during ransomware or deepfake-related fraud attempts.
Tip: Use AI-assisted playbooks that simulate deepfake scenarios (e.g., impersonated CEO calls or CFO emails).
- Adopt a zero-trust architecture with AI enforcement: Incorporate AI into your identity verification, session analysis, and access control systems to ensure every request is context-aware, especially in high-risk sectors like oil and gas or banking.
- Audit third-party AI integrations: Many vendors embed AI in their platforms. Ensure those algorithms follow security-by-design principles and aren’t inadvertently introducing new attack surfaces.
Key question: Is your vendor’s AI model explainable, auditable, and compliant with regional standards (e.g., NCA ECC/AI-0101)?
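To illustrate the "continuously update AI threat models" step above, the sketch below refreshes a local blocklist from a threat-intelligence feed. The feed URL and JSON shape are hypothetical placeholders; in practice you would point this at your provider's TAXII server or a GCC-focused commercial feed.

```python
# Minimal sketch: refresh a local blocklist from a threat-intelligence feed so
# detection logic keeps pace with new campaigns. The feed URL and JSON shape
# are hypothetical placeholders, not a real provider API.
import json
import urllib.request

FEED_URL = "https://intel.example.com/iocs/latest.json"   # placeholder URL
BLOCKLIST_PATH = "blocklist.json"

def refresh_blocklist() -> set[str]:
    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        indicators = json.load(resp)   # assumed shape: [{"type": "domain", "value": "..."}, ...]
    blocked = {i["value"] for i in indicators if i.get("type") in {"domain", "ip", "sha256"}}
    with open(BLOCKLIST_PATH, "w") as fh:
        json.dump(sorted(blocked), fh, indent=2)
    return blocked

if __name__ == "__main__":
    print(f"Refreshed {len(refresh_blocklist())} indicators")
```

Scheduling a job like this (hourly or daily) keeps detection content aligned with the latest regional campaigns instead of drifting on stale data.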
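To illustrate the deception-technology step, here is a minimal low-interaction decoy: a TCP listener that presents a fake SSH banner and logs every connection attempt. The port and banner are illustrative assumptions; commercial deception platforms add realistic services, isolation, and alert routing, so treat this only as the "trap and observe" idea.

```python
# Minimal sketch: a low-interaction TCP decoy that logs every connection
# attempt. Port and banner are illustrative; run only in an isolated segment.
import logging
import socket

logging.basicConfig(filename="decoy.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

DECOY_PORT = 2222   # illustrative port posing as an SSH service

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            logging.info("decoy hit from %s:%s", addr[0], addr[1])
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")   # fake banner, then close
```

Any hit on a decoy like this is high-signal by definition, since no legitimate user should ever touch it.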
The future of AI in cybercrime
The threat horizon is shifting rapidly. While security teams begin to deploy AI for defense, cybercriminals are evolving just as fast.
- Agentic AI threats: Microsoft and the UK's National Crime Agency warn of autonomous AI agents that can plan and execute multi-vector attacks.
- Dark LLMs rising: Tools like FraudGPT and WormGPT are already used to craft advanced phishing and evasion strategies.
- Synthetic video attacks: Deepfake impersonation in real-time video calls will redefine executive-level social engineering.
- Attacks on AI models: Expect adversaries to target the models themselves via data poisoning, model inversion, and evasion tactics.
- AI vs. AI: As generative threats grow, AI-powered defense must evolve with continuous learning and contextual threat modeling.
- Regulatory uncertainty: The speed of AI-driven cybercrime is outpacing GCC regulatory readiness, leaving gaps that bad actors will exploit.
Wrapping up: Secure now or pay later
Cybercrime in the GCC is no longer a purely human-led activity. It’s automated, intelligent, and increasingly invisible. Organizations must harden their defenses against a range of cybercrimes, from multimillion-dollar deepfake fraud to AI-crafted malware.
"Audit your AI-security readiness now, before hackers automate their next attack."
At Microminder Cyber Security, we help GCC businesses stay one step ahead with AI-driven threat detection, MDR solutions, deepfake defense, and regulatory compliance support.
Book a free cybersecurity consultation today and protect your critical digital infrastructure against the next generation of AI-powered cybercrime.
Don’t Let Cyber Attacks Ruin Your Business
- Certified Security Experts: Our CREST and ISO 27001 accredited experts have a proven track record of implementing modern security solutions
- 40 years of experience: We have served 2600+ customers across 20 countries to secure 7M+ users
- One Stop Security Shop: You name the service, we’ve got it — a comprehensive suite of security solutions designed to keep your organization safe