AI in Security: Between Digital Savior and Snake Oil

AI in security is marketed as a flawless shield, spotting threats and predicting risks with superhuman precision. But the reality is messier, with false positives, black-box flaws, and overblown promises. This article cuts through the hype to ask: Is AI truly transforming protection, or is it just another tech illusion where human judgment still matters most?
By: Mirza Bahić
“AI is whatever hasn’t been done yet.” This wry observation from Princeton professors Arvind Narayanan and Sayash Kapoor in their book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference perfectly captures how the definition of AI keeps shifting, allowing the term to mean almost anything to anyone. Once the technology works reliably, however, be it spell-check, speech recognition, or autopilot in planes, we strip away its AI designation and take its magic for granted.
AI Shrinks the Haystack, But Humans Still Find the Needle
Yet, as AI security solutions flood the market with promises of almost supernatural vigilance, a sobering reality check comes from a tragic event at Antioch High School in Nashville. In January 2025, despite a nearly $1 million investment in an AI-based gun detection system called Omnilert, a 17-year-old student, Solomon Henderson, managed to bring a concealed handgun onto campus. He fatally shot a classmate and wounded another student before taking his own life.
The AI system, designed to identify visible firearms through surveillance cameras, failed to detect Henderson’s weapon. According to district officials, the system did not activate because the gun was not visible to the cameras at the time of the shooting. Omnilert’s CEO eventually admitted that the system requires a clear line of sight to the weapon to function effectively.
Maher Yamout, Lead Security Researcher at Kaspersky’s Global Research and Analysis Team, warns that such overstatements inflate expectations beyond AI’s current capabilities, noting, “AI is often portrayed as an all-knowing, autonomous system that is capable of making decisions instead of us. This is, of course, not realistic.”
In agreement with Yamout, Mohammed Soliman — a member at McLarty Associates and senior fellow at the Middle East Institute — argues that marketing routinely exaggerates what AI in security can do, adding that it does not work “as well as the glossy ads claim. They pitch fully autonomous security or zero breaches, but that’s overselling it.”
This incident underscores the limitations of relying solely on AI for critical security measures. The technology’s inability to detect concealed weapons raises concerns about its effectiveness in real-world scenarios where threats are not always overt.
Sajjad Arshad, Business Development Director at AxxonSoft Middle East, puts it bluntly: “AI doesn’t drink coffee to stay more focused, but it also doesn’t understand intent.” While AI excels at scanning images, spotting patterns, and combing through massive volumes of video without fatigue, it falls short where it matters most — in understanding why something is happening.
“Security isn’t just about spotting anomalies,” Arshad emphasizes. “It’s about judgment.” AI can flag someone standing still in a parking lot, but it can’t tell if they’re lost or about to break in. That level of interpretation — the kind that connects behavior to motive — still belongs to humans. As Arshad puts it, “decision-making and context are still human territory,” even if AI is doing the heavy lifting behind the scenes.
Soliman, for his part, offers a striking metaphor: “AI is brilliant at shrinking the haystack—flagging 100 suspicious events out of a million—but it’s still up to humans to find the needle. False positives remain a real challenge,” he says.
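The arithmetic behind that metaphor is easy to sketch. The toy calculation below uses assumed, illustrative rates (not figures from any vendor) to show why even a very accurate detector swamps its operators with false alarms when genuine threats are rare:

```python
# Back-of-the-envelope base-rate arithmetic with assumed, illustrative rates.
events = 1_000_000          # events scanned per day
true_threats = 5            # genuinely malicious events among them (assumed)
false_positive_rate = 1e-4  # detector wrongly flags 0.01% of benign events
detection_rate = 0.99       # detector catches 99% of real threats (assumed)

false_alarms = (events - true_threats) * false_positive_rate
true_alarms = true_threats * detection_rate
total_alerts = false_alarms + true_alarms

print(f"Alerts per day: {total_alerts:.0f}")                     # ~105
print(f"Share that are real: {true_alarms / total_alerts:.1%}")  # ~4.7%
# The haystack shrank from a million events to about a hundred alerts,
# but a human analyst still has to find the needle among them.
```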
When AI Meets Human Error
To balance the conversation around AI’s potential and its pitfalls, it’s essential to consider the voices of AI detractors, without letting the pendulum swing too far in the opposite direction.
Take facial recognition, for instance — a technology that has stirred significant concern among civil liberties advocates. While it’s often heralded as a tool for crime prevention and identification, it has also led to multiple false arrests in the United States, particularly among African Americans. This fact alone has fueled calls for a complete ban on its use by law enforcement, with critics arguing that the technology is inherently biased and prone to error.
Soliman offers a more sober appraisal of the situation. “Accuracy’s tricky—overfit models miss threats, undertrained ones spam alerts. Bias is a minefield; feed it skewed crime stats, and it’ll unfairly target certain groups,” he says.
Yet there is an important twist to this story. The false arrests connected to facial recognition aren’t just a result of AI errors but also of a cascade of human failures within the police system. For instance, one person was wrongfully arrested for shoplifting largely on the strength of a security contractor’s testimony, even though the contractor wasn’t present during the alleged crime. Another shoplifting arrest was a direct consequence of poor investigative work. These incidents illustrate that the problems aren’t rooted solely in AI technology; they are a product of flawed human processes and misjudgments within law enforcement.
Thus, while AI may be imperfect, it’s important not to let the debate over its failings obscure the larger picture. Policing errors led to wrongful arrests long before AI systems entered the picture, and they will likely continue, with or without facial recognition.
A Perpetual Cycle of AI Springs and Winters
The Antioch High School tragedy reveals the risks of over-relying on AI’s promises, but such disappointments are not new: AI’s history is a rollercoaster of exaggerated claims and humbling setbacks.
The Gartner hype cycle shows how new technologies often start with big hype, then face disappointment, before slowly improving and becoming truly useful over time. While many disruptive technologies like cryptocurrency follow this pattern, artificial intelligence has experienced something more cyclical—alternating between periods of intense enthusiasm (“springs”) and subsequent disappointments (“winters”).
There is one fundamental difference between AI and crypto hype: Despite all the inflated claims, AI has demonstrated genuinely socially beneficial uses, whereas cryptocurrency remains largely a solution in search of a problem.
Arshad of AxxonSoft Middle East doesn’t mince words when it comes to AI misconceptions: “Believe me, we get some truly bizarre requests from clients — the kind that make us stop and wonder: who told them these fairy tales in the first place?”
He’s clear about where the disconnect lies: AI isn’t a crystal ball or a mind reader. It can’t infer intent just from pixels — a person standing still might be loitering or simply waiting for a ride. Without context, AI can’t always decide.
Another major gap, he notes, is the myth of plug-and-play performance. AI video analytics doesn’t just “work” out of the box; it needs careful tuning, adjusting for camera angles, lighting, and environmental variables. “It’s not just about the algorithm,” he explains. “It’s about the ecosystem it lives in.”
Still, Arshad sees real progress. Today’s AI systems are far more resilient, filtering out shadows, detecting objects in crowds, and adapting to harsh conditions. But he’s quick to add: “There’s still no ‘easy button.’ The real challenge isn’t just building smarter AI — it’s aligning expectations.”
But why are these expectations skewed in the first place? One of the suspects is the dominant role of corporate funding. Modern AI technologies like large language models incur enormous development costs in both hardware and researcher time. This has shifted power toward corporations like Google, Meta, and OpenAI, which can afford these investments.
“AI often meets expectations when applied thoughtfully, but it doesn’t always live up to the hype portrayed in marketing materials. While AI is definitely powerful, its success depends on data quality, integration with existing systems, and human oversight,” says Saif AlRefai, Solution Engineering Manager at OPSWAT.
This is echoed by Hans Kahler, Chief Operating Officer at Eagle Eye Networks, who cautions that such hype mirrors the historical pattern of AI springs, stating, “Some vendors overpromise, suggesting AI can do everything. That’s hype, that’s not reality.”
Similarly, Arshad notes that while AI is advancing, its real-world performance is more complex than marketed, adding, “Marketing tends to sell dreams, while AI delivers something a little messier — reality.”
This corporate influence has prioritized engineering breakthroughs over scientific understanding. Companies value improvements that can be folded into profitable products more than a deeper comprehension of why AI techniques work. The result is a focus on beating benchmarks rather than building verifiable knowledge from real-life field testing.
Dr. Arijana Trkulja, Head of Cyber Security Center of Excellence at Ingram Micro in Dubai, also highlights the gap between AI’s marketing promises and its real-world applications. While AI has been marketed as a game-changer offering unprecedented automation and accuracy, it often stumbles over poor data quality, the non-negotiable need for human oversight (e.g., in legal or healthcare settings), and ethical risks.
“Full autonomy is still a myth for most use cases,” Trkulja notes, emphasizing the need for continuous human input, fine-tuning, and monitoring for AI systems to remain effective. She encourages businesses to adopt a balanced perspective: “When implemented thoughtfully, with clear objectives and data preparedness, AI can be a powerful enabler of efficiency, innovation, and competitive advantage.”
Why AI Succeeds and Fails
While AI’s hype cycles fuel lofty expectations, understanding why the technology excels in some tasks and fails in others is key to separating fact from fiction.
Facial recognition AI is a prime example of how AI can work remarkably well in some cases and fail in others, depending on the nature of the task at hand.
When used for face identification, AI tends to be highly accurate because there is little uncertainty or ambiguity involved. The technology is trained on vast databases of photos and labels, allowing it to determine whether two photos represent the same person. Given enough data and computational resources, AI learns the patterns that differentiate one face from another, making it very effective in controlled environments where the information needed to make a determination is embedded directly in the images. In this context, facial recognition excels because the task is clear-cut.
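To make this concrete: modern identification pipelines typically reduce each photo to a numeric “embedding” and compare distances between them. The minimal sketch below illustrates that verification step, assuming hypothetical embedding vectors and an illustrative threshold; it is not any specific vendor’s pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    # A trained network (the embedding step, omitted here) maps each photo
    # to a vector so that photos of the same person land close together.
    # Verification then becomes a simple, well-posed distance check, which
    # is why identification can be so accurate given enough training data.
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy stand-ins for real embedding vectors:
alice_1 = np.array([0.9, 0.1, 0.3])
alice_2 = np.array([0.85, 0.15, 0.28])
bob = np.array([-0.2, 0.9, 0.1])

print(same_person(alice_1, alice_2))  # True: vectors nearly parallel
print(same_person(alice_1, bob))      # False: vectors point apart
```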
However, this accuracy breaks down when AI is asked to perform more complex tasks, particularly when it involves prediction rather than identification. Predicting dangerous behavior, for instance, is a far more nuanced and challenging task. It doesn’t just require recognizing a face but also involves making an assessment about a person’s intent or future actions—something that is inherently uncertain and difficult to measure. “AI still needs context, and that often means a human in the loop. AI can recognize a person entering a restricted area, but it can’t always understand intent or nuance. It’s powerful, but it will always need human oversight and judgment,” says Kahler.
When AI is tasked with predicting who might be a dangerous individual, such as anticipating violent behavior or identifying potential threats based on facial features or past data, it is essentially guessing personal traits like emotional state or gender identity. These tasks are much more prone to error because facial expressions or physical features don’t reliably convey these deeper, more subjective qualities.
Nonetheless, there is room for cautious optimism in both the cyber and physical domains. “In physical security, predictive policing stands out. Across both, anomaly detection is the real star—whether it’s a hacker or a trespasser, AI’s great at sniffing out the weird stuff,” says Soliman.
Yet, it’s clear that while AI-powered facial recognition systems are a powerful tool for identifying individuals, their predictive capabilities are much less reliable and prone to significant mistakes. This underscores the importance of understanding AI’s limits and resisting the temptation to view it as a flawless, all-encompassing solution.
As an answer to this dilemma, Arshad offers a grounded view of AI’s role in security, shaped by years of deployment in real-world environments. “AI doesn’t ‘understand’ the world like a human does,” he says, “but in the right hands, with the right data and infrastructure, it’s a tool that consistently delivers real value.”
For him, the most effective AI applications aren’t flashy — they’re practical, cutting false alarms, accelerating investigations, and helping operators focus on what truly matters. He sees AI-powered video analytics, from real-time object detection to intelligent search, as a core force reshaping physical security. These tools are no longer experimental but essential, reducing reliance on manual monitoring and making surveillance systems smarter and faster. “Does AI live up to the marketing?” Arshad asks. “Only if the marketing is smart enough to recognize that intelligence—artificial or otherwise—is as much about asking the right questions as it is about getting the right answers.”
Why Knowing What’s Under the Hood of AI Matters
The uneven performance of AI, from precise facial recognition to flawed predictions, shapes its role in modern security systems, where narrow intelligence drives both innovation and constraint.
Amid the buzz, the best way to come to terms with the more nuanced reality of AI’s performance in security is to understand how it works.
Arshad, for example, brings a practical lens to AI’s role in security. “We speak with confidence about the impact of AI in physical security because that’s where we live and breathe,” he says, emphasizing that the company’s insights come from field deployment, not theory.
According to him, AI has moved from hype to hands-on utility, particularly in video surveillance. Tools like AxxonSoft’s Axxon One VMS now enable real-time detection of threats such as intrusions, loitering, and crowd formation. “These aren’t just cool features — they cut investigation time from hours to minutes,” he notes. What makes these systems effective isn’t just their capabilities but their adaptability: AI filters out irrelevant data, flags genuine threats, and empowers security teams with what Arshad calls “superhuman vision and speed.”
Security AI tends to follow two approaches. Symbolic AI operates on rigid rules and pre-programmed commands that dictate, for example, when an alarm should be triggered. Statistical AI, on the other hand, uses probability calculations to learn patterns from data. A facial recognition system scanning an airport terminal for a wanted individual is a good example of statistical AI at work.
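The difference is easiest to see side by side. The sketch below contrasts a hand-written symbolic rule with a simple statistical score applied to the same kind of event; the rule, the threshold, and the historical data are illustrative assumptions rather than production logic:

```python
from statistics import mean, stdev

# --- Symbolic AI: an explicit, pre-programmed rule ---
def symbolic_alarm(door_forced: bool, after_hours: bool) -> bool:
    # Fires if and only if the hand-written condition is met.
    return door_forced and after_hours

# --- Statistical AI: a score derived from historical data ---
historical_daily_entries = [42, 38, 45, 40, 44, 39, 41]  # illustrative log

def statistical_alarm(todays_entries: int, z_threshold: float = 3.0) -> bool:
    # Flags today as anomalous if it deviates strongly from the pattern
    # "learned" from past observations (a simple z-score standing in
    # for a trained probabilistic model).
    mu = mean(historical_daily_entries)
    sigma = stdev(historical_daily_entries)
    return abs(todays_entries - mu) / sigma > z_threshold

print(symbolic_alarm(door_forced=True, after_hours=True))  # True: rule matched
print(statistical_alarm(90))  # True: far outside the learned pattern
```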
Based on this, Panayiotis Kapiniaris, Global Sales Director at Monitoreal, emphasizes how AI can shift surveillance from reactive to proactive, stating, “AI has significantly enhanced physical security, particularly in video surveillance systems. Traditional CCTV was largely reactive, allowing users only to review incidents after they occurred. At best, live monitoring enabled operators in control centers to respond to real-time events, but still in a reactive manner.”
Despite their increasing sophistication, today’s AI-based security systems remain limited in scope. Machine learning in security is often confined to basic forms of video analytics, biometric system management, and drone operations. Most systems can only interpret what they have been specifically trained to recognize and may fail when encountering unknown or novel threats. In line with this, Yamout from Kaspersky stresses that AI’s value lies in supporting, not supplanting, human operators, noting that “most of what’s called AI today is actually advanced machine learning. AI in cybersecurity is not about full automation or replacing human expertise and expert work—it’s about enhancing their capabilities with data-driven insights.”
Machine learning (ML) is particularly adept at automating repetitive tasks, such as identifying patterns in data and adapting to changes. It excels at organizing this data into a clear and digestible format. However, as Yamout points out, the interpretation of that data remains a human responsibility. He explains that while AI and ML have already proven their value in physical security, being used in technologies like biometric authentication and CCTV systems, they continue to play a critical role in cybersecurity as well. “AI/ML has proven it can support humans in scaling the automation to analyze a large number of events,” Yamout adds, highlighting its ability to enhance efficiency without replacing human judgment.
As AI evolves, the future promises greater predictive power, deeper integration of security functions, and smarter automated responses. Yet with these advancements comes heightened risk. If standards, ethical frameworks, and robust testing procedures do not evolve at the same pace, organizations could find themselves more vulnerable, not less. For security professionals, the lesson is clear: AI is a formidable tool, but it must be wielded with expertise, caution, and above all, an understanding of its limits.
Getting the Most from AI in Security
AI and ML are powerful tools that can rapidly process large volumes of data and detect patterns that may indicate security threats. In Physical Access Control Systems (PACS), for example, AI can track entry and exit patterns, identifying anomalies that could suggest unauthorized access attempts.
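As a minimal illustration of that PACS idea, the sketch below flags a badge swipe that falls outside an employee’s usual hours. The log and the hour-based heuristic are hypothetical stand-ins for what a deployed system would learn from months of real data:

```python
from collections import defaultdict

# Hypothetical badge-swipe log: (employee_id, hour_of_day 0-23)
access_log = [
    ("emp01", 9), ("emp01", 9), ("emp01", 10), ("emp01", 8),
    ("emp02", 14), ("emp02", 15), ("emp02", 13),
    ("emp01", 3),   # an unusual 3 a.m. entry
]

# "Learn" each employee's typical hours from history (a crude stand-in
# for a model trained on months of logs).
typical_hours = defaultdict(set)
for emp, hour in access_log[:-1]:
    typical_hours[emp].add(hour)

def is_anomalous(emp: str, hour: int, slack: int = 1) -> bool:
    # Flag the swipe if it isn't within `slack` hours of anything
    # previously observed for this employee.
    return all(abs(hour - h) > slack for h in typical_hours[emp])

emp, hour = access_log[-1]
if is_anomalous(emp, hour):
    print(f"ALERT: {emp} badge-in at {hour}:00 is outside their normal pattern")
```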
Trkulja notes that AI’s integration into access control systems improves both security and user experience, stating, “AI enhances access security by enabling multi-factor authentication and biometric verification such as fingerprint, voice, and facial recognition.” Similarly, Arshad emphasizes AI’s transformative impact on video surveillance, adding, “Real-time object detection, behavior analysis, and facial and number plate recognition have moved from experimental to essential.” In video surveillance, AI can analyze footage in real time, detecting movements, identifying objects, and flagging potential security breaches.
These technologies support security operations by increasing speed, scale, and consistency. They reduce the burden on human operators by filtering information and surfacing potential threats, enabling faster decision-making and response. A human-in-the-loop approach further enhances this value. By combining AI with trained personnel, teams can filter out irrelevant data and apply real-world context to complex situations. This collaboration improves monitoring capabilities and enables more accurate judgments.
Trkulja highlights several gaps between AI’s promises and its real-world performance in security. While AI has been widely adopted with the promise of revolutionizing threat detection and risk management, its actual capabilities often fall short. “False positives remain a major challenge,” she notes, as many systems lack the contextual awareness to differentiate between genuine threats and benign anomalies. Moreover, “marketing often promotes AI as a fully autonomous solution,” yet in practice, human intervention remains essential.
The accuracy of AI systems is another concern. “Inaccuracies slow down incident response and create alert fatigue for security teams,” Trkulja explains. Bias in AI models, particularly in sensitive areas like facial recognition, further complicates their effectiveness and introduces ethical risks. Additionally, “AI systems often rely on massive volumes of personal data,” raising privacy concerns, especially in regions with strict regulations like GDPR. Integration challenges also persist, as “promised ‘plug-and-play’ solutions often require heavy customization.”
To fully realize AI’s potential in security, Trkulja advises businesses to invest in data readiness, continuous model training, and skilled oversight. Only by addressing these challenges can AI reach its full capacity in transforming the security landscape.
Incorporating human verification into security operations ensures that alerts and anomalies are interpreted correctly. Security professionals can quickly determine whether a detected behavior represents a real threat or a harmless action, maintaining operational focus and avoiding unnecessary disruptions. Human operators bring context, experience, and adaptability—traits AI does not replicate—ensuring security teams respond to legitimate threats swiftly and accurately.
AI Surveillance Market Heats Up
As AI fuels the evolution of security solutions, a booming global market is reshaping the industry, propelled by technological advancements and evolving geopolitical dynamics.
The AI surveillance market, valued at USD 3.90 billion in 2024, is projected to reach USD 12.46 billion by 2030, growing at a 21.3% CAGR, fueled by rising public safety demands and smart city initiatives. The Asia Pacific region, particularly India, leads this transformation, with investments in digital infrastructure and urban security driving adoption.
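Those figures are internally consistent, as a quick compound-growth check confirms:

```python
# Sanity-check the projection: does USD 3.90B compound to ~USD 12.46B
# over the six years from 2024 to 2030 at a 21.3% CAGR?
base, cagr, years = 3.90, 0.213, 2030 - 2024
projected = base * (1 + cagr) ** years
print(f"USD {projected:.2f} billion")  # ~12.42, matching the cited 12.46
```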
Panayiotis Kapiniaris of Monitoreal underscores the importance of addressing privacy in AI deployments, noting, “Data privacy remains the most critical challenge.”
In addition, U.S.-China tariffs have increased costs for Chinese-made equipment, pushing demand toward European and Southeast Asian alternatives and emphasizing solutions compliant with data sovereignty laws. As AI surveillance evolves, it’s reshaping urban safety and smart homes, but balancing innovation with affordability and ethics remains critical.
While market growth highlights AI’s surveillance potential, its broader security applications reveal both transformative benefits and daunting risks.
Key Pain Points in AI-Driven Security Systems
While AI has undeniably revolutionized physical security, offering significant advancements in areas like surveillance, access control, and threat detection, many of the promises surrounding these technologies are often overstated, and current limitations must be carefully considered.
One of the most overhyped aspects of AI in physical security is the idea that it can completely replace human security personnel. “No way. AI’s a data-crunching beast, but it’s got no judgment. A cyber tool might flag a login from China, but a human knows the CEO’s traveling,” says Soliman.
While AI can automate routine tasks and assist in decision-making, it cannot fully replicate human judgment. Complex scenarios that require contextual understanding and nuanced decision-making still necessitate human oversight. AlRefai from OPSWAT highlights a key limitation, noting that AI’s lack of transparency undermines trust in its decision-making, stating, “AI systems can make accurate predictions, but they often operate as ‘black boxes,’ making it hard for teams to understand or validate their decisions.”
The same concerns about the “black box” problem are shared by Yamout, who describes it as instances in which AI makes decisions that are hard to explain. “AI recommendations or conclusions need to be explainable, especially in critical sectors,” he stresses.
AI systems, although powerful in their ability to process large volumes of data, are only as good as the information they are trained on. This means that biases and gaps in data can lead to inaccurate outcomes. Furthermore, AI is vulnerable to adversarial attacks, where malicious inputs can deceive the system, undermining its effectiveness.
Many are unaware of this because of a common misconception that AI systems are inherently secure. In reality, AI technologies are susceptible to the same cybersecurity risks as other digital systems, including data poisoning and unauthorized access. Securing AI systems requires robust cybersecurity measures to protect both the technology and the data it processes.
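A toy example makes the adversarial risk tangible. Below, a hand-rolled linear “threat score” (purely illustrative weights, not any real product’s model) is evaded by nudging each input feature against the sign of its weight, the linear-model analogue of the well-known fast gradient sign method:

```python
import numpy as np

# Toy linear "threat detector": score > 0 means raise an alarm.
# Weights are illustrative; a real model would be learned from data.
w = np.array([1.5, -0.8, 2.0])
b = -1.0

def raises_alarm(x: np.ndarray) -> bool:
    return float(w @ x + b) > 0

x = np.array([1.0, 0.2, 0.9])   # a genuinely suspicious input
print(raises_alarm(x))           # True: the detector fires

# Evasion: push each feature slightly against the sign of its weight,
# so small, targeted changes drive the score below the alarm threshold.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
print(raises_alarm(x_adv))       # False: the perturbed input slips through
```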
False positives also pose the risk of desensitizing security personnel. Soliman points out that persistent accuracy issues in AI systems can overwhelm security teams, stating, “False positives are a pain; there are still considerable error rates in some cyber tools.” An overwhelmed team may begin to ignore or delay responses, increasing the chance of missing genuine threats. Excessive alerts can lead to slower reaction times and, in some cases, critical incidents going unnoticed.
Another pain point is the fact that the effectiveness of AI in physical security is closely tied to the quality of data it receives. Many security systems rely on large volumes of labeled data, and if this data is inaccurate or insufficient, it can significantly impact AI performance.
Integrating AI technologies with legacy security systems presents another challenge. Many organizations struggle to make new AI solutions compatible with existing infrastructures, which can lead to inefficiencies or costly overhauls. Without proper configuration and data management, new technologies may introduce blind spots. Kahler confirms the poor state of much existing infrastructure: “Many systems are still closed or outdated, which limits what AI can do. That’s why open platforms matter—they allow for integration and continuous evolution as AI capabilities grow.”
Furthermore, Kapiniaris says that some of the misunderstandings stem from a lack of familiarity with the real capabilities of current-generation hardware. “For instance, accuracy is often influenced more by hardware limitations than by AI itself. It’s important for the market to recognize that poor performance is frequently due to low-quality equipment or improper CCTV installation—not a flaw in the AI,” he says.
AI also requires ongoing maintenance and skilled personnel to manage and update the systems. Without proper oversight, AI systems can become inefficient or even counterproductive. Environmental factors, such as poor lighting or extreme weather conditions, can further limit AI’s ability to accurately detect and analyze threats.
AlRefai points out that deploying AI in security comes with a host of challenges beyond just technology. “Organizations face many struggles,” he explains, “including technical challenges like ensuring model accuracy, minimizing bias, and maintaining data privacy, as well as compliance with regulatory frameworks.” The lack of experienced professionals adds to the burden, with many security teams still unprepared to manage AI tools effectively. “Ensuring ethical use and safeguarding against adversarial manipulation are ongoing concerns,” AlRefai adds, underscoring the need for both technical and human oversight in any AI-driven security deployment.
AI Moves Beyond Security
Beyond security, AI is revolutionizing industries by predicting risks and optimizing operations. In healthcare, algorithms identify at-risk patients to prevent falls, reducing hospital admissions. In retail, video analytics refine store layouts and enhance customer experiences by analyzing foot traffic and dwell times. Smart cities leverage AI for crowd safety and resource allocation, integrating surveillance with civic platforms.
Trkulja notes AI’s impact in retail and finance, where it strengthens security beyond traditional surveillance, adding, “Financial institutions and e-commerce platforms use AI to monitor transactions and detect signs of fraud in real time.”
Hybrid cloud-edge systems enable this transformation, balancing real-time analytics at the edge with deep analysis in the cloud. Open-source AI models accelerate customization, making these solutions accessible across sectors. To maximize benefits, organizations must invest in training for ethical AI use and privacy best practices, ensuring seamless integration with existing infrastructure. As AI drives efficiency in healthcare, retail, and urban management, its potential is vast—but only if paired with accountability and robust data management to mitigate risks and build trust.
For Kahler, AI is no longer just a buzzword — it’s a practical tool already reshaping security operations through real, measurable improvements. “AI is solving real problems today,” he says, pointing to one of the industry’s long-standing challenges: reviewing video efficiently. Instead of manually scrubbing through hours of footage, AI-powered Smart Video Search lets users “type in a search term — just like on the internet — and instantly find relevant results.” This not only saves time but also allows for faster, more targeted responses. Kahler also highlights real-world use cases like license plate recognition (LPR), AI alerts that flag abnormal behavior, and analytics that help monitor traffic patterns or safety compliance. These aren’t futuristic features — they’re already in the field. “It transforms the way security teams operate,” he adds, making AI not a distant promise but a practical force simplifying operations and improving outcomes right now.
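Under the hood, such search features generally amount to indexing detector-generated labels for each clip or frame and querying them like text. The sketch below is a generic illustration of that pattern using assumed data, not Eagle Eye’s actual implementation:

```python
from collections import defaultdict

# Hypothetical per-frame labels produced upstream by an object detector.
frame_labels = {
    "cam1_t0930": ["person", "backpack"],
    "cam1_t0931": ["person", "white van"],
    "cam2_t1410": ["forklift", "person"],
    "cam2_t1822": ["white van"],
}

# Build an inverted index: label -> frames containing it.
index = defaultdict(set)
for frame, labels in frame_labels.items():
    for label in labels:
        index[label].add(frame)

def search(query: str) -> set:
    """Return frames whose labels contain every term of the query."""
    terms = query.lower().split(" and ")
    results = [index[t.strip()] for t in terms]
    return set.intersection(*results) if results else set()

print(search("white van"))             # {'cam1_t0931', 'cam2_t1822'}
print(search("person and white van"))  # {'cam1_t0931'}
```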
From Promise to Peril to Progress
As AI reshapes everything from smart cities to retail analytics, the challenge remains: how do we harness its potential while avoiding the pitfalls of hype?
Artificial intelligence in security is neither a panacea nor a fraud—it’s a powerful tool, fraught with both opportunity and risk. From the tragic failure at Antioch High School to the predictive prowess reshaping surveillance and beyond, AI’s story is one of dazzling highs and sobering lows.
Historical hype cycles remind us that today’s enthusiasm may precede another winter, yet real progress persists in narrow applications like facial recognition and predictive analytics. As markets boom and AI extends into healthcare, retail, and smart cities, the challenge is to temper excitement with scrutiny, ensuring systems are transparent, ethical, and resilient. Security professionals, policymakers, and businesses alike must prioritize human oversight and robust standards to harness AI’s potential without falling prey to its hype.
Kahler of Eagle Eye Networks reinforces this view, emphasizing AI’s role as a supportive tool that enhances, rather than supplants, human judgment, stating, “AI will not—and should not—replace humans. It’s a powerful assistant, not a decision-maker.”
In agreement, Kapiniaris of Monitoreal argues that AI’s true value lies in augmenting human capabilities, adding, “It might sound overly optimistic, but I’m confident that the current level of AI already delivers remarkably reliable performance. The challenge isn’t a gap in capability, but rather the time required for both the technology and human side to develop the necessary culture, infrastructure, and habits. Only then can they truly integrate and operate in seamless harmony.”
In the end, AI’s place in security won’t be decided by algorithms alone, but by the values and judgment we bring to its use. The future lies not in replacing human insight, but in amplifying it—thoughtfully, ethically, and with eyes wide open. Real progress will come not from chasing the next breakthrough, but from building trust, clarity, and accountability into every system that gets deployed in the field.
AI in the Middle East: Between Vision and Reality
As the global conversation on artificial intelligence intensifies, the Middle East, Turkey, and Africa (META) region is emerging as one of the most ambitious adopters of AI in security. According to recent industry data, 46% of end users in this region plan to integrate AI and machine learning by 2025—the highest rate worldwide. Furthermore, 37% of security decision-makers reported increased budgets for 2024, significantly above the global average. This surge in funding reflects not just a hunger for innovation but a belief in AI as a pillar of modernization and resilience. In addition, 80% of end users in the region have been affected by data privacy regulations, driving a strong push for secure data storage and cybersecurity education.
“AI/ML is an integral part of smart automation at scale,” says Maher Yamout, Lead Security Researcher at Kaspersky’s Global Research & Analysis Team (GReAT). “In Middle Eastern countries, we see it in smart services, law enforcement, cybersecurity, and smart city infrastructure. The need is strategic, but the global race for relevance amplifies it.”
This dual motivation—strategic need and global competitiveness—was echoed across expert interviews. According to Yamout, “Since AI/ML is an integral part of smart automation, there’s a genuine need for it globally, including in the Middle East. But especially in countries aiming to be competitive on the global scene.”
Mohammed Soliman, Senior Fellow at the Middle East Institute, agrees on the rising demand in the region: “The strategic need is real in the UAE’s smart cities, like Dubai’s AI traffic systems, and Gulf states guarding oil assets show that cyber threats, like Iran-linked hacks, push it too.”
Government initiatives like Saudi Arabia’s Vision 2030 and the UAE’s National AI Strategy 2031 are more than symbolic. Saudi Arabia’s National Strategy for Data and AI (NSDAI) aims to generate $135 billion in AI-driven GDP impact by 2030, according to PwC Middle East, reflecting a commitment to economic diversification and technological leadership. Across the region, AI is expected to contribute over $320 billion to the economy by 2030, underscoring the scale of investment and ambition.
Hans Kahler of Eagle Eye Networks highlights the Middle East’s rapid adoption of AI surveillance, driven by digitization and demand for scalable solutions: “Eagle Eye Networks has seen firsthand how demand for AI-powered security is accelerating in the Middle East. Our recent investment in a new data center in the Kingdom of Saudi Arabia is a direct response to the digitization push.”
However, the region is not immune to the global wave of AI hype. Yamout draws a comparison to past overhyped technologies: “It’s somewhat a déjà vu with the AI hype and previous tech such as blockchain, where everyone wanted to insert blockchain in a task it’s not meant to. We must ensure proper education and awareness on the AI/ML capabilities, realistic expectations, and limitations.”
For sustainable and inclusive AI implementation, regional experts stress long-term thinking over short-term marketing.
Dr. Arijana Trkulja, Head of the Cyber Security Center of Excellence at Ingram Micro in Dubai, offers a strategic vision for the future of artificial intelligence in the Middle East. She identifies five crucial areas for development and implementation:
“Countries must prioritize not only vision-setting but also execution by aligning AI goals with clear metrics such as economic output, public service improvements, and job creation,” says Trkulja. She emphasizes that national AI strategies, like Saudi Arabia’s NSDAI, must focus on measurable impact to ensure tangible benefits, such as the projected $135 billion contribution to GDP by 2030.
Trkulja stresses the importance of education and talent development, urging governments and businesses to invest in AI-focused curricula at schools and universities, as well as in vocational training and certification programs to address the region’s shortage of skilled AI professionals.
She also highlights the need for strong ethical and regulatory frameworks, cautioning that without proper oversight, AI could exacerbate existing inequalities or enable misuse. “Countries like Egypt and Jordan are just beginning to draft their frameworks, but it’s critical that others follow suit,” she notes.
In terms of addressing regional disparities, Trkulja advocates for strategies that bridge urban-rural and sectoral gaps. “To avoid a digital divide, strategies must address underserved areas and industries.” She suggests applying AI in agriculture to enhance food security in rural North Africa, optimizing crop yields and improving resource management; leveraging AI-powered language tools to support Arabic content and services, expanding access to digital resources; and supporting AI start-ups through inclusive funding and incubators to foster innovation.
Finally, she calls for greater regional cooperation. GCC countries should coordinate AI research centers, cross-border projects, and joint regulatory frameworks. By working together, these nations can harness the full potential of AI to drive innovation and economic growth across the region.
Soliman pushes for similar pragmatism: “They’ve got to get practical. Use local data—sandstorm-blurred cameras, Arabic phishing scams—not Western hand-me-downs. Train people, not just buy tech. Few Arab STEM grads do AI—that’s a gap. And set rules early—privacy and accountability laws will dodge Europe’s mess. Less flash, more follow-through.”
Yamout reinforces this. “By educating ourselves on the different use cases and potential implementations,” he notes, “we can gradually use the tech to our advantage, and at the right place.”
As budgets rise and pilot projects multiply, the META region stands at a crossroads. AI is no longer a future promise—it is a present imperative. But as with all tools of power, how it is wielded will matter more than how much is spent.