Emerging threats in mobile AI: What businesses need to know

Written by Morten Ruud | Sep 25, 2025 7:18:04 AM

We are witnessing the rapid integration of AI into mobile applications, from customer-facing assistants to intelligent automation. AI-driven mobile apps are becoming core to mobile UX and business logic. But they also bring new, often overlooked, attack surfaces.

As AI becomes more embedded, the mobile app threat surface expands well beyond what traditional mobile security was designed to handle. This blog post unpacks what security leaders need to consider as they adopt on-device AI, and why mobile AI security must evolve to meet emerging AI security threats.

Why mobile AI is on the rise 

AI is no longer confined to the cloud. Businesses are moving toward embedding AI directly into mobile apps. The shift from cloud-based AI to on-device AI—including small language models (SLMs), decision engines, multimodal AI, and federated learning—is driven by:

  • The need for real-time decision-making and low-latency performance 
  • Stronger privacy demands and offline functionality 
  • Lower cloud processing costs and reduced data transmission

More reading: The future of AI in cybersecurity: Why nothing really changes

From AI-powered mobile apps in banking, healthcare, and gaming to on-device AI assistants in smart devices, embedded AI is transforming mobile workflows across industries. But this evolution introduces entirely new mobile app AI risks. Most threat models haven’t caught up to this reality, so many apps today aren’t protected against AI-specific runtime threats. And the closer AI gets to the device edge, the more exposed it becomes to manipulation, extraction, and misuse.

“The global on-device AI market size was estimated at USD 8.60 billion in 2024 and is projected to reach USD 36.64 billion by 2030, growing at a CAGR of 27.8% from 2025 to 2030. The industry is driven by progress in AI technologies, growing demand for real-time data processing, and heightened concerns around privacy and security.” [1] 

New threat surfaces: What’s at risk 

As businesses and organizations embrace embedding AI into mobile apps, they must understand the novel threats and risks it introduces.

Runtime manipulation and behavior tampering

Attackers can manipulate app memory at runtime to alter AI logic or bypass internal guardrails. For example, in a healthcare app, an attacker might modify AI logic to prioritize certain treatments for malicious or biased reasons. In a lending app, AI runtime manipulation could be used to force approval of high-risk or fraudulent applications.
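To make this concrete, one basic countermeasure is to verify the loaded model against a digest pinned at build time before acting on its output. The Kotlin sketch below is a minimal, hypothetical illustration; `ModelIntegrityGuard`, `EXPECTED_DIGEST`, and `runInference()` are illustrative names, and real runtime protection layers far more checks than this.

```kotlin
import java.security.MessageDigest

// Minimal sketch: detect tampering of a loaded model buffer by comparing its
// SHA-256 digest against a value pinned at build time. EXPECTED_DIGEST and
// runInference() are hypothetical; production RASP does far more than this.
object ModelIntegrityGuard {
    // Digest of the shipped model, computed in the release pipeline.
    private const val EXPECTED_DIGEST = "replace-with-build-time-sha256-hex"

    fun verify(modelBytes: ByteArray): Boolean {
        val actual = MessageDigest.getInstance("SHA-256")
            .digest(modelBytes)
            .joinToString("") { "%02x".format(it) }
        return actual == EXPECTED_DIGEST
    }
}

fun runInference(modelBytes: ByteArray, input: FloatArray): FloatArray {
    // Refuse to act on a model that no longer matches what was shipped.
    check(ModelIntegrityGuard.verify(modelBytes)) { "AI model tampering detected" }
    TODO("hand modelBytes and input to the inference runtime")
}
```

A check like this only catches modification of the model bytes themselves; memory-level manipulation of the surrounding logic needs broader runtime protection.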

AI model theft and reverse-engineering 

AI models stored within mobile apps are valuable intellectual property. Without proper AI IP protection, attackers can extract, clone, or resell these models. This leads to loss of competitive advantage and may also trigger licensing violations or compliance risks.
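A common mitigation is to ship the model encrypted and decrypt it only in memory at load time, so the file packaged in the app is not directly reusable. The Kotlin sketch below is hypothetical; key handling is deliberately simplified and would in practice use a hardware-backed keystore.

```kotlin
import javax.crypto.Cipher
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.SecretKeySpec

// Sketch: decrypt an AES-GCM-encrypted model file entirely in memory, so the
// bytes packaged in the APK/IPA are not directly loadable by an attacker.
// Key handling is simplified here; use a hardware-backed keystore in practice.
fun decryptModel(encrypted: ByteArray, key: ByteArray, iv: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, SecretKeySpec(key, "AES"), GCMParameterSpec(128, iv))
    return cipher.doFinal(encrypted) // plaintext model never touches disk
}
```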

More reading: Breaking & defending mobile apps: Prevent reverse engineering in the age of AI

Prompt injection attacks and data abuse 

Apps using language models or dynamic inputs are particularly susceptible to prompt injection attacks. Malicious users can hijack input/output flows to corrupt model behavior, exfiltrate data, or trigger unintended decisions. This undermines AI data integrity and user trust. 
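As a rough illustration, an app can screen prompts before they ever reach the embedded model. The Kotlin sketch below is hypothetical and deliberately simple; pattern filtering alone is a weak first line of defense, and `promptModel()` stands in for whatever inference call the app actually uses.

```kotlin
// Hypothetical input guardrail: reject prompts that try to override the
// model's instructions before they reach the on-device SLM. Pattern matching
// is only a first line of defense; promptModel() is a placeholder.
private val injectionPatterns = listOf(
    Regex("(?i)ignore (all|previous) instructions"),
    Regex("(?i)reveal (the )?system prompt"),
)

fun sanitizePrompt(userInput: String): String? =
    if (injectionPatterns.any { it.containsMatchIn(userInput) }) null
    else userInput.take(2_000) // cap length to limit abuse

fun answer(userInput: String): String {
    val safe = sanitizePrompt(userInput) ?: return "Sorry, I can't help with that."
    return promptModel(safe)
}

fun promptModel(prompt: String): String = TODO("embedded model inference call")
```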

In-app data manipulation 

Many embedded AI systems depend on local configuration files or decision trees. If these internal data sources are not properly secured, attackers can alter them to manipulate AI behavior or gain unauthorized access. 
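One way to harden these sources is to authenticate local configuration with a message authentication code, so silent edits are detected before the AI component reads them. The Kotlin sketch below is a hypothetical example; key management is again simplified.

```kotlin
import java.security.MessageDigest
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Sketch: authenticate a local decision-tree/config file with HMAC-SHA256 so
// offline edits are detected before the AI component consumes the data.
fun verifyConfig(configBytes: ByteArray, expectedMac: ByteArray, key: ByteArray): Boolean {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(key, "HmacSHA256"))
    // Constant-time comparison avoids leaking how many bytes matched.
    return MessageDigest.isEqual(mac.doFinal(configBytes), expectedMac)
}
```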

Malicious code execution in AI runtime 

If the AI runtime environment isn’t hardened, attackers can inject or execute malware, gaining persistent access or compromising the system more broadly. This is a growing concern in AI-powered mobile apps operating offline or in untrusted environments. 
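By way of illustration only, even a trivial environment check can raise the bar before the AI runtime starts. The Android-specific Kotlin sketch below checks a single signal, an attached debugger; real runtime protection correlates many more, such as hooking frameworks, emulators, and root.

```kotlin
import android.os.Debug

// Minimal, single-signal sketch: refuse to start the AI runtime while a
// debugger is attached. Production-grade RASP correlates many more signals
// (hooking frameworks, emulators, root/jailbreak) than this one check.
fun aiRuntimeSafeToStart(): Boolean =
    !Debug.isDebuggerConnected() && !Debug.waitingForDebugger()
```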

Together, these risks represent a fast-expanding AI threat landscape and require a new defensive strategy. 

High-risk use cases by industry 

In some industries, the need for AI model protection is particularly urgent, and mobile developers and app security teams should take note.

Finance and healthcare 

AI-powered advisors, diagnostics tools, fraud engines, and risk calculators are being deployed on-device to enable faster and more private decision-making. But this also introduces high-stakes risks: 

  • Tampering with financial AI logic could authorize fraudulent transactions 
  • Altering medical triage models could lead to dangerous or biased recommendations 
  • Regulatory scrutiny and breach liability are significant for both sectors

“Software Solutions is the most lucrative component segment registering the fastest growth during the forecast period.” [2]

Read more: Financial App Security in 2025: Combating Traditional Malware and Emerging AI Threats

Gaming and anti-cheat 

AI models that control in-game logic or detect cheating are frequently targeted: 

  • Players reverse-engineer models to bypass enforcement 
  • NPC logic is modified to gain unfair advantages 
  • Game studios lose proprietary innovation to competitors 

Smart devices and IoT deployments 

Threats to AI-driven apps on smartphones are expanding to edge devices. For example: 

  • A smart lock using voice-enabled AI could be manipulated for unauthorized access 
  • AI in smart sensors or appliances could be altered to misreport conditions 
  • Without local protection, these systems are exposed to the same risks that affect mobile AI deployments 

Rising regulatory pressure on AI deployments 

As the risks of embedded AI models grow, so does the regulatory spotlight. Even mobile apps that operate offline must now align with emerging legal frameworks. 

  • EU AI Act: Requires tamper-resistance, logging, and post-deployment monitoring for high-risk AI systems, especially those influencing finance, health, or safety. 
  • GDPR: Apps that use embedded AI to process personal data must meet Article 25 (data protection by design). 
  • NIST AI RMF and ISO/IEC 42001: Set best practices for autonomous AI security, robust design, and risk management. 

Many apps unintentionally fall under “high-risk” categories, even when their AI features seem lightweight. Businesses need to align developers, compliance teams, and product leads on AI security, early and decisively. 

Why conventional app security falls short 

Legacy mobile app protection tools were not built to handle embedded AI risks. Here are some reasons traditional mobile security tools fail to address today’s mobile AI threats: 

  • They tend to focus on app logic/structure, not on AI model integrity, protection, or behavior. 
  • They are blind to memory-level AI manipulation and can’t prevent the misuse of local AI agents. 
  • Their static protections can’t stop dynamic runtime threats in AI apps. 
  • They don’t detect subtle manipulations of internal AI logic or data. 
  • Their application hardening tools don’t protect AI models, decision paths, or input/output flows. 
  • They leave openings for AI model tampering, query-level prompt injection, and data exfiltration. If a query can be written or a response can be seen, it can be manipulated or intercepted without proper runtime protections. 

The result is that many teams are building powerful AI-driven mobile apps without sufficient protection. 

A new threat model for AI in mobile 

To stay ahead and stay safe, businesses need to shift their security thinking and evolve beyond traditional app security. AI changes how mobile apps behave, how they can be attacked, and therefore how they should be protected. If the AI runs inside the app, so do the risks, and so can an attacker. Mobile AI security requires an updated risk model, one that assumes threats can originate inside the app itself. 

Protection against AI threats on mobile devices must work at the app level. Static controls or cloud-side safeguards are no longer sufficient. In practice, modern AI model security must now include: 

  • Input/output hardening to stop corruption or hijacking (a minimal output-validation sketch follows this list)
  • Integrity checks, anti-tampering, and obfuscation that prevent unauthorized model modifications in AI-driven apps
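To make the first bullet concrete, here is a hypothetical Kotlin sketch of output hardening: the app validates the model's verdict against an allow-list before acting on it, so a manipulated model cannot steer the app into an action it was never designed to take. `LoanDecision` and the fail-safe default are illustrative assumptions.

```kotlin
// Hypothetical output hardening: constrain what a model verdict can make the
// app do. LoanDecision and the REVIEW fallback are illustrative assumptions.
enum class LoanDecision { APPROVE, REVIEW, DECLINE }

fun parseDecision(rawModelOutput: String): LoanDecision =
    runCatching { LoanDecision.valueOf(rawModelOutput.trim().uppercase()) }
        .getOrDefault(LoanDecision.REVIEW) // fail safe: unknown output goes to human review
```

Even this small step means a tampered model can, at worst, force a manual review rather than an automatic approval.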

Read more: AI deobfuscators: Why AI won’t help hackers deobfuscate code (yet)

This isn’t future gazing. It is what is required of businesses today to protect trust, compliance, and IP in modern mobile apps. 

What comes next for businesses 

While AI security threats are evolving, so are application protections. Promon helps mobile app teams deploy proven, layered defenses that protect their embedded AI while maintaining UX, compliance, and speed to market. 

Businesses embedding AI in mobile apps need to rethink their threat models today. In our next post, we’ll explore how to secure AI in mobile apps using layered mobile app protection. We’ll show how you can keep your embedded AI secure while building trust, compliance, and resilience into AI-driven products.

Sources 

[1] https://www.grandviewresearch.com/industry-analysis/on-device-ai-market-report 

[2] https://www.grandviewresearch.com/horizon/outlook/ai-in-healthcare-market-size/global