
From framework to action: A new roadmap for securing AI in mobile apps

Written by Dr. Anton Tkachenko | Nov 13, 2025 9:32:04 AM

AI is no longer a backend feature. It’s rapidly becoming integral to the mobile experience, embedded in mobile applications across multiple industries. With this shift, the number and sophistication of emerging threats has grown sharply.

With the rise of on-device AI, organizations need a new kind of threat framework to deal with a new class of threat.

  • It must be broad enough to include a full range of new AI threats. 
  • It must be detailed in its description of individual AI-related threats. 
  • It needs to prioritize threats in terms of potential impact, as well as prioritize solutions in terms of protection relevance. 
  • It should be compliance-oriented and compatible with other cybersecurity frameworks. 
  • It needs to be capable of bridging stakeholders across security, technical, and business roles.

In May 2025, Dr. Anton Tkachenko, a security researcher at Promon, released a paper titled AI Security Threat Model: A Comprehensive Approach. His aim was to promote a preventative approach to application security by focusing on threats within the AI development lifecycle that fall within Promon’s purview: the security aspects of the lifecycle from model deployment to application integration.

The comprehensive threat model was developed specifically for these environments.

“Our systematic approach covers the entire AI lifecycle—from initial data preparation through model operation. This methodology helps organizations identify vulnerabilities and implement preventive measures based on Promon’s specialized expertise and leading cybersecurity frameworks including OWASP, MITRE, and NIST.” (p. 1)

We strongly encourage you to explore the comprehensive model in this research for yourself. Here’s a rundown of some of its key takeaways. 

Business and AI security stakeholder alignment

A wide range of cybersecurity stakeholders will find this threat model a critical tool in AI implementation. These include: 

  • Software engineers who are responsible for model development and training 
  • Cybersecurity specialists who are responsible for protecting data and IT infrastructure 
  • Solution architects who integrate AI into business processes 
  • Business executives who evaluate AI implementation risks in mission-critical operations

More reading: Emerging threats in mobile AI: What businesses need to know

AI compliance requirements in the on-device age 

This comprehensive AI security threat model and the protection mechanisms it outlines are designed to support key regulatory requirements. Several types of protection mechanism directly support compliance while aligning with emerging AI regulations, including:

  • EU AI Act  
  • Data protection regulations (e.g., GDPR and similar frameworks) 
  • Sector-specific requirements (e.g., finance)
  • Transparency requirements 
  • Security standards (e.g., ISO and NIST) 

Various industry frameworks were analyzed to help build the threat classifications used in our comprehensive model. 

Individual AI threats targeting mobile apps 

We have documented forty-nine main AI security threats. These threats are divided into four major categories: 

  1. Device threats (Dev01 to Dev16) 
  2. Model threats (Mod01 to Mod10) 
  3. Application threats (App01 to App14) 
  4. Agent threats (Agt01 to Agt09) 

Each category is defined by the attack surface where the threat occurs. Together, they form systematic documentation of specific threats related to model operation and application integration (the later stages of the AI development lifecycle).

The threat model does more than sort AI security threats into major categories. For each threat, it provides:

  • Threat name 
  • Description 
  • Consequences 
  • Target object 
  • Violated property 
  • Persons responsible for threat mitigation 

Here’s an example of an individual threat breakdown for one type of prompt injection attack.

 

| App08 | Direct Prompt Injection Due to Inadequate Input Validation |
|---|---|
| Description | Using specially crafted inputs to conduct prompt attacks against AI models integrated into mobile applications |
| Consequences | Incorrect or unauthorized model behavior, confidential information leaks |
| Target object | Input processing mechanisms within mobile applications |
| Violated property | Confidentiality, integrity, availability, accuracy |
| Persons responsible for threat mitigation | Application security team; mobile development team |
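To make the App08 scenario concrete, here is a minimal sketch of a pre-filter that a mobile app could apply before passing user text to an on-device model. This is our own illustration, not Promon’s implementation: the function name, the pattern list, and the length limit are all assumptions, and a real deployment would layer such checks with model-aware filtering rather than rely on a blocklist alone.

```python
import re

# Hypothetical patterns that often signal direct prompt injection attempts.
# A real filter would be far more sophisticated than this illustrative blocklist.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* rules",
]

MAX_INPUT_LENGTH = 2000  # illustrative limit on raw user input


def validate_user_prompt(text: str) -> tuple[bool, str]:
    """Return (is_allowed, reason). Purely illustrative input validation."""
    if len(text) > MAX_INPUT_LENGTH:
        return False, "input exceeds length limit"
    lowered = text.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"matched suspicious pattern: {pattern}"
    return True, "ok"
```

A benign question passes, while an input that tries to override the model’s instructions is rejected before it ever reaches the AI component, shrinking the attack surface described under App08.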

More reading: Why prompt injection attacks are the emerging critical risk in mobile app security

Threat analysis and threat prioritization of AI risk 

After breaking down each individual threat, the model analyzes each one in terms of its significance for organizations deploying AI systems in mobile environments. Beyond the structural categories (Device, Model, Application, Agent), we identify four critical threat groups based on attack patterns:

  • Runtime model tampering 
  • Prompt injection attacks 
  • Local data store compromise 
  • AI agent runtime exploitation 

Each threat can be analyzed further based on its potential impact and its relevance to Promon’s security capabilities. Combined, these form an AI Security Threat Prioritization Matrix that ranks each threat by criticality: key threats are grouped into nine threat categories, and each group is ranked according to Promon relevance.

  • High: Threats that can be effectively and immediately mitigated with Promon’s current solution portfolio 
  • Medium: Threats that Promon can partially address today and that are next-phase priorities, but that currently require additional support 
  • Low: Threats that require significant external support due to their strategic nature 

Here is the top section of the matrix that includes threats of high Promon relevance. 

| Threat Category | Key Threats | Primary Impacts | Promon Relevance |
|---|---|---|---|
| Runtime Model Integrity (Shield for Mobile, Code Protect) | Dev01: Model Substitution or Modification; Dev09: Unauthorized Modification of System Prompt | Model behavior manipulation, backdoor insertion | High |
| Model Code Protection (Code Protect) | Dev02: Model Theft | Protection of model inference code from reverse engineering | High |
| Local Data Integrity (Shield for Mobile, Data Protect) | Dev10: Unauthorized Modification of Internal Data Sources | Persistent vulnerabilities, data poisoning, integrity violations | High |
| Certificate Pinning Protection (Shield for Mobile) | Dev06: Interception or Substitution of Requests | Protection of certificate pinning implementation from bypass/tampering | High |
| Input/Output Infrastructure Protection (Shield for Mobile) | Dev07: Unauthorized Disabling of Input/Output Filtering; App02: Bypassing Application-Level Input/Output Controls | Safety control tampering, filter infrastructure compromise, application control bypass | High |
| Data Store Protection (Data Protect) | Dev11: Information Leaks from Internal Data Sources; App05: Indirect Prompt Injection into Internal Data Sources; App06: Information Leaks from Internal Data Sources | Local data exfiltration, data store poisoning, secrets exposure | High |
| Agent Infrastructure Protection (Shield for Mobile) | Dev14: Unauthorized Modification of AI Agent | Agent code tampering, runtime manipulation | High |
| Component Integration Security (Shield for Mobile) | App01: Insecure Component Integration | Insecure APIs, lack of encryption, incorrect access controls | High |
| AI Runtime Code Protection (Shield for Mobile) | App03: Malicious Code Loading; Agt03: Malware Deployment in AI Agent Runtime | Unauthorized code execution, privilege escalation | High |
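A prioritization matrix like the one above is also easy to work with programmatically, for example when triaging which threats to address first. The sketch below is our own illustration of that idea: the record type, field names, and the third (explicitly placeholder) row are assumptions, not part of the paper.

```python
from dataclasses import dataclass


@dataclass
class ThreatGroup:
    """One row of a prioritization matrix (illustrative schema)."""
    category: str
    key_threats: list[str]   # threat IDs from the model, e.g. "Dev01"
    relevance: str           # "High", "Medium", or "Low"


# Two rows taken from the matrix above, plus one hypothetical low-relevance
# row added purely so the filter below has something to exclude.
MATRIX = [
    ThreatGroup("Runtime Model Integrity", ["Dev01", "Dev09"], "High"),
    ThreatGroup("Model Code Protection", ["Dev02"], "High"),
    ThreatGroup("Example Strategic Group (placeholder)", ["Mod01"], "Low"),
]


def threats_by_relevance(matrix: list[ThreatGroup], level: str) -> list[str]:
    """Flatten the threat IDs of every group at the given relevance level."""
    return [t for g in matrix if g.relevance == level for t in g.key_threats]
```

Filtering for `"High"` yields the threat IDs a team can mitigate immediately with its current tooling, which mirrors how the matrix is meant to drive prioritization.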

More learning: Find out about Promon Shield for Mobile™

Promon’s layered protection as an AI defense 

One of the strengths of this comprehensive model is that it doesn’t only detail and prioritize AI security threats. It also maps these critical, high-relevance AI security threats against the protection layers and mechanisms provided by Promon. This matters for anyone who needs to know what protective shielding a security platform can provide, and which specific methods it might employ to counteract AI-related as well as other security threats. The mapping is outlined in the Promon Protection Matrix below. 

| Promon Protection Layer | Threats Addressed | Protection Mechanism |
|---|---|---|
| Runtime Application Shielding (Shield for Mobile) | Dev01, Dev07, Dev09, Dev14, App02 | Anti-tampering controls, runtime integrity verification, repackaging detection, hooking framework detection for AI components, filtering infrastructure, and application-level controls |
| Code Obfuscation (Code Protect) | Dev02 | Advanced obfuscation techniques protecting model inference code from reverse engineering |
| Certificate Pinning Protection (Shield for Mobile) | Dev06, App01 | Protects certificate pinning implementations from tampering and bypass attempts |
| Application Hardening (Shield for Mobile) | App03, Agt03 | Prevention of unauthorized code execution, control flow integrity, isolation of AI runtime environments |
| Local Data Protection (Data Protect) | Dev10, Dev11, App05, App06 | Encryption of stored data, secure key management, integrity verification of databases and files accessed by AI components, protection against data poisoning |
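One mechanism that appears in several rows above is integrity verification of files accessed by AI components, such as an on-device model. As a generic illustration of the underlying idea (not Promon’s implementation; the function names and chunk size are our own), a digest recorded at build time can be compared against the file on disk before the model is loaded:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def model_is_untampered(path: Path, expected_digest: str) -> bool:
    """Refuse to load the on-device model if its digest has changed.

    In practice the expected digest would be embedded at build time and
    itself protected from tampering; here it is simply passed in.
    """
    return sha256_of(path) == expected_digest
```

If an attacker swaps or patches the model file on a rooted device (the Dev01 scenario), the digest no longer matches and the app can refuse to run inference.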

More learning: Secure your AI-powered mobile apps against next-gen attacks

Implementation strategy with a security roadmap 

A robust implementation strategy for addressing critical AI threats must be based on a multilayered approach that addresses vulnerabilities across the entire application stack. These layers include: 

  1. Model-level protection: securing the AI model through runtime protection  
  2. Communication-level protection: ensuring the integrity of data flowing between app components and AI systems  
  3. Input/output security: implementing validation to block malicious inputs to and outputs from AI 
  4. Data store security: protecting databases that store AI-related data 
  5. Runtime environment hardening: securing the execution environment for AI agents 
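The layers above can be sketched as a single guarded inference path, where each layer is reduced to a boolean check and inference only proceeds if every check passes. This is a deliberately minimal illustration of the defense-in-depth idea under our own assumptions; none of these function names are Promon APIs, and each stub stands in for a much richer real-world control.

```python
def model_integrity_ok(model_digest: str, expected: str) -> bool:
    # Layer 1 stand-in: model-level protection (integrity of the deployed model).
    return model_digest == expected


def input_ok(prompt: str) -> bool:
    # Layer 3 stand-in: input security (a trivial length check for illustration).
    return 0 < len(prompt) <= 2000


def output_ok(response: str) -> bool:
    # Layer 3 stand-in: output security (a trivial leak check for illustration).
    return "BEGIN PRIVATE KEY" not in response


def guarded_inference(prompt, model_digest, expected_digest, run_model):
    """Run inference only if every layer's check passes."""
    if not model_integrity_ok(model_digest, expected_digest):
        raise RuntimeError("model failed integrity check")
    if not input_ok(prompt):
        raise ValueError("input rejected")
    response = run_model(prompt)
    if not output_ok(response):
        raise ValueError("output blocked")
    return response
```

The point of the structure, rather than the stubs themselves, is that a failure at any single layer stops the request, which is what makes the combined layers a defense-in-depth strategy.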

“These layers combine to create a defense-in-depth strategy that addresses the unique challenges of securing AI components within mobile and desktop applications. By implementing Promon’s protection capabilities, organizations can significantly reduce the attack surface available to adversaries targeting their AI-enabled applications.” (p. 13)

The paper outlines a phased Implementation Roadmap in five stages for organizations to secure their AI deployments with Promon solutions. This roadmap enables organizations to use Promon’s products to protect their AI-related components against the most relevant and potent threats.

Customer guidance on Promon value delivery

Here are important insights into Promon’s current product strategy. They indicate where we can provide immediate value for organizations deploying AI models in mobile apps. 

  1. Current areas of strength for Promon revolve around protecting against device-level threats to AI systems and related application security concerns. This includes those threats involving runtime integrity, code protection, and secure communications. 
  2. AI agent protection and enhanced input/output validation for GenAI systems are near-term development priorities. 
  3. We recommend that low-relevance threats be addressed through partnerships with specialized AI security vendors.

| High Relevance | Threats Effectively Addressed by Current Promon Solutions |
|---|---|
| Device Security | Dev01: Model Substitution or Modification; Dev02: Model Theft; Dev06: Interception or Substitution of Model Requests/Responses; Dev07: Unauthorized Disabling of Input/Output Filtering; Dev09: Unauthorized Modification of System Prompt; Dev10: Unauthorized Modification of Data in Internal Sources; Dev11: Information Leaks from Internal Data Sources; Dev14: Unauthorized Modification of AI Agent |
| Application Security | App01: Insecure Component Integration; App02: Bypassing Application-Level Input/Output Controls; App03: Malicious Code Loading from External Sources; App05: Injecting Indirect Prompt Attacks into Internal Data Sources; App06: Information Leaks from Internal Data Sources |
| Agent Security | Agt03: Malware Deployment in AI Agent Runtime |

AI threat mitigation strategies with Promon solutions 

There are concrete, practical mitigation strategies for the high-relevance threats that Promon’s solutions are specifically designed to address. These span the range of Promon products, from application shielding to anti-reverse engineering techniques, and from runtime integrity checks to data protection features.

More reading: How to protect your AI-driven mobile apps against emerging security threats

A comprehensive new framework for embedding AI security

The AI threat landscape is evolving and complex. Not all threats are equal in their relevance and impact. Practical protection starts with clear categorization that allows for subsequent prioritization and a pathway toward prevention. 

Anton Tkachenko’s paper uses multiple classification schemes: structural categorization by attack surface (Device/Model/Application/Agent), critical threat groupings by attack pattern, and prioritization by Promon relevance. When taken together, they result in a comprehensive AI security threat model that takes you on a journey from framework development to deployment and integration with mobile and desktop applications.