As attacks against iOS and Android continue to multiply, both platforms have introduced several mitigations at the application and system levels to help protect manufacturers and users.
This Application Threat Report breaks down these defenses into broad areas where they’re most effective. Each area describes how its mitigations help prevent dangerous behaviour, or at least make such behaviour more difficult for attackers. These areas are:
- App stores
- Application security
- Application sandboxing
- Kernel protections
- Device integrity
After examining the iOS and Android mitigation innovations in these areas, the report concludes with practical concerns you should be aware of regarding iOS and Android attacks.
App stores
While Android and iOS implement several security mechanisms to prevent and mitigate attacks (more on this under Application Security below), they also try to stop malicious code from reaching your device in the first place. Both platforms use their first-party app stores as the primary trusted sources for you to install apps from:
- Google Play Store on Android
- Apple App Store on iOS
App store security measures
To be available on Google or Apple app stores, every app must meet certain policies and requirements. These policies enforce quality and stability in the apps and provide you with some guarantees regarding app security.
For example, if your app requests invasive permissions, both stores require you to submit a justification and obtain approval from the store owner before your app goes live.
Both app stores also have a review process that all apps must pass to be published. While the exact details of the approval processes are not public, it is known that the process involves a mix of automatic and manual checks to ensure the app launches correctly, behaves legitimately, and is safe to use.
These policies and processes help prevent malicious apps from reaching app stores and ultimately finding their way to devices. But while these measures are necessary, they don’t always prevent security threats.
Limits to app store measures
Though these policy measures improve app security for the average user, malicious apps still end up on the app stores and thus, on user devices.
And one factor in this is the use of third-party app stores.
Android allows a variety of third-party app stores, and these stores may not always hold app security to the same standards. Recently, due to the EU’s Digital Markets Act (DMA), Apple allowed a small number of third-party app stores too, like Scarlet iOS. But it comes with a clause. Apps on those stores must still be signed (indirectly) by Apple’s certificate, which Apple can revoke if the third-party store poses a risk to iOS users.
Another reason apps with security risks end up on a device is sideloading, which lets users install an untrusted app from unofficial app stores. Despite the inherent risks in bypassing official app store protections, sideloading continues to be a tempting option because it makes a lot of apps and their customization available to users.
App stores are a policy-based form of protection, not a technical defense mechanism. This is why both platforms need to implement security measures beyond their stores to protect user privacy and the integrity of other apps.
Application security
Application security is the class of mitigation measures most effective at preventing a user from altering your app’s functionality or from a third party distributing a modified version of your app. These defenses also prevent attackers from causing your app to modify itself at runtime.
Although app security defenses take many forms, they’re all bound together by one theme: an app should only do what it declares it’ll do before you run it.
Here are the key mitigation mechanisms employed in app security, what each defends against, and whether iOS and Android feature them:
| Mitigation | Explanation | iOS | Android |
| --- | --- | --- | --- |
| App signing | Your app must be cryptographically signed with a trusted certificate. If not, the OS will not run the app. To be effective, the user should not be able to alter the list of trusted certificates, so that attackers cannot trick the system into accepting unsigned apps by inserting their own certificates. | Yes (Apple-issued certificates) | Yes (any certificate is accepted) |
| Encrypted code | Your app’s executable code should be encrypted at rest to prevent attackers from observing (and hence, modifying) its functionality. It also requires that, to modify the app, an attacker must somehow discover the key to re-encrypt it. Ideally, this key should be different for every installation of every app. | Yes (partially) | No (not by default) |
| No writable and executable memory | It should not be possible to map memory that is both writable and executable, to prevent attacks that change code at runtime or write malicious code into the stack or heap. It is more effective if the user cannot swap memory back and forth between writable and executable. | Yes | No (not enforced by default) |
| No dynamic X maps | Your app should not create additional executable memory at runtime. When combined with W^X memory, this means that all the code that will run is statically determined before execution, which prevents unexpected code from sneaking in. | Yes (entitlement-gated) | No |
| Pointer tagging | Pointers should be tagged in a way that distinguishes them from non-pointer values. This prevents an attacker from coercing the app to use a pointer when one wasn’t expected, as in buffer overflow or return-oriented programming (ROP) attacks. A user should not be able to forge a pointer, and the device should refuse to use incorrectly tagged pointers. | Yes (PAC) | Partial (Android 11+ with TBI/MTE) |
App signing
When an app is to be signed and published on an app store, the publisher and the store can both agree on which certificate should be used for that app.
But before publishing, the situation is complicated. Your developers expect to develop and test the app before it goes live, and some users want to run apps that will never be published.
Maintaining app security under these additional constraints is difficult, and both iOS and Android approach it with different philosophies.
App signing on iOS
As an iOS app developer, you’re issued a unique certificate signed by Apple that you can use to sign or re-sign apps. This certificate proves your identity and ensures your app’s integrity.
But before a device runs your app, it first checks if your certificate is allowed to run on that specific device. This limits sideloading and unofficial app distribution, making it harder for attackers to push modified apps to users.
There is a special category of certificates called enterprise certificates. These aren’t restricted to a single device but are more difficult to obtain. If your company wishes to distribute internal apps to employees, you can request an enterprise certificate from Apple; it requires annually re-justifying your company’s need for one.
Unfortunately, these certificates are often leaked, purchased, or stolen. Illegitimate certificates are revoked quickly (between 1 hour and 30 days), which renders them, and the apps they signed, useless. The Scarlet app store relies on enterprise certificates.
App signing on Android
On Android, your app must be signed to run on a device. But the keys and the certificate you use don’t need to be signed or approved by Android or Google. This means that anyone can create an app and sign it with an arbitrary certificate, and any app can be re-signed with another arbitrary certificate.
And this app signing situation has security implications. It lets malware developers easily modify an existing app by adding their malicious code, while still providing a signed app. This modified version can then be installed and run on Android devices.
However, signing does still matter on Android.
It prevents unauthorized developers from modifying your app on the Play Store, and prevents users from updating an app already installed on their device with a malicious version of the app.
Both the Google Play Store and the Android platform require that the certificate in the update matches the one from previous versions to ensure only you, the legitimate developer, can update the app.
Encrypted code
For the OS to run your app, its code must be unencrypted in memory. This means that a malicious actor can potentially access the unencrypted code while your app is running. This reduces the value of code encryption unless you combine it with other protections.
Homomorphic encryption offers a potential solution: it allows operations to be performed on encrypted data without decrypting it first. With this, your app could process sensitive data while keeping it confidential. The technology is still evolving, but in the future it might help mitigate the risk of having to decrypt code in memory before it runs.
Encrypted code on iOS
On iOS, only a small section of your app’s code is encrypted, and that section is different for each device installation. Because older iOS devices have weaker encryption requirements, an attacker with two devices can merge the unencrypted parts of your app from each device to reconstruct the full unencrypted code.
Encrypted code on Android
Android has no default code encryption requirement. But if you’re a developer concerned about security, you can implement code encryption manually or use services from runtime application self-protection (RASP) providers. These providers often offer encryption as part of a broader set of app protections.
No writable and executable memory
When you’re developing an app, you want to prevent any memory segment from being both writable and executable (W&X) at the same time because allowing W&X memory makes it easier for attackers to inject their own code.
Sometimes, your app might need to generate machine code dynamically, as in web browsers, where JavaScript is delivered and run at runtime. A simple interpreter can work within this restriction, but for performance it is better to translate the JavaScript directly into machine code, and that requires writing and then executing code at the same memory location. So W&X memory is allowed, but controlled through permissions that attackers cannot forge.
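To make the write-then-execute pattern concrete, here is a minimal sketch in C of a W^X-friendly flow on a POSIX system: map a page writable, emit code into it, then flip it to read-and-execute before calling it. The byte sequence is illustrative x86-64 machine code; a real JIT emits code for the target CPU, and a strictly enforced W^X policy (or a missing entitlement on iOS) may refuse the final permission change.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* x86-64 machine code for: int f(void) { return 42; }
 *   mov eax, 42 ; ret
 * Illustrative only; a real JIT emits code for the target CPU. */
static const uint8_t kCode[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

int main(void) {
    size_t len = sizeof(kCode);

    /* 1. Map the region writable (but NOT executable). */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* 2. Emit the generated code while the page is writable. */
    memcpy(buf, kCode, len);

    /* 3. Flip the page to read + execute; it is never W and X at once. */
    if (mprotect(buf, len, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");   /* A strict W^X policy may refuse this. */
        return 1;
    }

    int (*fn)(void) = (int (*)(void))buf;
    printf("generated code returned %d\n", fn());

    munmap(buf, len);
    return 0;
}
```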
No writable and executable memory on iOS
On iOS, memory segments cannot be both writable and executable simultaneously. This restriction improves security by preventing easy code injection.
Some apps may be able to map dynamic executable memory (that is, temporarily change a memory segment from writable to executable and back). This limits the effectiveness of W&X restriction, but these capabilities are tightly controlled and available only to a limited number of apps.
No writable and executable memory on Android
On the other hand, Android doesn’t enforce strict memory permission controls by default. Your app can mark memory readable, writable, and executable (RWX) at the same time. It can also change the permissions of any of its memory segments at runtime.
While attackers often abuse this flexibility to modify app code or inject runtime hooks, it also allows you to build sophisticated protections and implement custom code encryption strategies.
On the newest devices (running Linux kernel 6.10 or later), you can use the mseal system call to prevent further changes to a memory segment’s permissions. This gives you more control and helps you close one of Android’s security gaps.
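As a rough illustration, the sketch below seals a read-only mapping so that later attempts to make it writable fail. It assumes a Linux 6.10+ kernel; because libc wrappers for mseal may not be available yet, it issues the raw syscall, and the fallback syscall number is an assumption you should verify for your target ABI.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* mseal() landed in Linux 6.10; older headers may not define the syscall
 * number, so fall back to the value from the unified syscall table.
 * Verify this number for your target ABI. */
#ifndef __NR_mseal
#define __NR_mseal 462
#endif

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* A read-only region we never want flipped to writable or executable. */
    void *region = mmap(NULL, page, PROT_READ,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Seal the mapping: later mprotect/munmap/mremap calls on it fail. */
    if (syscall(__NR_mseal, region, page, 0) != 0) {
        perror("mseal (kernel older than 6.10?)");
        return 1;
    }

    /* This attempt to make the sealed region writable is now rejected. */
    if (mprotect(region, page, PROT_READ | PROT_WRITE) != 0)
        perror("mprotect on sealed region (expected to fail)");

    return 0;
}
```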
No dynamic X maps
Like W^X memory, the concept of no dynamic executable (X) maps is a security measure that helps prevent apps from creating executable memory regions at runtime. This is especially important when you’re dealing with just-in-time (JIT) compilation, where your app generates code on the fly and needs to execute it.
Since a JIT compiler can’t predict how much code will be compiled ahead of time, dynamic X maps become necessary. But they should be gated behind a permission to prevent attacks.
No dynamic X maps on iOS
The ability to map dynamic executable memory is controlled by a permission that Apple has historically disallowed any app on its App Store from using. But with the passing of the EU’s DMA legislation, Apple has started approving selected apps that use this entitlement. Because of this legislation, apps distributed through third-party app stores can use dynamic executable memory without going through Apple’s strict approval process.
No dynamic X maps on Android
On Android, you face no such restrictions. Your app can freely map new memory regions with executable permissions. While this level of freedom helps JIT compilation or runtime-generated code, it also introduces a risk that attackers can exploit to load and run malicious payloads.
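A small probe like the one below shows the difference in practice: it asks the kernel for a page that is writable and executable at once. On stock Android this typically succeeds for an app’s own anonymous memory (subject to the device’s SELinux policy), while a platform enforcing W^X refuses it. This is only a sketch for observing the policy, not a recommendation to create such mappings.

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Ask for a page that is writable AND executable in one step.
     * Android generally grants this for an app's own anonymous memory;
     * a platform enforcing W^X (or an SELinux execmem denial) refuses it. */
    void *p = mmap(NULL, page, PROT_READ | PROT_WRITE | PROT_EXEC,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("dynamic W+X mapping refused");
        return 1;
    }
    printf("dynamic W+X mapping granted at %p\n", p);
    munmap(p, page);
    return 0;
}
```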
If you’re a developer working with JIT or other dynamic code execution techniques on Android, combine them with other security controls like RASP, memory permission sealing (mseal on newer devices), and runtime integrity checks.
Pointer tagging
Pointer tagging is a security feature that adds metadata to pointers in the form of unused high bits of a 64-bit address. As a developer, you can think of it as a way to detect memory corruption or forged pointers at runtime.
Since only about the top 16 bits of a pointer are available for tagging, this gives you 2^16 (65,536) possible tags. If your app is set to crash on an invalid tag, this provides a good layer of defense against corrupted pointers. But a determined attacker could brute-force all 2^16 possibilities, potentially forging a valid pointer.
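The following sketch shows the idea in plain C: stash a 16-bit tag in the otherwise unused top bits of a 64-bit pointer and refuse to dereference a pointer whose tag doesn’t match. It is a simplified software illustration of the concept, not how iOS or Android implement it in hardware.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative software pointer tagging: store a 16-bit tag in the top
 * bits of a 64-bit pointer, which user-space addresses leave unused.
 * Real implementations (TBI, MTE, PAC) do this in hardware. */
#define TAG_SHIFT 48
#define TAG_MASK  0xFFFFULL

static void *tag_pointer(void *p, uint16_t tag) {
    return (void *)(((uintptr_t)p & ~(TAG_MASK << TAG_SHIFT)) |
                    ((uintptr_t)tag << TAG_SHIFT));
}

static void *check_and_strip(void *p, uint16_t expected_tag) {
    uint16_t tag = (uint16_t)(((uintptr_t)p >> TAG_SHIFT) & TAG_MASK);
    if (tag != expected_tag) {
        /* Crash on mismatch: a forged or corrupted pointer has at best a
         * 1-in-65,536 chance of carrying the right tag. */
        fprintf(stderr, "pointer tag mismatch: got %#x\n", tag);
        abort();
    }
    return (void *)((uintptr_t)p & ~(TAG_MASK << TAG_SHIFT));
}

int main(void) {
    uint16_t tag = 0xA5A5;                 /* per-allocation tag */
    int *raw = malloc(sizeof(int));
    int *tagged = tag_pointer(raw, tag);

    /* Every dereference goes through the check. */
    int *usable = check_and_strip(tagged, tag);
    *usable = 42;
    printf("value: %d\n", *usable);

    free(raw);
    return 0;
}
```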
Pointer tagging in iOS
Apple has experimented with pointer tagging for some time through pointer authentication code (PAC). A PAC is a cryptographic signature added to pointers to verify them before use, making it harder for an attacker to modify a pointer in memory without being detected. PAC helps you verify that pointers haven't been tampered with, providing cryptographic assurance rather than just statistical protection.
But you cannot rely on PAC alone. Apple hasn’t settled on a stable interface, and attackers have discovered various PAC bypasses. System libraries (Objective-C and Swift runtimes) are compiled with PAC enabled. But until Apple stabilises the PAC interface, your app code won’t be able to use PAC as a defense mechanism directly.
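For illustration only, clang exposes PAC through intrinsics in <ptrauth.h>, sketched below. Building this requires the arm64e ABI, which, as noted above, Apple hasn’t stabilised for third-party apps, so treat it as a demonstration of the mechanism rather than an API you can ship today; the intrinsic and key names are taken from clang’s ptrauth.h and should be verified against your toolchain.

```c
#include <ptrauth.h>   /* Apple/LLVM clang, arm64e target only */
#include <stdio.h>

int main(void) {
    int secret = 42;

    /* Sign the address of `secret` with a data key and a discriminator.
     * The signature lives in the otherwise-unused top bits of the pointer. */
    int *signed_ptr = ptrauth_sign_unauthenticated(
        &secret, ptrauth_key_process_dependent_data, 0x1234);

    /* Authenticate before use. If the pointer (or its signature) was
     * tampered with in memory, authentication fails and the process traps. */
    int *usable = ptrauth_auth_data(
        signed_ptr, ptrauth_key_process_dependent_data, 0x1234);

    printf("authenticated value: %d\n", *usable);
    return 0;
}
```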
Pointer tagging in Android
Android started supporting pointer tagging with Android 11, but only on devices running a kernel with TBI (Top Byte Ignore) support, which is standard from Linux kernel 4.14 onwards. There are two types of pointer tagging in Android:
- Hardware enforced on ARM64 devices with memory tagging extension (MTE) support.
- Software enforced that works as a fallback when MTE is unavailable.
Android’s open-source Scudo heap allocator uses these pointer tagging features for dynamically allocated memory. It uses MTE by default but falls back to a software implementation if the CPU does not support MTE.
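If you ship native code, a process can also opt into the tagged-address ABI (and synchronous MTE tag checks where the hardware supports them) itself. The hedged sketch below uses the Linux prctl interface; Android apps normally opt in through a manifest setting rather than calling prctl directly, so this is shown only to make the mechanism concrete. The constants are taken from linux/prctl.h, with fallback definitions for older headers.

```c
#include <stdio.h>
#include <sys/prctl.h>

/* Constants from <linux/prctl.h>; defined here as a fallback for older
 * headers. Requires an arm64 kernel with tagged-address-ABI support and,
 * for the MTE bits, MTE-capable hardware. */
#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL 55
#define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
#define PR_MTE_TCF_SYNC   (1UL << 1)
#define PR_MTE_TAG_SHIFT  3
#endif

int main(void) {
    /* Allow tagged pointers to be passed to syscalls, and ask for
     * synchronous MTE tag-check faults with all 16 tags usable. */
    unsigned long ctrl = PR_TAGGED_ADDR_ENABLE
                       | PR_MTE_TCF_SYNC
                       | (0xFFFFUL << PR_MTE_TAG_SHIFT);

    if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0) != 0) {
        perror("tagged address ABI / MTE not available");
        return 1;
    }
    puts("tagged pointers (and MTE, if present) enabled for this process");
    return 0;
}
```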
Application sandboxing
It’s highly unlikely that the average device has only one app installed. To prevent malicious apps from interfering with other apps, both iOS and Android use app sandboxing. Sandboxing ensures your apps run in isolation from others and cannot access system resources or other apps’ data unless specifically granted. Both platforms enforce this with different methods and offer controlled escape mechanisms if your app needs more access.
Application sandboxing on iOS
Apple applies sandboxing through a capability-based system that gives apps limited, controlled access to services using XPC (cross-process communication) and entitlements. At the lowest level, this is built on Mach ports, but Apple has designed more convenient higher-level layers, like XPC and its Objective-C counterpart NSXPC.
- XPC and NSXPC: Apple has split iOS into many small services, which send information to each other over XPC. Your app only gets access to the specific information that those services send over XPC instead of every resource in the system. For example, when your app requests access to the photo library, Apple can restrict it to only a subset of your user’s photo library. This has shifted attackers’ focus to XPC itself, and the past 5 years have seen several NSXPC-based attacks deployed in the wild.
- Entitlements: When you sign your app (before uploading it to the App Store), you list which system services your app needs, and the OS will disallow any attempt by the app to communicate with other services. For example, the com.apple.developer.networking.wifi-info entitlement allows your app to access information about which Wi-Fi network the device is connected to. Without this entitlement, the OS blocks the request.
- Restricted capabilities: Apple also uses entitlements to control whether your app can use features that require cooperation with the kernel, like mapping in W&X memory. Some entitlements, like the get-task-allow entitlement that enables debugging, are disallowed: Apple won’t sign and distribute an app that requests them.
Application sandboxing on Android
Android uses a combination of Linux security features to enforce sandboxing:
- Linux user/group permissions: Each app runs under its own Linux UID to prevent apps from accessing each other’s data.
- Isolated mount namespaces: Your app’s mount namespace doesn’t include other apps’ local storage, so it cannot see their files.
- SELinux policies: Even if Linux file permissions allow something, SELinux can further restrict access based on app behavior.
- Seccomp filters: They limit the set of syscalls your app can make (see the sketch after this list).
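To make the last item concrete, here is a minimal sketch of a seccomp-BPF filter. Android installs its filters in the zygote before app code runs; this example shows the same mechanism applied by a process to itself, killing the process if it ever calls ptrace. Some constants (such as SECCOMP_RET_KILL_PROCESS) depend on the kernel headers available, so treat the exact names as assumptions to verify.

```c
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

int main(void) {
    struct sock_filter filter[] = {
        /* Load the syscall number from the seccomp data. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* If the process calls ptrace, kill it; otherwise skip to ALLOW. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_ptrace, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
        /* Allow every other syscall. */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = (unsigned short)(sizeof(filter) / sizeof(filter[0])),
        .filter = filter,
    };

    /* Required so an unprivileged process can install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);

    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) != 0) {
        perror("installing seccomp filter");
        return 1;
    }

    /* From here on, any call to ptrace() terminates the process. */
    puts("seccomp filter installed");
    return 0;
}
```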
Android provides controlled sandbox escapes to access information or resources that an app cannot access directly. They include:
- Android permissions: Your app can use Android permissions to access more files/directories or perform more actions (like accessing the user’s pictures). This works by adding the Linux user running the app to other privileged Linux groups.
- System server requests from Binder: Your app can request information or action from the System Server using Binder. System server is a privileged process that offers services like Package Manager. Binder itself is a kernel component that lets apps communicate with each other or with the System Server. Although the underlying Binder protocol is complex, Android exposes it through high-level Java/Kotlin interfaces.
- Accessibility Services: Android devices have default Accessibility features to help users with impairments. But because these features are limited, Android allows users to register an installed app as an accessibility service app. This allows the app to use and control the device as if it were the user. While meant for accessibility use cases, many malware apps misuse this as a sandbox escape.
Kernel protections
Ensuring that apps behave well towards each other is important, but it is just as important to prevent a compromised app from taking control of the entire system. Both platforms do this by hardening the most privileged part of the OS: the kernel.
Both iOS and Android implement multiple layers of kernel defense:
- Kernel address space layout randomisation (KASLR): Makes it harder to locate important kernel structures by randomising their locations in memory.
- Control flow integrity (CFI): Prevents the redirection of execution flow to malicious code (a user-space illustration follows this list).
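Kernel CFI is enabled at build time (Android kernels use Clang’s CFI), but the idea is easy to see in user space. The sketch below, compiled with clang’s CFI sanitizer, aborts at the mismatched indirect call an attacker would need for control-flow hijacking; the build flags and behaviour are for illustration under those assumptions.

```c
/* Build (user-space illustration of the check the kernel applies to itself):
 *   clang -flto -fvisibility=hidden -fsanitize=cfi-icall cfi_demo.c
 */
#include <stdio.h>

static int add_one(int x) { return x + 1; }

int main(void) {
    /* A function pointer with the wrong prototype, as an attacker who has
     * corrupted a pointer might arrange. */
    void (*hijacked)(void) = (void (*)(void))add_one;

    /* With CFI enabled, this indirect call aborts because the target's
     * type doesn't match the call site; without CFI the mismatched call
     * typically proceeds. */
    hijacked();

    puts("call went through (CFI not enabled?)");
    return 0;
}
```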
Kernel protections on iOS
Because Apple controls the hardware, firmware, and OS, it can implement deep kernel protections enforced directly by hardware. This protects iOS from being completely compromised due to one compromised app. While your app has no direct way to interact with these protections, understanding them helps explain why kernel-level attacks on iOS are rare and complex. Key iOS kernel protections include:
- Kernel text read-only region (KTRR): Prevents modification of kernel code post-boot.
- Page protection layer (PPL): Protects specific memory pages from being tampered with.
- Secure page table monitor (SPTM): Monitors and protects kernel page tables from manipulation.
- Trusted execution monitor (TXM): Helps enforce execution rules at the hardware level.
While iOS kernel protections are tightly integrated with the device’s hardware, some of them have been circumvented, showing that even strong mitigations are not foolproof.
Kernel protections on Android
Unlike Apple, Google cannot enforce strict hardware controls across all Android devices. This makes it a challenge to secure all those devices, especially with the many security-critical components that are manufacturer-specific.
As a result, Android focuses on architectural isolation, strong guidelines, and modularity to limit kernel compromise risks.
- Compatibility test suite (CTS): A set of compliance checks that equipment manufacturers must pass to support Google Play Services.
- Project Treble (introduced in Android 8): Separates the OS framework from device-specific implementations to make it easier to update Android without depending on manufacturers.
- Generic kernel image (GKI): Aims to create a universal Linux kernel for Android. It isolates manufacturer drivers and modules from the core kernel. If a vulnerability is found in a device module, it won’t compromise the entire kernel.
Device integrity
Device integrity mechanisms implemented in iOS and Android are essential because an attacker can modify the device itself. Both Android and iOS have a similar design to establish a root of trust on the device that travels all the way back to the fabrication of the hardware.
On both Android and iOS devices, the multi-stage boot process runs from the Boot ROM to the kernel, and finally to userland. During this boot process, each stage validates the integrity of the next using cryptographic signatures. The root public key used to verify the digital signatures is embedded in read-only, tamper-resistant hardware, like the Secure Enclave.
The Boot ROM (often called primary bootloader or stage-1 bootloader) uses this key to verify the signature of the next stage bootloader, called the secondary or eXtended bootloader on Android, and iBoot on iOS. The stage-2 bootloader is then responsible for verifying the stage-3 bootloader (if there is one), and so on until the final bootloader stage verifies and runs the kernel.
Userland components’ integrity is guaranteed by cryptographically signing the system partitions and verifying this signature piecewise when the files are used. Both iOS and Android use a Merkle tree to verify the integrity of the system partitions efficiently.
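As a simplified picture of how this works, the sketch below hashes fixed-size blocks of a pretend partition, combines the hashes pairwise into a single root, and detects a tampered block by comparing against the trusted root. Real implementations verify blocks on demand against intermediate hash levels and use a cryptographic hash such as SHA-256; the FNV-1a hash here is just a stand-in to keep the sketch self-contained.

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCKS      4
#define BLOCK_SIZE  16

static uint64_t hash_bytes(const void *data, size_t len) {
    /* FNV-1a: placeholder for a cryptographic hash such as SHA-256. */
    const uint8_t *p = data;
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Compute the Merkle root over fixed-size blocks of a "partition". */
static uint64_t merkle_root(const uint8_t blocks[BLOCKS][BLOCK_SIZE]) {
    uint64_t level[BLOCKS];
    size_t n = BLOCKS;

    for (size_t i = 0; i < n; i++)                 /* leaf hashes */
        level[i] = hash_bytes(blocks[i], BLOCK_SIZE);

    while (n > 1) {                                /* combine pairwise */
        for (size_t i = 0; i < n / 2; i++) {
            uint64_t pair[2] = { level[2 * i], level[2 * i + 1] };
            level[i] = hash_bytes(pair, sizeof(pair));
        }
        n /= 2;
    }
    return level[0];
}

int main(void) {
    uint8_t partition[BLOCKS][BLOCK_SIZE] = {
        "system-block-0", "system-block-1", "system-block-2", "system-block-3",
    };

    uint64_t trusted_root = merkle_root(partition);   /* signed at build time */

    partition[2][0] ^= 0x01;                           /* tamper with one block */
    if (merkle_root(partition) != trusted_root)
        puts("integrity check failed: partition was modified");

    return 0;
}
```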
Device integrity on iOS
On iOS, secure boot is mandatory and cannot be disabled: there is no supported way to break the boot integrity chain on production devices. iOS also verifies system partitions continuously using hash-based validation. And while jailbreaking opens the device to modifications, it requires exploits because iOS does not offer any official unlock paths.
Device integrity on Android
In Android, similar to iOS, there are signature checks at every boot stage, and dm-verity verifies the integrity of the system partitions at runtime using a Merkle tree. On some devices, users can disable secure boot by unlocking the bootloader; keep in mind that this is also the first step in rooting a device or installing custom ROMs.
Even with an unlocked bootloader, stage-1 Boot ROM still validates stage-2. But from there, the rest of the chain can be bypassed. Because this makes devices less trustworthy and vulnerable to malware, some device manufacturers lock bootloaders permanently to enhance security.
Other concerns
Aside from theoretical defenses, there are some practical concerns you should be aware of when assessing iOS or Android security.
Firstly, most iOS apps are compiled down to native code, while Android apps typically ship Java bytecode. This matters because obfuscating Java bytecode has technical limitations: verification is performed on the DEX code, which restricts some obfuscation patterns. So if you’re trying to protect your app, this difference affects your strategy.
Secondly, Apple exercises tight control over its hardware and can ensure that newer iOS devices have the hardware features needed to run all of these mitigations. The Android ecosystem, on the other hand, has a wide range of devices to support with different feature sets, and many users still run old and unsupported Android devices. This allows an attacker to focus on the weakest link when seeking to compromise an app or bypass system defenses.
Also, while tools to hack iOS apps do exist, the toolkit for Android is larger and more accessible, which makes it quicker and easier for new attackers to make progress.