The mobile security industry has never had better tools, more comprehensive guidelines, or more publicly documented case studies of what goes wrong.
And yet the same critical vulnerabilities surface in security audits year after year. For organisations, the cost is rarely just technical: breaches bring regulatory penalties, reputational damage, and remediation bills that far exceed what secure design would have cost upfront.
We spoke with Andrii Mykytiuk, who sheds light on why security remains one of mobile development's most persistently misunderstood challenges.
Background & experience:
With more than 7 years of experience, Andrii specialises in iOS application development and cross-platform mobile solutions, building scalable applications, developing frameworks, and delivering high-quality user experiences.
1. In your experience, what are the reasons mobile applications fail security reviews?
Andrii Mykytiuk: As an IT specialist with a strong interest in application architecture and cybersecurity, I see that most mobile applications fail security reviews not because developers are incompetent, but because teams systematically fail to understand threat modelling. It is a fundamental gap in engineering thinking.
First, according to industry research, a significant number of mobile applications still rely on outdated cryptographic algorithms or insecure modes of operation (for example, AES in ECB mode). For organisations, misconfigured infrastructure is especially costly: it often goes undetected the longest and carries the heaviest regulatory exposure.
This does not happen because developers are unaware of modern standards, but because they do not analyse realistic attack scenarios. Without a threat model, developers have no reason to examine how encryption is configured — they simply pick an algorithm, check the box, and move on. The details that determine whether that encryption actually holds up under attack go unexamined. In practice, an incorrect configuration undermines the use of a "strong" algorithm.
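To see why mode choice matters, here is a toy illustration in Python. It does not use real AES (the standard library has no block cipher); a truncated SHA-256 stands in for the block encryption, which is enough to show the structural leak: ECB encrypts identical plaintext blocks to identical ciphertext blocks, while a counter-style mode does not.

```python
import hashlib

BLOCK = 16

def toy_encrypt_block(key: bytes, block: bytes, tweak: bytes = b"") -> bytes:
    # NOT real encryption: a deterministic stand-in (truncated SHA-256)
    # used only to visualise how block modes differ structurally.
    return hashlib.sha256(key + tweak + block).digest()[:BLOCK]

def encrypt_ecb(key: bytes, data: bytes) -> bytes:
    # ECB: every block is encrypted independently, so identical
    # plaintext blocks always yield identical ciphertext blocks.
    return b"".join(
        toy_encrypt_block(key, data[i:i + BLOCK])
        for i in range(0, len(data), BLOCK)
    )

def encrypt_ctr_like(key: bytes, data: bytes) -> bytes:
    # CTR-style: mixing a per-block counter (the "tweak") breaks the
    # pattern, so equal plaintext blocks encrypt differently.
    return b"".join(
        toy_encrypt_block(key, data[i:i + BLOCK], tweak=str(i).encode())
        for i in range(0, len(data), BLOCK)
    )

key = b"demo-key"
plaintext = b"A" * 16 + b"A" * 16          # two identical blocks

ecb = encrypt_ecb(key, plaintext)
ctr = encrypt_ctr_like(key, plaintext)

print(ecb[:16] == ecb[16:])   # True: ECB leaks that the blocks match
print(ctr[:16] == ctr[16:])   # False: counter mode hides the repetition
```

This is exactly the "check the box" failure: the algorithm name is strong, but the repeated-block pattern survives encryption and tells an attacker where identical data lives.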
Second, there is an important statistic: a substantial percentage of mobile data breaches occur not because of the compromise of the application itself, but because of misconfigured cloud infrastructure — publicly accessible S3 buckets, exposed Firebase instances, or overly permissive IAM roles. This demonstrates that modern mobile security is not limited to application code; it encompasses the entire surrounding ecosystem. If the threat model focuses exclusively on the client, the team simply fails to see infrastructure-level risks.
Another illustrative issue is excessive permissions. Many applications request access to geolocation, camera, or contacts without a strict necessity. This expands the attack surface. Once the application is compromised, the volume of potentially accessible data increases dramatically. A proper threat model must incorporate the principle of least privilege not only on the server side, but also at the mobile platform level.
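As an illustrative sketch (the permission names follow Android's convention, but both sets here are hypothetical), a least-privilege audit can be as simple as diffing what the app requests against what its features actually justify:

```python
# Hypothetical audit: compare the permissions an app manifest requests
# against the set its features actually justify.
REQUESTED = {"CAMERA", "ACCESS_FINE_LOCATION", "READ_CONTACTS", "INTERNET"}
JUSTIFIED = {"CAMERA", "INTERNET"}   # e.g. the app only scans QR codes

def excessive_permissions(requested: set, justified: set) -> list:
    """Return permissions that widen the attack surface without need."""
    return sorted(requested - justified)

print(excessive_permissions(REQUESTED, JUSTIFIED))
# ['ACCESS_FINE_LOCATION', 'READ_CONTACTS']
```

Every entry in that output is data an attacker inherits for free once the app is compromised, which is why the check belongs in the threat model rather than in a post-release cleanup.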
Finally, bug bounty practice shows that logical vulnerabilities in business processes are often more valuable than technical flaws. Attackers increasingly exploit design errors rather than low-level issues. These problems can only be identified by modelling abuse scenarios.
All these examples reinforce my position: mobile application security is a systemic property of architecture and processes. Without comprehensive threat modelling, even technically "clean" code can be vulnerable.
2. What’s your approach to incorporating security into a mobile app from the start?
AM: In the mobile development industry, security is still frequently perceived as a checklist of technical measures: "HTTPS enabled — check," "JWT implemented — check," "data encrypted — great." But security is not equivalent to a collection of technologies; it begins with analysing the adversary. A threat model is a structured answer to three questions:
- Who is our attacker?
- What capabilities do they possess?
- Which assets are valuable to them?
If we don’t answer these questions, any defensive measure becomes accidental.
OWASP research shows that the most common mobile vulnerabilities are: insecure data storage, insufficient API protection, weak authentication, and lack of application integrity checks. These are not zero-day exploits or advanced cryptographic attacks. They are basic architectural flaws that could have been prevented at the design stage.
3. What makes the mobile environment fundamentally different from other development environments, and what unique risks does this create for security?
AM: Developers often treat the mobile app as part of their own infrastructure rather than as software running on a user-controlled device. From a threat modelling perspective, the client device is potentially hostile. Users can root or jailbreak devices, install custom firmware, attach debuggers, or inject instrumentation frameworks such as Frida or Xposed.
If this assumption isn’t formalised, common anti-patterns appear: storing secrets in code, moving business logic to the client, performing authorisation checks on the app side, or relying on local flags. For example, some production apps unlock premium features with a simple "isPremium=true" flag — easily bypassed via decompilation or runtime patching.
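A minimal sketch of the fix, with hypothetical names: the server's own entitlement record decides, and the client-supplied flag is ignored entirely.

```python
# Sketch: the server treats its own records as the source of truth for
# entitlements. The store and function names are illustrative.
ENTITLEMENTS_DB = {"alice": {"premium": True}, "bob": {"premium": False}}

def unlock_premium_feature(user_id: str, client_claims: dict) -> bool:
    # The client may send {"isPremium": True}; a patched binary can set
    # this to anything, so the claim must never be trusted.
    record = ENTITLEMENTS_DB.get(user_id, {})
    return record.get("premium", False)   # only the server-side record decides

print(unlock_premium_feature("bob", {"isPremium": True}))   # False
print(unlock_premium_feature("alice", {}))                  # True
```

Note that `client_claims` is accepted but never read: the safest way to handle an untrustworthy client flag is to make it irrelevant to the decision.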
Reverse engineering is another underestimated risk. APKs can be decompiled, and obfuscation only partially protects logic. Without including binary analysis in the threat model, developers may expose antifraud algorithms, API structures, or third-party keys. I’ve seen payment gateway API keys embedded in apps — “hidden,” but extractable in a few trivial steps.
Communication layers also matter. TLS alone isn’t enough; without certificate pinning, apps remain vulnerable to man-in-the-middle attacks if a trusted root certificate is compromised or a custom CA is installed.
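The core of pinning is a hash comparison. A stdlib sketch (the certificate bytes here are placeholders, not real DER): the app ships the SHA-256 digests of the certificates it trusts and rejects anything else, even if a system-trusted CA signed it.

```python
import base64
import hashlib

# Pinned digests would normally be computed at build time from the real
# server certificate; the bytes below are placeholders for illustration.
PINNED_HASHES = {
    base64.b64encode(hashlib.sha256(b"server-cert-der").digest()).decode(),
}

def is_pinned(cert_der: bytes) -> bool:
    """Accept the TLS peer only if its certificate matches a pinned hash."""
    digest = base64.b64encode(hashlib.sha256(cert_der).digest()).decode()
    return digest in PINNED_HASHES

print(is_pinned(b"server-cert-der"))   # True: the expected certificate
print(is_pinned(b"mitm-cert-der"))     # False: cert from a rogue or custom CA
```

In production this check runs inside the TLS handshake callback (and pins the public key rather than the whole certificate to survive renewals), but the decision logic is this comparison.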
Finally, attackers increasingly target API abuse rather than the app itself. Automated scripts can emulate client behaviour, test parameters, and exploit business logic. If the server trusts the client, vulnerabilities like limit bypass or price manipulation may arise. These are not classic SQL injections, but business logic flaws that require threat modelling to detect.
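For instance, server-side price validation closes the manipulation path: the server recomputes the total from its own catalogue and rejects any mismatching client-submitted amount. A sketch with hypothetical SKUs and prices:

```python
# Illustrative catalogue; prices in cents, SKUs are made up.
CATALOGUE = {"sku-1": 999, "sku-2": 4999}

class OrderError(Exception):
    pass

def place_order(items: dict, client_total: int) -> int:
    # Recompute the total from server-side data; the client-submitted
    # amount is only used as a tamper check, never as the price.
    server_total = sum(CATALOGUE[sku] * qty for sku, qty in items.items())
    if client_total != server_total:
        raise OrderError("client total does not match server-side price")
    return server_total

print(place_order({"sku-1": 2}, 1998))      # 1998: honest client
try:
    place_order({"sku-2": 1}, 1)            # tampered request
except OrderError as e:
    print(e)
```

The same pattern (server recomputes, client only proposes) applies to discounts, rate limits, and any other business rule a scripted client might probe.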
4. How would you approach identifying potential threats in an application you’re building? Can you give an example of a framework you use?
AM: I start by mapping out data flows and identifying the most critical assets in the system. Then I evaluate potential threats to those assets. One framework I often use is STRIDE — Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
For example, if the app stores user profiles locally, I would ask: could a malicious user tamper with local data? Could someone impersonate another user? Even a simple diagram showing how data flows between the client and the server can reveal trust boundaries and highlight where validation or protection is needed.
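A STRIDE pass can start as nothing more than a checklist applied to each asset. A minimal sketch, with example questions for a locally stored profile (the questions are illustrative, not exhaustive):

```python
# One question per STRIDE category, phrased for a locally stored
# user profile. A real model would hold several per category.
STRIDE = {
    "Spoofing":               "Can a request impersonate another user?",
    "Tampering":              "Can local profile data be modified on a rooted device?",
    "Repudiation":            "Can actions be performed without an audit trail?",
    "Information Disclosure": "Is the cached profile readable by other apps?",
    "Denial of Service":      "Can malformed local data crash the app?",
    "Elevation of Privilege": "Can a tampered profile grant admin rights?",
}

def enumerate_threats(asset: str) -> list:
    """Expand the checklist into asset-specific review questions."""
    return [f"[{asset}] {category}: {question}"
            for category, question in STRIDE.items()]

for line in enumerate_threats("local user profile"):
    print(line)
```

Running the same loop over every asset and trust boundary in the data flow diagram turns STRIDE from an acronym into a concrete review agenda.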
5. Some teams assume their app isn’t a target for attackers. What’s your take on this assumption, and how might it affect security decisions?
AM: That assumption is risky. I frequently observe that security review is treated as a final checkpoint before release. But if the architecture was not designed with threats in mind, the audit becomes a painful list of critical findings: excessively long-lived tokens, insecure refresh mechanisms, sensitive data cached unencrypted, and logs exposing personally identifiable information. Fixing these issues late in the software lifecycle is expensive and complex because they affect foundational architectural decisions.
Another important aspect is the misjudgement of the attacker's motivation. Many teams assume that "our product is not interesting to anyone." However, automation has transformed the threat landscape. The growing use of AI-driven tools has made automated scanning faster and more sophisticated, lowering the barrier for attackers significantly.
Today, attacks are often not targeted. Bots scan applications at scale to identify common misconfigurations, exposed endpoints, and weak authentication mechanisms. Even a small application can become part of a broader fraud ecosystem.
6. What separates a team that passes security audits from one that truly builds secure software?
AM: From my perspective, the core reason for these failures is the absence of a true "security by design" culture. Threat modelling must occur during architectural design, not after code is written. It should be a cross-functional effort: architects, developers, DevOps engineers, and security specialists analyse data flows, define trust boundaries, and identify critical assets. Even a simple data flow diagram can reveal hidden risks.
When a team genuinely adopts threat modelling, the development philosophy changes. The principle of minimal trust toward the client becomes central. All critical validations are moved to the server. Tokens become short-lived. Strict server-side enforcement of business rules is implemented. Mechanisms for detecting anomalous behaviour are introduced. Security ceases to be an "add-on" and becomes an inherent property of the system.
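The short-lived-token point can be sketched with the standard library alone (a real system would use a vetted JWT implementation; the secret and TTL here are placeholders). The expiry check is what limits the blast radius of a stolen token:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"   # kept on the server, never in the app binary
TTL_SECONDS = 300                # short-lived: five minutes

def issue(user_id: str, now: float = None) -> str:
    """Issue an HMAC-signed token with an absolute expiry timestamp."""
    payload = json.dumps({"sub": user_id,
                          "exp": (now or time.time()) + TTL_SECONDS})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify(token: str, now: float = None):
    """Return the user id, or None for forged/tampered/expired tokens."""
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                          # signature mismatch: forged
    claims = json.loads(base64.urlsafe_b64decode(body))
    if (now or time.time()) > claims["exp"]:
        return None                          # expired: stolen tokens age out
    return claims["sub"]

t = issue("alice", now=1000.0)
print(verify(t, now=1100.0))        # alice: inside the 5-minute window
print(verify(t, now=2000.0))        # None: expired
print(verify(t + "x", now=1100.0))  # None: signature mismatch
```

Keeping the window short means even a token exfiltrated from a compromised device is useful to an attacker for minutes, not months.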
Understanding threat modelling is a core competency of the modern engineer. The mobile environment is inherently hostile: the device is user-controlled, the network may be compromised, and the binary is accessible for analysis. Ignoring this reality means building systems on false assumptions.
That is why I believe the future of mobile security lies not in increasingly complex tools, but in a shift in mindset. Without a clear understanding of who might attack the system and how, it is impossible to build a truly resilient application. Security begins not with encryption, but with the right questions. And the absence of those questions is precisely what most often leads to failure in security reviews.
FAQs
What is mobile application security?
Mobile application security is not a set of technologies or a checklist of measures. It is a systemic property of architecture and processes — the result of deliberate decisions made about who might attack a system, what they are after, and how the application should behave under adversarial conditions.
What is threat modelling?
Threat modelling is a structured approach to assessing risk before development begins. It addresses three key questions: who is the attacker, what capabilities do they possess, and which assets are valuable to them? Without these answers, defensive measures lack direction.