
(More) Common Security Mistakes when Developing Swift Apps – Part II

In my post last week I shared common security mistakes developers make when building Swift applications – covering insecure data storage, symmetric key algorithms, insecure communication and more. If you haven’t read it, please take a few minutes to review this information. It’s critical to understand these mistakes as well as the ones I’m sharing now. Plus, there are plenty of examples to review and references to read through.

Code Tampering

Even without access to a mobile application’s source code, attackers can modify its code and create counterfeit applications. This is one of the ten most critical mobile application weaknesses covered by the OWASP 2016 Mobile Top 10: M8 – Code Tampering, and all mobile code is vulnerable to it.

Once a mobile application is delivered to the app store and available for download, attackers have both the time and the techniques needed to reverse engineer it: binary patching, local resource modification, method hooking, method swizzling, and dynamic memory modification are some of the approaches to do so. Although “method swizzling” is not as easy in Swift as it is in Objective-C applications (because the latter resolves method invocations at runtime), tools like Cycript and Frida are improving their Swift support, so that may change. If you’re interested in Reverse Engineering iOS Apps, you can read more about it in the OWASP Mobile Security Testing Guide.
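To illustrate why runtime dispatch matters here, below is a minimal Swift sketch (the class and method names are made up for illustration) showing how method_exchangeImplementations can swap an @objc dynamic method’s implementation at runtime – the same primitive that hooking tools build on. Pure-Swift methods using static dispatch are not reachable this way, which is why swizzling is harder against Swift-only code.

import Foundation

// Illustrative only: a made-up class whose method uses Objective-C dispatch.
class PaymentService: NSObject {
    // "@objc dynamic" forces Objective-C message dispatch, which is what
    // makes the method reachable by swizzling and hooking tools.
    @objc dynamic func isTransactionApproved() -> Bool { return false }
    @objc dynamic func swizzled_isTransactionApproved() -> Bool { return true }
}

// Swap the two implementations at runtime – the primitive hooking relies on.
if let original = class_getInstanceMethod(PaymentService.self,
                                          #selector(PaymentService.isTransactionApproved)),
   let replacement = class_getInstanceMethod(PaymentService.self,
                                             #selector(PaymentService.swizzled_isTransactionApproved)) {
    method_exchangeImplementations(original, replacement)
}

print(PaymentService().isTransactionApproved()) // now prints "true"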

For now, let’s consider a real-world use case: mobile banking. To create a counterfeit mobile banking application, an attacker can download the official application from the official app store, reverse engineer it, and modify it at will. Adding the ability to exfiltrate personally identifiable information (PII) is an interesting modification (for the attacker, at least), enabling fraud against the bank.

To perpetrate the attack, users must install the counterfeit application. Typically, counterfeit apps are hosted on third-party app stores but, despite Apple’s screening, fake applications do occasionally make it into the official store as well. Once the counterfeit mobile application is available for download in an app store, a phishing campaign is certain to catch some unfortunate users.

The goal of this section is simply to create awareness that code tampering is a real threat. It’s up to you to assess whether it is worthwhile to detect and try to prevent unwanted code modification. If you’re developing mobile banking applications, as in the previous use case, you need to be aware of the possibilities and decide how your organization will address this threat.

To tackle code tampering, “the mobile app must be able to detect at runtime that code has been added or changed from what it knows about its integrity at compile time” and it “must be able to react appropriately at runtime to a code integrity violation.”
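As a starting point, here is a minimal, illustrative Swift sketch of runtime checks an iOS app might perform – looking for jailbreak indicators and for instrumentation libraries such as Frida or Cycript loaded into the process. The paths and names are examples only; production-grade tamper detection combines many more signals and should assume each individual check can be bypassed.

import Foundation
import MachO

// Illustrative runtime integrity signals for an iOS app.
enum IntegrityCheck {

    // Jailbroken devices make code tampering far easier.
    static func looksJailbroken() -> Bool {
        let suspiciousPaths = [
            "/Applications/Cydia.app",
            "/Library/MobileSubstrate/MobileSubstrate.dylib",
            "/bin/bash",
            "/usr/sbin/sshd"
        ]
        if suspiciousPaths.contains(where: { FileManager.default.fileExists(atPath: $0) }) {
            return true
        }
        // A sandboxed app must not be able to write outside its container.
        let probe = "/private/integrity_probe.txt"
        if (try? "probe".write(toFile: probe, atomically: true, encoding: .utf8)) != nil {
            try? FileManager.default.removeItem(atPath: probe)
            return true
        }
        return false
    }

    // Instrumentation frameworks such as Frida or Cycript show up as extra
    // dynamic libraries loaded into the process.
    static func suspiciousLibraryLoaded() -> Bool {
        for index in 0..<_dyld_image_count() {
            guard let cName = _dyld_get_image_name(index) else { continue }
            let name = String(cString: cName).lowercased()
            if name.contains("frida") || name.contains("cycript") {
                return true
            }
        }
        return false
    }
}

// React appropriately instead of silently continuing, e.g. degrade or
// disable sensitive features and report the event to the back-end.
if IntegrityCheck.looksJailbroken() || IntegrityCheck.suspiciousLibraryLoaded() {
    // handle the integrity violation according to your threat model
}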

Underprotected APIs

Although this is not a Swift- or mobile-specific problem, it is a common weakness in mobile applications that interface with a back-end server. Back in 2013, Insecure Direct Object References (IDOR) was one of the ten most common weaknesses in the OWASP Top 10 (A4-2013). In the 2017 version it was merged with the previous A7:2013 – Missing Function Level Access Control, creating the new category A5:2017 – Broken Access Control.

In essence, mobile applications are human-computer interfaces. When creating a mobile application backed by a back-end server, the developer typically does not expect anyone to interact directly with that server.

Let’s consider a simple example: a ToDo mobile application that allows users to create ToDo lists.

To make lists available on multiple devices, users are required to sign up for a user account.

In this scenario, after the user successfully signs in, the mobile application issues an HTTP request to retrieve their lists.

Figure 1: Mobile Application Main Interface – My Lists
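For context, here is a minimal sketch of how such an app might issue that request with URLSession; the endpoint and model types are illustrative, chosen to match the captures below.

import Foundation

// Illustrative model and request, matching the captures shown below.
struct TodoList: Decodable {
    let id: Int
    let name: String
}

func fetchLists(for userID: Int,
                completion: @escaping (Result<[TodoList], Error>) -> Void) {
    // Passing the user identifier as a query parameter is exactly what
    // makes the IDOR attack described below possible.
    let url = URL(string: "https://todoapp.com/api/v1/lists?user=\(userID)")!

    URLSession.shared.dataTask(with: url) { data, _, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        do {
            let lists = try JSONDecoder().decode([TodoList].self, from: data ?? Data())
            completion(.success(lists))
        } catch {
            completion(.failure(error))
        }
    }.resume()
}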

This is where Insecure Direct Object References (IDOR) come into play: because HTTP requests are issued behind the scenes, you may assume that no one will know how the lists are retrieved from the back-end or what resources the back-end exposes.

In the previous section we covered Code Tampering, which attackers can use to learn what HTTP requests the application issues and what they look like. However, code tampering is overkill for this purpose: an intercepting proxy such as Burp Suite, or a network analyzer like Wireshark, is usually enough to inspect the HTTP traffic.

With one of these tools in place, after a successful login you’ll be able to see the request in your logs.

GET /api/v1/lists?user=36211 HTTP/1.1
Host: todoapp.com

HTTP/1.1 200 OK
Content-Length: 97
Content-Type: application/json
Connection: close

[
   {
      "id": 1,
      "name": "Personal"
   },
   {
      "id": 2,
      "name": "Professional"
   }
]

Focusing on the HTTP request issued to retrieve the user’s lists, we find the user query string parameter. This parameter tells the back-end to return the lists whose owner is the user with identifier 36211.

Let’s modify the request a bit, changing the parameter’s value from 36211 to 36212 (36211 + 1), resend it, and see what the response looks like.

GET /api/v1/lists?user=36212 HTTP/1.1
Host: todoapp.com

HTTP/1.1 200 OK
Content-Length: 144
Content-Type: application/json
Connection: close

[
   {
      "id": 22252,
      "name": "Music"
   },
   {
      "id": 30435,
      "name": "Wishes"
   },
   {
      "id": 41787,
      "name": "Books"
   }
]

Although it may sound like a very basic error, it is quite common not only in mobile applications but also in web applications. To avoid it, keep in mind that “Access control is only effective if enforced in trusted server-side code or server-less API, where the attacker cannot modify the access control check or metadata” (OWASP Top 10 2017). Read our article on Data Storage and Communication Security to avoid Insecure Communication and other common security weaknesses in mobile applications.
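In practice, this means the back-end must derive the list owner from the authenticated session and never trust a client-supplied identifier. Below is a framework-agnostic Swift sketch of that idea; the types and names are illustrative, not taken from any particular framework.

// Illustrative, framework-agnostic handler: the owner comes from the
// authenticated session, never from a client-supplied "user" parameter.
struct Session {
    let userID: Int          // established from a verified token or cookie
}

struct ListRecord {
    let id: Int
    let ownerID: Int
    let name: String
}

func listsHandler(session: Session, database: [ListRecord]) -> [ListRecord] {
    // Enforce access control on the server: return only records owned by
    // the authenticated user, regardless of what the request asked for.
    return database.filter { $0.ownerID == session.userID }
}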

Extraneous Functionality

Although ranked last in the OWASP 2016 Mobile Top 10, M10 – Extraneous Functionality is a common security weakness in mobile applications. Because it is easy to exploit and its impact can be severe, this weakness is something attackers usually look for.

Extraneous functionality is usually left behind when applications are packaged for production. Debug flags, debug code, and configurations for development environments (such as staging and QA) are common examples. This information gives attackers detailed insight into how the back-end works and may grant access to environments that are typically less secure, increasing the attack surface.
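One illustrative way to keep such configuration out of release builds in Swift is to guard it with compilation conditions, so staging endpoints and debug-only behavior never ship (the staging URL below is made up for the example):

import Foundation

// Illustrative: keep staging endpoints and debug-only behavior out of
// release builds by guarding them with compilation conditions.
enum APIConfiguration {
    static var baseURL: URL {
        #if DEBUG
        return URL(string: "https://staging.todoapp.com")!   // never ships in release builds
        #else
        return URL(string: "https://todoapp.com")!
        #endif
    }
}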

Another common pattern regarding extraneous functionality is the use of client-side switches that lock professional or paid features without any server-side check. Based on the information we shared in the Code Tampering section, you can imagine that some counterfeit applications simply flip those switches to offer professional or paid features for free.
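Here is a sketch of the anti-pattern and of a safer direction, with illustrative names and endpoints: the paid feature is only served after the back-end re-validates the user’s entitlement, so flipping a local flag gains nothing.

import Foundation

// The anti-pattern: a purely client-side switch that a tampered binary
// (or a modified defaults value) can simply flip. Names are illustrative.
struct FeatureFlags {
    static var isProUnlocked = UserDefaults.standard.bool(forKey: "pro_unlocked")
}

// Safer direction: the paid feature lives behind an authenticated endpoint
// and the back-end re-validates the entitlement on every request, so a
// flipped local flag does not unlock anything.
func exportReport(completion: @escaping (Bool) -> Void) {
    var request = URLRequest(url: URL(string: "https://todoapp.com/api/v1/export")!)
    request.httpMethod = "POST"
    URLSession.shared.dataTask(with: request) { _, response, _ in
        let allowed = (response as? HTTPURLResponse)?.statusCode == 200
        completion(allowed)
    }.resume()
}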

To prevent extraneous functionality, inspect your application’s configuration to identify any hidden switches or flags; code review is a great way to uncover these kinds of issues. Then be sure to remove all dead code before packaging the application.


References

External

Projects

  • Wireshark – network protocol analyzer.
  • Burp Suite Community Edition – graphical tool for testing web application security.
  • Cycript – allows developers to explore and modify running applications on either iOS or Mac OS X using a hybrid of Objective-C++ and JavaScript syntax through an interactive console that features syntax highlighting and tab completion.
  • Frida – dynamic instrumentation toolkit for developers, reverse-engineers, and security teams.
  • MobSF – Mobile Security Framework – automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis, and web API testing.
  • needle – open source, modular framework to streamline the process of conducting security assessments of iOS apps.
  • appmon – automated framework for monitoring and tampering system API calls of native macOS, iOS and Android apps. It is based on Frida.

Paulo Silva is a Security Researcher with a degree in Computer Science. For the last 10+ years he has been building software, and now he’s also having fun breaking it. He’s a free and open source enthusiast and a regular OWASP contributor. Apart from IT, you’ll find him on his mountain bike, mostly doing cross country (XC).
