
Automating Multi-Touch Takedowns with Large Language Models at Scale

As part of our Large Language Models at work blog series, we now delve into how we integrated the generative AI capabilities of LLMs to automate our critical takedown processes. The rise of apps has streamlined procedures across all industries and made everyday consumer life easier. But as we’ve seen with other technological advances, as the popularity of apps has grown, so have the cybersecurity threats surrounding them, and with them the need for fake app takedown procedures.

Fake App Takedown Procedures

Consider this scenario. It’s 2004, and Joe has an important meeting to attend. In the streets of Manhattan, he is trying to convince a taxi driver to get him to his office, where, after picking up the coffee he ordered over a phone call, he will fire up his laptop to check the latest stock market trends.

Fast forward to 2023, and Joe can use a smartphone app for each of these tasks. According to statistics published by Statista, 218 billion app downloads were recorded in 2023, roughly a 7% increase over 2022.

Businesses, especially B2C vendors, are increasingly reliant on mobile apps to attract traffic and target their key demographics.

With this reliance on app technology, however, comes the threat of phishing attacks on the very brands that leverage the convenience of apps.

Mobile app usage trends by Statista

Most large-scale brands face an overwhelming phishing onslaught from malicious entities who modify and crack the original apps and host them on independent app stores. These cracked apps can steal sensitive information from the customers who download them and distribute malware and viruses. For businesses, these attacks not only siphon off potential customers but also often break the paywalls protecting premium content.

A fake mobile app for Instagram

One example of such a phishing attack we have found can be seen on the app hosting platform “apkmody”. The platform hosts a modified, or cracked, version of Instagram and, furthermore, promises free unlimited likes, which is misleading. We assess and classify the damage caused by these attacks into two categories:

  1. Direct damage: This includes explicit account takeovers (ATO) after a user signs up through the malicious platform, and theft of sensitive financial information such as credit card and bank account details.
  2. Indirect damage: These attacks compromise user devices and turn them into botnets to be used as mules for DDoS attacks and VPNs.

These malicious attacks target Instagram both to poach its customer base and to harvest crucial user information.

Using Technology for Fake Mobile App Takedown

At Bolster, we relentlessly protect some of the biggest brand names from these malicious attacks through our App Store Monitoring and Fake App Takedown module. Our team of SOC analysts scans the internet for app download platforms hosting these brands’ apps without authorization; we hunt down the scam apps and work with app store hosting and domain providers to take them down.

Once the Bolster engine sources the malicious app URLs through scanning and detection, our team of analysts reaches out to the hosting or domain providers of the app stores requesting a takedown. These providers require evidence such as screenshots and the offending URL to conduct an internal investigation; upon confirming the veracity of the claim, they remove the apps from their platform, thus protecting our client.
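To make the request step concrete, here is a minimal sketch of assembling such a takedown request using Python’s standard email and smtplib modules. The sender, provider address, app URL, evidence files, and SMTP host are all placeholders for illustration, not Bolster’s actual infrastructure.

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

def build_takedown_request(provider_email: str, app_url: str,
                           evidence_paths: list[str]) -> EmailMessage:
    """Assemble a takedown request with the evidence providers typically ask for."""
    msg = EmailMessage()
    msg["From"] = "soc@example.com"  # placeholder sender address
    msg["To"] = provider_email
    msg["Subject"] = f"Takedown request: unauthorized app at {app_url}"
    msg.set_content(
        f"We have identified an unauthorized app hosted at {app_url}. "
        "Screenshots and proof of authorization are attached. "
        "Please remove this listing from your platform."
    )
    # Attach screenshot evidence for the provider's internal investigation.
    for path in evidence_paths:
        data = Path(path).read_bytes()
        msg.add_attachment(data, maintype="image", subtype="png",
                           filename=Path(path).name)
    return msg

# Hypothetical usage with a placeholder SMTP relay.
msg = build_takedown_request("abuse@appstore.example",
                             "https://store.example/fake-app",
                             ["screenshot1.png"])
with smtplib.SMTP("smtp.example.com") as server:
    server.send_message(msg)
```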

Action suite of Bolster’s Fake App Takedown module
Our analysts maintain close correspondence with the hosting providers, which often requires some back and forth over email between our team and theirs. We streamline and optimize our workflow for these app stores to continuously improve and neutralize as many attacks as possible.
Our software automation pipelines already play a big role in the scanning, detection, and initiation processes. Now, we have raised performance further by introducing machine learning and generative AI to increase the efficacy of our workflows.

Natural Language as a Stakeholder in Fake Mobile App Takedowns

The key to swift takedowns is decisive and comprehensive outreach from analysts to the app store providers, following the workflow the Bolster team has established. Email correspondence plays a large role in converting these requests into actual takedowns.

At initiation, we reach out to app store providers with evidence and proof of authorization from the client to act on their behalf. This kicks off a conversation between the SOC team and the app store provider’s team. These conversations can involve a straightforward takedown confirmation, a request for further evidence, a redirect to an alternative hosting provider, or, in some cases, a rejection of the takedown request.

At Bolster, our SOC analysts have countermeasures for all of these cases and respond to the providers accordingly. These correspondences are often manual on both ends and consume a significant amount of time and effort. However, the majority of countermeasures don’t require analyst intervention and can be executed by a software pipeline, as sketched after the figure below.

A breakdown of the type of responses teams might experience when requesting an app store takedown
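In code, the countermeasure routing described above can be modeled as a simple mapping from response category to handler, as in the sketch below. The categories mirror the ones named earlier; the handler names are illustrative placeholders, not Bolster’s internal API.

```python
from enum import Enum, auto

class ProviderResponse(Enum):
    """Categories of replies a hosting provider may send."""
    TAKEDOWN_CONFIRMED = auto()
    NEEDS_MORE_EVIDENCE = auto()
    REDIRECT_TO_OTHER_HOST = auto()
    REJECTED = auto()

# Responses the pipeline can handle without an analyst in the loop.
AUTOMATED_ACTIONS = {
    ProviderResponse.TAKEDOWN_CONFIRMED: "close_ticket",
    ProviderResponse.NEEDS_MORE_EVIDENCE: "send_additional_evidence",
}

def route(response: ProviderResponse) -> str:
    """Return the automated action, or escalate to the SOC team."""
    return AUTOMATED_ACTIONS.get(response, "alert_soc_analyst")

print(route(ProviderResponse.REJECTED))  # -> alert_soc_analyst
```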

Unmistakably, these correspondences are exchanges of natural language data over email.
The emails are written by humans, for humans. Remarkably, with current state-of-the-art machine learning and generative AI tools, we now have the capability to receive, analyze, and respond to natural language data automatically. Developing and integrating tools powered by machine learning and generative AI is the key to our revamp of the fake app takedown workflow.

Large Language Models and Generative AI to the Rescue

Ever since the launch of ChatGPT to the masses, the landscape of AI has changed. Tools enabling conversational and generative AI have penetrated deep into everyday work, and companies are actively exploring ways to integrate them into their processes. ChatGPT uses the powerful GPT-3.5 and GPT-4 models, which belong to the family of Large Language Models (LLMs). These models are trained on a gigantic corpus of publicly available natural language data, comprising Wikipedia articles, Reddit threads, books, and articles. They are probabilistic in nature and tailor-made to understand the rich nuances of human language, draw connections between words, and interpret context. The basic training principle is to teach these networks to predict the next word in a sentence.

Generic LLM workflow
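To make that training objective concrete, the sketch below asks a small, publicly available pre-trained model (GPT-2, via the Hugging Face transformers library) to rank its most likely next tokens for a sentence prefix. Production-scale LLMs apply the same principle with vastly more parameters and data.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Please remove the fake app from your"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={prob:.3f}")
```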

At Bolster, we are leveraging the power of LLMs to extract, parse, and read natural human language as we look for ways to streamline our takedowns of malicious attacks on our clients. One of the ways we use these generative AI LLMs is to engage in the email correspondence pertaining to takedown requests.
The LLM extracts the human language text from the email and interprets it to prepare a response. The targeted correspondences are the human-to-human interactions between app store hosting providers and the SOC analysts at Bolster.
Human involvement makes conducting these takedowns over email in bulk challenging, and scaling potential is drastically limited if SOC analysts engage directly in the correspondence.

Our App Store Monitoring and Fake App Takedown module tackles this issue by deploying LLMs directly into production through an end-to-end automated software pipeline that conducts takedowns in bulk. The pipeline is capable of sending out automated takedown request emails at scale and also monitors the email inbox for responses from the hosting providers of these app stores.
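A minimal version of the inbox monitoring step might look like the following, using Python’s standard imaplib; the mailbox host and credentials are placeholders.

```python
import email
import imaplib
from email.message import Message

def fetch_unread_replies(host: str, user: str, password: str) -> list[Message]:
    """Poll the takedown mailbox and return unread provider replies."""
    replies = []
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")  # ids of unread messages
        for msg_id in data[0].split():
            _, msg_data = imap.fetch(msg_id, "(RFC822)")
            replies.append(email.message_from_bytes(msg_data[0][1]))
    return replies
```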

Once a response is received, our pipeline extracts the text body of the email and triggers the deployed LLM. The LLM analyzes the text and decides upon an appropriate response. If the required action does not need analyst involvement, such as closing the ticket or providing more evidence, the LLM acts unilaterally, preparing and sending out the response.
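As a rough illustration of that decision step, the sketch below assumes a generic chat-completion style API standing in for our deployed model; the model name, category labels, and fallback action are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment; stands in for the deployed LLM

CATEGORIES = ["takedown_confirmed", "needs_more_evidence",
              "redirect_to_other_host", "rejected"]

def decide_action(email_body: str) -> str:
    """Ask the LLM to classify a provider reply into one known category."""
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Classify the takedown reply below into exactly one of: "
                        + ", ".join(CATEGORIES) + ". Answer with the label only."},
            {"role": "user", "content": email_body},
        ],
    )
    label = resp.choices[0].message.content.strip()
    # Anything outside the known categories is escalated to an analyst.
    return label if label in CATEGORIES else "alert_soc_analyst"
```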

In special cases where a response action requires expert involvement to resolve the case, the LLM raises an alert for the SOC analyst team to take control of the situation.

Strategies For Integrating LLMs into Your Fake Mobile App Takedown Workflow

Zero-shot inference and fine-tuned inference are two distinct methods for leveraging Large Language Models (LLMs) like GPT-3 or its successors to perform specific natural language understanding tasks.

A walkthrough of the zero-shot learning workflow

Zero-shot inference involves using a pre-trained LLM without any task-specific fine-tuning. In this approach, you present a prompt or input to the model, and it attempts to generate a relevant response based on its general knowledge and the patterns it has learned during its initial training.

Zero-shot inference is versatile and can be used for a wide range of tasks without the need for task-specific training data. However, its performance can be limited when tackling complex or domain-specific tasks, as the model’s knowledge may be too general to provide accurate or contextually appropriate responses.
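As an illustration, the Hugging Face transformers library ships a zero-shot classification pipeline built on a pre-trained natural language inference model. The snippet below uses it to label a provider reply without any task-specific training; the email text and candidate labels are hypothetical.

```python
from transformers import pipeline

# Zero-shot classification: no task-specific fine-tuning, only pre-trained knowledge.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

email_body = ("Thank you for your report. The listing has been removed "
              "from our platform.")
labels = ["takedown confirmed", "more evidence requested", "request rejected"]

result = classifier(email_body, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])  # top label and its confidence
```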

A walkthrough example of the fine-tuned learning workflow

On the other hand, fine-tuned inference involves taking a pre-trained LLM and further training it on a specific task or dataset. This fine-tuning process tailors the model’s capabilities to excel in a particular domain or task, making it more contextually aware and capable of providing highly accurate and task-specific responses.
Fine-tuned models are especially valuable for tasks that require domain expertise or a high level of precision, such as medical diagnosis or legal document review. However, fine-tuning typically requires access to task-specific data and careful optimization, making it more resource-intensive and less versatile than zero-shot inference.
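For contrast, here is a condensed sketch of fine-tuning a small pre-trained model on a labeled set of provider replies using the Hugging Face Trainer API. The three-example dataset and label scheme are purely illustrative; a real fine-tune would need a much larger labeled corpus.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical labeled replies: 0 = confirmed, 1 = needs evidence, 2 = rejected.
data = Dataset.from_dict({
    "text": ["The app has been removed.",
             "Please provide proof you represent the brand.",
             "We will not action this request."],
    "label": [0, 1, 2],
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_set = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="takedown-classifier",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=train_set,
)
trainer.train()  # adapts the pre-trained weights to the email-triage task
```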

Conclusion

For the end-to-end fake app takedown pipeline, we have employed the fine-tuning strategy to train and deploy our LLM to handle the email correspondence.

Fine-tuning allows us to take a pre-trained LLM’s generalization capabilities and customize them for the specific inference problem of handling email-based conversations in the fake app takedown context. The LLM module helps our fake app takedown workflow achieve full automation, from initiation to response handling.

This optimized workflow can initiate takedowns, receive and analyze responses using our custom fine-tuned Large Language Model, and produce natural language emails while undertaking the required action. It is also capable of alerting the SOC team to take control of matters if and when needed.

To learn more about how Bolster’s automated, machine learning-powered technology can help protect against fraudulent apps, request a demo with our team today.

References:

  1. Bolster AI’s App Store Monitoring and Fake App Takedown module
  2. App usage statistics by Statista
  3. Generative AI
  4. Large Language Models
  5. Zero-shot learning
  6. Automated takedowns

*** This is a Security Bloggers Network syndicated blog from Blog – Bolster AI authored by Adithya Singh. Read the original post at: https://bolster.ai/blog/fake-mobile-app-takedown