
Lessons Learned from Twitter Security Disclosures

In January, Twitter surprised the security world by firing its head of security, Peiter Zatko, more commonly known in security circles as Mudge.

Mudge was brought on at Twitter following a serious attack that called the company's security into question. The community applauded the decision to bring in a recognizable and respected figure, by all accounts one of the top security people in the game, to get them back on the right track.

So why did they let him go?

The severing of ties came without much of an explanation. Until recently.

The Whistleblower Complaint

In August, pieces of the puzzle started coming together when Mudge came out with an explosive whistleblower complaint against the social media company. 

The list of alleged violations and mismanagement is long, including the company knowing that it had foreign spies in its employ and that it was not complying with its commitments to the FTC from previous violations. 

But what stood out to many in the security community was Mudge’s claim that everyone in their engineering team, nearly half of the company, had access to their production environment. 

Not only did an estimated 4,000 people have access to production environments, but according to Mudge, there was no real way of tracking who was accessing resources there because no logging mechanism was in place.

If Mudge’s allegations are accurate, Twitter left the door wide open and is not making the effort to know who is going in and out of their holy of holies. This is a situation that most security folks would label as suboptimal to say the least.

Looking at the report, these lax (or outright missing) controls had consequences. Mudge claims that, “In 2020 alone, Twitter had more than 40 security incidents, 70% of which were access control-related.” The situation was so dire that he feared Twitter could suffer an “Equifax-level hack.”

For their part, Twitter has denied the allegations in the report. 

Stepping back, there are lessons to be learned from this story. First is the evergreen point that there, but for the grace of God, go we. Poor access visibility and controls are ubiquitous across industries, so we should all show a little humility and think twice before throwing stones at where Twitter may or may not be deficient.

Let’s take a look at two central elements that stood out and are valuable examples of potential improvement in securing our development pipeline. 

Too Much Access, Too Much Exposure, No Controls

First and foremost on our list is Mudge’s allegation that waaay too many people within the organization had access to their production environment. Given that apparently they ran everything in production with no dev or staging, every environment was a production environment.

Access to the cloud infrastructure (IaaS) like AWS, Azure, and GCP is generally deemed to be privileged access, even if someone is not defined as an admin. The ability to access databases and potentially alter code is a significant risk that must be guarded against.

Twitter already knew it had a serious case of over-privilege. According to the whistleblower complaint:

In 2011, the FTC had filed a complaint against Twitter for its failure to properly protect nonpublic consumer information, which included users’ email addresses, Internet Protocol (“IP”) addresses, telephone numbers, and nonpublic information exchanged on the platform. The complaint alleged that, from 2006 to 2009, far too many Twitter employees exercised administrative (“God mode”) control within Twitter’s internal systems and user data, thereby allowing any attacker with access to an employee account to easily compromise Twitter systems. And Twitter’s systems were, and are, full of highly sensitive personal user data that enable a hostile government to find precise geo-location(s) for a specific user or group, and target them for arrest or violence.

Given that nearly half the company allegedly retained access to production, it would seem like they failed to address their over-privilege issues. This put them at significant risk because it greatly expanded their threat surface. A malicious employee or an attacker who compromised one of these over-privileged accounts could easily steal, erase, or alter data, potentially harming both users and the company which would be liable for the damages.

While a lot of organizations will allow pretty wide access to development environments (far more than they really should), they are supposed to at least put tools in place to log usage.

Mudge claims to have discovered the lack of controls on January 6, 2021, during the riots at the Capitol. He was concerned that a rogue developer could have slipped something into production at a sensitive moment.

Going back again to the report:

There was no logging of who went into the environment or what they did. Nobody knew where data lived or whether it was critical, and all engineers had some form of critical access to the production environment.

This was, of course, a red flag for Mudge, showing just how dire the lack of visibility into Twitter's cloud infrastructure really was.

Even under ideal conditions, this kind of tracking is hard because of how IAM is structured in cloud infrastructure like AWS, Azure, or GCP.

Tracking a user is tricky because they can assume roles or belong to groups that grant additional privileges, privileges that are invisible if you look only at what was provisioned through your IdP, such as Okta.

Once the user goes into AWS, you lose critical visibility over what they are accessing because they are acting as the assumed role, not as their own user identity.
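To illustrate the problem, one generic way to recover some of that visibility (a sketch of a common approach, not anything described in the complaint) is to mine CloudTrail's AssumeRole events, which record both the calling principal and the temporary session it received, and build a lookup from each session ARN back to the original user:

```python
def map_sessions_to_principals(events):
    """Build a lookup from assumed-role session ARN to the original caller.

    `events` is an iterable of CloudTrail records (already parsed from
    JSON). An AssumeRole record carries the caller in userIdentity.arn
    and the temporary session in responseElements.assumedRoleUser.arn.
    """
    sessions = {}
    for ev in events:
        if ev.get("eventName") != "AssumeRole":
            continue
        caller = ev.get("userIdentity", {}).get("arn")
        session = (ev.get("responseElements", {})
                     .get("assumedRoleUser", {})
                     .get("arn"))
        if caller and session:
            sessions[session] = caller
    return sessions


# Hypothetical CloudTrail-style record, trimmed to the relevant fields.
sample_events = [{
    "eventName": "AssumeRole",
    "userIdentity": {"arn": "arn:aws:iam::111122223333:user/alice"},
    "responseElements": {"assumedRoleUser": {
        "arn": "arn:aws:sts::111122223333:assumed-role/DBAdmin/alice-session"
    }},
}]

lookup = map_sessions_to_principals(sample_events)
# Any later action performed by the DBAdmin session can now be
# attributed back to arn:aws:iam::111122223333:user/alice.
```

Of course, this only works if CloudTrail (or its Azure/GCP equivalent) is actually enabled, which is precisely the logging Mudge alleges was missing.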

With all of these challenges combined, it would seem that Twitter’s security team was flying blind into a storm.

But their bad times did not stop there.

Partial Offboarding Leaves GitHub Exposed to Former Employees

Ideally, when an employee leaves the organization, they should have all of their access to assets revoked. Much of this can be done via the IdP, which is great, but it can miss some things.

For example, when former Twitter employee Al Sutton left, whoever was responsible for his offboarding apparently forgot to revoke his GitHub access to the company's private repos.

Judging by Sutton's posts, he held onto access to the repos for 18 months and was only removed after he posted very publicly about retaining that access.

While partial offboarding can happen on any application or service, GitHub poses specific challenges because of its “Bring Your Own Identity” (BYOI) model. This is where the individual account belongs to the developer, and the organization grants that user access to its environment. The upside here is that developers can hold onto their accounts as they move from organization to organization, while still taking part in open source projects etc.

The downside is for the organization that has to overcome a significant visibility hurdle to securing their repos and development pipeline.

Simply revoking the developer's account via the IdP may not work if their GitHub account is not directly associated with their user in the IdP. Maybe they have a second account that was granted access, or an account that was never registered with the IdP at all.
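One pragmatic mitigation is to periodically diff the GitHub organization's member list (e.g., pulled from GitHub's `GET /orgs/{org}/members` API) against the GitHub logins your IdP actually knows about, and flag the orphans. A minimal sketch, where the sample logins are hypothetical:

```python
def find_orphaned_members(org_members, idp_known_logins):
    """Return GitHub logins that hold org access but have no matching
    identity in the IdP -- candidates for review or removal."""
    return sorted(set(org_members) - set(idp_known_logins))


# Hypothetical data: org membership as the GitHub members API would
# return it, and the GitHub logins linked to active users in the IdP.
github_org_members = ["alice-dev", "bob-codes", "al-sutton"]
idp_known_logins = ["alice-dev", "bob-codes"]

orphans = find_orphaned_members(github_org_members, idp_known_logins)
# "al-sutton" is flagged: access exists in GitHub, but no IdP identity backs it.
```

The hard part in practice is the mapping itself: because of BYOI, there is no guarantee that a GitHub login corresponds one-to-one to an IdP user, which is why a naive diff like this can only surface candidates for human review.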

Booting them out of AWS also might not be enough because they can still access the repo before it is pushed to production.

Moreover, if you are only monitoring your cloud infrastructure (AWS, Azure, GCP) but not your SaaS apps like GitHub, then you are going to potentially miss a partially offboarded user.

Achieving the necessary level of visibility is key.

Authomize Provides Cross Cloud Visibility and Control

While the Twitter case was hopefully an outlier, the discussion surrounding it showed that the issue of too many people having admin privileges, or even just access to sensitive information or environments, is far more common than we would like to admit.

This stems from the fact that organizations lack the necessary visibility across cloud environments, from cloud infrastructure like AWS, Azure, or GCP, to their services like GitHub, Salesforce, G Suite, or one of a million other apps that store or control sensitive data. 

So how would Authomize handle this?

For starters, Authomize gives security teams visibility over the entirety of the access path, regardless of which cloud environments are involved. Because we connect to not just your IdP but all your cloud apps and services, we can collect and analyze data across every cloud environment you are using.

In practice, this means we know who has access to what, and how that access is being used. 

 

Using these insights, Authomize can help your team to understand: 

  • The effective access paths for how identities can reach assets, showing you paths that you may not have known were possible
  • Who has access privileges and isn’t using them. If an identity has not used a privilege in say 30 or 60 days, then we can reasonably revoke it without negatively impacting their work.
  • If there are any changes to access privileges. This could be new admin privileges granted to an identity that could allow them to do some damage.
  • Privilege escalation paths that could lead to exploitation
  • What is effectively accessible, beyond what was provisioned: not only who was granted access to declared assets, but also what each identity can actually reach

This last point is crucial for dealing with employees’ code repository access. It helps that we can also merge accounts. This is useful for understanding that Sutton’s code repository account should be linked to his corporate Twitter one, reducing the deprovisioning challenge when it comes to offboarding.

With these insights, driven by our Machine Learning SmartGroups, teams can set enforceable security policies and use webhooks to work with their orchestration tools (SIEMs, SOARs, ITSMs) to remediate quickly and effectively.

For more information on how Authomize enables organizations to achieve Least Privilege, visibility, and control over their identity and access layer for everything they build, own, and use in the cloud, contact us for a Free Assessment and Demo.

The post Lessons Learned from Twitter Security Disclosures appeared first on Authomize.

*** This is a Security Bloggers Network syndicated blog from Authomize authored by Gabriel Avner. Read the original post at: https://www.authomize.com/blog/lessons-learned-twitter-security-disclosures/


Gabriel Avner

Gabriel is a former journalist who loves learning and writing about the cat and mouse game of security. These days he writes for WhiteSource about the issues impacting open source security and license management, and about training Brazilian Jiu-Jitsu.
