Authentication attacks are big business, and no one is immune from them. In fact, two men were recently arrested and charged in the Twitter employee account compromise that happened in July 2020. Using employee account credentials, the attackers took over several highly visible celebrity Twitter accounts, which were then used for bitcoin scams. Think: “If you give me one bitcoin, I’ll give you two!” Sounds legit, right?
A report published by the New York State Department of Financial Services states that the push notification authentication factor used by Twitter was easily circumvented by the attackers. The report recommends using physical security keys, an authentication factor that would have stopped the attacks from succeeding. This is something that the FIDO Alliance has been advocating since their founding in 2012. The Alliance is responsible for developing a universal authentication framework to allow for simpler, stronger authentication to be used and integrated into other solutions. Akamai is proud to have joined the Alliance this year.
In order to understand the Twitter attacks, it’s important to understand the evolution of the current technology solution, why it has failed, and how to use that analysis to inform reasonable risks when it comes to authenticating users going forward. This analysis should also clearly demonstrate areas in which “the technology is only good if used by good people, as intended” — in other words, many solutions can solve the problem when deployed and used properly.
A good place to begin is with the question of why multi-factor authentication (MFA) is even needed. Many researchers have been focusing on this problem for a long time. In 2004, Microsoft co-founder and then-chairman Bill Gates proclaimed at Microsoft’s annual IT Forum in Copenhagen that the password was officially dead. Thus began the hype for all business applications to get rid of passwords in favor of some other source of secure MFA. At the time, biometrics and smart cards were being pushed heavily by Microsoft.
Fast-forward to 2020, and a solid roundup of push-based 2FA technologies has become the de facto standard. I say “de facto” because, over the years, attackers have systematically figured out ways to bypass the earlier factors: one-time passwords (OTPs), time-based rolling crypto-generated passcodes, and even SMS-based OTP challenges that were supposed to be more secure because we “own” the secondary factor device and hold it in our hands. More recently, we saw the rise of SIM swapping attacks. Attackers found the weakest link: call up the phone company, trick a representative into activating a new SIM card on the victim’s account, and recover the SMS code without the owner’s involvement. If anything, this proves that a solution that works as advertised, but isn’t backed by a solid, secure framework, can still prove disastrous.
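To make the “time-based rolling passcode” factor concrete, here is a minimal sketch of TOTP (RFC 6238, which builds on HOTP from RFC 4226) using only the Python standard library. The function name and parameters are illustrative, not from any particular authenticator product:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, at=None):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1).

    secret_b32: the shared secret, base32-encoded (as in QR-code enrollment).
    at: Unix timestamp to generate the code for (defaults to "now").
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 -> 94287082
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, at=59))  # -> 94287082
```

Note what this factor does *not* bind: the code proves possession of the shared secret, but nothing ties it to the page the user types it into — which is exactly why it can be phished and proxied.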
Yet, despite widespread use of this de facto standard, enterprises are still being compromised through it. Two-factor authentication (2FA) provides a secondary means of validating an authentication request; that alone, however, does not mean every organization is exposed to the same vulnerabilities that allowed the two attackers to gain access to Twitter. Organizations with a strong public key infrastructure (PKI) that manages client and server certificate relationships can establish a cryptographic bond between certificates. These certificates exist on a corporate-owned mobile device or laptop, as well as on the system being accessed, such as a web application or server. That means there’s a signed client certificate on the laptop performing the login, as well as on the phone that was previously registered as a 2FA device; the resource being accessed holds a certificate of its own. The private/public key pairing provided by these certificates is what lets the authentication server validate which user or device is actually making the authentication request. That validated identity is paired with the 2FA key, which is then passed on to the MFA service in order to generate a 2FA challenge.
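The certificate bond described above is what mutual TLS delivers in practice: the server demands a client certificate signed by the corporate CA, and the client presents its device certificate. A minimal sketch with Python’s standard `ssl` module follows; the file paths are hypothetical placeholders for whatever the organization’s certificate-management tooling provisions:

```python
import ssl

# Server side: require a client certificate chained to the corporate CA.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients with no valid cert
# server_ctx.load_verify_locations(cafile="corp-root-ca.pem")  # hypothetical path
# server_ctx.load_cert_chain("server.pem", "server.key")       # hypothetical paths

# Client side: present the device certificate when connecting.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("device-client.pem", "device-client.key")  # hypothetical
```

With `verify_mode = ssl.CERT_REQUIRED`, the TLS handshake itself fails for any client that cannot prove possession of a corporate-issued private key — an attacker’s proxy never even reaches the login form.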
But I’ve noticed something interesting. When attackers set out to bypass a 2FA challenge, their real target is the authentication token granted to the browser once authentication succeeds and the challenge is accepted. Previous 2FA bypass techniques have exploited flaws in a password reset function or in OTP validation, but for the most part, getting access to a victim’s valid session cookies is what matters.
If an attacker can trick a victim into visiting a malicious copy of the social media page or bank login page the victim is trying to reach, tools such as Evilginx let the attacker front that malicious login page. This makes the attacker a “man in the middle,” proxying all requests and capturing the “keys to the kingdom”: the authentication token. Once they have it, along with the username and password combination, they simply insert the token into their own browser, press refresh, and they are effectively logged in as the victim.
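The reason this works is that, after login, most web applications treat the session token as a pure bearer credential. The sketch below is illustrative only — `SESSIONS` and `handle_request` are not any real framework’s API — but it shows why the server cannot tell the victim’s browser from the attacker’s:

```python
# Conceptual sketch: after login, the server only checks the bearer session token.
SESSIONS = {"a1b2c3d4": "victim@example.com"}  # token issued at login time (illustrative)

def handle_request(cookies):
    """Stand-in for a web app's post-login request handler."""
    user = SESSIONS.get(cookies.get("session"))
    return f"200 OK, logged in as {user}" if user else "401 Unauthorized"

# The victim's own browser:
print(handle_request({"session": "a1b2c3d4"}))  # 200 OK, logged in as victim@example.com
# An attacker who proxied the login and captured the same cookie:
print(handle_request({"session": "a1b2c3d4"}))  # identical access; no 2FA re-check occurs
```

Nothing in the request binds the token to a device or key, which is the gap that certificate bonding and hardware-backed authenticators are designed to close.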
Many of the organizations I talk to understand PKI and how a proper certificate-rich environment can protect against many attacks like this one. But an equal number understand the challenges of maintaining and upgrading that environment over time. Because of these complex requirements, most organizations are looking for a simpler way to achieve the same security controls. This is where physical security keys come into play. A physical security key is cryptographically tied to the browser of the registered user making a 2FA request. When an authentication request is made, the key validates that it is coming from the client itself and not from a proxy or “man in the middle” connection, and it signals that the request was made and authorized from a valid source.
So, why wasn’t Twitter already using physical security keys if they are more secure? These keys, using the U2F/FIDO2/WebAuthn standards developed by the FIDO Alliance and the World Wide Web Consortium, can prevent phishing attacks of the kind Twitter experienced, most credential stuffing, and other account takeover attacks from succeeding. I think the issue is the trade-off between increased security and the cost of implementation and ease of employee adoption. Purchasing security keys for every employee, and managing the distribution of those keys, is an expensive proposition. And employees would revolt against the idea of another piece of hardware to use and keep track of. A push notification on the employee’s smartphone adds no cost and is easily adopted — which is why it is so widely used today. It is also why so many companies are at risk of being breached, just like Twitter.
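The phishing resistance of WebAuthn comes from origin binding: the browser embeds the page’s origin in the `clientDataJSON` structure, the authenticator’s signature covers it, and the server rejects any mismatch. A simplified slice of that server-side check (the origin value and function name are illustrative; real verification also checks the challenge, signature, and authenticator data):

```python
import json

EXPECTED_ORIGIN = "https://twitter.com"  # illustrative relying-party origin

def check_client_data(client_data_json):
    """Simplified piece of WebAuthn assertion verification: because the browser,
    not the page, writes the origin into clientDataJSON, a proxy serving a
    look-alike domain yields an assertion that fails this check."""
    data = json.loads(client_data_json)
    return data.get("type") == "webauthn.get" and data.get("origin") == EXPECTED_ORIGIN

legit = json.dumps({"type": "webauthn.get", "origin": "https://twitter.com"}).encode()
phished = json.dumps({"type": "webauthn.get", "origin": "https://twiitter-login.example"}).encode()
print(check_client_data(legit))    # True
print(check_client_data(phished))  # False: the Evilginx-style proxy is caught here
```

This is the structural difference from OTP and push factors: the credential the user presents is cryptographically scoped to the genuine site, so there is nothing reusable for a man-in-the-middle to capture.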
But for many, the smartphone itself has challenges. Earlier, I mentioned SMS validation and the SIM swapping schemes that led the FBI and the U.S. military to move away from SMS-based 2FA. This has called into question whether the phone itself is the issue, or whether the phone is just a tool that almost every user has at hand. Instead of treating the phone as the problem, I believe we should focus on how public/private key pairing technology — the same technology used for the world’s major security transactions today — can enable a secure communication conduit via a framework agreed on by the world’s top experts. Maybe then, the simplicity of smartphone-based 2FA could work wonders to change the threat landscape for global businesses. We will continue to explore this idea in our upcoming blog posts.
*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Tony Lauro. Read the original post at: http://feedproxy.google.com/~r/TheAkamaiBlog/~3/Z3QOS3wXPpM/the-evolution-of-mfa-authentication-technology-and-what-needs-to-change-next.html