
Modeling Risk for Business in Authentication Flows

Risk-based authentication (RBA) is quickly growing in popularity among identity and access management solutions. The reason is simple: it improves customer experience by reducing friction in authentication journeys while maintaining appropriate security levels. While you can find many publications about the signals and technical controls RBA uses, the architecture of risk policies is largely customized and not widely discussed. Enterprises usually rely on their identity experts or third-party vendors during implementation. Yet one of security’s most important goals is to enable and enrich the business, and that calls for a high-level logic that can be applied across different models.

Is there a blueprint you can universally use to deliver quality and relevant policies yourself? Which stakeholders should take part in designing risk policies?

The stakeholders

Identity architects should refrain from making the ultimate decision about risk policies in isolation from the rest of the business. Depending on the organization’s structure, policy creation may require cross-business-unit collaboration. After all, your privileged access management (PAM) and workforce identities may be something your security team oversees. Customer identity and access management (CIAM), on the other hand, may reside a bit closer to the business, namely the product. It depends on the use case, but the key stakeholders for CIAM are usually drawn from security, risk, product management, and identity.

The classic outcomes of risk in authentication

Let’s take a look at business-specific factors and discuss risk outcomes in authentication. There is no industry standard in RBA, but most vendors have adopted the three-level model of low/medium/high. In simple terms, these designations mean:

Low: You do nothing. (Access is allowed.)

Medium: You do something. (For example, you compensate for the risk using multifactor authentication [MFA].)

High: This marks the end of the road. (Access is denied.)
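
As a minimal sketch of this model (in Python, with hypothetical names; your engine will label outcomes its own way), each level maps to an authentication action:

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"        # do nothing: allow access
    MEDIUM = "medium"  # do something: compensate, e.g., with MFA
    HIGH = "high"      # end of the road: deny access

# Hypothetical mapping from risk level to an authentication action.
ACTIONS = {
    RiskLevel.LOW: "allow",
    RiskLevel.MEDIUM: "challenge_mfa",
    RiskLevel.HIGH: "deny",
}
```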

What most organizations tend to forget is that there is always a fourth outcome, which I like to call “default” but which, in reality, should be called “failed.” If you look at RBA as a nonessential enhancement of authentication, a failure to assess the risk should not be a disqualifying factor. Compare that with credentials: if you cannot validate the username and password, there is no path to success, because that check is strictly essential for security. If your security baseline, however, requires specific risk levels, then risk assessment becomes essential too, and the inability to assess risk could mark the end of the road (access denied).

What should the business do if the system fails to evaluate the signals? After all, your RBA engine may be that of a third party, or it may be using a third party to deliver signals, like an IP address reputation service. What’s interesting is that there is no right or wrong answer. It depends on the business and on the risk levels the business is willing to accept (its risk appetite).

BUSINESS FACTOR 1: The (Forgotten) Default Outcome

Any security or risk wizard will tell you it is a balancing act. The more security, the more the friction (at least in principle). Business (product) may vote for “low,” while the security or risk teams vote for “high.” Usually, each side supports its argument with numbers; an example would be the loss associated with an estimated number of false positives.

You could venture a rule of thumb here and say, “when in doubt – average out,” meaning default to “medium.” While it’s not a showstopper, this level provides a certain amount of assurance.
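
A minimal sketch of that decision, reusing the RiskLevel enum above (evaluate_risk stands in for your RBA engine or a third-party signal service; the names are hypothetical):

```python
class SignalUnavailableError(Exception):
    """A signal source (e.g., an IP reputation service) could not be reached."""

def evaluate_risk(signals: dict) -> RiskLevel:
    # Placeholder for the real engine call; shown failing for illustration.
    raise SignalUnavailableError("IP reputation service timed out")

def risk_outcome(signals: dict, default: RiskLevel = RiskLevel.MEDIUM) -> RiskLevel:
    """Return the assessed level, or the business-approved 'failed' outcome."""
    try:
        return evaluate_risk(signals)
    except SignalUnavailableError:
        # "When in doubt, average out": the default here is a business
        # decision driven by risk appetite, not a technical constant.
        return default
```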

BUSINESS FACTOR 2: The Risk Outcome Formula

How do you calculate risk scores to deliver the right outcomes? The answer is of paramount importance. Though there is a finite number of permutations, every use case and every business will be different. CIAM will be different from workforce identity and access management in the same way that government use cases differ from the private sector. Some solutions allow you to assign weights to the signals, and this is a first step in modeling the risk. Many think of the risk score as a direct derivative of the signals in some proportionate mathematical model.
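
As an illustration of the weighted approach, here is a minimal sketch; the signal names, weights, and thresholds are hypothetical, and every business will tune them differently:

```python
# Hypothetical per-signal weights, tuned to the business and use case.
WEIGHTS = {
    "ip_reputation": 0.4,
    "device_change": 0.3,
    "geo_velocity": 0.3,
}

def weighted_score(signals: dict[str, float]) -> float:
    """Combine per-signal scores (0.0 = normal, 1.0 = anomalous) into one score."""
    total = sum(WEIGHTS[name] * value for name, value in signals.items())
    return total / sum(WEIGHTS[name] for name in signals)

def to_level(score: float) -> str:
    # Hypothetical thresholds; where you draw them is itself a business decision.
    if score < 0.3:
        return "low"
    if score < 0.7:
        return "medium"
    return "high"
```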

Averaging risk scores out across the board is often ineffective. It makes sense in some use cases (global email providers, for example), but not everywhere, all the time. You know your audience and use cases well enough to build a high-level picture of what should be classified as normal and what would be considered anomalous. To put it into context, here are examples of what you need to examine.

The landscape of devices being evaluated

For the workforce, you may be issuing a standard laptop to your employees. Therefore, a drastic change of context (moving from macOS to Linux) should be a stronger signal than in a consumer case, where I may use my laptop (a MacBook), my wife’s desktop (an old iMac), or my son’s iPad to purchase items online.

Geolocation

The geolocation signal is often called impossible travel. Consider an organization, like an air-traffic control center, with a secure location from which the staff operates. Due to safety and strict regulations, employees must work from the operations center. They will never legitimately trigger an impossible travel scenario, so that signal should be strong enough to drive a “high” outcome on its own. A business using VPN technology (without a split-tunnel approach), by contrast, is prone to false positives as far as impossible travel is concerned. If you hit anomalous geo-velocity on its own there, you should classify it as “medium.”
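
A sketch of how the same signal can carry different weight per business (the profile names are hypothetical):

```python
# Hypothetical treatment of the geo-velocity signal when it fires alone.
GEO_VELOCITY_ALONE = {
    "fixed_site": "high",   # e.g., air-traffic control: travel is impossible
    "vpn_heavy": "medium",  # full-tunnel VPNs produce false impossible travel
}

def geo_velocity_outcome(business_profile: str) -> str:
    return GEO_VELOCITY_ALONE[business_profile]
```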

Signal combinations

In the example above, impossible travel may be anomalous but plausible. Therefore, you can treat it with MFA (the risk was “medium”). What if it were accompanied by a strong user and entity behavior analytics (UEBA) signal, such as an operating system change? If the user moves from macOS to Windows along with impossible travel, does that still qualify as “medium” risk? Certain signal combinations may elevate to higher risk levels and, in this particular workforce scenario, the combination could well be classified as “high.” Which combinations matter depends on the signals used and the level of knowledge about the customer’s patterns. For the rest, it’s okay to average out.
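
A minimal sketch of such a combination rule for the workforce scenario above (the signal names are hypothetical):

```python
def classify(signals: dict[str, bool], base_level: str) -> str:
    """Escalate the averaged level for known-dangerous signal combinations."""
    # Impossible travel alone is plausible (VPNs), hence "medium" above,
    # but combined with an operating-system change it becomes "high".
    if signals.get("impossible_travel") and signals.get("os_change"):
        return "high"
    return base_level  # for everything else, averaging out is fine
```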

Once you have created this blueprint, you can translate it into technical controls: a risk engine plus a policy (or multiple policies) that average, sum, or multiply individual signals to deliver the scores.

BUSINESS FACTOR 3: The “When”

Your RBA policy may differ as a function of when in the customer lifecycle the request occurs. For example, a bank may not want to allow an account to be opened if the request is anonymous (for example, coming from TOR, The Onion Router, or a known proxy/VPN anonymization service). In the language of RBA, the risk is “high.”

If, however, you are trying to access an existing account, the risk may be classified as “medium.” Unfortunately, privacy aficionados will be classified as risky for the foreseeable future. That is not because you dislike their desire for total privacy, but because so many bad actors try to cover their tracks while defrauding businesses. This approach has been widely accepted by email providers; try to open a Gmail account from TOR, and you will see for yourself. Global providers may resort to “medium,” while smaller (countrywide) vendors may end the journey and require identity proofing to confirm the requestor.
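
A sketch of this “when”-dependent treatment of the anonymizer signal (the action names are hypothetical):

```python
def anonymizer_outcome(action: str, is_anonymized: bool) -> str:
    """Same signal, different outcome depending on when it occurs."""
    if not is_anonymized:
        return "low"
    if action == "register":  # opening an account from TOR/VPN: end of the road
        return "high"
    return "medium"           # accessing an existing account: step up instead
```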

BUSINESS FACTOR 4: The “What”

Risk is not just a front-end function; it has a back-end counterpart. What I mean by that is the criticality, or data classification, of the protected resources themselves. Confluence or Jira may require MFA once a month if the risk is “low,” but a competitive intelligence portal or a financial system may need much higher assurance levels, more frequently. You may want to design policies based on application classification, too. This involves weighing the signals, calculating, and treating the outcomes differently per class of resource.
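
As a minimal sketch, a per-application policy might look like this (the application names and validity periods are hypothetical):

```python
# Hypothetical application classes: more critical back-end resources
# demand higher assurance, more frequently.
APP_POLICY = {
    "wiki":       {"mfa_validity_days": 30},  # e.g., Confluence or Jira
    "financials": {"mfa_validity_days": 1},   # high-criticality system
}

def needs_mfa(app: str, days_since_last_mfa: int, risk: str) -> bool:
    if risk != "low":
        return True  # any elevated risk steps up, regardless of the app
    return days_since_last_mfa >= APP_POLICY[app]["mfa_validity_days"]
```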

BUSINESS FACTOR 5: The Custom Model

It’s relatively straightforward to classify the outcomes as “low,” “medium,” or “high.” But what if an organization chooses to require MFA every time? Salesforce is a good example here, and it was a subject of conversation at this year’s Identiverse conference. You now need to “do something” as a baseline (just as in a “medium” risk outcome). The “low” and “medium” outcomes are seemingly combined into one, effectively moving into a “low”/“high” framework.

Well, that’s not necessarily true.

Let’s return to the reasons for applying RBA in the first place: it’s a balancing act between customer experience (lower friction) and security (fraud prevention). If you need to perform MFA for everybody, you can introduce an MFA cookie. If the risk is “low,” you still challenge the user, but only once in a while (a day, a week, a month) for a given device/browser. If the risk is “medium,” you challenge them every time.
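
A minimal sketch of that MFA-cookie logic (the intervals are hypothetical and, again, a business decision):

```python
import time

# Hypothetical challenge interval for "low" risk, in seconds.
CHALLENGE_INTERVAL = {"low": 7 * 24 * 3600}  # e.g., once a week per device/browser

def must_challenge(risk: str, last_mfa_at: float | None) -> bool:
    """Decide whether to prompt for MFA, given an 'MFA cookie' timestamp."""
    if risk != "low" or last_mfa_at is None:
        return True  # "medium" (and unknown devices) are challenged every time
    return time.time() - last_mfa_at >= CHALLENGE_INTERVAL["low"]
```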

The business may also determine that no risk level should be a definite showstopper. Once again, it’s a balancing act: false positives and showstopping friction may be more costly than factoring the fraud loss into the business. There are formulas that allow you to calculate losses from non-returning customers due to false positives.

What you can do is reach for much stronger controls, such as MFA methods at higher assurance levels. A push notification is not as prone to man-in-the-middle (MitM) attacks as one-time passwords (OTPs) are. You could also use multiple channels to confirm the identity (voice recognition plus an authenticator app), which makes security much stronger. Ultimately, you may want to carry out identity proofing as a compensating control to reduce the risk level below the business’s appetite threshold.

Last, but not least, risk calculation and treatment are most effective when you implement the concept of continuous authentication.

Continuous Authentication

It’s important to remember that the risk level may change within the context of a device if, for example, you have multiple application classifications or policies. At the same time, if a bad actor steals your SSO cookie, nothing stops access from being granted to a federated application. RBA can remediate this problem, but only if authentication is a continuous process. That doesn’t mean challenging for credentials every time someone lands on the authorization server or login page (that would defeat the SSO concept). But you can honor the session and still assess the risk every single time. If it all checks out, the experience is excellent (minimal friction). If you suspect fraud, you now have an opportunity to stop the access or add controls to validate the request.
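
A sketch of that per-request re-assessment (the session object and signal collector are hypothetical stand-ins; risk_outcome and RiskLevel come from the sketches above):

```python
def authorize_request(session, request) -> str:
    """Honor the SSO session, but re-assess risk on every request."""
    if not session.valid:
        return "login_required"
    risk = risk_outcome(collect_signals(session, request))  # hypothetical collector
    if risk == RiskLevel.HIGH:
        return "deny"           # e.g., a stolen SSO cookie used from a new context
    if risk == RiskLevel.MEDIUM:
        return "challenge_mfa"  # add a control to validate the request
    return "allow"              # minimal friction when everything checks out
```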

OAuth working groups are working on a standard for “origin protection,” but it’s in its relatively early days. A similar concept is used today in dynamic authorization, where you evaluate every call against a fine-grained ABAC/PBAC engine. So why not do the same for the first of the three As, too? (AAA: authentication, authorization, accounting.)

Brace for False Positives

No matter how well you design the risk policies, you will hit false positives. If the design is good, there won’t be many, but you need to be prepared to treat them. Ideally, you will have a whitelist or an override mechanism that allows you to quickly remediate showstopping paths when they’re inappropriate. This is a business function that you need to develop yourself. While risk solutions may provide technical controls to bypass certain signals or outcomes in the journeys, deciding which events are false positives, and validating them, is entirely the business’s responsibility. You may have a salesperson going into a meeting with an amazing prospect whose network was recently hacked, so its IP reputation is low (bad), or that routes guest traffic through a VPN or an anonymizing service. You really want to be able to run the demo no matter what. A good process and procedure for classification and remediation is essential.
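
A minimal sketch of such an override mechanism (the list contents and signal names are hypothetical; populating the list is the business process described above):

```python
# Overrides validated by the business: (subject, signal) pairs to bypass.
OVERRIDES = {
    ("prospect-corp.example", "ip_reputation"),  # recently hacked, now remediated
}

def effective_signals(subject: str, signals: dict[str, float]) -> dict[str, float]:
    """Drop signals the business has validated as false positives."""
    return {k: v for k, v in signals.items() if (subject, k) not in OVERRIDES}
```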

One-Off, or a Continuous Process?

RBA deployment is not a one-off process. When considering risk outcomes and the ability to calculate them, break them down into a binary classification: those you already know about and those the system needs to learn. Known credential attacks (spraying, stuffing, brute force) can be acted on right from the beginning, while user behavior is something the system must learn over time.

Aside from that, analyzing RBA data periodically (every six months, for example) will allow you to further fine-tune the policies and the engine. You may find that your riskiest events come from a specific geolocation, network, or application, in which case you would tighten the controls around those use cases. On the other end of the spectrum, you may find there are a lot of false positives, and you may want to relax the controls a little where they occur.

How Does ForgeRock Autonomous Access Fit into This Model?

The design of the ForgeRock Autonomous Access engine allows the flexibility to create the outcomes you want. Its custom logic adapts to almost any scenario or blueprint you want to implement, while orchestration in the journeys (authentication trees) accommodates “the when” and “the what” and manages controls for false positives.

It also allows for fine-tuning of the AI/ML models, which is unique in the market. Autonomous Access is an extremely powerful anti-fraud tool that is easy to implement and follows the logic of the business-first approach.

Conclusion

Risk-based authentication is a brilliant concept. It may seem that it’s all about signals and treating risk, but of equal importance are calculating the likelihood of a security breach, tweaking the engine, and designing customizable policies for your specific operations.

If you are considering implementing risk-based authentication, engage with key stakeholders and look at the technology and security as a function of a “business first” approach. Adapt the product to your enterprise instead of accepting the vendor’s default model. That way you can create excellent journeys, with minimal friction for your consumers (both external and internal), and a quality security posture driven by fraud prevention. It’s a balancing act, and “secure enough” is much better than just “secure.”

Read about ForgeRock Autonomous Access, AI-driven threat prevention.

*** This is a Security Bloggers Network syndicated blog from the ForgeRock Blog authored by Marcin Zimny. Read the original post at: https://www.forgerock.com/blog/modeling-risk-business-authentication-flows