Protecting the Digital Experience

Optimizing the digital experience is all the rage today, as the tech industry has finally gotten religion about ensuring that end customers, whether external buyers or internal employees, can seamlessly and simply do what they need to do with the systems we build and deploy. That means less focus on shiny tech objects and more emphasis on performance, latency and downtime. Identifying and remediating issues that degrade the delivery or quality of the experience must now take precedence.

Evaluating the digital experience is complicated. Your typical system involves dozens of components encompassing microservices, APIs, third-party libraries, open source components, PaaS services, cloud provider offerings and maybe even a little of your own code. In addition, these components need to be orchestrated so that the application can be continuously integrated and deployed via a DevOps pipeline.

And if any of these components fails due to an operational mistake or a malicious attack, the entire digital experience comes crashing down. We'll focus on the malicious attack angle for the rest of this article. To be clear, a misconfiguration or operational mistake has the same impact, but framing the adversary as an external attacker typically gets people's attention more than the well-intentioned sysadmin who fat-fingered an operational change.

From a design standpoint, you want security in place, but you don't want it to be obtrusive or draconian. An authentication challenge may seem irritating, but it provides a mental trigger regarding security. A simple pop-up asking whether the user intended to share data outside the organization can feel intrusive. Still, sometimes users don't realize what they are doing, and forcing them to acknowledge their actions markedly increases security. So the first step in protecting the digital experience is to protect users from themselves.
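To make that concrete, here is a minimal sketch of the kind of check behind such a pop-up: flag recipients whose domain is outside the organization so the application can ask the user to confirm before sharing. All names and the domain list are illustrative assumptions, not a real product's API.

```python
# Sketch of a "protect the user from themselves" check: before a document
# is shared, flag recipients outside the organization so the application
# can prompt the user to acknowledge the action. Illustrative only.

INTERNAL_DOMAINS = {"example.com"}  # assumed organizational domains

def external_recipients(recipients: list[str]) -> list[str]:
    """Return the recipients whose email domain is outside the organization."""
    flagged = []
    for address in recipients:
        domain = address.rsplit("@", 1)[-1].lower()
        if domain not in INTERNAL_DOMAINS:
            flagged.append(address)
    return flagged

def needs_confirmation(recipients: list[str]) -> bool:
    """True when the share dialog should ask the user to confirm."""
    return bool(external_recipients(recipients))
```

The point is not the lookup itself but the interaction design: the check runs silently for internal shares and only interrupts the user when data is about to leave the organization.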

Next, it’s about integrating security practices into the development and operational processes. The initial forays into integrating security into the DevOps process included three main tactics:

  1. Shifting left: Involve the developer by increasing their accountability for secure coding; emphasize the idea that it’s cheaper and less damaging to address security issues earlier.
  2. DevSecOps: Integrate some operational security capabilities into the automated DevOps pipeline, including scanning code for vulnerabilities, assessing third-party libraries and enforcing guardrails that can automatically remediate unauthorized configuration changes.
  3. Infrastructure-as-code (IaC): Describe the environment as code, enabling the quick and consistent deployment of technology infrastructure on various platforms.
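The guardrail idea from the second tactic can be sketched in a few lines: compare a live configuration against the desired state declared as code and automatically revert any drift. The policy keys and values below are illustrative assumptions, not a real cloud provider's settings.

```python
# Minimal sketch of a DevSecOps guardrail: the desired state is declared
# as code, and any unauthorized change to a governed setting is reverted.
# The keys and desired values here are hypothetical policy examples.

DESIRED_STATE = {
    "public_access": False,   # e.g., storage must not be publicly readable
    "encryption": "aes256",   # e.g., data at rest must be encrypted
}

def remediate_drift(live_config: dict) -> dict:
    """Return a corrected config, reverting drift on governed settings."""
    corrected = dict(live_config)
    for key, desired in DESIRED_STATE.items():
        if corrected.get(key) != desired:
            corrected[key] = desired  # auto-remediate back to desired state
    return corrected
```

In a real pipeline this comparison would run continuously against the deployed environment, with the remediation logged so operators can see what drifted and when.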

So what do these three initiatives have to do with the digital experience? Pretty much everything, because across all three we generate visibility and telemetry from every step of the process. We collect data from every part of the stack: data about code changes and deployments, about the underlying network and computing environment, about the applications and about the users, from the time the application code is committed to the repository to the time a user runs the application. And in that data are the answers to refining and protecting the digital experience.

But just having the data isn't sufficient. Digital experience attacks can be challenging to detect because these applications are designed for users, and confusing an edge case with misuse is easy. Determining what's an attack and what is legitimate (if perhaps ill-advised) behavior is critical to the success and perception of the security team. With so many nuances contributing to the customer's perception of their digital experience, the security professional's job in this context is to make sure any disappointment is not because of a security issue.

How do you leverage this data to protect the digital experience? You use machine learning and advanced data analytics to build profiles of acceptable activity, then monitor the data against those profiles to identify potential misuse, attacks and exfiltration. Building these user profiles requires different tooling and processes than traditional security monitoring; manually combing through logs and security events across cloud providers, infrastructure and data sources neither scales nor keeps pace with digital businesses. The required tooling is probably closer to modern observability platforms, refocused on malicious intent and attacks.
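The profiling idea can be illustrated with a deliberately simple sketch: build a per-user baseline from historical event counts, then flag activity that deviates sharply from it. A production system would use far richer features and models; the threshold and data here are assumptions for illustration.

```python
# Illustrative sketch of profiling "acceptable activity": baseline a user's
# behavior from history, then flag observations well above the baseline.
# Real platforms use many more signals; this shows only the core idea.

from statistics import mean, stdev

def build_profile(history: list[float]) -> tuple[float, float]:
    """Baseline as (mean, standard deviation) of past daily event counts."""
    return mean(history), stdev(history)

def is_anomalous(observed: float, profile: tuple[float, float],
                 threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` deviations above the baseline."""
    mu, sigma = profile
    if sigma == 0:
        return observed != mu
    return (observed - mu) / sigma > threshold
```

A user who normally touches about a hundred records a day and suddenly touches thousands would trip this check, which is exactly the kind of deviation that may signal misuse or exfiltration rather than an ordinary edge case.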

But I'm not religious about the tooling; it's about ensuring all applicable data is aggregated and analyzed to identify security issues. The tools need to be integrated into all aspects of the modern development process so you can understand the impact of the infrastructure on the user experience. The good news is that tooling across this broad set of data sources continues to mature, so we can now get a sense not just of what's being attacked but of how it's impacting the most important constituency: customers.

Mike Rothman

Mike is a 25+-year security veteran, specializing in the sexy aspects of security, such as protecting networks and endpoints, security management, compliance and helping clients navigate a secure evolution to the cloud.