A Sub-Domain Takeover Story, Two Questions for Every WAF Provider | Sunil Agrawal (CISO, Glean)

In this SaaSTrana podcast, Sunil Agrawal (CISO, Glean) shared his insights with Venky on the evolution of cybersecurity attacks and changes in hacker behavior over the years.

He also shares his experience of a sub-domain takeover and how it led him to build foundationally secured SaaS products.

Introduction to Sunil and Glean

Can you tell us a little bit about yourself? 

I’m currently the chief information security officer at Glean. Glean is an enterprise search company. You can think about it as a Google for your enterprise: a single interface to search through all the data sources you might have in an enterprise, be it G-Suite, OneDrive, Slack, Asana, Jira, or Confluence. And, of course, you can also bring in some custom data sources.

And as a CISO, I’m responsible for all aspects of security, including corporate security, compliance, infrastructure security, and product security. One of the unique things I usually take on before I join any gig is not just the security of the product, but security as a product.

So that means I pretty much only take gigs where I also get a chance to develop security products we can provide to our customers. Glean is no different; we have certain products in data governance.

Can you throw some light on one of your patents? 

Let me talk about some of the innovative work I did about three or four years back, which is what we call the virtual browser.

We all know that whenever you open your browser and visit the Internet, there are just lots of bad things that happen on the Internet.

You’re always opening yourself up to risk, and you need to have the right security tools. So, what we did was create a virtual browser, a browser that would run in the cloud.

You would talk to this virtual browser in the cloud using your browser. Now that browser would talk to the big bad Internet.

So that browser would –

  • Go and communicate with the various sites that you want to visit
  • Run all the JavaScript that comes along with the HTML pages
  • Download the files that you would want to download
  • Render those files in the cloud

And you only get the rendered version of the web page or any document you’re trying to visit.

You would interact with the page as you normally would. All those interactions get replayed on the virtual browser. Now you have a layer between you and the big bad Internet, and this layer, the virtual browser, protects you.

Now, you are much more comfortable browsing the Internet, and you don’t have to change anything with your browsing pattern.

And the good thing is that after your session, this virtual browser is torn down, and you get a fresh new browser the next time you browse the internet. So, we have a few patents in this space.
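The isolation idea described above can be sketched minimally. This is only an illustration, not Glean’s or Digital Guardian’s patented implementation, and the `render_remotely` name is hypothetical: untrusted HTML is parsed server-side, scripts and inline event handlers never reach the client, and only the sanitized rendering is returned.

```python
from html.parser import HTMLParser

class ScriptStripper(HTMLParser):
    """Server-side 'virtual browser' sketch: parse untrusted HTML in the
    cloud and emit a sanitized rendering with all JavaScript removed."""

    def __init__(self):
        super().__init__(convert_charrefs=False)
        self.out = []
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self._in_script = True  # drop the script element entirely
            return
        # drop inline event handlers such as onclick=...
        safe = [(k, v) for k, v in attrs if not k.startswith("on")]
        rendered = "".join(f' {k}="{v}"' for k, v in safe)
        self.out.append(f"<{tag}{rendered}>")

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False
            return
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._in_script:  # script bodies never reach the client
            self.out.append(data)

def render_remotely(untrusted_html: str) -> str:
    """Hypothetical cloud-side rendering step: return only sanitized HTML."""
    p = ScriptStripper()
    p.feed(untrusted_html)
    return "".join(p.out)
```

A real virtual browser would also execute the JavaScript in an isolated environment and replay user interactions; this sketch only shows the core property that active content stays in the cloud.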

And you could imagine, again, security was right front and center. We had some banking customers who wanted to provide this to their customers because they were unsure whether they had malware on their devices.

So they were always concerned, “Hey, when they’re logging in, I can authenticate them, but I can’t authenticate the device.”

But if they’re coming through this virtual browser, they know that everything that is getting executed is within a clean environment. So even if there was malware on the end user device, the banking session is not impacted.

So there were many customers there, and many security use cases.

How was the adoption of your virtual browser?

We had big adoption in Japan, where SoftBank was one of our customers. They were using it internally and reselling it. And, of course, we eventually ended up selling the company to an East Coast-based security company called Digital Guardian.

The evolution of the security landscape

How has the security landscape evolved, in your view?  

When I started my career in the security space, Web 1.0 was just starting. In those days, the challenges were slightly different. We had to convince consumers to be ready to share their credit cards over the Internet and do shopping.

A lot of the effort in the security industry was to build products that would convince consumers that it is safe to shop online.

And even though we knew a lot of the insecurities of the various Web protocols, there was just no ready way of monetizing it. So, all the effort was more into enabling Internet usage.

And then, slowly, that evolved into Web 2.0, where customers and consumers were fairly comfortable shopping online and sharing a lot of information about themselves.

So now came this entire cohort of hackers where we say, “Okay, there’s so much information available, so if I can steal Venky’s PII, then I can impersonate, and there’s a financial benefit. Or if I can steal Venky’s credit card number when he is shopping online, then I have a real way of monetizing all the effort I’m putting into hacks”.

So that’s where we started seeing a lot of attackers and hackers focusing on the internet as an attack vector. Now the focus in the security industry shifted slightly, from building tools to convince users to get online, to protecting them once they do get online. So then a whole slew of vendors came about.

Then Web 3.0 started happening, where we had cryptocurrencies. And unfortunately, along with that came the Darknet. Even though attacks in the Web 2.0 time frame were somewhat difficult to carry out, what happened with the advent of the Darknet and cryptocurrencies is that now one smart hacker somewhere in the world would develop an exploit.

Now that person would sell that SDK on the Darknet for all the other script kiddies to buy and carry out that exploit. So a problem that was once limited to a few smart hackers has found another way of being monetized. Now you see hackers worldwide who may or may not have the technical expertise but can carry out successful attacks.

Of course, I never want to say that cryptocurrency is bad. That’s a really good tool. It is just being used for a bad purpose.

And now we are in the phase I call the 4.0 phase, the generative AI phase. Unfortunately, this is going to make things even worse. Now you will see script kiddies using tools like ChatGPT to generate spear-phishing campaigns that are highly targeted at a Sunil Agrawal or a Venky.

And it will be very difficult for you to recognize the patterns that say, hey, this one seems to be from a non-native English speaker, because now they have tools.

They will be generating code that is really specific malware. So now the industry will have to develop a whole new set of security tools that can deal with all these auto-generated tools and messages.

So these are challenging times; whenever new technology comes out, there are always people who abuse it. And that’s the challenge for the security industry.

What is the tech stack of Glean? Is it completely cloud-based, or is it something that enterprises subscribe to and can install on their own corporate networks?

The company started about four years back. We are born in the Cloud; we offer two different hosting models.

We offer a SaaS version where everything is hosted, run, and managed by us, and that’s completely in GCP.

You need to crawl and index all the data; only then can you provide a good service. And for that, of course, we’ve got to have access to all the relevant data within an enterprise.

Because of that, certain security-conscious customers would want to deploy it within an environment they control. So, we also provide a Cloud Prem solution, not an On-Prem, but they can deploy our solution within a GCP account they control.

So, they take our software, we install it within the GCP account they control, and then the customer goes and connects all the data sources. They manage that infrastructure from that point onwards, but that option can only be installed within a GCP account.

What steps do you take to protect that data?

Because of all the data we have access to, customers want to ensure we take the utmost care of their data.

We do it as a completely single-tenant solution. Every tenant has their own GCP project. Think about it as their own AWS account.

Right from the beginning, it’s a single-tenant solution. And, of course, we talked about the two hosting models available so they can host within their own GCP account and control everything.

Then, we are a completely authenticated search system, meaning the user must authenticate.

Because we provide them with a simple ability to search through Glean, they do not get access to anything more than they had access to before using Glean.

Now that’s a requirement, because if, using Glean, you could see more data than you previously had access to, then, of course, this product would not work.

So, we are a completely authenticated system. We integrate with an enterprise SSO: Okta, OneLogin, Azure AD, and Google Auth.

Then, once we know that it is Venky who is trying to search, we understand what Venky has access to in G-Drive, in Jira, in Asana, and in Slack, and those are the only things we bubble up in the search results.

We integrate with 50-plus data sources, and then we understand the schema in each one of them. We know the permissions system in each of them and then normalize it into our internal schema to provide this consistent interface across those 50-plus data sources.
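The permission trimming described above can be sketched in a few lines. This is an assumption-laden illustration, not Glean’s actual engine: it assumes each document’s source ACL has already been normalized and flattened into a set of allowed users, and the `Doc` and `search` names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Doc:
    """A document normalized from any connected source (Jira, Slack, ...)
    into one internal schema, with its ACL flattened to a set of users."""
    source: str
    title: str
    allowed_users: frozenset

def search(index: list, user: str, query: str) -> list:
    """Permission-trimmed search: a result is returned only if the
    querying user could already see it in the source system."""
    q = query.lower()
    return [d for d in index
            if user in d.allowed_users and q in d.title.lower()]
```

The design point is that the ACL check happens inside the search path itself, so no layer above it can ever see a result the user was not entitled to in the source system.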

There’s a lot of integration between Glean and all those data sources we connect to, plus the integration with your SSO provider. And then we provide a rich set of APIs, because customers can use us in different ways. We provide our own Google-like interface where you can just type into the search bar, and we will show results.

Sometimes customers want to integrate Glean into their own custom systems, so we also provide APIs; we have a fairly rich API surface. As you can imagine, we consider everything that is exported externally to be public, because it is accessible to things outside our perimeter and hence needs to go through very stringent security checks.

What do you do besides authentication and authorization? What is it you do with the API integration? 

We ensure that our system remains secure and not vulnerable to the OWASP and CWE classes of attacks. We have built security right into the product’s DNA.

The company is about 200-odd people, and they realized very early that security would be front and center, as they decided to bring me on. Having a CISO this early in an organization is a little uncommon, but it speaks to the company’s maturity that they realized so soon how important this is.

So we have built security in every phase of our development. When we design the product, an extensive security review happens both by the core engineering team and the security team.

Right from there, we make sure that once it gets implemented, we are only using memory-safe languages, so no buffer overflows. We are using React Native so that there is no cross-site scripting, and we do not use any dangerous constructs within it.

And once it’s all been implemented, we carry out our regular pen tests, and of course, many of our very security-sensitive customers do their own pen tests.

And so we get the benefit of doing our own pen tests regularly, plus all our customers pen testing us and confirming that it is all secure. We get 10 to 12 pen tests a year, some funded by us and some that our customers run against us.

But our entire community of customers benefits from all such efforts. And then, of course, we get external validation too.

Even the medium and low vulnerabilities they find, we fix, and we make sure those fixes are available to our customer base. Our entire customer base benefits from this.

Do you think it’s an opportunity to optimize, where you tell the customer, “We are doing it, and here’s the report; you can save the cost of not doing it yourself”?

We do that. We get the report and provide it to them; some are okay with that, some will still want to do that extra diligence, and we are completely fine with either model.

Sub-domain takeover & a good cyber citizen

Any stories to share of incidents or late-night 12 o’clock calls?

More than a bad story, let me talk about a good one that comes to mind (of course, I won’t name the company I was at). It was a case where we had a subdomain takeover.

For listeners who might not know what a subdomain takeover is: let’s say you have subdomain.acme.com, which points to an IP address you own. Now, in the cloud, it can very easily happen that if you tear down your machine, that IP address gets assigned to someone else. So you still own the subdomain, but it’s pointing to an IP address that someone else owns.

In one such case, we didn’t realize this IP address pointed to another company’s AWS instance. And we were using this endpoint to collect a lot of our customer telemetry. The customer is using our product, and the telemetry includes some amount of their email address, not a lot, but their email address and all their actions.

All this telemetry was going to someone else and not to us. The security person from that particular company contacted us.

Of course, we spent a lot of time trying to find the root cause. Where did this happen? What’s the fix?

And in this particular case, it took three or four days, just because of the complexity of the stack. But I just want to say that this person went above and beyond. It was not their security issue to be concerned with; that person was just a good cyber citizen.

But that person went above and beyond, to the extent of being ready to shut down the instance that was receiving all the customer telemetry, so that we didn’t make matters worse for our consumers by sending that data somewhere else.

It is better to get an error than to have that data leak. They were even ready to give up that IP address so that we could reclaim it until the root cause was fixed.

So, just a case where someone was a good cyber citizen and ready to help without any expectation of gain.

Here are some ways to prevent subdomain takeover:

1. Monitor your subdomains: Keep track of them and monitor them regularly to ensure that they are active and under your control. This can be done manually or through automated tools that scan for subdomains.

2. Remove unused subdomains: Delete any subdomains that are no longer in use or have been abandoned. This will reduce the attack surface and make it harder for attackers to find vulnerable subdomains to take over.

3. Use CNAME records: Instead of pointing your subdomains to an IP address, use CNAME records to point them to a domain name you control. This will make it harder for an attacker to take over the subdomain.

4. Use HTTP(S) services with valid SSL certificates: Ensure all your subdomains use HTTPS, have valid SSL certificates, and are only available over HTTPS via HSTS. This makes it harder for attackers to spoof your subdomain and take it over.

5. Use a reputable DNS provider: Choose a reputable DNS provider with a good security track record and measures to prevent subdomain takeover.

– Sunil Agrawal (CISO, Glean)
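Step 1 above, monitoring, can be sketched as a simple check, assuming you can export your DNS zone along with the IPs and hostnames you currently control; `find_dangling` is a hypothetical helper, not any product’s feature.

```python
def find_dangling(zone: dict, owned_ips: set, owned_hosts: set) -> list:
    """Flag subdomains whose A record points at an IP we no longer own,
    or whose CNAME points at a hostname we no longer control.

    zone maps subdomain -> (record_type, target), e.g.
    {"sub.acme.com": ("A", "203.0.113.7")}.
    """
    dangling = []
    for name, (rtype, target) in zone.items():
        if rtype == "A" and target not in owned_ips:
            dangling.append(name)  # IP may have been reassigned in the cloud
        elif rtype == "CNAME" and target not in owned_hosts:
            dangling.append(name)  # CNAME target may be claimable by others
    return dangling
```

In practice you would feed this from your DNS provider’s API and your cloud inventory, and run it on a schedule, so a torn-down machine whose IP was released shows up before an attacker claims it.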

At what point in the procurement process do buyers prioritize security as a key concern for their applications?

Security is one of the big things that our customers and buyers look for in our product. We are in a unique situation where we can do this annual pen test but still give our buyers the flexibility to do their own pen test to ensure they’re comfortable with the security.

Not every SaaS provider can do that. Let’s assume that I was not at Glean and was at a typical SaaS provider; then what I would say is: build continuous security into your pipeline.

The annual pen test is good but insufficient, because you must look at your entire surface. You have hundreds of APIs that you are exposing; you bring in ethical hackers to look at those hundreds of APIs.

You ask them to do a white-box or grey-box test, where they even have to look at the source code, and you give them two weeks. We all understand that is humanly impossible, but that’s what the industry does.

And the better way of doing this would be to work with someone like Indusface. Go for continuous security from the moment the code is developed; bring in someone from the industry to advise on the secure patterns that should be used, so that we reduce the chances of vulnerabilities downstream.

As soon as it’s developed and available in your dev or stage environment, make sure you are testing it.

Make sure you are setting up automation. No more annual pen tests. It happens regularly as you develop and ship new products. And this is a very specialized skill set.

You may hire your own focused set of pen testers if you are a big organization. But if you are a mid-sized or small company, I suggest working with companies like Indusface, which have this specialized skill set, and partnering with them.

So it should not be a relationship where, “hey, you are a vendor who just provides a security functionality,” but rather a partner who is there along the entire product development journey.

Why does it take 200 days to patch a vulnerability?

What is your view on virtual patching?

Venky: One data point we have is that it takes about 250 to 300 days from knowing a vulnerability to fixing it in code, through the assessment. And that’s probably because it’s not in their control; there are too many third-party components they integrate with that are just part of the app stack.

Do you think virtual patching has a role to play here, through a web application firewall that is an integral part of the app stack?

Sunil: Absolutely. And the numbers you quoted, I completely buy into. And the complexity is in both your first-party application and the third-party applications.

For the first-party applications: do you have a software bill of materials (SBOM) to find all the places using the vulnerable version of Log4j? Are you using the vulnerable constructs of Log4j?

That’s a very easy question to ask and a very difficult one to answer, because, first of all, you often don’t even have an SBOM. So how do you go about doing that? That’s one aspect.

And then for the third-party products or software you use within your stack, finding out the dependency tree, and whether they are using one of the vulnerable constructs, is also a fairly uphill task.
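The Log4j question can be made concrete with a minimal sketch, assuming the SBOM has already been flattened to (component, version) pairs; real SBOM formats (CycloneDX, SPDX) and real version qualifiers like `2.0-beta9` are considerably more involved, and the function names here are hypothetical.

```python
def parse_version(v: str) -> tuple:
    """Naive numeric version parse; real SBOM tooling must also handle
    qualifiers like '2.0-beta9', which this sketch ignores."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def vulnerable_components(sbom: list, name: str, fixed_in: str) -> list:
    """Return every (component, version) entry, anywhere in the flattened
    dependency tree, matching `name` at a version below the fixed release."""
    fixed = parse_version(fixed_in)
    return [(c, v) for (c, v) in sbom
            if c == name and parse_version(v) < fixed]
```

Even this toy version shows why the question is hard: the answer is only as good as the SBOM you feed it, and most organizations do not have one.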

Once you have that visibility, you get to the second set of challenges: who is the owner of that application within your company? Can you get the fix into their sprint cycle so it gets fixed by a timeline? That’s an internal organizational challenge, because there’s often no clear mapping of applications to teams.

If it’s a monolith, it’s even harder; if it’s microservices and you have good attribution, it’s easier. But very few companies have reached that level of maturity.

Most of them don’t have it, and I completely buy into virtual patching, because you cannot wait those 200-plus days with the vulnerability sitting out there.

So what do you do? You apply a virtual patch in your WAF so that, at the very least, you have applied a patch. You stop being vulnerable, you stop the bleeding, and then you have time to fix the core issue.
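A virtual patch can be pictured as a filter that sits in front of the application. The sketch below is an illustrative assumption, not any vendor’s actual rule: it matches a single Log4Shell-style JNDI lookup string in headers or body and blocks the request until the real code fix ships. Real WAF rule sets cover encodings and obfuscations far beyond this.

```python
import re

# Illustrative signature only: a Log4Shell-style JNDI lookup string.
JNDI_PATTERN = re.compile(r"\$\{jndi:", re.IGNORECASE)

def virtual_patch(request: dict):
    """Return (allowed, reason). Inspect every header and the body for
    the exploit signature; block matches until the real fix is deployed."""
    for name, value in request.get("headers", {}).items():
        if JNDI_PATTERN.search(value):
            return False, f"virtual patch hit in header {name}"
    if JNDI_PATTERN.search(request.get("body", "")):
        return False, "virtual patch hit in body"
    return True, "clean"
```

The point of the sketch is the decoupling: the application stays unpatched for those 200-plus days, but the exploit path is already closed at the edge.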

I’m a big believer. Whenever we evaluate a WAF or a WAAP, we ask upfront:

  • Do you have virtual patching capability?
  • And if you have it, are you responsible for creating those rules?
  • Because my internal folks might not know how to create rules for your platform, will you keep me informed?
  • Will you apply the virtual patch, or will it appear in my admin console as a one-click application?

So those are all things we consider before we decide to go ahead.

Venky

I will also add one more point to that. The virtual patch benefit is not just the time-to-fix benefit. The time-to-fix benefit is certainly very compelling: instead of 250-plus days, you get protected within a day. But there is also an intelligence-gathering benefit around hacker intent.

Because what does the hacker do? They try to find a vulnerability and then target and exploit it. And now the virtual patch is there. It’s doing two things. One is preventing that exploitation from happening. But also, when that virtual patch policy gets triggered, we know that the session, identity, and IP on the other side belong to the hacker.

So, for those whose attack intent is established, we can dynamically increase our defense posture without worrying about false positives.
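The escalation described here can be sketched as a small tracker. The thresholds and action names are illustrative assumptions, not any vendor’s actual logic: once a virtual-patch rule fires for a source IP, that IP faces stricter handling, while ordinary users stay on the normal, false-positive-safe path.

```python
from collections import defaultdict

class PostureTracker:
    """Escalate defenses per source IP once attack intent is established
    by a virtual-patch hit, leaving everyone else untouched."""

    def __init__(self, block_after: int = 3):
        self.hits = defaultdict(int)
        self.block_after = block_after

    def record_patch_hit(self, ip: str):
        """Called whenever a virtual-patch rule fires for this IP."""
        self.hits[ip] += 1

    def posture(self, ip: str) -> str:
        if self.hits[ip] >= self.block_after:
            return "block"       # established attacker: drop outright
        if self.hits[ip] > 0:
            return "challenge"   # suspicious: extra checks, e.g. a CAPTCHA
        return "normal"
```

This also matches the corollary Sunil draws next: a platform serving many customers can feed the same signal into a shared blocklist.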

Sunil

Absolutely. And for platforms like yours, just as all our customers benefit once we find and fix an issue, I can see the corollary for you guys: once you know that a hacker from a certain set of IP addresses is trying to poke Customer X, you can include those in your blacklist of IP addresses.

Has the audit and compliance framework evolved?

As most people in this space say, your main aim should be security, and compliance should be a by-product. But many buyers, because they don’t have a good way of measuring security, instead ask for compliance certifications.

Unfortunately, now something that was supposed to be a by-product has become the main product that companies now focus on. Compliance is the thing that they aim for.

But, of course, that entire game of compliance has evolved. Back in the day, it used to be a spreadsheet where you would have a set of controls.

Let’s say you are going for SSAE 16, now called SOC 2. You would open a spreadsheet with a hundred-plus controls, assign individuals within your company responsible for each control, and then start gathering evidence. A fairly manual, labor-intensive process.

And, of course, you would do the same thing every year for each kind of certification. If you are in the US, SOC 2 is enough. In Europe, they insist on ISO 27001, and for targeting federal customers there are different certifications again, and you do this exercise for every one of them.

Does SOC 2 apply to application providers or just the data center? 

It applies to the entire stack, not just the cloud providers or the data providers, because they want to look at security end to end across the stack. After all, that’s what eventually matters.

And for that, a consolidated controls framework is where the industry is headed. Let’s develop one set of security controls to ensure you are doing security properly, and let each compliance certification be a by-product.

The original intention is slowly coming back. The industry now realizes: let’s come up with the right set of controls, and a subset of them will apply to any particular certification or compliance framework.

But let that not drive your main security program. Tools like Vanta and Drata help you automate some of that. But everyone, I would say the whole industry, is gravitating toward a consistent controls framework.

Having a security partner can provide 10x returns

What would be your advice to a new SaaS CTO or a tech lead of a new SaaS company?

I would say, just looking at Glean: we are a 200-person company. Very early on, our CEO and CTO realized security is a very specialized field that is getting more specialized by the day. You should think about having someone whose job is 100% to think about security.

And it’s like one of those things; doing it now will save you so much trouble down the road.

If you can fix a security issue during the design phase, you save more than 100 hours compared to handling an incident later. So this is one investment that will pay off 10x if you make it early on.

Or, if you can’t have a dedicated CISO, then work with companies like Indusface so that they can be your partner.

To know more, listen to the podcast here.

Stay tuned for more relevant and interesting security articles. Follow Indusface on Facebook, Twitter, and LinkedIn.

The post A Sub-Domain Takeover Story, Two Questions for Every WAF Provider | Sunil Agrawal (CISO, Glean) appeared first on Indusface.

*** This is a Security Bloggers Network syndicated blog from Indusface authored by Indusface. Read the original post at: https://www.indusface.com/blog/a-sub-domain-takeover-story/