The Hacker Mind: Shellshock

Robert Vamosi · March 24, 2021

Shortly after OpenSSL’s Heartbleed, Shellshock was discovered lurking in Bash code more than two decades old. How could open source software be vulnerable for so long?

This episode looks at how fuzz testing has evolved over the years, how open source projects have for the most part gone untested over time, and how new efforts to match fuzzing to software development are today helping to discover dangerous new vulnerabilities before they become the next Shellshock.

Years ago, I was the lead security software reviewer at ZDNet and then at CNET. That meant I tested the release candidates — not the final product you’d buy in the stores — of consumer-grade antivirus programs, desktop firewalls, and desktop intrusion detection systems. I remember testing an internet suite from a major vendor, and I won’t say which one, that included a password manager — you know, the feature that retains the various passwords you have for your various websites, and also generates new passwords that are sufficiently long and secure, ones you wouldn’t be able to remember on your own. Anyway, I was testing this suite when I happened to randomly strike two keys — I think it was Control and B — and up popped the password manager, displaying all my test passwords in the clear. Thing was, the manager required its own password, which I had not entered; remember, I had hit only two keys. And I was able to repeat the process over and over. This was a software flaw. The password-protected password file clearly was not secure.

So I reported this flaw to the vendor … and the response was not what I expected. “Why did you press those keys?” “Doesn’t matter,” I replied, “you still have a bug.” And they said “Not if you don’t strike those keys.” 

Well, several days passed, the product shipped, and my review, posted on CNET, said something like “be careful with the password manager, it’s got a bug.” Of course, the security company freaked out, called the editor in chief, and threatened to pull their advertising. But, really, shouldn’t they have just fixed the password manager when they could?

My example of hitting random keys and forcing the password manager to pop open illustrates the topic of this episode. Fuzz testing is similar: randomly striking keys and producing an unexpected result from the software. Except fuzzers automate the process and can iterate through thousands of test cases in a matter of minutes. And modern fuzzers are not random; they’re guided, so they dynamically work their way through the code, increasing their code coverage to find unknown vulnerabilities that can escape other software testing such as static analysis. In a moment I’ll tell you about a flaw discovered only through fuzz testing in a very old open source product. It’s a flaw that hid itself in open source software for more than twenty years, and it remains perhaps the most dangerous vulnerability ever discovered.

Welcome to the Hacker Mind, an original podcast from ForAllSecure. It’s about challenging our expectations about people who hack for a living.

I’m Robert Vamosi and in this episode I’m going to talk about how fuzzing has evolved over the years, how open source projects have for the most part gone untested over time, and how new efforts to match fuzzing to software development are today helping to discover dangerous new vulnerabilities before they become the next Shellshock.

[music]

Mashable: Move over, Heartbleed, and welcome to Shellshock, the latest security threat to hit the internet. And it’s a doozy.

Vamosi: In the fall of 2014, Shellshock was publicly disclosed. It’s a fundamental vulnerability in Bash, which is a command shell used on many computers, including Macs, and as this Mashable reporter commented, if exploited, Shellshock makes it ridiculously easy to execute malicious code.

Mashable: Basically, what this vulnerability means is that an attacker could execute arbitrary code on web servers. So here’s how this vulnerability works. Bash lets users define functions as a way to pass text on to other systems and processes. Usually this is just fine, and hey, it’s convenient; this is what it’s for. The problem, which involves specific characters used as part of a function definition, occurs because bash doesn’t stop processing after a function is defined; it’ll continue to read and execute shell commands following the function definition. The end result is that the malicious attacker can get shell. And if you were a malicious intruder, that’s what you want: access to the command line on a server, because that means you can execute all kinds of malicious code, you have access to system files, you can share what you find with the rest of the world. We want to be clear: getting shell is not the same as getting root. But it does mean that intruders have a chance at an extra special bonus round known as privilege escalation, which means that getting shell could lead to getting root, which means access to everything. If you get access to root, it’s game over.
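What the reporter is describing is Bash’s function-export feature. Here’s the legitimate behavior in a minimal sketch; the function name greet is just an illustration:

    # Legitimate use of Bash's function-export feature: the parent shell
    # defines a function and exports it through the environment.
    greet() { echo "hello, $1"; }
    export -f greet

    # A child bash sees the definition in its environment and rebuilds it.
    bash -c 'greet world'    # prints: hello, world

Under the hood, the exported function travels as an environment variable whose value starts with the characters “() {”, and a vulnerable bash kept executing whatever followed that definition. More on that magic prefix later.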

Vamosi: So a vulnerable version of Bash provided an attacker the ability to pass untrusted code to other systems and processes. It’s like the operating system left the front door wide open to attackers. How did this happen? Our story begins in the 1980s, when both the tool used for this discovery and the Bash shell were created.

In the fall of 1988, Professor Barton Miller was teaching his graduate Advanced Operating Systems course at the University of Wisconsin-Madison. One of the class projects was to test Unix commands with a relatively new technique: fuzzing. Miller had conceived the technique after noticing, during a summer thunderstorm, that ambient electrical noise on a remote login line had changed his inputs, and therefore his results, and he began to wonder whether intentionally sending unexpected inputs to commands would trigger vulnerabilities within the software. He wrote his first program, which he called simply fuzz, describing it as generating a stream of random characters to be consumed by a target program.

In a subsequent academic paper, Miller demonstrated that 25 to 33 percent of the Unix applications tested with fuzz either crashed or hung when reading random input. After testing Unix, Miller went on to fuzz Windows NT and other systems. What’s nice is that the fuzz program recorded what input had been sent, and that record allowed other researchers to later reproduce the crashes and isolate the vulnerabilities for themselves.
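Conceptually, that first tool needed very little machinery. Here’s a minimal random fuzzer sketched in shell. To be clear, this is not Miller’s actual program, just an illustration of the idea: throw random bytes at a target, and keep any input that kills it so the crash can be reproduced later.

    #!/bin/bash
    # Minimal random fuzzer sketch (illustrative, not Miller's fuzz tool).
    # Usage: ./randfuzz.sh /path/to/target
    target="$1"
    mkdir -p crashes
    for i in $(seq 1 1000); do
        head -c 512 /dev/urandom > input.bin       # generate a random input
        "$target" < input.bin > /dev/null 2>&1
        if [ $? -ge 128 ]; then                    # exit status 128+N: killed by signal N (SIGSEGV = 139)
            cp input.bin "crashes/crash-$i.bin"    # keep the input so the crash is reproducible
        fi
    done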

One of the early vulnerabilities found involved finger’s use of the gets() library call. The name comes from the finger protocol, which provides status reports on a particular computer system or a particular person at network sites. For instance, I can use the finger protocol today on my Android to see what other devices are on a shared public wifi. Back then, finger was perhaps useful to see who else was on the network and able to chat over telnet or meet in person. The name literally came from the idea that someone could run a finger down a directory of names to find the person they were looking for.

What’s interesting is that in November 1988, Robert Morris Jr. used vulnerabilities in sendmail and fingerd, the finger daemon, to unintentionally construct what would become the first internet worm. What’s a worm? Computer viruses, for example, have to be spread by humans, via email attachments. Worms, by comparison, are able to replicate on their own; the Morris worm exploited an overflow vulnerability in finger to spread from system to system. And, once a successful worm had been demonstrated, internet worms became more common in the early 2000s with Code Red, MSBlaster, and others.

In 1995, Miller co-authored four papers arguing that the reliability of software was getting worse. He wrote that his fuzzing activity, in addition to helping find the gets() finger vulnerability, had found other software bugs that might indicate future security holes in a variety of different systems. Unfortunately, his warning went unheeded at the time.

That’s the tool side. There’s another part to this story: The target. It also begins in the 1980s.

Up until the 1970s, there were only a few operating systems, in part because computing was still a large enterprise operation requiring massive machines. Only large enterprises, such as AT&T, had the computing power and the developers to create their own applications. In the 1970s, Bell Labs, owned by AT&T, set about creating the UNIX OS for its own internal use.

By the late 1970s, however, AT&T began to license Unix to outside parties and universities. This prompted a number of variants, such as the Berkeley Software Distribution, or BSD Unix, and Sun Microsystems’ SunOS; even Microsoft had its own flavor, called Xenix. AT&T later sold Unix to Novell, which in turn sold it to the Santa Cruz Operation, of SCO Unix fame, but I digress.

It also prompted a movement toward license-free software, and in September 1983 the GNU Project was announced at MIT (the Free Software Foundation followed two years later to support it). One of its early goals was to gradually rebuild all the components of the Unix operating system and share them with the world for free. This was the beginning of open source software.

These free software components carried the name GNU (GA-NEW), spelled G-N-U, which stands for GNU’s Not Unix. Okay, that’s OG hacker humor if ever there was.

I’m sure a few people are going to tell me it’s pronounced NEW, no G, but people who used it extensively back in the day have assured me it’s pronounced GA-NEW, so that’s what I’m going to use.

One of the more important features of any operating system is its shell. What’s a shell? A shell is simply an interface for the operating system. It’s where you run commands, programs, and scripts. For simplicity, you can think of it as the command line.
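For example, a quick exchange at the prompt (the output shown is illustrative):

    $ date
    Tue Mar 23 10:00:00 PST 2021
    $ echo "hello" | tr a-z A-Z
    HELLO

You type a command, the shell runs the corresponding program, and the result comes back on the same screen.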

Bash, then, is the GNU (GA-NEW) Project’s shell; remember, GNU’s Not Unix. In keeping with that humor, developer Brian Fox wrote his own shell and called it the Bourne Again SHell, or simply BASH.

Bourne in this case is not Jason Bourne, the spy, but Stephen Bourne, who wrote the original Bourne shell for Unix at Bell Labs back in the 1970s.

Remember, back in the late 1980s and early 1990s the internet wasn’t as fully operational as it is today, so there’s this crazy story about Fox driving cross-country with a carload of computer tapes containing his original version of BASH. He was driving from MIT to California to share his program. And it’s a good thing he did. Bash rapidly grew in popularity, in part because it served as a sort of glue that held pieces of the early internet together. And why not? The Bash shell Fox created was a simple yet powerful way for engineers to glue web software to the operating system. Want your web server to get information from the computer’s files? Make it pop up a bash shell and run a series of commands. Today, Bash remains an important part of the toolkit that powers the web. It’s on your Mac, and on virtually any machine running the Linux operating system. And while it’s not part of Windows, it can be added.
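That glue often took the form of CGI scripts. Before running one, the web server copies parts of the HTTP request, headers like User-Agent, into environment variables, then invokes the script, often with bash as the interpreter. A hypothetical example (the df command here is just an illustration):

    #!/bin/bash
    # Hypothetical CGI script. The web server has already populated
    # environment variables such as HTTP_USER_AGENT from the client's
    # request headers before invoking this script.
    echo "Content-type: text/plain"
    echo
    echo "Disk usage on this server:"
    df -h

Note the trust boundary: header values chosen by a remote client end up in the environment of a bash process. Hold that thought.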

Sometime in late September 1989 (we don’t all agree on the exact date), a serious vulnerability was introduced into BASH. It may have been introduced by Fox, or it may have been introduced by Chet Ramey, who was an intern at the time and later took over the maintenance of BASH. It doesn’t matter who introduced the flaw. What does matter is that this flaw within BASH would sit unnoticed for the next twenty-five years.

There’s this saying within the open source community: “given enough eyeballs, all bugs are shallow”. It’s called Linus’s Law, after Linus Torvalds, who created the Linux operating system, but it actually comes from Eric S. Raymond, from his 1999 book The Cathedral and the Bazaar. Anyway, the idea is simply this: unlike proprietary software, whose source code is locked inside a company’s domain and therefore not available to be audited by outsiders, open source code is available for auditing.

The fact is, over 85% of the software today is composed of third-party components, meaning your developers didn’t code all of it; someone else contributed along the way. And a majority of that third-party software is open source. And that only makes sense. Why should I attempt to create my own SSL/TLS implementation when I can integrate OpenSSL into my product? Not only do I get a much faster time to market, I don’t have to worry about rolling my own encryption. Really, never roll your own encryption. Just don’t.

But eyeballs aren’t necessarily on all the open source code, and what may have been seen as secure in the 1980s and 1990s certainly would not be seen as secure today. Particularly in IoT, where we find ourselves using MQTT and other ancient protocols, not for what they were originally designed for, but for our immediate need for lightweight communications among devices. But I digress.

So after 1995, when Miller published his papers, there wasn’t a lot of fuzz testing. That’s not to say there was no testing; there was. It’s just that it was very hit and miss. And part of that was that Miller’s fuzz application remained largely within academia and was not well understood outside of it.

Our story now skips ahead twenty years to 2014. 

In April 2014, the Heartbleed vulnerability was disclosed by Codenomicon and Google researchers, who independently found it using fuzz testing. In a previous episode, episode ten, I talked a lot about Heartbleed. While it’s not necessary for this episode, I will be referencing Heartbleed, so if you want to learn more about it, check out that episode first. Basically, Heartbleed was a vulnerability in the heartbeat function of OpenSSL that existed for over two years before it was found. This was a serious vulnerability, one that could leak passwords and encryption keys over the internet, and it simply could not have been detected using traditional static code analysis tools. No, for this new class of vulnerability, you needed to test the code dynamically, while it was running. You needed fuzz testing.

There are of course different types of fuzzers — random, generational, protocol — and we are not going to get into all of them here. Nor into the fact that Heartbleed was discovered mainly because the protocol fuzzing tool worked from the specs for the SSL protocol, so one could argue that its discoverers had a huge head start.

For this episode, we really only need to talk about one fuzzer, American Fuzzy Lop, or as it’s commonly known, AFL. A fuzzy lop, by the way, is a breed of rabbit, which is why you’ll see that image associated with AFL. AFL was created by Michal Zalewski, and it required you to recompile an application with a special compiler wrapper that adds assembly instrumentation code to the binary, so it needs the target’s source code, along with some sample inputs to get started. It starts with an input sample, or seed; whenever a mutated input triggers a new code path, AFL keeps that input and uses it as a starting point for further fuzzing. What this does is allow for far more code coverage than, say, a purely random fuzzer. And having that extra depth will prove to be important with Shellshock.

Zalewski made AFL openly available under a free license. As free fuzzers go, AFL is pretty easy to use, but to use it well, you more or less have to study it or find an expert who already has.
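To give a feel for it, here’s what a typical AFL session looks like, assuming a C target you can rebuild from source; the file and binary names here are illustrative:

    # Build the target with AFL's compiler wrapper so the binary
    # carries coverage instrumentation.
    CC=afl-gcc ./configure
    make

    # Start from one or more small, valid seed inputs.
    mkdir seeds
    cp sample_input.txt seeds/

    # Fuzz: AFL mutates the seeds, keeps any mutation that reaches new
    # code paths, and saves crashing inputs under findings/crashes/.
    afl-fuzz -i seeds -o findings -- ./target_binary @@

The @@ token tells AFL where to substitute the current test case on the target’s command line; omit it, and AFL feeds the input on stdin instead.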

So you have a fuzzer. Now, what to fuzz? You can fuzz your own software, of course. In the summer of 2014, particularly after Heartbleed, people began to once again look at open source software with some dedication.

In the summer of 2014, the Linux Foundation rounded up a $6 million war chest and announced it would shore up the security of a few widely used open source projects, such as OpenSSL (which is where Heartbleed lived), OpenSSH, and the Network Time Protocol. Bash wasn’t on the list.

So if BASH wasn’t on the list, how then did we end up with Shellshock?

The discovery of Shellshock, perhaps, really begins with a researcher questioning some unusual behavior he had experienced. Stéphane Chazelas told StackExchange that in July 2014 he’d reported a vulnerability in glibc localization. It was a multiple directory traversal vulnerability within the GNU C Library that allowed attackers to hack into git servers, provided they were able to upload files there. This was CVE-2014-0475, where 2014 is the year of its discovery and 0475 is the number of the vulnerability as reported that year.

CVE-2014-0475 was not Shellshock, but in that case BASH was used as the login shell of the git unix user. And this early work with BASH probably got Chazelas thinking about other things that led him to Shellshock. What he noticed as he continued to fuzz was that Bash seemed to allow an adversary to run malicious code, unchecked, on another system, because BASH was not properly sanitizing its input.

What does that mean, sanitizing? Sanitizing generally means that the application looks for invalid inputs and rejects those that don’t conform to its specifications. What Chazelas noticed was that he could add some extra content to a BASH request, and BASH would go ahead and simply process the request without question. He noticed this when using OpenSSH.
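In shell terms, sanitizing might look like checking a value against an allow-list before using it. A hypothetical sketch, not code from bash itself:

    # Hypothetical input check: accept only alphanumerics, underscores,
    # and hyphens; reject everything else.
    read -r user_input
    if [[ "$user_input" =~ ^[A-Za-z0-9_-]+$ ]]; then
        echo "accepted: $user_input"
    else
        echo "rejected: invalid characters" >&2
        exit 1
    fi

Bash’s function-import code did nothing of the sort; it handed the incoming value straight to its parser.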

SSH, or Secure Shell, is an encrypted connection over port 22. It is used to connect two computer systems together securely, so that what you type on one shows its result on the other. In bash, it had been noted that any environment variable whose value began with the four characters '() {' (open parenthesis, close parenthesis, space, open brace) was specially processed. This is where we start, with OpenSSH. Chazelas reported the initial Shellshock vulnerability, CVE-2014-6271, on September 12th, 2014, but it wasn’t made public then.
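That special processing is easy to demonstrate. The widely circulated test for CVE-2014-6271 captures the whole bug in one line:

    # On a vulnerable bash, this prints "vulnerable" and then "this is a test".
    # bash imports x as a function because its value starts with "() {",
    # but then keeps executing the command trailing the definition.
    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

A patched bash ignores the trailing command and prints only the test message.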

Whenever a vulnerability is discovered, it is the responsibility of the discoverer to first inform the vendor, or, in the case of open source, the maintainers, privately. This is not a rule, but it is good software etiquette. Over at BASH, Brian Fox had long since moved on, so Chazelas reached out to Chet Ramey, who on September 16th set about creating fixes for all the current and past versions of bash, going back to version 3.0.

Concurrently, Florian Weimer, who works for Red Hat, confirmed Chazelas’s findings and helped him share the details in secret with a select few internet infrastructure providers and Linux distributors, including Debian, Red Hat, Ubuntu, SuSE, and others.

Delays in publicly reporting vulnerabilities are sometimes necessary and have happened before with other vulnerabilities, though only when the vulnerability is particularly significant. For example, through both the Finnish and US CERTs, the details of Heartbleed were given to several companies ahead of public disclosure, making sure that banking and ecommerce websites that used OpenSSL were patched in time. Perhaps even more significant was in 2008, when researcher Dan Kaminsky found a fundamental flaw in the Domain Name System (DNS) protocol, one that could lead to cache poisoning. Before he presented his findings publicly at Black Hat USA, Kaminsky coordinated his discovery first with most of the major internet players, and for that simple courtesy, Kaminsky is often cited as the guy who saved the Internet.

It’s also possible that Chazelas could have taken another route entirely. Rather than reach out to Weimer, he could have sold his vulnerability on the dark market or directly to the intelligence community. He told the Sydney Morning Herald: “We joked about how much I could sell it to GCHQ/NSA, or negotiate a pay raise. But in my mind, there’s never been a doubt that the first thing to do was to get it fixed ASAP and minimize the impact. My job as an IT manager is to minimize the risk and put out fires.”

When Chazelas’s initial vulnerability in BASH finally became public on September 24th, he nicknamed it “BASHDOOR”. It was immediately a big deal, at least online, as researchers began to see how it could be exploited and how bad it could be. In fact, NIST’s National Vulnerability Database gave it a severity rating of 10 out of 10.

So our story, really, could have ended here. I mean, Chazelas responsibly disclosed a twenty-something-year-old flaw in bash, appropriate companies were given a heads-up, the vulnerability was given a CVE with a high severity rating, and a patch was publicly disclosed. All good. Right?

Unfortunately, CVE-2014-6271, aka Bashdoor, turned out to be only the tip of the iceberg.

As different online forums and listserv groups began to buzz about the Bashdoor vulnerability, they quickly settled upon something really important to discuss: its name. Clearly Bashdoor was uninventive, and, well, it didn’t have a cute logo. So Bashdoor quickly morphed into Bashbug, which did have a cute logo, and that was quickly followed by Shellshock, a name first credited to Andreas Lindh. With that came an even better, if subject to possible copyright violations, cute logo. Shellshock, as a name, stuck and became the name going forward.

This momentary obsession over the name is not entirely a joke. I know there’s a whole pro and con argument within the infosec community about whether to name critical vulnerabilities, and certainly whether or not they need cute logos. Okay, maybe we can all agree to forgo the cute logos, but I talked to the people at Codenomicon who found Heartbleed, and I asked them specifically about the criticism they received in naming it. They argued back that, really, no one’s going to remember CVE-2014-0160. That’s true. Given that this episode has talked about CVE-2014-0475 and now CVE-2014-6271, I think you can start to see the confusion. So when we give the really severe vulnerabilities unique names, they stand out from the thousands of other CVEs issued in a single year. Not only that, the named ones get the media visibility that is sometimes necessary to get a patch out quickly.

Really, would you have listened to a podcast about CVE-2014-6271?

In retrospect, this really was a good thing, because Shellshock didn’t end up being just a single CVE, so deciding upon a name early was a good call; it really did keep things straight when talking to other infosec people at the time.

But the name wasn’t the only controversy lighting up the online forums and listservs. It seems more than one person had the uneasy feeling that the original Shellshock CVE only scratched the surface. And they were right.

Shellshock would prove to be an extremely bad vulnerability. Unlike Heartbleed, Shellshock was easy to exploit on vulnerable systems, and it granted attackers immediate control of those systems when successful. What is worse, the initial understanding of the problem was faulty, so the carefully crafted response developed at first did not fully fix the problem.

Whenever a patch goes public, both the good guys and the bad guys have equal access to it. They both reverse engineer it, meaning they try to step through what parts of the code were changed so they can get some idea of what the original vulnerability did. So it’s a race to see if anyone can exploit that flaw before everyone gets their system patched. That’s on the bad side.

On the good side, researchers try to reverse engineer it so they can look for other vulnerabilities like it and check whether or not the patch has done its job effectively. Among the skeptics were two researchers at Google: Tavis Ormandy and Michal Zalewski (you remember, the guy who created AFL).

Zalewski started fuzz testing BASH, identifying and isolating interesting syntax based on coverage signals; his fuzzer derived thousands of distinctive test cases. Fuzzers sometimes take a while to produce meaningful results. For the first few hours, AFL kept rediscovering issues that were already known. And then it started to find new flaws in Bash, flaws affecting Bash’s parsing of function definitions in environment variables. These were designated CVE-2014-6277 and CVE-2014-6278.

Meanwhile, Ormandy discovered that he could convince the Bash parser to keep looking for a filename for output redirection beyond the boundary between the untrusted string and the actual body of the program that bash is being asked to execute. In other words, he could piggyback malicious code. This was designated CVE-2014-7169. So that’s three additional vulnerabilities on top of Chazelas’s initial finding, going deeper into the problem.

Then Todd Sabin and Red Hat’s Florian Weimer independently disclosed a static array overflow in the BASH parser. This was CVE-2014-7186, the fifth Shellshock vulnerability. Weimer went on to find an off-by-one counter error in Bash as well. That was CVE-2014-7187.

There were now six vulnerabilities associated with Shellshock, all found mostly through fuzzing within a two-week window. Chet Ramey, meanwhile, was producing updated versions of Bash that incorporated these new findings.

By the beginning of October, three weeks after Chazelas first reported his finding, Zalewski completed his fuzzing of the latest patches and announced that Bash was indeed hardened, and that Shellshock would no longer work on a patched system.

In a blog post, he summed up that the shell function import feature in BASH “was clearly added with no basic consideration for the possibility of ever seeing untrusted data in the value of an environmental variable. This lack of a threat model seems to be the core issue…”

Not to put too fine a point on it, but damn, that’s an accurate accounting of Shellshock.

Wait, what? So that’s it?

I said at the beginning that Shellshock was perhaps the most dangerous vulnerability ever reported. It’s a 10. And it’s easy to exploit.

So where’s the worm? Where’s all the carnage?

In the beginning there were some reports of exploits in the wild, but not much. Wired reported that some botnets were using the initial vulnerability to spread, but later patches mitigated that. And then, more importantly, there was that massive Yahoo data breach reported only a few weeks after Shellshock went public. But Yahoo quickly confirmed that the breach, the largest in US history with over 3 billion (billion, with a b) user accounts affected, was not related to Shellshock.

The thing is, security is a tricky thing. As Futurama reminds us, sometimes when you do the right thing, it seems as though you did nothing at all. So rather than ask why it didn’t break or crash, we should instead talk about what went right. Although the initial vulnerability was misunderstood, as more and more researchers looked at Bash and began to fuzz test it, they started to expose the underlying issues. There was communication, and more importantly there was cooperation and coordination. Remember: “given enough eyeballs, all bugs are shallow.”

Yet it seems like fuzz testing comes in fits and starts. I mean, in 1995 Miller published four academic papers calling for more fuzz testing, and then it appeared that nothing happened. Then in 2014, the Linux Foundation embarked on a process to fuzz open source. It seems at times that not much has happened since then.

Actually that’s not true at all.

In April 2012, Google announced ClusterFuzz, a cloud-based fuzzing infrastructure that is used for testing security-critical components of the Chromium web browser. Security researchers can also upload their own fuzzers and collect bug bounties if ClusterFuzz finds a crash with the uploaded fuzzer.

In September 2016, Microsoft announced Project Springfield, a cloud-based fuzz testing service for finding security critical bugs in software.

In December 2016, Google announced OSS-Fuzz which allows for continuous fuzzing of several security-critical open-source projects.

Maybe the fact that we don’t have recent examples of cleverly named vulnerabilities with cute logos doesn’t mean we’re not testing enough. Maybe it means the process is finally starting to work. There may not be many more juicy vulnerabilities in old Bash or OpenSSL code still in use, but we’re producing over 111 billion lines of new code every year. And if we manage to stay on top of all that, shifting left with our testing and integrating fuzz testing into our CI/CD pipelines, then hopefully we won’t ever see another Heartbleed or Shellshock again.

Stay up to date with the Hacker Mind by following us on Twitter @Th3H4ck3rm1nd.


*** This is a Security Bloggers Network syndicated blog from Latest blog posts authored by Robert Vamosi. Read the original post at: https://forallsecure.com/blog/the-hacker-mind-shellshock