
Apollo 11 software lessons still relevant today

What did Apollo 11 teach us about software development? Coding practices have changed since 1969, but the lessons learned from the moon mission still hold.


The original version of this post was published in Forbes.

Given the avalanche of media coverage this past week celebrating the 50th anniversary of the Apollo 11 moon landing, you surely know by now (if you didn’t before) that intense competition with the Russians, monumental courage, and American creativity and capability made it possible.

But, although it got a lot less coverage, coding did too.

Yes, in 1969 the internet was still 14 years away; the World Wide Web wouldn’t show up until eight years after that, and it wouldn’t become a “thing” for the masses for yet another five years or so.

But the Apollo 11 mission couldn’t have happened without computer code—software. As the Wall Street Journal put it a couple of weeks ago: “It took more than big rockets to put humans on the moon … It took code.”


Apollo 11 software development

Indeed, the “giant leap for mankind”—Neil Armstrong and Buzz Aldrin walking on the lunar surface—would have been aborted if the software hadn’t functioned correctly. A “program alarm” in the lunar module, known as “error code 1202,” brought Mission Control within seconds of scrubbing the landing. But a “restart” (reboot) provision in the software gave those back on Earth the confidence that the landing could proceed.

Fred Martin, 85, who managed much of the Apollo software development, told the WSJ, “The software saved the mission.”

In other words, software enabled one of the great technological achievements of the time.

Still, it is hard to imagine that coding done five decades ago would have any relevance today. The technology gap is vastly wider than that between the eight-track, analog gear the Beatles used to record “Abbey Road” and the digital, limitless-track, limitless-effect, tape-free studios that any garage band, never mind music star, uses today.

The complexity of the Apollo 11 software pales in comparison to that of today’s smartphones.

As statistical nerds have noted, the Apollo computer contained about 145,000 lines of code. Compare that to the estimated 62 million lines required today to power the social network Facebook, or the 2 billion it takes to operate Google. Those modern numbers aren’t just about volume either—they reflect the complexity of today’s programs, networks and systems.

Lance Eliot, writing in Forbes recently, noted: “Even your smartphone is by far superior in computer power than were the lunar lander computers.”

But Apollo 11 software and its development remain highly relevant. Eliot also argued that the lessons of Apollo 11 should be applied to the development of autonomous vehicles.

Indeed, in multiple ways it laid the foundation for what software development is, or ought to be, today.

Bug-free software

Start with Margaret Hamilton, now 82, the MIT computer programmer who led the team that created the onboard flight software for the Apollo missions. As The Guardian noted in an interview with her earlier this month, “Her rigorous approach was so successful that no software bugs were ever known to have occurred during any crewed Apollo missions.”


No software bugs. Perhaps if she consented (and it were possible) to be cloned multiple times, and all those Hamiltons were in charge of software development today, nobody would ever have heard of Patch Tuesday.

It was she and her team who wrote the software that included the “program alarm” and the restart capability that saved the landing. Speaking of those nail-biting moments, she said, “It quickly became clear the software was not only informing everyone that there was a hardware-related problem but was compensating for it—restarting and re-establishing the highest priority tasks.”

“The error detection and recovery mechanisms had come to the rescue. It was a total relief when they landed—both that the astronauts were safe, and that the software worked perfectly,” she told The Guardian.
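The real Apollo Guidance Computer Executive was hand-coded assembly on custom hardware, but the pattern Hamilton describes (detect overload, restart, and re-establish only the highest-priority work) is easy to illustrate. Here is a toy Python sketch; every name and number in it is invented for illustration, and none of it is actual Apollo code:

```python
# A toy sketch of priority-based restart, loosely inspired by the pattern
# Hamilton describes. Not AGC code; all names and numbers are invented.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    priority: int                 # lower number = more critical (toy convention)
    name: str = field(compare=False)

class Executive:
    """Toy priority scheduler: on overload, restart and shed low-priority work."""

    def __init__(self, capacity: int):
        self.capacity = capacity  # how many tasks fit in one cycle
        self.queue: list[Task] = []

    def schedule(self, task: Task) -> None:
        heapq.heappush(self.queue, task)

    def run_cycle(self) -> None:
        if len(self.queue) > self.capacity:
            self.restart()        # loosely analogous to the 1202 alarm path
        while self.queue:
            task = heapq.heappop(self.queue)
            print(f"running {task.name} (priority {task.priority})")

    def restart(self) -> None:
        # Keep only the most critical tasks; drop the rest and press on.
        self.queue = heapq.nsmallest(self.capacity, self.queue)
        heapq.heapify(self.queue)
        print("restart: re-established highest-priority tasks only")

executive = Executive(capacity=2)
for t in (Task(1, "landing guidance"), Task(3, "rendezvous radar"), Task(2, "crew display")):
    executive.schedule(t)
executive.run_cycle()
```

By most accounts, that is roughly what happened in 1969: the restart shed lower-priority work (the rendezvous radar processing that was flooding the computer) and kept the landing-critical jobs running.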

Rigorous oversight

Then you could read a bit of history from Chapter 2 of NASA’s Computers in Spaceflight: The NASA Experience, which notes that “Software engineering as a specific branch of computer science emerged as a result of experiences with large-size military, civilian, and spaceborne systems. As one of those systems, the Apollo software effort helped provide examples both of failure and success that could be incorporated into the methodology of software engineering.”

It adds: “Even during the early 1960s, the cycle of requirements definition, design, coding, testing, and maintenance [labeled a “software life cycle”] was followed, if not fully appreciated, by software developers.”

The NASA version of a software life cycle was indeed, in a word, rigorous, at least when it came to oversight.

There were three boards in charge of overseeing the design and construction of the spacecraft itself along with the software that would run it. Any changes in specifications had to run through one or more of those boards. According to NASA’s Stan Mann, “MIT could not change a single bit without permission.”

How many organizations do the equivalent of that today?

Yes, you can debug

Yet another foundational principle of software development established during the Apollo years was cited in Computer Weekly by Ella Atkins, director of the autonomous aerospace systems lab at the University of Michigan and an IEEE senior member.

“From the Apollo mission, we learned we could do the math calculations fast enough to allow the orbit to be calculated correctly. We learned we could debug code well enough so that there weren’t any problems,” she said.

And we are still debugging code today, in the quest to make it reliable for everything from autonomous vehicles to critical infrastructure.


So, how are we doing?

In Hamilton’s view, not all that well. She told The Guardian that one of the most important lessons of the Apollo mission still hasn’t been learned today.

“What became apparent with Apollo—though it is not how it worked—is that it is better to define your system up front to minimize errors, rather than producing a bunch of code that then has to be corrected with patches on patches. It’s a message that seems to have gone unheeded—in this respect, software today is still built the way it was 50 years ago,” she said.

Perhaps not entirely, but it is true that at every security conference, session after workshop after keynote features speakers preaching the gospel of “building security in” or “shifting left” during the software development life cycle (SDLC). That means addressing security and integrity “up front,” as Hamilton put it, instead of trying to patch it on later.

If the lesson had been learned, there would be no need to keep preaching it.
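To make “shifting left” concrete, here is a minimal, hypothetical sketch in Python: a check that runs in CI on every change and fails the build before risky code can land, instead of patching after release. It is a toy that flags only a couple of dangerous calls; real teams would use mature static-analysis tooling.

```python
# A toy "shift left" gate: run in CI on every commit and fail the build
# early, rather than patching problems after release. Illustrative only.
import ast
import sys

RISKY_CALLS = {"eval", "exec"}  # illustrative, not an exhaustive list

def find_risky_calls(source: str, filename: str) -> list[str]:
    """Flag direct calls to known-risky builtins in a Python source file."""
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(f"{filename}:{node.lineno}: call to {node.func.id}()")
    return findings

if __name__ == "__main__":
    problems = []
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            problems.extend(find_risky_calls(f.read(), path))
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # a nonzero exit fails the CI stage
```

The design point is the exit code: the pipeline stops the merge the moment a finding appears, which is the “up front” discipline Hamilton is describing, applied with modern tooling.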

Heavier lift

It’s not quite that simple, of course. Don Davidson, program management director at Synopsys, noted that debugging software today is a heavier lift, not to mention that the security threats in an online world are vastly more numerous and diverse than at the time of Apollo 11. Back then, they didn’t have to worry about being hacked from somewhere on the other side of the world.

“Apollo focused on quality and systems management/integration, which are good lessons for software development,” he says, “but the software debug control was all about functionality—looking at mission performance and availability—with little focus on confidentiality and integrity.”

In other words, quality doesn’t necessarily mean security.


Also, the code for Apollo was all custom—it didn’t use any COTS (commercial off-the-shelf) or open source components. But Davidson agrees with Hamilton’s main point. “The sheer volume of code today requires better and continual testing throughout the life cycle—you can’t do it at the end,” he says. “You need to build security in.”
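As a rough sketch of what Davidson’s “continual testing” can look like for those open source components, here is a toy Python dependency audit. The advisory data and package names are invented for illustration; a real pipeline would query a maintained vulnerability database on every build.

```python
# A toy dependency audit, run on every build rather than once at the end.
# The advisory data and package names below are invented for illustration.
KNOWN_VULNERABLE = {              # hypothetical advisories: package -> bad versions
    "examplelib": {"1.0.2", "1.0.3"},
}

def audit(pinned: dict[str, str]) -> list[str]:
    """Return one finding per pinned dependency with a known advisory."""
    return [
        f"{pkg}=={ver}: known advisory, upgrade before merging"
        for pkg, ver in pinned.items()
        if ver in KNOWN_VULNERABLE.get(pkg, set())
    ]

print(audit({"examplelib": "1.0.2", "otherlib": "2.4.0"}))
```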

Jim Manico, global board member of OWASP (Open Web Application Security Project), also agrees. “Security and quality would significantly increase if we listened to Hamilton,” he says. “I think the lack of discipline in building and designing software is a significant problem in our industry.”

But he believes the DevOps movement offers some hope. “The massive move to automate all processes in software development should help force discipline from a technical point of view, to get past the sloppiness that was the ‘90s and 2000s,” he says.

The costs of secure software development

Finally, money is always a factor. Travis Biehn, technical strategist at Synopsys, said the problem is not that building secure software is impossible.

“I think the argument here is economic,” he said. “It currently costs, in skill set required, in expertise required, in hours of development, too much to build correct software.

“A middle ground has been found where software can be built cheaply by commodity developers,” he said, but to address security, those commodity developers must be “locked into more rigorously engineered platforms.”

“Without a platform engineered to keep a commodity developer on the rails, there’s no chance,” he said.



*** This is a Security Bloggers Network syndicated blog from Software Integrity Blog authored by Taylor Armerding. Read the original post at: https://www.synopsys.com/blogs/software-security/apollo-11-software-development/