Why skepticism is important in computer security: Watch James Mickens at USENIX 2018 argue for thinking over blindly shipping code

“Technology, in general, and computer science in particular, have been hyped up to such an extreme level that we’ve ignored the importance of not only security but broader notions of ethical computing.”
– James Mickens

We like to think that things are going to get better. That, after all, is why we get up in the morning and go to work, in the hope that we might just be making a difference, that we’re working towards something.

That’s certainly true across the technology landscape. And in cybersecurity in particular, the belief that you’re building a more secure world – even if it’s on a small scale – is an energizing and motivating thought.

However, at this year’s USENIX Security Symposium back in August, Harvard professor James Mickens attempted to put that belief to rest. His talk – titled ‘Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?’ – was an argument for skepticism in a field that is by nature optimistic (not least when it has a solution to sell).

So, what exactly does Mickens have against keynote speakers? Quite a lot, actually: he jokingly calls them people who have made bad life decisions and poor role models.

Although his tongue is firmly in his cheek, he does have a number of serious points. Fundamentally, he suggests developers do not invest time in questioning anything, since any degree of introspection would “reduce the frequency of git commits”.

Mickens’ argument is essentially that software developers are deploying new systems without a robust understanding of those systems.

Why machine learning highlights the problem with computer science today

Mickens stresses that the hype and optimism around modern technology and computer science have led the field to largely forget the value of skepticism. In turn, this can be dangerous for issues such as security and ethics. Take machine learning, for instance. Machine learning is, Mickens says, “the oxygen that Silicon Valley is trying to force into our lungs.” It’s everywhere, and we seem to need it – but it’s also being forced on us, almost blindly.

Using the example of machine learning, he illustrates his point about domain knowledge:

  1. Computer scientists do not have a deep understanding of the mathematics used in machine learning systems.
  2. There is no reason or incentive for computer scientists to even invest their time in learning those things.

This lack of knowledge means ethical issues and security issues that may be hidden at a conceptual level – not a technical one – are simply ignored.

Mickens compares machine learning to a standard experiment American students encounter around 8th grade: the egg drop. Students search desperately for a way to keep an egg from breaking when dropped from 20 feet in the air. When they finally hit on a technique that works, Mickens explains, they don’t really care to understand the logic or math behind it.

Developers behave in exactly the same way with machine learning. Machine learning is complex, yes, but often, Mickens argues, developers have no understanding of why a model generates a particular output for a given input.

When this inscrutable AI is wired into real-life, mission-critical systems (financial markets, healthcare systems, news distribution, and so on) and connected to the internet, security issues arise. Indeed, it starts to raise more questions than it answers.
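One concrete way this opacity becomes a security problem is the adversarial-example effect: a small, targeted nudge to a model’s input can flip its output entirely. The sketch below is not from Mickens’ talk – the toy linear “model”, its weights, and the perturbation size are all invented for illustration, and real deep networks are far harder to inspect than this:

```python
# Toy linear "model": score = w . x + b; classify "approve" if score > 0.
# All weights and inputs are invented, illustrative values.
W = [2.0, -3.0, 1.0]
B = 0.5

def score(x):
    return sum(w * xi for w, xi in zip(W, x)) + B

def classify(x):
    return "approve" if score(x) > 0 else "deny"

def sign(v):
    return 1.0 if v > 0 else -1.0

x = [0.5, 0.4, 0.3]
print(classify(x))          # "approve" (score = 0.6)

# FGSM-style attack: nudge every feature slightly against the weights.
eps = 0.3
x_adv = [xi - eps * sign(w) for xi, w in zip(x, W)]
print(classify(x_adv))      # "deny" (score = -1.2)
```

A per-feature change of just 0.3 flips a confident “approve” into a “deny” – and in a million-parameter network, unlike this three-weight toy, nobody can read off why.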

Now that AI is used practically everywhere – even to detect anomalies in cybersecurity – it is somewhat scary that a technology this unpredictable is being used to protect our systems.

Examples of poor machine learning design

Some of the examples Mickens presented that caught our attention were:

  1. Microsoft’s chatbot Tay: Tay was originally intended to learn language by interacting with humans on Twitter. That sounds all well and noble – until you realise that, given the level of toxic discourse on Twitter, your chatbot will quickly turn into a raving Nazi with zero awareness that it is doing so.
  2. Machine learning systems used for risk assessment in criminal justice have incorrectly labelled Black defendants as “high risk” at twice the rate of white defendants.
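Tay’s failure mode is easy to reproduce in miniature: a bot that learns its replies from unfiltered user input will faithfully echo whatever its loudest users feed it. A hypothetical sketch – the class name, messages, and poisoning scenario are all invented for illustration, not how Tay was actually built:

```python
from collections import Counter

class NaiveLearningBot:
    """Learns replies from raw user messages -- no filtering at all."""

    def __init__(self):
        self.phrases = Counter()

    def learn(self, message):
        # Trusts every input equally: this is the design flaw.
        self.phrases[message] += 1

    def reply(self):
        # Parrots whatever it has seen most often.
        return self.phrases.most_common(1)[0][0]

bot = NaiveLearningBot()
for msg in ["hello!", "nice day", "hello!"]:
    bot.learn(msg)
print(bot.reply())   # "hello!" -- looks harmless so far

# A handful of coordinated users is enough to poison it.
for _ in range(5):
    bot.learn("<toxic message>")
print(bot.reply())   # "<toxic message>"
```

The security lesson is about threat modelling, not code: the training channel is an attack surface, and a system that treats all input as trustworthy has already decided to amplify its most motivated users.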

It’s time for a more holistic approach to cybersecurity

Mickens further argues that we need a more holistic perspective on security. To get there, developers should ask not only whether a malicious actor can perform illicit actions on a system, but also whether a particular action should be possible at all, and how the system can achieve societally beneficial outcomes.

He says developers make three major assumptions when deploying a new technology:

#1 Technology is Value-Neutral, and will therefore automatically lead to good outcomes for everyone
#2 New kinds of technology should be deployed as quickly as possible, even if we lack a general idea of how the technology works, or what the societal impact will be
#3 History is generally uninteresting, because the past has nothing to teach us

According to Mickens, developers assume far too much. In his assessment, those of us working in the industry take it for granted that technology will always lead to good outcomes for everyone. This optimism goes hand in hand with a need for speed – which in turn can lead us to skip important risk assessments and security testing, and to lose the broader view of technology’s impact not just on individual users but on wider society too.

Most importantly, for Mickens, we are failing to learn from our mistakes. In particular, he focuses on IoT security. Here, Mickens points out, security experts are failing to learn lessons from traditional network security issues. The Harvard professor has written extensively on this topic – you can read his paper on IoT security.

Perhaps Mickens’ talk was intentionally provocative, but there are certainly lessons in it – if 2018 has taught us anything, it’s that a dose of skepticism is healthy where tech is concerned. And maybe it’s time to cast a critical eye over the software we build.

If the work we do is to actually matter and make a difference, maybe a little negativity is a good thing. What do you think? Was Mickens’ assessment of the tech world correct?

You can watch James Mickens’ whole talk on YouTube.




*** This is a Security Bloggers Network syndicated blog from Security News – Packt Hub authored by Melisha Dsouza. Read the original post at: https://hub.packtpub.com/why-skepticism-is-important-in-computer-security-watch-james-mickens-at-usenix-2018-argue-for-thinking-over-blindly-shipping-code/