
How Can We Improve Trust in Technology?

Who do you trust online?

That question may be the No. 1 litmus test over the next decade for governments, companies, social media interactions, politicians, traditional news media, global websites and the people who use technology.

Allow me to explain one perspective on this trust question with an example from this past week.

It was Tuesday morning, Jan. 29, 2019, and I was wrapping up my morning tour of intriguing LinkedIn posts. One summary of takeaways from the World Economic Forum (WEF) in Davos, Switzerland, grabbed my attention. You can read that well-written post here; it is titled “My three takeaways from Davos,” by Stefan Oschmann, chairman of the executive board and CEO of Merck Group.

I really liked Mr. Oschmann’s three key takeaways:

1) Let’s be positive — There are many problems in the world to worry about, but …

2) Data is driving the change — “The key drivers are artificial intelligence, the ever-increasing availability of data and the decreasing costs of technology. …”

3) We need to build trust in science and tech — “… As business leaders, we must put great effort into demonstrating why and how our work benefits society and act on the basis of firm values. …”

He ended with these words: “Now I’m interested in your thoughts. Do you agree with me or do you have a different view? In general: Which topics are most important to you?”

I decided to take the Merck Group chairman and CEO up on his offer. I shared my viewpoints in the LinkedIn comments box.

But to my surprise, my comments quickly disappeared. Evidently, someone at Merck (I seriously doubt it was the CEO author; more likely some social media person, or maybe someone at LinkedIn) didn’t like who I was, what I was saying, or both.

Yes — my comments were respectful and did not violate any LinkedIn rules. I even complimented the author and his insights. I also asked some probing questions.

This situation made me fairly upset. So after waiting over an hour, I decided to post my comments a second time and take a screenshot to see what happened next.

I wrote: “Thank you Stefan Oschmann. An intriguing, helpful summary. I agree with your three important items. However, we’ve been talking about building trust in technology for the past decade, and things are actually getting worse. With the UK story of child Internet addiction, along with fake news and similar stories appearing almost daily around the world to undermine trust, how do you propose we do that?”

Sadly, that post was also removed in less than a minute. (Side question: I wonder how that happened so fast? When I report a comment as needing review on LinkedIn, it usually takes at least several hours for anything to happen.)

I don’t know why some LinkedIn comments were allowed to remain and others were not. However, it quickly became clear to me that someone with an agenda was allowing only the comments that matched their overall narrative.

To be fair, I would certainly understand if comments were deleted for vulgarity, inappropriate language, unprofessional attacks against a person and the like, but that was certainly not the case. (I have seen far worse comments from many people stay up on other LinkedIn posts.)

The sad reality is that similar things have happened to me (and many others I know) on LinkedIn and on other websites and technologies over the years. These situations do not build trust in the social media platform, the author or the company the author works for. The author’s piece, which was well written and intended for good, called for staying positive and building trust in science and technology, yet it was undermined by this very process. It also gave the appearance of allowing only select individuals into the conversation.

Trust in Society

Moving on to the wider trust-in-technology question, I believe that we have deep and wide trust issues right now around the world, both online and offline.

This piece from TheConversation.com describes the reality that “people are both increasingly dependent on, and distrustful of, digital technology. They don’t behave as if they mistrust technology. Instead, people are using technological tools more intensively in all aspects of daily life. …”

Last year, Forbes offered this piece describing the institutions that Americans trust most and least.

I encourage you to read the Forbes article. But in summary, we tend to trust the military and small business and mistrust big business, newspapers, the criminal justice system, television news and Congress. In the case of business, far more Americans trust small business (67 percent) than big business (only 25 percent).

Meanwhile, new technologies can sometimes diminish trust, such as the growing use of “deepfakes,” fake news, fake apps and even fake government websites.

Phishing scams, hoaxes, and online fraud also diminish trust online — along with cyberattacks.

But despite these challenges, Americans trust technology more than science, according to The Washington Post: “It is odd that many Americans question scientific expertise yet surrender autonomy when they enter the techno-electronic worlds of the Internet. …”

Most people can’t live without their smartphones, and they trust online map directions to get them to their destination or a Google search to bring them the right answer.

People, Process and Technology

More and more, we are being asked to trust new forms of technology, like AI, autonomous cars and other innovations. And yet, at the same time, studies are showing that trust in self-driving cars has plummeted.

So can we build trust?

I like this quote from a U.K. blog: “In order for people to know what to trust, they need markers of credibility, information about sources, ways to check the information and forms of recourse when they’ve been had. We need the codes, standards and policies that build capacity for individuals to act. …”

But if an organization wants to improve trust in a new (or old but improved) technology, it also needs to ensure that the people and processes surrounding that technology are trustworthy.

The people side of the equation reminds me of the “trust paradox,” which states that you can’t trust someone you don’t know, and you can’t begin to know someone without first trusting them. 

So even when the technology itself is reliable, trust in it can be undermined by failing to recognize the importance of the interaction between people, process and technology. My experience this week on LinkedIn offers a great example: a good message, using helpful technology, delivered by an expert with a good reputation was undermined by the process (someone deleting my comments). The end result was that trust was undermined, not built.

There are many other ways trust can be undermined online despite the innovations offered by well-meaning technologists. For example, when a bad actor launches a distributed denial-of-service attack against a bank or brings down a hospital with ransomware, mistrust spreads, even if a new solution offers hope for a better process.

Bringing It All Together: Who Will We Trust?

So what’s my main point?

Building trust in technology in the 21st century is very complicated. It takes patience and persistence. If history is any guide, trust won’t be built when problems are ignored or hidden. Constructive criticism must be an ongoing part of the process. Trust requires a transparent look at the real issues that arise and how they will be addressed.

For example, I may love your product’s features but still mistrust how it will be used or who’s in charge, or fear that its capabilities may be used to harm my privacy. Knowing about any user mistrust upfront will help in addressing concerns early.

This is similar to finding vulnerabilities in software via bug bounties (and patching those holes) before the bad guys find exploits to hack my system.  

I tend to be an optimist regarding technology. I like Stefan Oschmann’s first observation from Davos — I try my best to stay positive. Anyone who stays in cybersecurity for long needs to maintain the hope that they can redeem more parts of cyberspace and make a positive difference.

Nevertheless, I continue to see more and more ways that technology is being used for evil. Each of us must consider the ethical implications of what we are advocating.    

We also need to think more about cyberethics and integrity online to build trust in technology. Our criticisms should be constructive, engaging ideas and offering potential solutions.

In conclusion, as Stephen Covey once said, “Trust is the glue of life. It’s the most essential ingredient in effective communication. It’s the foundational principle that holds all relationships.”

We can’t waver in our efforts to build more trust online. 

UPDATE on Monday Feb. 4, 2019: I was contacted this morning by Christian Abold, the head of the CEO Office at Merck Group. He said, “I just read your article about the incident which occurred with your comment on Stefan Oschmann’s Davos article. As you can imagine — Stefan Oschmann is very happy about every single comment/feedback (positive/negative) he is receiving on his articles. That’s also the main driver and reason behind him posting his thoughts publicly. Anyhow sorry that this incident occurred. We are currently investigating this topic with LinkedIn. Please feel free to get in touch with me directly to discuss this topic in more details. …” He also asked me to repost my comment, which I did.

I am happy to report that my comment was accepted. I truly appreciate Merck Group’s quick resolution of this matter.