What the TikTok Ban Could Mean for GRC

The White House and TikTok’s critics in Congress have made it clear: They consider TikTok a dangerous social media app and national security threat. Throughout the March 23, 2023, congressional hearing, TikTok CEO Shou Zi Chew defended the company against these charges. Still, lawmakers strongly supported a full ban on the popular short-video app owned by ByteDance, a Chinese company. Rep. Tony Cárdenas (D-Calif.) cited “life and death” issues connected to the app, which has 150 million U.S. users.

Due to national security concerns, over a dozen countries have introduced full, partial or public sector bans on TikTok. Most of these bans target the public sector or government devices, but a growing number of private companies are blocking the app. The U.S. has threatened to ban TikTok outright unless its Chinese owner sells off the U.S. version of the app.

CISOs, CIOs and risk professionals are well aware of social media data security and privacy issues. But the TikTok debate and its possible outcomes have wider-reaching governance, risk and compliance (GRC) implications. The marriage of data collection with automated, AI-enabled applications could usher in an era of devastating cybersecurity incidents that spread with unprecedented speed using adaptive social-engineering intelligence.

Ultimately, spying through the collection of TikTok data isn’t the primary concern of governments. Rather, it’s the ability to use that data paired with artificial intelligence to spread disinformation, create a movement or control the perception of users.

Social media’s role in the 2016 U.S. presidential election made headlines as people sought to understand the full impact of fake news, polarizing filter bubbles and Russian propaganda campaigns executed across social media platforms.

In 2018, numerous Silicon Valley tech executives publicly claimed that social media harms humanity. Chamath Palihapitiya, a former Facebook vice president, pronounced that social media is “ripping apart the social fabric of how society works.” Sean Parker, who served as Facebook’s first president, warned that social media “exploit[s] a vulnerability in human psychology” to turn children into addicts and interfere with productivity.

If these proclamations didn’t serve as a wake-up call to U.S. companies, lawmakers’ argument that TikTok poses a national security threat serious enough to outweigh the wishes of the millions of people and businesses that use it should. Pairing innovative AI technologies, like ChatGPT, with virtual reality (VR) gives threat actors the power to predict user behaviors and exploit human psychology at speeds that could make it impossible for the targets of disinformation and deepfake campaigns to distinguish what is real from what isn’t.

The proposed ban on the most popular smartphone app in the country foreshadows a tsunami of new AI apps that could be introduced into the workplace without any governance or risk controls. Today, there is no standard approach or methodology for deploying AI within an enterprise. Unlike a SaaS-based application that is housed in a managed environment like AWS and comes with cloud security controls, AI sits on top of that tech stack, so it is effectively boundaryless.

Business executives tend to prioritize productivity improvements over cybersecurity concerns, but automation is inherent in AI. An AI-enabled application adapts its own behavior and operates with a degree of autonomy in its interactions with users. If AI is developed without cybersecurity guardrails, the opportunity to control it may be lost forever.

Before the AI wave sweeps over us and creates irreparable damage, CISOs and risk managers need to quickly get AI controls in place. Then they can start thinking about the use cases that will deliver the greatest business value.

The high-stakes cybersecurity implications of AI, ChatGPT and TikTok have a lot of folks racing to promote and harmonize best practices, standards and frameworks for AI and related technologies. Cybersecurity professionals can use these resources to build their AI governance and risk management programs.

  • The Holistic Information Security Practitioner Institute (HISPI) is an independent training, education and certification 501(c)(3) nonprofit organization that is working to crowdsource and open source AI governance standards that are suitable for highly regulated organizations.
  • In collaboration with private and public sectors, NIST has developed the NIST AI Risk Management Framework (AI RMF) to help organizations incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems. NIST’s AI taxonomy helps simplify the categorization of AI life cycle risks so that stakeholders may better recognize and manage them (see the sketch after this list for one way a team might apply that taxonomy).
  • Individuals from groups like AI Squared and Forward Edge-AI are helping companies adopt and integrate AI correctly. AI Squared’s web browser code initiates an AI work process or workflow based on approved enterprise use cases. This allows businesses to quickly scale AI without building an entire ecosystem to run it. Forward Edge-AI’s tools are helping under-resourced security operations stay ahead of threats.
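
As one illustration of how a standard taxonomy keeps stakeholders on the same page, the sketch below models a minimal AI risk register keyed to the four core functions of the NIST AI RMF (Govern, Map, Measure, Manage). It is a hedged example, not NIST tooling: the class names, fields and sample entry are hypothetical, and a real register would carry far more detail.

```python
# A minimal sketch, not NIST tooling: an AI risk register keyed to the
# four core functions of the NIST AI RMF (Govern, Map, Measure, Manage).
# All class names, fields and example values below are hypothetical.
from dataclasses import dataclass, field
from typing import List

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")


@dataclass
class AIRisk:
    system: str           # the AI-enabled application or service
    lifecycle_stage: str   # e.g., "design", "development", "deployment"
    description: str       # what could go wrong
    rmf_function: str      # which AI RMF function owns the response
    owner: str             # accountable stakeholder

    def __post_init__(self) -> None:
        if self.rmf_function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {self.rmf_function}")


@dataclass
class RiskRegister:
    risks: List[AIRisk] = field(default_factory=list)

    def add(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def by_function(self, rmf_function: str) -> List[AIRisk]:
        # Retrieve every risk filed under one AI RMF function.
        return [r for r in self.risks if r.rmf_function == rmf_function]


# Example entry: sensitive data pasted into a generative AI assistant.
register = RiskRegister()
register.add(AIRisk(
    system="internal-genai-assistant",
    lifecycle_stage="deployment",
    description="Employees paste customer data into prompts",
    rmf_function="Map",
    owner="CISO",
))
print(len(register.by_function("Map")))  # -> 1
```

Even a structure this small gives governance, security and business stakeholders a shared vocabulary for where a risk sits in the AI life cycle and who owns the response.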

AI will undoubtedly make our lives and work streams more productive. Businesses will derive value and achieve performance metric improvements through AI. But the boundaryless nature of AI means that threat actors will be able to identify and exploit the weaknesses within an organization’s security controls faster than defenders can adapt.

GRC program leaders should take heed of the proposed TikTok ban. Act now before it is too late. Get started with a strong governance framework. Build the framework using a standard taxonomy that helps all stakeholders understand and control AI risks. Then make sure employees stay within the guardrails. Outsource the task if the project is too big or falls too far outside your team’s domain of expertise. Taking action now is the only way to make sure that AI serves the business securely, safely and correctly today and in the future.
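
To make the “stay within the guardrails” step concrete, here is a minimal sketch of an approved-use-case gate, assuming the governance framework maintains an allow-list of enterprise AI use cases. The use-case names and the check_ai_guardrail function are hypothetical, not drawn from any particular product:

```python
# A minimal sketch of an "approved use case" guardrail, assuming the
# governance framework maintains an allow-list of enterprise AI use cases.
# The use-case names and check_ai_guardrail function are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrail")

APPROVED_AI_USE_CASES = {
    "customer-support-summarization",
    "code-review-assist",
    "phishing-triage",
}


def check_ai_guardrail(use_case: str, requester: str) -> bool:
    """Allow an AI workflow only if its use case is on the approved list."""
    approved = use_case in APPROVED_AI_USE_CASES
    # Log every decision so the GRC team has an audit trail.
    log.info("AI use case %r requested by %s: %s",
             use_case, requester, "approved" if approved else "blocked")
    return approved


if __name__ == "__main__":
    check_ai_guardrail("customer-support-summarization", "alice")  # approved
    check_ai_guardrail("autonomous-contract-signing", "bob")       # blocked
```

A real deployment would tie a gate like this into identity management and data loss prevention controls, but even this much forces every new AI workflow through the governance framework before it touches business data.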


Aric Perminter

Aric K. Perminter is Founder, Chairman and CEO of Lynx Technology Partners, a trusted governance, risk and compliance (GRC) managed service partner to a growing list of customers in highly regulated industries worldwide. Respected for his altruism and visionary leadership, Mr. Perminter has helped hundreds of companies achieve a strong cybersecurity stance and high performance throughout his 25-year career. As chairman of the Board of Directors, he is responsible for formulating and executing long-term strategies and interacting with clients, employees, and other stakeholders. He assists Lynx’s CEO with making decisions and establishing policies, setting the tone for the company’s values, ethics, and culture. Mr. Perminter exemplifies Lynx’s commitment to helping its clients achieve high performance. He is a proven leader with deep expertise in developing strong customer relationships, a passion for building outstanding client teams, and a disciplined focus on operations and execution.

In his 25-year career, Mr. Perminter has held a wide variety of leadership positions across key parts of information technology businesses. He founded Lynx in March 2009 and served as its CEO through August 2015. Prior to founding Lynx, he was Regional Sales Manager of Lumension Security’s northeastern region, which serves clients’ endpoint security and risk management needs. From June 2004 through October 2007, he was a partner at Secure Technology Integration Group, with primary responsibility for STIGroup’s business development program; in that role, he oversaw sales, marketing, and partner management initiatives globally. He also served as Founder & CEO of Precise Technologies Group from January 1998 through August 2003, when it was successfully sold to Infinity Consulting Group. After serving in the United States Army, Mr. Perminter spent his earlier career with Greenwich Technology Partners in the Financial Services Operating Group, where he served as Sales Manager for the northern region.

Outside Lynx, Mr. Perminter is the second member and shareholder of THREAT STREAM and an investor in Security Current and CloudeAssurance. He serves on the executive boards of BCT Partners, Cyversity, and Cyware, is a member of the Employer Advisory Council for Per Scholas, and is an Advisory Board Member of CloudeAssurance.
