Extracting Secrets from Machine Learning Systems

This is fascinating research about how the underlying training data for a machine-learning system can be inadvertently exposed. Basically, if a machine-learning system trains on a dataset that contains secret information, in some cases an attacker can query the system to extract that secret information. My guess is that there is a lot more research to be done here.
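The core idea is simple to sketch. Below is a toy, hypothetical illustration in Python — a tiny character-level bigram model standing in for the real neural networks studied in the research, with a made-up "secret" and candidate set — showing how an attacker who can only query a model for likelihoods might rank guesses and recover a memorized secret:

```python
# Toy sketch of training-data extraction, assuming a simple
# character-level bigram language model. Everything here (the corpus,
# the secret format, the candidate set) is hypothetical.
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count character bigrams to build a simple language model."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def log_likelihood(counts, text):
    """Add-one-smoothed log-probability of `text` under the model."""
    vocab = 128  # assume an ASCII alphabet
    total = 0.0
    for a, b in zip(text, text[1:]):
        row = counts[a]
        total += math.log((row[b] + 1) / (sum(row.values()) + vocab))
    return total

# Training data that inadvertently contains a secret (a fake PIN).
training_text = "the weather is nice today. " * 50 + "user pin: 4831. "
model = train_bigram(training_text)

# The attacker queries the model with every candidate secret and
# ranks them; the memorized one scores anomalously high.
candidates = [f"user pin: {i:04d}." for i in range(10000)]
ranked = sorted(candidates, key=lambda c: log_likelihood(model, c),
                reverse=True)
print("top guess:", ranked[0])  # prints the memorized PIN
```

The same ranking trick scales up: with a large model, an attacker enumerates plausible secret formats and uses the model's own confidence to tell which candidate it has seen before.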

EDITED TO ADD (3/9): Some interesting links on the subject.
