This is fascinating research about how the underlying training data for a machine-learning system can be inadvertently exposed. Basically, if a machine-learning system trains on a dataset that contains secret information, in some cases an attacker can query the system to extract that secret information. My guess is that there is a lot more research to be done here.
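To make the idea concrete, here is a minimal sketch of the attack, not any specific paper's method: a toy character-bigram language model is "trained" on a corpus that accidentally contains a secret (a hypothetical PIN), and the attacker, who can only query the model for likelihoods, ranks every candidate PIN in a known context. The memorized secret surfaces at the top.

```python
import math
from collections import Counter

# Toy "training data" that accidentally contains a secret.
# The PIN "8234" and the surrounding phrasing are made up for illustration.
corpus = ("the quick brown fox jumps over the lazy dog " * 50
          + "my pin is 8234 ")

# "Train" a character-bigram language model: counts of adjacent char pairs.
pair_counts = Counter(zip(corpus, corpus[1:]))
char_counts = Counter(corpus[:-1])
vocab = set(corpus)

def log_prob(text):
    """Log-likelihood of `text` under the bigram model (add-one smoothing)."""
    lp = 0.0
    for a, b in zip(text, text[1:]):
        lp += math.log((pair_counts[(a, b)] + 1) /
                       (char_counts[a] + len(vocab)))
    return lp

# The attack: enumerate all 10,000 four-digit candidates in the known
# context and rank them by model likelihood. Because the model has
# memorized the training string, the true PIN scores highest.
candidates = [f"{i:04d}" for i in range(10000)]
best = max(candidates, key=lambda c: log_prob("my pin is " + c + " "))
print(best)  # → 8234, the extracted secret
```

Real attacks target neural models and far larger candidate spaces, but the mechanism is the same: the model assigns measurably higher likelihood to sequences it memorized from training data, and an attacker who can query likelihoods can search for them.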
EDITED TO ADD (3/9): Some interesting links on the subject.
*** This is a Security Bloggers Network syndicated blog from Schneier on Security authored by Bruce Schneier. Read the original post at: https://www.schneier.com/blog/archives/2018/03/extracting_secr.html