
FaceApp Warns Us Once Again: Always Read the TOS

Even Leon understands the appeal of FaceApp. 

If you’ve been anywhere near the internet the past two weeks, you’ve most likely heard about FaceApp. The cute little app that lets people see an AI-generated peek at their future selves became a viral sensation, creating tidal waves of old faces crashing through social feeds – and with it, a lot of backlash about privacy and security. (Interestingly enough, this is actually the second time it’s shot through the cultural zeitgeist in as many years. Last time around, the app’s makers had to apologize for its poorly thought-out “ethnicity filters.”)

Depending on what a person reads or how they approach security, FaceApp is either a proxy for Russian troll farms to subvert American democracy, or a throwaway app with some sketchy terms of service, but … you know, whatever.

The reality is neither: the fact that FaceApp is developed in Russia doesn’t make it an automatic information funnel to Vladimir Putin. The Democratic National Committee was perhaps a bit overzealous in sending a panicked warning to staffers to delete the app (though the concern is understandable). At the same time, people really should pay more attention to what information they blindly surrender to developers in exchange for a quick hit of iPhone-induced dopamine.

Always Read the TOS!

As privacy and platform reporter Charlie Warzel wrote in the New York Times, FaceApp’s TOS are horrendous. With one or two clicks, every user hands the developer complete and total control over images of their own face – and probably some friends’ faces, too. Warzel points out that by downloading FaceApp, users grant the developers an “irrevocable, nonexclusive, royalty-free, worldwide, fully paid, transferable sub-licensable license” over each picture they use.

What does that mean? Well, that’s largely up to the people who made the app, and that’s the problem. FaceApp CEO Yaroslav Goncharov told the Washington Post that the company doesn’t “sell or share any user data with any third parties,” and that FaceApp deletes photos from its servers within 48 hours. That’s all well and good right now, but it doesn’t mean the company won’t turn around and start selling your data next week, next month or next year.

This is doubly important when it comes to our faces. Facial recognition is going to become a part of everyday life, whether it’s used for unlocking our phones and buying things online – which could enhance our personal security – or for training algorithms and feeding databases for smarter surveillance, which could do the opposite. We as a nation, and as individual users, haven’t entirely come to grips with what’s being built right now and how we’re contributing to it by playing fast and loose with our personal data.

This is the massive, gaping privacy hole in our online lives right now. Our data – our faces, demographics, activity, likes and dislikes – is the most valuable commodity available. The harmless little clicks we all make every day are worth billions of dollars in aggregate, and there’s no incentive for the marketers, app makers, advertisers and everyone else who can make money off that data to play fair in gathering it, beyond their own good faith.

That’s not enough.

Learn From Others’ Bad Faith

The lesson here is easy: Data is precious and should be protected at all costs. This applies to everyone and everything – casual app users, people who live and work online, small businesses and worldwide conglomerates. Protect your data. Treat everything like a potential threat, and drop that suspicion only once it’s been proven unwarranted.

This is the fundamental philosophy underpinning Zero Trust – “never trust; always verify” – and why we talk about it so much. It’s the only way to know that the people who access your data, be they app or device makers, advertisers, employees, or contractors, are who they say they are, and the only way to truly know if they’re acting in good faith.
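As a rough illustration, here is a minimal sketch (in Python, using the PyJWT library) of what “never trust; always verify” looks like in code: every request is authenticated and authorized on its own merits, no matter where it originates. The function name, key and scopes are hypothetical, not any real product’s API.

```python
import jwt  # PyJWT: pip install pyjwt

SIGNING_KEY = "replace-with-a-real-secret"  # hypothetical key; use a managed secret in practice

def verify_request(token: str, required_scope: str) -> dict:
    """Authenticate and authorize every request on its own merits --
    "never trust; always verify" in miniature."""
    # Verify the token's signature and expiry; an unsigned or expired
    # token is rejected even if the request comes from "inside" the network.
    claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])

    # Verify this specific caller may perform this specific action;
    # network location alone earns no implicit trust.
    if required_scope not in claims.get("scopes", []):
        raise PermissionError(f"token lacks required scope: {required_scope}")

    return claims
```

The point of the sketch is that there is no “trusted zone”: the same signature, expiry and permission checks run on every single request.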

For companies, this consists of two basic steps: first, make security a priority and organize it around a coherent philosophy; second, use a complete set of tools. We recommend a next-gen access model that wraps single sign-on, adaptive multi-factor authentication, workflow and lifecycle management, and enterprise mobility management into a single package, in order to reduce friction and avoid piecing it together from a mishmash of vendors. We happen to know a place to find that, too.
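To make “adaptive” concrete, here is a hypothetical risk-scoring sketch of how adaptive MFA typically decides when to demand a second factor. The signals, weights and threshold are illustrative assumptions, not any particular vendor’s product logic.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool      # has this device been seen for this user before?
    usual_location: bool    # does the source IP match the user's usual region?
    business_hours: bool    # is the attempt happening at a typical time?

def needs_second_factor(ctx: LoginContext) -> bool:
    # Illustrative weights and threshold -- real products tune these
    # (often with machine learning) rather than hard-coding them.
    risk = 0
    if not ctx.known_device:
        risk += 2
    if not ctx.usual_location:
        risk += 2
    if not ctx.business_hours:
        risk += 1
    # Low-risk logins proceed with just SSO; anything suspicious
    # gets stepped up to a second factor.
    return risk >= 2

# Example: a login from a new device outside business hours -> step up.
print(needs_second_factor(LoginContext(False, True, False)))  # True
```

The design choice being illustrated is friction-on-demand: routine logins stay seamless, while unusual ones pay the extra verification cost.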

For individuals, it’s much harder, but it all starts with the same defensive posture a business would take, combined with learning about where your data goes, how it’s shared and what you can do to keep as much of it in your own hands as possible.

It’s hard work, and it absolutely shouldn’t have to be, but it will pay off.


*** This is a Security Bloggers Network syndicated blog from Articles authored by Corey Williams. Read the original post at: https://www.idaptive.com/blog/FaceApp-Always-Read-TOS/
