Clearview AI’s source code and app data exposed in cybersecurity lapse

A security lapse at controversial facial recognition startup Clearview AI left its source code, some of its secret keys and cloud storage credentials, and even copies of its apps publicly accessible. TechCrunch reports that the exposed server was discovered by Mossab Hussein, chief security officer at cybersecurity firm SpiderSilk, who found that it was configured to let anyone register as a new user and log in.

Clearview AI first made headlines back in January, when a New York Times exposé detailed its massive facial recognition database, which consists of billions of images scraped from websites and social media platforms. Users upload a picture of a person of interest, and Clearview AI’s software will attempt to match it with any similar images in its database, potentially revealing a person’s identity from a single image.

Since its work became public, Clearview AI has defended itself by saying that its software is only available to law enforcement agencies (although reports claim that Clearview has been marketing its system to private businesses, including Macy’s and Best Buy). Poor cybersecurity practices like these, however, could allow this powerful tool to fall into the wrong hands outside the company’s client list.

According to TechCrunch, the server contained the source code for the company’s facial recognition database, as well as secret keys and credentials granting access to some of its cloud storage, which held copies of its Windows, Mac, Android, and iOS apps. Hussein was able to take screenshots of the company’s iOS app, which Apple recently blocked for violating its rules. The company’s Slack tokens were also accessible, which could have allowed access to its private internal communications.