Clearview AI fined £7.5m for scraping UK faces — ordered to delete data

The UK Information Commissioner’s Office (ICO) has fined facial recognition firm Clearview AI Inc £7.5 million for scraping millions of photos of British citizens from the internet without their permission.

Clearview AI Inc’s bots have crawled the internet to collect more than 20 billion images of people’s faces and data from publicly available information and social media platforms for its global database.

Paying customers can upload an image to the company’s application and check it against matches in the database, which stores images alongside metadata such as where each photo was sourced.
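The search flow described above — an uploaded image compared against a database of faces stored with source metadata — can be sketched roughly as follows. This is a minimal, hypothetical illustration of how such a reverse face search works in general, not Clearview AI’s actual system; the `embed` function is a toy stand-in for a trained face-recognition model.

```python
import math

def embed(image_pixels):
    """Toy embedding: normalise raw pixel values into a unit vector.
    A real system would use a trained face-recognition model here."""
    norm = math.sqrt(sum(p * p for p in image_pixels)) or 1.0
    return [p / norm for p in image_pixels]

def cosine_similarity(a, b):
    # Both inputs are unit vectors, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def search(query_image, database, threshold=0.9):
    """Return (metadata, score) pairs for entries above the match threshold,
    best match first."""
    q = embed(query_image)
    hits = []
    for entry in database:
        score = cosine_similarity(q, entry["embedding"])
        if score >= threshold:
            hits.append((entry["metadata"], score))
    return sorted(hits, key=lambda h: -h[1])

# Each record keeps the embedding alongside metadata such as the source URL.
db = [
    {"embedding": embed([1, 2, 3]), "metadata": {"source": "example.com/a"}},
    {"embedding": embed([9, 1, 0]), "metadata": {"source": "example.com/b"}},
]

matches = search([1, 2, 3], db)
print(matches[0][0]["source"])  # source metadata of the closest match
```

At scale, the linear scan above would be replaced by an approximate nearest-neighbour index, but the principle — embed, compare, return matches with their source metadata — is the same.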

It has over 3,000 US customers but does not currently do business in the UK.

Clearview AI fined by ICO: Indefinite data retention, no lawful reason

The fine follows a joint investigation with the Office of the Australian Information Commissioner (OAIC).

The two agencies found the US-based company in breach of data protection laws for “failing to have a lawful reason for collecting people’s information and failing to have a process in place to stop the data being retained indefinitely”. The privacy watchdogs also blasted the company for “asking for additional personal information, including photos, when asked by members of the public if they are on their database”.

Clearview AI’s CEO Hoan Ton-That claimed in January 2022 to have 3,100 law enforcement agencies around the United States as customers, saying that in “just one year, in just one field office within a federal agency, Clearview AI’s technology has resulted in the arrest of 66 child abusers and the rescue of 103 children.”

(The UK’s own national facial image database and matching service, available to police officers across the UK, currently holds almost 18.5 million facial images, sourced from custody images.)

John Edwards, UK Information Commissioner, said on May 23: “The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service.”

He added: “That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.” (The Stack has asked for further clarification from the ICO on what it means by “effectively monitors their behaviour” and will update this piece when we receive it. We have also asked Clearview AI for its response and when it plans to comply with the ICO’s enforcement notice.)

The ICO’s enforcement notice orders the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.

CEO Hoan Ton-That said in an open letter to customers in January 2022: “Both China and Russia have implemented real-time surveillance to target minority populations. We should not leave it to those countries to show the way for the world. We can set an example of using the technology, not in a real-time way, but in a way that protects human rights, due process, and our freedoms. It is heartbreaking that countries like Canada, the United Kingdom, and Australia, where I grew up, have misinterpreted my technology and intentions.”

He added: “I am disheartened by the misinterpretation of Clearview AI’s technology to society. I welcome the opportunity to engage in conversation with leaders and lawmakers so that the true value of this technology, which has proven so essential to law enforcement, can continue to make communities safe.”

Why is facial recognition controversial?

So, why is facial recognition controversial? As one recent report on the ethics of facial recognition notes, “Since the face is a unique part of the human body that is deeply linked to personal, social, and institutional identities, whoever controls facial recognition technology wields immense power. That power is the subject of intense debate—debate that has legal implications for privacy and civil liberties, political consequences for democracy, and a range of underlying ethical issues.”

These include the well-documented fact that awareness of being watched affects individuals’ behaviour regardless of whether they intend any wrongdoing in a “chilling effect”, which, the paper suggests, “means that people will be inclined to self-censor their public statements and activities and even unintentionally conform their behaviour to acceptable group norms… [limiting] self-expression, creativity, and growth [and harming] democratic societies by depriving the marketplace of ideas from receiving input from all its members.”

Philosopher Benjamin Hale argues, meanwhile, that it “erodes the motivation for people to engage in ethical deliberation about how they should act and who they should be.”

Such nuanced discussions around free will, agency, creative freedom and the policing of public spaces tend to be drowned out in public discourse by the simple response of “it helps catch paedophiles”.

In the UK, broader live facial recognition (LFR) use is governed by the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 (DPA 2018); no specific primary legislation governing its use has been proposed. In a lengthy 2021 opinion, outgoing ICO commissioner Elizabeth Denham noted that there was growing commercial interest in the use of facial recognition for digital advertising: “Billboards can be fitted with facial recognition cameras, enabling the controller to process biometric data for a range of purposes.

“This could include processing to: estimate footfall for advertising space (audience measurement); measure engagement with advertising space (dwell time at a particular location or other attention measurement); provide interactive experiences (eg turning on media or inviting customers to respond to it); or serve targeted adverts to passing individuals (demographic analytics). While the specific processing involved depends on the product or service in question, typically an LFR-enabled billboard can detect an “engaged” passer-by, capture an image of their face, and create a biometric template. In some examples, this can allow the individual to be categorised by estimating demographic characteristics based on their facial image. These estimated characteristics can include age, sex, gender, ethnicity, race, and even clothing styles or brands, as well as other observed data.”

She added: “Some controllers may wish to capture this information solely for analytical purposes. However, the technology can be capable of estimating personal characteristics and attributes in real-time and displaying adverts or other content based on that information… In most of the examples we observed, LFR deployed in public places has involved collecting the public’s biometric data without those individuals’ choice or control.

“This is not to say that such processing must be based on consent, but controllers need to justify the processing of biometric data without the direct engagement of the individual. Controllers must account for this lack of involvement and ensure the processing is fair, necessary, proportionate and transparent.”
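The billboard pipeline Denham describes — detect an “engaged” passer-by, create a biometric template from their face, estimate demographic categories, then serve a targeted advert — can be sketched in outline. Every function and threshold below is hypothetical, chosen only to make the stages of the pipeline concrete; real products would plug in trained detection and classification models.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    dwell_seconds: float  # attention measurement ("dwell time")
    face_pixels: list     # captured face image

def make_template(face_pixels):
    """Stand-in for creating a biometric template from a face image."""
    return tuple(face_pixels)

def estimate_demographics(template):
    """Stand-in classifier; real systems estimate age, gender, ethnicity etc.
    Here we just bucket on a trivial property of the toy template."""
    return {"age_band": "25-34"} if sum(template) % 2 == 0 else {"age_band": "35-44"}

def choose_advert(detection, ad_inventory, engagement_threshold=2.0):
    """Serve a targeted advert only if the passer-by appears 'engaged'
    (dwell time above the threshold); otherwise count them for audience
    measurement and serve nothing targeted."""
    if detection.dwell_seconds < engagement_threshold:
        return None
    template = make_template(detection.face_pixels)
    demographics = estimate_demographics(template)
    return ad_inventory.get(demographics["age_band"])

ads = {"25-34": "streaming service", "35-44": "family car"}
print(choose_advert(Detection(3.5, [2, 4, 6]), ads))
```

Note that even the audience-measurement branch processes biometric data — which is precisely why, as the opinion stresses, controllers must justify such processing as fair, necessary, proportionate and transparent.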

Ed Targett

Ed Targett is the founder of The Stack. He has served as editor of Tech Monitor, Computer Business Review, and Roubini Global Economics. He has 15 years of experience in newsrooms and consultancies. His interests span technology, foreign policy, and sustainability.
