Kathleen Martin
Guest
Controversial facial recognition firm Clearview AI has been ordered to destroy all images and facial templates belonging to individuals living in Australia by the country’s national privacy regulator.
Clearview, which claims to have scraped 10 billion images of people from social media sites in order to identify them in other photos, sells its technology to law enforcement agencies. It was trialled by the Australian Federal Police (AFP) between October 2019 and March 2020.
Now, following an investigation, Australia’s privacy regulator, the Office of the Australian Information Commissioner (OAIC), has found that the company breached citizens’ privacy. “The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” said OAIC privacy commissioner Angelene Falk in a press statement. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”
Said Falk: “When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes. The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”
The investigation into Clearview’s practices by the OAIC was carried out in conjunction with the UK’s Information Commissioner’s Office (ICO). However, the ICO has yet to make a decision about the legality of Clearview’s work in the UK. The agency says it is “considering its next steps and any formal regulatory action that may be appropriate under the UK data protection laws.”
Continue reading: https://www.theverge.com/2021/11/3/22761001/clearview-ai-facial-recognition-australia-breach-data-delete