The world’s scariest facial recognition software, explained and now hacked.
<blockquote data-quote="Snattlerake" data-source="post: 3330008" data-attributes="member: 44288"><p><strong><span style="font-size: 18px">The world’s scariest facial recognition software, explained.</span></strong></p>
<p><strong><a href="https://www.vox.com/recode/2020/2/11/21131991/clearview-ai-facial-recognition-database-law-enforcement" target="_blank">https://www.vox.com/recode/2020/2/11/21131991/clearview-ai-facial-recognition-database-law-enforcement</a></strong></p>
<p><em><strong>Clearview AI built a massive database of faces that it’s making available to law enforcement, and nobody’s stopping it.</strong></em></p>
<p><strong>By Rebecca Heilweil Feb 11, 2020, 12:30pm EST</strong></p>
<p>Your Instagram pictures could be part of a facial recognition database that’s been made available to law enforcement agencies. That’s thanks to Clearview AI, a mysterious startup that has <strong>scraped billions of images from across the web, including from social media platforms like Instagram, Twitter, and YouTube.</strong></p>
<p>Law enforcement has been using facial recognition for a while. But Clearview’s technology represents a scary step further than anything we’ve seen before, according to reporting from the New York Times. The secretive company says it’s created a database of over 3 billion images that have been scraped from all corners of the internet, including social networks like Facebook, Instagram, and YouTube. From just a snapshot or video still, Clearview claims its app lets a police officer identify a face and match it with publicly available information about the person, within just a few seconds.</p>
<p>But is this a world we want to live in? Clearview argues that the tech can help track down dangerous people — its site points to “child molesters, murderers, suspected terrorists” — and is only meant for use by law enforcement. As the Times reported last week, the company’s facial recognition has helped identify child victims in exploitative videos posted to the web. But critics say the technology is way too risky, enabling excessive surveillance and threatening our privacy rights. Another concern is that facial recognition, broadly, has also been shown to be less accurate on people of color, women, and other minority groups.</p>
<p>Faced with these concerns, the world’s biggest tech companies are stepping up, sending cease-and-desist letters to Clearview that order the company to stop scraping their sites for our data. But it’s not clear how much power those companies have, or how invested they actually are in protecting our personal information. While some lawsuits against Clearview are also popping up, it’s not yet apparent how Clearview could be stopped. That has privacy advocates pointing to the need for a federal law regulating, or even outright banning, facial recognition in the United States.</p>
<p><strong>--------------------------------------------------------------------</strong></p>
<p><strong>Then there's this:</strong></p>
<p><strong><span style="font-size: 22px">Facial-recognition company Clearview AI's entire client list stolen in data breach</span></strong></p>
<p><strong><a href="https://www.cnet.com/news/clearview-ai-had-entire-client-list-stolen-in-data-breach/" target="_blank">https://www.cnet.com/news/clearview-ai-had-entire-client-list-stolen-in-data-breach/</a></strong></p>
<p><em><strong>The breach affected all of the facial recognition company's customers, many of which are law enforcement agencies.</strong></em></p>
<p>Alfred Ng</p>
<p>February 26, 2020 8:52 AM PST</p>
<p>Clearview AI, a facial-recognition software maker that has sparked privacy concerns, said Wednesday it suffered a data breach. The data stolen included its entire list of customers, the number of searches those customers have made and how many accounts each customer had set up.</p>
<p>"Security is Clearview's top priority," Tor Ekeland, Clearview AI's attorney, said in a statement. "Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security."</p>
<p>The company didn't specify the flaw. The data breach was first reported by The Daily Beast.</p>
<p>Clearview's clients are mostly law enforcement agencies, with police departments in Toronto, Atlanta and Florida all using the technology. The company has a database of 3 billion photos that it collected from the internet, including websites like YouTube, Facebook, Venmo and LinkedIn.</p></blockquote>