Automated facial recognition

As processing power becomes cheaper and smarter software is produced, it seems inevitable that more and more people and organizations will begin to identify people automatically by recognizing their faces with surveillance cameras.

London’s Heathrow airport is planning to install such a system, and Facebook may be the ultimate database to let freelancers do it themselves.

To me, it is all rather worrisome. At a basic level, life becomes more paranoid and less creative and interesting when you are being watched at all times and all of your actions are being archived forever. It’s only a matter of time before photos from every fun party ever are being combed through by investigative journalists hoping to catch someone who has become famous in an embarrassing-looking situation. Facial recognition allows for the creation of databases that can be used for truly evil purposes, from suppression of political dissent to stalking and blackmail.

Like nerve gas, facial recognition technology is probably one of those things it would be better if we could un-invent.

Author: Milan

In the spring of 2005, I graduated from the University of British Columbia with a degree in International Relations and a general focus in the area of environmental politics. In the fall of 2005, I began reading for an M.Phil in IR at Wadham College, Oxford. Outside school, I am very interested in photography, writing, and the outdoors. I am writing this blog to keep in touch with friends and family around the world, provide a more personal view of graduate student life in Oxford, and pass on some lessons I've learned here.

90 thoughts on “Automated facial recognition”

  1. Tech.view
    Smile: you’re on camera!
    Face recognition is only the beginning

    DOES that new feature increasingly found in pocket-sized digital cameras—face-recognition technology—really work? It’s actually a lot cleverer than you think. A few years ago, it would have needed a shoe-box of electronics to drive it, and it would still have been hit-or-miss. But in the brutally competitive world of digital photography, Canon, Pentax and Fuji have honed the technology so their popular digicams can take more striking pictures by finding, and then focusing on, the faces in the viewfinder.

    With 70% of all photographs being taken primarily of people, much is to be gained from using a face-recognition algorithm stored in the camera’s chip. This scans the image in the viewfinder for a shape resembling a human face—ie, eyes, ears, nose and chin. Once located, the camera can then adjust the focus exclusively for that part of the picture. Some cameras can recognise up to ten faces in a scene and set an average focus, or select just one face from a group and focus on that.

    To prevent the camera from locking on to faces in the background, the algorithms used in today’s digicams tend to ignore features smaller than 10% of the viewfinder’s height. The result is a pin-sharp image of the subject’s facial features—the part we’re interested in—amid a slightly blurrier foreground and background. Niftier still, the algorithm can also capture the face’s actual location within the scene. That lets the user zoom in automatically on the face immediately after the picture has been taken, to check everything is okay before saving it.
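The size filter the article describes is easy to sketch. Assuming a detector has already produced candidate bounding boxes (the `detections` list below is invented for illustration), the camera's logic reduces to discarding any face under 10% of the frame height and then focusing on the largest face that remains:

```python
# Sketch of the "ignore small background faces" rule: detections are
# (x, y, width, height) boxes from a face detector that has already run.
FRAME_HEIGHT = 1080
MIN_FACE_FRACTION = 0.10   # faces under 10% of frame height are ignored

detections = [
    (100, 200, 80, 90),     # background face: height 90 < 108, discarded
    (400, 300, 250, 300),   # main subject
    (800, 350, 200, 240),   # second subject
]

# Keep only faces at least 10% of the frame height...
faces = [d for d in detections if d[3] >= MIN_FACE_FRACTION * FRAME_HEIGHT]
# ...and focus on the largest remaining face (by box area).
focus_target = max(faces, key=lambda d: d[2] * d[3])
print(len(faces), focus_target)
```

Real digicams run the detection step in dedicated hardware; the thresholding and focus-selection logic is the part sketched here.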

  2. ICBC is the British Columbia provincial crown corporation with a monopoly on the issuance of basic auto insurance in BC, and the dominant insurer in the optional coverage market. This week the Globe and Mail reported that ICBC offered the Vancouver police the use of its facial recognition technology to assist in the investigation of the Vancouver Stanley Cup riots. The Vancouver police have not taken up this offer.

    I am simply reporting on this development, not really knowing what to think – my feelings range from support, so that the hooligan rioters can be identified, to concern about setting a precedent that could be misused.

    I am sure that there are benign and positive uses for this technology, including its original intended purpose of catching and verifying the identities of speeding drivers.

    But I am uncertain, considering the potential for abuse. It's difficult to un-invent something. Nuclear weapons come to mind.

  3. SOCIAL NETWORKING Facebook face-recognition not available to Canadians

    ANITA ELASH

    Facebook’s controversial facial-recognition technology for photographs will not be available to Canadian users, the social-networking site says.

    A spokesperson for Facebook did not say why Canadians would not get access to the feature but wrote in an e-mail: “Not all of our launches roll out globally and each of our different products and features have varying launch schedules, and we don’t have plans to roll this feature out in Canada at this time.” The feature, referred to by Facebook as a “tag suggestion,” has raised concerns about privacy since it was launched in the United States in December. It recognizes faces in new photos and automatically suggests names of friends in photos as they are uploaded. Facebook users who do not want their photos tagged are not asked their permission but must opt out of using the feature.

    A coalition of digital rights groups in the U.S. has filed a complaint with the Federal Trade Commission alleging that users are not fully informed of the biometric data that is “secretly” being collected about them. A group of privacy watchdogs from European Union countries is also investigating possible privacy violations.

    TIME TO LEAD / BIOMETRIC IDENTIFICATION It’s a reality: facial recognition and privacy

    ANN CAVOUKIAN, Information and Privacy Commissioner of Ontario

    One of the most common forms of biometric identification is the comparison of our face with a stored facial image, such as a driver’s licence or passport photo. Facial-recognition technology automates this process.

    First, a biometric “template,” or representation of you, is generated from measurements of your physiological traits (in this case, the image of your face), and retained in a database. Further samples from captured facial images may then be compared against this template – if there’s a match, then you’re identified.

    Imagine a scenario where you’re walking down the street or attending a sports event or shopping at a mall, and your photo is taken, identified, tagged and matched against a database of facial templates, without your knowledge or consent. This would be an affront to privacy that should not be tolerated.

    Two key developments are making this scenario possible. First, sophisticated, high-resolution cameras in surveillance systems – and now conveniently embedded in our mobile devices – are allowing for the frequent capture of high-quality facial images “on the move.” Second, software is now available that is capable of indexing vast numbers of photos, allowing for the creation of biometric databases.
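The enrol-and-verify flow Cavoukian describes can be sketched in a few lines of Python. The feature vectors and the 0.95 threshold below are illustrative stand-ins, not values from any real system:

```python
import math

# A biometric "template" is a feature vector extracted from a face image
# at enrolment; verification compares a later sample against it and
# declares a match if the similarity clears a threshold.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.95   # tuning this trades false accepts for false rejects

enrolled_template = [0.8, 0.1, 0.3, 0.5]      # stored in the database
new_sample        = [0.79, 0.12, 0.31, 0.48]  # extracted from a captured image

is_match = cosine_similarity(enrolled_template, new_sample) >= MATCH_THRESHOLD
print(is_match)   # True
```

In practice the vectors would come from a learned feature extractor, and the threshold would be tuned on validation data to balance false accepts against false rejects.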

  4. Mug-Shot Industry Will Dig Up Your Past, Charge You to Bury It Again

    Philip Cabibi, a 31-year-old applications administrator in Utah, sat at his computer one recent Sunday evening and performed one of the compulsive rituals of the Internet Age: the ego search. He typed his name into Google to take a quick survey of how the internet sees him, like a glance in the mirror.

    There were two LinkedIn hits, three White Pages listings, a post he made last year to a Meetup forum for Italian-Americans in the Salt Lake City area. Then, coming in 10th place — barely crawling onto the first page of search results — was a disturbing item.

    “Philip Cabibi Mugshot,” read the title. The description was “Mug shot for Philip Cabibi booked into the Pinellas County jail.”

    When he clicked through, Cabibi was greeted with his mug shot and booking information from his 2007 drunk-driving arrest in Florida. It’s an incident in Cabibi’s life that he isn’t proud of, and one that he didn’t expect to find prominently listed in his search results four years later, for all the world to see.

  5. Developments in Facial Recognition

    Eventually, it will work. You’ll be able to wear a camera that automatically recognizes someone walking towards you, and an earpiece that relays who that person is and maybe something about them. None of the technologies required to make this work are hard; it’s just a matter of getting the error rate low enough for the system to be useful. And there have been a number of recent research results and news stories that illustrate what this new world might look like.

    The police want this sort of system. I already blogged about MORIS, an iris-scanning technology that several police forces in the U.S. are using. The next step is the face-scanning glasses that the Brazilian police claim they will be wearing at the 2014 World Cup.

  6. Face recognition
    Anonymous no more

    You can’t hide—from anybody

    IF YOUR face and name are anywhere on the web, you may be recognised whenever you walk the streets—not just by cops but by any geek with a computer. That seems to be the conclusion from some new research on the limits of privacy.

    For suspected miscreants, and people chasing them, face-recognition technology is old hat. Brazil, preparing for the soccer World Cup in 2014, is already trying out pairs of glasses with mini-cameras attached; policemen wearing them could snap images of faces, easy to compare with databases of criminals. More authoritarian states love such methods: photos are taken at checkpoints, and images checked against recent participants in protests.

    But could such technology soon be used by anyone at all, to identify random passers-by and unearth personal details about them? A study which is to be unveiled on August 4th at Black Hat, a security conference in Las Vegas, suggests that day is close. Its authors, Alessandro Acquisti, Ralph Gross and Fred Stutzman, all at America’s Carnegie Mellon University, ran several experiments that show how three converging technologies are undermining privacy. One is face-recognition software itself, which has improved a lot. The researchers also used “cloud computing” services, which provide lots of cheap processing power. And they went to social networks like Facebook and LinkedIn, where most users post real names and photos of themselves.

  7. Facial monitoring
    The all-telling eye
    Webcams can now spot which ads catch your gaze, read your mood and check your vital signs

    IMAGINE browsing a website when a saucy ad for lingerie catches your eye. You don’t click on it, merely smile and go to another page. Yet it follows you, putting up more racy pictures, perhaps even the offer of a discount. Finally, irked by its persistence, you frown. “Sorry for taking up your time,” says the ad, and promptly desists from further pestering. Creepy. But making online ads that not only know you are looking at them but also respond to your emotions will soon be possible, thanks to the power of image-processing software and the ubiquity of tiny cameras in computers and mobile devices.

    Uses for this technology would not, of course, be confined to advertising. There is ample scope to deploy it in areas like security, computer gaming, education and health care. But admen are among the first to embrace the idea in earnest. That is because it helps answer, at least online, clients’ perennial carp: that they know half the money they spend on advertising is wasted, but they don’t know which half.

  8. That’s it! I am sticking electrical tape over the lenses of all my webcams.

  9. Face Recognition Makes the Leap From Sci-Fi

    FACIAL recognition technology is a staple of sci-fi thrillers like “Minority Report.”

    SceneTap, a new app for smart phones, uses cameras with facial detection software to scout bar scenes. Without identifying specific bar patrons, it posts information like the average age of a crowd and the ratio of men to women, helping bar-hoppers decide where to go. More than 50 bars in Chicago participate.

    As SceneTap suggests, techniques like facial detection, which perceives human faces but does not identify specific individuals, and facial recognition, which does identify individuals, are poised to become the next big thing for personalized marketing and smart phones. That is great news for companies that want to tailor services to customers, and not so great news for people who cherish their privacy. The spread of such technology — essentially, the democratization of surveillance — may herald the end of anonymity.

    And this technology is spreading. Immersive Labs, a company in Manhattan, has developed software for digital billboards using cameras to gauge the age range, sex and attention level of a passer-by. The smart signs, scheduled to roll out this month in Los Angeles, San Francisco and New York, deliver ads based on consumers’ demographics. In other words, the system is smart enough to display, say, a Gillette ad to a male passer-by rather than an ad for Tampax.

  10. “I just noticed that not only are all Afghans going to have their biometric data (fingerprints and iris scans) recorded but the government plans to share it with the U.S. From the article: ‘Gathering the data does not stop at Afghanistan’s borders, however, since the military shares all of the biometrics it collects with the United States Department of Justice and the Department of Homeland Security through interconnected databases.’ Talk about ‘know thine enemy’ (or I guess, for now, friend). Does this foretell the near future when the U.S. govt. (and by extension, Chinese hackers) have the biometrics of almost everyone alive?”

  11. A small magazine in Victoria, BC just uncovered a massive public traffic surveillance system deployed in Canada. Here’s a quote from the article: ‘Normally, area police manually key in plate numbers to check suspicious cars in the databases of the Canadian Police Information Center and ICBC. With [Automatic License Plate Recognition], for $27,000, a police cruiser is mounted with two cameras and software that can read license plates on both passing and stationary cars. According to the vendors, thousands of plates can be read hourly with 95-98 percent accuracy. … In August 2011, VicPD Information and Privacy Manager Debra Taylor called me to explain that, even though VicPD had the ALPR system in one of their cruisers, the [Royal Canadian Mounted Police] ran the system, and I should contact them for any information. “We actually don’t have a program,” Taylor said. “We don’t have any documents per se.” … A month later, Taylor handed over 600 pages. … [The claim they kept no documents] was apparently only in reference to digital information. VicPD had kept 500 pages of written, hard-copy logs of every ALPR hit they’d ever seen.’

  12. WELCOME to China, the land of video surveillance. Guangdong province boasts over 1m cameras. In 2010 the city of Chongqing, governed by the now-disgraced Bo Xilai, ordered 500,000. Other provinces have hundreds of thousands, according to Human Rights in China, an NGO. Video surveillance constitutes over half the country’s huge security industry, and is expected to reach 500 billion yuan ($79 billion) in 2015. China will soon overtake Britain, with around 3m cameras, as the capital of video surveillance.

    Yet China is far from alone. In many democracies surveillance cameras are multiplying, too. And face-recognition technology is proving a wonder tool for both governments and marketers.

    A jail in Alabama uses it to check those leaving against prisoner records. Mexican prisons use it to identify visitors. Heathrow airport is installing systems to track passengers through lounges and onto the plane. Brazil has plans to equip police with camera-spectacles that can identify troublemakers at the 2014 World Cup.

    Privacy-loving European countries are less easy-going, and usually require cameras to be matched with signs to tell people they are being watched. Facebook’s face recognition has already fallen foul of tough German privacy laws. And America’s Supreme Court is uneasy with technology that enables the persistent tracking of individuals in public.

  13. FBI To Give Facial Recognition Software to Law-Enforcement Agencies

    The speedy onward march of biometric technology continues. After recently announcing plans for a nationwide iris-scan database, the FBI has revealed it will soon hand out free facial-recognition software to law enforcement agencies across the United States.

    The software, which was piloted in February in Michigan, is expected to be rolled out within weeks. It will give police analysts access to a so-called “Universal Face Workstation” that can be used to search a database of almost 13 million images. The UFW will permit police to submit and enhance image files so they can be cross-referenced against the database for matches.

  14. FBI Launches $1 Billion Nationwide Face Recognition System

    The U.S. Federal Bureau of Investigation has begun rolling out its new $1 billion biometric Next Generation Identification (NGI) system. In essence, NGI is a nationwide database of mugshots, iris scans, DNA records, voice samples, and other biometrics that will help the FBI identify and catch criminals — but it is how this biometric data is captured, through a nationwide network of cameras and photo databases, that is raising the eyebrows of privacy advocates. Until now, the FBI relied on IAFIS, a national fingerprint database that has long been due an overhaul. Over the last few months, the FBI has been pilot testing a face recognition system, which will soon be scaled up (PDF) until it’s nationwide. In theory, this should result in much faster positive identifications of criminals and fewer unsolved cases. The problem is, the FBI hasn’t guaranteed that the NGI will only use photos of known criminals. There may come a time when the NGI is filled with as many photos as possible, from as many sources as possible, of as many people as possible — criminal or otherwise. Imagine if the NGI had full access to every driving license and passport photo in the country — and DNA records kept by doctors, and iris scans kept by businesses. The FBI’s NGI, if the right checks and balances aren’t in place, could very easily become a tool that decimates civilian privacy and freedom.

  15. Congress report warns: drones will track faces from the sky

    With the FAA working on rules to safely integrate drones into domestic airspace by 2015, the US government’s Congressional Research Service has warned of gaps in how American courts might treat the use of drones.

    The snappily-headlined report, Drones in Domestic Surveillance Operations: Fourth Amendment Implications and Legislative Responses (PDF here), notes drones now in use can carry thermal imaging, high-powered cameras, license plate readers and LIDAR (light detection and ranging). “Soft” biometrics and facial recognition won’t be far behind, the report suggests, allowing drones to “recognize and track individuals based on attributes such as height, age, gender, and skin color.”

    “The relative sophistication of drones contrasted with traditional surveillance technology may influence a court’s decision whether domestic drone use is lawful under the Fourth Amendment,” the report compiled by legislative attorney Richard Thompson II states.

  16. Facebook Disables Face Recognition In EU

    “Facebook has disabled face recognition features on its site for all new European users. The move follows privacy recommendations made by the Irish Data Protection Commissioner. Tag Suggest information has been turned off for new users, and Facebook plans to delete the information for existing EU users by October 15th. ‘The DPC says today’s report (PDF) is the result of evaluations it made through the first half of 2012 and on-site at Facebook’s HQ in Dublin over the course of two days in May and four in July. The DPC says FB has made just about all of the improvements it requested in five key areas: better transparency for the user in how their data is handled; user control over settings; more clarity on the retention periods for the deletion of personal data, and users getting more control over deleting things; an improvement in how users can access their personal data; and the ability of Facebook to be able to better track how they are complying with data protection requirements.'”

  17. The bigger worry is for those in front of the cameras, not behind them. School bullies already use illicit snaps from mobile phones to embarrass their victims. The web throngs with furtive photos of women, snapped in public places. Wearable cameras will make such surreptitious photography easier. And the huge, looming issue is the growing sophistication of face-recognition technologies, which are starting to enable businesses and governments to extract information about individuals by scouring the billions of images online. The combination of cameras everywhere—in bars, on streets, in offices, on people’s heads—with the algorithms run by social networks and other service providers that process stored and published images is a powerful and alarming one. We may not be far from a world in which your movements could be tracked all the time, where a stranger walking down the street can immediately identify exactly who you are.

    For the moment, companies are treading carefully. Google has banned the use of face-recognition in apps on Glass and its camera is designed to film only in short bursts. Japanese digital camera-makers ensure their products emit a shutter sound every time a picture is taken. Existing laws to control stalking or harassment can be extended to deal with peeping drones.

  18. A newly released set of slides from the Snowden leaks reveals that the NSA is harvesting millions of facial images from the Web for use in facial recognition algorithms through a program called “Identity Intelligence.” James Risen and Laura Poitras’s NYT piece shows that the NSA is linking these facial images with other biometrics, identity data, and “behavioral” data including “travel, financial, behaviors, social network.”

    The NSA’s goal — in which it has been moderately successful — is to match images from disparate databases, including databases of intercepted videoconferences (in February 2014, another Snowden publication revealed that NSA partner GCHQ had intercepted millions of Yahoo video chat stills), images captured by airports of fliers, and hacked national identity card databases from other countries. According to the article, the NSA is trying to hack the national ID card databases of “Pakistan, Saudi Arabia and Iran.”

  19. Biometrics
    Clocking people’s clocks
    Facial-recognition systems are getting better

    The two main techniques used to recognise faces electronically are principal-component analysis (PCA) and linear-discriminant analysis (LDA). Both compare a picture of someone’s phizog with a reference image taken in a controlled environment. Passport photos and mugshots, then, are about as ideal as it gets.

    Governments are not the only ones interested. Earlier this year, Facebook’s DeepFace system was asked whether thousands of pairs of photos were of the same person. It answered correctly 97.25% of the time, a shade behind humans at 97.53%. Although DeepFace is only a research project, and is aided by the fact that many Facebook photos are tagged with the names of people in the images, which lets the system learn those faces in different poses and lighting conditions, it is still an impressive feat.
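The PCA ("eigenfaces") technique mentioned above can be illustrated on toy data. The eight-pixel "faces" below are synthetic stand-ins for real images, but the mechanics are the genuine ones: centre the gallery, extract principal components, then match a probe by nearest neighbour in the reduced space:

```python
import numpy as np

# Toy eigenfaces sketch: each row is a flattened "face image".
gallery = np.array([
    [9, 1, 1, 1, 9, 1, 1, 1],   # person 0
    [1, 9, 1, 1, 1, 9, 1, 1],   # person 1
    [1, 1, 9, 1, 1, 1, 9, 1],   # person 2
], dtype=float)

mean_face = gallery.mean(axis=0)
centered = gallery - mean_face

# Principal components via SVD; the rows of vt are the "eigenfaces".
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:2]              # keep the top two components

def project(face):
    return components @ (face - mean_face)

gallery_proj = np.array([project(f) for f in gallery])

# A probe image: person 1 with a little pixel noise added.
probe = gallery[1] + np.array([0.2, -0.1, 0.1, 0, -0.2, 0.1, 0, 0.1])
distances = np.linalg.norm(gallery_proj - project(probe), axis=1)
match = int(np.argmin(distances))
print(match)   # person 1
```

LDA works similarly but chooses projection directions that best separate identities rather than those that capture the most overall variance.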

  20. FBI Can Access Hundreds of Millions of Face Recognition Photos

    The federal Government Accountability Office published a report on the FBI’s face recognition capabilities that says the FBI has access to hundreds of millions of photos. According to the GAO report, the FBI’s Facial Analysis, Comparison, and Evaluation (FACE) Services unit not only has access to the FBI’s Next Generation Identification (NGI) face recognition database of nearly 30 million civil and criminal mug shot photos, but it also has access to the State Department’s Visa and Passport databases, the Defense Department’s biometric database, and the drivers license databases of at least 16 states. This totals 411.9 million images, most of which are of Americans and foreigners who have committed no crimes.

  21. In Russia, meanwhile, there has been a recent outcry over an app called FindFace, which lets users take photos of strangers and then determines their identity from profile pictures on social networks. The app’s creators say it is merely a way to make contact with people glimpsed on the street or in a bar. Russian police have started using it to identify suspects and witnesses. The risk is clear: the end of public anonymity. Gigapixel images of a large crowd, taken from hundreds of metres away, can be analysed to find out who went on a march or protest, even years later. In effect, deep learning has made it impossible to attend a public gathering without leaving a record, unless you are prepared to wear a mask. (A Japanese firm has just started selling Privacy Visor, a funny-looking set of goggles designed to thwart facial-recognition systems.)

  22. In Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, researchers from Carnegie-Mellon and UNC showed how they could fool industrial-strength facial recognition systems (including Alibaba’s “smile to pay” transaction system) by printing wide, flat glasses frames with elements of other peoples’ faces with “up to 100% success.”

    As the software learns what a face looks like, it leans heavily on certain details—like the shape of the nose and eyebrows. The Carnegie Mellon glasses don’t just cover those facial features, but instead are printed with a pattern that is perceived by the computer as facial details of another person.

    In a test where researchers built a state-of-the-art facial recognition system, a white male test subject wearing the glasses appeared as actress Milla Jovovich with 87.87% accuracy. An Asian female wearing the glasses tricked the algorithm into seeing a Middle Eastern man with the same accuracy. Other notable figures whose faces were stolen include Carson Daly, Colin Powell, and John Malkovich. Researchers used about 40 images of each person to generate the glasses that let a wearer be identified as them.
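The principle behind the glasses, perturbing only a small facial region in whatever direction raises a classifier's score for a target identity, can be sketched with a toy linear model (the image, weights and mask below are all invented; the real attack optimises a printable pattern against deep networks):

```python
import numpy as np

# Mask-restricted adversarial perturbation on a toy linear classifier.
rng = np.random.default_rng(0)
image = rng.random(16)                      # flattened toy "face"
target_weights = rng.standard_normal(16)    # linear score for the target person

mask = np.zeros(16)
mask[4:9] = 1.0                             # the "glasses" region: only these
                                            # pixels may be altered

def target_score(x):
    return float(target_weights @ x)

# For a linear model the input gradient is just the weight vector, so the
# attack steps the masked pixels along sign(weights) to raise the score.
adversarial = image + 0.5 * mask * np.sign(target_weights)

print(target_score(adversarial) > target_score(image))   # True
```

Against a deep network the gradient must be recomputed at every step and the perturbation kept printable, but the masked-region idea is the same.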

  23. What machines can tell from your face

    Life in the age of facial recognition

    Technology is rapidly catching up with the human ability to read faces. In America facial recognition is used by churches to track worshippers’ attendance; in Britain, by retailers to spot past shoplifters. This year Welsh police used it to arrest a suspect outside a football game. In China it verifies the identities of ride-hailing drivers, permits tourists to enter attractions and lets people pay for things with a smile. Apple’s new iPhone is expected to use it to unlock the homescreen.

    Set against human skills, such applications might seem incremental. Some breakthroughs, such as flight or the internet, obviously transform human abilities; facial recognition seems merely to encode them. Although faces are peculiar to individuals, they are also public, so technology does not, at first sight, intrude on something that is private. And yet the ability to record, store and analyse images of faces cheaply, quickly and on a vast scale promises one day to bring about fundamental changes to notions of privacy, fairness and trust.

    Anyone with a phone can take a picture for facial-recognition programs to use. FindFace, an app in Russia, compares snaps of strangers with pictures on VKontakte, a social network, and can identify people with a 70% accuracy rate. Facebook’s bank of facial images cannot be scraped by others, but the Silicon Valley giant could obtain pictures of visitors to a car showroom, say, and later use facial recognition to serve them ads for cars. Even if private firms are unable to join the dots between images and identity, the state often can. China’s government keeps a record of its citizens’ faces; photographs of half of America’s adult population are stored in databases that can be used by the FBI. Law-enforcement agencies now have a powerful weapon in their ability to track criminals, but at enormous potential cost to citizens’ privacy.

    Google has explicitly turned its back on matching faces to identities, for fear of its misuse by undemocratic regimes. Other tech firms seem less picky. Amazon and Microsoft are both using their cloud services to offer face recognition; it is central to Facebook’s plans.

    Advances in AI are used to spot signs of sexuality

    Machines that read faces are coming

    When shown one photo each of a gay and straight man, both chosen at random, the model distinguished between them correctly 81% of the time. When shown five photos of each man, it attributed sexuality correctly 91% of the time. The model performed worse with women, telling gay and straight apart with 71% accuracy after looking at one photo, and 83% accuracy after five. In both cases the level of performance far outstrips human ability to make this distinction. Using the same images, people could tell gay from straight 61% of the time for men, and 54% of the time for women. This aligns with research which suggests humans can determine sexuality from faces at only just better than chance.

    Researchers produce images of people’s faces from their genomes

    Facial technology makes another advance

    Human Longevity has assembled 45,000 genomes, mostly from patients who have been in clinical trials, and data on their associated physical attributes. The company uses machine-learning tools to analyse these data and then make predictions about how genetic sequences are tied to physical features. These efforts have improved to the point where the company is able to generate photo-like pictures of people without ever clapping eyes on them.

  24. What’s more, the smartphone will do for face recognition what smart speakers, such as the Amazon Echo, have done for speech recognition: make it acceptable to consumers. Millions of Chinese already “swipe” their faces on smartphones to authorise payments. On September 12th Apple is expected to unveil a new version of its iPhone, with technology that can reliably identify the owner’s face and then unlock the device, even in the dark. That will come only a few weeks after Samsung presented a new Galaxy Note with a similar but less sophisticated feature.

    It makes sense to separate facial-recognition technology into two categories: the underlying capability and the applications that make use of it. Megvii’s Face++ belongs in the first category, as do similar offerings from SenseTime, another Chinese startup, NTechLab, a Russian firm, as well as Amazon, IBM and Microsoft. All provide face recognition as a cloud-computing service. Megvii’s customers can upload a batch of photos and names, and use them to train algorithms, which then can recognise those particular people. Firms can also integrate the recognition service into their own offerings, for instance to control access to online accounts.

    Megvii’s and SenseTime’s services are largely founded on good data. They have access to the Chinese government’s image database of 700m citizens, who are each given a photo ID by the age of 16. Chinese government agencies are also valuable customers—more and more of the country’s hundreds of millions of surveillance cameras will soon recognise faces. In Shenzhen facial recognition is used to identify jaywalkers; names and pictures go up on a screen. In Beijing the municipality has started using the technology to catch thieves of toilet paper in public restrooms (its system also prevents people from taking more than 60 centimetres of paper within a nine-minute period).

  25. Airport Face Scans Could Be a Dry Run for a National Surveillance System

    Sixteen years after Sept. 11, we’re all well aware that in order to go through security at an American airport, we will have to use a driver’s license or passport to identify ourselves. But Congress is quietly laying groundwork to take things much, much further—to build out a face scanning system that identifies you when you walk into an airport and tracks your every move, until you board the plane.

    The TSA Modernization Act is scheduled to be marked up in the Senate Commerce Committee this week. From the title it sounds like a harmless bill to get the airport security agency new computers. But an alarming section in the bill would give the Trump administration a green light to begin using biometrics to identify people in airports nationwide. And right now there’s only one technology fit for a biometric surveillance system: automated, real-time face scans.

    This could mean scans of every person’s face throughout the entirety of every American airport—“at checkpoints, screening lanes, bag drop and boarding areas, and other areas.” Basically, it could go anywhere the Department of Homeland Security determines that the technology could improve security.

  26. US Government Will Be Scanning Your Face At 20 Top Airports, Documents Show

    These same documents state — explicitly — that there were no limits on how partnering airlines can use this facial recognition data. CBP did not answer specific questions about whether there are any guidelines for how other technology companies involved in processing the data can potentially also use it. It was only during a data privacy meeting last December that CBP made a sharp turn and barred participating companies from using this data. But it is unclear to what extent it has enforced this new rule. CBP did not explain what its current policies around data sharing of biometric information with participating companies and third-party firms are, but it did say that the agency “retains photos … for up to 14 days” of non-US citizens departing the country, for “evaluation of the technology” and “assurance of the accuracy of the algorithms” — which implies such photos might be used for further training of its facial matching AI.

  27. An even subtler idea was proposed by researchers at the Chinese University of Hong Kong, Indiana University Bloomington, and Alibaba, a big Chinese information-technology firm, in a paper published in 2018. It is a baseball cap fitted with tiny light-emitting diodes that project infra-red dots onto the wearer’s face. Many of the cameras used in face-recognition systems are sensitive to parts of the infra-red spectrum. Since human eyes are not, infra-red light is ideal for covert trickery.

    In tests against FaceNet, a face-recognition system developed by Google, the researchers found that the right amount of infra-red illumination could reliably prevent a computer from recognising that it was looking at a face at all. More sophisticated attacks were possible, too. By searching for faces which were mathematically similar to that of one of their colleagues, and applying fine control to the diodes, the researchers persuaded FaceNet, on 70% of attempts, that the colleague in question was actually someone else entirely.

    Training one algorithm to fool another is known as adversarial machine learning. It is a productive approach, creating images that are misleading to a computer’s vision while looking meaningless to a human being’s. One paper, published in 2016 by researchers from Carnegie Mellon University, in Pittsburgh, and the University of North Carolina, showed how innocuous-looking abstract patterns, printed on paper and stuck onto the frame of a pair of glasses, could often convince a computer-vision system that a male AI researcher was in fact Milla Jovovich, an American actress.
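The core recipe behind such attacks (the fast-gradient-sign method) can be shown on a toy linear classifier: nudge the input in the direction that most increases the model's error while keeping each individual change small. The weights, input, and epsilon below are invented for illustration; real attacks on deep networks like FaceNet apply the same idea to the network's gradients.

```python
import numpy as np

# Toy linear classifier: label an input x via sign(w . x).
w = np.linspace(-1.0, 1.0, 100)       # invented weights
x = 0.1 * w / np.linalg.norm(w)       # an input the model labels +1

def predict(x):
    return 1 if w @ x > 0 else -1

# The gradient of the score w.r.t. the input is just w, so step against
# its sign: a tiny, uniform per-component change.
eps = 0.02
x_adv = x - eps * np.sign(w)

print(predict(x))                     # 1
print(predict(x_adv))                 # -1: nearly identical input, flipped label
print(np.max(np.abs(x_adv - x)))      # ~0.02: every change bounded by eps
```

The perturbation is bounded by eps in every component yet flips the prediction, because its inner product with the weights accumulates across all 100 dimensions; high-dimensional inputs such as images make this accumulation even easier.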

  28. Guo Bing, a legal academic in the eastern city of Hangzhou, likes to spend his leisure time at a local safari park. But when the park informed season-pass holders like him that admission would require a face-scan, Mr Guo objected. Late last month he filed a lawsuit, claiming the new rules violated his privacy. Facial-recognition technology is widely used in China. Doubtless to the relief of the government, which makes extensive use of it, there has been little public debate about it. State media, however, seized on Mr Guo’s case, trumpeting it as the first of its kind to be lodged in a Chinese court. Netizens have been hailing Mr Guo as a champion of consumer rights. A thread about his suit has garnered 100m views on Weibo, a social-media platform.

    It is surprising that it has taken so long for the judiciary to get involved. Some 300 tourist sites in China use facial recognition to admit visitors. The safari park says doing so can shorten queues. Many office workers in Beijing’s main financial district clock in and out of work by scanning their faces. Some campuses and residential buildings use facial-recognition cameras to screen people entering. WeChat, a messaging and digital-wallet app, allows users to pay with their faces at camera-equipped vendors. Facial-recognition systems are ubiquitous at traffic intersections, in railway stations and airports (visitors to a public-security expo are pictured being scanned).

  29. Airports will also emphasise hygiene. “I think the move to minimising contact during any travel experience will just push us over the edge to having a contactless journey,” says John Holland-Kaye, Heathrow’s chief. “Once you get into the terminal, you’ll scan your passport, have an image of your face taken, drop your bags,” and then stroll through checkpoints as cameras use facial recognition to open gates.

    Some of this may sound far-fetched, but citizens of some three dozen countries can already use e-gates to get through passport control on arrival at Heathrow and many other airports, allowing them to go from gate to kerb without talking to another person. Security will still involve slowing down, but even there it should soon be possible to leave laptops and liquids inside the bag. Automation will reduce the need to touch trays. Hand-sanitiser is already everywhere. Once implemented, such changes are unlikely to be undone.

  30. Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post.

  31. Online Sleuths Are Using Face Recognition to ID Russian Soldiers

    It takes five minutes to put a name to a soldier’s face using little more than a screenshot, but there’s a catch.

    That power to identify people from afar could bring new accountability to armed conflict but also open new avenues for digital attack. Identifying—or misidentifying—people in videos or photos said to be from the front lines could expose them or their families to online harassment or worse. Face algorithms can be wrong, and errors are more common on photos without a clear view of a person’s face, as is often the case for wartime images. Nonetheless, Ukraine has a volunteer “IT Army” of computer experts hacking Russian targets on the country’s behalf.

    If distant volunteers can identify combatants using face recognition, government agencies can do the same or much more. “I’m sure there are Russian analysts tracking Twitter and TikTok with access to similar if not more powerful technology who are not sharing what or who they find so openly,” says Ryan Fedasiuk, an adjunct fellow at the Center for a New American Security.

  32. Facial recognition technology for policing and surveillance in the Global South: a call for bans

    The use of facial recognition technology (FRT) for policing and surveillance is spreading across Asia, Africa and Latin America. Advocates say this technology can solve crimes, locate missing people and prevent terrorist attacks. Yet, as this article argues, deploying FRT for policing and surveillance poses a grave threat to civil society, especially when systems identify or track people without any criminal history. In every political system, this has the potential to deepen discriminatory policing, have a chilling effect on activism and turn everyone into a suspect. The dangers rise exponentially, moreover, in places with inconsistent rule of law, poor human rights records, weak privacy and data laws and authoritarian rulers – traits common across scores of countries now installing FRT. Regulating use is unlikely to prevent these harms, the article contends, given the powerful political and corporate forces in play, given the ways firms push legal limits, exploit loopholes and lobby legislators, and given the tendency over time of surveillance technology to creep across state agencies and into new forms of social control. Calls to ban FRT are growing louder by the day. This article makes the case for why bans are especially necessary in the Global South.

  33. Artists disrupting facial recognition technologies

    URME Surveillance is a subversive intervention that protects the public from facial recognition surveillance systems in a variety of ways. The principal method is by inviting the public to wear a photo-realistic, 3D-printed prosthetic of my face. When a user dons the prosthetic, camera systems equipped with facial recognition software identify that user as myself, thus attributing all of their actions to the identity known as “Leo Selvaggio.” In this way, wearers of the prosthetic safeguard their own identities by performing my persona in surveilled areas.

    Facial Weaponization Suite protests against biometric facial recognition, and the inequalities these technologies propagate, by making “collective masks” in workshops that are modeled from the aggregated facial data of participants, resulting in amorphous masks that cannot be detected as human faces by biometric facial recognition technologies. The masks are used for public interventions and performances.

  34. The Technology Facebook and Google Didn’t Dare Release

    Mr. Leyvand turned toward a man across the table from him. The smartphone’s camera lens — round, black, unblinking — hovered above Mr. Leyvand’s forehead like a Cyclops eye as it took in the face before it. Two seconds later, a robotic female voice declared, “Zach Howard.”

    “That’s me,” confirmed Mr. Howard, a mechanical engineer.

    An employee who saw the tech demonstration thought it was supposed to be a joke. But when the phone started correctly calling out names, he found it creepy, like something out of a dystopian movie.

    The person-identifying hat-phone would be a godsend for someone with vision problems or face blindness, but it was risky. Facebook’s previous deployment of facial recognition technology, to help people tag friends in photos, had caused an outcry from privacy advocates and led to a class-action lawsuit in Illinois in 2015 that ultimately cost the company $650 million.

    With technology like that on Mr. Leyvand’s head, Facebook could prevent users from ever forgetting a colleague’s name, give a reminder at a cocktail party that an acquaintance had kids to ask about or help find someone at a crowded conference. However, six years later, the company now known as Meta has not released a version of that product and Mr. Leyvand has departed for Apple to work on its Vision Pro augmented reality glasses.

  35. The $999 pair of augmented reality glasses, made by a company called Vuzix, connects the wearer to Clearview’s database of 30 billion faces. Clearview’s A.R. app, which can identify someone up to 10 feet away, is not yet publicly available, but the Air Force has provided funding for its possible use at military bases.

    On a fall afternoon, Mr. Ton-That demonstrated the glasses for me at his spokeswoman’s apartment on the Upper West Side of Manhattan, putting them on and looking toward me.

    “Ooooh, 176 photos,” he said. “Aspen Ideas Festival. Kashmir Hill,” he read from the image caption on one of the photos that came up.

    Then he handed the glasses to me. I put them on. Though they looked clunky, they were lightweight and fit naturally. Mr. Ton-That said he had tried out other augmented reality glasses, but these had performed best. “They’ve got a new version coming,” he said. “And they’ll look cooler, more hipster.”

    When I looked at Mr. Ton-That through the glasses, a green circle appeared around his face. I tapped a touch pad at my right temple. A message came up on a square display that only I could see on the right lens of the glasses: “Searching …”

    And then the square filled with photos of him, a caption beneath each one. I scrolled through them using the touch pad. I tapped to select one that read “Clearview CEO, Hoan Ton-That;” it included a link that showed me that it had come from Clearview’s website.

  36. Police and federal agencies are responding to a massive breach of personal data linked to a facial recognition scheme that was implemented in bars and clubs across Australia. The incident highlights emerging privacy concerns as AI-powered facial recognition becomes more widely used everywhere from shopping malls to sporting events.

    The affected company is Australia-based Outabox, which also has offices in the United States and the Philippines. In response to the Covid-19 pandemic, Outabox debuted a facial recognition kiosk that scans visitors and checks their temperature. The kiosks can also be used to identify problem gamblers who enrolled in a self-exclusion initiative. This week, a website called “Have I Been Outaboxed” emerged, claiming to be set up by former Outabox developers in the Philippines. The website asks visitors to enter their name to check whether their information had been included in a database of Outabox data, which the site alleges had lax internal controls and was shared in an unsecured spreadsheet. It claims to have more than 1 million records.
