Facial Recognition Enables the End of Freedom!
Advances in facial recognition technology have the potential to pose the biggest threat to freedom in human history. Anonymity is essential to privacy, which is a fundamental right that all humans should enjoy and Americans expect. Rapid advancements in facial recognition systems, combined with virtually no regulations or laws governing the collection, dissemination, and use of facial recognition data, jeopardize the rights that America was founded upon – every single one of them!
Unlike other biometric technologies, facial recognition offers average citizens very few ways to opt out. Cameras are virtually everywhere now, and it’s nearly impossible to go about your daily routine without your face being captured. Facial recognition takes camera systems to a whole new level, one with virtually no benefits and only downside risks for the average citizen.
An individual’s face is not the property of a government or private business and should not be captured, analyzed, and put into a database without the consent of the individual. Technology has tested our concepts of privacy and rights, and facial recognition is the granddaddy of all intrusive technologies. Rights existed before governments, and it is the duty of government to protect the rights of the citizen, not to grant rights. A government with the power to grant rights has the power to violate and rescind rights. It is imperative that the people take action on facial recognition to strictly regulate its use by government and industry, to prevent it from being used to create a surveillance state with abilities unlike any police state in world history.
These introductory statements may seem extreme to many people reading this. After all, people have grown accustomed to surveillance cameras in stores, on traffic poles, in parking lots, and even in common areas such as apartment building hallways. Many people feel safer because cameras at least help identify and apprehend wrongdoers and may even act as a deterrent. The purpose of this article is not to debate the legitimacy of private citizens using cameras for general security purposes. It is to raise the reader’s awareness of the serious threats to privacy and civil rights in general posed by the rapid emergence, proliferation, and unregulated use of facial recognition technologies.
In order to do that the article is organized as follows:
- Overview of facial recognition systems and associated technologies.
- Current U.S. state and federal laws regarding the use of facial recognition technology.
- Federal law related to facial recognition (or lack thereof).
- Illinois Biometric Information Privacy Act (BIPA).
- Court cases and challenges to Illinois BIPA law.
- Application of privacy laws to other technologies.
- Industry stakeholders’ arguments for facial recognition and against privacy.
- Examples and discussion of applications of facial recognition in other countries to demonstrate its capability for abuse.
- China – totalitarian example
- United Kingdom – Western democracy example
- Germany – Western democracy trying hard not to be associated with its totalitarian past
- Examples and discussion of applications of facial recognition by business and industry to highlight other avenues for its abuse.
- Examples of beneficial uses of facial recognition technology.
- Examples of government and police abuses of other technologies in the United States.
- Examples of how facial recognition is being used by governments in the United States and how little our local, state, and federal governments differ in their behavior from other countries including oppressive regimes.
- Conclusion and recommendations for citizens to take action to protect their rights.
The privacy concerns about facial recognition technology are very different from concerns about other technology.
New technology has always caused some level of social disruption and it often generates debates about its usefulness, legality, and morality. Eventually, society adjusts to new innovations and often embraces the underlying technology. Recent advancements in the collection and analysis of data provide the potential for amazing breakthroughs in virtually every category affecting human quality of life such as agriculture, clean water, energy costs, transportation, security, and health care. Each new breakthrough in data analysis comes with a potential erosion of privacy and new ethical issues to ponder, however.
Technology by itself isn’t inherently good or evil.
It doesn’t have a moral compass to guide its development. The pursuit of knowledge drives human beings to create and innovate. Successive discoveries build upon each other, and in the past few decades the pace of innovation has exploded. Generally speaking, one invention, or even one category of invention, doesn’t by itself pose significant ethical or legal dilemmas. Cameras fit into this category. In the 1840s, when the technology to capture and print photographic images was invented, it had countless positive applications. For the next 180 years or so, camera development focused mostly on features such as producing higher-quality images, shrinking camera size, and making cameras more affordable so more people could enjoy them.
Multifunction devices combining cameras with computers and phones have grown in popularity since around the year 2000. The market for cameras and video cameras has changed dramatically now that virtually everyone has a phone with a built-in still and video camera. Camera manufacturers have shifted their product development toward commercial uses such as surveillance cameras, dash cameras, and application-specific cameras such as GoPros. Devices have gotten smaller, image quality has increased by orders of magnitude, and the cost of all cameras has plummeted, which continues to fuel their proliferation and use in new applications. Integrating cameras with network connectivity, storage devices, and digital processing software has transformed them from devices that required intention and a bit of skill to capture a good picture into devices that are always on, passively collecting everyday images that are stored indefinitely, in most cases without the consent of the subject.
Facial recognition software analyzes an image of a face and uses an algorithm to generate a unique numerical identifier for it. There are two primary categories of algorithms: geometric and photometric. Geometric algorithms analyze distinctive features, while photometric algorithms employ a statistical approach that distills an image into values and compares those values with templates to eliminate variances. Popular recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis, elastic bunch graph matching using the Fisherface algorithm, the hidden Markov model, multilinear subspace learning using tensor representations, and neuronally motivated dynamic link matching. An emerging technique adds skin texture analysis, which can enhance system performance by 20-25%. The accuracy of facial recognition systems has improved by orders of magnitude in recent years, and most vendors advertise match rates of around 97.5%.
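To make the eigenface (principal component analysis) approach named above concrete, here is a minimal sketch. The tiny random "images," the number of components, and the distance threshold are all invented for illustration; real systems are trained on large labeled face datasets.

```python
import numpy as np

# Illustrative sketch of the eigenfaces (PCA) approach: flatten images,
# subtract the mean face, project onto principal components, and compare
# distances in the low-dimensional space. Data here is random, for demo only.
rng = np.random.default_rng(0)

# A toy gallery of 10 enrolled faces, each a flattened 8x8 grayscale image.
gallery = rng.random((10, 8 * 8))

# Compute the mean face and center the data around it.
mean_face = gallery.mean(axis=0)
centered = gallery - mean_face

# The principal components ("eigenfaces") come from an SVD of the data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:5]  # keep the top 5 components

# Project every gallery face into eigenface space once, up front.
gallery_codes = centered @ eigenfaces.T

def identify(probe, threshold=2.0):
    """Project a probe image and return the index of the closest
    gallery face, or None if nothing lies within the threshold."""
    code = (probe - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(gallery_codes - code, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else None

# A probe that is a slightly noisy copy of enrolled face #3 should match it.
probe = gallery[3] + 0.01 * rng.random(8 * 8)
print(identify(probe))
```

Geometric and photometric systems differ in how they produce the numerical identifier, but the downstream matching step, comparing compact numeric codes, is broadly similar.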
Facial recognition technology takes more than just a lot of cameras and a good algorithm to make it work. It needs to be connected to a database of pictures with corresponding identifying information about each subject. When combined together, the algorithms, the image collection system, and the database create a powerful intelligence system that allows for the nearly instantaneous identification of anyone in the database who happens to pass by a camera feeding the system. The infrastructure for massive facial recognition-driven surveillance is largely in place and is growing daily.
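The lookup step just described, matching a camera-derived faceprint against an identity database, amounts to a nearest-neighbor search. A minimal sketch follows; the three-person database, the embedding vectors, and the threshold are all hypothetical.

```python
import math

# Hypothetical identity database: person -> enrolled faceprint (embedding).
# Real systems use learned embeddings and indexed search over millions of IDs.
database = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
    "carol": [0.4, 0.4, 0.9],
}

def cosine_similarity(a, b):
    """Similarity of two faceprints; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match(faceprint, threshold=0.95):
    """Return the best-matching identity, or None below the threshold."""
    best_name, best_score = None, -1.0
    for name, enrolled in database.items():
        score = cosine_similarity(faceprint, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A faceprint close to Bob's enrolled vector matches; an unenrolled face does not.
print(match([0.21, 0.79, 0.52]))
print(match([1.0, 0.0, 0.0]))
```

The threshold is the policy knob: lowering it identifies more passers-by at the cost of more false matches, which is exactly why the scale of the database, not the camera itself, is where the power lies.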
Most of us enjoy taking photos of places we visit and people we enjoy being with. Smartphones give people the convenience of taking photos and sharing them with others in virtually real time. With almost a quarter of the world’s population on Facebook, it is easy to see that people everywhere enjoy taking and sharing photos of themselves, family, and friends. What is unclear is how many users of Facebook and other social media sites know that their photos are being sold to governments and corporations to build massive identity databases, databases that will enable those buyers to identify them in images captured by other cameras, something they would likely never consent to if offered the choice.
“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”
The arguments for and against technologies like facial recognition often fall into the security vs liberty debate. Proponents of the technology argue that it is imperative that the government possess every possible tool available to keep society safe from those who would do us harm. On the other side, civil libertarians claim that it is worth accepting the risks of a terrorist attack or other crimes to protect and maintain the rights that we all cherish.
As with many other technological advances, governments and corporations don’t wait for legislative bodies to write new laws before implementing facial recognition systems. Nor have they asked the courts to rule on the legality of such systems within the framework of current laws. Instead, governments and corporations race to install and implement facial recognition systems and aggressively lobby against new laws or court challenges to their usage. A few years ago, I was shocked when an employee at the Department of Motor Vehicles told me I was prohibited from smiling for my driver’s license photo because it would affect the facial recognition software. I asked her to cite the specific section of state code that had been written and voted on by our elected representatives and reviewed by the governor. She looked at me with astonishment and said she didn’t know what I was talking about, just that it was department rules and everyone had to comply. This is where the debate really needs to be: who has the authority to capture and analyze facial information? Leaving that decision to administrative agencies headed by political appointees seems like a slippery slope to me, and allowing corporations and individuals to establish their own acceptable-use policies appears no safer. The issue of privacy and the rights associated with it is too important to be left up to civil servants, corporate managers, and individuals.
Search Capability and Cheap Storage Can Identify Everyone in a Crowd in Real Time.
The real power in facial recognition technology comes from its search capabilities and cheap storage. It gives someone the ability to identify people in a crowd and then search for those same individuals in databases of previously captured images and in live video feeds. The possibilities for relational analysis are endless, but positive benefits for the subjects are virtually non-existent. It is not a stretch to say that hardly anyone can or should be trusted with this type of power. Warrantless searches by police are clearly prohibited by the United States Constitution, yet police agencies across the country are scrambling to spend as many tax dollars as possible to implement facial recognition systems without any meaningful debate, opposition, or reasonable oversight.
What does the law say about facial recognition?
There are very few actual laws on the books and there are very few court challenges, although that number is growing. According to a 2015 report by the Government Accountability Office (GAO), “The United States does not have a comprehensive privacy law governing the collection, use, and sale of personal information by private-sector companies. In addition, we did not identify any federal laws that expressly regulate commercial uses of facial recognition technology in particular. However, there are three areas in which certain federal laws that address privacy and consumer protection may potentially apply to commercial uses of facial recognition technology: (1) the capture of facial images; (2) the collection, use, and sharing of personal data; and (3) unfair or deceptive acts or practices, such as failure to comply with a company’s stated privacy policies.” https://www.gao.gov/assets/680/671764.pdf
Only two states have significant privacy legislation addressing the use of facial biometrics: Texas and Illinois. Both state laws are similar in their approach and language regarding notice provisions, retention limitations, and other privacy concerns. The following section provides a detailed overview of the Illinois law and a current court case involving it.
“In October of 2008, Illinois enacted the Biometric Information Privacy Act. The act was aimed at protecting an individual’s biometric identifiers, including facial geometry, from everyone but federal, state and local government, and sets forth requirements for those who possess face recognition data. The legal requirements are discussed below.
For the sake of convenience, an entity that deals in facial recognition data is referred to as the “Company” even though the law applies equally to individuals and non-profits.
Written Consent Is Required
In order for a Company to collect, capture, purchase, receive through trade, or otherwise obtain face recognition data or facial geometry data, the subject of that data must sign a written release. If any such data is collected or used without a written release, the Company is in violation of the law.
In addition to obtaining a written release, the Company must also advise the subject, in writing, that his or her facial recognition data is being collected or stored, the purpose for which it is being collected or stored, and the length of time for which the face recognition data will be collected, stored, and used. If these requirements are not met, the Biometric Information Privacy Act has been violated.
A Written Policy Is Necessary
Companies in possession of facial geometry data must have a written policy regarding the length of time the facial recognition data will be held. The policy must be available to the public, and the policy must provide that the data will be permanently destroyed when the purpose for collecting the data has been satisfied or within three years after the individual’s most recent interaction with the Company, whichever comes first.
The policy must include guidelines for destroying the face recognition data, and the Company must follow its policy except in cases of a valid warrant.
No Profiting, Trading, or Disclosure Is Permitted
Even if a Company has written consent from the subject to collect, store, or use face recognition data, the Company is not allowed to sell the data or profit from someone’s facial geometry under any circumstances. This is hugely important because many of the companies that collect biometric data do so in the hopes that they can sell the (hugely valuable) data in the future. There is nothing in the Biometric Information Privacy Act that allows a waiver of this provision.
The law also specifically prohibits the trading and leasing of facial recognition data. Once again, there is no provision allowing this prohibition to be waived. Companies cannot sell or make money off of the subject’s data in Illinois.
However, a Company is allowed to disseminate or disclose the subject’s facial recognition data if one of the following conditions is met:
- The subject consents to the disclosure.
- The disclosure completes a financial transaction that was authorized or requested by the subject.
- The disclosure is required by state, federal, or local law.
- The disclosure is pursuant to a valid warrant.
The Holder of Face Recognition Data Must Keep It Safe
If a Company has written consent from the subject and has a written policy with respect to the destruction of facial geometry data, the Company is allowed to collect and hold the facial recognition data, but the Company is still required to keep the data safe.
As we are all aware, hackers target all valuable data stored in computers. The massive 2017 hack of the Equifax data aggregator demonstrated what many of us already knew: even the most sensitive information held by the biggest of companies can be exposed to hackers and those who may abuse it. Over 100 million people were potentially impacted by the Equifax breach. The data held by credit reporting agencies like Equifax is of the most personal type – the type often used to confirm the identity of people completing remote transactions. The breach will likely impact those affected for the rest of their lives.
Though the damage done by the Equifax breach (and the many others that have occurred in recent years) is massive, most of the data collected (such as social security numbers and account numbers) can theoretically be changed. However, biometric markers such as facial geometry, fingerprints, retina characteristics, voice prints, and hand geometry are almost completely unchangeable. If biometric data is compromised, the subject must either cease using the biometric for identification or transactions or be at risk for identity theft for the rest of his or her life.
Illinois considered the sensitive and immutable character of face recognition data when drafting the Biometric Information Privacy Act. Accordingly, the state imposed the following requirements on Companies that possess face recognition data:
- The Company must protect facial geometry data in accordance with the reasonable standard of care in the Company’s industry when storing or transmitting the data; and
- The Company must protect face recognition data in a manner that is equal to or more protective than the manner in which it protects other confidential and sensitive information (for example social security numbers and account numbers).
Victim’s Rights When the Law Is Violated
If a Company violates the Biometric Information Privacy Act when collecting, handling, storing, or transmitting face recognition data, the subject of that data has substantial rights under Illinois law.
Illinois provides the victim with a monetary award when a Company captures, uses, stores, or transmits facial recognition data without complying with every part of the law. The amount of the monetary award depends on whether the violation was negligent, reckless, or intentional.
If the Company negligently fails to comply with the Biometric Information Privacy Act, the victim whose facial geometry was mishandled is entitled to a $1,000.00 award or the value of the actual harm caused, whichever is greater.
If the Company recklessly or intentionally violates the Act, the victim is entitled to a $5,000.00 award or the value of the actual harm suffered, whichever is greater.
Because Companies are not in the habit of admitting they violated a law or paying out monetary awards to victims, the victim will usually have to take the Company to court. Fortunately, the Biometric Information Privacy Act also provides that if the victim’s suit is successful, the Company will have to pay attorney fees, expert witness fees, and litigation expenses.”
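The statutory damages scheme in the quoted summary reduces to simple arithmetic: the victim receives the fixed statutory amount or the actual harm, whichever is greater, with the statutory amount depending on the Company's culpability. A small illustrative sketch (the function and tier names are my own labels, and this is an illustration of the text, not legal advice):

```python
# BIPA statutory damages as described above: $1,000 for a negligent
# violation, $5,000 for a reckless or intentional one, in each case
# replaced by the actual harm if that harm is larger.
NEGLIGENT_AWARD = 1_000.00
RECKLESS_OR_INTENTIONAL_AWARD = 5_000.00

def bipa_award(actual_harm, reckless_or_intentional=False):
    """Return the greater of the statutory amount and the actual harm."""
    statutory = (RECKLESS_OR_INTENTIONAL_AWARD
                 if reckless_or_intentional else NEGLIGENT_AWARD)
    return max(statutory, actual_harm)

# Small harm from a negligent violation still yields the statutory floor...
print(bipa_award(250.00))
# ...while large proven harm displaces the statutory amount.
print(bipa_award(12_000.00, reckless_or_intentional=True))
```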
Has the Illinois law been successful in protecting Illinois residents’ privacy?
A class action lawsuit was filed against Facebook in 2016 under the Illinois BIPA law. Following is a summary of the lawsuit’s progress to date from the National Law Review.
“This past week, (March 2, 2018), a California district court again declined Facebook’s motion to dismiss an ongoing litigation involving claims under the Illinois Biometric Information Privacy Act, 740 Ill. Comp Stat. 14/1 (“BIPA”), surrounding Tag Suggestions, its facial recognition-based system of photo tagging. In 2016, the court declined to dismiss the action based upon, among other things, Facebook’s contention that BIPA categorically excludes digital photographs from its scope. This time around, the court declined to dismiss the plaintiffs’ complaint for lack of standing under the Supreme Court’s 2016 Spokeo decision on the ground that plaintiffs have failed to allege a concrete injury in fact. (Patel v. Facebook, Inc., No. 15-03747 (N.D. Cal. Feb. 26, 2018) (cases consolidated at In re Facebook Biometric Information Privacy Litig., No. 15-03747 (N.D. Cal.)). As a result, Facebook will be forced to continue to litigate this action.
This dispute is being closely watched as there are a number of similar pending BIPA suits relating to biometrics and facial recognition and other defendants are looking at which of Facebook’s defenses might hold sway with a court.
In ruling the plaintiffs had standing, the court stated that, with BIPA, the Illinois legislature codified a right of privacy in personal biometric information, and that “a violation of BIPA’s procedures would cause actual and concrete harm.” According to the court, the statute deemed procedural protections for biometric data “crucial” in the new digital age, particularly since biometric identifiers cannot be changed if compromised:
“When an online service simply disregards the Illinois procedures, as Facebook is alleged to have done, the right of the individual to maintain her biometric privacy vanishes into thin air. The precise harm the Illinois legislature sought to prevent is then realized.”
“Consequently, the abrogation of the procedural rights mandated by BIPA necessarily amounts to a concrete injury. This injury is worlds away from the trivial harm of a mishandled zip code or credit card receipt. A violation of the BIPA notice and consent procedures infringes the very privacy rights the Illinois legislature sought to protect by enacting BIPA. That is quintessentially an intangible harm that constitutes a concrete injury in fact.”
The court ultimately rejected Facebook’s argument that the collection of biometric information without notice or consent can never support Article III standing without “real-world harms,” as the court pointed to holdings within the circuit finding that “privacy torts do not always require additional consequences to be actionable.” The court also distinguished some prior decisions in other federal courts that had dismissed BIPA claims for lack of standing, including a holding where the Second Circuit had little trouble affirming the dismissal of BIPA claims against a videogame maker related to a personalized avatar feature because the defendant had satisfied BIPA’s notice and consent provisions and plaintiffs could not allege a material risk of harm to a concrete interest protected by the statute. In the prior dismissals, the court noted, the plaintiffs “indisputably knew” that their biometric data would be collected before they accepted the services offered by the businesses involved, as opposed to the plaintiff’s allegations in the instant case that claim that Facebook afforded plaintiffs no notice or opportunity to say no.
While the ultimate outcome of this case is uncertain, as many substantive issues have yet to be fully aired, the ruling was a clear victory for privacy advocates, particularly those who have cautioned that facial recognition and authentication techniques that employ biometrics implicate important privacy and data security concerns. The ruling was not entirely a surprise, as the Ninth Circuit recently found standing in a Video Privacy Protection Act (VPPA) case (yet dismissed the action on other grounds), holding that the statute “generally protects a consumer’s substantive privacy interest in his or her video viewing history” and the plaintiff need not allege any further harm to have standing.
The Facebook litigation is an important bellwether on the scope and reach of the Illinois biometric privacy law. As noted above, other BIPA defendants are closely monitoring the case. In addition, many organizations that are trying to understand the range of their obligations under BIPA – and are looking to implement practical methods of compliance – are also watching this case for lessons to be learned. Finally, a number of states are considering biometric privacy laws, and given that the “lack of standing” argument under BIPA relates in part to BIPA’s specific statutory language, they hope to learn from this and other cases as to how best to craft their legislation (and decide whether such legislation might contain a private right of action or merely delegate enforcement powers to the state attorney general).”
Are there existing legal precedents regarding technology and privacy?
In criminal cases where a person is convicted based on digital evidence, courts are often deciding questions about the admissibility of that evidence. The U.S. Supreme Court is currently hearing a case on police obtaining mobile phone data without a warrant and using it to identify potential suspects. https://www.theguardian.com/law/2017/nov/29/supreme-court-considers-limits-on-police-tracking-via-mobile-phone-data
In this case, Carpenter v. United States, the police obtained 127 days of cellular phone records from Metro PCS without a warrant to try to place Mr. Carpenter at the scene of several robberies around Detroit. The government relies on two previous Supreme Court cases, Smith v. Maryland, and United States v. Miller which established what is known as Third Party Doctrine. Third Party Doctrine holds that information a person voluntarily shares with a third party such as bank deposit and withdrawal records (Miller) or numbers dialed on a phone (Smith) isn’t protected by the Fourth Amendment because a person can’t expect a third party to keep that information secret.
Although the Supreme Court has not yet ruled on the Carpenter case, oral arguments last November indicate that the court is concerned about the privacy implications of technologies that track and record people’s movements, storing that data for months or even years where police can access it without a warrant. Facial recognition technology falls squarely into this category. If the court decides against the Third Party Doctrine and at least requires a warrant before databases that identify people and log their movements can be accessed, it will be a major victory for privacy advocates, and it will not significantly diminish the ability of organizations to gather and store facial biometrics for legitimate purposes.
Are privacy concerns with facial recognition really all that new or unique?
Industry stakeholders such as representatives of biometrics companies and retail industry groups argue that privacy concerns regarding facial recognition are not unique to this technology and offer several counterclaims to support this view. The following are five arguments made in support of facial biometrics by biometric industry spokespeople. https://www.gao.gov/assets/680/671764.pdf It is interesting to note that none of these arguments actually points out any benefit of facial recognition to citizens; rather, they try to make the case that privacy is already being surrendered in the name of convenience or security, so what’s the big deal?
Argument 1: Should people expect anonymity when out in public?
The National Retail Federation claims that people should not expect complete anonymity in public. They also argue that anonymity and privacy are not intrinsically linked together. They argue that by making their face available in public, people give up any expectation that their image won’t be captured and that such capture doesn’t necessarily remove a person’s anonymity because it doesn’t directly reveal a person’s name, social security number, or other personally identifiable information.
In light of the advancements in facial recognition algorithms and the proliferation of networked databases, this argument is very weak. Implying that people essentially consent to having their identity captured by facial recognition systems simply by going out in public with their faces exposed is a disturbing viewpoint in my opinion. I doubt the National Retail Federation would conversely argue that patrons who wish to remain anonymous should wear ski masks in their stores. This argument is very similar to the often-used line of “if you don’t have anything to hide, why do you care about being watched?” The second part of the argument, that a facial image doesn’t reveal additional personally identifying information, also falls short in light of algorithmic advances that allow faceprint data to be almost instantly linked to a real person’s identity.
Retailers have many reasons to use facial recognition systems to identify store patrons, so arguing that being identified by your face doesn’t give up other personally identifying information misses the whole point. The store wants to match your faceprint to a whole host of data sets about you, from your normal shopping habits to your susceptibility to advertising to whether or not you have shoplifted in the past. The shoplifting application sounds like a reasonable idea. Who can blame a retailer for wanting to watch past shoplifters a little closer? The problem, of course, is that it creates a sort of scarlet letter in which petty offenses follow people around for the rest of their lives. There is something to be said for redemption and for having your punishment end upon satisfaction of the sentence. With facial recognition technology, retailers can and will build databases of shoplifters and sell or share them with other retailers. People who as teenagers did something stupid like shoplifting may find themselves subjected to additional surveillance, unfavorable credit terms, and a host of other negative consequences long after they have paid society for their crime.
Argument 2: People are already accustomed to widespread surveillance.
The second argument is that because surveillance is already in use and a part of our daily lives facial recognition doesn’t add to or alter the level of privacy invasion people are already accustomed to. A second component of this argument is that cameras aren’t 100% reliable, generally aren’t networked to other camera systems, and it would be expensive to connect camera systems for the purpose of tracking individual people’s movements.
This argument is also extremely weak because it sidesteps the fundamental privacy issues regarding surveillance in general and makes the lame assumption that the presence of some surveillance systems justifies the addition of others. It also assumes that because people are accustomed to the current level of surveillance, they are somehow OK with it. Many people simply aren’t aware of it, or assume that cameras are only used in stores and other public places to monitor shoplifting and other criminal behavior. They aren’t aware of the multitude of ways their faceprints can potentially be used by businesses and other organizations, and most would not give consent up front if given the choice.
“The Center for Democracy & Technology has stated that if use of the technology to identify individuals in public were deployed widely enough in the future, and if businesses shared facial recognition data with one another, the result could be a network of cameras that readily tracked a consumer’s movements from location to location. Further, it noted that unlike other tracking methods, facial recognition does not require an individual to wear a special device or tag, which reduces individuals’ ability to avoid unwanted tracking. A representative of the World Privacy Forum told us that most consumers would find it invasive of their privacy for security cameras to be used to track their movements for marketing purposes. Likewise, FTC staff have reported that the privacy risks for consumers would increase if companies began using images gathered through digital signs to track consumers across stores. These issues are underscored by concerns consumers have expressed about being tracked in other contexts. For example, a 2009 consumer survey on marketing found that about two-thirds of respondents did not want online advertising targeted to them if it involved having their offline activity tracked. Likewise, a representative of the National Retail Federation told us that customers of a major department store chain reacted negatively after the store posted signs disclosing that customers’ movements within the store were being tracked via their mobile phones.”
Argument 3: Consumers have shown a willingness to give up some privacy for the benefits technology offers.
Industry stakeholders note that some people, quite possibly a majority, are willing to trade privacy for convenience and other benefits offered by technology. They also point to the fact that many people share personal information on social networks as evidence that societal viewpoints toward privacy have changed. This argument doesn't stand up to logical scrutiny either. The United States Constitution and government were founded on the principle of protecting the minority from the tyranny of the majority. Public opinion on every topic swings drastically over time. Civil rights cannot be determined by taking a public opinion poll. If that were the case there would be a long and ever-growing list of banned speech, a situation that is very close to reality on most college campuses. The whims of the masses are almost always eventually proven wrong. In the case of facial recognition and surveillance, the stakes are too high to trust the opinions of the idiot masses.
I also don't think that most people actually look at the issue as one of accepting the benefits of technology in exchange for giving up privacy. Marketers often point to the acceptance of their terms and conditions as proof that consumers willingly give up their privacy rights in exchange for the benefits of technology. The simple truth is that virtually every program or application requires the acceptance of its terms of service or it won't work. There isn't a button to push that allows for objections or opens negotiations; decline, and the application simply doesn't work. Most people accept these terms because there simply isn't a workable alternative. For example, my online banking application periodically informs me that the terms have been updated and that I must accept them before I can use the application again. It inevitably presents itself right when I'm in the middle of trying to pay a bill from my phone or doing something else that is urgent. The last time this happened I actually took the time to read the terms of service, and I was shocked by what I read. Whoever wrote the terms of service for this particular online banking application took it upon themselves to decide what banking customers should be allowed to do with their own money. For example, by using the application, a consumer agrees not to spend their money on alcohol, drugs, or firearms. Never mind that two of the three are legal virtually everywhere in the United States, and in many places all three are legal. Facial recognition is no different. People are being hoodwinked into agreeing to things that they never would agree to if the privacy ramifications were honestly explained to them.
Argument 4: The need for consent varies depending on the context.
Trade associations for the facial biometrics industry argue that the requirement for a person to give consent for the collection and usage of their facial data depends on who is consuming it and under what conditions. For example, if the facial recognition data is gathered and used for security purposes, the industry argues that there is no requirement for consent by the subjects. They contend that the facial biometrics in security systems are generally part of a closed system that is accessed and consumed by the organization deploying it. Social media sites are different, this argument claims, because they create repositories of facial images that can be used to identify individuals outside of the social network.
This argument has some merit provided the organization running the security system has tight security policies and procedures to protect the facial biometrics database from being accessed by or shared with any other individual or organization for any reason. The problem with that approach is that there is little practical evidence that companies actually abide by those policies if they indeed have them. Phone companies, internet service providers, and other companies routinely hand over data to police and other government agencies without warrants or subpoenas. Furthermore, facial biometrics differ from other biometric data in that the capture process can be done without the knowledge, let alone the consent, of the subject. And unlike fingerprints or most other biometric data, a person's facial biometrics can easily be used to identify them in other places past, present, and future. A person can leave fingerprints in public places without concern that those fingerprints will be captured and stored for purposes of establishing a location and activity history. It is highly impractical for anyone to lift fingerprints just for the sake of building a database of who was in a crowd, but with facial recognition systems it is easy to identify everyone who passes a camera and save a record with metadata regarding the time, date, associates, and even an assessment of emotional state.
Argument 5: Facial recognition technology should not be singled out.
Industry representatives argue that facial recognition is not much different from other technologies like gait and voice recognition which can also be used to identify individuals from afar without their knowledge or consent. Therefore, the industry claims, policy makers should focus on protecting personal information including all biometric information instead of trying to regulate the circumstances and methods of its collection.
In my opinion, identity protection is a made-up concept designed to fool the public into getting used to the idea of proving their identities and providing multiple data points for non-repudiation by businesses and government. This lowers risks and the associated costs for businesses, savings which they rarely pass on to customers. I recently went with a family member to view apartments. Both complexes had a $50 application fee and a $150-200 "processing fee." The leasing manager explained that the processing fee covered their costs for running credit and background checks. It is amazing to think that people are asked to pay $200 for credit and background checks based on data they likely provided to the background-checking organizations for free. The actual cost of purchasing these reports is also about one tenth of the processing fee being charged.
At the same time that people are told to carefully safeguard their personal information, that same information is demanded as a condition of transacting with virtually every business, employer, government entity, medical provider, and so on. To make matters worse, businesses and other entities sell and share that information with each other, and if consent is required by law they simply make consent a condition of doing business with them. They lose it too, despite lecturing us on the importance of safeguarding our personally identifying information. After every major breach the offending organization issues a half-hearted apology and promises to provide identity monitoring services or other essentially worthless compensation to the victims. They argue that we need to entrust them with more and more personally identifiable information so that no one else can pose as us, yet at the same time they scare us into purchasing identity monitoring services because others will easily obtain and fraudulently use our information. How is that possible given the sheer volume of data these organizations possess on virtually everyone?
The argument that there are other technologies that allow for remotely and secretly identifying individuals doesn’t diminish the importance of regulating the collection and permissible use of that information. If anything, it reveals that industry representatives recognize that the true intention of facial recognition and other remote identification systems is to enable mass surveillance.
The arguments in favor of facial recognition ignore or downplay the numerous alarming ways in which this technology is already being deployed. Interestingly, the industry arguments for facial recognition outlined above claim no benefits to the public. Instead, the industry stakeholders point to the existence of other surveillance technologies and the public’s use of some intrusive technologies to claim that facial recognition is not that big of a deal. The following pages examine several examples of how facial recognition has already been used by police, businesses, and other organizations.
To see the very near future of this technology’s application just look to China and see how their government is using it. https://www.theguardian.com/world/2017/sep/01/facial-recognition-china-beer-festival
In China, people at a beer festival who were identified by facial recognition as former drug addicts were forced to take a drug test and were arrested if they tested positive. The article about this particular application was written in September 2017. Most of the articles about facial recognition systems being employed by police or governments are from within the last year or two, demonstrating that the technology has only recently advanced sufficiently to enable its use for widespread surveillance. It's no longer a futuristic Big Brother conspiracy theory. It is real, and it is being used for seemingly petty offenses, which supports the fears of privacy advocates.
In the Chinese beer festival example, twenty-five suspects were identified and apprehended because their faces were matched to a police database of convicts and known drug users. This is interesting because the police database included not only people who were wanted for actual crimes, but those whom the government deemed suspicious because of known or suspected behaviors, in this case drug use. One suspect was apprehended on an outstanding warrant and had been on the run for over 10 years. The article didn’t specify what this person was wanted for, but insinuated it was for actual criminal behavior. The article also stated that dozens of others were turned away and not allowed to enter the festival because they were identified as having criminal records or past drug abuse. The article wasn’t clear as to whether the group of drug abusers who were turned away were also forced to submit to drug testing and passed the test or if there were other criteria for simply turning them away.
Commonly, stories of abuses in other countries are used by politicians and others in the United States to highlight the differences between our government and China's. Americans are often told how lucky we are to have a government that doesn't subject us to this type of abuse, and we are reminded of our constitutional freedoms. When our rights are being trampled, we are told it's a temporary emergency measure and there isn't any other way. We are also promised limits on the new intrusion along with judicial or other oversight. Most of the promises don't last very long, and the public eventually gets used to the new normal of less freedom.
Sometimes these stories are used to warn Americans of the potential for abuse of a new technology such as facial recognition. It would surely spark outrage in the United States if the police scanned crowds and singled out drug users for additional screening. The Fourth Amendment protection against unreasonable search and seizure should be enough to stop the police from forcing people to take random drug tests.
Unfortunately, there are many legal precedents already in the United States that could be used to enable this very type of police activity. In the United States all it seems to take for the government to ignore the Constitution is a good crisis. Currently we have an opioid crisis, an illegal immigration crisis, a gang crisis, and several other situations that are being hailed as unprecedented crises. Drug-themed crises have been employed by governments in the United States for generations.
Civil rights have been consistently squashed in the name of fighting drug and alcohol abuse. In the 1980s, drunk driving became a major issue and was used to justify previously unheard-of civil rights violations such as random police checkpoints.
Facial recognition is being used to identify, publicly shame, and text fines to people caught jaywalking in China.
This story is amazing not only because it illustrates one of the many potential "Big Brother" applications for facial recognition, but because it demonstrates that the technical capabilities for massive surveillance already exist, that governments see the technology as a useful instrument of power, and that private companies willingly partner with governments to deploy such systems.
In southeastern China, the city of Shenzhen deployed camera systems at busy intersections to publicly shame jaywalkers. Images of the offenders were flashed on a large LED screen in an effort to deter them from future jaywalking. Recently, officials worked with the camera system company Intellifusion as well as the operators of the messaging platform WeChat and the social media platform Sina Weibo to upgrade the system to identify the people caught on camera and send them fines by text message.
In a ten-month period at the single intersection used as a pilot, the new system identified 13,930 individuals caught in the act of jaywalking. Apparently jaywalking is a common offense in Shenzhen, because in 2016 the city cited 123,200 people for jaywalking without the facial recognition system. Officials now plan to roll out the new facial recognition system city-wide to replicate the success it enjoyed at the pilot intersection.
In addition to sending offenders an instant text message fine, the police also recently rolled out a website which displays the picture of each offender, their family name, and a partial identification number. Someone identified by this system has little choice other than to pay the fine and accept the public embarrassment.
Camera systems that capture traffic offenses and send fines to the vehicle's owner have been employed by local governments across the United States for several years. In most, if not all, cases the systems were installed by an executive branch agency without legislative oversight, public debate, or a vote. These systems have been successfully challenged in court in some jurisdictions, but they persist in many others. A typical legal challenge is that a citizen given an automatic fine by a camera system is denied due process rights. In the United States, the Constitution guarantees the right of every citizen to face his or her accuser, to challenge the prosecution's evidence, and to receive a trial that fairly examines all the evidence in the case. In many traffic camera cases, a key element was that the camera identified the car but could not positively establish that the registered owner of the vehicle was the driver at the time of the offense. Advancements in facial recognition technology potentially eliminate this defense and strengthen the government's position in issuing fines based on traffic camera evidence. Once this defense falls, it won't take long for applications like the jaywalking system to spread like wildfire across the United States.
One of the more bizarre applications China has deployed uses facial recognition to regulate toilet paper usage by limiting the amount each person gets. Apparently elderly people in China have a nasty habit of unrolling extra toilet paper in public bathrooms and taking it home with them. In response, the government has added a facial recognition system to the toilet paper dispensers. Each person has to stand in front of the camera for 3 seconds, removing eyeglasses and hats, in order for the machine to dispense 60 centimeters of toilet paper. Each patron must then wait 9 minutes before the system will dispense another 60 centimeters. If a person returns too many times, the system will deny them completely.
Although this article seems funny, it is very serious and should be seen by Americans as a warning of the lengths governments will go to in order to enforce even seemingly trivial laws and regulations. The article points out that Beijing police boast that 100% of the city is covered by video surveillance, with 46,000 cameras watched by 4,300 officers. London is similarly surveilled, as are numerous large cities across the globe. The article quoted this post on Sina Weibo, a Chinese version of Twitter: "I thought the toilet was the last place I had a right to privacy, but they are watching me in there too." Isn't it interesting that this person says they thought the toilet was the last place they had a right to privacy? It seems like an acknowledgement that they have no privacy rights anywhere else. https://www.theguardian.com/world/2017/mar/20/face-scanners-public-toilet-tackle-loo-roll-theft-china-beijing
If each of the above examples is evaluated as a separate application, many people might not see much harm or danger in the use of facial recognition technology. Many Americans consider themselves law-abiding citizens, and a good percentage of them wouldn't find it alarming for the police to target known drug users in a festival crowd. After all, we are reminded daily by our government and a complicit media that the country is in the midst of an "opioid epidemic." If it would save a few people from overdosing, I am certain a significant number of Americans would trade their freedom and privacy to employ such a tactic. The jaywalking application could likely be sold to the US public, especially in a place like New York City, as an instrument of safety. And most Americans would probably consider the toilet paper dispenser example just a silly thing happening in China; Americans for the most part can't relate to the desperation of Chinese citizens reduced to the humiliation of stealing toilet paper. Most Americans are aware of the power of technology and big data and know it is being used, but most don't connect it to themselves personally.
The final example from China should alarm any true American. It is called “Sharp Eyes,” and is the Chinese government’s plan to connect the security cameras that already watch roads, transportation hubs, and shopping malls with private cameras in residential areas, apartment buildings, and stores and integrate them into a nationwide surveillance and data sharing platform. It will use facial recognition and artificial intelligence to track suspects, spot suspicious behaviors, and even predict crime. It will also coordinate emergency services and monitor the comings and goings of China’s 1.4 billion people. The term Sharp Eyes is a reference to a Communist Party slogan, “the masses have sharp eyes” that was used by Mao to get people spying on their neighbors.
The back end of this system is a "police cloud" that merges all the location and activity information with criminal records, medical records, travel bookings, online purchases, and even social media postings. One of the goals of Sharp Eyes is to develop a "social credit" score for each individual based on observed behaviors, associations, purchase history, internet search history, and endless other possible data points. The Chinese government wants to know where people are, what they are doing, and who they are doing it with, and facial recognition is the linchpin for turning this totalitarian dream into a real-life nightmare.
At a housing complex in Chongqing, the police reported that "90% of the crimes are committed by 10% of people who are not registered residents." "With facial recognition we can recognize strangers, analyze their entry and exit times, see who spends the night here, and how many times. We can identify suspicious people from among the population." Adrian Zenz, a German academic who has researched ethnic policy and the security state in China's western province of Xinjiang, said the government craves omnipotence over a vast, complex, and restive population. "Surveillance technologies are giving the government a sense that it can finally achieve the level of control over people's lives that it aspires to," he said.
The Sharp Eyes program also aims to mobilize the neighborhood committees and snoopy residents who have long been a key source of information. These people can take a cell phone video or still picture and send it right to the police to report everything from a car without a license plate to people arguing to someone "suspicious" walking in the neighborhood. The police can quickly load images into Sharp Eyes to identify the subjects and determine whether or not to dispatch an officer to the scene.
By 2020, a little more than a year from now, China's government aims to make its video surveillance system omnipresent, fully networked, always working, and fully controllable, combining data mining with sophisticated video and image analysis. It is China's ambition that sets it apart. Western law enforcement agencies tend to use facial recognition to identify criminal suspects, not to track social activists and dissidents or to monitor entire ethnic groups. China seeks to achieve several interlocking goals: to dominate the global artificial intelligence industry, to apply big data to tighten its grip on every aspect of society, and to surveil its population more effectively than ever before.
The Chinese government relies on assistance from technology companies at home and abroad. The government sees Sharp Eyes as a catalyst to help it revolutionize and dominate the video analysis industry. A major issue with facial recognition and other biometric systems is false positive matches. An FBI study recently revealed that in 85% of test cases a facial recognition system produced 1 correct match and 49 false positives out of 50 faces scanned; in the other 15% of cases the system flagged all 50 candidates as false positives. One of the main challenges in developing deep learning technologies is that they require massive amounts of data to create more accurate algorithms.
China’s intense focus on facial recognition technology and lack of concern about privacy will provide a virtually unlimited supply of data for testing and improving the algorithms. “Now we are purely data driven,” said Xu Li, CEO of SenseTime. “It’s easier in China to collect sufficient training data. If we want to do new innovations, China will have advantages in data collection in a legal way.” Innovations and breakthroughs in China will benefit governments around the world by making the systems more accurate and cheaper to deploy and operate.
The Chinese are very excited about the possibilities for their Sharp Eyes program. “The bigger picture is to track routine movement, and after you get this information, to investigate problematic behavior,” said Li Xiafeng, director of research and development at Cloudwalk, a Chongqing-based firm. “If you know gambling takes place in a location, and someone goes there frequently, they become suspicious.”
Gradually, a model of people’s behavior takes shape. “Once you identify a criminal or a suspect, then you look at their connections with other people,” he said. “If another person has multiple connections, they also become suspicious.”
The Chinese Sharp Eyes program sounds like a wild story out of a science fiction movie. Unbelievably, much of the infrastructure is already in place, and the project is on track to be fully operational by 2020, little more than a year and a half from now! What I find surprising is that for the most part the Chinese uses of and plans for facial recognition are publicly known, yet the Western media barely mentions them. The Guardian has several articles about facial recognition usage in China, but hardly any other media outlets even run syndicated articles about it.
Ok, of course China will abuse this technology, but certainly Western democracies are different, right?
You might think that the Western media isn’t interested in stories about Chinese facial recognition usage because that kind of thing just never happens in the Western world. You’d be wrong. Now let’s look at a few examples of what is happening in Western democracies.
Great Britain, Notting Hill Carnival: police using facial recognition to scan the crowd.
Police in London deployed a facial recognition system to scan the faces of revelers as they passed the entrance to the Notting Hill Carnival in August 2017. The previous year's carnival had been somewhat raucous, with 450 arrests and several police officers injured. The police department decided to use the 2017 carnival to pilot its new facial recognition system.
Civil libertarians in Great Britain were outraged by this use of technology to examine people in a crowd who weren't doing anything wrong. Great Britain is a democracy with at least some level of civil rights, and people in the United States often point to European countries, including Great Britain, for examples of governmental policies we should emulate. This is just one of many examples showing that civil rights laws in most "modern democracies" are not very strong at all and that the police employ tactics very similar to those in totalitarian countries. Martha Spurrier, the director of Liberty, said: "This intrusive biometric surveillance has no place at the Notting Hill carnival. There is no basis in law for facial recognition, no transparency around its use and we've had no public or parliamentary debate about whether this technology could ever be lawful in a democracy."
The lack of parliamentary (legislative branch) debate about this technology is a common theme in supposedly free countries, including the United States. The executive branch police and other agencies simply bury the purchase of camera and facial recognition equipment in their budgets and install it. When a bill does come up to limit their use of it, the police agencies join the opposition to the bill and paint their opponents as soft on crime. They also suddenly claim to be governing themselves effectively and provide examples of how they are minding civil liberties.
In a statement explaining its plans, the Met said: "The technology involves the use of overt cameras which scan the faces of those passing by and flag up potential matches against a database of custody images. The database will be populated with images of individuals who are forbidden from attending carnival, as well as individuals wanted by police." There isn't an explanation of what criteria were used to determine which individuals are "forbidden from attending carnival." Maybe it is people who were arrested and convicted of a crime at a previous carnival. Maybe it is simply people the police don't like for one reason or another. Maybe it is people identified as belonging to a certain group, religion, political movement, or even a civil liberties organization. None of those examples justify mass surveillance and identification of people in a crowd.
London has had closed circuit television (CCTV) cameras in place for over a decade. Events like the Notting Hill Festival are simply a test case for widespread implementation of facial recognition across the CCTV network. Sam Lincoln, a former chief inspector in the Office of Surveillance Commissioners, said: “When you put facial recognition on to CCTV cameras, the ability for crime fighters to recognize people is enhanced. It speeds it up, you don’t have to have lots of people watching video.”
The Met tried out the system last year, but it failed to pick out any suspects. Facial recognition technology is improving rapidly, however, and the force believes it has the potential to provide a powerful new tool for law enforcement. Only images that come up as a match with a wanted offender will be retained by police, the Met said. It is quite telling that despite poor results the police continue to spend tax money on new and improved facial recognition technology. They know the power this technology will give them when it improves, and it is this power that they long for.
The Biometrics Commissioner for Great Britain issues a yearly report on the state of biometrics oversight. Here is a quote from last year's report: "We all leave records of our activities across public and private databases and in public spaces which, if analyzed holistically, provide multiple ways of checking identity, modelling the patterns of our social behavior, collecting or inferring our attitudes and recording our activities, movements and decisions." He rather chillingly adds that "we should now add sociometrics" to biometrics – the idea that there is no aspect of society which cannot be tracked, monitored, analyzed, assessed and stored.
How much different is the police attitude and viewpoint towards facial recognition usage in this “free and open society” as compared to the totalitarian communist regime of China?
Germany tries very hard to claim its government is nothing like its Nazi past. Is it succeeding?
Let’s start with a story about Germany using facial recognition to ID terrorists at a Berlin train station: http://www.homelandsecuritynewswire.com/dr20170612-germany-testing-facerecognition-software-to-help-police-spot-terrorists
Terrorism is often used by Western governments to justify acting like the governments they label "rogue regimes," "state sponsors of terror," or even an "axis of evil." Germany has had many terrorist incidents over the past several decades, and people there are justifiably concerned about the prospect of continued terrorism. Police in Berlin recently began testing a facial recognition system designed to identify terror suspects in live surveillance camera footage.
The police set up a booth at a Berlin train station to recruit volunteers to test the new system. On the first morning of recruiting they signed up 44 volunteers, and they hope to eventually enlist 275 people in the test. The police offered incentives to participants, including Amazon gift cards and Apple Watches for those who passed the scanners the most times during the test period.
Articles about the testing report that subjects expressed little concern about the possible misuse of their data or the overall privacy implications of the system. "Anything you can do to help fight crime is worth doing – that's why I agreed," one woman explained to DW. The other two volunteers interviewed concurred. (Deutsche Welle, 2017)
The article in Deutsche Welle and others explain that adding facial recognition to existing camera systems is currently against German law. "According to paragraph 27 of the Federal Police Law, authorities are allowed to use cameras to zoom in on people but they're not allowed to digitally process such data and automatically evaluate it with software," Matthias Monroy, a member of both groups, told DW. "There are inevitably false identifications."
In another article, Deutsche Welle describes how a new German surveillance law was passed in December 2017 with implications for privacy similar to those of facial recognition. The new law allows the government to install malware on smartphones, tablets, or laptops so that authorities can read WhatsApp messages and other personal communications. The malware, also known as the “State Trojan,” lets the government circumvent encryption because it captures each message straight from the device before it is encrypted and sent into the network. Authorities will be able to see whatever the person sees on their device.
Politicians from the governing coalition of parties in Germany argue this law is necessary to keep up with terrorists and other criminals who communicate mostly over the internet now instead of phones. “This is how we facilitate efficient, cutting-edge law enforcement that’s keeping us all safe,” Michael Frieser, domestic policy expert with the conservative CSU, said in the Bundestag.
The remainder of the article describes how this law is being implemented in ways that run counter to previous rulings by the German equivalent of the Supreme Court. The court had ruled that all private and intimate conversation is excluded from the information that can be gathered under laws such as this, but proponents of the law say this restriction only applies to communications that took place before the State Trojan was installed, so every conversation intercepted afterward can be recorded and used by the police. “In times of permanent terrorist danger, we have to close security holes,” is how Eva Högl, vice chair of the SPD parliamentary group, put it in a statement for DW.
Another popular tactic employed by politicians is to attach a potentially controversial law to a spending bill that has a deadline for passage and consequences for missing the deadline, such as a government shutdown. This German law was a late attachment to another bill that made minor adjustments to the existing penal code. It was added to the other bill just a few hours before the vote was taken, so there was barely time for members to read it, let alone debate it. Although the use of facial recognition to search live CCTV feeds for wanted terrorists and other criminals hasn’t yet been passed into law in Germany, it will likely be implemented through loopholes in existing laws and creative interpretations of high court rulings.
Because of its Nazi past, German society is usually seen as distrustful of police state tactics. In this example, the police had no trouble finding volunteers to help test a tool that makes the Gestapo look like amateurs. And just like in other Western democracies, lawmakers in Germany resort to tricky legislative tactics to pass surveillance bills only when they can’t implement surveillance through loopholes in existing laws or executive branch regulations. I’ve often said that German society shouldn’t be held up as a people who understand, support, and defend civil liberties because of their country’s past. They are descendants of the majority. The victims didn’t survive to pass their wisdom to the rest of society.
Are there private or commercial uses of facial recognition we should be concerned about?
It’s not just government abuse that should alarm you. As security cameras have become cheaper and easier to install and maintain, their usage has exploded. Stores have virtually every square inch of floor space covered by surveillance cameras, along with parking lots, employee break rooms, and even the loading docks and dumpsters. Restaurants, hotels, gas stations, ice cream shops, car dealerships, legal offices, dental clinics, and pretty much any other business a person enters now have multiple cameras watching all public common areas at a minimum, and many private areas too.
Consider the following numerous ways that facial recognition systems are being used by businesses without the consent of the subjects.
Stores are scanning for known and suspected shoplifters.
Wal-Mart admitted to contracting with a company called FaceFirst in 2015. Many other retailers are using FaceFirst, but won’t admit it. The basic idea for this use of facial recognition technology is that every shopper who enters the store is filmed and their facial image is compared to a database of known or alleged shoplifters. FaceFirst hosts the software and database in the cloud, so there is the possibility that retailers can share their offender lists with one another and purchase databases of suspects.
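The exact matching pipeline FaceFirst uses is proprietary, but systems like this typically reduce each face to a numeric embedding vector and compare it against stored vectors with a similarity measure. Here is a minimal, purely illustrative sketch; the embeddings, IDs, and threshold are made up for demonstration:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return IDs of watchlist entries whose stored embedding resembles the probe face."""
    return [entry_id for entry_id, embedding in watchlist.items()
            if cosine_similarity(probe, embedding) >= threshold]

# Toy 3-dimensional embeddings; real systems use hundreds of dimensions.
watchlist = {
    "subject-17": [0.9, 0.1, 0.3],
    "subject-42": [0.1, 0.8, 0.2],
}
probe = [0.88, 0.12, 0.31]  # hypothetical embedding from a store camera
print(match_against_watchlist(probe, watchlist))  # ['subject-17']
```

Note that the threshold directly trades misses against false alarms: lower it, and more innocent shoppers will match someone on the list.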
Wal-Mart and FaceFirst insist that the images of shoppers who don’t match a profile in their database are deleted, but they have provided no evidence to substantiate this claim.
The public has grown accustomed to video surveillance in stores. But having your face imaged and compared to a list of known or suspected shoplifters takes surveillance to a level most people are not comfortable with. In addition to identifying potential or past shoplifters, the system notifies the store security staff and provides them with a profile of the suspect and company guidelines on how to handle the person, which can include following them around the store or even confronting them.
With the probability of false positive matches ranging from 2% to 30%, these systems create a real risk of public embarrassment for many people who have no intention of stealing from the store. The consequences could range from being followed and questioned by store security personnel to having the police summoned to detain someone for a past alleged offense.
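To see why even the low end of that range is a problem, consider a quick back-of-envelope calculation. The shopper volume below is an assumption chosen for illustration, not a reported figure:

```python
def expected_false_alarms(shoppers_per_day, false_positive_rate):
    """Expected number of innocent shoppers wrongly flagged per day."""
    return shoppers_per_day * false_positive_rate

daily_shoppers = 5_000  # hypothetical traffic for a single busy store

for rate in (0.02, 0.30):
    flagged = expected_false_alarms(daily_shoppers, rate)
    print(f"at a {rate:.0%} false positive rate: ~{flagged:.0f} innocent shoppers flagged per day")
```

Even at the optimistic 2% rate, a single busy store would flag on the order of a hundred innocent people every day.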
There is also the problem of how “offenders” are added to the database and whether there is a process for removing an image if a person is cleared of wrongdoing or after a prescribed period of time. Without clear boundaries, it is possible for a person who tried to steal a lollipop as an 8-year-old to still have security personnel following them around every time they shop at 38, making a public scene that identifies them as a thief. It sounds outrageous, but there is little to prevent that kind of scenario from becoming commonplace.
Wal-Mart claimed to have tested the FaceFirst system in several stores in 2015 but decided against full scale implementation citing lack of return on investment. Other retailers are more secretive about their use of facial recognition systems. Walgreens says they don’t discuss specific security measures, Home Depot denies using facial recognition, and Target won’t confirm or deny its use. The technology may not be good enough to justify a return on investment now, but with the amount of investment in the industry it won’t be long before it tips that scale. http://fortune.com/2015/11/09/wal-mart-facial-recognition/
Some retail applications may fall more on the creepy side than rise to the level of a privacy violation.
Retailers have always tried to figure out better ways to display merchandise and signage to increase sales. Now many of them are experimenting with innovative software that allows them to analyze camera footage and determine not only a customer’s approximate age, race, and gender, but some of these systems even analyze the customer’s emotional state. They record if you are smiling or frowning as you stand in front of a particular display, also measuring how long you lingered in the same place and what merchandise you were looking at.
All of this seems like fairly innovative marketing research until facial recognition is added to the mix. With facial recognition a retailer can link any number of additional data elements they have about a particular customer such as other purchases they have made in the store, frequency of shopping, and other details that form a profile that contains a lot of private information a person most likely wouldn’t publicly disclose.
For example, if the retailer operated a pharmacy, many customers wouldn’t want others to know about medicines they are taking or conditions they are being treated for. In a recent example of how data analytics can go awry, a father found out about his daughter’s pregnancy because a Target app recommended several maternity and baby items to purchase for her. This example had nothing to do with facial recognition, but it is easy to see how using facial recognition to identify shoppers could be exploited and would be considered intrusive by most accounts. Merchants currently use loyalty programs and credit card point-of-sale information to build predictive models of shopper behavior. Facial recognition allows them to also analyze the behaviors and shopping trips that don’t result in a sale, or that result in a cash purchase. Even if you exclude the possibility of friends and family discovering your shopping habits, how many people really want retail marketing analysts to have access to such private information?
There are conceivably many scenarios in which retailers could use the customer profiles developed through facial recognition to customize pricing and other offers to appeal to what they have determined to be that person’s weaknesses. Banks and other financial services companies already do this by virtue of credit scoring. Of course, banks don’t call it profiling or segregating customers; they call it risk-based decision making. By requiring each customer to be uniquely identified and their complete credit history examined before extending terms and conditions, banks are able to tip the scales in their favor and price loans according to what the bank considers each individual’s risk of non-repayment. Insurance companies similarly amass mountains of data on each individual and are always looking for ways to acquire more. Like banks, insurance companies use this data to price each policy according to an individual’s risk profile. Banks and insurance companies are often criticized by consumer advocates for policies that in many cases seem to extend beyond risk profiling and appear discriminatory and unfair to certain groups of people.
By using facial recognition technology to identify shoppers as they enter the store, retailers could use the profile to raise and lower prices for items on shelves as that particular customer passes by. In-store advertising and promotions can be customized to each customer, which sounds “cool” to many people, especially retailers, but might also be seen as manipulative and intrusive. For those shoppers who visit a store many times without making a purchase, the store may decide to employ tactics that make the person feel unwelcome or otherwise assign their profile to a less desirable category of treatment. Regardless of the motives, and whether the many in-store applications of facial recognition amount to an erosion of civil liberties or are simply creepy, it should concern people to know that they are not only being watched, but that their movements and behaviors are being analyzed by people they don’t know, for purposes they don’t understand and haven’t agreed to.
Certainly, there must be at least a few positive applications for facial recognition technology, so let’s evaluate some of them to determine if these benefits are worth the significant costs to privacy.
According to a page on Wikipedia, facial recognition is being used to prevent voter fraud in Mexico.
In the 2000 Mexican presidential election, the Mexican government employed face recognition software to prevent voter fraud. Some individuals had been registering to vote under several different names, in an attempt to place multiple votes. By comparing new face images to those already in the voter database, authorities were able to reduce duplicate registrations. Similar technologies are being used in the United States to prevent people from obtaining fake identification cards and driver’s licenses. (Wikipedia /wiki/Facial_recognition_system)
This seems like a great application of facial recognition. After all, who doesn’t support election integrity? Unfortunately, in the United States, many people oppose voter identification laws or other attempts to ensure that a person casting a vote is a legally registered voter, is voting only once, and is voting as the person they claim to be. In every state that has passed voter identification laws, political groups oppose them and file lawsuits through surrogate “civil rights” activist organizations, claiming that requiring a voter to produce identification is somehow racist and suppresses minority votes. These groups also loudly decry any evidence of voter fraud as conspiratorial, while producing no evidence to refute it and ignoring valid evidence of non-citizen voting, people voting multiple times, absentee ballot fraud, and precincts recording more votes than registered voters. They also claim to be concerned about alleged Russian hacking of U.S. election systems, yet at the same time insist that voter registration and the actual casting of ballots should be done over the internet to make it easier for “disadvantaged” voters to exercise their right to vote.
The fact that voter identification is so vehemently opposed by some political factions strengthens the arguments for using a technology like facial recognition to authenticate each registered voter and validate that each individual only casts one ballot in each election. The ironic twist that Mexico has successfully used such a system adds legitimacy to the idea.
However, as with many other technological solutions, there are countless other less intrusive ways to ensure election integrity such as simply asking for identification before allowing someone to vote. Creating a database of voter facial biometrics has the potential for serious abuse by political actors. These databases would also be attractive targets for identity thieves and other cyber criminals, creating additional channels to exploit. Before looking to facial recognition or other biometric identification systems the United States needs to solve the problem of political actors and activist judges blocking legitimate attempts to validate voters’ identity and legal authority to vote.
What about using facial recognition systems to find missing persons?
This seems like a very promising use for facial recognition systems at first glance. After all, in a typical year 660,000 people are reported missing in the United States, 460,000 of them children. (source: National Center for Missing and Exploited Children) All but about 2,500 of these cases are resolved each year, with most reports being cancelled after a short period of time because the person returns home or is otherwise found. Many of the children reported missing are runaways, and unfortunately many of those children are sexually exploited while in runaway status. Stranger abductions are the rarest category, making up only a tiny fraction of reported missing persons.
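Taking the cited figures at face value, a quick calculation shows just how high the existing resolution rate already is:

```python
# Quick sanity check on the missing-persons figures as cited in the text.
reported = 660_000   # people reported missing per year
children = 460_000   # of which are children
unresolved = 2_500   # cases left unsolved

resolution_rate = (reported - unresolved) / reported
print(f"cases resolved: {resolution_rate:.2%}")                    # 99.62%
print(f"children's share of reports: {children / reported:.0%}")   # 70%
```

In other words, more than 99.6% of cases are already resolved without a nationwide facial recognition dragnet.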
Although stranger abductions of children are extremely rare, this and other types of crimes against children have a profound impact on most people. When a child goes missing and is presumed to have been abducted, law enforcement enlists the media to help spread the word of the missing child to the public at large. Most of the time the information released includes a picture of the child. If it were possible to enter the child’s facial biometric data into an active surveillance camera network, the probability of finding the child would definitely improve, especially if the network was broadly dispersed geographically.
In many cities across the United States the camera infrastructure is largely intact to theoretically enable analysis of live camera feeds by facial recognition search algorithms. Many surveillance systems are cloud hosted, making it easier to network them together and perform full scale analysis and comparisons of data feeds to facial recognition databases. With the hardware in place already, it is tempting for many to assume that live stream facial recognition analysis is the logical and presumptive next step.
Political debates regarding further erosion of basic civil rights are often framed as false choices between two alternatives presented as the only options. When one of the options is something like reducing child abductions and sex trafficking, those with the opposing view are vilified as supporting child abduction or something equally outrageous. An example or two of an extreme case is used to support this equally extreme position, and media outlets are used to “inform” the public that it needs to exchange additional freedom for the security offered by the new power given to the government.
Implementation of a surveillance system effective enough to scan for specific individuals such as missing children would require coordination and networking between police agencies at the federal, state, and local level as well as thousands of other public and private organizations. It would require a process for the submission of subject data into the system and a system or department to process the video feeds. The figure below depicts the physical architecture of such a system. (source: http://hampentech.com/face-recognition-acap/ ) This particular system can be used to search for specific individuals by loading their image profile into the facial feature and profile server in the middle of the diagram. In this example the cameras themselves are loaded with facial recognition software so they process the video stream at the point of collection with the facial recognition algorithms and transfer a data stream of facial biometrics instead of a pure video stream. The use of this front end “edge processing” depicted in the orange boxes in the diagram significantly reduces network bandwidth, especially when deployed on a large number of cameras.
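To get a feel for why the “edge processing” described above reduces network bandwidth so dramatically, here is a rough back-of-envelope comparison. Every figure is an illustrative assumption, not a measurement of any real deployment:

```python
# Back-of-envelope bandwidth comparison per camera; all figures are assumptions.
VIDEO_STREAM_MBPS = 4.0      # typical 1080p H.264 surveillance stream
EMBEDDING_FLOATS = 512       # common size for a face-embedding vector
BYTES_PER_FLOAT = 4
FACES_PER_SECOND = 2         # detections at a busy location

embedding_bits_per_second = EMBEDDING_FLOATS * BYTES_PER_FLOAT * 8 * FACES_PER_SECOND
embedding_mbps = embedding_bits_per_second / 1_000_000

print(f"raw video: {VIDEO_STREAM_MBPS} Mbps")
print(f"embeddings only: {embedding_mbps:.3f} Mbps")
print(f"reduction: ~{VIDEO_STREAM_MBPS / embedding_mbps:.0f}x")
```

Under these assumptions a camera that sends only facial biometric data uses roughly two orders of magnitude less bandwidth than one streaming raw video, which is exactly what makes city-scale deployments practical.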
The capabilities to deploy a system that allows for real time individual searching are available now. But the same system that can search for one missing child across a sea of cameras can also be used to identify everyday people who have a right to be anonymous. In the process of generating a stream of facial biometric data to search for a missing person, the system generates a log of every other person who passes by the cameras. It is this routinely collected information about regular citizens going about their daily lives that is of concern. Should the desire to locate missing persons override the fundamental rights to privacy guaranteed by our Constitution?
Considering the serious privacy implications, you would expect facial recognition proponents to be able to point to lots of truly beneficial applications. Yet beyond locating missing persons, facial recognition advocates struggle to come up with applications that have much real value for individuals.
Schools have a couple of interesting applications for facial recognition technology. First of all, attendance is already being taken by facial recognition in a few schools. Not only can teachers skip roll call, they can also ensure that the students in the seats are the registered students. This application seems to provide a decent benefit and can largely mitigate privacy concerns if properly administered. Competition for admission to prestigious colleges, for scholarships, for internships, and for jobs with top employers is at an all-time high, and many students are tempted to cheat by paying a proxy to take exams for them. As long as the facial recognition database is secure and protected, making it a closed system, the privacy concerns can be adequately addressed. Another application that schools are considering, and a few are testing, is scanning students’ faces to analyze their emotions in an attempt to determine whether they are paying attention. This application also seems to have several benefits for students and teachers. Feedback is important, and if this application can truly identify the emotional engagement of students in a way that is actionable for the teacher, then it should meet little objection. It does seem like a sad commentary on our education system, though, that teachers need to rely on a sophisticated video analysis tool to know whether their students are bored.
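The attendance use case is conceptually simple: match each face the camera detects against a closed roster of enrolled students. A hypothetical sketch, assuming an enrolled-student embedding database and a similarity function already exist (none of this reflects any specific vendor’s product):

```python
def take_attendance(detected_embeddings, enrolled, similarity, threshold=0.8):
    """Mark each enrolled student present if any detected face matches them."""
    present = set()
    for face in detected_embeddings:
        for student_id, reference in enrolled.items():
            if similarity(face, reference) >= threshold:
                present.add(student_id)
    absent = set(enrolled) - present
    return present, absent

# Toy demo: 1-D "embeddings" and a trivial similarity, for illustration only.
enrolled = {"alice": 0.25, "bob": 0.60}
detected = [0.26, 0.91]  # faces seen by the classroom camera
similarity = lambda a, b: 1.0 if abs(a - b) < 0.1 else 0.0

present, absent = take_attendance(detected, enrolled, similarity)
print(present, absent)  # {'alice'} {'bob'}
```

Because the roster is a closed set known in advance, this is the rare case where the database can plausibly be kept small, local, and purpose-limited.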
What does all of this mean for the average citizen of the United States?
We live in a free country, right? Certainly, our police and government agencies wouldn’t abuse the use of facial recognition technology like other countries do. Let’s put that theory to the test.
You might say: well, sure, China is using it that way, but this is America. We don’t do that here; we have a constitution. Sure, we have a constitution and laws, and so does China. However, people working in government are not much different from country to country. Those in executive branch roles point to the law as their justification. They will say they are just doing their jobs, fighting crime, fighting terror, and so on. But how different, really, are our law enforcement agencies from those of China?
The FBI has already built a database containing facial data for almost half of the U.S. adult population. I bet most Americans don’t know that. It certainly isn’t something the government goes out of its way to disclose. The FBI calls it the Next Generation Identification program. The following paragraph is the FBI’s entire privacy and information security policy regarding facial recognition technology; it is all the FBI saw fit to write regarding the proper use and safeguarding of personally identifiable information.
Appropriate Use of Next Generation Identification Facial Recognition Technology
Searches of the national repository of mug shots are subject to all rules regarding access to FBI CJIS systems information (28 U.S.C. § 534, the FBI security framework, and the CJIS security policy) and subject to dissemination rules for authorized criminal justice agencies. Queries submitted for search against the national repository must be from authorized criminal justice agencies for criminal justice purposes.
This policy sounds pretty good on the surface. It talks about dissemination rules for authorized criminal justice agencies, which implies that there is a set of rules and a process to determine which criminal justice agencies are authorized to access facial recognition data. In the last sentence it claims that queries must be from an authorized criminal justice agency for criminal justice purposes. This at least implies that the data won’t be used for non-criminal justice purposes such as spying on citizens who are peacefully assembling, a right guaranteed under the First Amendment.
But can we trust the FBI and other law enforcement agencies, and should we?
Recent examples of how the FBI obtained warrants for surveillance under false pretenses (lying to FISA court judges), shared illegally gathered information with other agencies, and disseminated false, misleading, and illegally obtained information to other government agencies and the media prove that the FBI does not follow its own rules, let alone the regulations and laws put in place to prevent abuses of power. We don’t have to theorize about how bad actors in the FBI might misuse the power of facial recognition technology. We can routinely read about other abuses of power by the FBI, the IRS, the NSA, the Department of Justice, and countless other agencies.
For example, the Special Counsel and former head of the FBI, Robert Mueller, directed the FBI mob squad in Boston during the Whitey Bulger era. Under his watch, and many say with his blessing if not participation, four men were framed for murders committed by mob members who were under Mueller’s protection for their cooperation. Two of the men who were illegally framed and sent to prison died there and the other two were eventually released and successfully sued the United States Government for $110 million. The main reason for this award was that Mr. Mueller withheld exculpatory evidence from the defense and hid it from the court. Mr. Mueller is hailed as above reproach and an example of integrity by those who support his current project as special counsel. These same people choose to ignore obvious evidence of bias and corruption with Mr. Mueller and his staff because all they care about is their desired end result of seeing him overthrow the president. If framing innocent people is how our best and brightest behave, how should we expect to be treated by “bad” actors?
Another example of questionable FBI behavior is its takeover of the child porn site known as “Playpen.” In what was known as “Operation Pacifier,” the FBI seized Playpen and continued to operate it, and court filings later revealed that the agency was authorized to run as many as 24 such sites. As Reason’s Jacob Sullum noted when the Playpen news came out, “the government’s position is that children are revictimized every time images of their sexual abuse are viewed or shared,” which “is one of the main rationales for punishing mere possession of child pornography,” not just those who make or distribute or profit from it. Under federal law, distributing a sexually explicit image of a child comes with a mandatory minimum prison sentence of five years and a possible 20 years. “If such actions merit criminal prosecution because they are inherently harmful,” Sullum points out, “there is no logical reason why the federal agents who ran The Playpen should escape the penalties they want to impose on the people who visited the site.” https://reason.com/blog/2016/11/11/fbi-authorized-to-run-24-child-porn-site
The FBI’s deployment of malware, which it calls a “Network Investigative Technique” (NIT), to identify people visiting child porn sites sounds like a noble justification. However, many sources have reported that the FBI has also secretly used its NIT on Freedom Hosting and other “dark web” sites that have no illegal content and are merely intended for journalists, civil rights activists, and others to maintain anonymity and share information related to government corruption and civil rights abuses.
If that’s not enough to make virtually any citizen suspicious of the FBI and similar agencies, consider the statements by former director of National Intelligence and current CNN contributor, James Clapper, who stated in a Senate Intelligence Committee hearing that he is confident that Russian hackers can plant child porn on U.S. computers to frame people. It’s interesting that Mr. Clapper would make such a statement in a Senate hearing. In the same hearing he offered numerous questionable and unsubstantiated opinions regarding his views that the Russians hacked the Democratic National Committee, the Hillary Clinton campaign, and others in order to help the Trump campaign. It is hard to believe much of what Mr. Clapper says not only because he is clearly a partisan political actor, but also because prior to Edward Snowden’s disclosure of the massive NSA dragnet, Mr. Clapper appeared before the Senate Intelligence committee and repeatedly lied when asked specific questions regarding the NSA’s spying on U.S. citizens. Mr. Snowden’s leaks clearly showed that Mr. Clapper had lied under oath on many occasions. However, even when someone lies to the degree Mr. Clapper does, there are often nuggets of truth that come out and are just incorrectly attributed. For example, the remote planting of child porn on computers is likely a capability Mr. Clapper knows about because agents under his direction either have done it, or they developed the capability and briefed him on it. And where might they obtain child porn files to plant? They could just pick up the phone and ask their friends at the FBI.
If law enforcement is actively abusing other technologies, doesn’t logic indicate they might already be violating our rights with facial recognition technology?
It is not hard to imagine that video surveillance could be used to establish that a person was at or near the scene of a crime. It is also not hard to imagine that additional video surveillance establishing that the same person could not have committed the particular crime could be deleted and kept out of evidence. The recent shooting in Las Vegas, Nevada is a good example of this. Despite having cameras covering every square inch of the property from multiple angles, for months after the shooting MGM Resorts, owner of the Mandalay Bay hotel where the shooter stayed, refused to release any footage from the days leading up to the shooting or from the timeframe during and immediately after it.
Almost six months later, MGM released less than five minutes of carefully edited video clips of the presumed shooter coming into the hotel with baggage and playing a slot machine. In the days and weeks after the shooting, MGM and the Las Vegas Police Department issued several different and conflicting versions of the timeline surrounding the shooting and the police response. These changing stories and the lack of transparency by the police and MGM fueled conspiracy theories that have not only persisted but strengthened in the months since the shooting. Why wouldn’t MGM just release all its video from that time, at least to police and independent investigators? Wouldn’t it help everyone to see Mr. Paddock coming into the hotel, checking in, walking through the casino, getting in the elevator, and every other minute he spent at the hotel? That footage, combined with the room lock data log, which MGM has also refused to release, might corroborate their timeline and the story that has been told to the public. It would also help answer questions such as whether he acted alone. Is it possible that the story was developed without reviewing the evidence, so now only the evidence that supports the story is released? Doesn’t the fact that almost 60 people were killed and hundreds more wounded demand transparency?
This case is just one of countless examples in which the police and media have been caught editing video footage to support a specific viewpoint. The problem is not the technology, it’s the people who use it and their agendas. None of these entities should be given the power to use facial recognition technology because they have demonstrated an unwillingness for transparency and honesty with the technology they have already been allowed to use.
Iowa is one of 18 states that use facial recognition technology in drivers’ licenses and has shared that data with the FBI’s Next Generation Identification system. The FBI continues to pursue memorandums of understanding with the remaining states to add their drivers’ license photos to its database. Even as it swiftly moves forward in its efforts to build out its facial recognition system, the FBI has failed to complete Privacy Impact Assessments as mandated by law. https://oversight.house.gov/hearing/law-enforcements-use-facial-recognition-technology/
As Representative Jason Chaffetz, former Chairman of the House Government Oversight Committee, said in a hearing in March 2017: “So, here’s the problem – you’re required by law to put out a privacy statement and you didn’t. And now we’re supposed to trust you with hundreds of millions of people’s faces in a system that you couldn’t protect even with the 702 issue?”
Key takeaways from House Government Oversight Committee hearings include the failure to publish a privacy impact assessment and the secret use of facial recognition technology for years by the FBI without any disclosure to government oversight or the public. The FBI has also gone to great lengths to exempt itself from the Privacy Act and other regulations designed to protect the average citizen.
Rep. Paul Mitchell (R-MI) sums up the essence of the privacy argument. “I think the issue goes beyond the first amendment concerns that were expressed. . .and is broader. I don’t want to just protect someone if they’re in a political protest from being identified, the reality is we should protect everybody unless there is a valid documented criminal justice action. Why should my photo. . .be subject because I get a driver’s license, to access?”
This is the exact thought I had when I first learned that some unelected bureaucrat in the Iowa Department of Transportation assumed he or she had the authority to spend tax money to purchase and install a facial recognition system in the driver’s license process. Learning that the Iowa DOT has provided my photo to the FBI for their system without my knowledge or permission, and without any legislative authority or oversight is upsetting to me. It should be upsetting to every American.
Rep. John Duncan (R-TN) concluded the hearings in May 2017 with the following statement. “I think we’re reaching a very sad point, a very dangerous point, when we’re doing away with the reasonable expectation of privacy about anything.”
Because of the FBI’s secrecy, the many ways in which the agency has used and is now using facial recognition technology to identify, categorize, and surveil ordinary citizens is largely unknown. There are examples, however, of how other police agencies around the country are using facial recognition.
Wikipedia contains the following story about how Baltimore, Maryland police used facial recognition to identify protestors in a crowd: "In recent years Maryland has used face recognition by comparing people's faces to their driver's license photos. The system drew controversy when it was used in Baltimore to arrest unruly protesters after the death of Freddie Gray in police custody. Many other states are using or developing a similar system; however, some states have laws prohibiting its use." https://en.wikipedia.org/wiki/Facial_recognition_system
The same Wikipedia page provides another example of police using facial recognition to scan a crowd of ordinary people going to a sporting event, similar to the Chinese beer festival and the London Notting Hill Carnival examples. "At Super Bowl XXXV in January 2001, police in Tampa Bay, Florida used Viisage face recognition software to search for potential criminals and terrorists in attendance at the event. 19 people with minor criminal records were potentially identified." (https://en.wikipedia.org/wiki/Facial_recognition_system)
And in New York City, the New York Police Department (NYPD) is refusing to provide information about its facial recognition program even when ordered by a court to do so. The court had observed that there are apparently no regulations on who can access and use image data. There were over 30,000 people with access to the facial recognition database and virtually no controls on how it could be used. There have been examples of corrupt policemen across the country accessing driver's license information to obtain the home addresses of women they found attractive and then stalking them. Having access to a facial recognition database makes it possible for a rogue actor to secretly photograph someone and simply look them up in the database.
The NYPD has been observed taking video of peaceful demonstrations and running the facial images against databases to identify individuals in the crowd. This is the exact doomsday scenario that civil libertarians hold up as the pinnacle of state surveillance abuse. It is also the same scenario that police and facial recognition advocates claim US police forces wouldn't even be remotely interested in. Imagine how history might have been altered had the authorities possessed such a system at the Haymarket Riot, or the British at the Boston Tea Party. http://www.nydailynews.com/new-york/nyc-crime/nypd-ripped-abusing-facial-recognition-tool-article-1.3847796
Governments are made up of people and although governments can look different from country to country, the people who make up those governments are more similar than different. They are motivated by a desire to perform their duties and to enforce the laws. Many of them have noble and altruistic goals such as keeping the population safe from terrorism or cleaning up pollution. Many of the systems and initiatives that reduce freedoms for all citizens are championed by those in the government as vitally important to solve one particular problem such as crime or terrorism. “Just give us this tool and we will protect you from harm,” is a common theme. Another common theme is “you can trust us.” Unfortunately, the more tools they have, the more they want, and they prove time and time again to be untrustworthy.
As this article has highlighted, the infrastructure for widespread surveillance using facial recognition is already prevalent in many cities around the world, including the United States and Western European democracies. In many ways, Western governments, including the United States, are already using facial recognition similar to how China is using it. Facial recognition technology is simply too powerful to be trusted in the hands of people who demonstrate time and time again that they are inherently untrustworthy and work for agencies that are designed to maintain power over the individual.
Facial recognition is unlike other surveillance technologies in that it allows investigators or simply snoopy people to review old video footage and potentially identify everyone in the video. This is like making a record of everyone’s movements, relationships, and habits subject to replay at any future date. Privacy is a fundamental right and the right to remain anonymous is paramount to the maintenance and protection of privacy.
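To illustrate how mechanically simple this replay capability is, here is a minimal sketch in Python. It uses small synthetic NumPy vectors in place of real face embeddings, and the names, vectors, and distance threshold are all hypothetical; real systems extract embeddings from photos (for example, driver's license images) and match frames from archived footage against them in exactly this nearest-neighbor fashion:

```python
import numpy as np

# Hypothetical enrollment database: name -> face embedding vector.
# Real systems derive these vectors from enrollment photos; here they
# are synthetic 4-dimensional stand-ins for illustration only.
database = {
    "alice": np.array([0.9, 0.1, 0.0, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.3, 0.0]),
}

def identify(embedding, db, threshold=0.5):
    """Return the enrolled name closest to `embedding`, or None if no
    enrolled vector is within `threshold` (Euclidean distance)."""
    best_name, best_dist = None, threshold
    for name, vec in db.items():
        dist = np.linalg.norm(embedding - vec)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# A frame pulled from "old footage" yields an embedding close to Alice's:
frame_embedding = np.array([0.88, 0.12, 0.05, 0.18])
print(identify(frame_embedding, database))  # prints "alice"

# A face that was never enrolled falls outside the threshold:
stranger = np.array([0.0, 0.0, 1.0, 1.0])
print(identify(stranger, database))  # prints "None"
```

The point of the sketch is that once embeddings exist in a database, identifying every face in years-old footage is a routine lookup, which is why the replay threat described above is qualitatively different from live observation.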
There are countless negative applications for this technology. For example, hidden cameras could be set up near certain retail stores for the purpose of identifying patrons and later publicizing this information. People could wake up to find their name and picture on a news site exposing customers of smoke shops, or adult entertainment venues, or a mental health clinic, just to name a few possible scenarios. People on medical leave from work could find themselves in legal jeopardy if their image was captured while going out for a bite to eat or to pick up something from the store.
It's not just the government that people need to worry about. Corporations are increasingly succumbing to pressure from activist organizations to support their causes and suppress free expression by employees. I see it every day at my place of employment. The CEO issued statements on the Dakota Access Pipeline and gun control basically stating that we are a business and don't engage in politics or take a stance on issues such as gun control or oil exploration. Interestingly, the same CEO recently joined with other CEOs in publishing a letter to the President and Congress demanding amnesty for DACA participants. He also recently issued a statement that the company has joined with other corporations to support the Civil Rights Act, a proposed federal statute that would essentially federalize gay marriage and overrule state laws on the matter. The company also heavily promotes employee participation and fundraising for community support campaigns that discreetly donate proceeds to organizations such as Planned Parenthood and other liberal groups. The official policy of the company is to remain politically neutral, but its leaders succumb to activist pressure to take political positions on a variety of controversial issues.
On the company’s internal social networking site there is a “flag for inappropriate comments” feature. On the DACA announcement every single opposing or even mildly questioning comment was flagged and removed. The Civil Rights Act announcement contained about 75 comments in favor and only one comment questioning the company’s hypocrisy on remaining politically neutral was allowed to stay.
Government regulators have actually issued directives that the company must establish a "reputation risk" function to evaluate deals from the standpoint of potential reputational damage in addition to normal financial and opportunity cost evaluations. From what I have seen, this is another channel for activists to exert political pressure on companies. Activists are organizing and pressuring financial institutions to drop business relationships with gun manufacturers, gun stores, the NRA, and others who support maintaining our Second Amendment rights. They are also working on ways to identify gun and ammunition purchases in individual debit and credit card transactions and demand that banks prohibit them. Some banks have already succumbed to this pressure. How hard is it to imagine these same activists filming a peaceful pro-Second Amendment rally, identifying participants as employees of certain corporations, and then demanding their firing by those corporations? It's not hard at all to imagine that scenario.
Virtually every company has video surveillance on their premises and it is not hard to imagine that facial recognition technology could be used to identify employees at various times and places throughout the work day. One of the Chinese applications for facial recognition is for clocking employees into and out of their shifts and operating access doors for employees. Companies could compare video footage or images of political demonstrations or other activities that the company views as contrary to its “values” against its facial recognition database to identify any employees in the images. In a recent story about Google punishing political opinions by employees, one of the employees interviewed for the article explained that when they met to discuss their concerns about the company the colleagues were careful to leave their phones at home so the company wouldn’t be able to analyze their location records and identify who was at the meeting. There is no doubt companies would use facial recognition technology to analyze, control, and if needed to punish employee behavior and associations. They use other tools and technologies to do it now.
Can anything be done to stop, or at least control the use of facial recognition technology?
People look to the government to protect their rights. When it comes to facial recognition technology there are very few laws that even attempt to protect the rights of the individual. State laws in Texas and Illinois provide guidelines and regulations for commercial use of facial biometrics. There is currently at least one class action lawsuit pertaining to alleged violations of the Illinois law by Facebook that has survived attempts by Facebook to have it dismissed and is still working through the legal process. The European Union's General Data Protection Regulation (GDPR), set to take effect in May 2018, contains many provisions regarding the use and protection of personal information, including facial biometrics.
Interestingly, Facebook, which had stopped using facial recognition features in Europe in 2012 due to regulatory concerns, is now planning to implement them with an opt-out to conform with GDPR. Facebook is now selling its facial recognition feature as a way to identify whether someone else has created a fake profile using your pictures. That sounds nice, but it is revealing that in order to opt out of other facial recognition uses, such as the controversial auto-tag feature, a user has to opt out of all features, including fake-profile discovery. https://www.cnbc.com/amp/2018/04/18/facebook-brings-back-facial-recognition-to-europe-after-closing-it-in-2012.html
Legal options to protect civil rights are something people should pursue at every level of government but they will only go so far because governments are the biggest users and abusers of facial recognition technology. As previous examples have shown, governments view facial biometrics usage as a powerful tool to control the population and fight “crime.” There is little hope for citizens in countries such as China and even the European Union to expect their governments to voluntarily regulate themselves and restrict nefarious uses of facial recognition technology. Unfortunately, as the examples have shown, governments in the United States are no different in their approach to facial biometrics.
Perhaps the best hope for the people to put restrictions on government use of facial recognition and perhaps even ban its use commercially is that the government will have to contend with many unintended consequences of its own if facial recognition technology is allowed to proliferate.
For example, it will be virtually impossible for undercover police officers to maintain their cover if their assumed identity is routinely confused with their real identity. Imagine the consequences of an undercover cop being outed by a “FacePay” application while meeting with his mob subjects at a diner. Similarly, crime syndicates could use facial recognition to discover previously used identities of associates and discover undercover agents. With something as simple as a Facebook tag feature an agent could have one of his or her second-grade classmates “tag” their face in a class photo on “Throwback Thursday” and inadvertently reveal the agent’s actual identity.
Gina Haspel, recently named as the next head of the CIA, has only a few "officially authorized" published photographs due to decades of clandestine service, but now all it takes with facial recognition is one or two good facial pictures and she can easily be identified from virtually any camera device. Keeping her identity completely hidden is less of an issue for Ms. Haspel now that she is in a very public role, but even with a public position and recognizable face there are countless situations in which revealing her identity could prove problematic. For example, if she were participating in secret negotiations with representatives of another country, a snippet of video from a parking lot camera or any number of other locations could confirm her presence at the secret gathering. Other countries or terrorist groups opposed to the meeting could use this information to garner publicity and support for their causes.
And even in the absence of national security or foreign policy implications, politicians and other government officials may find themselves in compromising situations thanks to facial recognition. An Iowa lawmaker was recently caught on camera kissing a lobbyist and resigned in disgrace. In this example, the lawmaker and lobbyist were recognized by another patron in the bar they were at and not by facial recognition. However, with facial recognition, any number of possibilities exist for people to be outed, shamed, and even extorted. The MGM Grand Casino was previously mentioned for its refusal to cooperate with private investigators and law enforcement in regards to video surveillance footage related to the Las Vegas shooting. Imagine if a hotel captured video footage of a prominent politician escorting someone back to his or her hotel room and identified them both through facial recognition. This information could prove very powerful for the hotel. It could be released just prior to an election to cause disarray in the campaign. It could also be retained as a way to influence future legislative decisions or obtain favorable treatment from the government. There are countless possibilities for improper use of this technology.
Even if laws are passed to restrict commercial collection, analysis, usage, and storage of facial images and regulate government usage, there will be too many exceptions to make any patchwork of laws and regulations sufficient to maintain anonymity and privacy for the average citizen without diligent and coordinated action. The genie is already out of the bottle, so to speak, with facial recognition technology. Ultimately it is up to the individual to decide what is tolerable concerning his or her privacy rights and make choices about how to respond. I recommend the following methods.
Become more politically involved and engaged in your local community. Most of the laws that affect our day-to-day lives are state and local laws. The FBI would have a harder time building a facial recognition database if citizens organized and elected lawmakers who passed laws prohibiting not only the sharing of driver's license photos with the FBI, but also the use of facial recognition technology by the state Department of Transportation in the first place. State laws should be passed specifying that driver's license photos may only be taken with a film-based camera using single-use film, such as a Polaroid. Then no picture database would exist for the FBI and others to mine at will. Police officers would still be able to verify individuals' identities through traditional means such as fingerprints if warranted. What won't be allowed are things like people being identified as they go about their normal routines and pass by the mesh of closed-circuit traffic cameras operated by cities and state transportation departments. Speaking of CCTV cameras, additional laws should be passed at all levels of government to remove the installed base of existing cameras and prohibit additional installations. To address abuses by commercial entities, laws should be written that prohibit commercial use of facial recognition technology and provide stiff monetary penalties for its use.
The political process is arduously slow and unpredictable. It is also complicated by lobbying and competing special interest groups, most of which are well funded. In the absence of specific laws, citizens whose privacy is compromised by unauthorized use of facial recognition must seek civil damages through the court system. Especially in cases where individuals are randomly identified such as in a crowd of demonstrators, lawsuits must be aggressively pursued to provide precedent for additional court and legislative oversight and regulation. Employers who discipline employees for political or otherwise legal, but possibly “inappropriate,” activities outside of work hours based on facial recognition usage should face lawsuits.
Lastly, as in previous civil rights movements, widespread civil disobedience should be encouraged. Governments and corporations have proven themselves incompetent in the areas of confidentiality and availability, so why should individuals help them out with data integrity? It is not only the right, but a fundamental duty of all citizens to provide contradictory and inaccurate data to government and corporate data collectors. In the case of facial recognition, there are a variety of methods that can be used to make it more difficult for camera systems to record a usable image with accurate facial recognition data points. Several studies have shown that makeup may be used to obscure facial features enough to confuse facial recognition algorithms. Other studies have used makeup with abnormal light-absorbing and reflective properties, such as blocking infrared reflection or only reflecting ultraviolet wavelengths. More research needs to be done in this area, but it is conceivable that men and women both could employ such makeup to produce a driver's license photo that would be unusable by other facial recognition schemes.
People should also not only remove or alter photographs stored on social media sites, but they should tag themselves to pictures of other people to confuse and degrade the integrity of facial recognition databases.
Other novel techniques for confusing facial recognition systems have been developed. A brand of sunglasses known as Reflectacles uses a highly reflective material on the frames to reflect both visible and infrared wavelengths so that it "ghosts" facial images observed by cameras. https://www.magneticmag.com/2016/12/be-seen-and-unseen-reflectacles-are-the-sunglasses-of-the-future/ The same technology could be applied to ordinary glasses. Oversized shades, hats, and hoods can be effective, especially when combined.
An artist and clothing designer in Europe designed a line of clothing with what appears to be digitized faces. These images serve to confuse camera software and cause it to zoom in on random images it perceives as faces and blur out surrounding images including actual faces. This is a novel idea, but seems to be easily defeated.
Wearing a mask is, of course, another alternative for maintaining anonymity. Theatrical masks made of acrylic are rather easy to put on and can fool cameras and even people. These masks conform to the face and even move when the wearer speaks. There have been a few cases in which these masks were used by bank robbers very effectively. A young African American man wore a mask that made him appear to be an elderly white man when he robbed a bank in North Carolina in December 2014. He was eventually caught because a surveillance camera captured the license plate on the getaway car he ran to, which was registered in his name. http://www.nydailynews.com/news/crime/bank-robber-caught-mask-old-white-cops-article-1.2145814 In another case, a young white man had successfully passed himself off as a young African American man during several bank robberies. He, too, was arrested because police noticed the interior of the car he was driving had been splattered with red dye from a dye pack inserted into a bag of money during one of the robberies. http://www.nydailynews.com/news/national/white-robber-nabbed-wearing-african-american-hollywood-mask-article-1.165988
The kind of masks that these robbers wore are very convincing and realistic. They also cost between $600 and $700. The point of highlighting these two robbery cases is not to advocate for bank robbery. It shows, however, that facial recognition can easily be defeated with the use of a good mask. The public should look upon mask ownership much as they look upon keeping a fire extinguisher around the house, and should get into the habit of wearing their mask in public, especially when visiting stores and public venues such as sporting events. Although these activities are perfectly normal and legal, the public should get into the habit of protecting their identities from marketers and from surveillance programs employed by police to identify people in a crowd and compare that list to their database of "suspicious" persons.
We the people should make it as hard as possible for marketers, police, employers, and plain nosy people to use facial recognition technology to identify us as we go about our daily routines. Society has never before experienced a technology this powerful, and we are right to be skeptical of those empowered to protect our privacy. The track record of governments and corporations is abysmal when it comes to privacy rights, so it is paramount that we fight against giving them something as powerful as the ability to record where each individual has been and who they've associated with, and to access that information without any due process at any time they wish.
I conclude with a quote from Martha Spurrier, the director of Liberty: "There is no basis in law for facial recognition, no transparency around its use and we've had no public or parliamentary debate about whether this technology could ever be lawful in a democracy."