Open Letter to Ban the Use of Digital Facial Recognition Technologies in Public Security

The organizations and individuals signing this letter demand a total ban on the use of digital facial recognition technologies in public security1 in Brazil, for the following reasons.

Firstly, these tools can identify, track, single out, and trace people wherever they go, and may violate rights such as privacy, data protection, freedom of assembly and association, equality, and non-discrimination. In addition, they can make people feel inhibited, undermining the exercise of their freedom of speech.

These technologies have already led to serious human rights abuses and violations around the world. One example, portrayed in the documentary Coded Bias, is the use of facial recognition by UK police: as of 2018, 98% of the faces the system flagged as matching wanted persons were incorrect matches. In light of such concerns, cities like San Francisco and Oakland in the United States have banned the use of facial recognition in public places.

In Brazil, the country with the third largest incarcerated population in the world, the use of facial recognition technologies in public security would worsen the racist practices that permeate the criminal justice system. Yet, despite the seriousness of these impacts, these technologies are already present in the vast majority of Brazilian states. In the state of Bahia, facial recognition cameras have been installed since 2018 with the official purpose of fighting crime, but without proof that this goal has actually been achieved.

No technical or legal safeguard can fully eliminate the threat these technologies pose. Even companies like Amazon, IBM, Meta, and Microsoft have rethought the use of these tools in some contexts. We believe, therefore, that they should never be used in public security activities – whether by the government or by the private sector through the delegation of public services. The potential for abuse is too great, and the potential consequences too serious.

Surveillance technologies bring us insecurity by violating our rights, without giving us any chance to avoid – or even consent to – their implementation and to becoming their targets. Notable among these violations are those of our bodily integrity, through the collection and processing of biometric personal data; of our freedom of movement and self-determination, as we may be kept under surveillance 24/7, creating a frightening context; and of our right to due process, as mass surveillance treats everyone as guilty from the outset, undermining the constitutional guarantee of the presumption of innocence.

Regardless of the safeguards and corrections that could be proposed to create a supposedly “bug-free” technology, this constant, massive, indiscriminate surveillance is – in itself – a violation of people’s rights and freedoms. Because we are discussing mechanisms applied in a manner incompatible with human rights, we call for a ban, not just a moratorium, on facial recognition in the context of public security.

Many of Brazil’s federative units employ facial recognition in public policies wholly or partially focused on security, and the media uncritically report the alleged effectiveness of this type of project for public safety. For these reasons, this campaign focuses on banning facial recognition in the scope of public security. Despite this focus, we are aware of the serious problems arising from other forms of techno-surveillance that rely on biometric data (including voice, gait, temperature, heart rate, DNA, etc.), which are the subject of many other important campaigns and communications from civil society and experts around the world.

Still within the scope of public security, it is important to point out that, although one may have the impression that biometric data is collected and processed only by public authorities, many of these activities are carried out through contracts with the private sector. These partnerships are intended to provide services involving infrastructure and technological tools, and may include facial recognition.

These public-private agreements often lack transparency and do not give the general public details about how the data is processed – a scenario that can result, among other things, in secondary uses of this data for purposes serving exclusively private-sector interests. In other words, the misuse of personal data in this type of project is not uncommon.

In most cases, the criticism and protests opposing this scenario are aimed at the public authorities responsible for its implementation. But public-private partnerships require special attention: access to information about such initiatives cannot be denied on the grounds of commercial or business secrecy, to name but one example. Companies also have an obligation to respect human rights, regardless of the nature of their involvement in such projects – which can range from supplying technology to providing personal data to the government.

An internationally known case exemplifying the use of data collected by private enterprise for competitive advantage (without the population’s knowledge) is that of the company Axon (formerly Taser). It used data generated by body cameras it had previously donated to US police departments to develop its in-house Artificial Intelligence. Imagine the collective impact if a situation like this occurred in Brazil, reaching biometric data collected through facial recognition. In this context, it is relevant to point out that an Al Sur survey shows that a sizable portion of the technologies used in Brazil come from corporate donations2.

As mentioned above, even if Brazil had a law in place regulating the processing of personal data in public security, the dangers posed by facial recognition would still not be eliminated. In a context where factors such as racism, classism, misogyny, and LGBTQIA+phobia shape the way people, in their diversity, have their bodies perceived, interpreted, approached, and even discriminated against and repressed, mechanisms that rely on face analysis raise specific concerns.

A major problem with facial recognition technologies is that they rely on classifying bodies. This can occur along lines such as sex and gender, for example, imposing a binary, stereotype-based vision that does not recognize the diversity of bodies, identities, and expressions – a situation that is even more worrying in Brazil, the country with the highest rate of murders of trans people.

Furthermore, there are records of similar mechanisms being used to recognize people’s emotions – a practice that, besides running the risk of producing racist assumptions, lacks solid scientific evidence that it works. In this context, it is worth highlighting that ViaQuatro, a São Paulo subway operator, has been condemned in court for applying this type of mechanism to its users.

Your face may, without your realizing it, already have been subjected to facial recognition for public safety purposes. In Brazil, monitoring cameras equipped with this kind of technology are already present in the streets of several cities – including on public clocks – and in public transportation. People who pass through these spaces often do not notice that they are being subjected to some form of identification. Facial recognition has also reportedly been used in applications on police officers’ cell phones during raids. The possibility of moving freely and exercising various rights in these spaces will remain under serious threat as long as this type of project exists.

Therefore, the present campaign calls for the banning of facial recognition in public security because it understands that the problems caused by facial recognition have no solution – that is, they are inseparable from the very use of these mechanisms. Algorithms do not work in a neutral way and can reproduce the discriminations present in the environment and among the people who formulated them; moreover, their operating logic is not easily explainable to the public. The enormous amount of data needed to run these technologies could also be leaked, leaving the entire population vulnerable.

Even if these mechanisms were improved – a need frequently invoked in the narratives advocating their implementation – this would not eliminate their negative impacts. Trying to reduce these errors by programming the technology to reflect the diversity of the target populations would make those groups more easily mapped, identified, surveilled, and tracked, meaning this use would still be disproportionate. This is also because there is a constant risk that this kind of technology will be used by governments to persecute certain groups and people.

Even if the “benefits” of these technologies for the community were proven, it would still be disproportionate to restrict the right to privacy of everyone who passes through a given space under the justification of specific positive impacts – which are themselves questionable, given the complexity of the factors required to promote non-discriminatory public safety measures that effectively take the community into consideration. In this context, it is also important to ask whether the “benefits” invoked in these debates reproduce and/or camouflage institutional violence against groups that are already historically and socially persecuted and oppressed, such as Black and Indigenous populations, immigrants, transgender people, and travestis.

Beyond these direct impacts on human rights, the public money this kind of project demands is not worth spending: under the pretext of improving public security, millions of Brazilian reais have already been (and may still be) spent in pursuit of objectives that would not even be positive for the population as a whole.

Additionally, the massive use of these tools is inconsistent with international human rights instruments to which Brazil is committed – including the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). Article 17 of the ICCPR, for example, protects people from arbitrary or unlawful interference in their private lives. Regarding the right to privacy in the digital age, the UN Human Rights Council recently pointed out in a resolution3 that the increasing use of technologies such as facial recognition, without proper safeguards, impacts the right to privacy and other human rights, including freedom of opinion, expression, and peaceful assembly. The Council also expressed concern about the reproduction and exacerbation of racial inequalities through the use of facial recognition and called on states to ensure that biometric and recognition technologies, including facial recognition, do not lead to arbitrary or unlawful surveillance.

The threat to the exercise of the right to protest also stands out. Public demonstrations in the streets are expressions of groups from across Brazil’s political spectrum. The possibility of being the target of permanent surveillance can lead people to change their behavior in a self-censoring way, inhibiting legitimate mobilizations. Self-censorship hits hardest the groups most vulnerable to state repression and violence. In extreme cases, the use of these technologies can lead to the criminalization of the right to protest.

Despite claims of a supposed improvement in public safety through facial recognition technologies, this kind of project reproduces the culture of punitivism and incarceration instead of focusing on prevention and restorative measures. There is evidence of how these technologies are used abusively and/or implemented with little or no transparency – a framework that does not even allow the population to question how they work.

This use of surveillance technologies is so dangerous that it must be rejected in any context that calls itself democratic. We must ban the use of surveillance technologies that promote the violation of rights!

For these reasons, within the scope of public security, this campaign calls for:

1. Prohibition of the use of facial recognition technologies, with rules adopted to ban them in every sphere of the Federation, including the procurement of private solutions by the public administration.

2. Discontinuation of any projects that use facial recognition, even in a secondary manner, for public safety purposes. In cases where the technology has already been used on the population, the responsible governments should formulate public policies and action plans so that people whose human rights have been violated by these mechanisms can seek appropriate redress.

3. Publication of impact reports on the use of these technologies, from their initial conception until their discontinuation, including data on investment; the number and characteristics of stops and arrests made; false positive and false negative rates; documentation of the procedures implemented; and the risks to data subjects and the measures taken to minimize them, among other information relevant to measuring the impact of their use. The National Data Protection Authority (ANPD), in compliance with its legally established institutional attributions, must require the completion and publication of these reports whenever necessary.

4. Refusal by the private sector to encourage the implementation of this type of project by public authorities. Cooperation agencies and banks should not provide resources to the public administration for the development and implementation of this type of project. Companies and startups that develop facial recognition mechanisms should not provide this kind of technology for policies involving public safety, whether as a primary or subsidiary objective.

5. Mobilization of institutions that defend constitutional rights – such as the Public Defender’s Office, the Public Prosecutor’s Office, and the National Data Protection Authority – in favor of banning the use of facial recognition in public security, which may involve anything from conducting administrative proceedings to taking legal action against governments.

1 Since these considerations refer to public security, it is worth noting that, according to the Brazilian Federal Constitution (1988), public security aims to preserve “public order” and the “safety of people and property,” and is exercised by the police agencies operating in the country – which include, in addition to the civil and military police, the federal police. It is also relevant to highlight that the draft bill intended to consolidate Brazilian data protection legislation for purposes not directly covered by the General Personal Data Protection Law (2018), the latter already in force, presents a preliminary theoretical understanding that distinguishes public security activities from criminal prosecution ones.
2  https://www.alsur.lat/reporte/reconocimiento-facial-en-america-latina-tendencias-en-implementacion-una-tecnologia
3 Resolution adopted by the UN Human Rights Council on October 7, 2021. Available at: https://digitallibrary.un.org/record/3945627?ln=en

Organizations that sign this letter:

Individual signatures:

  • Aderica Campos
  • Aina Selles
  • Alan Fernandes Xavier
  • Alice Andrade Rodrigues
  • Ana Carolina Sousa Dias
  • Ana Clara Souza
  • Ana Gabriela Souza Ferreira
  • Ana Lucia Pompermaye
  • Ana Luisa Figueiredo de Melo
  • Anamaria D Andrea Corbo
  • Anderson Santos Mansano
  • André Lucas Fernandes
  • André Ramiro
  • Anna Bentes
  • Bárbara Castilho Maximo
  • Bárbara Lorena e Silva Alves
  • Bernardo Gomes Alevato
  • Branda Camargo Rochwerger
  • Brenda Cunha
  • Brisca Bracchi
  • Bruna Santos, visiting researcher at the Berlin Social Science Center – WZB
  • Carla Azevedo de Aragao
  • Carla Vieira – Software engineer and researcher
  • Carolina Batista Israel – Postdoctoral researcher at the Universidade de São Paulo
  • Carolina Soares – Project developer, Dados Livres
  • Clara Ferraz
  • Clara Marinho
  • Claudiana Lelis
  • Cleiton Nonato
  • Cynthia Picolo
  • Davey
  • Davi Neuskens
  • Débora Pio
  • Diego Cerqueira
  • Diogo Dal Magro
  • Eduarda Costa
  • Eduardo Gomes Mendonça
  • Eliana Grecco
  • Eliel Pinheiro
  • Ênio Lourenço Leite da Silva
  • Érico França Bonfim
  • Fabianne batista Balvedi
  • Gabriela Machado Vergili
  • Geisa S. Silva – researcher and hacktivist
  • Gu da Cei – Visual artist and cultural producer
  • Gustavo Furtado
  • Gustavo Luz
  • Helena Martins
  • Hélio Aparecido
  • Horrara Moreira da Silva
  • Ines Aisengart Menezes
  • Ingrid Lima dos Santos
  • Isabela Maria Rosal Santos
  • Isabelle Cristine Oliveira Ribeiro
  • Izabela Domingues da Silva
  • Izabella Bittencourt
  • Jamila Venturini
  • Janaina Pereira
  • Janaina Spode
  • Jaqueline Trevisan Pigatto
  • Jess Reia – University of Virginia
  • Jessica Carmo
  • João Guilherme Bastos dos Santos
  • João Luiz Pena
  • João Pedro Vicente Trujillo
  • Johanna Monagreda
  • José Antonio
  • José Germano Neto
  • José Rolfran de Souza Tavares
  • José Vitor Pereira Neto
  • Julia D’Agostini
  • Kaio Duarte Costa – Hacktivist
  • Karina Cia Bartels Cabral
  • Karina Moreira Menezes
  • Katemari Rosa
  • Katiana Ventura da Silva
  • Kecia Miranda
  • Kelly
  • Kemel Zaidan
  • Kennedy Antônio Vasconcelos Ferreira Júnior
  • Laura Calabi
  • Laura Gabrieli Pereira da Silva
  • Lauren – Coletivo Minha Nossa, fighting the many forms of violence against women
  • Leandro Araujo
  • Leonardo Perseu
  • Leticia Venturoti do Nascimento
  • Lia Pereira de Araújo e Silva
  • Luã Cruz
  • Luciana Moherdaui
  • Luís Filipe Silvério Lima
  • Luiz Augusto Galicioli
  • Luiza Xavier Morales
  • Marcella de Melo Silva
  • Marcelo A Xaud
  • Marcelo Fornazin
  • Marcos Urupá
  • Marcos Woelz
  • Maria Aparecida da Vitória Neves
  • Maria Eunice Vicentin
  • Maria Luiza Duarte Sá
  • Maria Luiza Freire Merces
  • Mariana Canto Sobral
  • Mariana Monteiro
  • Marielle de Souza Mendonça
  • Marina Meira
  • Matheus Freitas
  • Maurício Magalhães
  • Mauro Beal
  • Mônica Mourão
  • Natalia Conceição Viana
  • Natane Santos
  • Neto Muniz
  • Nina Da Hora – Computer scientist
  • Olga Lopes
  • Otávio Santos Gomes
  • Patrícia Cunegundes Guimarães
  • Patricia Guernelli Palazzo Tsai
  • Paula Cardoso
  • Paulo Faltay
  • Paulo Rená da Silva Santarém
  • Pedro Amaral
  • Pedro Diogo Carvalho Monteiro
  • Pedro Henrique Martins dos Santos
  • Pedro Martins
  • Pedro Paulo da Silva Neri
  • R. Ramires – Popular educator in technology at InfoCria, Instituto Bola Pra Frente, and Redes da Maré
  • Rafael Alves dos Santos
  • Rafaela Batista
  • Ramênia Vieira
  • Raphael Rosa
  • Raquel Lima Saraiva
  • Raquel Rachid
  • Renata Arruda
  • Renata Lima Ribeiro de Sena
  • Renny Barcelos Ferreira
  • Rhaiana Caminha Valois
  • Ricardo Yuji Mise
  • Roberta Sernagiotto Soares
  • Rodrigo Murtinho – Public health researcher
  • Rodrigo P. R. Lopes
  • Rogério Marques
  • Ronald Bittencourt
  • Ronaldo Alves
  • Rosa Maria Tubaki
  • Rosana Pinheiro Nascimento
  • Saulo Macedo
  • Sheley Gomes
  • Taís Oliveira – Instituto Sumaúma
  • Tarcizio Silva – Tech + Society Fellow at Mozilla
  • Tatiana Coelho
  • Thaís Cruz
  • Thallita Gabriele Lopes Lima
  • Thayane Guimarães Tavares
  • Trajano Pontes Neto
  • Valdinei Freire da Silva
  • Vanessa Gomes – Back-end developer and digital security researcher
  • Veridiana Alimonti
  • Vico Meirelles de Souza
  • Waldo Almeida Ramalho
  • Weverton dos Santos Ferreira
  • Wilson Borges
  • Winston Oyadomari
  • Yure Sousa Lobo
  • Zeilane Conceição

Join this letter