The inexorable extension of surveillance technologies, which are so widely used to counter terrorism and serious crime, will overwhelm any hopes we may have of remaining anonymous.
Our future is already being built in China, where facial recognition technology is ubiquitous and incontestable.
But even if resistance ultimately proves futile, it is well worth fighting against the indiscriminate deployment of automatic facial recognition (AFR) systems.
We should all be concerned about how such technology is used to police society, and we still have the power to shape the legal framework in which it operates.
Commendably, some campaigners and politicians are agitating to do just that.
This month, San Francisco, home to many of the world’s leading technorati, became the first city in the US to ban the official use of facial recognition technology (although port and airport security are exempt because they are operated by federal agencies).
“This is really about saying: ‘We can have security without being a security state. We can have good policing without being a police state,’” said Aaron Peskin, the city supervisor. “Part of that is building trust with the community.”
In the UK, an office worker called Ed Bridges last week launched a legal challenge against the South Wales police, who captured his image while he was out doing some Christmas shopping in Cardiff city centre in 2017.
Mr Bridges, who argued this was a fundamental invasion of his privacy, is now awaiting the outcome.
Arguments against facial recognition
There are two main arguments against the use of AFR: one of principle, the other of practicality.
First, civil rights campaigners say that facial recognition technology has been introduced by stealth without adequate public debate.
There are now some 50 agencies in the US that use facial recognition software, often in wholly unregulated ways, with 117 million Americans included in various databases.
In Britain, Liberty, the civil rights group that is supporting Mr Bridges’ challenge, says that using AFR technology is akin to seizing fingerprints or DNA samples without the citizen’s consent.
The police counter that AFR technology is a cheap and effective weapon in the fight against crime and is only used under the constraints of the Data Protection Act.
But Tony Porter, the surveillance camera commissioner, has himself called for a clearer regulatory basis for the use of AFR. “It is complex and confusing and difficult for the public to understand these issues,” he told the BBC.
The second objection to AFR is that it does not even work particularly well and misidentifies a lot of people.
It is especially error-prone in identifying ethnic minorities. Big Brother Watch, a civil liberties group, recently published numbers from South Wales police showing that their systems triggered 2,451 false alarms among the 2,685 “matches” they had made between May 2017 and March 2018.
Academic researchers have also shown that certain cosmetics and specially designed “adversarial” glasses can reliably fool the systems.
Even some of the companies that have been developing facial recognition technology, such as Microsoft, have warned of the dangers of misuse and have called for stricter global regulation.
As in so many areas, it is dangerous to pursue purely technological solutions to human challenges.
By itself, technology cannot solve anything. Everything depends on its appropriate and acceptable use and that requires implicit trust between the authorities and the people.
If the very use of technology erodes that trust it will only worsen the problems it is designed to solve.