Recognising humanity in artificial intelligence
Can machines think? Professor Keiichi Nakata offers some thoughts from a human.
“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” (Alan Turing, 1950)
Over half a century ago, Alan Turing wrote this in his seminal paper, published in the journal Mind, in which he discusses both technical and philosophical aspects of the question “Can machines think?” Turing was probably not the first person to imagine that one day machines would be able to embody and exhibit characteristics, such as intelligence, that had been almost exclusively attributed to humans. Indeed, the decades that followed saw developments in algorithms and approaches aimed at creating “artificial” intelligence (AI), often accompanied by futuristic scenarios in which AI behaves just like humans, sometimes blurring the line between reality and fiction. But the realm of fiction has been shrinking steadily: asking a machine to play music to your taste, or simply showing your face to gain access to your computer, is now a reality, and one we tend to take for granted. In that sense, Turing’s expectation of the future was right.
We may not normally think of the ability to recognise people’s faces as “intelligent”, but visual perception and recognition are among the building blocks of intelligence. The origin of facial recognition technology (FRT) is widely attributed to the work of Woody Bledsoe, Helen Chan Wolf and Charles Bisson in the mid-1960s, and approaches to FRT have developed alongside so-called AI technologies involving the detection and extraction of salient features, image processing, pattern matching and machine learning. The latest tests conducted by the National Institute of Standards and Technology (NIST) report a 0.06% error rate for the best-performing FRT system in recognising visa photos under ideal conditions, making the technology applicable in real-world situations. This high accuracy enables applications such as ID verification on personal devices, access control, and security and surveillance, for example detecting known criminals in a crowd and supporting law enforcement. These uses of FRT can improve usability and accessibility, and potentially enhance safety.
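To make that pipeline a little more concrete, the sketch below (in Python) illustrates the matching step that most modern FRT systems share: each face image is reduced by a learned model to a numerical feature vector (an “embedding”), and two faces are declared a match when their embeddings are sufficiently similar. This is an illustrative sketch, not the method of any particular system mentioned above: the random vectors stand in for embeddings that a real system would compute from detected, aligned face crops, and the 0.6 threshold is an assumed value chosen purely for demonstration.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_match(probe_embedding: np.ndarray,
             reference_embedding: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Declare a match when two embeddings are similar enough.

    The threshold trades false accepts against false rejects; 0.6 is an
    arbitrary illustrative value, not a recommended operating point.
    """
    return cosine_similarity(probe_embedding, reference_embedding) >= threshold


if __name__ == "__main__":
    # In a real FRT system these vectors would come from a trained
    # face-embedding model applied to detected, aligned face crops.
    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=128)                        # stored at enrolment
    probe = enrolled + rng.normal(scale=0.1, size=128)     # new capture, same person
    stranger = rng.normal(size=128)                        # a different person

    print(is_match(probe, enrolled))     # True: embeddings nearly parallel
    print(is_match(stranger, enrolled))  # False: embeddings nearly orthogonal
```

Where the threshold is set determines how often the system wrongly accepts a stranger or wrongly rejects the right person, which is exactly the trade-off behind the accuracy and bias concerns discussed next.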
However, almost every technology has undesirable effects or uses that can cause harm, and FRT is no exception. Poor accuracy in real-world settings could result in mistaken identity, impeding access for legitimate users or granting it to the wrong ones. Accuracy can also vary because of bias in the training data, introducing unfairness. There are concerns about facial data being stored in ways that violate data protection rules. And our society may not be ready for machines that outperform humans at recognising individuals in a crowd and at “remembering” an unlimited number of faces, giving rise to a sense of lost privacy, freedom and autonomy. The list continues.
These are difficult issues. Owing to ethical concerns, some governments have banned the use of FRT in public spaces, and some large tech firms have voluntarily suspended the provision of FRT and related image recognition applications. Some may view such self-regulation as self-evident, but others are concerned about its impact on innovation. In July 2022, the UK Government proposed a pro-innovation approach to AI regulation and is soliciting views on how best to regulate AI while promoting innovation.
Just as Turing envisaged a change in our attitude towards “thinking machines” over 70 years ago, our expectations and perceptions of FRT and other AI technologies are likely to change over time. In that process, we need to engage in open debates that scrutinise the use and implications of these advanced technologies, so as to maximise their potential while preserving our societal values. The debate must continue.