
Thursday 18 May 2023

Artificial Intelligence - How Appearances can be Deceptive if we Think Like Creationists

Artificial Intelligence

How Appearances can be Deceptive if we Think Like Creationists
According to a McKinsey report, depending on the adoption scenario, automation will displace between 400 and 800 million jobs by 2030, requiring up to 375 million people to change job categories entirely.


Readers may have noticed how I've been using the ChatGPT AI engine recently. I find it incredibly useful for quickly generating information about a topic. The one drawback seems to be that its references don't always check out; sometimes it's impossible to find the paper or book it cites at all. This seems to be a major deficit in its training.

I found it invaluable in developing the coding needed to run the slideshows I'm now frequently including in these posts, although I often needed to remind it of the objective because its solutions were overly complex and sometimes didn't work as I wanted. It also tends to misunderstand the specification and solve a problem that doesn't exist.

But, as the following article by Neil Saunders, Senior Lecturer in Mathematics at the University of Greenwich, points out, there is a natural tendency to treat it like a real human being. I have to admit, I often say please and thank you, and sometimes hesitate before asking it yet again for help, half expecting it to become impatient. This is, of course, a mistake as an algorithm doesn't have emotions and isn't even conscious of its own existence, let alone being 'human' in its interactions with us. It simply responds to our input by pattern-matching and generating output in intelligible English.
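For anyone curious what 'pattern-matching' amounts to in practice, here is a deliberately crude illustration of my own (not how ChatGPT is actually built; real large language models use neural networks trained on vast amounts of text): a few lines of Python that predict each next word purely from how often word pairs occur in a scrap of training text.

# A toy 'pattern-matching' text generator: it predicts the next word purely
# from how often each word pair occurred in its training text. Real large
# language models use neural networks over vast corpora and far longer
# contexts; this only illustrates the principle of fluent-looking output
# produced with no understanding of meaning.
import random
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog around the mat"
)

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start_word, length=10):
    """Generate text by repeatedly choosing a statistically likely next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        # Pick the next word in proportion to its observed frequency.
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("the"))   # e.g. "the cat sat on the rug the dog sat on the mat"

It produces grammatical-looking strings without 'knowing' what a cat or a mat is. Scale the same principle up by many orders of magnitude and you get something that can hold a fluent, polite conversation with no comprehension behind it.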

And this is what makes it seem so human; it talks to us like a very polite and infinitely patient human being would. This tendency to assume agency is deeply embedded in our evolved psychology in the form of teleological thinking, and it explains why credulous creationists assume that whatever they don't understand, or whatever isn't immediately obvious, must be the work of an agency of some sort. I have even had creationists argue that atoms can't combine with other atoms, or that photons don't know where to go, unless directed to do so by a sentient being - which of course assumes that atoms and elementary particles are sentient and can obey instructions too. Teleological thinking is the basic notion behind 'intelligent design' and arguments from design, such as the refuted Paley's watchmaker analogy.

It is a simple step then to assume that the 'designer' or directing agent must be the locally popular god, despite the fact that there is no definitive evidence that any god(s) exist, or any mechanism that could explain how one might have originated. The argument is childishly circular: there must be a designer because things look designed; the designer must be God because only God can design things; the 'fact' of design proves the existence of a designer - and, of course, mummy and daddy believe in the only true god.

But there are dangers in using AI combined with teleological thinking and ignorant credulity, because people who think teleologically can be manipulated, and those who believe absurdities can be persuaded to commit atrocities.

Here then is what Neil Saunders has to say on the subject. His article is reprinted from The Conversation under a Creative Commons licence, reformatted for stylistic consistency:



Evolution is making us treat AI like a human, and we need to kick the habit

Neil Saunders, University of Greenwich

The artificial intelligence (AI) pioneer Geoffrey Hinton recently resigned from Google, warning of the dangers of the technology “becoming more intelligent than us”. His fear is that AI will one day succeed in “manipulating people to do what it wants”.

There are reasons we should be concerned about AI. But we frequently treat or talk about AIs as if they are human. Stopping this, and realising what they actually are, could help us maintain a fruitful relationship with the technology.

In a recent essay, the US psychologist Gary Marcus advised us to stop treating AI models like people. By AI models, he means large language models (LLMs) like ChatGPT and Bard, which are now being used by millions of people on a daily basis.

He cites egregious examples of people “over-attributing” human-like cognitive capabilities to AI that have had a range of consequences. The most amusing was the US senator who claimed that ChatGPT “taught itself chemistry”. The most harrowing was the report of a young Belgian man who was said to have taken his own life after prolonged conversations with an AI chatbot.

Marcus is correct to say we should stop treating AI like people - conscious moral agents with interests, hopes and desires. However, many will find this difficult to near-impossible. This is because LLMs are designed – by people – to interact with us as though they are human, and we’re designed – by biological evolution – to interact with them likewise.

Good mimics

The reason LLMs can mimic human conversation so convincingly stems from a profound insight by computing pioneer Alan Turing, who realised that it is not necessary for a computer to understand an algorithm in order to run it. This means that while ChatGPT can produce paragraphs filled with emotive language, it doesn’t understand any word in any sentence it generates.

The LLM designers successfully turned the problem of semantics – the arrangement of words to create meaning – into statistics, matching words based on their frequency of prior use. Turing’s insight echoes Darwin’s theory of evolution, which explains how species adapt to their surroundings, becoming ever-more complex, without needing to understand a thing about their environment or themselves.

The cognitive scientist and philosopher Daniel Dennett coined the phrase “competence without comprehension”, which perfectly captures the insights of Darwin and Turing.

Another important contribution of Dennett’s is his “intentional stance”. This essentially states that in order to fully explain the behaviour of an object (human or non-human), we must treat it like a rational agent. This most often manifests in our tendency to anthropomorphise non-human species and other non-living entities.

But it is useful. For example, if we want to beat a computer at chess, the best strategy is to treat it as a rational agent that “wants” to beat us. We can explain that the reason why the computer castled, for instance, was because “it wanted to protect its king from our attack”, without any contradiction in terms.

We may speak of a tree in a forest as “wanting to grow” towards the light. But neither the tree nor the chess computer represents those “wants” or reasons to themselves; it is only that the best way to explain their behaviour is by treating them as though they did.

Intentions and agency

Our evolutionary history has furnished us with mechanisms that predispose us to find intentions and agency everywhere. In prehistory, these mechanisms helped our ancestors avoid predators and develop altruism towards their nearest kin. These mechanisms are the same ones that cause us to see faces in clouds and anthropomorphise inanimate objects. No harm comes to us when we mistake a tree for a bear, but plenty does the other way around.

Evolutionary psychology shows us how we are always trying to interpret any object that might be human as a human. We unconsciously adopt the intentional stance and attribute all our cognitive capacities and emotions to this object.

With the potential disruption that LLMs can cause, we must realise they are simply probabilistic machines with no intentions, or concerns for humans. We must be extra-vigilant around our use of language when describing the human-like feats of LLMs and AI more generally. Here are two examples.

The first was a recent study that found ChatGPT is more empathetic and gave “higher quality” responses to questions from patients compared with those of doctors. Using emotive words like “empathy” for an AI predisposes us to grant it the capabilities of thinking, reflecting and of genuine concern for others – which it doesn’t have.

The second was when GPT-4 (the latest version of ChatGPT technology) was launched last month and greater skills in creativity and reasoning were ascribed to it. However, we are simply seeing a scaling up of “competence”, but still no “comprehension” (in the sense of Dennett) and definitely no intentions – just pattern matching.

Safe and secure

In his recent comments, Hinton raised a near-term threat of “bad actors” using AI for subversion. We could easily envisage an unscrupulous regime or multinational deploying an AI, trained on fake news and falsehoods, to flood public discourse with misinformation and deep fakes. Fraudsters could also use an AI to prey on vulnerable people in financial scams.

Last month, Gary Marcus and others, including Elon Musk, signed an open letter calling for an immediate pause on the further development of LLMs. Marcus has also called for an international agency to promote safe, secure and peaceful AI technologies, dubbing it a “Cern for AI”.

Furthermore, many have suggested that anything generated by an AI should carry a watermark so that there can be no doubt about whether we are interacting with a human or a chatbot.

Regulation in AI trails innovation, as it so often does in other fields of life. There are more problems than solutions, and the gap is likely to widen before it narrows. But in the meantime, repeating Dennett’s phrase “competence without comprehension” might be the best antidote to our innate compulsion to treat AI like humans. The Conversation
Neil Saunders, Senior Lecturer in Mathematics, University of Greenwich

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Published by The Conversation.
Open access. (CC BY 4.0)
The same deeply embedded psychology that can induce people to believe in gods can also induce people to believe that the AI chat algorithm they are talking to is an actual human being with human motives and emotions. All it needs then is for people to conclude that it is omni-benevolent and omniscient, and unscrupulous frauds will have the same device under their control as priests and preachers currently have.
