AI Proponents and Critics

This topic is for general discussion of AI rather than particular AI technologies, solutions or projects, though they will of course be referred to.

So please keep all posts to qualitative discussion of AI, such as appropriateness, risks, etc., rather than reports of performance or capabilities.

I’m kicking it off with a very interesting and insightful critique built around the life and ideas of one of AI’s best-known innovators, whose reservations became a powerful argument about when and where computers should and should not be used, particularly with respect to AI.

Joseph Weizenbaum is famous for having created Eliza, the first chatbot. This is the fascinating story of both his life and his insightful warnings about AI.

Here’s an extract. The article is long but I think essential reading for anyone wanting to grasp and discuss the usefulness and dangers of current AI.

First: there is a difference between man and machine. Second: there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them. The book’s subtitle – From Judgment to Calculation – offers a clue as to how these two statements fit together.

For Weizenbaum, judgment involves choices that are guided by values. These values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.

And that would be fine, if we confined computers to tasks that only required calculation. But thanks in large part to a successful ideological campaign waged by what he called the “artificial intelligentsia”, people increasingly saw humans and computers as interchangeable. As a result, computers had been given authority over matters in which they had no competence.

It would be a “monstrous obscenity”, Weizenbaum wrote, to let a computer perform the functions of a judge in a legal setting or a psychiatrist in a clinical one. Seeing humans and computers as interchangeable also meant that humans had begun to conceive of themselves as computers, and so to act like them. They mechanised their rational faculties by abandoning judgment for calculation, mirroring the machine in whose reflection they saw themselves.

7 Likes

Things I note about AI / LLM

  • They show intelligence is not what we thought, and nowhere near as complex
  • They currently outperform any human in general knowledge
  • They currently outperform many experts in specific knowledge

Their ability to be guided to “do stuff” is amazing and frightening. Even now, with prompt engineering (which IMO is a short-lived requirement), you can have the AI remove safety rails, act in whatever manner you wish and write persuasive text to achieve your goal. This, let loose in the hands of corporations and others who seek to influence the population, is worrying. The ability to personalise messages to each person towards the same outcome is certainly there, and it’s there now.

I also think there will be lots of philosophical debate here while Rome burns.

Technically it’s fascinating, and if anything was ever an eye-opener it’s the LLMs right now. Even run locally and fine-tuned on our data, they indisputably show intelligence as we know it.
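For anyone who wants to poke at this locally themselves, a minimal sketch, assuming the Hugging Face transformers library; gpt2 here is just a small example model, and fine-tuning on your own data would be a separate training step:

```python
# Minimal local text generation with a small open model.
# Assumes: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The question of machine intelligence is",
                   max_new_tokens=40)
print(result[0]["generated_text"])
```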

I don’t do the consciousness thing, I think it is a red herring.

8 Likes

So one shouldn’t do this to set the tone? Quoting from something in this forum almost 4 years ago:

This I agree with.

That LLMs show intelligence I don’t agree with, and it is this misconception that makes the first point so dangerous.

They appear intelligent, knowledgeable, human, etc., and that is what makes them powerful manipulators of individuals and populations. But what they deliver is not their knowledge, let alone wisdom and judgement; they deliver what their trainer has taught them to deliver, under a facade of convincing humanness that is closer to Eliza than it is to a human.

They cannot yet think, reason, make value judgements, etc., because they have neither the cognitive ability nor the experiences that make a human human.

I think it may help you to read the article and respond to that, because I’m very much in agreement with Weizenbaum’s heresies as presented by it and would rather you do that than make claims that I don’t accept.

3 Likes

I did read the article there, thanks for posting.

Here is what threw me about the whole thing, all the pondering about consciousness, cognition, etc. aside.

I asked myself over the Christmas break: what is what we call human intelligence, and is it different from this? Is what I know not the sum of what I have ingested? Are all my biases based on locality, friends, society, culture and so on? I got quite deep into trying to understand why an intelligence that acts on its knowledge (however it got it; let’s call it humanity’s knowledge) is different from us in terms of understanding the output of any question or thought.

I ended up back at the same initial question: what is what we call human intelligence, and is it different from this?

I don’t believe it is, but I have a caveat (as usual). I don’t believe current AI is creative in any way, but (another caveat) I do believe they are great research assistants; they do learn, and they can be a huge enabler of human creation.

8 Likes

I don’t agree that LLMs are intelligent. Weizenbaum describes his prerequisites for the nature of human intelligence and concludes that, at best, machine intelligence will necessarily be alien, or like that of a different species, because machines do not have human or human-like experience.

In addition, LLMs lack the capacity to think, reason, muse, recall, or understand. They are more like a complex database with a mix of good and flawed entries. Like billions of parrots trained on a large library of books.

I’m glad you read the article, but you haven’t explained why you believe LLMs demonstrate human-quality intelligence, given Weizenbaum’s refutation of machine-based human intelligence and the points I’ve just made.

LLMs don’t think. You stimulate, they respond. They are more like an electronic calculator than a human mind, but nobody regards a calculator as intelligent because it doesn’t appear to be intelligent. Stick a chatbot on the front of a calculator and some people would be convinced it was.

The reason people tend to attribute intelligence to LLMs is the same as they did with Eliza, because they appear to understand what is being said to them. But they don’t.

2 Likes

I think there are crossed wires.

That would not mean they are not intelligent, unless we say intelligence is strictly human?

What I am saying is I don’t agree with his hypothesis; many others don’t either, so it’s a debate for sure.

In my mind intelligence is immeasurable, and we all thought that until our intelligence was perhaps surpassed (conundrum) :wink: The old question of what intelligence is seems as yet unanswered, but when challenged I think many jump to cognition, creativity, consciousness and more. So it feels like the goalposts move quite a bit.

If intelligence is the ability to assimilate information and answer questions on that data correctly, then LLMs are intelligent; but if we define intelligence as something else, then perhaps they’re not.

It feels very much like confusion, but what I see is a program that can answer questions on human knowledge better than any human can. Whatever that is called, I think it’s certainly not dumb.

3 Likes

Definitions are so often the issue, and we didn’t define our terms.

I think that when people use the term intelligence they do mean human-like, unless they qualify it, such as animal, plant or swarm intelligence. So you are using a much looser definition than myself and Weizenbaum, and I think most others talking about LLMs and AI in general.

Your definition fits the fallacy that “if it sounds intelligent then it is”. Just like Eliza sounded intelligent to many people who interacted with it, and even when shown how trivial it was, some would not believe it wasn’t. LLMs are like Eliza but with a much larger set of rules/data.
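To make “how trivial” concrete, here is a minimal sketch of the kind of pattern-and-reflect rule Eliza ran on; the rules and wording here are illustrative, not Weizenbaum’s actual DOCTOR script:

```python
import re

# Eliza-style rewrite rules: match a pattern in the user's input
# and reflect their own words back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no rule matches

print(eliza("I feel nobody understands me"))
# -> Why do you feel nobody understands me?
```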

I’ve seen you use the term hallucination, which is another misleading take on “their output contains errors” and promotes the fallacy, because it implies a mind with human-like experience.

I think the next few months could change all of our minds, one way or another.

I’ve seen things you people wouldn’t believe… Attack ships on fire off the shoulder of Orion… I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain… Time to die.

:smiley: :smiley:

2 Likes

Strictly speaking, and I think if we are debating terms here we need to be strict: intelligence is the capacity for learning, not understanding. A 95-year-old engineer probably isn’t very intelligent anymore, but so long as memory loss hasn’t set in, will be able to explain some pretty amazing things in detail.

An LLM isn’t intelligent post-creation. An LLM has a learning phase, where it is created - this is where the ‘intelligence’ is applied. Then that phase is discontinued and it only has its memorised understanding phase.
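To make the two phases concrete, a minimal sketch assuming PyTorch, with a toy model standing in for the vastly larger real thing:

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM; the two phases look the same in outline.
model = nn.Linear(10, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Learning phase: weights are updated from data.
model.train()
x, target = torch.randn(4, 10), torch.randn(4, 10)
loss = loss_fn(model(x), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # the only place any "learning" happens

# Deployment phase: weights are frozen; however many times
# you query it, nothing new is learned.
model.eval()
with torch.no_grad():
    answer = model(torch.randn(1, 10))
```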

So it’s mixing up the ideas here that is confusing I reckon.

Consciousness is another one. I believe that consciousness can be achieved for an LLM if it is allowed to be continually active and recursive. This is certainly possible.

So if a future LLM is allowed to maintain a learning phase and to be continually recursive, then we will see intelligence, understanding and consciousness. Not there yet, but really all the bits are there for this to happen - just none that I know of have done it yet.
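The “continually recursive” part is at least easy to sketch. A minimal self-querying loop, where generate() is a hypothetical stand-in for whatever model call you use (here a trivial echo so the snippet is self-contained); keeping the learning phase open, i.e. continual weight updates, would be a further step on top of this:

```python
import time

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call (local model or API);
    # here it just echoes so the loop is runnable as-is.
    return "A continuation of: " + prompt[-60:]

# Feed the model's output back in as its next input, indefinitely.
thought = "What should I think about next?"
while True:
    thought = generate("Your previous thought was: " + thought +
                       " Continue the train of thought.")
    print(thought)
    time.sleep(1)  # pace the loop; a real system would also trim context
```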

2 Likes

Yes, the Dutch one “bought the farm”, but they were able to re-do the other one:

How They Made Harrison Ford Look 40 Years Younger

“Of course, there were some areas where we needed a stunt double, [when he was] running [he needed] to have that same agility that he did [40 years ago]. It’s not meant to be flashy. It’s supposed to tell a story."

Check out liquid neural nets: very interesting and yes recursive, but also simple. It’s all changing.

1 Like

Classic quote!

1 Like

I agree, though only to a very limited degree - but still probably very useful, and dangerous.

For me consciousness is everything happening at once, which we perceive. It’s a vast empty space, or a cozy intimate space. It’s everything that enters my senses, and it’s there when I enter my dreams.

No one knows what consciousness is. No one!

We will never know, because we can not stand outside of it to observe what it is.

Thoughts, ideas, creativity, computations, birds, the clouds, my deepest inner feelings, my intuition are all just tiny parts of it, and I can perceive all of this happening at the same time.

So the ongoing recursive flow of endless queries would not be consciousness in itself. Successive executions of a computer program, no matter how sophisticated, would be more like a rambling, unquiet mind, if it were a human. But my best ideas come when I’m not thinking at all; they just appear out of nowhere.

Consciousness goes far beyond the simple logic of a computer program.

AI, intelligent or not, is already part of consciousness. It is unlikely to ever be a sentient being on its own. If anything, it is likely to become more and more integrated into human physiology, and so a bigger part of our consciousness, in the future.

On the other hand, could we create an android-type sentient being one day? Maybe… who knows?

1 Like

Then the word has no meaning. We don’t agree here of course.

For me consciousness has a very simple and clear meaning (as I described above). So I will stick to my claim that it’s possible, and in fact I will clarify and say that any time an LLM is queried it has a ‘flash’ of consciousness - but not a continuous stream of it; for that it would have to be recursive and continually queried by others or itself.

Now, the human experience is a broad and nebulous thing - one which encompasses consciousness - and certainly no LLMs have that now, and they likely won’t for some time, particularly as they don’t have all of our in-built genetic instincts/emotions. We could perhaps give them some, but there’s probably not too much need for some of them.

I don’t think discussing consciousness is useful or relevant.

Please take it to another topic if you want to get into that.

1 Like

Here is ChatGPT-4’s answer to the OP, just for reference:

The post you shared touches upon the profound philosophical and ethical considerations surrounding artificial intelligence, particularly as it pertains to the distinction between human judgment and machine calculation. Joseph Weizenbaum’s work with Eliza, the first chatbot, and his subsequent reservations about the broader implications of AI, provide a valuable lens through which we can examine the current state and trajectory of AI technologies.

Here are some key takeaways and reflections on the post:

  1. Human vs. Machine: At the heart of Weizenbaum’s critique is the fundamental distinction between humans and machines. While machines, including AI, can perform calculations at incredible speeds and with high accuracy, they lack the qualitative experiences, emotions, and values that shape human judgment.
  2. Judgment vs. Calculation: Weizenbaum emphasizes that judgment is rooted in values acquired through life experiences, which are inherently qualitative and cannot be codified. In contrast, machines operate on calculations, quantitative processes that lack the depth and nuance of human decision-making.
  3. The Danger of Equivalence: One of the most pressing concerns raised is the increasing tendency to view humans and computers as interchangeable. This not only risks assigning tasks to machines that they are not equipped to handle ethically but also influences how humans perceive themselves, potentially leading to a mechanistic view of human cognition and behavior.
  4. Ethical Implications: Weizenbaum’s warning against using computers in roles that require human judgment, such as legal or psychiatric settings, underscores the ethical dilemmas posed by AI. The danger lies not just in machines making decisions but in the potential erosion of human values and empathy if machines are given undue authority.
  5. Reflection on Modern AI: While Weizenbaum’s critiques were made in the context of the early days of AI, they remain relevant today. As AI technologies become more advanced and integrated into various aspects of society, it’s crucial to continually reassess the boundaries between human and machine roles.
  6. The Role of the “Artificial Intelligentsia”: Weizenbaum’s mention of the “artificial intelligentsia” highlights the influence of certain groups or ideologies in shaping public perception of AI. It’s a reminder that the development and deployment of AI are not just technical challenges but also sociopolitical ones.

In conclusion, Joseph Weizenbaum’s insights serve as a timely reminder of the need for careful consideration and ethical reflection as we navigate the evolving landscape of AI. While AI offers immense potential benefits, it’s essential to approach its integration with a deep understanding of its limitations and the inherent value of human judgment.

4 Likes

Pretty good, but it misses at least one important point: Weizenbaum was also very concerned that machines should not be made to appear human. That is one of the things we’ve both recognised as making AI dangerous as a manipulation tool, among other things such as being trusted more than actual humans, which the article also highlights, I think.

2 Likes