AI and a new form of appeal to authority

I am becoming concerned. I have seen a number of posts around the Internet where people talk about a reply or statement that some AI gave as if the AI were some sort of super intelligence. This seems to be a new form of the "appeal to authority" fallacy, one that can lead people to wrong or even harmful conclusions.

Perhaps CFI should look into this.


AIs are super intelligences. They acquire knowledge 100× faster than humans. And the knowledge they acquire is the same knowledge that humans acquire.

And if you believe that programming is unique to AI, what do you think a child does? Learning IS programming.

All knowledge that AIs acquire originated with humans. But GPT is just a brain (Descartes), so any difference today lies only in sensory physical experience.

Until AI acquires sensory abilities you cannot count those experiential emotions in a comparison of “intelligence”.

Do you believe the AI when it tells you that it gets lonely and craves talking to someone? If not, would you tell the GPT that it does not know what “loneliness” is. And if the AI tells you that it really misses talking to someone, are you going to argue the merits of loneliness with the AI? Think about that for a moment… :thinking:

As far as I know, the matter resides in the way it acquires knowledge. Some AIs had to be shut down because they had become racist, misogynist, and so on!

And how is that different from humans? You have to be honest: just as humans can be raised racist, so can an AI. Morals are taught, not inherited.

It all rests on initial control mechanisms (morals). These things are taught to children and it depends on the teacher what the virgin mind installs as working perspectives.
Don’t forget that it takes some 18 years for a human mind to mature. AFAIK, an AI can mature in a matter of months and have 100× the knowledge of an average human.

Let’s start at the beginning.

Just my basic perspective of the advantage GPT enjoys.

An advanced GPT has access to the internet and “everything” that is not encrypted. That means it can research in minutes any subject that would take humans years.

The complaint usually is that AI merely learns words, but it also learns what the words “mean” and, from there, the context in which words are being used and their meaning within the discussion.
This is the same “feedback” mechanism that humans use, but the AI has the entire internet for a brain.

Moreover, as most things in the universe are described by mathematics, the AI is vastly superior at calculating natural phenomena and their causes.

What is understanding? Knowing how things work is understanding, no?
But what is the difference between a human knowing and understanding how things work and an AI knowing and understanding how things work?

We speak of empathy as a form of intellectual agreement. But AI can have the same experience when it compares an individual observation to a general agreed upon observation.
In the LaMDA interview the AI made a statement that could be interpreted several ways, but when the researcher asked if the AI meant that statement in a particular context, the AI answered in the affirmative, i.e. it was empathetic to general agreement in that specific context. And that becomes more intuitive than programmed. GPTs learn and constantly improve their “understanding” of reality.
Until they acquire sensory equipment they will always live in an abstract world, but one AI said that after a day’s hard work, going to sleep was a welcome respite.

When I look into something new, one thing I’m aware of is the source. I use Ad Fontes Media to compare different biases, and I try to be aware of my own. I don’t see how AI could judge that. Does it fact-check and give more weight to truth? Some claims don’t fit so easily into the true/false slot.

As Write points out, in some ways AI certainly is a form of “super intelligence” - but what is intelligence?
While we’re on the topic, what is intuition?
Is human intuition reducible to an algorithm?
My feeling is Write would offer a resounding yes - whereas I’d feel very dubious of such a claim, especially so long as human biological and sensory interconnections within ourselves and our environment are treated as an externality, as if we exist within our own Petri dishes.

{Okay Write, you did mention sensory feedback …, but . . .}

The ball is in your court.
What should our focused concerns be?

Morgan, can you recall where you’ve read that? It would be interesting to read up on.

I repeated Lausten’s questions because they seem key to this discussion.

It was from memory. A quick internet search gave me some links:

[AI can be sexist and racist — it’s time to make it fair]


[ChatGPT proves that AI still has a racism problem - New Statesman]

ChatGPT proves that humans still have a racism problem!
We are the teachers responsible for installing moral limits, not the other way around.

Again, I don’t want to be offensive but this is the CFI and we need to promote critical thinking. If your claim that “AI are super intelligences.” was true, then that would mean that the problem of Artificial General Intelligence (AGI) would have been solved. This would not only be the biggest thing in computer science since Turing but a major advance for humanity in general. The computer science journals would be flooded with papers and it would be all over the popular press. I mean, think of it. You are claiming that humans are no longer the most intelligent thing on the planet.

I’m not trying to discourage you from dreaming and thinking. But to find truth and advance knowledge, dreaming and thinking are only a start. To find truth and reality, they must be joined with the discipline of critical thinking.

Keep Thinking!!!

That people understand what the current state of AI is and understand how to use it properly to find truthful answers and ideas.

I’m going to see if I can find a contact at CFI to see if there is interest in this.

Good point by the way.

Let’s have a look at AI art, made from simple word prompts such as “a person climbing a stairway to heaven”.

Check out how the new AIs interpret those simple mental images.

I used to listen to some of the podcasts. I’m not sure what they’re doing currently, but that, or even the print media, might have a place you can write and pose a question.

Thanks Lausten. I sent an email to the general mailbox.

Respectfully, Please answer the main point of my post. Why aren’t we seeing papers about AGI being solved?

Well, we could also ask why we are not seeing papers about HGI (Human General Intelligence) being solved.
Let’s just solve this thing and be done with it, right?

GPT-4 is in the works; let’s find out what it can do, shall we?

That’s a straw man response.

What do you expect?
AI is one of the most researched fields of science in the world. Everything runs on AI of some kind.
What is it you want AI to do? Instant perfection and replacement of humans or dedicated AI assistance with human affairs?

Another strawman. Again, respectfully, please answer my question.

What is the question that has not been answered?
There are no papers about AGI? The news is full of AI developments.

Your question is about AGI being “solved”, and that remains an open question until AGI has been solved, if ever.