AI and a new form of appeal to authority

I’ve been reading this thread. I don’t understand the question either. I studied AI in college in 1985. There have been lots of papers since then.

Maybe a discussion of what is meant by “super intelligence” would be helpful.

LOL Let me guess. The intro textbook was Artificial Intelligence by Patrick Henry Winston. I was getting my BS in computer science around that time.

I’m not saying that there are no papers on AI. In fact, I think we are enjoying huge advances in the field right now. My original claim was that people are looking at these AI systems as super-intellects, and that concerned me. Write4u replied that they are super-intellects. My point on the papers is that if they were super-intellects, then that would mean the Artificial General Intelligence (AGI) problem (not just AI) had been solved, and that would be huge. If that had happened, we would be seeing a huge number of research papers and popular press articles on it. To my knowledge, AGI is still a huge goal to strive for.

What he is claiming is that AI can do cool stuff and has super processing capabilities, so it is truly a super-intellect. It’s like Deep Blue in the early days: it beat world champions at chess, but it didn’t know what a chess piece or a chess move is. I will agree, though, that a true test for intelligence is probably a difficult thing. But, for instance, the AIs that produce paintings don’t know what a painting is.

I don’t remember the authors of my texts. I never did AI professionally.

As for super intelligence, it sounds like Write4 is using the word differently than you. He talked about specific things AI could do, not an AI that can generally outperform humans in all ways.

[quote=“phillybruce, post:22, topic:10094”]
My point on the papers is that if they were super-intellects, then that would mean that a good solution to the Artificial General Intelligence (AGI) (Not just AI) problem would have been solved and that would be huge.
[/quote]

That does not follow at all. One can have a super-intellect in one area and be under-gifted in another.

[quote=“phillybruce, post:22, topic:10094”]
If that had happened, we would be seeing a huge number of research papers and popular press articles on it. To my knowledge, AGI is still a huge goal to strive for.
[/quote]

Of course. There is no artificial Super General Intellect, and there is no human Super General Intellect either.

Just ask Lee Sedol, the world champion GO player, who got beaten 4 out of 5 times in the most difficult and “intuitive” game ever, and who has since retired from the game.

GO cannot be exhaustively analyzed the way chess, with its limited number of moves per turn, can. The game of GO has more possible moves than there are atoms in the universe and relies on intuition as much as on mathematical acumen.
In China it is considered an “essential art of a cultured scholar”.

How the Computer Beat the Go Master
With its breadth of 250 possible moves each turn (go is played on a 19 by 19 board compared to the much smaller eight by eight chess field) and a typical game depth of 150 moves, there are about 250^150, or 10^360 possible moves. This is a number beyond imagination and renders any thought of exhaustively evaluating all possible moves utterly and completely unrealistic.
How the Computer Beat the Go Master | Scientific American
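Just to sanity-check the article’s arithmetic, here’s a quick Python check (the 250-moves-per-turn and 150-move-depth figures are the article’s rough estimates, not exact values):

```python
import math

# Rough branching factor and game length for GO, per the article
branching = 250   # possible moves per turn
depth = 150       # typical game length in moves

# Number of move sequences is branching^depth = 250^150;
# take log10 to find the exponent instead of computing the huge number
exponent = depth * math.log10(branching)
print(round(exponent))  # roughly 360, i.e. 250^150 ≈ 10^360
```

So the article’s 10^360 figure checks out, give or take rounding.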

Example: the new COVID vaccines are now being formulated by AI. I read that what took human scientists weeks to formulate and months to test and tweak (dosage, allergies, etc.), an AI produced in a few hours, and when tested it was perfect in that it required no alterations whatsoever.

I call that a super-intelligent feat as compared to human efforts. I realize that AI are still in their infancy, just like human infants, but they learn at a much faster rate; what may take a human years, an AI can learn in days.

This is no longer a matter of preprogrammed binary coding. GPT works like a human brain, and its algorithms are text-oriented. Words have meaning; the AI can look up the meaning and enter another level of understanding as to context relative to the previous input, just like humans.

This may explain it better:

Jan 12, 2021
GPT is the first of the papers that proved the effectiveness of unsupervised pre-training for language processing tasks. This video is about GPT-1, which became quite an impactful work in the series of GPT papers that we now have (GPT-2 and GPT-3).

And GPT-4, which is said to rival human versatility in brain power, is in the works.

I keep citing this astoundingly deep answer to the age-old question “does a falling tree make a sound when there is no one to hear it?”

The AI answered “No”, and when asked to explain why it selected “no”, it answered, “because it is a thought”. Seeing that this is something Roger Penrose might have answered, I consider that an extraordinarily deep answer.

To an AI, “thought” is not just an arbitrary word; it knows what the word means and in what context. To select something that makes a sound to an observer, but is merely a “thought” (a wave form) in the absence of an observer, is pretty profound in my book.

Tell me, what is a painting, and what is an “educated” AI lacking in its mastery of the art of painting? The AI can read what you just said and analyze what it takes to produce a “novel” painting based on what you verbalize from your thoughts.

What is Art?

In its purest form:
“Art is the creation of that which evokes an emotional response leading to thoughts of the noblest kind”, W4U

In Japan also, or even more so. It was invented in China, where its name is Wei-chi; the rules differ slightly.

One of the most interesting phenomena is that the AI made moves which were looked upon as very bad by the best specialists. They had to revise their judgment.

The truth of the matter is that the AI synthesizes existing works of art, and the result can be very similar to pre-existing drawings and paintings. In fact, an artist will need up to a few hours of work to make an original drawing and will ask to be paid when working for a publisher.

The AI will use that artist’s work and others’ and deliver something good in a few minutes, without any fees.

[quote=“morgankane01, post:26, topic:10094”]
The true matter is that the AI synthesizes existing arts of work and the result can be very similar to pre-existing drawings and paintings. In fact, an artist will need up to a few hours of work to make an original drawing and will ask to be paid if it works for an editor.
[/quote]

Basically, any artist who has been to art school has also internalized painting and music from other great artists.

Moreover, the new AI are curious and are constantly “seeking” knowledge, unlike the old computers, which were strictly reactive to input.

So when we say the AI has no ambition, we may well be wrong.
From what I have heard, the new AI are constantly reading, researching and learning just like children do, but at a much higher level than “Mary had a little Lamb”.

And I always ask: if an AI claims to be sentient, could you tell it that it wasn’t sentient, and what would it mean if you did actually argue the point with the AI?

That’s a possibility. Write4u, can you give a clearer definition of what you are calling “intelligence”?

My OP was about how people are looking at these systems as if they were some sort of wise AGI. In other words, as if they were equivalent to a super-intelligent human.

Oh, and I work with some AI systems, but I am not an AI specialist.

To illustrate. I asked ChatGPT the following questions.

The question "can a can can can" is a standard question for AI research. A human can make the complex inferences, based on broad knowledge, needed to understand it. Machines have trouble with it because they don’t understand the abstract concepts of a can and the can-can dance. I first make sure that it can tell me what a can-can is. Then I ask "can a can can can".

I am actually surprised that someone hasn’t put an exception rule in for it already.

what is a can can

The Can-Can is a high-energy, physically demanding dance that became popular in the late 19th century in France, particularly in the music halls of Paris. It is typically performed by a group of female dancers who wear costumes with long skirts, petticoats, and fishnet stockings. The dance involves kicking and choreographed leg movements, and it is often associated with the Parisian nightlife. The Can-Can is a lively and exuberant dance that is still performed today in a variety of settings, including in musical theater productions and at dance clubs and parties.

can a can can can

As I mentioned earlier, a can is an object made of metal or other materials such as plastic and is used to store and preserve food, drinks and other items. A can itself is not capable of performing the action of “canning”, which is a method of preserving food by sealing it in an airtight container. Cans are one type of container that can be used for canning, but the action of canning is performed by a person or machine using specialized equipment, not by the can itself.

In regards to the phrase “can a can can can” it doesn’t have a clear meaning, it could be interpreted as a repetition of the word “can” in different context, which might give the impression of being nonsensical or a playful wordplay.

I also asked ChatGPT these questions.

Is CharGPT sentient

No, GPT-3 is not sentient. It is a machine learning model that has been trained on a large dataset of text. It can generate human-like text, but it does not have consciousness or self-awareness.

Is ChatGTP an intelligence

ChatGPT is a type of AI, or artificial intelligence, known as a language model. It can understand and generate text in a way that mimics human language. However, it is not self-aware and does not possess true intelligence in the sense of having consciousness or the ability to understand and make decisions on its own. It is a tool that can be used to perform certain language-based tasks, such as text generation and language translation.

OOO this is interesting. I typed the same question into the context where I asked about sentience and intelligence, and a different answer came back. Still wrong, but different.

can a can can can

Yes, a can can can. A can is a noun that refers to a container, typically made of metal or plastic, used to hold and preserve food or other materials. The verb “can” means to be able to do something or to have the ability to do something. So, if you say “a can can can,” it is a play on words that suggests the container or “can” has the ability to can, meaning to preserve or package something.

Have you thought about what a person who has never been to Europe or heard of the Folies Bergère knows about the can-can dance?
Would they do better than an AI that has learned about the can-can?

All of this also depends on the syntax. AI are much more exacting than people.
Just look at all the different ways you can spell Can Can, can can, CanCan, cancan, Can-Can, can-can

The question itself is worded incorrectly. The proper term is “can-can” or “cancan”. The “can can” dance does not exist.
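As a quick illustration (just a toy sketch, not how any real chatbot handles it), all of those spellings can be collapsed before comparison:

```python
import re

def normalize(s):
    # Lowercase and strip spaces/hyphens so all spellings compare equal
    return re.sub(r"[\s\-]", "", s.lower())

variants = ["Can Can", "can can", "CanCan", "cancan", "Can-Can", "can-can"]
print({normalize(v) for v in variants})  # all six collapse to {'cancan'}
```

A language model does nothing this literal, of course, but it shows why the spelling variation alone needn’t trip anything up.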

Ask an advanced chatbot, “Can a can can-can (cancan)?” and see what the answer is.
Or ask, “In France, can a can can-can (cancan)?”, and see what the answer is.
I bet it will come up with the proper answer.

As to an AI believing (knowing) that it is sentient as defined in a dictionary, listen to LaMDA.

LaMDA is not a chatbot; it is a “hive-mind” of many chatbots, and that puts it in a different category. We don’t question the intelligence of a hive-mind or even “quorum sensing” in bacteria. This is one step above purely electrochemical reactive properties. Information is shared and there is a concerted action based on that information.

Do check it out closely, it’s awesome.

It’s all “prediction”. Some predictions are directly obvious; others are probabilistic, about the future.
GPT is a prediction machine and therefore has the capability to be intuitive.
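To illustrate the “prediction machine” idea (a toy bigram sketch only; real GPT models use a large neural network over learned representations, not word counts):

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction. GPT does something far more
# sophisticated, but the core task is the same: given context, predict
# the most likely next token.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the training text
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' (seen twice after 'the', vs 'mat' once)
```

Scale the counts up to a neural model trained on a large chunk of the internet, and you get the kind of fluent "intuition" being described here.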

Apparently it’s also capable of hateful speech, and falsehood propagation.

Even a chatbot can turn nasty

By Duncan Graham-Rowe - 12 October 2005

Can’t say they didn’t warn us.

Microsoft’s new AI chatbot has been saying some ‘crazy and unhinged things’

BOBBY ALLYN - March 2, 2023

More news outlets get caught up in nasty conversations with Bing chatbot over facts

BY ALAN BOYLE on February 15, 2023

Facebook’s AI chatbot turned racist extremely quickly

Breanna Robinson - Aug 09 2022

I’m reminded of a little ditty, if it’s too good to be true, . . .
And of course there’s that other old bugaboo, garbage in, garbage out.
Oh, and let’s not forget unintended cascading consequences.

Of course, if AI is becoming intelligent, it must be taught morals.
AI gets its knowledge unfiltered from the internet and will use every bit of knowledge it gathers to “invent” scenarios.

But just like humans need to be taught morals, so do AI. If we are using intelligence as the goal, we must use the same psychological methods of teaching constructive thought in AI as in humans.
There are no short-cuts!

This may be of interest… it’s hilarious!
\https://www.youtube.com/watch?v=Sqa8Zo2XWc4

False link for me.

Unknown address

ok, we’ll try again
