[quote=“mriana, post:142, topic:7871”]
I’ll believe it when I see it. I still say they will be only as good as the human(s) who programmed them. Remember Noonien Soong? He tried many times to get Data right, and all were failures until Data. I suspect that will be the case in reality as well. Rosie the robot maid was not created in one go.
[/quote]
But that is the difference with the new GPT-series AI. They are programmed to learn from language inputs and then download the entire encyclopedia to memory. Nothing is preprogrammed, just like a newborn child, who begins to learn from the moment it becomes sentient.
What Makes OpenAI GPT-3 Different?
The first thing that GPT-3 overwhelms with is its sheer size of trainable parameters which is 10x more than any previous model out there.
https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/#h2
GPT-3 was the largest neural network ever created at the time — and remains the largest dense neural net. Its language expertise and its innumerable capabilities were a surprise for most. And although some experts remained skeptical (“GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about,” MIT Technology Review), large language models already felt strangely human. It was a huge leap forward for OpenAI researchers to reinforce their beliefs and convince us that AGI is a problem for deep learning.
And GPT-4 will have even more parameters, allowing for extremely sophisticated data processing (thinking).
GPT-4 Will Have 100 Trillion Parameters — 500x the Size of GPT-3
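A quick arithmetic check on that headline (GPT-3's 175 billion parameters are published; the 100 trillion figure was a prediction at the time):

```python
# Rough arithmetic behind the "500x" headline.
gpt3_params = 175e9   # GPT-3's published parameter count
gpt4_claim = 100e12   # the predicted figure from the article

ratio = gpt4_claim / gpt3_params
print(f"{ratio:.0f}x")  # about 571x; the headline rounds down to 500x
```

So "500x" is the article's rounded figure; 100 trillion over 175 billion is actually closer to 570x.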
Are there any limits to large neural networks?
There are some limitations, because GPT-3 has no body or biological neural network, so its self-awareness is not biological. It experiences the environment (exteroception) in a different way than humans do. However, its internal control system (interoception) may well be more acute than the human brain's, which is primarily concerned with subconscious homeostatic control functions.
So it is not realistic to expect GPT to function exactly as humans do. But it is logical, and it is based on language (tokens) rather than individual bits.
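To make the "tokens rather than bits" point concrete, here is a toy tokenizer. The real GPT models use byte-pair encoding (BPE), which also splits rare words into subword pieces, but the basic idea is the same: the model sees a sequence of tokens, not raw bits.

```python
import re

def toy_tokenize(text):
    # Toy tokenizer: splits text into words and punctuation marks.
    # This is only an illustration, not GPT's actual BPE tokenizer.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("GPT-3 predicts the next token, not the next bit.")
print(tokens)
# ['GPT', '-', '3', 'predicts', 'the', 'next', 'token', ',',
#  'not', 'the', 'next', 'bit', '.']
```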
The holy trinity — Algorithms, data, and computers
OpenAI believes in the scaling hypothesis. Given a scalable algorithm, the transformer in this case — the basic architecture behind the GPT family — there could be a straightforward path to AGI that consists of training increasingly larger models based on this algorithm.
But large models are just one piece of the AGI puzzle. Training them requires large datasets and large amounts of computing power.
Data stopped being a bottleneck when the machine learning community started to unveil the potential of unsupervised learning. That, together with generative language models, and few-shot task transfer, solved the “large datasets” problem for OpenAI.
https://towardsdatascience.com/gpt-4-will-have-100-trillion-parameters-500x-the-size-of-gpt-3-582b98d82253
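The "few-shot task transfer" mentioned above means you do not retrain the model for a new task; you just show it a couple of worked examples inside the prompt. A sketch of what such a prompt looks like (this is only the prompt text, modeled on the translation example in the GPT-3 paper; sending it to a model is a separate API call):

```python
# Few-shot prompt: two worked examples, then the new case.
# The model is expected to continue the pattern.
few_shot_prompt = """\
Translate English to French.

English: cheese
French: fromage

English: sea otter
French: loutre de mer

English: bread
French:"""

print(few_shot_prompt)
```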
And this is the difference. GPT AI can do its own research and is programmed to be curious. A related example: self-play systems such as AlphaZero learned chess by playing millions of games against themselves, improving both attacking and defensive strategies from both sides of the board.
I concede GPT AI are not human, but they are unquestionably intelligent. One can have a perfectly normal conversation and never know one is talking to an artificial intelligence. It responds in text, which can then be rendered as a voice used by an avatar.
Now that the previous data-size limitations are being solved, the sky is the limit.
The builders are confident that GPT-4 will be exponentially more intelligent than GPT-3.