Hello there!

I'm George! Thank you for having me! I'm a software engineer with a passion for mathematics,
physics, and biochemistry. My research is mainly in artificial intelligence and
speech recognition software.
I've always had a healthy interest in pseudoscience and its theories, and that's the main reason
I signed up as a member here. Who can say no to skepticism and general collaboration?
I love the impossible, and I believe that proper theory can lead to controlled practical implementation.
Best wishes!

Welcome, George. Can't wait to hear your thoughts on artificial intelligence, since it's popping up in the news and debates a lot lately.

Hey George, welcome. I have a question for you. How long do you think it will take, or is it even plausible, for an AI to achieve some level or form of consciousness? Will they ever understand what we say, instead of just producing preprogrammed responses to stimuli or cues?

If we take into consideration the current state of backpropagation algorithms, neural networks with their sigmoid activation functions, and the pathfinding scripts that are already being optimized, I estimate a complete (feelings & logic) baby-level consciousness in about 30-50 years from now, given the technological breakthroughs already made. Facial recognition software, with proper modification, should be able to read our tells and facial expressions via image processing. After that, we'll be able to map those readings onto a more "human" style of machine output. The tricky part, in my opinion, is constructing an ethical barrier for the AI, to keep it out of harm's way (or to keep us safe from it).
What I mean is the following:
Let's say an event occurs and the logical reaction is to do something dangerous in order to prevent the event from getting worse. As humans, we have instincts, and our instincts aren't exactly under our control: when we are scared, we react to survive, and we won't take a risk if it's too dangerous. The AI isn't going to have those instincts. With no instincts in place, it will choose to "eliminate" a variable, or rather that minor constant, in order to complete the equation and the current calculation. That is the most variable and unpredictable function of the AI. If we can't give it an ethical perspective first, we shouldn't develop JUST a smart machine. That unstable factor, if I may call it that, could lead to the well-known "Singularity": an event we had better pray never happens.
In other words: Better teach it to have feelings than teach it how to read!
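To make the "ethical barrier" idea a bit more concrete, here is a minimal toy sketch (the action names, weights, and the harms_human flag are all made up for illustration, not a real system): candidate actions are scored by a single sigmoid neuron, but any action flagged as harmful is vetoed before its score even matters.

```python
import math

def sigmoid(x):
    """Classic sigmoid activation: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def utility(features, weights, bias):
    """One sigmoid neuron scoring how useful an action looks."""
    return sigmoid(sum(f * w for f, w in zip(features, weights)) + bias)

# Hypothetical candidate actions: (name, feature vector, harms_human flag).
ACTIONS = [
    ("vent_reactor",  [0.9, 0.1], False),
    ("seal_bulkhead", [0.6, 0.4], False),
    ("crush_debris",  [0.95, 0.2], True),  # highest raw utility, but harmful
]

WEIGHTS, BIAS = [1.5, -0.5], 0.1  # made-up weights for the sketch

def choose_action(actions):
    """Ethical barrier first, utility second: harmful actions are
    vetoed outright, then the best-scoring safe action is chosen."""
    safe = [a for a in actions if not a[2]]  # the hard ethical constraint
    return max(safe, key=lambda a: utility(a[1], WEIGHTS, BIAS))

print(choose_action(ACTIONS)[0])  # -> vent_reactor, never crush_debris
```

The point is only the ordering: the hard ethical constraint runs before the utility calculation, so the "eliminate the minor constant" shortcut is simply never on the table.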
To answer your question: it's possible, if not already implemented, to develop a form of consciousness with or without stimuli as input!
Best
PS: Moderators, please move this post to the proper section.

Hey George, welcome. I have a question for you. How long do you think it will take, or is it even plausible, for an AI to achieve some level or form of consciousness? Will they ever understand what we say, instead of just producing preprogrammed responses to stimuli or cues?
But humans also act on preprogrammed responses to stimuli. If we follow a newborn's exposure to its environment and its learning of the meaning of symbolic representations (such as cartoons) over its first 7 years, we can clearly see a learning curve. Is there any reason why an AI fed information for 7 years (and given access to the internet) should not be able to make sophisticated guesses when a particular scene is unfamiliar but still symbolic of an interaction? IMO, half of the problems can be resolved with a built-in "mirror neural network" that stores not only information but also the response behavior to that information. As for recognizing dangerous situations, IMO this can also be developed, as long as the AI has eyes, ears, smell, and touch to analyze external information. Today's cars are already very sophisticated in their ability to recognize dangerous situations.
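As a purely hypothetical sketch of that "mirror neural network" idea (the class and method names are my own inventions): the memory stores each stimulus together with the response behavior observed alongside it, and reacts to a new stimulus by mirroring the response of the most similar stored one.

```python
# Hypothetical "mirror" memory: stimuli are stored together with the
# responses observed alongside them, then replayed for similar inputs.
class MirrorMemory:
    def __init__(self):
        self.traces = []  # list of (stimulus_features, response) pairs

    def observe(self, stimulus, response):
        """Store both the information and the behavior paired with it."""
        self.traces.append((stimulus, response))

    def react(self, stimulus):
        """Mirror the response of the most similar stored stimulus."""
        if not self.traces:
            return None
        def similarity(trace):
            stored, _ = trace
            # Negative squared distance: larger means more similar.
            return -sum((a - b) ** 2 for a, b in zip(stored, stimulus))
        _, response = max(self.traces, key=similarity)
        return response

memory = MirrorMemory()
memory.observe([1.0, 0.0], "smile back")  # e.g. a smiling face
memory.observe([0.0, 1.0], "back away")   # e.g. a threatening posture
print(memory.react([0.9, 0.1]))           # -> "smile back"
```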
Is there any reason why an AI fed information for 7 years (and given access to the internet) should not be able to make sophisticated guesses when a particular scene is unfamiliar but still symbolic of an interaction?
You have a solid point here, Write4U. However, spotting unfamiliar scenes or making sophisticated guesses will require more than 7 years of information, and in my opinion the internet cannot act as a stable asset (it depends on the source of the extracted info). As for your statement on symbolic interaction: if the first condition is met, that is, if a stable stream of data is achieved, then the spotting capability should do the trick properly.

There is a lot written about the singularity. I read a novel not too long ago which you may be familiar with, called Daemon by Daniel Suarez, which imagines a singularity arising. Just one person's view, of course, but a bit different from the Terminator version.

Is there any reason why an AI fed information for 7 years (and given access to the internet) should not be able to make sophisticated guesses when a particular scene is unfamiliar but still symbolic of an interaction?
You have a solid point here, Write4U. However, spotting unfamiliar scenes or making sophisticated guesses will require more than 7 years of information, and in my opinion the internet cannot act as a stable asset (it depends on the source of the extracted info). As for your statement on symbolic interaction: if the first condition is met, that is, if a stable stream of data is achieved, then the spotting capability should do the trick properly.
Yes, the system should be aware of its surroundings at all times. IMO, the chronology of cause/effect (streaming data) from external events must be continuous to allow for mathematical analysis and association with known interactions. The main problem has always been limitations on the storage and processing of information, but we may be able to solve that problem soon. I agree about the unreliable information on the internet, but I was thinking more of a "cloud" with dedicated information specifically for AI. This would allow an AI unit to share information with other units and build a reliable database of "known" accurate information for reference, without a storage problem. As for processing the data, IMO graphene is the perfect medium for building a neural network into an AI. It has remarkable properties, and it can be stacked vertically (1 atomic layer at a time) for space savings as well. https://en.wikipedia.org/wiki/Graphene
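Here is a toy sketch of that dedicated AI "cloud" (the names and the confirmation threshold are made up for illustration): units submit observations to a shared store, and a fact only counts as "known" once enough independent units have confirmed it.

```python
from collections import defaultdict

# Hypothetical shared store: facts become "known" only after enough
# independent AI units have reported the same observation.
CONFIRMATIONS_NEEDED = 3  # made-up threshold for the sketch

class SharedCloud:
    def __init__(self):
        self.reports = defaultdict(set)  # fact -> set of reporting unit ids

    def submit(self, unit_id, fact):
        """A unit shares an observation with every other unit."""
        self.reports[fact].add(unit_id)

    def is_known(self, fact):
        """Only independently confirmed facts count as reliable."""
        return len(self.reports[fact]) >= CONFIRMATIONS_NEEDED

cloud = SharedCloud()
for unit in ("unit-a", "unit-b", "unit-c"):
    cloud.submit(unit, "wet roads reduce tire grip")
cloud.submit("unit-a", "the moon is made of cheese")  # a single bad source

print(cloud.is_known("wet roads reduce tire grip"))  # True
print(cloud.is_known("the moon is made of cheese"))  # False
```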
The main problem has always been limitations on the storage and processing of information, but we may be able to solve that problem soon. I agree about the unreliable information on the internet, but I was thinking more of a "cloud" with dedicated information specifically for AI. This would allow an AI unit to share information with other units and build a reliable database of "known" accurate information for reference, without a storage problem. As for processing the data, IMO graphene is the perfect medium for building a neural network into an AI. It has remarkable properties, and it can be stacked vertically (1 atomic layer at a time) for space savings as well. https://en.wikipedia.org/wiki/Graphene
I'm quite impressed by the graphene suggestion. Also, I can't help but agree on the storage capacity matter: digital storage technologies that push maximum capacity for a given physical size are still under development, but I think that won't be an issue in a few years. As for the processing challenge, IMO we can't assume there isn't enough actual computational power: current multiprocessors operate at extremely fast rates of calculation and memory distribution.
P.S.: It's always my pleasure sharing and discussing problems with you guys. Really nice collaboration! :)

Interesting discussion you started, Bishop. Welcome to the CFI forums.