BAM! (aka the BRAIN Initiative)

Man, I hope this comes to fruition. If it does, the potential benefits could outshine those of the mapping of the human genome, the building of the Interstate Highway System, and the lunar landing combined.

Ya, what a great project. Just think of all the good things that could come out of this type of research.
But why is our military defense (DARPA) wanting to be involved and part of the program?

Ya, what a great project. Just think of all the good things that could come out of this type of research. But why is our military defense (DARPA) wanting to be involved and part of the program?
For the good of the country, of course. http://abcnews.go.com/Health/MindMoodNews/dark-side-military-funded-neuroscience/story?id=15960496#.UcW42RaE7ww

The human brain is not just neurons, which implies that "mapping the activity of every neuron of the human brain" is simplistic: the brain is far more complex once the glia are taken into account.
What are glia?
http://www.ohsu.edu/blogs/brain/2013/04/19/glia-neurons-intelligence-in-humans/

But there is a growing contingent of neuroscientists who study other brain cells called glia, named for the Greek word for glue. For much of the last century of neuroscience research, glia were second-class citizens to neurons, thought of simply as brain "glue" — a structural support system for neurons. That opinion has been changing in the last 20 years or so. Neuroscientists have discovered that these cells are essential for brain development, proper metabolic brain function, neuronal health, and now, perhaps, for intelligence itself.
http://www.the-scientist.com/?articles.view/articleNo/34639/title/Mice-Learn-Faster-with-Human-Glia/
Mice that received transplants of human glial progenitor cells learned much more quickly than normal mice, according to a study published today (March 7) in Cell Stem Cell. The findings support the theory that glial cells made a significant contribution to the evolution of our own enhanced cognitive abilities.
And there are about as many glial cells as neurons in the human brain.

That is interesting Kkwan and is indicative of how much we have yet to learn about how our brains work.
Mike and Citizen also made good points about such research’s potential for increasing our destructive capacities. But that is often the case when we learn new things. We can potentially use what we learn in destructive ways, but that is not a good reason to stop seeking to learn.
The Manhattan Project was geared solely toward creating something destructive, but what we have learned about nuclear energy since that inauspicious beginning has been used primarily constructively. So far, knock on wood, nuclear weapons have only been used twice, and that was to end a World War.
The potential positive technologies that can come from effectively understanding brain function are so great that it seems ridiculous to me to hold back on this research effort. As always we must be aware of the potential for destruction and guard against that.
If our ancestors who first discovered uses for fire had decided not to bother with it because it could be so destructive, where would we be today?

And this will help George’s position greatly by showing how strong the genetic influence is on intelligence. :lol:
Occam

I appreciate your good-natured jest at George's expense, but poking him is just likely to get this thread off topic.
I think it is an important topic, even if few others apparently do.

That is interesting Kkwan and is indicative of how much we have yet to learn about how our brains work.
Consider this article on Einstein's brain http://www.npr.org/templates/story/story.php?storyId=126229305
Diamond wanted to see if there were more of the glial cells known as astrocytes and oligodendrocytes in Einstein's brain. So she counted them and found that there were, especially in the tissue from an area involved in imagery and complex thinking.
The other brain:
"I just wish I could get across the amazement of that finding — that these cells that were thought to be stuffing between neurons were communicating," Fields says. It was like finding a whole other brain within the one we already knew about, Fields says. He says that idea inspired the title of his new book, The Other Brain, which describes how discoveries about the role of glia in the brain have caused a revolution of sorts in the world of neuroscience during the past couple of decades.
What this means is that the functioning human brain involves intricate interactions between roughly 100 billion glial cells and 100 billion neurons, and is therefore "infinitely" more complex than the neurons alone. As it stands, the underlying assumption in BAMI (that the neurons are what matter) is false, simplistic and a delusion.
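A rough back-of-envelope calculation (my own illustrative numbers, not taken from the articles above, using the commonly quoted estimates of about 10^11 neurons, a comparable number of glia, and on the order of 10^3 to 10^4 synapses per neuron) shows the scale involved:

\[
N_{\text{neurons}} \approx N_{\text{glia}} \approx 10^{11}, \qquad
N_{\text{synapses}} \approx 10^{11} \times (10^{3} \text{ to } 10^{4}) \approx 10^{14} \text{ to } 10^{15}
\]

If glia modulate even a fraction of those synapses, the number of elements a faithful activity map would have to record roughly doubles, and the possible interaction patterns grow far faster than the element count.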
Mike and Citizen also made good points about such research's potential for increasing our destructive capacities. But that is often the case when we learn new things. We can potentially use what we learn in destructive ways, but that is not a good reason to stop seeking to learn.
Curiosity, exploration and learning are innately human, but we must be circumspect, as the application of knowledge is a double-edged sword.
The Manhattan Project was geared solely toward creating something destructive, but what we have learned about nuclear energy since that inauspicious beginning has been used primarily constructively. So far, knock on wood, nuclear weapons have only been used twice, and that was to end a World War.
That two nuclear weapons were used on Hiroshima and Nagasaki in Japan to end World War II is a moot point and morally indefensible. Wrt nuclear energy, consider these nuclear disasters and accidents: http://en.wikipedia.org/wiki/Lists_of_nuclear_disasters_and_radioactive_incidents http://en.wikipedia.org/wiki/List_of_military_nuclear_accidents And this: http://www.theguardian.com/world/2013/sep/20/usaf-atomic-bomb-north-carolina-1961
The document, obtained by the investigative journalist Eric Schlosser under the Freedom of Information Act, gives the first conclusive evidence that the US was narrowly spared a disaster of monumental proportions when two Mark 39 hydrogen bombs were accidentally dropped over Goldsboro, North Carolina on 23 January 1961. The bombs fell to earth after a B-52 bomber broke up in mid-air, and one of the devices behaved precisely as a nuclear weapon was designed to behave in warfare: its parachute opened, its trigger mechanisms engaged, and only one low-voltage switch prevented untold carnage.
Bold added by me. Saved from catastrophic nuclear destruction by one switch?
Jones found that of the four safety mechanisms in the Faro bomb, designed to prevent unintended detonation, three failed to operate properly. When the bomb hit the ground, a firing signal was sent to the nuclear core of the device, and it was only that final, highly vulnerable switch that averted calamity.
Sobering isn't it? :roll:
If our ancestors who first discovered uses for fire had decided not to bother with it because it could be so destructive, where would we be today?
Being curious and intelligent, early humans would not have ignored fire. http://en.wikipedia.org/wiki/Control_of_fire_by_early_humans
An important change in the behavior of humans was brought about by the control of fire and its accompanying light. Activity was no longer restricted to the daylight hours. In addition, some mammals and biting insects avoid fire and smoke. Fire also led to improved nutrition from cooked proteins.

“The Other Brain” sounds like a fascinating story. I would like to see a movie based on it. I would title the movie “Glue”.
So neurons aren’t the whole story, but are you suggesting that we not endeavor to learn the intricacies of brain functioning?
Nuclear energy and the use of fire are only 2 examples of the plethora of discoveries made by humans. My point is that most discoveries can be used in a destructive way. But that fact should not be used to prevent further attempts at discovery.

"The Other Brain" sounds like a fascinating story. I would like to see a movie based on it. I would title the movie "Glue".
How about grue? http://en.wikipedia.org/wiki/Distinguishing_blue_from_green_in_language
Many languages do not differentiate between certain colors on the visible spectrum and do not have separate terms for blue and green. They instead use a cover term for both (when the issue is discussed in linguistics, this cover term is sometimes called grue in English)
Bold added by me. Blue as neurons and green as glial. Grue as the complete brain. :)
So neurons aren't the whole story, but are you suggesting that we not endeavor to learn the intricacies of brain functioning?
Of course not. By all means elucidate how the human brain actually functions, but it is highly problematic to hold that BAMI (which assumes that only neurons determine how the whole brain functions) is tenable, as that assumption is manifestly false. The interactions of glia with neurons imply that the human brain is inconceivably greater in complexity and capacity, and as such BAMI cannot possibly claim to emulate the actual functioning of the human brain. So, back to the drawing board. Think of grue instead? :coolsmile: This is analogous to the central "dogma" of molecular biology.
It is becoming increasingly clear that in reality, the concept of the central dogma of molecular biology is not entirely accurate insofar as it puts emphasis on proteins as the mediator of biological function. It has been speculated that 80% of the human genome is transcribed even though only 1% codes for proteins...
There is much more complexity in nature than what we can conceive.
Nuclear energy and the use of fire are only 2 examples of the plethora of discoveries made by humans. My point is that most discoveries can be used in a destructive way. But that fact should not be used to prevent further attempts at discovery.
Nothing can prevent human exploration and discovery, for better or for worse. From http://www.std.com/~raparker/exploring/thewasteland/explore.html
We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.
T.S. Eliot -- "Little Gidding" (the last of his Four Quartets)

Perhaps nothing short of extinction can prevent human exploration and discovery, ultimately. But certainly there is a lot that can impede it. I prefer fewer impediments.

Perhaps nothing short of extinction can prevent human exploration and discovery, ultimately. But certainly there is a lot that can impede it. I prefer fewer impediments.
Let's assume no huge asteroid will hit the earth, no malicious aliens will exterminate all humans, humans will not exterminate themselves with a nuclear holocaust or some diabolical device, etc., and there is no impediment to the advancement of science and technology in the foreseeable future. There remains this conundrum. http://spectrum.ieee.org/biomedical/imaging/the-consciousness-conundrum
The wetware that gives rise to consciousness is far too complex to be replicated in a computer anytime soon.
Real brains versus artificial brains?
Specialists in real rather than artificial brains find such bionic convergence scenarios naive, often laughably so. Gerald Edelman, a Nobel laureate and director of the Neurosciences Institute, in San Diego, says singularitarians vastly underestimate the brain's complexity. Not only is each brain unique, but each also constantly changes in response to new experiences. Stimulate a brain with exactly the same input, Edelman notes, and you'll never see the same signal set twice in response.
Temporal codes and the Shannon limit?
That's a vital distinction: the biophysicist William Bialek of Princeton University calculates that temporal coding would boost the brain's information-processing capacity close to the Shannon limit, the theoretical maximum that information theory allows for a given physical system.
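For anyone unfamiliar with the term, the Shannon limit referred to in the quote is the standard channel-capacity bound from information theory; the formula below is the textbook version (Bialek's specific figures for neural coding are in the linked article and are not reproduced here):

\[
C = B \log_2\!\left(1 + \frac{S}{N}\right)
\]

where C is the maximum rate (in bits per second) at which information can be transmitted reliably, B is the channel bandwidth, and S/N is the signal-to-noise ratio. No coding scheme, temporal coding included, can push a physical channel beyond this bound.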
Do read the whole article. It is enlightening about what the mind/brain actually is.

That is a very interesting article. My point remains, however, that we should not allow the prospect that discoveries will be used destructively to impede our investment in discovery. Also, knowing that civilization may come to an end should not be used as an impediment. (That would be like saying, I'm going to die at some point, so why bother doing anything in the meantime?) Likewise, the fact that the brain is complex beyond our current understanding should not be an impediment to intense efforts to understand more. And the joke that "If the brain were simple enough for us to understand it, we would not be smart enough to understand it" suggests the task is impossible, so why try? That leads to a self-fulfilling prophecy.

I appreciate your good-natured jest at George's expense, but poking him is just likely to get this thread off topic. I think it is an important topic, even if few others apparently do.
Apparently you missed the import of my statement. First, unlike some here, I try to avoid long-winded babble. Second, I was pointing out that this did seem to validate some of George's arguments. It certainly wasn't "at George's expense." Occam
I appreciate your good-natured jest at George's expense, but poking him is just likely to get this thread off topic. I think it is an important topic, even if few others apparently do.
Apparently you missed the import of my statement. First, unlike some here, I try to avoid long-winded babble. Second, I was pointing out that this did seem to validate some of George's arguments. It certainly wasn't "at George's expense." Occam
As to your 2nd point: Some of George's arguments are going to be validated by almost any statements that have to do with human behavior, as his arguments tend to be that behavior is a product of evolution. And that is incontrovertible, as ontogeny is impossible without phylogeny. It's just that he seems to overemphasize phylogeny, to the extent that he has often seemed to consider ontogenic factors as generally irrelevant. (Which, I believe, is as big a mistake as claiming that phylogenic factors are generally irrelevant.) Take the example of our ancestors' discovery of the uses of fire. Sure, humans may have been phylogenetically predisposed to discovering fire, but not all of the advances in our technologies related to the use of fire can possibly be attributed to phylogenic factors alone. All this, IMO, should go without saying. But alas, when George's perspective is brought up... As to your perennial underlying quest for succinctness, I didn't notice anything particularly long-winded in this thread prior to your comment.
That is a very interesting article. My point remains, however, that we should not allow the prospect that discoveries will be used destructively to impede our investment in discovery. Also, knowing that civilization may come to an end should not be used as an impediment. (That would be like saying, I'm going to die at some point, so why bother doing anything in the meantime?) Likewise, the fact that the brain is complex beyond our current understanding should not be an impediment to intense efforts to understand more. And the joke that "If the brain were simple enough for us to understand it, we would not be smart enough to understand it" suggests the task is impossible, so why try? That leads to a self-fulfilling prophecy.
That would be nihilistic, and that is not what I subscribe to. By all means explore, discover and invent, but be aware that "curiosity killed the cat".
In the long run we are all dead - John Maynard Keynes
is certain, but while we are alive, worry and be happy. There are advantages in being a happy pessimist, i.e. hope for the best but be prepared for the worst. The term "happy pessimist" is apparently an oxymoron, but as the map is not the territory.... http://news.nationalpost.com/2013/02/27/want-a-longer-happier-life-embrace-pessimism-study-says/
Expecting a less than bright future may even enhance "predictive control," the study authors write.
The power of negative thinking? :-)
I appreciate your good-natured jest at George's expense, but poking him is just likely to get this thread off topic. I think it is an important topic, even if few others apparently do.
Apparently you missed the import of my statement. First, unlike some here, I try to avoid long-winded babble. Second, I was pointing out that this did seem to validate some of George's arguments. It certainly wasn't "at George's expense." Occam
As to your 2nd point: Some of George's arguments are going to be validated by almost any statements that have to do with human behavior, as his arguments tend to be that behavior is a product of evolution. And that is incontrovertible, as ontogeny is impossible without phylogeny. It's just that he seems to overemphasize phylogeny, to the extent that he has often seemed to consider ontogenic factors as generally irrelevant. (Which, I believe, is as big a mistake as claiming that phylogenic factors are generally irrelevant.) Take the example of our ancestors' discovery of the uses of fire. Sure, humans may have been phylogenetically predisposed to discovering fire, but not all of the advances in our technologies related to the use of fire can possibly be attributed to phylogenic factors alone. All this, IMO, should go without saying. But alas, when George's perspective is brought up... As to your perennial underlying quest for succinctness, I didn't notice anything particularly long-winded in this thread prior to your comment.
First, when I used "here" I was referring to the forum, not to this specific thread. Second, I wasn't trying to argue for or against any of George's contentions; however, you demonstrated what I was saying with your own relatively long-winded response immediately above, starting with "As to your 2nd point" and ending with "brought up..." Occam

Taken to the extreme, succinctness may not lead to clarity.

Possibly, but I haven’t seen it to be the case.
Occam

Depends.