The Interplay of Brain, Perception, and Language - R. E. SCHMID
Color perception shifts from right brain to left
RANDOLPH E. SCHMID, AP Science Writer
WASHINGTON – Learning the name of a color changes the part of the brain that handles color perception. Infants perceive color in the right hemisphere of the brain, researchers report, while adults do the job in the brain's left hemisphere.
Testing toddlers showed that the change occurred when the youngsters learned the names to attach to particular colors, scientists report in Tuesday's edition of Proceedings of the National Academy of Sciences.
"It appears, as far as we can tell, that somehow the brain, when it has categories such as color, it actually consults those categories," Paul Kay of the department of linguistics, University of California, Berkeley, said in a telephone interview.
He said the researchers did a similar experiment with silhouettes of dogs and cats with the same result -- once a child learns the name for the animal, perception moves from the right to the left side of the brain.
"It's important to know this because it's part of a debate that's gone on as long as there has been philosophy or science, about how the language we speak affects how we look at the world," Kay said. Indeed, scholars continue to discuss the comparative importance of nature versus nurture.
The researchers studied the time it took toddlers to begin eye movement toward a colored target in either their left or right field of vision to determine which half of the brain was processing the information.
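To make the method concrete, here is a minimal, purely illustrative sketch of how saccade-onset latencies for targets in the left versus right visual field might be compared. The numbers are simulated and the two-sample t-test is an assumed analysis choice, not the authors' actual pipeline.

```python
# Hypothetical sketch: compare saccade-onset latencies (ms) for targets shown
# in the left vs. right visual field. Faster responses to right-field targets
# would be consistent with left-hemisphere processing, and vice versa.
# All values below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated latencies for one group of toddlers, in milliseconds.
left_field = rng.normal(loc=310, scale=40, size=30)   # target in left visual field
right_field = rng.normal(loc=290, scale=40, size=30)  # target in right visual field

t_stat, p_value = stats.ttest_ind(right_field, left_field)
print(f"mean left-field latency:  {left_field.mean():.1f} ms")
print(f"mean right-field latency: {right_field.mean():.1f} ms")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```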
The research was funded by the National Science Foundation.
On the Net:
PNAS: http://www.pnas.org
http://news.yahoo.com/mp/1689/20081118;_ylt=Ak9Rikqcq.N.rK.eMAsXCeYbr7sF
A Brand-New Theory of Consciousness - J. Ouellette
A Fundamental Theory to Model the Mind
Jennifer Ouellette, 04/03/14
In 1999, the Danish physicist Per Bak proclaimed to a group of neuroscientists that it had taken him only 10 minutes to determine where the field had gone wrong. Perhaps the brain was less complicated than they thought, he said. Perhaps, he said, the brain worked on the same fundamental principles as a simple sand pile, in which avalanches of various sizes help keep the entire system stable overall -- a process he dubbed “self-organized criticality.”
As much as scientists in other fields adore outspoken, know-it-all physicists, Bak’s audacious idea -- that the brain’s ordered complexity and thinking ability arise spontaneously from the disordered electrical activity of neurons -- did not meet with immediate acceptance.
But over time, in fits and starts, Bak’s radical argument has grown into a legitimate scientific discipline. Now, about 150 scientists worldwide investigate so-called “critical” phenomena in the brain, the topic of at least three focused workshops in 2013 alone. Add the ongoing efforts to found a journal devoted to such studies, and you have all the hallmarks of a field moving from the fringes of disciplinary boundaries to the mainstream.
“How do we know that the creations of worlds are not determined by falling grains of sand?” -- Victor Hugo, “Les Misérables”
In the 1980s, Bak first wondered how the exquisite order seen in nature arises out of the disordered mix of particles that constitute the building blocks of matter. He found an answer in phase transition, the process by which a material transforms from one phase of matter to another. The change can be sudden, like water evaporating into steam, or gradual, like a material becoming superconductive. The precise moment of transition -- when the system is halfway between one phase and the other -- is called the critical point, or, more colloquially, the “tipping point.”
Classical phase transitions require what is known as precise tuning: in the case of water evaporating into vapor, the critical point can only be reached if the temperature and pressure are just right. But Bak proposed a means by which simple, local interactions between the elements of a system could spontaneously reach that critical point -- hence the term self-organized criticality.
Think of sand running from the top of an hourglass to the bottom. Grain by grain, the sand accumulates. Eventually, the growing pile reaches a point where it is so unstable that the next grain to fall may cause it to collapse in an avalanche. When a collapse occurs, the base widens, and the sand starts to pile up again -- until the mound once again hits the critical point and founders. It is through this series of avalanches of various sizes that the sand pile -- a complex system of millions of tiny elements -- maintains overall stability.
While these small instabilities paradoxically keep the sand pile stable, once the pile reaches the critical point, there is no way to tell whether the next grain to drop will cause an avalanche -- or just how big any given avalanche will be. All one can say for sure is that smaller avalanches will occur more frequently than larger ones, following what is known as a power law.
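As a concrete illustration of the sand-pile idea, the sketch below simulates the standard Bak-Tang-Wiesenfeld toy model (a generic textbook version, not anything brain-specific): grains are dropped one at a time, any site holding four or more grains topples onto its neighbours, and the sizes of the resulting avalanches follow an approximate power law, with small avalanches far outnumbering large ones.

```python
# Minimal Bak-Tang-Wiesenfeld sandpile sketch: drop grains one at a time,
# topple any cell holding 4 or more grains onto its neighbours, and record
# how many topplings each dropped grain triggers (the "avalanche size").
import numpy as np

rng = np.random.default_rng(1)
N = 20                        # grid side length
grid = np.zeros((N, N), dtype=int)
avalanche_sizes = []

for _ in range(10000):
    # Drop one grain at a random site.
    i, j = rng.integers(0, N, size=2)
    grid[i, j] += 1
    size = 0
    # Relax the pile: topple until every site holds fewer than 4 grains.
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            break
        for x, y in unstable:
            grid[x, y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < N and 0 <= ny < N:
                    grid[nx, ny] += 1   # grains toppled off the edge are lost
    if size > 0:
        avalanche_sizes.append(size)

# Small avalanches vastly outnumber large ones (roughly a power law).
sizes = np.array(avalanche_sizes)
for s in (1, 10, 100):
    print(f"avalanches of size >= {s}: {(sizes >= s).sum()}")
```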
Bak introduced self-organized criticality in a landmark 1987 paper -- one of the most highly cited physics papers of the last 30 years. Bak began to see the stabilizing role of frequent smaller collapses wherever he looked. His 1996 book, “How Nature Works,” extended the concept beyond simple sand piles to other complex systems: earthquakes, financial markets, traffic jams, biological evolution, the distribution of galaxies in the universe -- and the brain. Bak’s hypothesis implies that most of the time, the brain teeters on the edge of a phase transition, hovering between order and disorder.
The brain is an incredibly complex machine. Each of its tens of billions of neurons is connected to thousands of others, and their interactions give rise to the emergent process we call “thinking.” According to Bak, the electrical activity of brain cells shifts back and forth between calm periods and avalanches -- just like the grains of sand in his sand pile -- so that the brain is always balanced precariously right at the critical point.
A better understanding of these critical dynamics could shed light on what happens when the brain malfunctions. Self-organized criticality also holds promise as a unifying theoretical framework. According to the neurophysiologist Dante Chialvo, most of the current models in neuroscience apply only to single experiments; to replicate the results from other experiments, scientists must change the parameters -- tune the system -- or use a different model entirely.
Self-organized criticality has a certain intuitive appeal. But a good scientific theory must be more than elegant and beautiful. Bak’s notion has had its share of critics, in part because his approach strikes many as ridiculously broad: He saw nothing strange about leaping across disciplinary boundaries and using self-organized criticality to link the dynamics of forest fires, measles and the large-scale structure of the universe -- often in a single talk. Nor was he one to mince words. His abrasive personality did not endear him to his critics, although Lee Smolin, a physicist at the Perimeter Institute for Theoretical Physics, in Canada, has chalked this up to “childlike simplicity,” rather than arrogance. “It would not have occurred to him that there was any other way to be,” Smolin wrote in a remembrance after Bak’s death in 2002. “Science is hard, and we have to say what we think.”
Nonetheless, Bak’s ideas found fertile ground in a handful of like-minded scientists. Chialvo, now with the University of California, Los Angeles, and with the National Scientific and Technical Research Council in Argentina, met Bak at Brookhaven National Laboratory around 1990 and became convinced that self-organized criticality could explain brain activity. He, too, encountered considerable resistance. “I had to put up with a number of critics because we didn’t have enough data,” Chialvo said. Dietmar Plenz, a neuroscientist with the National Institute of Mental Health, recalled that it was impossible to win a grant in neuroscience to work on self-organized criticality at the time, given the lack of experimental evidence.
Since 2003, however, the body of evidence showing that the brain exhibits key properties of criticality has grown, from examinations of slices of cortical tissue and electroencephalography (EEG) recordings of the interactions between individual neurons to large-scale studies comparing the predictions of computer models with data from functional magnetic resonance (fMRI) imaging. “Now the field is mature enough to stand up to any fair criticism,” Chialvo said.
One of the first empirical tests of Bak’s sand pile model took place in 1992, in the physics department of the University of Oslo. The physicists confined piles of rice between glass plates and added grains one at a time, capturing the resulting avalanche dynamics on camera. They found that the piles of elongated grains of rice behaved much like Bak’s simplified model.
Most notably, the smaller avalanches were more frequent than the larger ones, following the expected power law distribution. That is, if there were 100 small avalanches involving only 10 grains during a given time frame, there would be 10 avalanches involving 100 grains in the same period, but only a single large avalanche involving 1,000 grains. (The same pattern had been observed in earthquakes and their aftershocks: if there are 100 magnitude-6.0 quakes in a given year, the Gutenberg-Richter relation predicts roughly 10 magnitude-7.0 quakes and one magnitude-8.0 quake.)
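In symbols, the ratios quoted above correspond to a power law with exponent one (for the sand pile) and to the Gutenberg-Richter frequency-magnitude relation (for earthquakes). This is a schematic statement of the scaling only, not a fit to any particular dataset.

```latex
% Avalanche-size power law implied by the 100 : 10 : 1 example (exponent ~1):
N(s) \propto s^{-1}, \qquad
\frac{N(10)}{N(100)} = \frac{N(100)}{N(1000)} = 10 .

% Gutenberg--Richter relation for earthquakes: each unit increase in magnitude M
% makes events roughly ten times rarer when b \approx 1:
\log_{10} N(M) = a - b\,M .
```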
Ten years later, Plenz and a colleague, John Beggs, now a biophysicist at Indiana University, observed the same pattern of avalanches in the electrical activity of neurons in cortical slices -- the first key piece of evidence that the brain functions at criticality. “It was something that no one believed the brain would do,” Plenz said. “The surprise is that is exactly what happens.” Studies using magnetoencephalography (MEG) and Chialvo’s own work comparing computer simulations with fMRI imaging data of the brain’s resting state have since added to the evidence that the brain exhibits these key avalanche dynamics.
But perhaps it is not so surprising. There can be no phase transitions without a critical point, and without transitions, a complex system -- like Bak’s sand pile, or the brain -- cannot adapt. That is why avalanches only show up at criticality, a “sweet spot” where a system is perfectly balanced between order and disorder, according to Plenz. They typically occur when the brain is in its normal resting state. Avalanches are a mechanism by which a complex system avoids becoming trapped, or “phase-locked,” in one of two extreme cases. At one extreme, there is too much order, such as during an epileptic seizure; the interactions among elements are too strong and rigid, so the system cannot adapt to changing conditions. At the other, there is too much disorder; the neurons aren’t communicating as much, or aren’t as broadly interconnected throughout the brain, so information can’t spread as efficiently and, once again, the system is unable to adapt.
A complex system that hovers between “boring randomness and boring regularity” is surprisingly stable overall, said Olaf Sporns, a cognitive neuroscientist at Indiana University. “Boring is bad,” he said, at least for a critical system. In fact, “if you try to avoid ever sparking an avalanche, eventually when one does occur, it is likely to be really large,” said Raissa D’Souza, a complex systems scientist at the University of California, Davis, who simulated just such a generic system last year. “If you spark avalanches all the time, you’ve used up all the fuel, so to speak, and so there is no opportunity for large avalanches.”
D’Souza’s research applies these dynamics to better understand power outages across the electrical grid. The brain, too, needs sufficient order to function properly, but also enough flexibility to adapt to changing conditions; otherwise, the organism could not survive. This could be one reason that the brain exhibits hallmarks of self-organized criticality: It confers an evolutionary advantage. “A brain that is not critical is a brain that does exactly the same thing every minute, or, in the other extreme, is so chaotic that it does a completely random thing, no matter what the circumstances,” Chialvo said. “That is the brain of an idiot.”
When the brain veers away from criticality, information can no longer percolate through the system as efficiently. One study (not yet published) examined sleep deprivation; subjects remained awake for 36 hours and then took a reaction time test while an EEG monitored their brain activity. The more sleep-deprived the subject, the more the person’s brain activity veered away from the critical balance point and the worse the performance on the test.
Another study collected data from epileptic subjects during seizures. The EEG recordings revealed that mid-seizure, the telltale avalanches of criticality vanished. There was too much synchronization among neurons, and then, Plenz said, “information processing breaks down, people lose consciousness, and they don’t remember what happened until they recover.”
Chialvo envisions self-organized criticality providing a broader, more fundamental theory for neuroscientists, like those found in physics. He believes it could be used to model the mind in all its possible states: awake, asleep, under anesthesia, suffering a seizure, and under the influence of a psychedelic drug, among many others.
This is especially relevant as neuroscience moves deeper into the realm of big data. The latest advanced imaging techniques are capable of mapping synapses and monitoring brain activity at unprecedented resolutions, with a corresponding explosion in the size of data sets. Billions of dollars in research funding have launched the Human Connectome Project -- which aims to build a “network map” of neural pathways in the brain -- and the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, dedicated to developing new technological tools for recording signals from cells. There is also Europe’s Human Brain Project, working to simulate the complete human brain on a supercomputer, and China’s Brainnetome project to integrate data collected from every level of the brain’s hierarchy of complex networks.
But without an underlying theory, it will be difficult to glean all the potential insights hidden in the data. “It is fine to build maps and it is fine to catalog pieces and how they are related, so long as you don’t lose track of the fact that when the system you map actually functions, it is in an integrated system and it is dynamic,” Sporns said.
“The structure of the brain -- the precise map of who connects with whom -- is almost irrelevant by itself,” Chialvo said -- or rather, it is necessary but not sufficient to decipher how cognition and behavior are generated in the brain. “What is relevant is the dynamics,” Chialvo said. He then compared the brain with a street map of Los Angeles containing details of all the connections at every scale, from private driveways to public freeways. The map tells us only about the structural connections; it does not help predict how traffic moves along those connections or where (and when) a traffic jam is likely to form. The map is static; traffic is dynamic. So, too, is the activity of the brain. In recent work, Chialvo said, researchers have demonstrated that both traffic dynamics and brain dynamics exhibit criticality.
Sporns emphasizes that it remains to be seen just how robust this phenomenon might be in the brain, pointing out that more evidence is needed beyond the observation of power laws in brain dynamics. In particular, the theory still lacks a clear description for how criticality arises from neurobiological mechanisms -- the signaling of neurons in local and distributed circuits. But he admits that he is rooting for the theory to succeed. “It makes so much sense,” he said. “If you were to design a brain, you would probably want criticality in the mix. But ultimately, it is an empirical question.”
This article was reprinted on ScientificAmerican.com.
https://www.simonsfoundation.org/quanta/20140403-a-fundamental-theory-to-model-the-mind/
How the Brain's Neural Networks Recognize Places - G. Bookwalter
A Patient’s Bizarre Hallucination Points to How the Brain Identifies Places
Genevieve Bookwalter, 04/15/14
Dr. Pierre Mégevand was in the middle of a somewhat-routine epilepsy test when his patient, a 22-year-old man, said Mégevand and his medical team looked like they had transformed into Italians working at a pizzeria -- aprons and all. It wasn’t long, the patient said, before the doctors morphed back into their exam room and business-casual attire. But that fleeting hallucination -- accompanied by earlier visions of houses, a familiar train station and the street where the patient grew up -- helped verify that a certain spot, in a certain fold in the brain, is a crucial node in the brain’s process of recognizing places.
In the 1950s, the Canadian neurosurgeon Wilder Penfield made a set of remarkable observations in the course of operating on epilepsy patients. As he moved a stimulating electrode around parts of the temporal and frontal lobes of the brain to locate the source of a patient’s seizures, the patients sometimes reported vivid hallucinations. The work was an early contribution to scientists’ understanding of which parts of the brain do what.
Since then, researchers have developed new methods like fMRI for studying the human brain in action without picking up a scalpel. These tools have given them a much better understanding of how the brain is organized -- suggesting, for example, that one particular patch of the temporal cortex specializes in processing faces, while another nearby patch specializes in places. Very few studies, however, have tested these findings by stimulating those parts of the brain to see what people experience.
In the new study, Mégevand and colleagues report what happened when they stimulated a brain region thought to be important for the perception of places -- the so-called parahippocampal place area -- in one particular patient.
“At first we were really stunned. It was the first time in 70 patients that someone gave such a detailed, specific report,” said Mégevand, a post-doctoral research fellow at The Feinstein Institute for Medical Research in Manhasset, New York.
His team’s findings appear in the April 16 issue of The Journal of Neuroscience. The patient’s hallucinations came as Mégevand and his medical team were tickling electrodes they had placed in his brain in search of the origin of his epilepsy, which had been difficult to control. The patient had started suffering epileptic seizures after contracting West Nile virus when he was 10.
In this patient, Mégevand’s collaborator, Ashesh Mehta, director of epilepsy surgery at the Feinstein Institute, drilled tiny holes in the skull through which he inserted 2-inch-long electrodes and guided them to specific points on unique folds in the brain tissue. Even with that level of precision, results can be difficult to reproduce from patient to patient, Mehta says. That’s because everyone’s brain is different, and a variation of millimeters can make a certain hallucination-producing spot hard to pinpoint across patients.
“What was groundbreaking was everything worked the way it was supposed to work,” Mehta said.
The research follows that of Stanford University neurologist Josef Parvizi, who two years ago showed that another spot in the brain, stimulated through implanted electrodes, was crucial to a patient’s processing of faces.
That study includes a video of the patient’s reaction (below). “You just turned into somebody else. Your face metamorphosed,” the patient marveled. “That was a trip.” Parvizi published another study last year showing that stimulating yet another part of the brain gave patients “the will to persevere” through hardship.
This type of ongoing research “is a perfect way for us to explore the functional architecture of the human brain,” Parvizi said. He describes the Feinstein Institute team’s paper as “elegant,” but stresses that the findings do not prove that certain parts of the brain are entirely responsible for the processing of faces, places or anything else. Instead, he says, it only shows that these spots are critical links in networks of neurons responsible for a certain task.
Back in New York, Mehta says he expects to make additional discoveries as his team continues their epilepsy research and treatment. “As we’re stimulating more and more of the brain, we’re finding more unique little spots,” he said.
Still, with these findings come more questions.
For example, Mégevand says, was the pizzeria hallucination the result of an electrode placed partly between neurons that process faces, and those that process places? The patient owned a pizzeria with his family, he says. So was that scene part of an old memory, or something he’d never seen before? Those are questions Mégevand says he hopes to answer going forward.
“If he had been working in a sushi place, maybe we would have been wearing different garb,” Mégevand chuckled.
See the original web page for the related video.
http://www.wired.com/2014/04/pizzeria-hallucination-place-brain/
Brain Cells, Perception, and Thought - K. Jeffery
Chattering brain cells hold the key to the language of the mind
Kate Jeffery, 03/12/14
Let’s say Martians land on the Earth and wish to understand more about humans. Someone hands them a copy of the Complete Works of Shakespeare and says: “When you understand what’s in there, you will understand everything important about us.”
The Martians set to work – they allocate vast resources to recording every detail of this great tome until eventually they know where every “e”, every “a”, every “t” is on every page. They remain puzzled, and return to Earth. “We have completely characterised this book,” they say, “but we still aren’t sure we really understand you people at all.”
The problem is that characterising a language is not the same as understanding it, and this is the problem faced by brain researchers too. Neurons (brain cells) use language of a kind, a “code”, to communicate with each other, and we can tap into that code by listening to their “chatter” as they fire off tiny bursts of electricity (nerve impulses). We can record this chatter and document all its properties.
We can also determine the location of every single neuron and all of its connections and its chemical messengers. Having done this, though, we still will not understand how the brain works. To understand a code we need to anchor that code to the real world.
Place, memory and administration
We easily anchor Shakespeare’s code (we find out that “Juliet” refers to a specific young woman, “Romeo” to a specific young man) but can we do this for the brain? It seems we can. By recording the chatter of neurons while animals (and sometimes humans) perform the tasks of daily life, researchers have discovered that there are regions where the neural code relates to the real world in remarkably straightforward ways.
The best known of these is the code for “place”, discovered in a small and deeply buried part of the brain called the hippocampus. A given hippocampal neuron starts chattering furiously whenever its owner (rat, mouse, bat, human) goes to a particular place. Each neuron tends to be most excited at a particular place (near the door, halfway along a wall) and so a large collection of neurons can, between them, be ready to “speak up” for any place in the environment. It is as if these neurons encode space, to form something akin to a mental map.
To determine where you are, you simply consult your hippocampus and see which neuron is active. (In practice, of course, many neurons will be active in that place and not just one – otherwise every time a neuron died you would lose a small piece of your map.) These neurons in the hippocampus are called “place neurons”, and are remarkable entities that form the foundation not only for our mental map of the space around us, but also for memories of the events that occur in that space – a kind of biographical record. Their importance is evident in the terrible disorientation and amnesia that result from their degeneration in Alzheimer’s disease. When the brain loses its link to its place in the world, and to its past, its owner loses all sense of self.
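For readers who want to see what "consulting the hippocampus" might look like in practice, here is a toy sketch of place-cell decoding: simulated Gaussian place fields on a one-dimensional track and a simple rate-weighted read-out. It is an illustration only, not a model of real recordings, and all parameter values are assumptions.

```python
# Toy place-cell decoder: each cell has a preferred location on a 1-D track and
# fires with a Gaussian "place field" around it. Position is estimated from the
# population activity as a firing-rate-weighted average of preferred locations.
import numpy as np

rng = np.random.default_rng(2)
track_length = 100.0                                  # arbitrary units
n_cells = 50
preferred = np.linspace(0, track_length, n_cells)     # each cell's field centre
field_width = 8.0

def population_rates(position):
    """Noisy firing rates of all place cells at a given position."""
    rates = np.exp(-0.5 * ((position - preferred) / field_width) ** 2)
    rates += rng.normal(scale=0.05, size=n_cells)
    return np.clip(rates, 0.0, None)                  # rates cannot be negative

def decode(rates):
    """Estimate position as the rate-weighted mean of preferred locations."""
    return float(np.sum(rates * preferred) / np.sum(rates))

true_position = 37.0
rates = population_rates(true_position)
print(f"true position: {true_position:.1f}, decoded: {decode(rates):.1f}")
```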
There are many other neurons in the brain whose code seems decipherable. Neurons that activate when facing a particular direction, or near a wall, or when you see your grandmother … Gradually we are piecing together the network of nodes in the brain that connect the inner code to the world outside.
This is not all that neurons do, of course. Much of the brain is involved with internal “administration”. For example, a large part of the frontal lobe (the brain behind the forehead) is involved in making decisions – how to prioritise activities, what to do next, and so on. Many neurons, scattered throughout the brain, have housekeeping duties to do with maintaining the code, improving and refining it, preserving the relevant parts as memory and discarding the rest.
Some of the most numerous neurons seem simply to have the job of suppressing their neighbours, so that the neural conversation, as it were, does not degenerate into the equivalent of uncontrollable shouting (which, in technical terms, we recognise as epilepsy).
Still room for psychology
It is clear that to understand the brain we need to investigate all aspects of its functioning, not just those that relate to internal administration but also those that connect to the outside world.
We need to determine how brain activity relates to what the brain’s owner is thinking, feeling and doing with respect to the world outside that brain – that is, we need to anchor the code to the real world.
For this, we need scientists who study thoughts, feelings and behaviour – psychologists – as much as we need those who study anatomy and physiology. Study of the brain requires investigation at all levels – otherwise, we will have a complete characterisation, but no understanding, of this remarkable organ.
Decoding the brain, a special report produced in collaboration with the Dana Centre, looks at how technology and person-to-person analysis will shape the future of brain research.
Kate Jeffery receives, or has received, funding for her work from the BBSRC, MRC, Wellcome Trust and European Commission FP7. She is a non-shareholding director of the biomedical instrumentation company Axona Ltd, which makes data acquisition systems for in vivo electrophysiological recording.
http://theconversation.com/chattering-brain-cells-hold-the-key-to-the-language-of-the-mind-24085
Professor Christof Koch's view amounts at best to a new "definition" of "consciousness"; calling it a "theory" is something of a stretch.
A New Theory Explaining "Consciousness" - B. Keim
A Neuroscientist’s Radical Theory of How Networks Become Conscious
Brandon Keim, 11/14/13
It’s a question that’s perplexed philosophers for centuries and scientists for decades: Where does consciousness come from? We know it exists, at least in ourselves. But how it arises from chemistry and electricity in our brains is an unsolved mystery.
Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer. According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That’s just the way the universe works.
“The electric charge of an electron doesn’t arise out of more elemental properties. It simply has a charge,” says Koch. “Likewise, I argue that we live in a universe of space, time, mass, energy, and consciousness arising out of complex systems.”
What Koch proposes is a scientifically refined version of an ancient philosophical doctrine called panpsychism -- and, coming from someone else, it might sound more like spirituality than science. But Koch has devoted the last three decades to studying the neurological basis of consciousness. His work at the Allen Institute now puts him at the forefront of the BRAIN Initiative, the massive new effort to understand how brains work, which will begin next year.
Koch’s insights have been detailed in dozens of scientific articles and a series of books, including last year’s Consciousness: Confessions of a Romantic Reductionist. WIRED talked to Koch about his understanding of this age-old question.
WIRED: How did you come to believe in panpsychism?
Christof Koch: I grew up Roman Catholic, and also grew up with a dog. And what bothered me was the idea that, while humans had souls and could go to heaven, dogs were not supposed to have souls. Intuitively I felt that either humans and animals alike had souls, or none did. Then I encountered Buddhism, with its emphasis on the universal nature of the conscious mind. You find this idea in philosophy, too, espoused by Plato and Spinoza and Schopenhauer, that psyche -- consciousness -- is everywhere. I find that to be the most satisfying explanation for the universe, for three reasons: biological, metaphysical and computational.
WIRED: What do you mean?
Koch: My consciousness is an undeniable fact. One can only infer facts about the universe, such as physics, indirectly, but the one thing I’m utterly certain of is that I’m conscious. I might be confused about the state of my consciousness, but I’m not confused about having it. Then, looking at the biology, all animals have complex physiology, not just humans. And at the level of a grain of brain matter, there’s nothing exceptional about human brains.
Only experts can tell, under a microscope, whether a chunk of brain matter is mouse or monkey or human -- and animals have very complicated behaviors. Even honeybees recognize individual faces, communicate the quality and location of food sources via waggle dances, and navigate complex mazes with the aid of cues stored in their short-term memory. If you blow a scent into their hive, they return to where they’ve previously encountered the odor. That’s associative memory. What is the simplest explanation for it? That consciousness extends to all these creatures, that it’s an immanent property of highly organized pieces of matter, such as brains.
WIRED: That’s pretty fuzzy. How does consciousness arise? How can you quantify it?
Koch: There’s a theory, called Integrated Information Theory, developed by Giulio Tononi at the University of Wisconsin, that assigns to any one brain, or any complex system, a number -- denoted by the Greek letter Φ -- that tells you how integrated a system is, how much more the system is than the union of its parts. Φ gives you an information-theoretical measure of consciousness. Any system with integrated information different from zero has consciousness. Any integration feels like something.
It's not that any physical system has consciousness. A black hole, a heap of sand, a bunch of isolated neurons in a dish, they're not integrated. They have no consciousness. But complex systems do. And how much consciousness they have depends on how many connections they have and how they’re wired up.
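A deliberately simplified way to write down the quantity Koch describes -- schematic only, since the published formulations of integrated information theory (versions 2.0, 3.0 and so on) differ in important details -- is as the information a system generates above and beyond its least-integrated split:

```latex
% Schematic only: Phi as the effective information across the "weakest" cut.
\Phi(S) \;=\; \min_{P \,\in\, \mathrm{Bipartitions}(S)} \mathrm{EI}(S;\,P)
```

Here EI(S; P) loosely measures how much the causal structure of the whole system S exceeds that of the two parts under partition P taken independently: Φ = 0 for a system that decomposes into independent pieces, and Φ > 0 for an integrated one.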
WIRED: Ecosystems are interconnected. Can a forest be conscious?
Koch: In the case of the brain, it’s the whole system that’s conscious, not the individual nerve cells. For any one ecosystem, it’s a question of how richly the individual components, such as the trees in a forest, are integrated within themselves as compared to causal interactions between trees.
The philosopher John Searle, in his review of Consciousness, asked, “Why isn’t America conscious?” After all, there are 300 million Americans, interacting in very complicated ways. Why doesn’t consciousness extend to all of America? It’s because integrated information theory postulates that consciousness is a local maximum. You and me, for example: We’re interacting right now, but vastly less than the cells in my brain interact with each other. While you and I are conscious as individuals, there’s no conscious Übermind that unites us in a single entity. You and I are not collectively conscious. It’s the same thing with ecosystems. In each case, it’s a question of the degree and extent of causal interactions among all components making up the system.
WIRED: The internet is integrated. Could it be conscious?
Koch: It’s difficult to say right now. But consider this. The internet contains about 10 billion computers, with each computer itself having a couple of billion transistors in its CPU. So the internet has at least 10^19 transistors, compared to the roughly 1000 trillion (or quadrillion) synapses in the human brain. That’s about 10,000 times more transistors than synapses. But is the internet more complex than the human brain? It depends on the degree of integration of the internet.
For instance, our brains are connected all the time. On the internet, computers are packet-switching. They’re not connected permanently, but rapidly switch from one to another. But according to my version of panpsychism, it feels like something to be the internet -- and if the internet were down, it wouldn’t feel like anything anymore. And that is, in principle, not different from the way I feel when I’m in a deep, dreamless sleep.
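The order-of-magnitude arithmetic behind Koch's comparison, spelled out with the rounded figures quoted in the interview:

```latex
% Rough count of internet transistors vs. brain synapses, as quoted above:
10^{10}\ \text{computers} \times 2\times 10^{9}\ \tfrac{\text{transistors}}{\text{computer}}
  \;\approx\; 10^{19}\ \text{transistors},
\qquad
\frac{10^{19}\ \text{transistors}}{10^{15}\ \text{synapses}} \;\approx\; 10^{4} .
```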
WIRED: Internet aside, what does a human consciousness share with animal consciousness? Are certain features going to be the same?
Koch: It depends on the sensorium [the scope of our sensory perception --ed.] and the interconnections. For a mouse, this is easy to say. They have a cortex similar to ours, but not a well-developed prefrontal cortex. So it probably doesn’t have self-consciousness, or understand symbols like we do, but it sees and hears things similarly.
In every case, you have to look at the underlying neural mechanisms that give rise to the sensory apparatus, and to how they’re implemented. There’s no universal answer.
WIRED: Does a lack of self-consciousness mean an animal has no sense of itself?
Koch: Many mammals don’t pass the mirror self-recognition test, including dogs. But I suspect dogs have an olfactory form of self-recognition. You notice that dogs smell other dog’s poop a lot, but they don’t smell their own so much. So they probably have some sense of their own smell, a primitive form of self-consciousness. Now, I have no evidence to suggest that a dog sits there and reflects upon itself; I don’t think dogs have that level of complexity. But I think dogs can see, and smell, and hear sounds, and be happy and excited, just like children and some adults.
Self-consciousness is something that humans have excessively, and that other animals have much less of, though apes have it to some extent. We have a hugely developed prefrontal cortex. We can ponder.
WIRED: How can a creature be happy without self-consciousness?
Koch: When I’m climbing a mountain or a wall, my inner voice is totally silent. Instead, I’m hyperaware of the world around me. I don’t worry too much about a fight with my wife, or about a tax return. I can’t afford to get lost in my inner self. I’ll fall. Same thing if I’m traveling at high speed on a bike. It’s not like I have no sense of self in that situation, but it’s certainly reduced. And I can be very happy.
WIRED: I’ve read that you don’t kill insects if you can avoid it.
Koch: That’s true. They’re fellow travelers on the road, bookended by eternity on both sides.
WIRED: How do you square what you believe about animal consciousness with how they’re used in experiments?
Koch: There are two things to put in perspective. First, there are vastly more animals being eaten at McDonald’s every day. The number of animals used in research pales in comparison to the number used for flesh. And we need basic brain research to understand the brain’s mechanisms. My father died from Parkinson’s. One of my daughters died from Sudden Infant Death Syndrome. To prevent these brain diseases, we need to understand the brain -- and that, I think, can be the only true justification for animal research. That in the long run, it leads to a reduction in suffering for all of us. But in the short term, you have to do it in a way that minimizes their pain and discomfort, with an awareness that these animals are conscious creatures.
WIRED: Getting back to the theory, is your version of panpsychism truly scientific rather than metaphysical? How can it be tested?
Koch: In principle, in all sorts of ways. One implication is that you can build two systems, each with the same input and output -- but one, because of its internal structure, has integrated information. One system would be conscious, and the other not. It’s not the input-output behavior that makes a system conscious, but rather the internal wiring.
The theory also says you can have simple systems that are conscious, and complex systems that are not. The cerebellum should not give rise to consciousness because of the simplicity of its connections. Theoretically you could compute that, and see if that’s the case, though we can’t do that right now. There are millions of details we still don’t know. Human brain imaging is too crude. It doesn’t get you to the cellular level.
The more relevant question, to me as a scientist, is how can I disprove the theory today. That’s more difficult. Tononi’s group has built a device to perturb the brain and assess the extent to which severely brain-injured patients -- think of Terri Schiavo -- are truly unconscious, or whether they do feel pain and distress but are unable to communicate to their loved ones. And it may be possible that some other theories of consciousness would fit these facts.
WIRED: I still can’t shake the feeling that consciousness arising through integrated information is -- arbitrary, somehow. Like an assertion of faith.
Koch: If you think about any explanation of anything, how far back does it go? We’re confronted with this in physics. Take quantum mechanics, which is the theory that provides the best description we have of the universe at microscopic scales. Quantum mechanics allows us to design MRI and other useful machines and instruments. But why should quantum mechanics hold in our universe? It seems arbitrary! Can we imagine a universe without it, a universe where Planck’s constant has a different value? Ultimately, there’s a point beyond which there’s no further regress. We live in a universe where, for reasons we don’t understand, quantum physics simply is the reigning explanation.
With consciousness, it’s ultimately going to be like that. We live in a universe where organized bits of matter give rise to consciousness. And with that, we can ultimately derive all sorts of interesting things: the answer to when a fetus or a baby first becomes conscious, whether a brain-injured patient is conscious, pathologies of consciousness such as schizophrenia, or consciousness in animals. And most people will say, that’s a good explanation.
If I can predict the universe, and predict things I see around me, and manipulate them with my explanation, that’s what it means to explain. Same thing with consciousness. Why we should live in such a universe is a good question, but I don’t see how that can be answered now.
Brandon is a Wired Science reporter and freelance journalist. Based in Brooklyn, New York and sometimes Bangor, Maine, he's fascinated with science, culture, history and nature.
http://www.wired.com/wiredscience/2013/11/christof-koch-panpsychism-consciousness/all/
Facets of "Consciousness" – M. Hanlon
Consciousness is the greatest mystery in science
Don’t believe the hype: the Hard Problem is here to stay
Michael Hanlon, 10/09/13
Over there is a bird, in silhouette, standing on a chimney top on the house opposite. It is evening; the sun set about an hour ago and now the sky is an angry, pink-grey, the blatting rain of an hour ago threatening to return. The bird, a crow, is proud (I anthropomorphise). He looks cocksure. If it’s not a he then I’m a Dutchman. He scans this way and that. From his vantage point he must be able to see Land’s End, the nearby ramparts of Cape Cornwall, perhaps the Scillies in the fading light.
What is going on? What is it like to be that bird? Why look this way and that? Why be proud? How can a few ounces of protein, fat, bone and feathers be so sure of itself, as opposed to just being, which is what most matter does?
Old questions, but good ones. Rocks are not proud, stars are not nervous. Look further than my bird and you see a universe of rocks and gas, ice and vacuum. A multiverse, perhaps, of bewildering possibility. From the spatially average vantage point in our little cosmos you would barely, with human eyes alone, be able to see anything at all; perhaps only the grey smudge of a distant galaxy in a void of black ink. Most of what is is hardly there, let alone proud, strutting, cock-of-the-chimney-top on an unseasonably cold Cornish evening.
We live in an odd place and an odd time, amid things that know that they exist and that can reflect upon that, even in the dimmest, most birdlike way. And this needs more explaining than we are at present willing to give it. The question of how the brain produces the feeling of subjective experience, the so-called ‘hard problem’, is a conundrum so intractable that one scientist I know refuses even to discuss it at the dinner table. Another, the British psychologist Stuart Sutherland, declared in 1989 that ‘nothing worth reading has been written on it’. For long periods, it is as if science gives up on the subject in disgust. But the hard problem is back in the news, and a growing number of scientists believe that they have consciousness, if not licked, then at least in their sights.
A triple barrage of neuroscientific, computational and evolutionary artillery promises to reduce the hard problem to a pile of rubble. Today’s consciousness jockeys talk of p‑zombies and Global Workspace Theory, mirror neurones, ego tunnels, and attention schemata. They bow before that deus ex machina of brain science, the functional magnetic resonance imaging (fMRI) machine. Their work is frequently very impressive and it explains a lot. All the same, it is reasonable to doubt whether it can ever hope to land a blow on the hard problem.
For example, fMRI scanners have shown how people’s brains ‘light up’ when they read certain words or see certain pictures. Scientists in California and elsewhere have used clever algorithms to interpret these brain patterns and recover information about the original stimulus -- even to the point of being able to reconstruct pictures that the test subject was looking at. This ‘electronic telepathy’ has been hailed as the ultimate death of privacy (which it might be) and as a window on the conscious mind (which it is not).
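The "clever algorithms" in question are typically regression or classification models trained to map voxel activity patterns back onto stimulus features. Here is a minimal sketch of the idea, using simulated data and a plain scikit-learn logistic regression; nothing below resembles the actual reconstruction pipelines of the studies referenced in the article.

```python
# Toy "brain decoding" sketch: train a linear classifier to tell which of two
# stimulus categories produced a (simulated) pattern of voxel responses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 200, 500

# Simulated voxel patterns: each category adds its own weak spatial pattern.
labels = rng.integers(0, 2, size=n_trials)          # 0 = faces, 1 = places (say)
signal = rng.normal(size=(2, n_voxels)) * 0.5        # category-specific pattern
X = rng.normal(size=(n_trials, n_voxels)) + signal[labels]

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, labels, cv=5)   # 5-fold cross-validation
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```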
The problem is that, even if we know what someone is thinking about, or what they are likely to do, we still don’t know what it’s like to be that person. Hemodynamic changes in your prefrontal cortex might tell me that you are looking at a painting of sunflowers, but then, if I thwacked your shin with a hammer, your screams would tell me you were in pain. Neither lets me know what pain or sunflowers feel like for you, or how those feelings come about. In fact, they don’t even tell us whether you really have feelings at all. One can imagine a creature behaving exactly like a human -- walking, talking, running away from danger, mating and telling jokes -- with absolutely no internal mental life. Such a creature would be, in the philosophical jargon, a zombie. (Zombies, in their various incarnations, feature a great deal in consciousness arguments.)
Why might an animal need to have experiences (‘qualia’, as they are called by some) rather than merely responses? In this magazine, the American psychologist David Barash summarised some of the current theories. One possibility, he says, is that consciousness evolved to let us overcome the ‘tyranny of pain’. Primitive organisms might be slaves to their immediate wants, but humans have the capacity to reflect on the significance of their sensations, and therefore to make their decisions with a degree of circumspection. This is all very well, except that there is presumably no pain in the non-conscious world to start with, so it is hard to see how the need to avoid it could have propelled consciousness into existence.
Despite such obstacles, the idea is taking root that consciousness isn’t really mysterious at all; complicated, yes, and far from fully understood, but in the end just another biological process that, with a bit more prodding and poking, will soon go the way of DNA, evolution, the circulation of blood, and the biochemistry of photosynthesis.
Daniel Bor, a cognitive neuroscientist at Sussex University, talks of the ‘neuronal global workspace’, and asserts that consciousness emerges in the ‘prefrontal and parietal cortices’. His work is a refinement of the Global Workspace Theory developed by the Dutch neuroscientist Bernard Baars. In both schemes, the idea is to pair up conscious experiences with neural events, and to give an account of the position that consciousness occupies among the brain’s workings. According to Baars, what we call consciousness is a kind of ‘spotlight of attention’ on the workings of our memory, an inner domain in which we assemble the narrative of our lives. Along somewhat similar lines, we have seen Michael Graziano, of Princeton University, suggesting in this magazine that consciousness evolved as a way for the brain to keep track of its own state of attention, allowing it to make sense of itself and of other brains.
Meanwhile, the IT crowd is getting in on the act. The American futurologist Ray Kurzweil, the Messiah of the Nerds, thinks that in about 20 years or less computers will become conscious and take over the world (Kurzweil now works for Google). In Lausanne in Switzerland, the neuroscientist Henry Markram has been given several hundred million euros to reverse-engineer first rat then human brains down to the molecular level and duplicate the activities of the neurones in a computer -- the so‑called Blue Brain project. When I visited Markram’s labs a couple of years ago, he was confident that modelling something as sophisticated as a human mind was only a matter of better computers and more money.
Yes, but. Even if Markram’s Blue Brain manages to produce fleeting moments of ratty consciousness (which I accept it might), we still wouldn’t know how consciousness works. Saying we understand consciousness because this is what it does is like saying we understand how the Starship Enterprise flies between the stars because we know it has a warp drive. We are writing labels, not answers.
So, what can we say? Well, first off, as the philosopher John Searle put it in a TED talk in May this year, the conscious experience is non-negotiable: ‘if it consciously seems to you that you are conscious, you are conscious’. That seems hard to argue against. Such experience can, moreover, be extreme. Asked to name the most violent events in nature, you might point to cosmological cataclysms such as the supernova or gamma-ray burster. And yet, these spectacles are just heaps of stuff doing stuff-like things. They do not matter, any more than a boulder rolling down a hill matters -- until it hits someone.
Compare a supernova to, say, the mind of a woman about to give birth, or a father who has just lost his child, or a captured spy undergoing torture. These are subjective experiences that are off the scale in terms of importance. ‘Yes, yes,’ you might say, ‘but that sort of thing only matters from the human point of view.’ To which I reply: in a universe without witness, what other point of view can there be? The world was simply immaterial until someone came along to perceive it. And morality is both literally and figuratively senseless without consciousness: until we have a perceiving mind, there is no suffering to relieve, no happiness to maximise.
While we are looking at things from this elevated philosophical perspective, it is worth noting that there seems to be rather a limited range of basic options for the nature of consciousness. You might, for example, believe that it is some sort of magical field, a soul, that comes as an addendum to the body, like a satnav machine in a car. This is the traditional ‘ghost in the machine’ of Cartesian dualism. It is, I would guess, how most people have thought of consciousness for centuries, and how many still do. In scientific circles, however, dualism has become immensely unpopular. The problem is that no one has ever seen this field. How is it generated? More importantly, how does it interact with the ‘thinking meat’ of the brain? We see no energy transfer. We can detect no soul.
If you don’t believe in magical fields, you are not a traditional dualist, and the chances are that you are a materialist of some description. (To be fair, you might hover on the border. David Chalmers, who coined the term ‘hard problem’ in 1995, thinks that consciousness might be an unexplained property of all organised, information-juggling matter -- something he calls ‘panprotopsychism’.)
Committed materialists believe that consciousness arises as the result of purely physical processes -- neurones and synapses and so forth. But there are further divisions within this camp. Some people accept materialism but think there is something about biological nerve cells that gives them the edge over, say, silicon chips. Others suspect that the sheer weirdness of the quantum realm must have something to do with the hard problem. Apparent ‘observer effects’ and Albert Einstein’s ‘spooky’ action at a distance hint that a fundamental yet hidden reality underpins our world… Who knows? Maybe that last one is where consciousness lives. Roger Penrose, a physicist at Oxford University, famously thinks that consciousness arises as the result of mysterious quantum effects in brain tissue. He believes, in other words, not in magic fields but in magic meat. So far, the weight of evidence appears to be against him.
The philosopher John Searle does not believe in magic meat but he does think meat is important. He is a biological naturalist who thinks that consciousness emerges from complex neuronal processes that cannot (at present) be modelled in a machine. Then there are those like the Tufts philosopher Daniel Dennett, who says that the mind-body problem is essentially a semantic mistake. Finally, there are the arch-eliminativists who appear to deny the existence of a mental world altogether. Their views are useful but insane.
Time to take stock. Lots of clever people believe these things. Like the religions, they cannot all be right (though they might all be wrong). Reading these giants of consciousness criticise each other is an instructive experience in itself. When Chalmers aired his ideas in his book The Conscious Mind (1996), this philosopher, a professor at both New York University and the Australian National University, was described as ‘absurd’ by John Searle in The New York Review of Books. Physicists and chemists do not tend to talk like this.
Even so, let’s say we can make a machine that thinks and feels and enjoys things; imagine it eating a pear or something. If we do not believe in magic fields and magic meat, we must take a functionalist approach. This, on certain plausible assumptions, means our thinking machine can be made of pretty much anything -- silicon chips, sure; but also cogwheels and cams, teams of semaphorists, whatever you like. In recent years, engineers have succeeded in building working computers out of Lego, scrap metal, even a model railway set. If the brain is a classical computer – a universal Turing machine, to use the jargon – we could create consciousness just by running the right programme on the 19th-century Analytical Engine of Charles Babbage. And even if the brain isn’t a classical computer, we still have options. However complicated it might be, a brain is presumably just a physical object, and according to the Church-Turing-Deutsch principle of 1985, a quantum computer should be able to simulate any physical process whatsoever, to any level of detail. So all we need to simulate a brain is a quantum computer.
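The functionalist point -- that the substrate does not matter so long as the same computation is carried out -- can be made concrete with any toy machine. Below is a minimal sketch of a Turing-style machine whose "hardware" is nothing but a Python dictionary; the same rule table could, in principle, be realized in cogwheels or Lego. The machine shown is a trivial unary incrementer, chosen purely for brevity, and the code is an illustration rather than anything from the article.

```python
# A tiny Turing-style machine: the transition table is pure structure, so the
# same computation could be realized in silicon, cogwheels, or semaphore flags.
# This particular table appends one '1' to a unary number (e.g. '111' -> '1111').
def run_turing_machine(tape, rules, state="scan", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head >= len(tape):
            tape.append(blank)          # grow the tape to the right as needed
        tape[head] = write
        head += 1 if move == "R" else -1
        if head < 0:
            tape.insert(0, blank)       # grow the tape to the left as needed
            head = 0
    raise RuntimeError("machine did not halt")

# Rules: move right over the 1s, write a 1 on the first blank, then halt.
rules = {
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("1", "R", "halt"),
}
print(run_turing_machine("111", rules))  # prints '1111'
```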
And then what? Then the fun starts. For if a trillion cogs and cams can produce (say) the sensation of eating a pear or of being tickled, then do the cogs all need to be whirling at some particular speed? Do they have to be in the same place at the same time? Could you substitute a given cog for a ‘message’ generated by its virtual-neighbour-cog telling it how many clicks to turn? Is it the cogs, in toto, that are conscious or just their actions? How can any ‘action’ be conscious? The German philosopher Gottfried Leibniz asked most of these questions 300 years ago, and we still haven’t answered a single one of them.
The consensus seems to be that we must run away from too much magic. Daniel Dennett dismisses the idea of ‘qualia’ (perhaps an unfortunately magical-sounding word) altogether. To him, consciousness is simply our word for what it feels like to be a brain. He told me: ‘We don’t need something weird or an unexplained property of biological [matter] for consciousness any more than we need to posit “fictoplasm” to be the mysterious substance in which Sherlock Holmes and Ebenezer Scrooge find their fictive reality. They are fictions, and hence do not exist … a neural representation is not a simulacrum of something, made of “mental clay”; it is a representation made of … well, patterns of spike trains in neuronal axons and the like.’
David Chalmers says that it is quite possible for a mind to be disconnected from space and time, but he insists that you do at least need the cogwheels. He says: ‘I’m sympathetic with the idea that consciousness arises from cogwheel structure. In principle it could be delocalised and really slow. But I think you need genuine causal connections among the parts, with genuine dynamic structure.’
As to where the qualia ‘happen’, the answer could be ‘nowhere and nowhen’. If we do not believe in magic forcefields, but do believe that a conscious event, a quale, can do stuff, then we have a problem (in addition to the problem of explaining the quale in the first place). As David Chalmers says, ‘the problem of how qualia causally affect the physical world remains pressing… with no easy answer in sight’. It is very hard to see how a mind generated by whirring cogs can affect the whirring of those cogs in turn.
Nearly a quarter of a century ago, Daniel Dennett wrote that: ‘Human consciousness is just about the last surviving mystery.’ A few years later, Chalmers added: ‘[It] may be the largest outstanding obstacle in our quest for a scientific understanding of the universe.’ They were right then and, despite the tremendous scientific advances since, they are still right today. I do not think that the evolutionary ‘explanations’ for consciousness that are currently doing the rounds are going to get us anywhere. These explanations do not address the hard problem itself, but merely the ‘easy’ problems that orbit it like a swarm of planets around a star. The hard problem’s fascination is that it has, to date, completely and utterly defeated science. Nothing else is like it. We know how genes work, we have (probably) found the Higgs Boson; but we understand the weather on Jupiter better than we understand what is going on in our own heads. This is remarkable.
Consciousness is in fact so weird, and so poorly understood, that we may permit ourselves the sort of wild speculation that would be risible in other fields. We can ask, for instance, if our increasingly puzzling failure to detect intelligent alien life might have any bearing on the matter. We can speculate that it is consciousness that gives rise to the physical world rather than the other way round. The 20th-century British physicist James Hopwood Jeans speculated that the universe might be ‘more like a great thought than like a great machine.’ Idealist notions keep creeping into modern physics, linking the idea that the mind of the observer is somehow fundamental in quantum measurements and the strange, seemingly subjective nature of time itself, as pondered by the British physicist Julian Barbour. Once you have accepted that feelings and experiences can be quite independent of time and space (those causally connected but delocalised cogwheels), you might take a look at your assumptions about what, where and when you are with a little reeling disquiet.
I don’t know. No one does. And I think it is possible that, compared with the hard problem, the rest of science is a sideshow. Until we get a grip on our own minds, our grip on anything else could be suspect. It’s hard, but we shouldn’t stop trying. The head of that bird on the rooftop contains more mystery than will be uncovered by our biggest telescopes or atom smashers. The hard problem is still the toughest kid on the block.
Correction, 10 Oct 2013: The original version of this article stated that Charles Babbage's Difference Engine would have been Turing-complete. In fact it was Babbage's Analytical Engine that had this distinction. We regret the error.
Michael Hanlon is a science journalist and a Templeton Journalism Fellow. His latest book is Eternity: Our Next Billion Years (2009). He lives in London.
See the original page for the image:
Detail from the visualization of the model juvenile rat cortical column, as created by the Blue Brain Project in Lausanne, Switzerland. Photo courtesy EPFL/Blue Brain Project
http://www.aeonmagazine.com/being-human/will-we-ever-get-our-heads-round-consciousness/
|
A new theory explaining what consciousness is – M. Graziano
|
How the light gets out
Consciousness is the ‘hard problem’, the mystery that confounds science and philosophy. Has a new theory cracked it?
Michael Graziano, 08/21/13
Scientific talks can get a little dry, so I try to mix it up. I take out my giant hairy orangutan puppet, do some ventriloquism and quickly become entangled in an argument. I’ll be explaining my theory about how the brain -- a biological machine -- generates consciousness. Kevin, the orangutan, starts heckling me. ‘Yeah, well, I don’t have a brain. But I’m still conscious. What does that do to your theory?’
Kevin is the perfect introduction. Intellectually, nobody is fooled: we all know that there’s nothing inside. But everyone in the audience experiences an illusion of sentience emanating from his hairy head. The effect is automatic: being social animals, we project awareness onto the puppet. Indeed, part of the fun of ventriloquism is experiencing the illusion while knowing, on an intellectual level, that it isn’t real.
Many thinkers have approached consciousness from a first-person vantage point, the kind of philosophical perspective according to which other people’s minds seem essentially unknowable. And yet, as Kevin shows, we spend a lot of mental energy attributing consciousness to other things. We can’t help it, and the fact that we can't help it ought to tell us something about what consciousness is and what it might be used for. If we evolved to recognise it in others – and to mistakenly attribute it to puppets, characters in stories, and cartoons on a screen -- then, despite appearances, it really can’t be sealed up within the privacy of our own heads.
Lately, the problem of consciousness has begun to catch on in neuroscience. How does a brain generate consciousness? In the computer age, it is not hard to imagine how a computing machine might construct, store and spit out the information that ‘I am alive, I am a person, I have memories, the wind is cold, the grass is green,’ and so on. But how does a brain become aware of those propositions? The philosopher David Chalmers has claimed that the first question, how a brain computes information about itself and the surrounding world, is the ‘easy’ problem of consciousness. The second question, how a brain becomes aware of all that computed stuff, is the ‘hard’ problem.
I believe that the easy and the hard problems have gotten switched around. The sheer scale and complexity of the brain’s vast computations makes the easy problem monumentally hard to figure out. How the brain attributes the property of awareness to itself is, by contrast, much easier. If nothing else, it would appear to be a more limited set of computations. In my laboratory at Princeton University, we are working on a specific theory of awareness and its basis in the brain. Our theory explains both the apparent awareness that we can attribute to Kevin and the direct, first-person perspective that we have on our own experience. And the easiest way to introduce it is to travel about half a billion years back in time.
In a period of rapid evolutionary expansion called the Cambrian Explosion, animal nervous systems acquired the ability to boost the most urgent incoming signals. Too much information comes in from the outside world to process it all equally, and it is useful to select the most salient data for deeper processing. Even insects and crustaceans have a basic version of this ability to focus on certain signals. Over time, though, it came under a more sophisticated kind of control -- what is now called attention. Attention is a data-handling method, the brain’s way of rationing its processing resources. It has been found and studied in a lot of different animals. Mammals and birds both have it, and they diverged from a common ancestor about 350 million years ago, so attention is probably at least that old.
Attention requires control. In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing. Think of a military general with his model armies arrayed on a map: they provide a simple but useful representation -- not always perfectly accurate, but close enough to help formulate strategy. Likewise, to control its own state of attention, the brain needs a constantly updated simulation or model of that state. Like the general’s toy armies, the model will be schematic and short on detail. The brain will attribute a property to itself and that property will be a simplified proxy for attention. It won’t be precisely accurate, but it will convey useful information. What exactly is that property? When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself -- the property of being conscious, or aware, of something. Why? Because that attribution helps to keep track of the ever-changing focus of attention.
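To make the control-theory point concrete, here is a minimal toy sketch in Python (my own illustration, not code from Graziano's laboratory; every name and number is invented). A crude "schema" tracks a drifting attention variable, and a simple proportional controller steers attention toward a goal using only that schema, never the real signal, despite a competing pull from a salient distractor.

    import random

    class ToyAttention:
        """The real process: a focus value pulled toward whatever is salient."""
        def __init__(self):
            self.focus = 0.0                      # where attention currently points

        def step(self, salient_at, control_push):
            noise = random.gauss(0.0, 0.05)
            # attention drifts toward the salient location, nudged by the controller
            self.focus += 0.3 * (salient_at - self.focus) + control_push + noise
            return self.focus

    class AttentionSchema:
        """The simplified internal model: a smoothed, slightly stale estimate."""
        def __init__(self):
            self.estimated_focus = 0.0

        def update(self, noisy_reading):
            # schematic by design: it lags and smooths rather than copying reality
            self.estimated_focus += 0.5 * (noisy_reading - self.estimated_focus)

    def control_loop(goal=1.0, steps=50):
        attention, schema = ToyAttention(), AttentionSchema()
        for _ in range(steps):
            # the controller consults only its own schema, never the true focus
            push = 0.8 * (goal - schema.estimated_focus)
            reading = attention.step(salient_at=-1.0, control_push=push)
            schema.update(reading + random.gauss(0.0, 0.1))  # imperfect self-measurement
        return attention.focus, schema.estimated_focus

    if __name__ == '__main__':
        real, modelled = control_loop()
        # plain proportional control only gets part of the way to the goal,
        # but it could not do even that without some model of its own state
        print('real focus: %.2f, schema estimate: %.2f' % (real, modelled))

The schema in this toy is deliberately coarse, like the general's plastic army: it lags and smooths the real signal, yet it is all the controller needs.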
I call this the ‘attention schema theory’. It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention.
Early in evolution, perhaps hundreds of millions of years ago, brains evolved a specific set of computations to construct that model. At that point, ‘I am aware of X’ entered their repertoire of possible computations.
And then what? Just as fins evolved into limbs and then into wings, the capacity for awareness probably changed and took on new functions over time. For example, the attention schema might have allowed the brain to integrate information on a massive new scale. If you are attending to an apple, a decent model of that state would require representations of yourself, the apple, and the complicated process of attention that links the two. An internal model of attention therefore collates data from many separate domains. In so doing, it unlocks enormous potential for integrating information, for seeing larger patterns, and even for understanding the relationship between oneself and the outside world.
Such a model also helps to simulate the minds of other people. We humans are continually ascribing complex mental states -- emotions, ideas, beliefs, action plans -- to one another. But it is hard to credit John with a fear of something, or a belief in something, or an intention to do something, unless we can first ascribe an awareness of something to him. Awareness, especially an ability to attribute awareness to others, seems fundamental to any sort of social capability.
It is not clear when awareness became part of the animal kingdom’s social toolkit. Perhaps birds, with their well-developed social intelligence, have some ability to attribute awareness to each other. Perhaps the social use of awareness expanded much later, with the evolution of primates about 65 million years ago, or even later, with our own genus Homo, a little over two million years ago. Whenever it arose, it clearly plays a major role in the social capability of modern humans. We paint the world with perceived consciousness. Family, friends, pets, spirits, gods and ventriloquist’s puppets -- all appear before us suffused with sentience.
But what about the inside view, that mysterious light of awareness accessible only to our innermost selves? A friend of mine, a psychiatrist, once told me about one of his patients. This patient was delusional: he thought that he had a squirrel in his head. Odd delusions of this nature do occur, and this patient was adamant about the squirrel. When told that a cranial rodent was illogical and incompatible with physics, he agreed, but then went on to note that logic and physics cannot account for everything in the universe. When asked whether he could feel the squirrel -- that is to say, whether he suffered from a sensory hallucination -- he denied any particular feeling about it. He simply knew that he had a squirrel in his head.
We can ask two types of questions. The first is rather foolish but I will spell it out here. How does that man’s brain produce an actual squirrel? How can neurons secrete the claws and the tail? Why doesn’t the squirrel show up on an MRI scan? Does the squirrel belong to a different, non-physical world that can’t be measured with scientific equipment? This line of thought is, of course, nonsensical. It has no answer because it is incoherent.
The second type of question goes something like this. How does that man’s brain process information so as to attribute a squirrel to his head? What brain regions are involved in the computations? What history led to that strange informational model? Is it entirely pathological or does it in fact do something useful?
So far, most brain-based theories of consciousness have focused on the first type of question. How do neurons produce a magic internal experience? How does the magic emerge from the neurons? The theory that I am proposing dispenses with all of that. It concerns itself instead with the second type of question: how, and for what survival advantage, does a brain attribute subjective experience to itself? This question is scientifically approachable, and the attention schema theory supplies the outlines of an answer.
One way to think about the relationship between brain and consciousness is to break it down into two mysteries. I call them Arrow A and Arrow B.
Arrow A is the mysterious route from neurons to consciousness. If I am looking at a blue sky, my brain doesn’t merely register blue as if I were a wavelength detector from Radio Shack. I am aware of the blue. Did my neurons create that feeling?
Arrow B is the mysterious route from consciousness back to the neurons. Arrow B attracts much less scholarly attention than Arrow A, but it is just as important.
The most basic, measurable, quantifiable truth about consciousness is simply this: we humans can say that we have it. We can conclude that we have it, couch that conclusion into language and then report it to someone else. Speech is controlled by muscles, which are controlled by neurons. Whatever consciousness is, it must have a specific, physical effect on neurons, or else we wouldn’t be able to communicate anything about it. Consciousness cannot be what is sometimes called an epiphenomenon -- a floating side-product with no physical consequences -- or else I wouldn’t have been able to write this article about it.
Any workable theory of consciousness must be able to account for both Arrow A and Arrow B. Most accounts, however, fail miserably at both. Suppose that consciousness is a non-physical feeling, an aura, an inner essence that arises somehow from a brain or from a special circuit in the brain. The ‘emergent consciousness’ theory is the most common assumption in the literature. But how does a brain produce the emergent, non-physical essence? And even more puzzling, once you have that essence, how can it physically alter the behaviour of neurons, such that you can say that you have it? ‘Emergent consciousness’ theories generally stake everything on Arrow A and ignore Arrow B completely.
The attention schema theory does not suffer from these difficulties. It can handle both Arrow A and Arrow B. Consciousness isn’t a non-physical feeling that emerges. Instead, dedicated systems in the brain compute information. Cognitive machinery can access that information, formulate it as speech, and then report it. When a brain reports that it is conscious, it is reporting specific information computed within it. It can, after all, only report the information available to it. In short, Arrow A and Arrow B remain squarely in the domain of signal-processing. There is no need for anything to be transmuted into ghost material, thought about, and then transmuted back to the world of cause and effect.
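As a toy restatement of that last point (my own sketch, not anything from the article; the function names and the colour rule are invented): in the few lines of Python below, the "verbal report" is generated purely by reading an internal model, so the system can only ever report what that model contains -- including the model's claim that it is aware.

    def build_internal_model(wavelength_nm):
        """Arrow A territory: compute a description of the stimulus, plus a
        self-attribution of awareness (the schema's claim about itself)."""
        colour = 'blue' if 450 <= wavelength_nm <= 495 else 'not blue'
        return {'perceived_colour': colour, 'awareness_attributed': True}

    def verbal_report(model):
        """Arrow B territory: speech is assembled only from information that is
        actually present in the model -- nothing extra is consulted."""
        if model['awareness_attributed']:
            return 'I am aware of something %s.' % model['perceived_colour']
        return 'I am not aware of anything.'

    print(verbal_report(build_internal_model(470.0)))   # -> I am aware of something blue.

Nothing in the chain from computation to report steps outside ordinary signal-processing, which is the theory's answer to both arrows.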
Some people might feel disturbed by the attention schema theory. It says that awareness is not something magical that emerges from the functioning of the brain. When you look at the colour blue, for example, your brain doesn’t generate a subjective experience of blue. Instead, it acts as a computational device. It computes a description, then attributes an experience of blue to itself. The process is all descriptions and conclusions and computations. Subjective experience, in the theory, is something like a myth that the brain tells itself. The brain insists that it has subjective experience because, when it accesses its inner data, it finds that information.
I admit that the theory does not feel satisfying; but a theory does not need to be satisfying to be true. And indeed, the theory might be able to explain a few other common myths that brains tell themselves.
What about out-of-body experiences? The belief that awareness can emanate from a person’s eyes and touch someone else? That you can push on objects with your mind? That the soul lives on after the death of the body? One of the more interesting aspects of the attention schema theory is that it does not need to turn its back on such persistent beliefs. It might even explain their origin.
The heart of the theory, remember, is that awareness is a model of attention, like the general’s model of his army laid out on a map. The real army isn’t made of plastic, of course. It isn’t quite so small, and has rather more moving parts. In these respects, the model is totally unrealistic. And yet, without such simplifications, it would be impractical to use.
If awareness is a model of attention, how is it simplified? How is it inaccurate? Well, one easy way to keep track of attention is to give it a spatial structure -- to treat it like a substance that flows from a source to a target. In reality, attention is a data-handling method used by neurons. It isn’t a substance and it doesn’t flow. But it is a neat accounting trick to model attention in that way; it helps to keep track of who is attending to what. And so the intuition of ghost material -- of ectoplasm, mind stuff that is generated inside us, that flows out of the eyes and makes contact with things in the world -- makes some sense. Science commonly regards ghost-ish intuitions as the result of ignorance, superstition, or faulty intelligence. In the attention schema theory, however, they are not simply ignorant mistakes. Those intuitions are ubiquitous among cultures because we humans come equipped with a handy, simplified model of attention. That model informs our intuitions.
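One possible way to picture that accounting trick (again a toy of my own devising, not the article's): represent attention as records of something flowing from a source to a target. The representation is physically wrong -- nothing actually flows -- but it answers the socially useful question of who is attending to what.

    from dataclasses import dataclass

    @dataclass
    class AttentionFlow:
        source: str       # who is attending
        target: str       # what they attend to
        strength: float   # "how much" of the imaginary substance flows

    def who_attends_to(flows, target, threshold=0.5):
        """The question the simplified model is built to answer cheaply."""
        return [f.source for f in flows if f.target == target and f.strength >= threshold]

    scene = [
        AttentionFlow('Anna', 'apple', 0.9),
        AttentionFlow('Ben', 'apple', 0.2),
        AttentionFlow('Ben', 'phone', 0.8),
    ]
    print(who_attends_to(scene, 'apple'))   # -> ['Anna']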
What are out-of-body experiences then? One view might be that no such things exist, that charlatans invented them to fool us. Yet such experiences can be induced in the lab, as a number of scientists have now shown. A person can genuinely be made to feel that her centre of awareness is disconnected from her body. The very existence of the out-of-body experience suggests that awareness is a computation and that the computation can be disrupted. Systems in the brain not only compute the information that I am aware, but also compute a spatial framework for it, a location, and a perspective. Screw up the computations, and I screw up my understanding of my own awareness.
And here is yet another example: why do so many people believe that we see by means of rays that come out of the eyes? The optical principle of vision is well understood and is taught in elementary school. Nevertheless, developmental psychologists have known for decades that children have a predisposition to the opposite idea, the so-called ‘extramission theory’ of vision. And not only children: a study by the psychologist Gerald Winer and colleagues at Ohio State University in 2002 found that about half of American college students also think that we see because of rays that come out of the eyes. Our culture, too, is riddled with the extramission theory. Superman has X-ray vision that emanates from his eyes toward objects. The Terminator has red glowing eyes. Many people believe that they can feel a subtle heat when someone is staring at them. Why should a physically inaccurate description of vision be so persistent? Perhaps because the brain constructs a simplified, handy model of attention in which there is such a thing as awareness, an invisible, intangible stuff that flows from inside a person out to some target object. We come pre-equipped with that intuition, not because it is physically accurate but because it is a useful model.
Many of our superstitions -- our beliefs in souls and spirits and mental magic -- might emerge naturally from the simplifications and shortcuts the brain takes when representing itself and its world. This is not to say that humans are necessarily trapped in a set of false beliefs. We are not forced by the built-in wiring of the brain to be superstitious, because there remains a distinction between intuition and intellectual belief. In the case of ventriloquism, you might have an unavoidable gut feeling that consciousness is emanating from the puppet’s head, but you can still understand that the puppet is in fact inanimate. We have the ability to rise above our immediate intuitions and predispositions.
Let’s turn now to a final -- alleged -- myth. One of the long-standing questions about consciousness is whether it really does anything. Is it merely an epiphenomenon, floating uselessly in our heads like the heat that rises up from the circuitry of a computer? Most of us intuitively understand it to be an active thing: it helps us to decide what to do and when. And yet, at least some of the scientific work on consciousness has proposed the opposite, counter-intuitive view: that it doesn’t really do anything at all; that it is the brain’s after-the-fact story to explain itself. We act reflexively and then make up a rationalisation.
There is some evidence for this post-hoc notion. In countless psychology experiments, people are secretly manipulated into making certain choices -- picking green over red, pointing left instead of right. When asked why they made the choice, they confabulate. They make up reasons that have nothing to do with the truth, known only to the experimenter, and they express great confidence in their bogus explanations. It seems, therefore, that at least some of our conscious choices are rationalisations after the fact. But if consciousness is a story we tell ourselves, why do we need it? Why are we aware of anything at all? Why not just be skilful automata, without the overlay of subjectivity? Some philosophers think we are automata and just don’t know it.
This idea that consciousness has no leverage in the world, that it’s just a rationalisation to make us feel better about ourselves, is terribly bleak. It runs against most people’s intuitions. Some people might confuse the attention schema theory with that nihilistic view. But the theory is almost exactly the opposite. It is not a theory about the uselessness or non-being of consciousness, but about its central importance. Why did an awareness of stuff evolve in the first place? Because it had a practical benefit. The purpose of the general’s plastic model army is to help direct the real troops. Likewise, according to the theory, the function of awareness is to model one’s own attentional focus and control one’s behaviour. In this respect, the attention schema theory is in agreement with the common intuition: consciousness plays an active role in guiding our behaviour. It is not merely an aura that floats uselessly in our heads. It is a part of the executive control system.
In fact, the theory suggests that even more crucial and complex functions of consciousness emerged through evolution, and that they are especially well-developed in humans. To attribute awareness to oneself, to have that computational ability, is the first step towards attributing it to others. That, in turn, leads to a remarkable evolutionary transition to social intelligence. We live embedded in a matrix of perceived consciousness. Most people experience a world crowded with other minds, constantly thinking and feeling and choosing. We intuit what might be going on inside those other minds. This allows us to work together: it gives us our culture and meaning, and makes us successful as a species. We are not, despite certain appearances, trapped alone inside our own heads.
And so, whether or not the attention schema theory turns out to be the correct scientific formulation, a successful account of consciousness will have to tell us more than how brains become aware. It will also have to show us how awareness changes us, shapes our behaviour, interconnects us, and makes us human.
Published on 21 August 2013
http://www.aeonmagazine.com/being-human/how-consciousness-works/
|
Anxiety, brain chemistry, and the living environment -- J. Steenhuysen
|
Brain chemical may play key role in anxiety
Julie Steenhuysen (Reuters)
CHICAGO (Reuters) – A chemical important for brain development may play a role in explaining why some people are genetically predisposed to anxiety and could lead to new treatments, U.S. researchers said on Tuesday.
They said rats bred to be highly anxious had very low levels of a brain chemical called fibroblast growth factor 2, or FGF2, compared with rats that were more laid back. But when they improved the anxious rats' living conditions -- giving them new toys to explore, an obstacle course and a bigger cage to live in -- levels of this brain chemical increased and they became less anxious.
"The levels of this molecule increased in response to the experiences that the rats were exposed to. It also decreased their anxiety," Javier Perez of the University of Michigan, whose study appears in the Journal of Neuroscience, said in a telephone interview. "It made them behave the same way as the rats that were laid back and had low anxiety to begin with," he said. Injecting the rats with the chemical also made them less anxious, he said.
In a prior study of people who were severely depressed before they died, the team found the gene that makes FGF2 was producing very low levels of the growth factor, which is known primarily for organizing the brain during development and repairing it after injury.
Perez thinks the brain chemical may be a marker for genetic vulnerability to anxiety and depression. But it can also respond to changes in the environment in a positive way, possibly by preserving new brain cells. While both the calm and anxious rats produced the same number of new brain cells, these cells were less likely to survive in the high-anxiety rats, the team found. Giving the rats better living conditions or injecting them with FGF2 helped improve cell survival.
"This discovery may pave the way for new, more specific treatments for anxiety that will not be based on sedation, like currently prescribed drugs, but will instead fight the real cause of the disease," Dr. Pier Vincenzo Piazza of the University of Bordeaux in France, who had seen the study, said in a statement.
Perez said the study was funded in part by the Pritzker Neuropsychiatric Disorders Research Fund, which is seeking to patent the molecule.
(Editing by Maggie Fox and Cynthia Osterman)
Reposted from: http://news.yahoo.com/s/nm/20090514/sc_nm/us_anxiety_brain;_ylt=AiShwbS7F6bFdeyWVNO6FNcbr7sF
|
The brain region connecting music and memory -- J. Hsu
|
Music-Memory Connection Found in Brain
Jeremy Hsu, Staff Writer, LiveScience.com
People have long known that music can trigger powerful recollections, but now a brain-scan study has revealed where this happens in our noggins. The part of the brain known as the medial prefrontal cortex sits just behind the forehead, acting like recent Oscar host Hugh Jackman singing and dancing down Hollywood's memory lane.
"What seems to happen is that a piece of familiar music serves as a soundtrack for a mental movie that starts playing in our head," said Petr Janata, a cognitive neuroscientist at the University of California, Davis. "It calls back memories of a particular person or place, and you might all of a sudden see that person's face in your mind's eye."
Janata began suspecting the medial prefrontal cortex as a music-processing and music-memories region when he saw that part of the brain actively tracking chord and key changes in music. He had also seen studies which showed the same region lighting up in response to self-reflection and recall of autobiographical details, and so he decided to examine the possible music-memory link by recruiting 13 UC-Davis students.
Test subjects went under an fMRI brain scanner and listened to 30 different songs randomly chosen from the Billboard "Top 100" music charts from years when the subjects would have been 8 to 18 years old. They signaled researchers when a certain 30-second music sample triggered any autobiographical memory, as opposed to just being a familiar or unfamiliar song.
"This is the first study using music to evoke autobiographical memory," Janata told LiveScience. His full study is detailed online this week in the journal Cerebral Cortex.
The students also filled out the details of their memories in a survey immediately following the MRI session, explaining the content and clarity of their recollections. Most recognized about 17 out of 30 music samples on average, with about 13 having moderate or strong links with a memory from their lives. Janata saw that tunes linked to the strongest self-reported memories triggered the most vivid and emotion-filled responses -- findings corroborated by the brain scans, which showed spikes in mental activity within the medial prefrontal cortex. The brain region responded quickly to the music's signature and timescale, but also reacted overall when a tune was autobiographically relevant. Furthermore, music-tracking activity in the brain was stronger during more powerful autobiographical memories.
This latest research could explain why even Alzheimer's patients who endure increasing memory loss can still recall songs from their distant past. "What's striking is that the prefrontal cortex is among the last [brain regions] to atrophy," Janata noted. He pointed to behavioral observations of Alzheimer's patients singing along or brightening up when familiar songs came on.
Janata said that his research merely tried to establish a neuroscience basis for why music can tickle memory. He voiced the hope that his and other studies could encourage practices such as giving iPods to Alzheimer's patients -- perhaps providing real-life testament to the power of music. "It's not going to reverse the disease," Janata said. "But if you can make quality of life better, why not?"
Reposted from: http://news.yahoo.com/s/livescience/20090224/sc_livescience/musicmemoryconnectionfoundinbrain;_ylt=AgpGccPw1RT.zrQwId6t1nIbr7sF
|
Intuition is memory -- LiveScience Staff
|
Study Suggests Why Gut Instincts Work
LiveScience Staff, LiveScience.com
Sometimes when you think you're guessing, your brain may actually know better. After conducting some unique memory and recognition tests, while also recording subjects' brain waves, scientists conclude that some gut feelings are not just guesswork after all. Rather, we access memories we aren't even aware we have.
"We may actually know more than we think we know in everyday situations, too," said Ken Paller, professor of psychology at Northwestern University and co-researcher on the study. "Unconscious memory may come into play, for example, in recognizing the face of a perpetrator of a crime or the correct answer on a test. Or the choice from a horde of consumer products may be driven by memories that are quite alive on an unconscious level."
The findings were published online Sunday in the journal Nature Neuroscience. The research, done with only a couple dozen participants, adds to a growing body of conflicting evidence about decision-making. In one study done in 2007, researchers found that quick decisions were better than those given lots of thought. But a study last year suggested neither snap judgments nor "sleeping on it" trump good old-fashioned conscious thought.
The new study
During the first part of the memory test in the new study, participants were shown a series of colorful kaleidoscope images that flashed on a computer screen. Half of the images were viewed with full attention as participants tried to memorize them. While viewing the other half, the participants were distracted: They heard a spoken number that they had to keep in mind until the next trial, when they indicated whether it was odd or even. In other words, they could focus on memorizing half of the images but were greatly distracted from memorizing the others. A bit later, they viewed pairs of similar kaleidoscope images in a recognition test.
"Remarkably, people were more accurate in selecting the old image when they had been distracted than when they had paid full attention," Paller said. "They also were more accurate when they claimed to be guessing than when they registered some familiarity for the image."
Splitting attention during a memory test usually makes memory worse. "But our research showed that even when people weren't paying as much attention, their visual system was storing information quite well," Paller said.
The brain's role
During the tests, electrical signals in the brain were recorded from a set of electrodes placed on each person's head. The brain waves during implicit recognition were distinct from those associated with conscious memory experiences. A unique signal of implicit recognition was seen a quarter of a second after study participants saw each old image.
Other related research has shown that amnesia victims with severe memory problems often have strong implicit memories, Paller and his colleague, Joel L. Voss of the Beckman Institute, said in a statement. "Intuition may have an important role in finding answers to all sorts of problems in everyday life," Paller said.
Reposted from: http://news.yahoo.com/s/livescience/20090209/sc_livescience/studysuggestswhygutinstinctswork;_ylt=AvMpAFt0Er6WXZ2JUsH9ErYbr7sF
|