The Interplay of Brain, Perception, and Language - R. E. SCHMID
Posted by: 胡卜凱

Color perception shifts from right brain to left



RANDOLPH E. SCHMID, AP Science Writer


WASHINGTON – Learning the name of a color changes the part of the brain that handles color perception. Infants perceive color in the right hemisphere of the brain, researchers report, while adults do the job in the brain's left hemisphere.


Testing toddlers showed that the change occurred when the youngsters learned the names to attach to particular colors, scientists report in Tuesday's edition of Proceedings of the National Academy of Sciences.


 


"It appears, as far as we can tell, that somehow the brain, when it has categories such as color, it actually consults those categories," Paul Kay of the department of linguistics, University of California, Berkeley, said in a telephone interview.


He said the researchers did a similar experiment with silhouettes of dogs and cats with the same result -- once a child learns the name for the animal, perception moves from the right to the left side of the brain.


"It's important to know this because it's part of a debate that's gone on as long as there has been philosophy or science, about how the language we speak affects how we look at the world," Kay said. Indeed, scholars continue to discuss the comparative importance of nature versus nurture.


 


The researchers studied the time it took toddlers to begin eye movement toward a colored target in either their left or right field of vision to determine which half of the brain was processing the information.
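A minimal sketch, in Python with invented latencies, of the logic behind this measure: a target in the left visual field is handled by the right hemisphere, so faster eye movements toward one field point to processing in the opposite hemisphere.

```python
from statistics import mean

# Hypothetical saccade-onset latencies (ms); real data would come from eye tracking.
left_field_ms = [312, 298, 305, 321, 290]    # left field -> right hemisphere
right_field_ms = [341, 330, 352, 338, 347]   # right field -> left hemisphere

print(f"mean latency, left field:  {mean(left_field_ms):.0f} ms")
print(f"mean latency, right field: {mean(right_field_ms):.0f} ms")
if mean(left_field_ms) < mean(right_field_ms):
    print("left-field advantage -> right-hemisphere processing (the infant pattern)")
else:
    print("right-field advantage -> left-hemisphere processing (the adult pattern)")
```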


 


The research was funded by the National Science Foundation.


On the Net:


 


PNAS: http://www.pnas.org


 


http://news.yahoo.com/mp/1689/20081118;_ylt=Ak9Rikqcq.N.rK.eMAsXCeYbr7sF



Decoding the Brain – J. Gorman
Posted by: 胡卜凱

Learning How Little We Know About the Brain             

 

James Gorman, 11/10/14

 

Research on the brain is surging. The United States and the European Union have launched new programs to better understand the brain. Scientists are mapping parts of mouse, fly and human brains at different levels of magnification. Technology for recording brain activity has been improving at a revolutionary pace.

 

The National Institutes of Health, which already spends $4.5 billion a year on brain research, consulted the top neuroscientists in the country to frame its role in an initiative announced by President Obama last year to concentrate on developing a fundamental understanding of the brain.

 

Scientists have puzzled out profoundly important insights about how the brain works, like the way the mammalian brain navigates and remembers places, work that won the 2014 Nobel Prize in Physiology or Medicine for a British-American and two Norwegians.

 

Yet the growing body of data -- maps, atlases and so-called connectomes that show linkages between cells and regions of the brain -- represents a paradox of progress, with the advances also highlighting great gaps in understanding.

 

So many large and small questions remain unanswered. How is information encoded and transferred from cell to cell or from network to network of cells? Science found a genetic code but there is no brain-wide neural code; no electrical or chemical alphabet exists that can be recombined to say “red” or “fear” or “wink” or “run.” And no one knows whether information is encoded differently in various parts of the brain.

 

Brain scientists may speculate on a grand scale, but they work on a small scale. Sebastian Seung at Princeton, author of “Connectome: How the Brain’s Wiring Makes Us Who We Are,” speaks in sweeping terms of how identity, personality, memory -- all the things that define a human being -- grow out of the way brain cells and regions are connected to each other. But in the lab, his most recent work involves the connections and structure of motion-detecting neurons in the retinas of mice.

 

Larry Abbott, 64, a former theoretical physicist who is now co-director, with Kenneth Miller, of the Center for Theoretical Neuroscience at Columbia University, is one of the field’s most prominent theorists, and the person whose name invariably comes up when discussions turn to brain theory.

 

Edvard Moser of the Norwegian University of Science and Technology, one of this year’s Nobel winners, described him as a “pioneer of computational neuroscience.” Dr. Abbott brought the mathematical skills of a physicist to the field, but he is able to plunge right into the difficulties of dealing with actual brain experiments, said Cori Bargmann of Rockefeller University, who helped lead the N.I.H. committee that set a plan for future neuroscience research.

 

“Larry is willing to deal with the messiness of real neuroscience data, and work with those limitations,” she said. “Theory is beautiful and internally consistent. Biology, not so much.” And, she added, he has helped lead a whole generation of theorists in that direction, which is of great value for neuroscience.

 

Dr. Abbott is unusual among his peers because he switched from physics to neuroscience later in his career. In the late 1980s, he was a full professor of physics at Brandeis University, where he also received his Ph.D. But at the time, a project to build the largest particle accelerator in the world in Texas was foundering, and he could see a long drought ahead in terms of advances in the field.

 

He was already considering a career switch when he stopped by the lab of a Brandeis colleague, Eve Marder, who was then, and still is, drawing secrets from a small network of neurons that controls a muscle in crabs.

 

She was not in her lab when Dr. Abbott came calling, but one of her graduate students showed him equipment that was recording the electrical activity of neurons and translating it into clicks that could be heard over speakers each time a cell fired, or spiked. “You know what?” he said recently in his office at Columbia, “We wouldn’t be having this conversation if they didn’t have that audio monitor on. It was the sound of those spikes that entranced me.”

 

“I remember I walked out of the door and I kind of leaned up against the wall, in terror, saying, ‘I’m going to switch,’” he added. “I just knew that something had clicked in me. I’m going to switch fields, and I’m dead, because nobody knows me. I don’t know anything.”

 

Dr. Marder served as his guide to the new field, telling him what to read and answering his many questions. He was immediately accepted both in her lab and by other experimentalists, she said, “because he’s both wicked smart and humble.”

 

“He did something that was astonishing,” Dr. Marder said. “Six months in, he actually understood what people knew and what they didn’t know.”

 

Dr. Abbott recalled that it took a while for them to develop a productive collaboration. “Eve and I talked for a year and then finally started to understand each other,” he said.

 

Together, they invented something called the dynamic clamp technique, a way to link brain cells to a computer to manipulate their activity and test ideas about how cells and networks of cells work.

 

A decade ago, he moved from Brandeis to Columbia, which now has one of the biggest groups of theoretical neuroscientists in the world, he says, and which has a new university-wide focus on integrating brain science with other disciplines.

 

The university is now finishing the Jerome L. Greene Science Center, which will be home to the Mortimer B. Zuckerman Mind Brain Behavior Institute. The center for theoretical neuroscience will move to the new building.

 

Dr. Abbott collaborates with scientists at Columbia and elsewhere, trying to build computer models of how the brain might work. Single neurons, he said, are fairly well understood, as are small circuits of neurons.

 

The question now on his mind, and that of many neuroscientists, is how larger groups, thousands of neurons, work together -- whether to produce an action, like reaching for a cup, or to perceive something, like a flower.

 

There are ways to record the electrical activity of neurons in a brain, and those methods are improving fast. But, he said, “If I give you a picture of a thousand neurons firing, it’s not going to tell you anything.”

 

Computer analysis helps to reduce and simplify such a picture but, he says, the goal is to discover the physiological mechanism in the data.

 

For example, he asks, why does one pattern of neurons firing “make you jump off the couch and run out the door and others make you just sit there and do nothing?” It could be, Dr. Abbott says, that simultaneous firing of all the neurons causes you to take action. Or it could be that it is the number of neurons firing that prompts an action.
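The two hypotheses can be made concrete with a toy simulation; the population size, rates, and thresholds below are invented for illustration. A "synchrony" rule acts only when many neurons spike in the same instant, while a "count" rule acts when enough distinct neurons spike anywhere in the window, and the two disagree about the same activity.

```python
import random

random.seed(1)
N, WINDOW = 1000, 10    # toy population, ten 1-ms bins

def make_raster(rate, sync_bin=None):
    """Spikes per bin: the set of neurons firing in each millisecond."""
    raster = [{n for n in range(N) if random.random() < rate} for _ in range(WINDOW)]
    if sync_bin is not None:
        raster[sync_bin] = set(range(N))    # inject one synchronous volley
    return raster

def synchrony_trigger(raster, frac=0.8):
    # Hypothesis 1: act only if most neurons fire in the SAME bin.
    return any(len(b) >= frac * N for b in raster)

def count_trigger(raster, frac=0.8):
    # Hypothesis 2: act if enough distinct neurons fire ANYWHERE in the window.
    return len(set().union(*raster)) >= frac * N

quiet = make_raster(rate=0.02)               # sparse, asynchronous firing
loud  = make_raster(rate=0.20)               # many neurons fire, never together
burst = make_raster(rate=0.02, sync_bin=5)   # one synchronous volley

for name, r in [("quiet", quiet), ("loud", loud), ("burst", burst)]:
    print(f"{name}: synchrony={synchrony_trigger(r)}, count={count_trigger(r)}")
```

The "loud" raster satisfies the count rule but not the synchrony rule, which is exactly the kind of difference an experiment would need to tease apart.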

 

His tools are computers and equations, but he collaborates on all kinds of experimental work on neuroscientific problems in animals and humans. Some of his recent work was with Nate Sawtell, a fellow Columbia researcher, and Ann Kennedy, a graduate student at the time in Dr. Sawtell’s lab who is now doing post-doctoral research at Caltech. Their subject was the weakly electric fish.

 

Unlike electric eels and other fish that use shocks to stun prey, this fish generates a weak electric field to help it navigate and to locate insects and other prey. Over the years, researchers, notably Curtis Bell at the Oregon Health and Science University, have designed experiments to understand, up to a point, how its brain and electric-sensing organs work.

 

Dr. Abbott joined with Dr. Kennedy and Dr. Sawtell, the senior author on the paper that grew out of this work, and others in the lab to take this understanding a step further. The fish has two sensing systems. One is passive, picking up electric fields of other fish or prey. Another is active, sending out a pulse, for communication or as an electrical version of sonar. They knew the fish was able to cancel out its own pulse of electricity by creating what he called a “negative image.”

 

They wired the brain of a weakly electric fish and -- through a combination of testing and developing mathematical models -- found that a surprising group of neurons, called unipolar brush cells, were sending out a delayed copy of the command that another part of the brain was sending to its electric organ. The delayed signal went straight to the passive sensing system to cancel out the information from the electric pulse.
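A toy numerical sketch of that cancellation, with invented signals; in the fish the negative image is a learned, delayed copy of the motor command, while here it is simply matched by construction.

```python
import math

# Toy signals: a gaussian stands in for how the fish's own pulse appears at the
# passive sensor, a slow sine for an external field such as nearby prey.
T = [0.1 * i for i in range(200)]
self_pulse = [math.exp(-((t - 8.0) ** 2) / 2.0) for t in T]
external   = [0.3 * math.sin(0.5 * t) for t in T]

sensed = [e + s for e, s in zip(external, self_pulse)]   # what the sensor reports
negative_image = [-s for s in self_pulse]                # copy of the pulse command
cancelled = [x + n for x, n in zip(sensed, negative_image)]

before = max(abs(x - e) for x, e in zip(sensed, external))
after  = max(abs(x - e) for x, e in zip(cancelled, external))
print(f"self-generated contamination before cancellation: {before:.3f}")
print(f"after cancellation: {after:.3f}")   # ~0: only the external field remains
```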

 

“The brain has to compute what’s self-generated versus what’s external,” said Dr. Sawtell.

 

This may not sound like a grand advance, but, Dr. Abbott said, “I think it’s pretty deep,” adding that it helps illuminate how a creature begins to draw a distinction between itself and the world. It is the very beginning of how a brain sorts a flood of data coming in from the outside world, and gives it meaning.

 

That is part of the brain’s job, after all -- to build an image of the world from photons and electrons, light and dark, molecules and motion, and to connect it with what the fish, or the person, remembers, needs and wants.

 

“We’ve looked at the nervous system from the two ends in,” Dr. Abbott said, meaning sensations that flow into the brain and actions that are initiated there. “Somewhere in the middle is really intelligence, right? That’s where the action is.”

 

In the brain, somehow, stored memories and desires like hunger or thirst are added to information about the world, and actions are the result. This is the case for all sorts of animals, not just humans. It is thinking, at the most basic level.

 

“And we have the tools to look there,” he said. “Whether we have the intelligence to figure it out, I view that, at least in part, as a theory problem.”

 

Please see the original page for the accompanying images.

 

http://mobile.nytimes.com/2014/11/11/science/learning-how-little-we-know-about-the-brain.html

The Criteria for "Consciousness" - R. Letzter
Posted by: 胡卜凱

Is A Simulated Brain Conscious?

 

Dr. Scott Aaronson's answer has implications for C-3PO, the universe and the odds that you are a Boltzmann Brain.

 

Rafi Letzter, 09/12/14

 

Imagine standing in an open field with a bucket of water balloons and a couple of friends. You've decided to play a game called "Mind." Each of you has your own set of rules. Maybe Molly will throw a water balloon at Bob whenever you throw a water balloon at Molly. Maybe Bob will splash both of you whenever he goes five minutes without getting hit -- or if it gets too warm out or if it's seven o'clock or if he's in a bad mood that day. The details don't matter.

 

That game would look a lot like the way neurons, the cells that make up your brain and nerves, interact with one another. They sit around inside an ant or a bird or Stephen Hawking and follow a simple set of rules. Sometimes they send electrochemical signals to their neighbors. Sometimes they don't. No single neuron "understands" the whole system.

 

Now imagine that instead of three of you in that field there were 86 billion -- about the number of neurons in an average brain. And imagine that instead of playing by rules you made up, you each carried an instruction manual written by the best neuroscientists and computer scientists of the day -- a perfect model of a human brain. No one would need the entire rulebook, just enough to know their job. If the lot of you stood around, laughing and playing by the rules whenever the rulebook told you, given enough time you could model one or two seconds of human thought.
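A minimal sketch of that game, far below 86 billion players; the rulebooks, thresholds, and numbers are invented. Each player follows only a local rule -- splash your targets if you were splashed -- yet activity propagates through the whole group, which is all the thought experiment requires.

```python
import random

random.seed(0)
N, THRESHOLD, ROUNDS = 50, 1, 8
# Each player's "rulebook": the four players they splash when active.
targets = {i: random.sample(range(N), 4) for i in range(N)}
splashing = set(random.sample(range(N), 3))   # a few players start things off

for rnd in range(ROUNDS):
    hits = {i: 0 for i in range(N)}
    for player in splashing:
        for victim in targets[player]:
            hits[victim] += 1
    # The whole local rule: splash next round if you were hit often enough.
    splashing = {i for i, n in hits.items() if n >= THRESHOLD}
    print(f"round {rnd}: {len(splashing)} players splashing")
```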

 

Here's a question though: While you're all out there playing, is that model conscious? Are its feelings, modeled in splashing water, real? What does "real" even mean when it comes to consciousness? What's it like to be a simulation run on water balloons?

 

These questions may seem absurd at first, but now imagine the game of Mind sped up a million times. Instead of humans standing around in a field, you model the neurons in the most powerful supercomputer ever built. (Similar experiments have already been done, albeit on much smaller scales.) You give the digital brain eyes to look out at the world and ears to hear. An artificial voice box grants Mind the power of speech. Now we're in the twilight between science and science fiction. ("I'm sorry Dave, I'm afraid I can't do that.")

 

Is Mind conscious now?

 

Now imagine Mind's architects copied the code for Mind straight out of your brain. When the computer stops working, does a version of you die?

 

These queries provide an ongoing puzzle for scientists and philosophers who think about computers, brains, and minds. And many believe they could one day have real world implications.

 

Dr. Scott Aaronson, a theoretical computer scientist at MIT and author of the blog Shtetl-Optimized, is part of a group of scientists and philosophers (and cartoonists) who have made a habit of dealing with these ethical sci-fi questions. While most researchers concern themselves primarily with data, these writers perform thought experiments that often reference space aliens, androids, and the Divine. (Aaronson is also quick to point out the highly speculative nature of this work.)

 

Many thinkers have broad interpretations of consciousness for humanitarian reasons, Aaronson tells Popular Science. After all, if that giant game of Mind in that field (or C-3PO or Data or Hal) simulates a thought or a feeling, who are we to say that consciousness is less valid than our own?

 

In 1950, the brilliant British codebreaker and early computer scientist Alan Turing wrote against human-centric theologies in his essay “Computing Machinery and Intelligence”:

 

Thinking is a function of man's immortal soul [they say.] God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

 

I am unable to accept any part of this … It appears to me that the argument quoted above implies a serious restriction of the omnipotence of the Almighty. It is admitted that there are certain things that He cannot do such as making one equal to two, but should we not believe that He has freedom to confer a soul on an elephant if He sees fit? ... An argument of exactly similar form may be made for the case of machines.

 

"I think it's like anti-racism," Aaronson says. "[People] don't want to say someone different than themselves who seems intelligent is less deserving just because he’s got a brain of silicon.”

 

According to Aaronson, this train of thought leads to a strange slippery slope when you imagine all the different things it could apply to. Instead, he proposes finding a solution to what he calls the Pretty Hard Problem. "The point," he says, "is to come up with some principled criterion for separating the systems we consider to be conscious from those we do not."

 

A lot of people might agree that a mind simulated in a computer is conscious, especially if they could speak to it, ask it questions, and develop a relationship with it. It's a vision of the future explored in the Oscar-winning film Her.

 

Think about the problems you'd encounter in a world where consciousness were reduced to a handy bit of software. A person could encrypt a disk, and then instead of Scarlett Johansson's voice, all Joaquin Phoenix would hear in his ear would be strings of unintelligible data. Still, somewhere in there, something would be thinking.

 

Aaronson takes this one step further. If a mind can be written as code, there's no reason to think it couldn't be written out in a notebook. Given enough time, and more paper and ink than there is room in the universe, a person could catalogue every possible stimulus a consciousness could ever encounter, and label each with a reaction. That journal could be seen as a sentient being, frozen in time, just waiting for a reader.

 

“There’s a lot of metaphysical weirdness that comes up when you describe a physical consciousness as something that can be copied,” he says.

 

The weirdness gets even weirder when you consider that according to many theorists, not all the possible minds in the universe are biological or mechanical. In fact, under this interpretation the vast majority of minds look nothing like anything you or I will ever encounter. Here's how it works: Quantum physics -- the 20th century branch of science that reveals the hidden, exotic behavior of the particles that make up everything -- states that nothing is absolute. An unobserved electron isn't at any one point in space, really, but spread across the entire universe as a probability distribution; the vast majority of that probability is concentrated in a tight orbit around an atom, but not all of it. This still works as you go up in scale. That empty patch of sky midway between here and Pluto? Probably empty. But maybe, just maybe, it contains that holographic Charizard trading card that you thought slipped out of your binder on the way home from school in second grade.

 

As eons pass and the stars burn themselves out and the universe gets far emptier than it is today, that quantum randomness becomes very important. It's probable that the silent vacuum of space will be mostly empty. But every once in a while, clumps of matter will come together and dissipate in the infinite randomness. And that means, or so the prediction goes, that every once in a while those clumps will arrange themselves in such a perfect, precise way that they jolt into thinking, maybe just for a moment, but long enough to ask, "What am I?"

 

These are the Boltzmann Brains, named after the nineteenth-century physicist Ludwig Boltzmann. These strange late-universe beings will, according to one line of thinking, eventually outnumber every human, otter, alien and android who ever lived or ever will live. In fact, assuming this hypothesis is true, you, dear reader, probably are a Boltzmann Brain yourself. After all, there will only ever be one "real" version of you. But Boltzmann Brains popping into being while hallucinating this moment in your life -- along with your entire memory and experiences -- will keep going and going, appearing and disappearing forever in the void.

 

In his talk at IBM, Aaronson pointed to a number of surprising conclusions thinkers have come to in order to resolve this weirdness.

 

You might say, sure, maybe these questions are puzzling, but what’s the alternative? Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wave-function of the universe, or else we’re back to saying that beings like us are conscious, and all these other things aren’t, because God gave the souls to us, so na-na-na. Or I suppose we could say, like the philosopher John Searle, that we’re conscious, and ... all these other apparitions aren’t, because we alone have 'biological causal powers.' And what do those causal powers consist of? Hey, you’re not supposed to ask that! Just accept that we have them. Or we could say, like Roger Penrose, that we’re conscious and the other things aren’t because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity. [Aaronson points out elsewhere in the talk that there is no direct or clear indirect evidence to support this claim.] But neither of those two options ever struck me as much of an improvement.

 

Instead, Aaronson proposes a rule to help us understand what bits of matter are conscious and what bits of matter are not.

 

Conscious objects, he says, are locked into "the arrow of time." This means that a conscious mind cannot be reset to an earlier state, as you can do with a brain on a computer. When a stick burns or stars collide or a human brain thinks, tiny particle-level quantum interactions that cannot be measured or duplicated determine the exact nature of the outcome. Our consciousnesses are meat and chemical juices, inseparable from their particles. Once a choice is made or an experience is had, there's no way to truly rewind the mind to a point before it happened, because the quantum state of the earlier brain cannot be reproduced.

 

When a consciousness is hurt, or is happy, or is a bit too drunk, that experience becomes part of it forever. Packing up your mind in an email and sending it to Fiji might seem like a lovely way to travel, but, by Aaronson's reckoning, that replication of you on the other side would be a different consciousness altogether. The real you died with your euthanized body back home.

 

Additionally, Aaronson says you shouldn't be concerned about being a Boltzmann Brain. Not only could a Boltzmann Brain never replicate a real human consciousness, but it could never be conscious in the first place. Once the theoretical apparition is done thinking its thoughts, it disappears unobserved back into the ether -- effectively rewound and therefore meaningless.

 

This doesn't mean we bio-beings must forever be alone in the universe. A quantum computer, or maybe even a sufficiently complex classical computer, could find itself as locked into the arrow of time as we are. Of course, that alone is not enough to call that machine conscious. Aaronson says there are many more traits it must have before you would recognize something of yourself in it. (Turing himself proposed one famous test, though, as Popular Science reported, there is now some debate over its value.)

 

So, you, Molly, and Bob might in time forget that lovely game with the water balloons in the field, but you can never un-live it. The effects of that day will resonate through the causal history of your consciousness, part of an unbroken chain of joys and sorrows building toward your present. Nothing any of us experience ever really leaves us.

 

http://www.popsci.com/article/science/simulated-brain-conscious?dom=PSC&loc=recent&lnk=7&con=is-a-simulated-brain-conscious



The Brain, Consciousness, and Linguistic Metaphor - M. Chorost
Posted by: 胡卜凱

Your Brain on Metaphors                      

 

Neuroscientists test the theory that your body shapes your ideas

 

Michael Chorost, 09/01/14

 

The player kicked the ball.
The patient kicked the habit.
The villain kicked the bucket.

 

The verbs are the same. The syntax is identical. Does the brain notice, or care, that the first is literal, the second metaphorical, the third idiomatic?

 

It sounds like a question that only a linguist could love. But neuroscientists have been trying to answer it using exotic brain-scanning technologies. Their findings have varied wildly, in some cases contradicting one another. If they make progress, the payoff will be big. Their findings will enrich a theory that aims to explain how wet masses of neurons can understand anything at all. And they may drive a stake into the widespread assumption that computers will inevitably become conscious in a humanlike way.

 

The hypothesis driving their work is that metaphor is central to language. Metaphor used to be thought of as merely poetic ornamentation, aesthetically pretty but otherwise irrelevant. "Love is a rose, but you better not pick it," sang Neil Young in 1977, riffing on the timeworn comparison between a sexual partner and a pollinating perennial. For centuries, metaphor was just the place where poets went to show off.

 

But in their 1980 book, Metaphors We Live By, the linguist George Lakoff (at the University of California at Berkeley) and the philosopher Mark Johnson (now at the University of Oregon) revolutionized linguistics by showing that metaphor is actually a fundamental constituent of language. For example, they showed that in the seemingly literal statement "He’s out of sight," the visual field is metaphorized as a container that holds things. The visual field isn’t really a container, of course; one simply sees objects or not. But the container metaphor is so ubiquitous that it wasn’t even recognized as a metaphor until Lakoff and Johnson pointed it out.

 

From such examples they argued that ordinary language is saturated with metaphors. Our eyes point to where we’re going, so we tend to speak of future time as being "ahead" of us. When things increase, they tend to go up relative to us, so we tend to speak of stocks "rising" instead of getting more expensive. "Our ordinary conceptual system is fundamentally metaphorical in nature," they wrote.

 

Metaphors do differ across languages, but that doesn’t affect the theory. For example, in Aymara, spoken in Bolivia and Chile, speakers refer to past experiences as being in front of them, on the theory that past events are "visible" and future ones are not. However, the difference between behind and ahead is relatively unimportant compared with the central fact that space is being used as a metaphor for time. Lakoff argues that it is impossible -- not just difficult, but impossible -- for humans to talk about time and many other fundamental aspects of life without using metaphors to do it.

 

Lakoff and Johnson’s program is as anti-Platonic as it’s possible to get. It undermines the argument that human minds can reveal transcendent truths about reality in transparent language. They argue instead that human cognition is embodied -- that human concepts are shaped by the physical features of human brains and bodies. "Our physiology provides the concepts for our philosophy," Lakoff wrote in his introduction to Benjamin Bergen’s 2012 book, Louder Than Words: The New Science of How the Mind Makes Meaning. Marianna Bolognesi, a linguist at the International Center for Intercultural Exchange, in Siena, Italy, puts it this way: "The classical view of cognition is that language is an independent system made with abstract symbols that work independently from our bodies. This view has been challenged by the embodied account of cognition which states that language is tightly connected to our experience. Our bodily experience."

 

Modern brain-scanning technologies make it possible to test such claims empirically. "That would make a connection between the biology of our bodies on the one hand, and thinking and meaning on the other hand," says Gerard Steen, a professor of linguistics at VU University Amsterdam. Neuroscientists have been stuffing volunteers into fMRI scanners and having them read sentences that are literal, metaphorical, and idiomatic.

 

Neuroscientists agree on what happens with literal sentences like "The player kicked the ball." The brain reacts as if it were carrying out the described actions. This is called "simulation." Take the sentence "Harry picked up the glass." "If you can’t imagine picking up a glass or seeing someone picking up a glass," Lakoff wrote in a paper with Vittorio Gallese, a professor of human physiology at the University of Parma, in Italy, "then you can’t understand that sentence." Lakoff argues that the brain understands sentences not just by analyzing syntax and looking up neural dictionaries, but also by igniting its memories of kicking and picking up.

 

But what about metaphorical sentences like "The patient kicked the habit"? An addiction can’t literally be struck with a foot. Does the brain simulate the action of kicking anyway? Or does it somehow automatically substitute a more literal verb, such as "stopped"? This is where functional MRI can help, because it can watch to see if the brain’s motor cortex lights up in areas related to the leg and foot.

 

The evidence says it does. "When you read action-related metaphors," says Valentina Cuccio, a philosophy postdoc at the University of Palermo, in Italy, "you have activation of the motor area of the brain." In a 2011 paper in the Journal of Cognitive Neuroscience, Rutvik Desai, an associate professor of psychology at the University of South Carolina, and his colleagues presented fMRI evidence that brains do in fact simulate metaphorical sentences that use action verbs. When reading both literal and metaphorical sentences, their subjects’ brains activated areas associated with control of action. "The understanding of sensory-motor metaphors is not abstracted away from their sensory-motor origins," the researchers concluded.

 

Textural metaphors, too, appear to be simulated. That is, the brain processes "She’s had a rough time" by simulating the sensation of touching something rough. Krish Sathian, a professor of neurology, rehabilitation medicine, and psychology at Emory University, says, "For textural metaphor, you would predict on the Lakoff and Johnson account that it would recruit activity- and texture-selective somatosensory cortex, and that indeed is exactly what we found."

 

But idioms are a major sticking point. Idioms are usually thought of as dead metaphors, that is, as metaphors that are so familiar that they have become clichés. What does the brain do with "The villain kicked the bucket" ("The villain died")? What about "The students toed the line" ("The students conformed to the rules")? Does the brain simulate the verb phrases, or does it treat them as frozen blocks of abstract language? And if it simulates them, what actions does it imagine? If the brain understands language by simulating it, then it should do so even when sentences are not literal.

 

The findings so far have been contradictory. Lisa Aziz-Zadeh, of the University of Southern California, and her colleagues reported in 2006 that idioms such as "biting off more than you can chew" did not activate the motor cortex. So did Ana Raposo, then at the University of Cambridge, and her colleagues in 2009. On the other hand, Véronique Boulenger, of the Laboratoire Dynamique du Langage, in Lyon, France, reported in the same year that they did, at least for leg and arm verbs.

 

In 2013, Desai and his colleagues tried to settle the problem of idioms. They first hypothesized that the inconsistent results come from differences of methodology. "Imaging studies of embodiment in figurative language have not compared idioms and metaphors," they wrote in a report. "Some have mixed idioms and metaphors together, and in some cases, ‘idiom’ is used to refer to familiar metaphors." Lera Boroditsky, an associate professor of psychology at the University of California at San Diego, agrees. "The field is new. The methods need to stabilize," she says. "There are many different kinds of figurative language, and they may be importantly different from one another."

 

Not only that, the nitty-gritty differences of procedure may be important. "All of these studies are carried out with different kinds of linguistic stimuli with different procedures," Cuccio says. "So, for example, sometimes you have an experiment in which the person can read the full sentence on the screen. There are other experiments in which participants read the sentence just word by word, and this makes a difference."

 

To try to clear things up, Desai and his colleagues presented subjects inside fMRI machines with an assorted set of metaphors and idioms. They concluded that in a sense, everyone was right. The more idiomatic the metaphor was, the less the motor system got involved: "When metaphors are very highly conventionalized, as is the case for idioms, engagement of sensory-motor systems is minimized or very brief."

 

But George Lakoff thinks the problem of idioms can’t be settled so easily. The people who do fMRI studies are fine neuroscientists but not linguists, he says. "They don’t even know what the problem is most of the time. The people doing the experiments don’t know the linguistics."

 

That is to say, Lakoff explains, their papers assume that every brain processes a given idiom the same way. Not true. Take "kick the bucket." Lakoff offers a theory of what it means using a scene from Young Frankenstein. "Mel Brooks is there and they’ve got the patient dying," he says. "The bucket is a slop bucket at the edge of the bed, and as he dies, his foot goes out in rigor mortis and the slop bucket goes over and they all hold their nose. OK. But what’s interesting about this is that the bucket starts upright and it goes down. It winds up empty. This is a metaphor -- that you’re full of life, and life is a fluid. You kick the bucket, and it goes over."

 

That’s a useful explanation of a rather obscure idiom. But it turns out that when linguists ask people what they think the metaphor means, they get different answers. "You say, ‘Do you have a mental image? Where is the bucket before it’s kicked?’ " Lakoff says. "Some people say it’s upright. Some people say upside down. Some people say you’re standing on it. Some people have nothing. You know! There isn’t a systematic connection across people for this. And if you’re averaging across subjects, you’re probably not going to get anything."

 

Similarly, Lakoff says, when linguists ask people to write down the idiom "toe the line," half of them write "tow the line." That yields a different mental simulation. And different mental simulations will activate different areas of the motor cortex -- in this case, scrunching feet up to a line versus using arms to tow something heavy. Therefore, fMRI results could show different parts of different subjects’ motor cortexes lighting up to process "toe the line." In that case, averaging subjects together would be misleading.

 

Furthermore, Lakoff questions whether functional MRI can really see what’s going on with language at the neural level. "How many neurons are there in one pixel or one voxel?" he says. "About 125,000. They’re one point in the picture." MRI lacks the necessary temporal resolution, too. "What is the time course of that fMRI? It could be between one and five seconds. What is the time course of the firing of the neurons? A thousand times faster. So basically, you don’t know what’s going on inside of that voxel." What it comes down to is that language is a wretchedly complex thing and our tools aren’t yet up to the job.
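Lakoff's point about scale can be illustrated with a toy calculation, using his figure of roughly 125,000 neurons per voxel: two very different firing patterns, one steady and one in synchronized bursts, yield identical signals once averaged over the voxel and over seconds, the way fMRI effectively averages.

```python
# Two different firing patterns inside one voxel of ~125,000 neurons.
NEURONS, MS = 125_000, 2000    # one voxel, two seconds at 1-ms resolution

def voxel_signal(rate_at):
    """Average activity over all neurons and the full window -- roughly
    the only quantity fMRI can report at this spatial/temporal scale."""
    total = sum(NEURONS * rate_at(t) for t in range(MS))
    return total / (NEURONS * MS)

steady = lambda t: 0.01                                # constant firing probability
bursty = lambda t: 0.02 if (t // 100) % 2 else 0.0     # on/off every 100 ms

print(f"voxel signal, steady pattern: {voxel_signal(steady):.4f}")
print(f"voxel signal, bursty pattern: {voxel_signal(bursty):.4f}")
# Both print 0.0100: the temporal structure has been averaged away.
```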

 

Nonetheless, the work supports a radically new conception of how a bunch of pulsing cells can understand anything at all. In a 2012 paper, Lakoff offered an account of how metaphors arise out of the physiology of neural firing, based on the work of a student of his, Srini Narayanan, who is now a faculty member at Berkeley. As children grow up, they are repeatedly exposed to basic experiences such as temperature and affection simultaneously when, for example, they are cuddled. The neural structures that record temperature and affection are repeatedly co-activated, leading to an increasingly strong neural linkage between them.

 

However, since the brain is always computing temperature but not always computing affection, the relationship between those neural structures is asymmetric. When they form a linkage, Lakoff says, "the one that spikes first and most regularly is going to get strengthened in its direction, and the other one is going to get weakened." Lakoff thinks the asymmetry gives rise to a metaphor: Affection is Warmth. Because of the neural asymmetry, it doesn’t go the other way around: Warmth is not Affection. Feeling warm during a 100-degree day, for example, does not make one feel loved. The metaphor originates from the asymmetry of the neural firing. Lakoff is now working on a book on the neural theory of metaphor.
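A toy sketch of that asymmetric strengthening; the learning rule and all numbers are invented stand-ins, not Narayanan's model. Temperature is "computed" on every step, affection only during occasional cuddling, and the link from the first-and-most-regular spiker gets the larger update.

```python
# Two link weights, one in each direction; the weaker 0.3 factor for the later,
# less regular spiker is a blunt stand-in for the spike-timing effect described above.
LEARN, DECAY = 0.10, 0.02
w_temp_to_affection = 0.0
w_affection_to_temp = 0.0

for step in range(1000):
    temperature_active = True            # temperature is computed on every step
    affection_active = (step % 5 == 0)   # affection only during occasional cuddling
    if temperature_active and affection_active:
        w_temp_to_affection += LEARN        # first and most regular spiker wins
        w_affection_to_temp += LEARN * 0.3
    w_temp_to_affection *= (1 - DECAY)
    w_affection_to_temp *= (1 - DECAY)

print(f"temperature -> affection: {w_temp_to_affection:.2f}")
print(f"affection -> temperature: {w_affection_to_temp:.2f}")
# The first link settles noticeably stronger: Affection is Warmth, not the reverse.
```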

 

If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious -- that is, achieving strong AI? Lakoff is uncompromising: "It kills it." Of Ray Kurzweil’s singularity thesis, he says, "I don’t believe it for a second." Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

 

On the other hand, roboticists such as Rodney Brooks, an emeritus professor at the Massachusetts Institute of Technology, have suggested that computers could be provided with bodies. For example, they could be given control of robots stuffed with sensors and actuators. Brooks pondered Lakoff’s ideas in his 2002 book, Flesh and Machines, and supposed, "For anything to develop the same sorts of conceptual understanding of the world as we do, it will have to develop the same sorts of metaphors, rooted in a body, that we humans do."

 

But Lera Boroditsky wonders if giving computers humanlike bodies would only reproduce human limitations. "If you’re not bound by limitations of memory, if you’re not bound by limitations of physical presence, I think you could build a very different kind of intelligence system," she says. "I don’t know why we have to replicate our physical limitations in other systems."

 

What’s emerging from these studies isn’t just a theory of language or of metaphor. It’s a nascent theory of consciousness. Any algorithmic system faces the problem of bootstrapping itself from computing to knowing, from bit-shuffling to caring. Igniting previously stored memories of bodily experiences seems to be one way of getting there. And so may be the ability to create asymmetric neural linkages that say this is like (but not identical to) that. In an age of brain scanning as well as poetry, that’s where metaphor gets you.

 

Michael Chorost is the author of Rebuilt: How Becoming Part Computer Made Me More Human (Houghton Mifflin, 2005) and World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (Free Press, 2011).

 

http://chronicle.com/article/Your-Brain-on-Metaphors/148495/



Negativity Is an Evolved Human Instinct – J. Burak
Posted by: 胡卜凱

Outlook: gloomy               

 

Humans are wired for bad news, angry faces and sad memories. Is this negativity bias useful or something to overcome?

 

Jacob Burak, 09/04/14

 

I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why?

 

Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it.

 

Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists. In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first.

 

The machinery by which we recognise facial emotion, located in a brain region called the amygdala, reflects our nature as a whole: two-thirds of neurons in the amygdala are geared toward bad news, immediately responding to it and storing it in our long-term memory, points out neuropsychologist Rick Hanson, Senior Fellow of the Greater Good Science Center at the University of California, Berkeley. This is what causes the ‘fight or flight’ reflex – a survival instinct based on our ability to use memory to quickly assess threats. Good news, by comparison, takes 12 whole seconds to travel from temporary to long-term memory. Our ancient ancestors were better off jumping away from every stick that looked like a snake than carefully examining it before deciding what to do.

 

Our gloomy bent finds its way into spoken language, with almost two thirds of English words conveying the negative side of things. In the vocabulary we use to describe people, this figure rises to a staggering 74 per cent. And English is not alone. Aside from Dutch, all other languages lean toward the bleak.

 

We’re so attuned to negativity that it penetrates our dreams. The late American psychologist Calvin Hall, who analysed thousands of dreams over more than 40 years, found the most common emotion to be anxiety, with negative feelings (embarrassment, missing a flight, threats of violence) much more frequent than positive ones. A study from 1988 found that, among residents of developed countries, American men have the highest rate of aggressive dreams, reported by 50 per cent, as opposed to 32 per cent of Dutch men – apparently a compulsively positive group.

 

One of the first researchers to explore our negative slant was the Princeton psychologist Daniel Kahneman, winner of the 2002 Nobel Prize, and best known for pioneering the field of behavioural economics. In the late 1970s, Kahneman, with Amos Tversky, coined the term ‘loss aversion’ to describe the finding that we mourn loss more than we enjoy benefit. The upset felt after losing money is always greater than the happiness felt after gaining the same sum.
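That asymmetry was later formalized in Kahneman and Tversky's prospect-theory value function; the sketch below uses their published 1992 parameter estimates (an exponent of 0.88 and a loss-aversion coefficient of 2.25) purely to illustrate the size of the effect.

```python
# Prospect-theory value function: diminishing sensitivity plus loss aversion.
ALPHA, LAMBDA = 0.88, 2.25

def subjective_value(x):
    """Felt value of gaining (x > 0) or losing (x < 0) an amount x."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

print(f"gaining $100 feels like {subjective_value(100):+.1f}")
print(f"losing  $100 feels like {subjective_value(-100):+.1f}")
# The loss looms about 2.25 times larger than the same-sized gain.
```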

 

The psychologist Roy Baumeister, now a professor at Florida State University, has expanded on the concept. ‘Centuries of literary efforts and religious thought have depicted human life in terms of a struggle between good and bad forces,’ he wrote in 2001. ‘At the metaphysical level, evil gods or devils are the opponents of the divine forces of creation and harmony. At the individual level, temptation and destructive instincts battle against strivings for virtue, altruism, and fulfilment. “Good” and “bad” are among the first words and concepts learnt by children (and even by house pets).’ After reviewing hundreds of published papers, Baumeister and his team reported that Kahneman’s finding extended to every realm of life – love, work, family, learning, social networking and more. ‘Bad is stronger than good,’ they declared in their seminal, eponymous paper.

 

Following fast on the heels of the Baumeister paper, the psychologists Paul Rozin and Edward Royzman of the University of Pennsylvania invoked the term ‘negativity bias’ to reflect their finding that negative events are especially contagious. The Penn researchers give the example of brief contact with a cockroach, which ‘will usually render a delicious meal inedible’, as they say in a 2001 paper. ‘The inverse phenomenon – rendering a pile of cockroaches on a platter edible by contact with one’s favourite food – is unheard of. More modestly, consider a dish of a food that you are inclined to dislike: lima beans, fish, or whatever. What could you touch to that food to make it desirable to eat – that is, what is the anti-cockroach? Nothing!’ When it comes to something negative, minimal contact is all that’s required to pass on the essence, they argue.

 

Of all the cognitive biases, the negative bias might have the most influence over our lives. Yet times have changed. No longer are we roaming the savannah, braving the harsh retribution of nature and a life on the move. The instinct that protected us through most of the years of our evolution is now often a drag – threatening our intimate relationships and destabilising our teams at work.

 

It was the University of Washington psychologist John Gottman, an expert on marital stability, who showed how eviscerating our dark side could be. In 1992, Gottman found a formula to predict divorce with an accuracy rate of more than 90 per cent by spending only 15 minutes with a newly-wed couple. He spent the time evaluating the ratio of positive to negative expressions exchanged between the partners, including gestures and body language. Gottman later reported that couples needed a ‘magic ratio’ of at least five positive expressions for each negative one if a relationship was to survive. So, if you have just finished nagging your partner over housework, be sure to praise him five times very soon. Couples who went on to get divorced had four negative comments to three positive ones. Sickeningly harmonious couples displayed a ratio of about 20:1 – a boon to the relationship but perhaps not so helpful for the partner needing honest help navigating the world.
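As a worked example of those ratios (thresholds taken straight from the article; the function itself is purely illustrative):

```python
def gottman_outlook(positive, negative):
    """Classify a couple by its positivity ratio; thresholds from the article."""
    ratio = positive / negative
    verdict = "at or above the 'magic ratio'" if ratio >= 5 else "below 5:1 -- at risk"
    return f"{ratio:.1f}:1 -- {verdict}"

print(gottman_outlook(20, 1))   # sickeningly harmonious
print(gottman_outlook(5, 1))    # the magic ratio
print(gottman_outlook(4, 3))    # the pattern Gottman saw in couples who divorced
```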

 

Other researchers applied these findings to the world of business. The Chilean psychologist Marcial Losada, for instance, studied 60 management teams at a large information-processing company. In the most effective groups, employees were praised six times for every time they were put down. In especially low-performing groups, there were almost three negative remarks to every positive one.

 

Losada’s controversial ‘critical positivity ratio’, devised with psychologist Barbara Fredrickson of the University of North Carolina at Chapel Hill and based on complex mathematics, aimed to serve up the perfect formula of 3-6:1. In other words, hearing praise between three and six times as often as criticism, the researchers said, sustained employee satisfaction, success in love, and most other measures of a flourishing, happy life. The paper with the formula, entitled ‘Positive Affect and the Complex Dynamics of Human Flourishing’, was published by the respected journal American Psychologist in 2005.

 

Achieving the critical ratio soon became a major part of the toolkit developed by positive psychology, a recent sub-discipline of psychology focused on enhancing positive measures such as happiness and resilience instead of treating negatives like disorders of the mind. Yet the ratio provoked pushback, starting with Nicholas Brown, a master's student in psychology at the University of East London, who thought the mathematics was bunk. Brown approached the mathematician Alan Sokal, of New York University and the University of London, who helped him dismantle the formula in a paper called ‘The Complex Dynamics of Wishful Thinking: The Critical Positivity Ratio’ (2013). The Fredrickson-Losada paper has since been partially withdrawn – and Fredrickson has disavowed the work in full.

 

Ultimately, there might be no way to extinguish the negative bias of our minds. If we cannot rise above this negativity bias with praise, affirmations, magic formulas and the like, it might be time to embrace the advantage that our negative capability confers – most especially, the ability to see reality straight and, so, to adjust course and survive. In fact, studies show that depressed people may be sadder, but they are also wiser, to evoke the famous words of Samuel Taylor Coleridge. This ‘depressive realism’ gives the forlorn a more accurate perception of reality, especially in terms of their own place in the world and their ability to influence events.

 

When it comes to resolving conflicts on the world stage, the negativity bias must be part of the mix. International disputes are not going to be resolved by positive thinking without a huge dose of realism as well. In the end, we need both perspectives to help us share resources, negotiate peace, and get along. In an article published this June in Behavioral and Brain Sciences, a team led by University of Nebraska-Lincoln political scientist John Hibbing argue that differences between conservatives and liberals can be explained, in part, by their psychological and physiological reactions to negatives in the environment. Compared with liberals, they say, ‘conservatives tend to register greater physiological responses to negative stimuli and also to devote more psychological resources to them’. That might explain why supporters of tradition and stability are so often pitted against supporters of reform, and why the tug-of-war between the two -- the middle ground -- is often where we end up.

 

Last November, Daniel Kahneman gave an interview in Hebrew to the New Israel Fund to mark International Human Rights Day. In it, he addressed the influence that the negativity bias might have on the Israeli-Palestinian peace talks. He claimed that the bias encourages hawkish views (which usually emphasise risk or immediate loss) over dovish proposals (which emphasise the chance of future benefits). The best leaders, he suggested, would offer a vision where ‘future gains’ were great enough to compensate for the risks involved in venturing peace – yet without a magic formula, on both sides of the line, negativity prevailed.

 

 


 

Jacob Burak is the founder of Alaxon, a digital magazine about culture, art and popular science, where he writes regularly. His latest book is How to Find a Black Cat in a Dark Room (2013). He lives in Tel Aviv.

 

http://aeon.co/magazine/psychology/humans-are-wired-for-negativity-for-good-or-ill/



Two Theories That Attempt to Explain "Consciousness" - T. Lewis
Posted by: 胡卜凱

Scientists Closing in on Theory of Consciousness

 

Tanya Lewis, 07/30/14

 

Probably for as long as humans have been able to grasp the concept of consciousness, they have sought to understand the phenomenon.

 

Studying the mind was once the province of philosophers, some of whom still believe the subject is inherently unknowable. But neuroscientists are making strides in developing a true science of the self.

 

Here are some of the best contenders for a theory of consciousness.

 

Cogito ergo sum

 

Not an easy concept to define, consciousness has been described as the state of being awake and aware of what is happening around you, and of having a sense of self.

 

The 17th century French philosopher René Descartes proposed the notion of "cogito ergo sum" ("I think, therefore I am"), the idea that the mere act of thinking about one's existence proves there is someone there to do the thinking.

 

Descartes also believed the mind was separate from the material body -- a concept known as mind-body duality -- and that these realms interact in the brain's pineal gland. Scientists now reject the latter idea, but some thinkers still support the notion that the mind is somehow removed from the physical world.

 

But while philosophical approaches can be useful, they do not constitute testable theories of consciousness, scientists say.

 

"The only thing you know is, 'I am conscious.' Any theory has to start with that," said Christof Koch, a neuroscientist and the chief scientific officer at the Allen Institute for Neuroscience in Seattle.

 

Correlates of consciousness

 

In the last few decades, neuroscientists have begun to attack the problem of understanding consciousness from an evidence-based perspective. Many researchers have sought to discover specific neurons or behaviors that are linked to conscious experiences.

 

Recently, researchers discovered a brain area that acts as a kind of on-off switch for the brain. When they electrically stimulated this region, called the claustrum, the patient became unconscious instantly. In fact, Koch and Francis Crick, the molecular biologist who famously helped discover the double-helix structure of DNA, had previously hypothesized that this region might integrate information across different parts of the brain, like the conductor of a symphony.

 

But looking for neural or behavioral connections to consciousness isn't enough, Koch said. For example, such connections don't explain why the cerebellum, the part of the brain at the back of the skull that coordinates muscle activity, doesn't give rise to consciousness, while the cerebral cortex (the brain's outermost layer) does. This is the case even though the cerebellum contains more neurons than the cerebral cortex.

 

Nor do these studies explain how to tell whether consciousness is present, such as in brain-damaged patients, other animals or even computers.

 

Neuroscience needs a theory of consciousness that explains what the phenomenon is and what kinds of entities possess it, Koch said. And currently, only two theories exist that the neuroscience community takes seriously, he said.

 

Integrated information

 

Neuroscientist Giulio Tononi of the University of Wisconsin-Madison developed one of the most promising theories for consciousness, known as integrated information theory.

 

Understanding how the material brain produces subjective experiences, such as the color green or the sound of ocean waves, is what Australian philosopher David Chalmers calls the "hard problem" of consciousness. Traditionally, scientists have tried to solve this problem with a bottom-up approach. As Koch put it, "You take a piece of the brain and try to press the juice of consciousness out of [it]." But this is almost impossible, he said.

 

In contrast, integrated information theory starts with consciousness itself, and tries to work backward to understand the physical processes that give rise to the phenomenon, said Koch, who has worked with Tononi on the theory.

 

The basic idea is that conscious experience represents the integration of a wide variety of information, and that this experience is irreducible. This means that when you open your eyes (assuming you have normal vision), you can't simply choose to see everything in black and white, or to see only the left side of your field of view.

 

Instead, your brain seamlessly weaves together a complex web of information from sensory systems and cognitive processes. Several studies have shown that you can measure the extent of integration using brain stimulation and recording techniques.

 

The integrated information theory assigns a numerical value, "phi," to the degree of irreducibility. If phi is zero, the system is reducible to its individual parts, but if phi is large, the system is more than just the sum of its parts.
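A toy measure in the spirit of phi, though emphatically not Tononi's actual formula: "total correlation" scores a system at zero when it reduces to independent parts and above zero when the whole carries structure the parts alone do not.

```python
from itertools import product
from math import log2

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """(sum of part entropies) - (whole entropy), for a two-part system
    described by a joint distribution {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return entropy(pa) + entropy(pb) - entropy(joint)

independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}
coupled = {(0, 0): 0.5, (1, 1): 0.5}    # each half always mirrors the other

print(f"independent parts: {total_correlation(independent):.2f} bits")  # 0.00
print(f"coupled whole:     {total_correlation(coupled):.2f} bits")      # 1.00
```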

 

This system explains how consciousness can exist to varying degrees among humans and other animals. The theory incorporates some elements of panpsychism, the philosophy that the mind is not only present in humans, but in all things.

 

An interesting corollary of integrated information theory is that no computer simulation, no matter how faithfully it replicates a human mind, could ever become conscious. Koch put it this way: "You can simulate weather in a computer, but it will never be 'wet.'"

 

Global workspace

 

Another promising theory suggests that consciousness works a bit like computer memory, which can call up and retain an experience even after it has passed.

 

Bernard Baars, a neuroscientist at the Neurosciences Institute in La Jolla, California, developed the theory, which is known as the global workspace theory. This idea is based on an old concept from artificial intelligence called the blackboard, a memory bank that different computer programs could access.

 

Anything from the appearance of a person's face to a memory of childhood can be loaded into the brain's blackboard, where it can be sent to other brain areas that will process it. According to Baars' theory, the act of broadcasting information around the brain from this memory bank is what represents consciousness.
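The blackboard idea itself is easy to sketch in code. The toy Python version below illustrates only the architecture Baars borrowed -- a shared store whose contents are broadcast to every specialist module -- and is not his model; the module names are invented.

class Blackboard:
    # Toy global workspace: whatever is posted to the shared store is
    # broadcast to every subscribed specialist module at once.
    def __init__(self):
        self.subscribers = []

    def subscribe(self, module):
        self.subscribers.append(module)

    def broadcast(self, item):
        # The 'conscious' step in the metaphor: one item becomes
        # globally available to all processors simultaneously.
        for module in self.subscribers:
            module.receive(item)

class Module:
    def __init__(self, name):
        self.name = name

    def receive(self, item):
        print(f"{self.name} received: {item}")

workspace = Blackboard()
for name in ("vision", "memory", "language"):  # hypothetical modules
    workspace.subscribe(Module(name))

workspace.broadcast("face of a friend")  # all three modules now 'know' it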

 

The global workspace and integrated information theories are not mutually exclusive, Koch said. The former tries to explain in practical terms whether something is conscious or not, while the latter seeks to explain how consciousness works more broadly.

 

"At this point, both could be true," Koch said.

 

Tanya Lewis, Live Science

http://www.livescience.com/47096-theories-seek-to-explain-consciousness.html



Who or what is doing the processing?
In reply to: 麥芽糖(myata)


胡卜凱

 
I have no experience with meditation, so I am not qualified to comment on the observation that “emotions are actually processed before reaching the brain.”

 

However, the word “process” here is used as a verb. As such, an agent -- someone, something, or some organ or organs -- has to DO the “processing.”

 

As far as I know, the commonly accepted neurological theory stipulates that our bodily organs other than the brain can only “sense” -- that is, receive and transmit external stimuli (signals, if you will) via neurons, synapses, and associated chemicals. When the stimuli reach the relevant cortical area, the neurons there DO the “processing”; this cortical area (in conjunction with other relevant cortical areas) then issues a response, which in turn is transmitted via another set of neurons, synapses, and associated chemicals to our limbs and/or organs. These limbs and/or organs then carry out, or execute, the said response.

 

There are reported cases in which the first neural path described above is somehow blocked; the person then does not “feel” anything and hence exhibits no response, emotional or otherwise.

 

Now, “commonly accepted” in no way implies that the theory is correct or true. However, it does entail the following:

 

a.     It has not been falsified or proven wrong;

b.     It has produced practical applications benefiting us -- for example, drugs and operations that treat headaches, depression, anxiety, sleeplessness, etc.

 

Unless someone can identify the “who” or “what” that is doing the “processing”, I would take the aforementioned “meditational observation” with a grain of salt.



Emotions are actually processed before reaching the brain
In reply to: 胡卜凱(jamesbkh)


麥芽糖

 

Hard to explain; however, this is what is observed in meditation.

The brain is more complicated: emotional responses are processed before they reach the brain.




Emotions are represented in the brain by a standard code - M. Osgood


胡卜凱

 

Study cracks how the brain processes emotions

 

Melissa Osgood, Media Relations Office, Cornell University, 07/09/14

 

Although feelings are personal and subjective, the human brain turns them into a standard code that objectively represents emotions across different senses, situations and even people, reports a new study by Cornell University neuroscientist Adam Anderson.

 

“We discovered that fine-grained patterns of neural activity within the orbitofrontal cortex, an area of the brain associated with emotional processing, act as a neural code which captures an individual’s subjective feeling,” says Anderson, associate professor of human development in Cornell’s College of Human Ecology and senior author of the study, “Population coding of affect across stimuli, modalities and individuals,” published online in Nature Neuroscience.

 

Their findings provide insight into how the brain represents our innermost feelings – what Anderson calls the last frontier of neuroscience – and upend the long-held view that emotion is represented in the brain simply by activation in specialized regions for positive or negative feelings, he says.

 

“If you and I derive similar pleasure from sipping a fine wine or watching the sun set, our results suggest it is because we share similar fine-grained patterns of activity in the orbitofrontal cortex,” Anderson says.

 

“It appears that the human brain generates a special code for the entire valence spectrum of pleasant-to-unpleasant, good-to-bad feelings, which can be read like a ‘neural valence meter’ in which the leaning of a population of neurons in one direction equals positive feeling and the leaning in the other direction equals negative feeling,” Anderson explains.
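As a rough illustration of the “neural valence meter” metaphor -- and only the metaphor, not the study's actual multivariate fMRI analysis -- the Python sketch below projects a simulated population's activity onto a single hypothetical valence axis; the sign of the projection plays the role of the meter's leaning.

import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100

# Hypothetical fixed 'valence axis' in population-activity space.
valence_axis = rng.standard_normal(n_neurons)
valence_axis /= np.linalg.norm(valence_axis)

def simulate_pattern(valence, noise=1.0):
    # Population activity = a signal along the valence axis plus noise.
    return valence * valence_axis + noise * rng.standard_normal(n_neurons)

def read_meter(pattern):
    # Project the activity onto the axis: positive means pleasant,
    # negative means unpleasant, magnitude means intensity.
    return float(pattern @ valence_axis)

pleasant = simulate_pattern(+2.0)    # e.g., sipping a fine wine
unpleasant = simulate_pattern(-2.0)  # e.g., tasting something bitter
print(read_meter(pleasant))    # close to +2
print(read_meter(unpleasant))  # close to -2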

 

For the study, the researchers presented participants with a series of pictures and tastes during functional neuroimaging, then analyzed participants’ ratings of their subjective experiences along with their brain activation patterns.

 

Anderson’s team found that valence was represented as sensory-specific patterns or codes in areas of the brain associated with vision and taste, as well as sensory-independent codes in the orbitofrontal cortices (OFC), suggesting, the authors say, that representation of our internal subjective experience is not confined to specialized emotional centers, but may be central to perception of sensory experience.

 

They also discovered that similar subjective feelings – whether evoked from the eye or tongue – resulted in a similar pattern of activity in the OFC, suggesting the brain contains an emotion code common across distinct experiences of pleasure (or displeasure), they say. Furthermore, these OFC activity patterns of positive and negative experiences were partly shared across people.

 

“Despite how personal our feelings feel, the evidence suggests our brains use a standard code to speak the same emotional language,” Anderson concludes.

 

Media note: Images and the paper can be downloaded at https://cornell.box.com/Emotion

 

http://mediarelations.cornell.edu/2014/07/09/study-cracks-how-the-brain-processes-emotions/



The effect of chaos on the workings of the brain – K. Clancy


胡卜凱

 

Your Brain Is On the Brink of Chaos

 

Neurological evidence for chaos in the nervous system is growing.

 

Kelly Clancy, 07/10/14

 

In one important way, the recipient of a heart transplant ignores its new organ: Its nervous system usually doesn’t rewire to communicate with it. The 40,000 neurons controlling a heart operate so perfectly, and are so self-contained, that a heart can be cut out of one body, placed into another, and continue to function perfectly, even in the absence of external control, for a decade or more. This seems necessary: The parts of our nervous system managing our most essential functions behave like a Swiss watch, precisely timed and impervious to perturbations. Chaotic behavior has been throttled out.

 

Or has it? Two simple pendulums that swing with perfect regularity can, when yoked together, move in a chaotic trajectory. Given that the billions of neurons in our brain are each like a pendulum, oscillating back and forth between resting and firing, and connected to 10,000 other neurons, isn’t chaos in our nervous system unavoidable?

 

The prospect is terrifying to imagine. Chaos is extremely sensitive to initial conditions -- just think of the butterfly effect. What if the wrong perturbation plunged us into irrevocable madness? Among many scientists, too, there is a great deal of resistance to the idea that chaos is at work in biological systems. Many intentionally preclude it from their models. It subverts computationalism, which is the idea that the brain is nothing more than a complicated, but fundamentally rule-based, computer. Chaos seems unqualified as a mechanism of biological information processing, as it allows noise to propagate without bounds, corrupting information transmission and storage.


 

At the same time, chaos has its advantages. On a behavioral level, the arms race between predator and prey has wired erratic strategies into our nervous system.1 A moth sensing an echolocating bat, for example, immediately directs itself away from the ultrasound source. The neurons controlling its flight fire in an increasingly erratic manner as the bat draws closer, until the moth, darting in fits, appears to be nothing but a tumble of wings and legs. More generally, chaos could grant our brains a great deal of computational power, by exploring many possibilities at great speed.

 

Motivated by these and other potential advantages, and with an accumulation of evidence in hand, neuroscientists are gradually accepting the potential importance of chaos in the brain.

 

Chaos is not the same as disorder. While disordered systems cannot be predicted, chaos is actually deterministic: The present state of the system determines its future. Yet even so, its behavior is only predictable on short time scales: Tiny differences in inputs result in vastly different outcomes. Chaotic systems can also exhibit stable patterns called “attractors” that emerge to the patient observer. Over time, chaotic trajectories will gravitate toward them. Because chaos can be controlled, it strikes a fine balance between reliability and exploration. Yet because it’s unpredictable, it’s a strong candidate for the dynamical substrate of free will.
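The classic logistic map makes this concrete in a few lines of Python: the update rule is fully deterministic, yet two trajectories that start a mere 1e-10 apart become completely uncorrelated within a few dozen steps.

import numpy as np

def logistic_trajectory(x0, r=4.0, steps=60):
    # Iterate the deterministic logistic map x -> r * x * (1 - x),
    # which is chaotic at r = 4.
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a tiny perturbation

for t in (0, 20, 40, 60):
    print(t, abs(a[t] - b[t]))
# The gap grows from 1e-10 to order 1: deterministic, yet predictable
# only on short time scales.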

 

The similarity to random disorder (or stochasticity) has been a thorn in the side of formal studies of chaos. It can be mathematically tricky to distinguish between the two -- especially in biological systems. There are no definite tests for chaos when dealing with multi-dimensional, fluctuating biological data. Walter Freeman and his colleagues spearheaded some of the earliest studies attempting to prove the existence of chaos in the brain, but came to extreme conclusions on limited data. He’s argued, for example, that neuropil, the extracellular mix of axons and dendrites, is the organ of consciousness -- a strong assertion in any light. Philosophers soon latched onto these ideas, taking even the earliest studies at face value. Articles by philosophers and scientists alike can be as apt to quote Jiddu Krishnamurti as Henri Poincaré, and chaos is often handled with a semi-mystical reverence.2, 3

 

As a result, researchers must tread carefully to be taken seriously. But the search for chaos is not purely poetic. The strongest current evidence comes from single cells. The squid giant axon, for example, operates in a resting mode or a repetitive firing mode, depending on the external sodium concentration. Between these extremes, it exhibits unpredictable bursting that resembles the wandering behavior of a chaotic trajectory before it settles into an attractor. When a periodic input is applied, the squid giant axon responds with a mixture of both oscillating and chaotic activity.4 There is chaos in networks of cells, too. The neurons in a patch of rat skin can distinguish between chaotic and disordered patterns of skin stretching.5
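The findings above come from real cells, but the kind of irregular bursting they describe can be reproduced in a standard minimal model, the Hindmarsh-Rose neuron, which is widely used to demonstrate chaotic bursting. The sketch below Euler-integrates it with a commonly quoted chaotic parameter set; the specific values are a conventional choice for illustration, not taken from the studies cited here.

import numpy as np

def hindmarsh_rose(T=20000, dt=0.01, I=3.25):
    # Euler integration of the Hindmarsh-Rose neuron. Near injected
    # current I = 3.25 its bursting is chaotic rather than periodic.
    a, b, c, d, r, s, x_r = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
    x, y, z = -1.0, 0.0, 2.0  # membrane potential, fast recovery, slow adaptation
    xs = np.empty(T)
    for t in range(T):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_r) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[t] = x
    return xs

v = hindmarsh_rose()
# Spikes arrive in bursts whose lengths never settle into a fixed
# pattern -- the signature of chaotic firing.
print(v[:10])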

 

More evidence for chaos in the nervous system can be found at the level of global brain activity. Bizarrely, an apt metaphor for this behavior is an iron slab.6 The electrons it contains can each point in different directions (more precisely, their spins can point). Like tiny magnets, neighboring spins influence each other. When the slab is cold, there is not enough energy to overcome the influence of neighboring spins, and all spins align in the same direction, forming one solid magnet. When the slab is hot, each spin has so much energy that it can shrug off the influence of its neighbor, and the slab’s spins are disordered. When the slab is halfway between hot and cold, it is in the so-called “critical regime.” This is characterized by fluctuating domains of same-spin regions which exhibit the highest possible dynamic correlations -- that is, the best balance between a spin’s ability to influence its neighbors, and its ability to be changed.
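The iron-slab metaphor can be simulated directly. The minimal Metropolis sketch below (with illustrative parameters only) shows the three regimes: magnetization near 1 when cold, near 0 when hot, and large fluctuations near the critical temperature, which for this two-dimensional model is about 2.27 in natural units.

import numpy as np

def ising(T, n=24, sweeps=300, rng=np.random.default_rng(1)):
    # Metropolis simulation of a 2D Ising model with periodic
    # boundaries; returns the mean absolute magnetization.
    s = np.ones((n, n), dtype=int)  # start fully ordered
    for _ in range(sweeps * n * n):
        i, j = rng.integers(n, size=2)
        # Energy change from flipping spin (i, j).
        nb = s[(i+1) % n, j] + s[(i-1) % n, j] + s[i, (j+1) % n] + s[i, (j-1) % n]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return abs(s.mean())

for T in (1.5, 2.27, 4.0):  # cold, near-critical, hot
    print(T, ising(T))
# Cold: |m| near 1 (one solid magnet). Hot: |m| near 0 (disorder).
# Near T_c ~ 2.27 the slab hovers between the two, with large
# fluctuating domains -- the 'critical regime' of the metaphor.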

 

The critical state can be quite useful for the brain, allowing it to exploit both order and disorder in its computations -- employing a redundant network with rich, rapid chaotic dynamics, and an orderly readout function to stably map the network state to outputs. The critical state would be maintained not by temperature, but the balance of neural excitation and inhibition. If the balance is tipped in favor of more inhibition, the brain is “frozen” and nothing happens. If there is too much excitation, it will descend into chaos. The critical point is analogous to an attractor.
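One common toy formalization of this balance -- an illustration, not a claim about actual cortical wiring -- is a branching process in which each active unit triggers, on average, sigma others. Here sigma plays the role of the excitation-inhibition balance, with sigma = 1 as the critical point.

import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(sigma, cap=10**5):
    # Size of one activity cascade: each active unit triggers a
    # Poisson(sigma) number of others in the next generation.
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(sigma * active)
        size += active
    return size

for sigma in (0.7, 1.0, 1.3):  # inhibition-dominated, critical, excitation-dominated
    sizes = [avalanche_size(sigma) for _ in range(1000)]
    print(sigma, np.mean(sizes), max(sizes))
# sigma < 1: cascades die out quickly ('frozen'). sigma > 1: many
# cascades hit the cap (runaway excitation). At sigma = 1, sizes are
# scale-free, spanning several orders of magnitude.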

 

But how can we tell whether the brain operates at the critical point? One clue is the structure of the signals generated by the activity of its billions of neurons. We can measure the power of the brain’s electrical activity at different oscillation frequencies. It turns out that the power of activity falls off as the inverse of the frequency of that activity. Once referred to as 1/f “noise,” this relationship is actually a hallmark of systems balanced at their critical point.7 The spatial extent of regions of coordinated neuronal activity also depends inversely on frequency, another hallmark of criticality. When the brain is pushed away from its usual operating regime using pharmacological agents, it usually loses both these hallmarks,8, 9 and the efficiency of its information encoding and transmission is reduced.10
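The 1/f hallmark is straightforward to check on any recorded signal: estimate its power spectrum and fit the slope of log power against log frequency. The sketch below synthesizes pink noise in place of a real recording (since none is at hand) and recovers a slope near -1.

import numpy as np

rng = np.random.default_rng(3)
n = 2**16

# Synthesize 1/f ('pink') noise by shaping white noise in frequency space.
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
spectrum[1:] /= np.sqrt(freqs[1:])  # amplitude ~ f^(-1/2), so power ~ 1/f
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum, n)

# Estimate the power spectrum and fit its log-log slope.
power = np.abs(np.fft.rfft(signal))**2
mask = freqs > 0
slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
print(slope)  # close to -1: power falls off as the inverse of frequency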

 

The philosopher Gilles Deleuze and psychiatrist Felix Guattari contended that the brain’s main function is to protect us, like an umbrella, from chaos. It seems to have done so by exploiting chaos itself. At the same time, neural networks are also capable of near-perfect reliability, as with the beating heart. Order and disorder enjoy a symbiotic relationship, and a neuron’s firing may wander chaotically until a memory or perception propels it into an attractor. Sensory input would then serve to “stabilize” chaos. Indeed, the presentation of a stimulus reduces variability in neuronal firing across a surprising number of different species and systems,11 as if a high-dimensional chaotic trajectory fell into an attractor. By “taming” chaos, attractors may represent a strategy for maintaining reliability in a sensitive system.12 Recent theoretical and experimental studies of large networks of independent oscillators have also shown that order and chaos can co-exist in surprising harmony, in so-called chimera states.13

 

The current research paradigm in neuroscience, which considers neurons in a snapshot of time as stationary computational units, and not as members of a shifting dynamical entity, might be missing the mark entirely. If chaos plays an important role in the brain, then neural computations do not operate as a static read-out, a lockstep march from the transduction of photons to the experience of light, but a high-dimensional dynamic trajectory as spikes dance across the brain in self-choreographed cadence.

 

While hundreds of millions of dollars are being funneled into building the connectome -- a neuron-by-neuron map of the brain -- scientists like Eve Marder have argued that, due to the complexity of these circuits, a structural map alone will not get us very far. Functional connections can flicker in and out of existence in milliseconds. Individual neurons appear to change their tuning properties over time14, 15 and thus may not be “byte-addressable” -- that is, stably represent some piece of information -- but instead operate within a dynamic dictionary that constantly shifts to make room for new meaning. Chaos encourages us to think of certain disorders as dynamical diseases, epileptic seizures being the most dramatic example of the potential failure of chaos.16 Chaos might also serve as a signature of brain health: For example, researchers reported less chaotic dynamics in the dopamine-producing cells of rodents with brain lesions, as opposed to healthy rodents, which could have implications in diagnosing and treating Parkinson’s and other dopamine-related disorders.17

 

Economist Murray Rothbard described chaos theory as “destroying math from within.” It usurps the human impulse to simplify, replacing the clear linear relationships we seek in nature with the messy and unpredictable. Similarly, chaos in the brain undermines glib caricatures of human behavior. Economists often model humans as “rational agents”: hedonistic calculators who act for their future good. But we can’t really act out of self-interest -- though that would be a reasonable thing to do -- because we are terrible at predicting what that is. After all, how could we? It’s precisely this failure that makes us what we are.

 

Kelly Clancy studied physics at MIT, then worked as an itinerant astronomer for several years before serving with the Peace Corps in Turkmenistan. As a National Science Foundation fellow, she recently finished her PhD in biophysics at the University of California, Berkeley. She will begin her postdoctoral research at Biozentrum in Switzerland this fall.

 

References

 

1. Humphries, D.A. & Driver, P.M. Protean defence by prey animals. Oecologia 5, 285–302 (1970).

2. Abraham, F.D. Chaos, bifurcations, and self-organization: dynamical extensions of neurological positivism. Psychoscience 1, 85-118 (1992).

3. O’Nuallain, S. Zero power and selflessness: what meditation and conscious perception have in common. Cognitive Science 4, 49-64 (2008).

4. Korn, H. & Faure, P. Is there chaos in the brain? II. Experimental evidence and related models. Comptes Rendus Biologies 326, 787–840 (2003).

5. Richardson, K.A., Imhoff, T.T., Grigg, P. & Collins, J.J. Encoding chaos in neural spike trains. Physical Review Letters 80, 2485–2488 (1998).

6. Beggs, J.M. & Timme, N. Being critical of criticality in the brain. Frontiers in Physiology 3, 1–14 (2012).

7. Bak, P., Tang, C. & Wiesenfeld, K. Self-organized criticality: an explanation of 1/f noise. Physical Review Letters 59, 381–384 (1987).

8. Mazzoni, A. et al. On the dynamics of the spontaneous activity in neuronal networks. PLoS ONE 2 e439 (2007).

9. Beggs, J.M. & Plenz, D. Neuronal avalanches in neocortical circuits. Journal of Neuroscience 23, 11167–11177 (2003).

10. Shew, W.L., Yang, H., Yu, S., Roy, R. & Plenz, D. Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. Journal of Neuroscience 31, 55–63 (2011).

11. Churchland, M.M. et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience 13, 369–378 (2010).

12. Laje, R. & Buonomano, D.V. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature Neuroscience 16, 925–933 (2013).

13. Kuramoto, Y. & Battogtokh, D. Coexistence of coherence and incoherence in nonlocally coupled phase oscillators: a soluble case. Nonlinearity 26, 2469-2498 (2002).

14. Margolis, D.J. et al. Reorganization of cortical population activity imaged throughout long-term sensory deprivation. Nature Neuroscience 15, 1539–1546 (2012).

15. Ziv, Y. et al. Long-term dynamics of CA1 hippocampal place codes. Nature Neuroscience 16, 264–266 (2013).

16. Schiff, S.J. et al. Controlling chaos in the brain. Nature 370, 615–620 (1994).

17. di Mascio, M., di Giovanni, G., di Matteo, V. & Esposito, E. Decreased chaos of midbrain dopaminergic neurons after serotonin denervation. Neuroscience 92, 237–243 (1999).

 

http://nautil.us/issue/15/turbulence/your-brain-is-on-the-brink-of-chaos



Where consciousness may reside in the brain - K. M. Stiefel


胡卜凱

 

Is the Claustrum the Key to Consciousness?

 

Klaus M. Stiefel, 05/27/14

 

Editor's Note: This article was originally published at The Conversation.

 

Consciousness is one of the most fascinating and elusive phenomena we humans face. Every single one of us experiences it but it remains surprisingly poorly understood.

 

That said, psychology, neuroscience and philosophy are currently making interesting progress in the comprehension of this phenomenon.

 

The main player in this story is something called the claustrum. The word originally described an enclosed space in medieval European monasteries but in the mammalian brain it refers to a small sheet of neurons just below the cortex, and possibly derived from it in brain development.

 

The cortex is the massive folded layer on top of the brain mainly responsible for many higher brain functions such as language, long-term planning and our advanced sensory functions.

 

[Figure: The location of the claustrum (blue) and the cingulate cortex (green), another brain region likely to act as a global integrator. The person whose brain is shown is looking to the right. Source: Brain Explorer, Allen Institute for Brain Science]

 

Interestingly, the claustrum is strongly reciprocally connected to many cortical areas. The visual cortex (the region involved in seeing) sends axons (the connecting “wires” of the nervous system) to the claustrum, and also receives axons from the claustrum.

 

The same is true for the auditory cortex (involved in hearing) and a number of other cortex areas. A wealth of information converges in the claustrum and leaves it to re-enter the cortex.

 

The connection

 

Francis Crick – who together with James Watson gave us the structure of DNA – was interested in a connection between the claustrum and consciousness.

 

In a recent paper, published in Frontiers in Integrative Neuroscience, we have built on the ideas he described in his very last scientific publication.

 

Crick and co-author Christof Koch argued that the claustrum could be a coordinator of cortical function and hence a “conductor of consciousness”.

 

Such percepts as colour, form, sound, body position and social relations are all represented in different parts of the cortex. How are they bound into a unified experience of consciousness? Wouldn’t a region exerting (even limited) central control over all these cortical areas be highly useful?

 

This is what Crick and Koch suggested when they hypothesised the claustrum to be a “conductor of consciousness”. But how could this hypothesis about the claustrum’s role be tested?

 

Plant power alters the mind

 

Enter the plant Salvia divinorum, a type of mint native to Mexico. Priests of the Mazatec civilisation would chew its leaves to get in touch with the gods.

 

It’s a powerful psychedelic, but not of the usual type. Substances such as LSD and psilocybin (the active compound in “magic” mushrooms) mainly act by binding to the serotonin neuromodulator receptor proteins.

 

It is not completely understood how these receptors bring about altered states of consciousness, but a reduction of the inhibitory (negative feedback) communication between neurons in the cortex likely plays a role.

 

In contrast, Salvia divinorum acts on the kappa-opiate receptors. These are structurally related to, but their activation has quite different effects from, the mu-opiate receptors, which bind substances such as morphine and heroin.

 

In contrast to the mu-opiate receptors, which are involved in the processing of pain, the role of the kappa-opiate receptors is somewhat poorly understood.

 

Where are these kappa-opiate receptors located in the brain? You might have guessed it: they are most densely concentrated in the claustrum (and present at lower densities in a number of other brain regions, such as the frontal cortex and the amygdala).

 

So, the activity of Salvia likely inhibits the claustrum via its activation of the kappa-opiate receptors. Consuming Salvia might just cause the inactivation of the claustrum necessary to test Crick and Koch’s hypothesis.

 

Any volunteers?

 

Did we administer this psychedelic to a group of volunteers to then record their hallucinations and altered perceptions? Well, no. To get ethics approval for such an experiment with a substance outlawed in Australia would be near impossible.

 

While Salvia is not known to be toxic or addictive, the current societal climate is not very sympathetic towards psychoactive substances other than alcohol.

 

But fortunately we had an alternative. The website Erowid.org hosts a database of many thousands of trip reports, submitted by psychedelic enthusiasts, describing, often in considerable detail, what went on in their minds when they consumed a wide selection of substances.

 

We analysed trip reports from this website written by folks who had consumed Salvia divinorum and, for comparison, LSD.

 

We found that subjects consuming Salvia were more likely to experience a few select psychological effects (a toy sketch of this kind of frequency comparison follows the list):

 

-   they were more likely to believe they were in an environment completely different from the physical space they were actually in

-   they often believed they were interacting with “beings” such as hallucinated dead people, aliens, fairies or mythical creatures

-   they often reported “ego dissolution”, a variety of experiences in which the self ceased to exist in the user’s subjective experience.
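The sketch below is a toy version of this kind of comparison -- counting how often each effect is mentioned in each substance's reports -- and not the paper's actual pipeline; the report snippets and keyword patterns are invented placeholders.

import re

# Invented placeholder snippets standing in for Erowid trip reports.
salvia_reports = [
    "I was suddenly in a completely different place, talking to beings.",
    "My sense of self dissolved entirely.",
    "Entities pulled me into another world.",
]
lsd_reports = [
    "Colors breathed and patterns crawled across the wall.",
    "Intense visuals, but I knew where and who I was.",
    "Geometric distortions everywhere.",
]

effects = {
    "altered environment": r"different place|another world",
    "beings": r"\bbeings?\b|entit(y|ies)",
    "ego dissolution": r"self dissolved|ego",
}

def frequency(reports, pattern):
    # Fraction of reports mentioning the effect at least once.
    return sum(bool(re.search(pattern, r, re.I)) for r in reports) / len(reports)

for name, pattern in effects.items():
    print(name,
          "salvia:", frequency(salvia_reports, pattern),
          "lsd:", frequency(lsd_reports, pattern))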

 

… and this means?

 

Altered surroundings, other beings and ego dissolution – this surely hints at a disturbance of the “conductor of consciousness”, as expected if the conductor claustrum is perturbed by Salvia divinorum.

 

If a region central to the integration of consciously represented information is disturbed in its function, we would expect fundamental disturbances in the conscious experience. The core of a person’s consciousness seems to be altered by Salvia divinorum, rather than merely some distortions of vision or audition.

 

We believe that the psychological effects of Salvia divinorum, together with the massive concentration of the kappa-opiate receptors (the target molecules of Salvia divinorum) in the claustrum support its role as a central coordinator of consciousness.

 

It’s worth noting that our results were not black-and-white. The users of LSD also experienced (albeit to a lesser degree) translation into altered environments, fairies and ego dissolution.

 

This, together with a review of the literature, convinced us that the claustrum is one of the conductors of consciousness, with the cingulate cortex and the pulvinar likely being the others.

 

Still, the claustrum appears to be special in the brain’s connectivity and we think that Salvia can inactivate it. We hope that the experimental neuroscience community will take advantage of the window into the mind which this unique substance provides.

 

http://www.realclearscience.com/articles/2014/05/27/is_the_claustrum_the_key_to_consciousness_108673.html

 

-- See the original page for the accompanying images.


