Brain neuroscience: general research - opening post

胡卜凱

My interest in brain neuroscience dates back to the 1980s.

My earliest motivation was to answer two questions: 'Does behaviour need norms?' and, if it does, 'What should those norms be?' I gradually came to realise that both are really questions about decision-making. As an engineer, I naturally understood that decisions rest on knowledge, so my reading expanded from ethics and sociology to epistemology. Around 1982 I read my first book introducing cognitive science, and from then on brain neuroscience became my main reading subject. Since 2000, the basic assumptions underlying my arguments have all drawn on my rudimentary understanding of this discipline. 《唯物人文觀》(2006) was my first attempt to integrate my understanding of brain neuroscience with my understanding of the humanities and social sciences.

Recently I have been thinking about consolidating the major topics this city has discussed and reported on, and brain neuroscience is naturally among them. Once I finish the two articles on 'culture' that I am currently working on, I will turn to 'consciousness'.

The brain goes through 5 growth stages in a lifetime - Laura Baisas
胡卜凱

Your brain changes at 9, 32, 66, and 83

Brain scans of 3,802 people show how the brain’s structure changes at four major turning points.

Laura Baisas, 11/25/25

The brain’s structure changes in spurts, according to a new study.

A team of neuroscientists at the University of Cambridge in the United Kingdom identified five broad phases of 
brain structure over the course of an average human life. These eras occur as the human brain rewires to support the different ways of thinking while we grow, mature, and eventually decline. The five major turning points are detailed in a study published today in the journal Nature Communications.

In the study, they compared the brains of 
3,802 people between ages zero and 90, using datasets of MRI diffusion scans. These types of MRIs map neural connections by following how water molecules move through brain tissue. They detected five broad phases of brain structure in the average human life that are split up by four pivotal turning points between birth and death when our brains reconfigure.

The major turning points occur at ages:

*  Nine (Childhood brain architecture)
*  32 (Adulthood brain architecture)
*  66 (Early aging)
*  83 (Late aging)
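
To make the idea of a 'turning point' concrete: it can be read as the age at which the trend of some brain-wiring metric changes direction. The Python sketch below is only an illustration of that notion, using invented data and a naive split-and-fit approach; it is not the method used in the Nature Communications study.

```python
# Illustrative sketch only: find the age where a metric's trend changes most.
# The data and the split-and-fit approach are assumptions for illustration,
# not the analysis used in the study described above.
import numpy as np

rng = np.random.default_rng(0)
ages = np.arange(0, 91)                      # ages 0..90
# synthetic "network refinement" curve: rises until ~32, then slowly declines
metric = np.where(ages < 32, ages / 32.0, 1.0 - (ages - 32) / 120.0)
metric += rng.normal(0, 0.02, ages.size)     # measurement noise

def turning_point(x, y):
    """Return the split age where the left/right linear slopes differ most."""
    best_age, best_gap = None, -np.inf
    for i in range(5, len(x) - 5):            # keep a few points on each side
        left = np.polyfit(x[:i], y[:i], 1)[0]     # slope before the split
        right = np.polyfit(x[i:], y[i:], 1)[0]    # slope after the split
        gap = abs(left - right)
        if gap > best_gap:
            best_age, best_gap = x[i], gap
    return best_age

print("strongest slope change near age", turning_point(ages, metric))
```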

“We know the brain’s wiring is crucial to our development, but we lack a big picture of how it changes across our lives and why,” study co-author and neuroscientist 
Dr. Alexa Mousley said in a statement. “This study is the first to identify major phases of brain wiring across a human lifespan. These eras provide important context for what our brains might be best at, or more vulnerable to, at different stages of our lives. It could help us understand why some brains develop differently at key points in life, whether it be learning difficulties in childhood, or dementia in our later years.”

Age nine–From baby to kid

From infancy through early childhood, the brain is defined by network consolidation. The connectors between neurons, called synapses, that were overproduced in a baby’s brain are whittled down. The more active synapses survive, shaping the brain’s early architecture.

Across the whole brain, these connections rewire in the same pattern from birth until about nine years old. Meanwhile, the brain’s grey and white matter grow rapidly in volume.

The childhood brain runs from birth up until a turning point at the age of nine. Here, the brain is experiencing a change in cognitive capacity, but also an increased 
risk of mental health disorders.

Age 32–Adult brain takes shape

In the early 30s, the brain’s neural wiring shifts into adult mode. 
White matter continues to grow in volume, and the brain’s communication networks become increasingly refined, as shown by MRI scans tracking how water molecules move. These changes keep the brain at an enhanced level of cognitive performance that peaks in the early 30s; this is the brain’s “strongest topological turning point” of the entire lifespan, according to the team.

“Around the age of 32, we see the most directional changes in wiring and largest overall shift in trajectory, compared to all the other turning points,” said Mousley. “While puberty offers a clear start, the end of adolescence is much harder to pin down scientifically. Based purely on neural architecture, we found that adolescent-like changes in brain structure end around the early thirties.”

Adulthood is the longest era, spanning more than three decades. The brain’s architecture also stabilizes compared to previous phases, without any major turning points for the next 30 years.
According to the team, this corresponds with a “plateau in intelligence and personality.”

All Eras: representative MRI tractography images of all eras of the human brain. Image: Dr. Alexa Mousley, University of Cambridge
See the original page for the MRI images of the brain's five growth stages.

Age 66–Early aging begins

This mid-60s turning point marks the start of an “early aging” phase of brain architecture. It’s a milder period and is not defined by any major structural shifts. However, the team still uncovered meaningful changes to the
pattern of brain networks on average at around age 66.

“The data suggest that a gradual reorganisation of brain networks culminates in the mid-sixties,” said Mousley. “This is probably related to aging, with further reduced connectivity as white matter starts to degenerate. This is an age when people face increased risk for a variety of health conditions that can affect the brain, such as hypertension.”

Age 83–Late aging

The last turning point comes around age 83. The data for this final era is 
more limited, but the defining feature is a shift from global to local. Whole-brain connectivity declines even further, and the brain relies more heavily on certain regions as others fade.

“Looking back, many of us feel our lives have been characterised by different phases. It turns out that brains also go through these eras,” added study co-author and neuroscientist Duncan Astle. “Many neurodevelopmental, mental health, and neurological conditions are linked to the way the brain is wired. Indeed, differences in brain wiring predict difficulties with attention, language, memory, and a whole host of different behaviours.”

Understanding that our brain’s structural journey is generally one of a few major turning points instead of a steady progression can help neuroscientists better identify when and how the wiring is more vulnerable.


How brains and sensory perception differ from person to person - Gary Lupyan
胡卜凱

What colour do you see?

New research is uncovering the hidden differences in how people experience the world. The consequences are unsettling

Gary Lupyan, 12/12/23

On 26 February 2015, Cates Holderness, a BuzzFeed community manager, posted a picture of a dress, captioned: ‘There’s a lot of debate on Tumblr about this right now, and we need to settle it.’ The post was accompanied by a poll that racked up millions of votes in a matter of days. About two-thirds of people saw the dress as white and gold. The rest, as blue and black. The comments section was filled with bewildered calls to ‘go check your eyes’ and all-caps accusations of trolling.

Vision scientists were quick to point out that the difference in appearance had to do with the ambiguity of ambient light in the photograph. If the visual system resolved the photograph as being taken indoors with its warmer light, the dress would appear blue and black; if outdoors, white and gold. That spring, the annual Vision Sciences Society conference had a live demo of the actual dress (blue and black, for the record) lit in different ways to demonstrate the way the difference of ambient light shifted its appearance. But none of this explains why the visual systems of different people would automatically infer different ambient light (one predictive factor 
seems to be a person’s typical wake-up time: night owls have more exposure to warmer, indoor light).

Whatever the full explanation turns out to be, it is remarkable that this type of genuine difference in visual appearance could elude us so completely. Until #TheDress went viral, no one, not even vision scientists, had any idea that these specific discrepancies in colour appearance existed. This is all the more remarkable considering how easy it is to establish this difference. In the case of #TheDress, it’s as easy as asking ‘What colours do you see?’ If we could be oblivious to such an easy-to-measure difference in subjective experience, how many other such differences might there be that can be discovered if only we know where to look and which questions to ask?

Take the case of Blake Ross, the co-creator of the Firefox web browser. For the first three decades of his life, Ross assumed his subjective experience was typical. After all, why wouldn’t he? Then he read a popular science story about people who do not have visual imagery. While most people can, without much effort, form vivid images in their ‘mind’s eye’, others cannot – a condition that has been documented since the 1800s but only recently named: 
aphantasia. Ross learned from the article that he himself had aphantasia. His reaction was memorable: ‘Imagine your phone buzzes with breaking news: WASHINGTON SCIENTISTS DISCOVER TAIL-LESS MAN. Well, then, what are you?’

Ross went on to ask his friends about what it’s like for them when they imagine various things, quickly realising that, just as he took his lack of imagery as a fact of the human condition, they similarly took their presence of visual imagery as a given. ‘I have never visualised anything in my entire life,’ Ross wrote in Vox in 2016. ‘I can’t “see” my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on 10 minutes ago… I’m 30 years old, and I never knew a human could do any of this. And it is blowing my goddamn mind.’

There is a kind of visceral astonishment that accompanies these types of hidden differences. We seem wedded to the idea that we experience things a certain way because they are that way. Encountering someone who experiences the world differently (even when that difference seems trivial, like the colour of a dress) means acknowledging the possibility that our own perception could be ‘wrong’. And if we can’t be sure about the colour of something, what else might we be wrong about? Similarly, for an aphantasic to acknowledge that visual imagery exists is to realise that there is a large mismatch between their subjective experiences and those of most other people.

Studying hidden differences like these can enrich our scientific understanding of the mind. It would not occur to a vision scientist to ask whether being a night owl might have an impact on colour perception, but a bunch of people on the internet comparing notes on how they saw a dress inspired just such a study. The study of aphantasia is helping us understand ways in which people lacking imagery can accomplish the same goals (like remembering the visual details of their living room) without using explicit imagery. How many other such examples might there be once we start looking? There is also, arguably, a moral imperative for us to study and understand these kinds of differences because they help us understand the various ways of being human and to empathise with these differences. It’s a sobering thought that a person might respond differently to a situation not just because they have a different opinion about what to do or are in possession of different knowledge, but because their experience of the situation is fundamentally different.

For most of my research career, I didn’t really care about individual differences. Like most other cognitive scientists, my concern was with manipulating some factor and looking to see how this manipulation affected the group average. In my case, I was interested in the ways that typical human cognition and perception is augmented by language. And so, in a typical experiment, I would manipulate some aspect of language. For example, I examined whether learning names for novel objects changed how people categorised, remembered and perceived them. These were typical group-effect studies in which we compare how people respond to some manipulation. Of course, with any such study, different people respond in different ways, but the focus is on the average response.

For 
example, hearing ‘green’ helps (most) people see the subtle differences between more-green and less-green colour patches. Interfering with language by having people do a concurrent verbal task makes it harder for (most) people to group together objects that share a specific feature, such as being of a similar size or colour. But most people aren’t everyone. Could it be that some people’s colour discrimination and object categorisation are actively aided by language, but other people’s less so? This thought led us to wonder if this could be another hidden difference, much like aphantasia. In particular, we began to look at inner speech, long thought to be a universal feature of human experience.

Most people report having an inner voice. For example, 83 per cent (3,445 out of 4,145 people in our sample) ‘agree’ or ‘strongly agree’ with the statement ‘When I read I tend to hear a voice in my mind’s ear.’ A similar proportion – 80 per cent – ‘agree’ or ‘strongly agree’ with the statement ‘I think about problems in my mind in the form of a conversation with myself.’ This proportion goes up even more when asked about social problems: 85 per cent ‘agree’ or ‘strongly agree’ with the statement ‘When thinking about a social problem, I often talk it through in my head.’

But 85 per cent is hardly everyone. What about those who disagree with these statements? Some of them report experiencing an inner voice only in specific situations. For example, when it comes to reading, some say that they hear a voice only if they deliberately slow down or are reading something difficult. But a small percentage (2-5 per cent) report never experiencing an inner voice at all. Like those with aphantasia who assume their whole lives that visual imagery is just a metaphor, those with anendophasia – a term Johanne Nedergaard and I 
coined to refer to the absence of inner speech – assume that those inner monologues so common in TV shows are just a cinematic device rather than something that people actually experience. People with anendophasia report that they never replay past conversations and that, although they have an idea of what they want to say, they don’t know what words will come out of their mouths until they start talking.

It is tempting to think that there is a trade-off between thinking using language and thinking using imagery. Take the widespread idea that people have different ‘learning styles’, some being visual learners and others verbal learners (it turns out this idea is 
largely incorrect). When it comes to imagery and inner speech, what we find is a moderate positive correlation between vividness of visual imagery and inner speech. On average, those who report having more visual imagery also report experiencing more inner speech. Most who claim to not experience inner speech also report having little imagery.

This raises the question of what their thoughts feel like to them. When we have asked, we tend to get answers that are quite vague, for example: ‘I think in ideas’ and ‘I think in concepts.’ We have lots of language at our disposal that we can use to talk about perceptual properties (especially visual ones) and, of course, we can use language to talk about language. So it is not really surprising that people have trouble conveying what thoughts without a perceptual or linguistic format feel like. But the difficulties in expressing these types of thoughts using language don’t make them any less real. They merely show that we have to work harder to better understand what they are like.

Differences in visual imagery and inner speech are just the tip of the iceberg. Other hidden differences include synaesthesia, Greek for ‘union of the senses’, in which people 
hear lights or taste sounds, and Eigengrau, a German word for the ‘intrinsic grey’ we see when we close our eyes. Except not all of us experience Eigengrau. About 10 per cent in our samples claim their experience is nothing like Eigengrau. Instead, when they close their eyes, they report seeing colourful patterns or a kind of visual static noise, like an analogue TV not tuned to a channel.

Our memory, too, seems to be the subject of larger differences than anyone expected. In 2015, the psychologist Daniela Palombo and colleagues published a
paper describing ‘severely deficient autobiographical memory’ (SDAM). A person with SDAM might know that they went on a trip to Italy five years ago, but they cannot retrieve a first-person account of the experience: they cannot engage in the ‘mental time travel’ that most of us take for granted. As in other cases of hidden differences, these individuals tend not to realise they are unusual. As Claudia Hammond wrote for the BBC about Susie McKinnon, one of the first described cases of SDAM, she always ‘assumed that when people told in-depth stories about their past, they were just making up the details to entertain people.’

What is it about differences in imagery, inner speech, synaesthesia and memory that render them hidden? It is tempting to think that it’s because we don’t directly observe them. We can see that someone is a really fast runner. But having direct access only to our own reality, how are we to know what another person imagines when they think of an apple, or whether they hear a voice when they read? Still, while we can’t directly experience another person’s reality, we can compare notes by talking about it. Often, it’s remarkably easy: for #TheDress, we just needed to ask one another what colours we see. We can also ask whether letters always appear in colour (a grapheme-colour synaesthete will say yes; others will say no). People without imagery will tell you they cannot visualise an apple, and those without inner speech will say they do not have silent conversations with themselves. It is not actually difficult to discover these differences once we start systematically studying them.

Paradoxically, although language is what allows us to compare notes and learn about differences between our subjective experiences, its power to abstract may also cause us to overlook these differences because the same word can mean many different things. We use ‘imagine’ to refer to forming an image in the mind’s eye, but we also use it when referring to more abstract activities like imagining a hypothetical future. It is perfectly reasonable for an aphantasic to not realise that, in some cases, people use ‘imagine’ to mean actually forming mental images that have a perceptual reality.

Much of our understanding of hidden differences relies on people’s self-report. Can we trust it? Modern psychology is sceptical about self-report, a scepticism I’ve inherited as part of my academic training. Recent reports of large individual differences in imagery and inner speech have often been accompanied by incredulity. How do we know that these differences reflect something real? Can we really just take people at their word when they say they don’t have an inner voice?

Before tackling the more complex question of whether we should trust self-reports about internal subjective states like imagery and inner speech, let’s consider some simpler cases. When someone says they dislike cauliflower, they are reporting on their subjective experience, and we tend to take them at their word. But we don’t have to. We can easily set up an experiment where we observe how likely they are to eat cauliflower when given alternatives. It would be surprising if someone claimed to not like cauliflower but chose to eat it at every opportunity. There are, of course, cases where such ‘stated-vs-revealed preference gaps’ occur. Many researchers have made their careers studying these gaps. For example, if one lives in a culture where cauliflower-eating is associated with higher status, people may be compelled to say they like it even though they don’t. Conversely, someone might eat cauliflower only to avoid offending their host. Such situations call for caution in interpreting people’s preferences – both stated and revealed – but they do not negate the observation that, in ordinary circumstances, taking people at their word regarding their preferences is a very good guide to their behaviour.

Let’s take another case. You are in a shared office and your office-mate says they feel cold when the thermostat is set to 72°F (22°C). Do you take them at their word, or do you say ‘But 72°F is the proper indoor temperature? How can you feel cold?’ Suppose you take measurements of their skin temperature, core temperature, even an fMRI scan showing activation of their insula. None of these would allow you to claim that they don’t feel cold. None of these measures would negate their self-report. If one was concerned about hypothermia, relying on objective measurements may well be appropriate but, if the goal is to understand what a person feels, self-report trumps objective measurement.

The same logic applies to other inherently subjective states such as loneliness, pain and awe. To measure loneliness, it is not sufficient to count how many people someone talks to or is friends with because one person’s active social life may be another person’s depth of loneliness. We can tell if there is a flu epidemic by using objective tests, but diagnosing a ‘loneliness epidemic’ requires taking into account whether people feel lonely. This is also why, despite all the available technology we have to measure people’s physiological states, when it comes to pain, we continue to rely on pain scales, a simple form of self-report. If we take introspective judgments seriously when it comes to preferences, emotion and pain, why would we be more sceptical about them in cases of phenomenal differences such as imagery and inner speech?

One possibility is that we are able to reliably introspect about some things and not others. Perhaps we can reliably report on ‘basic’ states like pain and whether we like cauliflower (though, even here, there may well be differences in people’s ability to self-report), but in other cases our introspection fails. For example, most people
think they are above-average drivers – one of the many examples of the so-called ‘Lake Wobegon Effect’. We can also be wrong in the other direction. In a typical implicit learning study, participants are exposed to sequences of flashing lights, sounds or shapes that obey a certain rule. They subsequently have to identify whether new sequences obey the same rule or not. Participants often feel like they are just guessing, that is, they think they have not learned anything. Their behaviour, however, can be far above chance level, indicating that they in fact have learned something. In such cases, the ‘incorrect’ self-report is still informative: it gives us insight into the person’s subjective reality (they think they are in the 80th percentile of driving ability, they think they are just guessing, they think they haven’t learned something that they, in fact, have). But at the same time, these self-reports do not reflect objective reality. They are poor guides to predicting what a person can or is likely to do.
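
The contrast between 'feeling like guessing' and performing above chance is easy to make concrete with a quick calculation. The sketch below is a hypothetical illustration (the trial counts are invented, and this is not any particular study's analysis): it computes how unlikely a given score would be if a participant truly were guessing.

```python
# Hedged illustration: exact binomial tail probability for above-chance performance.
# The numbers (100 two-alternative trials, 65 correct) are hypothetical.
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more successes in n trials under guessing rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials, n_correct = 100, 65
print(f"P(>= {n_correct}/{n_trials} by pure guessing) = {p_at_least(n_correct, n_trials):.4f}")
# ~0.002: a score this high is very unlikely if nothing was learned,
# even when the participant reports feeling like they were guessing.
```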

Lastly, consider dreams. In a 1958 
survey, Fernando Tapia and colleagues reported that only about 9 per cent of respondents indicated that their dreams contained colour. Other surveys done around this time reported similarly low proportions. A decade later, the tide turned and a large majority reported dreaming in colour. The philosopher Eric Schwitzgebel considers several explanations for this discrepancy. One possibility is that black-and-white photographs and television changed the content of dreams. As colour TV came to dominate, colour returned to people’s dreams (‘returned’ because, in a few studies from the more distant past, people did not claim to dream in black and white).

The problem with this is that there is no reason to think TV should have such an outsized impact on the phenomenology of our dreams. After all, the world never ceased to be in colour. The alternative, argues Schwitzgebel, is that ‘at least some people must be pretty badly mistaken about their dreams.’ Our ability to report on the perceptual content of our dreams may simply be unreliable. And with no objective measures against which to measure the subjective report, we can’t really know whether these reports reflect any reality, subjective or not. Why then would there be any consistency in people’s reports from a given time? Perhaps because, in the absence of having good access to their phenomenal states, people go with the response they think is most reasonable. In the 1950s, the dominant popular and scientific view was that dreams lack colour. And so, when queried, participants simply mirrored that dominant view. The same happened as the dominant view later changed. Neither case, Schwitzgebel argues, reflects ‘correct’ phenomenology because we simply do not have valid introspection when it comes to the colour of our dreams.

If reports about phenomenal states like imagery and inner speech are like reports about dreams, we have every reason to remain sceptical of whether differences in introspective reports reflect actual differences in people’s experiences. If they are more like reports about our preferences and emotions, then we can (mostly) take people at their word. Even then, we must consider social pressures to respond in a certain way. If having vivid imagery were a requirement for admission to art school, we should not be surprised if aspiring artists all claim to have very vivid imagery. If hearing a voice when one reads were considered a sign of mental illness, people would be less likely to say they hear a voice when they read.

Establishing the validity of self-report can be done in several ways.

First, we must show consistency. If one day people claimed they experience inner speech constantly and the next day they claimed they never did, we have a problem. As it turns out, people’s reports are highly consistent. Inner speech questionnaires taken months apart show high correlations. (At the same time, Russell Hurlburt’s 
work using descriptive experience sampling, which probes people’s thinking at random points during the day, does show that people overestimate how much of their thinking is in the form of inner speech.)

We can also see whether differences in reported phenomenology predict differences in objective behaviour. This is not an option when it comes to dreams, but we can make specific predictions about behavioural consequences of having more or less visual imagery and inner speech based on existing theories of imagery and language. Differences in self-reported phenomenology can be linked to differences in objective behaviour. Those with less inner speech have a harder time remembering lists of words; those with less visual imagery 
report fewer visual details when describing past events.

There are also reported differences in more automatic physiological responses. More light entering the pupil causes it to constrict. But simply imagining something bright like the Sun also causes (a smaller, but still measurable) constriction. Aphantasics show perfectly typical pupillary responses to actual changes in light. However, their pupils 
do not change to imagined light. At the same time, many hypothesised differences in behaviour are not observed because, it seems, people compensate by, for example, discovering ways of remembering detailed visual content without engaging explicit imagery. Such compensation can prove beneficial. People with poor autobiographical memory find other ways of keeping track of information that can help stave off some of the cognitive decline in ageing.

Another way to establish validity is that we can ask whether there are neural and physiological correlates of reported phenomenal differences. If differences in reported imagery were mere confabulations or the results of people just telling researchers what they think the researchers want to hear, it would be surprising if they had different brain connectivity and functional activation as measured by fMRI. Yet this is what we are finding. Fraser Milton and colleagues 
scanned groups of people identifying as aphantasics and hyperphantasics (those with unusually vivid visual imagery). When participants lay in the scanner and stared at a cross on a screen, the hyperphantasic group showed greater connectivity between the prefrontal cortex and the occipital visual network than the aphantasic group did. Participants were also asked to look at and imagine various famous people and places. The difference in activation between perception and imagery (in a left anterior parietal region) was larger in hyperphantasic compared with aphantasic participants. Those with typical imagery tended to fall in between the aphantasic and hyperphantasic groups on many of the measures. Less is known about neural correlates of differences in inner speech. In work presented at the 2023 meeting of the Society for the Neurobiology of Language, Huichao Yang and colleagues found a relationship between how much inner speech people reported experiencing and resting-state functional connectivity in the language network.

Lastly, even though we don’t know what it’s like to be someone else, we can compare how our phenomenology differs from one time to another. There are numerous
reports of people with brain injuries that cause them to lose visual imagery, and some cases of losing inner speech. It is much harder to brush aside self-reports of someone who says they used to be able to imagine things, and now they can’t (especially when these are confirmed by clear differences in objective behaviour).

Holderness’s caption introducing the world to #TheDress had a second part. ‘This is important,’ she wrote, ‘because I think I’m going insane.’ The idea that the same image can look different to different people is alarming because it threatens our conviction that the world is as we ourselves experience it. When an aphantasic learns that other people can form mental images, they are learning that something they did not know was even a possibility is, in fact, many people’s everyday reality. This is understandably destabilising.

And yet, there is a scientific and moral imperative for learning about the diverse forms of our phenomenology. Scientifically, it prevents us from making claims that the majority experience (or the scientist’s experience) is everyone’s experience. Morally, it encourages us to go beyond the ancient advice to ‘know thyself’ which can lead to excessive introspection, and to strive to know others. And to do that requires that we open ourselves up to the possibility that their experiences may be quite different from our own.


Gary Lupyan is professor of psychology at the University of Wisconsin-Madison, where he researches the effects of language on cognition.

Brain man -- Asif Ghazanfar
胡卜凱

This essay runs to more than 5,600 words, with no section breaks, and its paragraphs are unusually long, so reading it straight through will be fairly painful. Judging from the opening paragraphs, though, the whole piece should be worth a look. On the other hand, the author (at least in my view) tends to over-explain; the side effect of this leisurely, roundabout exposition is reduced readability. Read on if you are so inclined.

Brain man

How can you have a picture of the world when your brain is locked up in your skull? Neuroscientist Dale Purves has clues

Spectrally identical patches can look differently coloured when placed in spectrally different surrounds. The two central targets here are identical. Courtesy 
Purveslab/Duke University
See the original page for the image showing this visual illusion.

Asif Ghazanfar, Edited by Sam Haselby, 10/06/25

Picture someone washing their hands. The water running down the drain is a deep red. How you interpret this scene depends on its setting, and your history. If the person is in a gas station bathroom, and you just saw the latest true-crime series, these are the ablutions of a serial killer. If the person is at a kitchen sink, then perhaps they cut themselves while preparing a meal. If the person is in an art studio, you might find resonance with the struggle to get paint off your hands. If you are naive to crime story tropes, cooking or painting, you would have a different interpretation. If you are present, watching someone wash deep red off their hands into a sink, your response depends on even more variables.

How we act in the world is also specific to our species; we all live in an ‘umwelt’, or self-centred world, in the words of the philosopher-biologist Jakob von Uexküll (1864-1944). It’s not as simple as just taking in all the sensory information and then making a decision.

First, our particular eyes, ears, nose, tongue and skin already filter what we can see, hear, smell, taste and feel. We don’t take in everything. We don’t see ultraviolet light 
as birds do, and we don’t hear infrasound as elephants and baleen whales do.
Second, the size and shape of our bodies determine what possible actions we can take. Parkour athletes – those who run, vault, climb and jump in complex urban environments – are remarkable in their skills and daring, but sustain injuries that a cat doing the exact same thing would not. Every animal comes with a unique bag of tricks to exploit their environment; these tricks are also limitations under different conditions.
Third, the world, our environment, changes. Seasons change, what animals can eat therefore also changes. If it’s the rainy season, grass will be abundant. The amount of grass determines who is around to eat it and therefore who is around to eat the grass-eaters. Ultimately, the challenge for each of us animals is how to act in this unstable world that we do not fully apprehend with our senses and our body’s limited degrees of freedom.
There is a fourth constraint, one that isn’t typically recognised. Most of the time, our intuition tells us that what we are seeing (or hearing or feeling) is an accurate representation of what is out there, and that anyone else would see (or hear or feel) it the same way.

But we all know that’s not true and yet are continually surprised by it. It is even more fundamental than that: you know that seemingly basic sensory information that we are able to take in with our eyes and ears? It’s inaccurate. How we perceive elementary colours, ‘red’ for example, always depends on the amount of light, surrounding colours and other factors. In low lighting, the deep red washing down the sink might appear black. A yellow sink will make it look more orange; a blue sink may make it look violet. If, instead of through human eyeballs, we measured the wavelengths of light coming off the scene with a device called a spectrophotometer, then the wavelength of the light reflected off that ‘blood’ would be the same, no matter the surrounding colours. But our eyes don’t see the world as it really is because our eyes don’t measure wavelengths like a spectrophotometer.

Ulyanovsk, Russia, 1990. Photo by Peter Marlow/Magnum Photos
See the original page for the second image showing a visual illusion.

Dale Purves, the George B Geller Professor of Neurobiology (Emeritus) at Duke University in North Carolina, thinks that, because we can never really see the world accurately, the brain’s primary purpose is to help us make associations to guide our behaviour in a way that, literally, makes sense. Purves sees ‘making sense’ as an active process where the brain produces inferences, of which we are not consciously aware, based on past experiences, to interpret and construct a coherent picture of our surroundings. Our brains use learned patterns and expectations to compensate for our imperfect senses and finite experiences, to give us the best understanding of the world it can.

Purves is the scientist’s scientist. He pursues the questions he’s genuinely interested in and does so with original approaches and ideas. Over the years, he’s changed the subject of his research multiple times, all with the intent of understanding how the brain works, tackling subjects that were new and unfamiliar to him, as opposed to chasing trends and techniques or sticking to a tried-and-true research path. His career is an instance of the claim 
Viktor Frankl makes in Man’s Search for Meaning (1946): ‘For success, like happiness, cannot be pursued; it must ensue, and it only does so as the unintended side-effect of one’s personal dedication to a cause greater than oneself …’ If success is measured by accolades, then it did indeed follow Purves’s pursuits. Among a laundry list of awards and honours, he’s one of the few scientists elected both to the National Academy of Sciences (1989) and to what is now the National Academy of Medicine (1996). Election to either is considered to be among the highest honours that can be bestowed on a scientist in the United States. Nevertheless, if the name ‘Dale Purves’ sounds familiar to you, it is likely because you took a neuroscience course in college, for which the textbook was Neuroscience by Purves et al (one of the most popular and now in its 7th edition). Indeed, this is the text I used when I taught an introductory neuroscience course at Princeton.

Oddly enough, Purves’s passion for neuroscience took time and experience to materialise. As an undergraduate at Yale, he struggled initially but found a major – philosophy – intending to pursue medicine. Purves developed an interest in science but didn’t know anything about being a scientist, and medicine seemed close enough. In 1960, he entered Harvard Medical School thinking he would become a psychiatrist. In his first year, he took a course on the nervous system, taught by young hotshot neuroscientists, some of whom would go on to be among the greats of the 20th century (and whose work is now textbook material): David Potter, Ed Furshpan, David Hubel and Torsten Wiesel (the latter two won the Nobel Prize in 1981, along with Roger Sperry). Purves finished his medical degree in 1964, but he had grown disillusioned with psychiatry. He’d tried switching to general surgery, but realised he lacked the intensity of interest in surgery required to excel.

It was 1965, and the Vietnam War gave him time to think. Purves was drafted but, as a physician, could serve his time as a Peace Corps employee, which he did in Venezuela. There, he said, he came across a book, The Machinery of the Brain (1963) by Dean Wooldridge, that synthesised what he learned years ago in that first-year course on the nervous system. The book was written for the lay reader and shared the current knowledge about the brains of humans and other animals. Its particular angle was to compare the brain to the computer technology of the time. The book reignited Purves’s interest in the brain, which was first piqued in his medical school course. When he returned to the US, Purves would begin again as a researcher in neuroscience.

I was an undergraduate at the University of Idaho when I first read Purves’s work. It was the early 1990s and I was a philosophy major like Purves but was really interested in neuroscience. So I sought hands-on research experience in a professor’s lab in the biology department. We were studying the immune-system markers of motor neurons in the rat’s spinal cord, the ones that connect to leg muscles causing them to contract. Mark DeSantis, my advisor, suggested a book by Purves. At the time, there were few courses or serious books on the brain. Purves’s Body and Brain: A Trophic Theory of Neural Connections (1988) was perfect, just what I needed. Its central thesis is that the survival of neurons making connections, and the number of these connections, is regulated by the targets of those neuronal connections. In essence, he was telling us that, unlike the static circuit board of a computer, which is carefully designed and built according to a plan, the circuits of the nervous system are constructed on the fly in accordance with signals it receives from the targets of its connections. Those targets could be other neurons, organs or muscles in the body. As a result, as the body changes over the course of development in an individual or in the evolution of species, the neural circuits will adjust accordingly. Why is this important? Purves showed us that the brain is more than just a controller of the body: it is also an organ system that is embedded in a 
dynamic relationship with the rest of the body, affected by body size, shape and activity.

Courtesy the Human Connectome Project
See the original page for the schematic image of the brain's neural connectivity network.

One of Purves’s favourite examples to illustrate what sparked this theory is from the work of one of his scientific heroes, Viktor Hamburger. Hamburger had been studying central nervous system development in the 1930s, first at the University of Chicago, then as a faculty member at Washington University in St Louis. Using chicken embryos and those spinal cord motor neurons, Hamburger 
showed that there were more neurons in the developing embryo than there would be in the adult chicken. How could that be? Why were there more neurons in an embryo than actually needed and why did some die off? Hamburger’s idea was that the muscles targeted by those neurons supplied a limited amount of trophic, or nutrient, factors. In essence, the target muscles were producing a ‘food’ (we now call it ‘nerve growth factor’) that kept the neuron alive. The size of the target determined how much food was available and therefore how many neuron lives it could sustain. Exploiting the ease with which chick embryos could be manipulated, Hamburger showed this by first amputating one of the wing buds (ie, the nascent wing). When he did so, the final number of motor neurons on the amputated side was lower than typical, with fewer than on the ‘control’ side of the spinal cord. So, the limb bud was important for the survival of neurons. If that was true, then more of this ‘target tissue’ should save more neurons. Was it possible to ‘rescue’ those extra neurons that would normally die off? To answer this question, Hamburger surgically attached an extra limb bud on one side of the embryo, thereby artificially creating more target tissue. The result: more motor neurons survived on that side of the spinal cord. In both experiments, the size of the connection target – the body, those limb buds specifically – determined the number of neurons that survived. Purves ran with this idea of a dialogic relationship between body and brain.

Around four decades later, in the 1970s and ’80s, Purves was now a young faculty colleague of Hamburger, his elder statesman at Washington University. Here, he took Hamburger’s theory about neuron-to-muscle connections and applied it to neuron-to-neuron connections. While he looked at neuronal cell survival as Hamburger did, Purves also investigated the elimination and elaboration of individual connections that neurons make, their synapses. This was a big leap because Purves was now testing whether or not Hamburger’s findings about death and survival of neurons in the chick embryo were peculiar to that species’ neuron-to-muscle relationship. Was the same process apparent in other developing circuits in other animals? And, if so, if a neuron survives, then is the number of connections (those synapses) also subject to competition for trophic factors?

Neurons come in all shapes and sizes, with different degrees of complexity. If you’ve seen a typical picture of a single neuron then you know it looks like a tree or bush, with a set of root-like branches on one end and a single, long limb-like branch on the other. The latter can branch as well, depending on circumstances. One of those sets of branches – the dendrites – receives inputs from other neurons, and the other – the axon – sends outputs to other neurons. Together with his graduate student Jeff Lichtman, Purves wanted to know how synapse numbers change with 
development and across different species of animals. Purves and Lichtman started with a simple neuron-to-neuron connection, where the receiving neuron had zero dendrites and the sending neuron’s axons made synapses directly on the receiving neuron’s cell body. To see this, they would surgically remove a tight group of functionally similar neurons, known as a ‘ganglion’, from different animals. They would then carefully fill a few individual neurons with a special enzyme. When this enzyme is then given a chemical to react with, it produces a colour. This colouring allows the neuron to be visualised under a microscope in its full glory – all their branches can be seen and counted. (Imagine a microscopic glass-blown tentacled creature being filled with ink and then counting its appendages.)

The end of each branch represents a synaptic connection. Comparing connections in developing rats versus adults, they found that neurons initially received a few synapses from a number of different neurons. In a sense, the circuits were tangled up in young rats. By the time the rats were adults, each neuron had many synapses but only from one neuron – the circuit was untangled. How did this happen? Akin to the process of eliminating extra neurons based on target size, there was a process of elimination of superfluous connections from some neurons. Then there was an additional process of multiplication (or elaboration) of connections coming from the ‘correct’ (so to speak) neuron. In essence, once neurons could find the right partners by getting rid of the less desirable ones, their relationship could blossom in the form of additional synapses. Purves and Lichtman then replicated this basic finding with increasingly complex sets of neurons and in other species.

Before we get lost in the weeds, here’s the bottom line: trophic interactions between neurons match the number of neurons to the target size, and these interactions also regulate how many synapses they make. The grander theory is this: each class of cells in a neural pathway is supporting and regulating the connections it receives by trophic interactions with the cells it’s connected to down the line. Thus, a coordinated chain of connectivity extends from neuronal connections with the body’s muscles and organs to connections among neurons within the brain itself. The brain and body constitute a single, coherent and dynamic network; there is no way to separate them. They depend on each other at every level.

Some artists go through distinct periods in their careers, while others stick to similar themes and approaches in their work for decades. Scientists are the same. Most stick to trying to answer one particular question, getting deeper and deeper, as they figure out more and more details about their subject. Others find that, at some point, they are satisfied with the answers at hand and move on, finding a new question or challenge. Purves is the latter type of scientist, making multiple radical shifts in his scientific research. His important work supporting trophic theory had an obvious direction for continued investigation: using molecular tools to find new ways to visualise synaptic development. Purves was not interested. His research programme up to this time exploited easily manipulated and unambiguously visualised neural circuits in the peripheral nervous system. The brain itself, where all the important action supposedly is, is a different story; its complexity and density make it impossible to address the same questions – which connections are disappearing or multiplying – with the same degree of clarity and specificity.

Neuroscience was changing as well. By the time the late 1980s and ’90s rolled around, the most attention-getting work focused on the brain, particularly in the neocortex – the part that has disproportionately increased in size in primates like us. Many who were interested in how the brain developed were inspired by those Nobel Prize-winners, Hubel and Wiesel, who elegantly demonstrated that the visual part of the neocortex had a critical period of development. At this point, Purves had reached middle age, and he was at an impasse. The answer to the question ‘What to do next?’ was not self-evident. As an academic scientist, one can pretty much study whatever one wants, but it has to be interesting, potentially consequential, and, for Purves especially, it has to be tractable: you should be able to formulate a clear hypothesis that could lead to an unambiguous finding. The answer came in the form of a new collaborator, Anthony-Samuel LaMantia, who joined his lab in 1988 as a postdoctoral fellow after completing his PhD on the development of the neocortex. Together, Purves and LaMantia decided to tackle the question ‘How does the brain grow?’

There are many different kinds of brains, as many as there are animals. There is a beauty in all of them, perhaps because they adhere quite nicely to the form-follows-function principle of design. The designer in each case is natural selection’s influence on how a species develops, and thus the form its body and brain take in response to environmental challenges. Brain scientists study the anatomy of these solutions when we use any number of techniques like tracers, stains, imaging, etc. Each technique is suited to looking at the brain at a particular spatial scale. Consistently, what they reveal is that the brain is beautiful, sometimes stunning. At one of those scales, you can see repeated patterns of neural circuitry, or modules, that look exactly like the spots and stripes we see on the skin of so many animals. For example, depending on what dyes you use to stain it, the visual cortex of primates has a pattern of stripes, with each stripe seemingly dedicated to the visual signals coming from one of our two eyes. Stain it another way, and you’ll see an array of ‘blobs’ that are claimed by some to be dedicated to colour processing. Other animals have different patterns: rats have an array of barrel-shaped modules in their somatosensory (touch) cortex which corresponds to their array of facial whiskers. Dolphins have blobs in their auditory cortex; we don’t know what their function is.

Purves wanted to know how these iterated patterns of neural circuitry developed. He started by first looking just outside the neocortex, in the olfactory bulb of the mouse. In this structure, mice have a number of modules known as ‘glomeruli’. The olfactory bulbs of mice jut out from the main part of the brain and so are more accessible for experiments. Purves and LaMantia developed a method for exposing the bulbs in a live animal and staining the glomeruli with a dye that would not hurt the mice. They could then see that mice were not born with their full set of glomeruli; over the course of development, new ones were added. This was exciting and surprising because many popular theories at the time argued that brain development is mainly the result of selecting useful circuits from a larger repertoire of possible circuits. Here, they were showing that useful circuits were actually being constructed, not selected. Moreover, if circuits were constructed in this way after the animal is born, then the circuits might be influenced by experience. Were other modules in other species and brain areas added in the same way? In the macaque monkey visual cortex (ie, the experimental animal most closely related to humans and the brain area that is among the most studied) they couldn’t look at module development like they did in the mouse (looking at the same brain structure in the same animal repeatedly over time), but they were able to count the number of blobs in young monkeys versus adult monkeys. Unlike the glomeruli in mice, however, the number of blobs remained constant over time.

To Purves, this was not super exciting. He had hoped to find more traction on perhaps a new process of neocortex development in primates, one that he could elaborate into a novel research programme. Nevertheless, he did come to one important conclusion. It seemed that most scientists – indeed, many luminaries of neuroscience – wanted to see brain modules as fundamental features of the neocortex, each serving a particular behavioural or perceptual purpose. For example, one ‘barrel’ in the rat’s touch cortex is there to process the inputs of one whisker on its face. Purves pointed out that iterated patterns of modules may be found in one species’ brain but absent in a closely related species. Moreover, he also noted that they don’t seem to be obligatorily linked to function. ‘Blobs’ are there in the human and the monkey visual cortex and are linked to colour-vision processing, but nocturnal primates with poor colour vision still have blobs in their visual cortex. So, the blobs do not seem to enable colour vision. Similarly, chinchillas have a barrel cortex like rats but don’t have the whisker movements of rats. Cats and dogs have whiskers but no related modules in their touch cortex.

Thus, it seems that, while the iterated patterns of the brain are beautiful, they are unlike modern architecture in that their beauty is not linked to function. So why then do they form at all? Here, Purves 
suggested that iterated patterns are the result of synaptic connections finding and relying on each other and then making more of those connections in pathways that are most active. In other words, the iterated patterns of the brain are epiphenomenal, the byproducts of the rules of neural connections and competing patterns of neural activity. Those activity patterns are generated by sensory inputs coming from the sensory organs – the eyes, ears, nose and skin. So seeing beautiful-looking patterns in the brain does not necessarily mean they were constructed for a particular purpose. 

I first met Purves in 1993 when I was interviewing for graduate school after he had moved to Duke University. I had already read a lot of his work and was in awe of his contrarian instincts and pursuit of work that is out of the mainstream yet important. When I entered his office for my interview, I was extremely nervous but managed to ask about the portraits on his office walls. They were scientists. John Newport Langley, a 19th-century British physiologist who made important discoveries about neurotransmitters. He inspired the problems Purves tackled as a new professor. The aforementioned Viktor Hamburger was also there. He was a major figure in 20th-century embryology and also a good friend of Purves, despite the difference in their ages and experience. Another photo was of Stephen Kuffler, perhaps the most beloved figure in neuroscience at the time and who made key discoveries in vision. Kuffler had organised the neuroscience team who taught Purves when he was in medical school, and Purves considers him a mentor who exemplified what to pursue (and what not to pursue) in neuroscience. The final photo was of Bernard Katz, a Nobel laureate who figured out how neurons communicate with muscles. Purves collaborated with Katz in the 1970s and considers him a paragon of scientific excellence. I was admitted to Duke and, a year later, moved to Durham, North Carolina hoping to study with Purves or LaMantia, who was there too as a new professor.

When I arrived at Duke, Purves was about to make a major change, away from studying the brain itself entirely. This seemed kind of crazy after so much success with discoveries about the developing nervous system, building an enviable career and becoming a sought-after leader in the field. But Purves’s restless instinct arose again and he switched his focus, this time to study perception. He had a hunch that the great advances wowing people about brain anatomy and the function of circuits therein were not going to be enough to make it clear how the brain works in actually guiding human behaviours. The origin of the hunch was in philosophy, which Purves had majored in as an undergraduate. The philosopher George Berkeley (1685-1753) had noticed that our eyeballs take in radically different-sized, three-dimensional objects and then project them back onto the retina (the sensory wall in the back of the eye) in exactly the same size and only in two dimensions (known as the inverse optics problem). This is why framing a distant human’s whole body between your two fingers, seemingly able to crush them, is amusing. It uses forced perspective to imply an impossibility. The implication of the inverse problem is profound. It means that the information about the object (the source) coming into our brain is uncertain, incomplete, partial.

Portrait of George Berkeley (1730) by John Smibert. Courtesy Wikipedia
See the original page for the portrait of George Berkeley.

As a solution to the inverse problem, the scientist Hermann von Helmholtz (1821-94) proposed that perception relied on learning from experience. We learn about objects through trial and error, and make inferences about any ambiguous image. Thus, since we have no experiences with lilliputian human beings, we can infer that the tiny human in the forced-perspective example is actually far away. Purves took the seed of Helmholtz’s idea – that our perception depends on experience – and built an entire research programme around it. Since the mid-1990s, he and his collaborators have systematically analysed a variety of visual illusions in brightness, contrast, motion, and geometry. They have shown that our perceptions are experience-based constructions, not accurate reflections of their sources in the real world. The example of ‘red’ from the beginning of this essay is based on his colour work.

Purves and his collaborator Beau Lotto would generate two identically coloured ‘target’ squares on a computer screen but give them backgrounds of different colours. The backgrounds would make the two squares look like they were different colours (even though they were actually identical, as measured by a spectrophotometer). Then, participants were asked to adjust the hue, saturation and brightness (the same controls on your phone’s camera app) of the target squares until they looked identical. Each participant’s adjustments were quantified and used as a difference measure between perception and reality. Ultimately, Purves’s research led to the conclusion that the brain functions on a wholly empirical basis. We construct our perception of the world through our past experiences in that world.
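
A rough sketch of how such adjustment data might be scored (my own illustration, not Purves and Lotto's actual analysis; the variable names and numbers are invented): each participant's final hue, saturation and brightness settings are compared with the target's true values, and the distance between them serves as the measure of how far perception departs from physical reality.

```python
import numpy as np

def adjustment_error(true_hsb, adjusted_hsb):
    """Distance between a target's true hue/saturation/brightness and a
    participant's final adjustment (all values scaled to 0-1). Hue is circular,
    so take the shorter way around the hue wheel."""
    dh = abs(true_hsb[0] - adjusted_hsb[0])
    dh = min(dh, 1.0 - dh)                      # wrap-around hue distance
    ds = true_hsb[1] - adjusted_hsb[1]
    db = true_hsb[2] - adjusted_hsb[2]
    return float(np.sqrt(dh ** 2 + ds ** 2 + db ** 2))

# Illustrative numbers only: the two targets are physically identical, but the
# participant, fooled by the backgrounds, "corrects" one before judging them equal.
true_target = (0.00, 0.60, 0.50)
participant_setting = (0.02, 0.48, 0.62)
print(adjustment_error(true_target, participant_setting))   # the perception-reality gap
```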

This is a radical departure from the long-standing prevailing orthodoxy that the brain extracts features from objects and other sensory sources only to recombine them to guide our behaviour. Instead of extracting features and combining them in the brain (red+round = apple), Purves argues that it is our learned associations among an event or feature of the world, the context in which it appeared and the consequences of our subsequent actions that builds our umwelt, our self-centred world. The research of Purves and his collaborators showed that our ability to perceive accurately is largely based on past experiences and learned associations. This means that we must learn about the space around us, the objects in it and other aspects of perception; these are not innate but developed through interaction with the environment. This all seems very reasonable. The environment is always in flux, with different challenges at different times for any animal. You wouldn’t want your brain fine-tuned to an environment in which you no longer live. Equally, it also wouldn’t make sense for a species to build each individual’s brain as a tabula rasa, to start from scratch with every generation.

Purves’s findings and interpretation lead to a more philosophical puzzle. To what extent is the ‘environment’ that the brain is trying to make sense of actually ‘out there’ beyond our heads? Is there a real reality? Purves has shown that, even if there is a real reality, we don’t perceive much of it… or at least we don’t have a universal way of perceiving it. For example, not all humans see the same colours the same way. There are two reasons for this. One is that colour and its interpretation depend highly on environmental factors. The other is that perception also depends on experience. Experiences depend on your interaction with specific environments. Do you live in the sea, on land, in burrows, in a nest or in a climate-controlled house? Do you have vision, or are you blind? What do your physiology and anatomy allow you to perceive and interact with? What have you seen before? Your perception and interpretation of the world, and indeed those of other animals, depend on the answers to these types of questions. 

In her memoir Pilgrim at Tinker Creek (1974), Annie Dillard writes about coming across a book on vision, which shows what happens when blind humans of all ages are suddenly given the ability to see. Does experience really determine how we see and act in the world? That book was Space and Sight (1960). In it, Marius von Senden describes how patients who were blind at birth because of cataracts saw the world when those cataracts were removed. Were they able to see the world as we see it, those of us with vision since birth? No. Most patients did not. In one example from the book, Dillard recounts:

Before the operation a doctor would give a blind patient a cube and a sphere; the patient would tongue it or feel it with his hands, and name it correctly. After the operation the doctor would show the same objects to the patient without letting him touch them; now he had no clue whatsoever what he was seeing.

There is even an example of a patient finally ‘seeing’ her mother but at a distance. Because of a lack of experience, she failed to understand the relationship between size and distance (forced perspective) that we learn from experience with sight. When asked how big her mother was, she set her two fingers a few inches apart. These types of experiments (which have been 
replicated in various ways) show just how important experience and learned associations are to making sense of the world.

Today, Purves has enough studies to show an operating principle of the nervous system or – more cautiously, he would say – 'how it seems to work'. The function of the nervous system is to make, maintain and modify neural associations to guide adaptive behaviours – those that lead to survival and reproduction – in a world that sensory systems cannot accurately capture. It is not a stretch to link Purves's work, all the way back, on trophic theory to the current ideas. A biological agent must assemble, without any blueprint or instructions, a nervous system that matches the shape and size of a changing body. This nervous system, paired with sensory organs that filter the world in peculiar ways, must somehow process the physical states of the world in order to guide behaviour. Similar principles – neural activity, changing synaptic connections – that guided development also guide our ongoing perceptions of an ever-changing world. We use our individual experiences to do this guiding. If we happen to perceive or interpret events like other human beings, it is because we have similar bodies and shared similar experiences at some point. In my experience, Purves is a paragon of scientific excellence.

Recently, I asked Purves how he thought of the arc of his career, and it was very different from my perception of it. From my perspective, it seems that Purves’s question is always ‘How is a nervous system built?’ Addressing this question took him to increasingly large scales: from trophic theory and neural connections in the peripheral nervous system to neural construction of the brain (iterated patterns; growth of the neocortex) to the relationships between brain-area size and perceptual acuity to the construction of ‘reality’ via experience. I asked him what he thought about this narrative, and he responded:

That is one way of framing it, but I don’t really see a narrative arc. As you know, one’s work is often driven by venal/trivial considerations such as what research direction has a better chance of winning grant support or addressing the popular issues of the day. The theme you mention (how nervous systems are built) was not much in my thinking, although in retrospect that narrative seems to fit.

Neither one of us is wrong. In fact, we are interpreting the body of work according to our own experiences and how it best suits our needs, just as his research would suggest.

Purves’s remarkable research insights are the product of his distinctive approach to science. A popular approach in neuroscience is to identify one problem early in a career and to just keep plugging away, learning more and more details about it. Then, maybe acquire an exciting new technique – like fMRI in the 1990s or optogenetics in the 2000s – to investigate the same problem in a different way. Or adopt the new technique and then search for a new question that could be answered by that method. Another approach would be to just apply the method, collect some data and only then ask what story can be told with those data. None of these approaches is ‘wrong’, but Purves’s scientific approach is in stark contrast to this. He first identifies a big, interesting question, one that could possibly have a ‘yes’ or ‘no’ answer, and then finds whatever means are necessary to find the answer.

To put it another way, there is a lot of thought behind his work and approach, and there is a lot of thought about what any findings may mean in the big picture of brain science. Purves is always engaged. Very few scientists have original, influential work in multiple domains of their field. Dale Purves has achieved major advances in our understanding of brain development, from small circuits to big ones, and from bodily experience to a new way of thinking about how the brain works.


Asif Ghazanfar is a professor at the Princeton Neuroscience Institute and the department of psychology at Princeton University in New Jersey. His lab investigates the developmental and evolutionary bases for communication in humans. 

本文於 修改第 2 次
回應 回應給此人 推薦文章 列印 加入我的文摘
引用網址:https://city.udn.com/forum/trackback.jsp?no=2976&aid=7287055
嬰兒在子宮內的語言聽覺能力 - Tibi Puiu
推薦1


胡卜凱
等級:8
留言加入好友

 
文章推薦人 (1)

胡卜凱

Babies’ Brains Recognize Foreign Languages They Heard in the Womb Before Birth

New research shows fetuses can tune in to foreign languages weeks before birth.

Tibi Puiu, Edited and reviewed by Zoe Gordon, 10/08/25

For years, scientists suspected that fetuses could hear the muffled murmur of the outside world and that these early sounds might shape their brains. Now, researchers at the Université de Montréal have provided evidence that this is the case. They’ve shown that even before birth, babies begin to map the rhythms and cadences of language in their developing neural circuits — even for foreign languages, different from their household tongue.

The team led by neuropsychologist Anne Gallagher found that just a few weeks of exposure to a foreign language in the womb can rewire a newborn’s brain. Babies who listened to ten minutes of stories in Hebrew or German before birth processed those languages using the same brain regions as their native French.

“Even a few minutes of listening per day for a few weeks is enough to modulate the organization of brain networks,” Gallagher said.

Always Listening in the Womb

The experiment was both simple and elegant. Sixty French-speaking pregnant women took part. Starting in the 35th week of pregnancy, some were asked to play recordings of a short story in French and a second language — either German or Hebrew — through headphones placed on their abdomen.

Why these languages? “We were looking for languages that were acoustically and phonologically different from French,” said co-author Andréanne René. “We were fortunate to find a trilingual speaker.”

Each woman played the recordings about 25 times before giving birth. Then, within three days of delivery, researchers replayed the same stories. This time they were in three languages: French, the familiar foreign one, and a completely new one the babies were never exposed to.

The newborns wore what Gallagher described as “a device that looks like a swim cap lined with lights.” The setup, called functional near-infrared spectroscopy (fNIRS), measures changes in blood oxygenation in the cortex, revealing which regions are active.

When babies heard French, their brains lit up in the left temporal cortex, the same area adults use for language. The same pattern appeared when they heard the foreign language they’d been exposed to in the womb. But when they listened to a completely unfamiliar language, the activity dropped, and the brain’s response was more diffuse.

From the very start, it seemed, the babies’ brains could tell the difference between a language they’d “met” before and one they hadn’t.

The Earliest Form of Familiarity

The Montreal team’s findings line up with decades of hints that fetuses recognize sound patterns long before birth. Previous behavioral studies showed that newborns 
prefer their mother’s voice or native language, measured by subtle cues like sucking rate or head turns. But this new study is the first to reveal the same effect directly in brain imaging.

However, this doesn’t mean that babies can learn a language while still in the womb.

“We cannot say babies ‘learn’ a language prenatally,” Gallagher stressed during an interview with 
Scientific American. “What we can say is that neonates develop familiarity with one or more languages during gestation, which shapes their brain networks at birth.”

By the final trimester, a fetus's auditory system is fully functional. Although the womb filters out high frequencies, lower sounds (below about 400 hertz) pass through clearly; fetuses, it seems, respond best to bass.
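
The womb acting as a low-pass filter is easy to mimic in code. The sketch below is my own illustration, not part of the study: it pushes a synthetic "voice" through a roughly 400 Hz low-pass filter, about what the description above implies reaches the fetus; the sample rate and the signal itself are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16_000                          # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)

# Toy "speech": a low-frequency prosodic component plus high-frequency detail.
signal = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

# 4th-order Butterworth low-pass at ~400 Hz, applied forward and backward.
sos = butter(4, 400, btype="low", fs=fs, output="sos")
in_utero = sosfiltfilt(sos, signal)

# The 150 Hz component (speech rhythm and melody) survives almost intact;
# the 3 kHz component (fine phonetic detail) is strongly attenuated.
print(np.std(signal), np.std(in_utero))
```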

Inside that sonic cocoon, the brain begins tuning itself to the cadences of the outside world. “It shows how malleable language networks are,” Gallagher said. “But it also reminds us of their fragility: if a positive environment can have an effect, we can suppose that a negative environment would too.”

Baby Brains Do More than We Realize

The study adds to a growing realization that the human brain is not a blank slate at birth. “The newborn brain is already specialized,” said Ana Carolina Coan, a pediatric neurologist not involved in the research, in Scientific American. “The gestational environment starts shaping fetuses’ brain processing even before birth.”

The researchers emphasize that this isn’t about raising “multilingual” babies before birth. One avenue of great interest concerns understanding the origins of language itself. Another more practical outcome involves new treatments for babies born with speech disorders.

“We’re not there yet,” Gallagher cautioned, “but it is conceivable that one day this approach could be used to support vulnerable children or children with developmental disorders.”

For now, the work underscores just how early learning begins — and how intertwined it is with experience.

In other words, while a baby may be born speechless, it’s not born clueless. Its brain has already been listening, for weeks, to the murmur of its first languages — and laying the groundwork for a lifetime of words.


Tibi Puiu is a science journalist and co-founder of ZME Science. He writes mainly about emerging tech, physics, climate, and space. In his spare time, Tibi likes to make weird music on his computer and groom felines. He has a B.Sc in mechanical engineering and an M.Sc in renewable energy systems.

Related Posts

Humans got smarter to care for needy infants, making them more helpless in the process
The oldest stone cutting tools may have sparked the evolution of language
You can still remember a foreign language even if you think it’s forever forgotten
Ancestral shift in diet may have changed human speech as well

本文於 修改第 1 次
回應 回應給此人 推薦文章 列印 加入我的文摘
引用網址:https://city.udn.com/forum/trackback.jsp?no=2976&aid=7286891
《大腦不是機器:大腦神經科學急需新思考方式》讀後
推薦1


胡卜凱
等級:8
留言加入好友

 
文章推薦人 (1)

胡卜凱

After reading Professor Rust's article (see the previous post in this column), I have four reflections; readers' comments are welcome:

1) I am not from the life sciences; but I believe anyone with college-level familiarity with the field already understands that "the brain is not a machine."

2) Generally speaking, the methodological bottlenecks and the limits on research results that the article's first six paragraphs describe, for both neuroscience and the treatment of depression, are the same difficulties every science meets in the course of research; the social sciences have it even worse:

2a. There are too many independent variables for researchers to control adequately, and it is hard to establish how the independent variables map onto the other variables.
2b. The surrounding environment is complex, so it is hard to design experiments that bear directly on the problem at hand (such as depression), and hard to frame hypotheses or build theories that set the direction and method of the research.
2c. When the overall picture of a problem is still murky, the odds of taking a wrong turn or ending up in a dead end are often above 70 or even 80 percent. This is what Newton meant by "If I have seen further it is by standing on the shoulders of giants": the so-called giants are simply the long line of predecessors who did the same work and took those wrong turns or hit those dead ends. We should, indeed must, respect their efforts and be grateful for them. A "paradigm shift" can be seen as the inevitable turn to another path once a road proves impassable. Incidentally, I occasionally hear people of little insight sneer that "modern" Western science gives the West a mere 400-500 years of history, nothing like China's long, unbroken tradition. The fact is that, East or West, the achievements (and failings) of the modern and contemporary eras are the flowers and fruit of cultural traditions thousands of years old.

3) Professor Rust's writing is a little loose. In the third-to-last paragraph she presents the new (still experimental?) treatments for depression in a tone that is, if not admiring, at least approving; yet in the second-to-last paragraph she immediately raises unknown factors and serious risks that had occurred even to me (see the four posts in that column). In her place I would swap the order of those two paragraphs and adopt a cautionary, careful tone rather than an optimistic, expectant one.

4) Finally, the headline Professor Rust uses to frame the piece sits at some distance from the article's actual content.

本文於 修改第 1 次
回應 回應給此人 推薦文章 列印 加入我的文摘
引用網址:https://city.udn.com/forum/trackback.jsp?no=2976&aid=7282676
大腦不是機器:大腦神經科學急需新思考方式 -- Nicole Rust
推薦1


胡卜凱
等級:8
留言加入好友

 
文章推薦人 (1)

胡卜凱

Please see my comments on this article (《讀後》, the next post in this column).

Neuroscience needs a new paradigm: The brain is not a machine

Feedback loops in the brain destroy deterministic neuroscience

Nicole Rust, 08/12/25

Editor’s Notes
Brain researchers are overturning decades of dogma. Forget simple causal chains from genes to brain to behavior. Instead, argues award-winning neuroscientist Nicole Rust, the brain is a dynamic complex system—like the weather or a megacity—whose parts interact via feedback loops that are impossible to study in isolation from each other. And the revolution isn’t just theoretical: a bold cohort of experimentalists is uncovering mental health treatments that go beyond traditional drugs like SSRIs—such as psychedelic therapy, which may be able to rewire brains trapped in destructive loops.

A remarkable 
20% of adults will suffer from a mood disorder such as depression. For those who try antidepressant medication, it will not work for half. We still don’t understand what's happening in the brain of someone who is experiencing a depressive episode, or even how antidepressants work—when they do. We don't even understand how our brains drive our everyday moods.

Why not? After all, neuroscience has been making discoveries about the brain at a rapid clip for decades. Why then haven't we made more progress toward understanding and treating depression? As described by one 
report from the Wellcome Trust, the persistent gap between new discoveries about the brain and new treatments for depression is a “troubling disconnection.” That disconnect is what inspired me to sit down and figure out what has been going wrong, reflected in my new book, Elusive Cures.

For some disorders, like Alzheimer’s, it’s clear that the brain is what researchers should focus on to understand what causes the disease and how to treat it. In comparison, it’s less clear what to focus on for depression. Depression can be caused by experience, such as poverty, trauma or the death of a loved one. It can also follow from mental interpretations and mental simulations, such as the repetitive thinking, or rumination, about a problem that does not lead to a solution. Finally, depression can follow from the biology of the brain, as reflected by families with a genetically inherited predisposition for it. All these different types of causes must be linked in some way, but how exactly?

Decades of brain research have envisioned these causes linked as a long chain. That chain begins with our genes, which are expressed to form brain cells. Those cells, in turn, shape brain circuits, and the activity in those circuits gives rise to all the things our brains do, such as seeing, remembering, and feeling emotions and moods. In this picture, our mental interpretations and experiences—such as trauma—determine the genes that are expressed, and this, in turn, shapes how the circuits in our brain are wired up—which is what we call learning.

When researchers who are looking to understand the causes of dysfunction envision the brain in this way, they aim to pinpoint the broken link in the chain so they can repair it. In the case of depression, the idea is that a broken link in the chain can be repaired with a treatment such as a pharmaceutical drug, brain stimulation, or behavioral therapy.

Unfortunately, decades of depression research have focused on trying to pinpoint broken links to little avail, and it’s now clear that straightforward explanations for depression won’t be forthcoming. 
A predisposition for depression cannot be tied to a single gene—it’s tied only weakly to hundreds of them. Likewise, there’s no single part of the brain responsible for depression, and that is why we still cannot decipher from a brain scan whether someone is experiencing a depressive episode. Similarly, mental therapies like cognitive behavioral therapy don’t work for everyone, and it’s unclear why.

Researchers are beginning to realize that this idea of depression as the end of a long chain is just too simple. The brain and mind are thought to be the most complex things in the known universe, and likewise, depression is thought to be one of the most complex forms of their dysfunction. But what makes it all so complicated? Mood is part of a system teeming with feedback loops where causes (like our brains) lead to effects (like our moods, motivations and decisions) that feed back again as causes (to shape how our brains are wired up).

Likewise, 
our brains are chock-full of feedback loops—if brain region A sends information to B, B often sends information back again to A; the same is true for genetic networks. Complex systems with these feedback loops can have surprising properties that are hard to predict from understanding their parts in isolation, and it is likely that mood is an emergent property of a complex system. Complex systems can also break in unpredictable ways that are hard to fix—the weather breaking out into a hurricane is an example. Like depression, we do not know how to dissipate a hurricane.
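
A toy illustration of why such loops matter (mine, not the author's; every constant is arbitrary): two simulated "regions" that each simply decay back to baseline on their own will, once wired into a loop in which one excites the other and is inhibited in return, oscillate, a behavior neither part shows in isolation.

```python
import numpy as np

def simulate(coupled, steps=2000, dt=0.01):
    """Two 'regions' A and B. Alone, each just decays back to baseline.
    With A exciting B and B inhibiting A (a feedback loop), the pair oscillates."""
    a, b = 1.0, 0.0                          # initial activity
    decay, w = 0.5, 3.0                      # leak rate and loop strength
    trace = []
    for _ in range(steps):
        da = -decay * a + (-w * b if coupled else 0.0)
        db = -decay * b + (w * a if coupled else 0.0)
        a, b = a + dt * da, b + dt * db
        trace.append(a)
    return np.array(trace)

alone = simulate(coupled=False)     # monotonic decay back to zero
looped = simulate(coupled=True)     # damped oscillation: a property of the pair, not the parts
print(alone.min() >= 0.0, looped.min() < 0.0)   # the loop makes A overshoot below baseline
```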

Fortunately, breakthroughs in biotechnology and artificial intelligence are ushering in a new era for mood and depression research. A complex system’s causal variables must be studied simultaneously (instead of one at a time) because their surprising properties follow from their interactions. We can finally make measurements like these—for instance, 20 years ago, researchers were limited to recording the activity of a few individual brain cells, and now they can record up to 
1 million simultaneously in a mouse. Likewise, researchers can now collect vast amounts of information about individuals, including genes, brain activity, and lived experiences, and new tools from machine learning and artificial intelligence are helping them analyze that data.

As one illustration: researchers have long wanted, but have thus far been unable, to measure how the brain transforms an experience into an emotion—that’s finally changing. In 
one recent study, researchers employed tools designed for complex systems to measure how an irritating air puff to the eye is transformed from a sensation into a prolonged negative emotional experience in both humans and mice. Similarly, new work from my research group has used ideas from complex systems to measure how fluctuations in happiness are reflected in the brain following wins and losses during a gambling game. Analogous to understanding temperature in the sixteenth century, understanding the biology of emotions and moods begins by figuring out how to measure one in the brain. Studies like these open the door for understanding biological details, analogous to thermodynamics.

New ideas about depression are also beginning to emerge, like the idea that depression happens when a feedback loop-crammed brain has gotten itself stuck in a maladaptive “attractor” state from which it cannot escape, like a ball stuck in a bucket. This new way of thinking about the brain is leading researchers to new therapies that work in new ways. One idea is that treating depression requires adjusting how millions of brain cells communicate, all a bit differently—something that no drug or brain stimulation could ever accomplish. Instead, what the depressed brain needs is some benevolent reprogramming. This is the idea behind 
psychedelic therapy, where hallucinogenic drugs enhance the ability of a depressed brain to rewire, combined with talk therapy to ensure the brain gets rewired in a good way.
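
The "ball stuck in a bucket" picture can be made concrete with a caricature of an attractor landscape (again my own sketch with made-up numbers, not a model from the article): a state sitting in one well of a double-well landscape stays trapped under small fluctuations, and only much larger fluctuations (a cartoon stand-in for the enhanced rewiring described above) let it hop out.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Double-well "mood landscape": minima near x = -1 (stuck, maladaptive state)
    # and x = +1 (healthy state). Gradient of V(x) = (x**2 - 1)**2 / 4.
    return x * (x ** 2 - 1)

def run(noise, x0=-1.0, steps=20_000, dt=0.01):
    """Noisy descent on the landscape; returns the final position."""
    x = x0
    for _ in range(steps):
        x += -grad(x) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x

print(run(noise=0.1))   # stays trapped near -1: the stuck attractor
print(run(noise=0.8))   # larger perturbations let the state hop between wells
```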

While promising, we must also remain mindful that complex systems can be unpredictable and fragile, and we must proceed with caution. Psychedelics can trigger psychosis in people with a predisposition for psychotic disorders. Learning can also have a dark side: as patients relive traumatic memories, it can trigger distress, uncontrollable thoughts, and nightmares. That is not to suggest that there’s not tremendous optimism around psychedelic-based therapies. But to protect and help individuals suffering from depression, the field needs to proceed rigorously and cautiously.

Before writing 
Elusive Cures, I could not explain the “troubling disconnection” between what we researchers have been discovering and what society needs, despite my decades on the front lines of brain research. Moreover, I could not spell out what we plan to do about it. It took me years of surveying and contemplating brain and mind research at a high level to see the revolution that is happening, as well as understand why it’s happening now, and its implications. To date, brain and mind researchers have been limited by technology, but new breakthroughs have unlocked opportunities for a new type of progress. It begins by embracing the idea of the brain as a complex system. Now that I can see it, I’m unequivocally optimistic that the disconnections between discovery and societal needs will begin to dissolve. I am also excited to dive in and help make it happen, focusing on mood and depression.


Nicole Rust is an award-winning neuroscientist and Professor of Psychology at the University of Pennsylvania, author of Elusive Cures: Why Neuroscience Hasn’t Solved Brain Disorders—and How We Can Change That (2025).

More information about Professor Rust’s research is 
available on her website.

Related Posts:

Is your brain really necessary for consciousness?
The difficulty of defining death
Phone addiction is worse than smoking or cocaine
Forgetting is more important than remembering

Related Videos:

Consciousness in the clouds With Massimo Pigliucci, Shini Somara, Anders Sandberg, Nadine Dijkstra, Roman V. Yampolskiy
Electric brains With Anders Sandberg
The normal and the abnormal
We misunderstand mental health
A Mad World
本文於 修改第 2 次
回應 回應給此人 推薦文章 列印 加入我的文摘
引用網址:https://city.udn.com/forum/trackback.jsp?no=2976&aid=7282634
你的大腦為何有時突然變空空 - Roberta McLain
推薦1


胡卜凱
等級:8
留言加入好友

 
文章推薦人 (1)

胡卜凱

Why does your mind go 'blank'? New brain scans reveal the surprising answer

Neuroscientists think moments of "mind blanking" could be a way for the brain to protect itself.

Roberta McLain, 08/03/25

Scientists used two brain-scan methods to see what happens when your mind goes totally blank. | Credit: Grafissimo/Getty Images
請至原網頁觀看照片

You look up from your phone screen and suddenly realize you weren't thinking about anything. It's not a lapse in memory or a daydream; it's literally a moment when you're not thinking of anything at all.

Neuroscientists have a term for it — 
mind blanking — which they define as a brief, waking state when conscious thought simply stops.

Scientists used to think our waking minds were always generating thoughts, but recent research shows that's not the case. Mind blanking is now recognized as a distinct conscious state associated with changes in arousal, which in neuroscience refers to alertness and responsiveness to stimuli. Studying this curious state could shed light on how consciousness works, some researchers think.

"For some, it's kind of a blip in the mind, and suddenly there's nothing," 
Thomas Andrillon, a neuroscience researcher at the French National Institute of Health and Medical Research and the Paris Brain Institute, told Live Science. "But not with that feeling, 'There was something that I forgot.'"

Often, people are unaware of the lapse until they are prompted to answer "What were you just thinking about?"

"When we interrupt them randomly," Andrillon continued, "it's clear it's more frequent than what people realize." Although the frequency of this phenomenon varies among individuals, 
various studies suggest about 5% to 20% of a person's waking hours may be spent in this state.

An investigation of 'mind blanking'


In a study published in the July issue of the journal 
Trends in Cognitive Sciences, Andrillon and his team used electroencephalography (EEG) — which involves placing electrodes on participants' heads — to measure brain activity while people experienced lapses in attention, such as mind wandering or mind blanking. Mind wandering occurs when people's thoughts drift to tasks or ideas unrelated to the one at hand, while mind blanking involves the absence of all thought.

While wearing EEG caps, participants watched numbers flash rapidly on a display screen. They were instructed to press a button every time a number appeared except for 3, which they were told to skip. This task tests how quickly people react when a response is required and how well they can inhibit that response, when necessary.

Because most of the presented numbers required a response, people often pressed the button by mistake when they saw a 3 onscreen. The researchers paused the task once a minute to ask what the participants were thinking, finding that they were either focused on the task, their mind was wandering, or they were experiencing a "mind blank."

Participants pressed the button more quickly when their minds were wandering, whereas their responses slowed noticeably during mind blanking, suggesting these two mental states are distinct.
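
A sketch of how such probe data are typically summarized (illustrative only; the reaction times below are made up, not the study's data): group each trial's reaction time by the state reported at the nearest probe and compare the groups.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up reaction times (seconds), grouped by the state reported at the nearest probe.
rts = {
    "on-task":        rng.normal(0.45, 0.05, 200),
    "mind wandering": rng.normal(0.42, 0.05, 200),   # slightly faster, more impulsive
    "mind blanking":  rng.normal(0.55, 0.08, 200),   # noticeably slower
}

for state, values in rts.items():
    print(f"{state:>14}: median RT = {np.median(values):.3f} s")
```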

Brain activity told a similar story. The EEG data showed that the participants' brain activity tended to slow down slightly more when their minds were blank than when they were wandering, compared with the baseline of paying attention. "The connectivity changes as if the inner workings of the brain were specific, in a way, to that state," Andrillon said.

EEG data is great for tracking rapid changes in brain activity, but it can't pinpoint exactly which brain regions are involved. That's in part because it records brain waves through the skull, and the signals blur as they make their way through the brain tissue, fluid and bone. Andrillon explained it's like listening through a wall. You can tell if a group inside is noisy or quiet, but you can't tell who is talking.

The EEG results from the study suggest that during mind blanking, the brain's activity slows down globally, but the technique couldn't identify specific areas. That's where functional MRI (fMRI) came in.

Hypersynchronization

fMRI provides a clearer view of which regions are active and how they interact, but its tracking speed is slower because the technique tracks blood flow, rather than directly following brain signals. fMRI is more like peeking into the room and seeing who's talking to whom, but not knowing precisely when, Andrillon said.

Study co-author 
Athena Demertzi, a neuroscience researcher at the GIGA Institute-CRC Human Imaging Center at the University of Liège in Belgium, led the fMRI portion of the study. As people rested in an fMRI scanner with no particular task at hand, Demertzi and her team periodically asked what they were thinking.

The results were surprising: when people reported mind blanking, their brains showed hyperconnectivity — a global, synchronized activity pattern similar to that seen in deep sleep. Typically, when we are awake and conscious, our brain regions are connected and communicating but not synchronized, as they appear to be during mind blanks.
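
One simple way to quantify "hyperconnectivity" of this global kind is the average pairwise correlation between regional signals, which rises when regions move together and falls when they fluctuate independently. The sketch below is an illustration with synthetic signals, not the study's fMRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_pairwise_correlation(signals):
    """signals: regions x timepoints. Average of the off-diagonal correlations."""
    c = np.corrcoef(signals)
    off_diag = c[~np.eye(len(c), dtype=bool)]
    return off_diag.mean()

t = np.linspace(0, 60, 600)
shared = np.sin(0.5 * t)                                        # a common slow fluctuation

awake   = 0.3 * shared + rng.standard_normal((20, 600))         # mostly independent regions
blanked = 1.5 * shared + 0.3 * rng.standard_normal((20, 600))   # regions locked together

print(mean_pairwise_correlation(awake))     # low: desynchronized, ordinary waking pattern
print(mean_pairwise_correlation(blanked))   # high: hypersynchronized, blank-like pattern
```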

"What we think happens in the case of mind blanking is that the brain is pushed a little bit toward the side of synchronization," Andrillon said. "That might be enough to disrupt these sweet spots of consciousness, sending our mind to blank."

Research into mind blanking is still in its early stages, but Andrillon and Demertzi noted that its similarity to brain patterns seen during deep sleep may offer an important clue as to its function.
Deep sleep, also known as slow-wave sleep, coincides with important cleanup work for the brain. It clears away accumulated waste, cools the brain, conserves energy and helps reset the system after a full day of mental activity.

Andrillon and Demertzi suggested mind blanking may act as a mini-reset while we're awake. Demertzi said it's like "taking five (
暫停) to steam off" or "to cool your head." Early studies in Demertzi's lab suggest sleep-deprived people report more mind blanks, adding support to this idea.

Both researchers stressed that this state is likely a way for the brain to maintain itself, though "it's not ideal for performance," Andrillon said.

Andrillon believes it's possible but unlikely that there are people who have never experienced mind blanking. Detecting a mind blank can be a challenge. "It can require being interrupted," Andrillon said, "to realize, 'OK, actually, there was no content.'"


相關閱讀

Super-detailed map of brain cells that keep us awake could improve our understanding of consciousness
Why do we forget things we were just thinking about?
'Electronic' scalp tattoos could be next big thing in brain monitoring
'Hyper-synchronized' brain waves may explain why different psychedelics have similar effects, rat study reveals

本文於 修改第 1 次
回應 回應給此人 推薦文章 列印 加入我的文摘
引用網址:https://city.udn.com/forum/trackback.jsp?no=2976&aid=7282002
上癮跟大腦結構的因果關係 -- Ronald Bailey
推薦1


胡卜凱
等級:8
留言加入好友

 
文章推薦人 (1)

胡卜凱

Does Drug Use Lead to Addiction, or Are Some Brains More Prone To Use Drugs?  

Researchers argue that "we may need to reevaluate the causal assumptions that underlie brain disease models of addiction."  

Ronald Bailey, From the July 2025 issue 

(Illustration: Joanna Andreasson)  
請至原網頁觀看示意圖

Does using alcohol, nicotine, or cannabis engender addiction by changing the structure of brains, or does the structure of brains incline some people toward using those substances? In standard brain disease 
models of addiction, the neurotoxic effects of abused psychoactive substances are thought to cause brain changes that spur compulsive cravings for drink, smokes, or dope.  

A recent 
study in JAMA Network Open, an open-access, peer-reviewed, international medical journal published by the American Medical Association, challenges that model and suggests that brain differences associated with addiction precede rather than result from substance abuse. A team of neuroscientists examined associations between brain structure and substance use initiation in nearly 10,000 children enrolled in the ongoing Adolescent Brain Cognitive Development (ABCD) Study.  

Children aged 9 to 11 years were enrolled in the study. MRIs of each child's brain were taken at that time. None of the kids in the initial cohort reported using alcohol, nicotine, cannabis, or other psychoactive substances. During the next three years, the researchers periodically asked the kids, all still below age 15, if they had used any of those substances. Roughly a third of the kids (3,460), with some overlap, owned up to using alcohol (3,123), nicotine products (431), cannabis (212), or other substances (213), such as inhalants, prescription sedatives, and hallucinogens.  

The researchers then compared the brain MRIs of the kids who consumed psychoactive substances with those who did not. Remember, these MRIs were taken before any of the now adolescents had used any psychoactive substances. The researchers identified eight "neuroanatomical features associated with substance use initiation that were present before substance exposure."  
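
The logic of the comparison can be sketched in a few lines (with entirely made-up numbers; the actual study used many covariates, corrections for multiple comparisons, and far richer imaging measures): take a baseline brain measure recorded before any substance use and test whether it differs between children who later initiated use and those who did not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Made-up baseline measurements (e.g., total gray-matter volume, arbitrary units),
# taken before any substance use, split by whether the child later initiated use.
initiators     = rng.normal(102.0, 10.0, 3460)   # slightly larger on average (toy effect)
non_initiators = rng.normal(100.0, 10.0, 6000)

t, p = stats.ttest_ind(initiators, non_initiators, equal_var=False)
print(f"t = {t:.2f}, p = {p:.1e}")
# A baseline difference like this, measured before exposure, is what lets the authors
# argue that the anatomy precedes the substance use rather than resulting from it.
```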

Prior studies of adult addicts have 
found that they generally have lower overall brain volumes than nonabusers do. In their study of the ABCD cohort, the researchers were surprised to find the contrary to be the case. Bigger adolescent brains with more gray matter were significantly associated with early substance-use initiation. Interestingly, neurological research suggests that bigger brains somewhat correlate with higher intelligence.  

Another difference in brain structures coincident with early substance use is a thinner prefrontal cortex, which is associated with impaired emotional regulation and working memory. Early users also have larger globus pallidus volumes, a difference associated with weaker impulse control. The researchers
suggest their study may be capturing brain variability related to exploration and risk-taking that motivates precocious psychoactive substance use.  

An 
earlier study using data from the ABCD cohort asked if cannabis use contributes to psychosis in adolescents or if adolescents use cannabis to self-medicate their emerging psychotic symptoms. The researchers did not find evidence that early cannabis use contributed to the risk of experiencing psychotic symptoms.  

Instead, they suggest there may be a shared vulnerability in which genetic, gestational, or environmental factors may confer vulnerability for both cannabis use and psychosis. They further found, consistent with the self-medication hypothesis, that worsening symptoms motivated the initiation of cannabis use and that the users experienced reduced symptom distress.  

In their 
commentary on the adolescent substance use initiation study, two University of Minnesota cognitive neuroscientists observed that the brain differences found in the new study "reflect predispositional risk for substance use initiation—and that we may need to reevaluate the causal assumptions that underlie brain disease models of addiction."  
 
 
Ronald Bailey is science correspondent at Reason.  

This article originally appeared in print under the headline "This Is Your Brain Before Drugs."  



本文於 修改第 1 次
回應 回應給此人 推薦文章 列印 加入我的文摘
引用網址:https://city.udn.com/forum/trackback.jsp?no=2976&aid=7262387
思考需要多少能量? -- Conor Feehly
推薦1


胡卜凱
等級:8
留言加入好友

 
文章推薦人 (1)

胡卜凱

This report finally explains why I have been (since about age 65) too lazy to use my brain:

a. Metabolism slows with age.
b. I eat only one meal a day, so I take in far less energy than I used to.

So there is no spare 5% left to spend on thinking.

How Much Energy Does It Take To Think?

Conor Feehly, 06/04/25

Studies of neural metabolism reveal our brain’s effort to keep us alive and the evolutionary constraints that sculpted our most complex organ.

Introduction

You’ve just gotten home from an exhausting day. All you want to do is put your feet up and zone out to whatever is on television. Though the inactivity may feel like a well-earned rest, your brain is not just chilling. In fact, it is using nearly as much energy as it did during your stressful activity, according to recent research.

Sharna Jamadar, a neuroscientist at Monash University in Australia, and her colleagues reviewed research from her lab and others around the world to estimate the metabolic cost of cognition — that is, how much energy it takes to power the human brain. Surprisingly, they concluded that effortful, goal-directed tasks use only 5% more energy than restful brain activity. In other words, we use our brain just a small fraction more when engaging in focused cognition than when the engine is idling.

It often feels as though we allocate our mental energy through strenuous attention and focus. But the new research builds on a growing understanding that the majority of the brain’s function goes to maintenance. While many neuroscientists have historically focused on active, outward cognition, such as attention, problem-solving, working memory and decision-making, it’s becoming clear that beneath the surface, our background processing is a hidden hive of activity. Our brains regulate our bodies’ key physiological systems, allocating resources where they’re needed as we consciously and subconsciously react to the demands of our ever-changing environments.

“There is this sentiment that the brain is for thinking,” said 
Jordan Theriault, a neuroscientist at Northeastern University who was not involved in the new analysis. "Where, metabolically, [the brain's function is] mostly spent on managing your body, regulating and coordinating between organs, managing this expensive system which it's attached to, and navigating a complicated external environment."

The brain is not purely a cognition machine, but an object sculpted by evolution — and therefore constrained by the tight energy budget of a biological system. Thinking may make you feel tired, then, not because you are out of energy, but because you have evolved to preserve resources. This study of neural metabolism, when tied to research on the dynamics of the brain’s electrical firing, points to the competing evolutionary forces that explain the limitations, scope and efficiencies of our cognitive capabilities.

The Cost of a Predictive Engine

The human brain is incredibly expensive to run. At roughly 2% of body weight, the organ gorges on 20% of our body’s energetic resources. “It’s hugely metabolically demanding,” Jamadar said. For infants, that number is closer to 50%.

The brain’s energy comes in the form of the molecule adenosine triphosphate (ATP), which cells make from glucose and oxygen. A tremendous expanse of thin capillaries — an estimated 400 miles of vascular wiring — weaves through brain tissue to carry glucose- and oxygen-rich blood to neurons and other brain cells. Once synthesized within cells, ATP powers communication between neurons, which enact the brain’s functions. Neurons carry electrical signals to their synapses, which allow the cells to exchange molecular messages; the strength of a signal determines whether they will release molecules (or “fire”). If they do, that molecular signal determines whether the next neuron will pass on the message, and so on. Maintaining what are known as membrane potentials — stable voltages across a neuron’s membrane that ensure that the cell is primed to fire when called upon — is known to account for at least half of the brain’s total energy budget.

Measuring ATP directly in the human brain is highly invasive. So, for their paper, Jamadar’s lab 
reviewed studies, including their own findings, that used other estimates of energy use — glucose consumption, measured by positron-emission tomography (PET), and blood flow, measured by functional magnetic resonance imaging (fMRI) — to track differences in how the brain uses energy during active tasks and rest. When performed simultaneously, PET and fMRI can provide complementary information on how glucose is being consumed by the brain, Jamadar said. It's not a complete measure of the brain's energy use because neural tissues can also convert some amino acids into ATP, but the vast majority of the brain's ATP is produced by glucose metabolism.

Jamadar’s analysis showed that a brain performing active tasks consumes just 5% more energy compared to a resting brain. When we are engaged in an effortful, goal-directed task, such as studying a bus schedule in a new city, neuronal firing rates increase in the relevant brain regions or networks — in that example, visual and language processing regions. This accounts for that extra 5%; the remaining 95% goes to the brain’s base metabolic load.

Researchers don’t know precisely how that load is allocated, but over the past few decades, they have clarified what the brain is doing in the background. “Around the mid-’90s we started to realize as a discipline [that] actually there is a whole heap of stuff happening when someone is lying there at rest and they’re not explicitly engaged in a task,” she said. “We used to think about ongoing resting activity that is not related to the task at hand as noise, but now we know that there is a lot of signal in that noise.”

Much of that signal is from the 
default mode network, which operates while we’re resting or otherwise not engaged in apparent activity. This network is involved in the mental experience of drifting between past, present and future scenarios — what you might make for dinner, a memory from last week, some pain in your hip. Additionally, beneath the iceberg of awareness, our brains are keeping track of the mosaic of physical variables — body temperature, blood glucose level, heart rate, respiration, and so on — that must remain stable, in a state known as homeostasis, to keep us alive. If any of them stray too far, things can get bad pretty quickly.

Theriault speculates that most of the brain’s base metabolic load goes toward prediction. To achieve its homeostatic goals, the brain needs to always be planning for what comes next — building a sophisticated model of the environment and how changes might affect the body’s biological systems. Prediction, rather than reaction, Theriault said, allows the brain to dole out resources to the body efficiently.

The Brain’s Evolutionary Constraints

A 5% increased energy demand during active thought may not sound like much, but in the context of the entire body and the energy-hungry brain, it can add up. And when you consider the strict energetic constraints our ancestors had to deal with, weariness at the end of a taxing day suddenly makes a lot more sense.

“The reason you are fatigued, just like you are fatigued after physical activity, isn’t because you don’t have the calories to pay for it,” said 
Zahid Padamsey, a neuroscientist at Weill Cornell Medicine-Qatar, who was not involved in the new research. "It is because we have evolved to be very stingy systems. … We evolved in energy-poor environments, so we hate exerting energy."

The modern world, in which calories are relatively abundant for many people, contrasts starkly with the conditions of scarcity that Homo sapiens evolved in. That 5% increase in burn rate, over 20 days of persistent, active, task-related focus, can amount to a whole day’s worth of cognitive energy. If food is hard to come by, it could mean the difference between life and death.
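
The arithmetic behind that claim is simple enough to spell out; the sketch below uses assumed round figures for the daily energy budget, and the point is only the proportionality.

```python
# Rough arithmetic behind the "20 days" claim, with assumed round numbers.
brain_daily_kcal = 0.20 * 2000                  # brain ~20% of an assumed ~2,000 kcal/day budget
task_extra_per_day = 0.05 * brain_daily_kcal    # ~5% extra, task-related burn per day

days = brain_daily_kcal / task_extra_per_day    # days until the extras add up to one full
print(days)                                     # day of the brain's baseline budget -> 20.0
```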


“This can be substantial over time if you don’t cap the burn rate, so I think it is largely a relic of our evolutionary heritage,” Padamsey said. In fact, the brain has built-in systems to prevent overexertion. “You’re going to activate fatigue mechanisms that prevent further burn rates,” he said.


To better understand these energetic constraints, in 2023 Padamsey summarized research on certain 
peculiarities of electrical signaling that indicate an evolutionary tendency toward energy efficiency. For one thing, you might imagine that the faster you transmit information, the better. But the brain's optimal transmission rate is far lower than might be expected.

Theoretically, the top speed for a neuron to feasibly fire and send information to its neighbor is 500 hertz. However, if neurons actually fired at 500 hertz, the system would become completely overwhelmed. The 
optimal information rate — the fastest rate at which neurons can still distinguish messages from their neighbors — is half that, or 250 hertz.

Our neurons, however, have an average firing rate of 4 hertz, 50 to 60 times less than what is optimal for information transmission. What’s more, many synaptic transmissions fail: Even when an electrical signal is sent to the synapse, priming it to release molecules to the next neuron, it will do so only 20% of the time.


That’s because we didn’t evolve to maximize total information sent. “We have evolved to maximize information transmission per ATP spent,” Padamsey said. “That’s a very different equation.” Sending the maximum amount of information for as little energy as possible (bits per ATP), the optimal neuronal firing rate is under 10 hertz.
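
A toy calculation in the spirit of these arguments (my own sketch, not the paper's computation; the bin size follows the 500 Hz figure above, while the energy costs are arbitrary assumed units): score information as the entropy of a binary "spike or no spike" time bin, charge a fixed maintenance cost plus a much larger per-spike cost, and compare where raw bits per second versus bits per unit of energy peak.

```python
import numpy as np

dt = 1.0 / 500.0                      # 2 ms bins, so 500 Hz is the fastest possible rate
rates = np.linspace(1, 499, 499)      # candidate mean firing rates (Hz)
p = rates * dt                        # probability of a spike in a bin

entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # bits per bin (toy measure)
bits_per_second = entropy / dt

fixed_cost = 1.0                      # maintenance cost per bin (arbitrary units)
spike_cost = 500.0                    # a spike costs vastly more than a quiet bin (toy number)
energy = fixed_cost + spike_cost * p  # expected cost per bin
bits_per_energy = entropy / energy

print("rate maximizing bits/s      :", rates[np.argmax(bits_per_second)])   # 250 Hz
print("rate maximizing bits/energy :", rates[np.argmax(bits_per_energy)])   # well under 10 Hz here
```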


Evolutionarily, the large, sophisticated human brain offered an unprecedented level of behavioral complexity — at a great energetic cost. This negotiation, between the flexibility and innovation of a large brain and the energetic constraints of a biological system, defines the dynamics of how our brain transmits information, the mental fatigue we feel after periods of concentration, and the ongoing work our brain does to keep us alive. That it does so much within its limitations is rather astonishing.



Conor Feehly, Contributing Writer

Related:

What Your Brain Is Doing When You’re Not Doing Anything
AI Is Nothing Like a Brain, and That’s OK
The Mysterious Flow of Fluid in the Brain
The Brain Maps Out Ideas and Memories Like Spaces

本文於 修改第 1 次
回應 回應給此人 推薦文章 列印 加入我的文摘
引用網址:https://city.udn.com/forum/trackback.jsp?no=2976&aid=7254139
大腦神經網路整體連接學2.0 - Laura Dattaro
推薦2


胡卜凱
等級:8
留言加入好友

 
文章推薦人 (2)

亓官先生
胡卜凱

Index

connectome (a complete map of the brain's neural connections): see《科學家宣告完成果蠅幼蟲的「大腦地圖」》.
connectomics (the study of such whole-brain wiring maps): see《看見新世界,腦科學連結體計畫Connectome Project引領未來》.

Connectomics 2.0: Simulating the brain

With a complete fly connectome in hand, researchers are taking the next step to model how brain circuits fuel function.

Laura Dattaro, 05/02/25

Form and function: Using simulations based on connectivity maps, researchers are exploring how much they can learn about a circuit’s functions from its connections alone. Courtesy of Tory Herman
請至原網頁觀看模擬圖

In
2012, neuroscientists Sebastian Seung and J. Anthony Movshon squared off at a Columbia University event over the usefulness of connectomes—maps of every connection between every cell in the brain of a living organism.

Such a map, 
Seung argued, could crack open the brain’s computations and provide insight into processes such as sensory perception and memory. But Movshon, professor of neural science and psychology at New York University, countered that the relationship between structure and function was not so straightforward—that even if you knew how all of a brain’s neurons connect to one another, you still wouldn’t understand how the organ turns electrical signals into cognition and behavior.  

The debate in the field continues, even though Seung and his colleagues in the
FlyWire Consortium completed the first connectome of a female Drosophila melanogaster in 2023and even though a slew of new computational models built from that and other connectomes hint that structure does, in fact, reveal something about function.

“This is just the beginning, and that’s what’s exciting,” says Seung, professor of neuroscience at the Princeton Neuroscience Institute. “These papers are kicking off a beginning to an entirely new field, which is connectome-based brain simulation.” A simulated fruit fly optic lobe, detailed in a September 2024 Nature
paper, for example, accurately predicts which neurons in living fruit flies respond to different visual stimuli.

“All the work that’s been done in the past year or two feels like the beginning of something new,” says 
John Tuthill, associate professor of neuroscience at the University of Washington. Tuthill was not involved in the optic lobe study but used a similar approach to identify a circuit that seems to control walking in flies. Most published models so far have made predictions about simple functions that were already understood from recordings of neural activity, Tuthill adds. But “you can see how this will build up to something that is eventually very insightful.” 

And having the connectome has shaved years off the time it takes to, say, identify a neuron involved in a particular behavior, Seung says, and narrowed the field of experiments to only those that align with the way the brain is actually connected. “You don’t have to spend months or years chasing down dead ends,” he adds. “Simulation is going to improve that even more.”

The field of connectomics began in earnest with the mapping of the 302 neurons of the nematode Caenorhabditis elegans in 1986. Around the turn of the millennium, though, advances in electron microscopy made it possible to consider mapping significantly larger nervous systems, such as those of a fruit fly, a
mouse or, ultimately, a human. Researchers could slice up a brain, image each slice, reconstruct the brain in a computer and trace each neuron’s winding path.

But the excitement about that possibility was almost immediately matched by reservations about the time and money involved—and concerns that the payoff might not be worth it. 

Some of those concerns were laid out in a seminal 2013 Nature Methods
commentary by neuroscientists Eve Marder and Cori Bargmann. They wondered at the time what additional information beyond the brain's synaptic connections—whether those connections are inhibitory or excitatory, for instance—would be needed to make truly informative models.

More than a decade later, fly connectome data still lack that basic information. They also don’t account for electrical synapses—connections between neurons via electrical signals shared across a cell membrane. And some cells can be connected both electrically and chemically, creating multiple potential pathways across a single circuit, Marder says. “In the absence of knowing who’s electrically coupled to who, you can make some assumptions from a chemical circuit connectome that are going to be missing a lot of parallel pathways.” 

In its most pared-down form, a connectome represents the connections between neurons mathematically. In the case of the fruit fly, each connection appears in a matrix of 139,000 or so rows and columns, each representing one of the fly's 139,000 neurons. The cells display numbers that indicate how strongly two neurons are connected. Most contain a 0, because most pairs of neurons do not touch.
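
In code, such a matrix is almost never stored densely. A sketch of the sparse form it typically takes (the three entries below are invented purely to show the shape of the data):

```python
import numpy as np
from scipy.sparse import csr_matrix

n_neurons = 139_000   # one row and one column per neuron, as in the fly connectome

# A connectome in "edge list" form: (presynaptic, postsynaptic, synapse count).
pre   = np.array([12, 12, 98_765])
post  = np.array([340, 77_001, 12])
count = np.array([17, 3, 5])

# Dense storage would need 139,000 x 139,000 entries; sparse storage keeps only
# the nonzero connections, because most pairs of neurons never touch.
W = csr_matrix((count, (pre, post)), shape=(n_neurons, n_neurons))
print(W.shape, W.nnz)          # (139000, 139000) with just 3 stored connections
print(W[12, 340])              # synapse count from neuron 12 onto neuron 340 -> 17
```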

Simulations based on such matrices must add information—or make assumptions—about types of neurons, where they are located in the brain and what kinds of signals they propagate. That often works: Folding neural-network predictions about neurotransmitter identities into a 
connectome-based simulation for taste in flies, for example, generated activity in neurons known to help move the proboscis in response to sugar. Silencing those neurons in living flies blocked the behavior, suggesting the model had found the correct cells.
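
A minimal caricature of that kind of connectome-driven simulation (my own sketch, far smaller and simpler than any published fly model; the random connectivity, every number, and the update rule itself are assumptions): give each neuron a sign for its predicted neurotransmitter, then let firing rates relax toward a rectified sum of their signed inputs while a handful of "sensory" neurons are driven.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200                                    # tiny stand-in for the fly's ~139,000 neurons

# Sparse random "synapse weights" with one sign per presynaptic neuron, standing in
# for predicted excitatory vs inhibitory neurotransmitter identity.
mask = rng.random((n, n)) < 0.02
weights = rng.random((n, n)) * mask
sign = np.where(rng.random(n) < 0.7, 1.0, -1.0)
W = weights * sign[np.newaxis, :]          # column j holds the outputs of presynaptic neuron j

# Normalize rows so total absolute input stays below 1; keeps this toy model stable.
W = 0.8 * W / (np.abs(W).sum(axis=1, keepdims=True) + 1e-9)

def step(rates, drive, dt=0.1):
    """Leaky, rectified rate update driven by the signed connectivity matrix."""
    net_input = np.maximum(W @ rates + drive, 0.0)
    return rates + dt * (-rates + net_input)

rates = np.zeros(n)
drive = np.zeros(n)
drive[:10] = 1.0                           # "stimulate" ten input neurons
for _ in range(300):
    rates = step(rates, drive)

print(rates[:10].mean(), rates[10:].mean())   # stimulated cells vs downstream activity
```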

But 
Srinivas Turaga, a group leader at the Howard Hughes Medical Institute’s Janelia Research Campus, is developing a method to incorporate more real-world data into the models’ assumptions. In the September study that modeled the fly’s visual system, Turaga and his team built some basic assumptions about visual input into 50 models—all based on a fruit fly connectome from Janelia—and then they gave the models a rule: Whatever else they do, they must have the keen vision of a fly.

As these models processed simple movies, they spat out the activity from the neurons in the optic lobe’s motion pathway, and—across all 50—that activity aligned almost entirely with data recorded in 26 studies of living flies. For example, the models correctly identified a set of neurons called elementary motion detectors, known as T4 and T5 neurons. 

Neurons outside of the T4 and T5 group also seemed to compute motion, suggesting other as-yet-unknown visual-system cells exist. “Whether in reality they do this or not, that’s an empirical question,” says Turaga’s co-investigator 
Jakob Macke, professor of machine learning in science at Tübingen University. “For me, what this development can initiate is a new time in which computation modeling is so accurate that we can use it as a way to drive and plan experiments, rather than to explain experiments after the fact.”

In April of this year, Turaga’s team 
published in Nature a whole-body simulation of a fly that uses machine learning to walk and fly. That model is currently constrained by an artificial neural network, but it could instead be guided by the connectome—as well as a map of the nerve cord that connects the fly’s brain and body—in a similar way to Turaga’s optic lobe simulation. 

“Just dreaming of that would have been impossible before the connectome,” Turaga says. “Even now it sounds a little crazy. I’m not embarrassed to say that. The methods we’ve developed are promising enough to say, ‘Maybe we can dream that way, and maybe we can start thinking about building those models.’”

Even as these predictions get better, it’s unclear if they can be extrapolated across cell types. The medulla in the fly visual system, for example, contains at least 100 different cell types. And although connectome-based simulations predict to a large extent what cells react to—for example, whether a neuron responds more to dark or light spots—those predictions don’t always match real-life recordings, 
Thomas Clandinin, professor of neurobiology at Stanford University, and his postdoctoral fellow Timothy Currier reported in a preprint posted on bioRxiv in March. 

For one, many of the models assume that large neurons have a proportionally large visual field—a correlation that did not appear in the flies. What's more, pairs of cell types that shared some aspects of connectivity, such as a connection with the same input cell type, did not necessarily have similar visual responses.

“Similar connectivity does not allow you to predict similarities in function,” Currier says.

Their findings suggest that it’s difficult to make detailed predictions for many cell types when you consider only connectome data in your model. That means connectome-based simulations might provide a “sketch” of what an area of the brain is doing but aren’t useful at a finer resolution, Clandinin says—like a pointillist painting in which the picture seems to dissolve when you look closely. 

“You wouldn’t want to stare at an individual dot in the painting and infer, ‘This is exactly what happens here,’” Clandinin says. “Some of the papers have tried to stare at every dot.”

Incorporating recorded data, as Turaga is doing, will likely make the models better, according to Currier. “There’s always going to be a space for connectomics, and there’s always going to be a space for physiology,” he says. “You need both of them together to make the best predictions.”

And creating a perfectly accurate model of the brain from a connectome isn’t necessarily the goal, says Benjamin Cowley, assistant professor of computational neuroscience at Cold Spring Harbor Laboratory. Before the fly connectome existed, Cowley built a model of the optic lobe based on neuronal recordings from flies as they engaged in courtship behavior—males using songs to woo females. He and his colleagues selectively knocked out a set of neurons and recorded how the flies’ behavior changed. They then fed that information to the deep neural network on which their model ran, a method they call knockout training.
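In spirit, knockout training amounts to evaluating one model under many silencing conditions and fitting it so that each silenced version reproduces the behavior recorded under the matching manipulation in real flies. The sketch below illustrates only that loop; the network, the behavioral readout, and the “experiments” are placeholders, not Cowley’s actual model or data.

```python
# Rough sketch of the knockout-training idea: the same network is evaluated
# with different sets of neurons silenced, and it is trained so that its
# behavior under each knockout matches what was recorded in real flies.
import torch

n = 30
torch.manual_seed(1)
W = torch.nn.Parameter(0.1 * torch.randn(n, n))
readout = torch.nn.Linear(n, 1)   # e.g. a scalar courtship-song feature

def behavior(stim, knockout_mask):
    # knockout_mask has 0 for silenced neurons, 1 for intact ones.
    rates = torch.relu(stim @ W) * knockout_mask
    return readout(rates)

# Fake "experiments": (which neurons are silenced, recorded behavior value).
experiments = [
    (torch.ones(n), torch.tensor([[1.0]])),                     # intact fly
    (torch.tensor([0.0 if i < 5 else 1.0 for i in range(n)]),   # cells 0-4 silenced
     torch.tensor([[0.2]])),
]

opt = torch.optim.Adam([W, *readout.parameters()], lr=1e-2)
stim = torch.rand(1, n)
for _ in range(300):
    loss = sum((behavior(stim, m) - y).pow(2).mean() for m, y in experiments)
    opt.zero_grad()
    loss.backward()
    opt.step()
```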

As soon as the full female fly connectome preprint came out last year, Cowley found it confirmed some of his model’s predictions, and he published his findings in Nature last May. He is now working to update his model using connectome data but says he knows the connectomes are not the final say. They represent individual flies. Although the circuits across individuals are so far strikingly similar, they are not exactly the same. And models that are required to exactly replicate the connectome risk missing general principles of computation, Cowley says.

“The more faithful you are to the connectome,” he says, “the harder it is to train a model to be faithful to the behavior or the responses that you record.”
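One way to picture that tension is as a single objective with two terms, one rewarding a match to recorded behavior and one penalizing departures from the measured wiring. The code below is an illustrative formulation of that trade-off, not an equation from the paper; the weighting lambda_ is a free choice.

```python
# Illustrative regularized objective: let the model's weights deviate from
# the measured connectome, but charge a penalty for the deviation.
import numpy as np

def total_loss(behavior_pred, behavior_obs, W_model, W_connectome, lambda_=0.1):
    behavior_loss = np.mean((behavior_pred - behavior_obs) ** 2)   # fit the recordings
    connectome_loss = np.mean((W_model - W_connectome) ** 2)       # stay near the wiring
    return behavior_loss + lambda_ * connectome_loss

# A small lambda_ favors matching behavior; a large lambda_ favors matching
# the connectome exactly.
```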

Refinements to the connectomes, particularly the addition of electrical synapses, could improve the simulations, Marder says, and some scientists at Janelia are creating markers of those synapses that could be used to identify them in the connectome.

“I would assume once they have the electrical synapses added to those connectomes, they will discover a lot that they don’t know now,” she says. “It’s fine to be starting [the simulation work]. It’s just not going to be the end.”
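If and when electrical synapses are mapped, a simulation could fold them in as a symmetric coupling term alongside the chemical synapses. The toy update below shows the general form such a term might take; the matrices and constants are invented, not values from any fly or mouse dataset.

```python
# Sketch of adding electrical synapses (gap junctions) to a rate model that
# previously had only chemical synapses. G is symmetric and pulls coupled
# neurons toward each other's activity; all values are illustrative.
import numpy as np

n = 5
W_chem = np.zeros((n, n))
W_chem[0, 1] = 1.0
W_chem[2, 3] = 0.5
G = np.zeros((n, n))
G[1, 2] = G[2, 1] = 0.3          # one gap junction between neurons 1 and 2

def step(v, dt=0.01, tau=0.05):
    chemical = W_chem.T @ np.maximum(v, 0.0)      # drive through chemical synapses
    electrical = G @ v - G.sum(axis=1) * v        # sum_j G[i, j] * (v_j - v_i)
    return v + dt / tau * (-v + chemical + electrical)

v = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(200):
    v = step(v)
print(v)
```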

A better understanding of the principles governing circuits could also help. In a map of more than 70,000 synapses among 1,188 excitatory neurons and 164 inhibitory neurons in the mouse visual cortex, distinguishing cells based on their morphology, for example, revealed that different types of inhibitory cells tend to connect with different excitatory cells. The findings were published in Nature in April.
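That kind of statement comes from aggregating a table of individual synapses up to the level of cell types. A hypothetical version of that aggregation, with made-up type names and counts rather than the actual data-release format, looks like this:

```python
# Sketch of turning a synapse table into a cell-type-by-cell-type summary,
# the kind of aggregation behind "different inhibitory types target different
# excitatory cells". Column names and values are invented.
import pandas as pd

synapses = pd.DataFrame({
    "pre_type":  ["basket", "basket", "Martinotti", "Martinotti", "pyramidal"],
    "post_type": ["pyramidal_L23", "pyramidal_L5", "pyramidal_L23",
                  "pyramidal_L23", "pyramidal_L5"],
    "syn_count": [40, 5, 2, 30, 12],
})

# Total synapses from each presynaptic type onto each postsynaptic type.
type_matrix = (synapses
               .groupby(["pre_type", "post_type"])["syn_count"]
               .sum()
               .unstack(fill_value=0))
print(type_matrix)
```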

“The pathways mean the information is flowing very differently from one set of cells to another,” says investigator Forrest Collman, associate director of informatics at the Allen Institute for Brain Science. “That has to have a bottom-up functional impact on the way information and activity propagates through the system.”

But even if you could incorporate every detail about the imaged neurons and their interactions with one another, the connectome would still represent a single moment in time—devoid of information about how these connections change with experience. 
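What a plasticity-aware model adds, in the simplest textbook form, is a rule that lets the weights themselves change with activity. The Hebbian-style sketch below is generic and hypothetical, not a model of any specific circuit, but it shows the kind of dynamics a single-snapshot connectome cannot capture.

```python
# Minimal sketch of what a static connectome leaves out: a Hebbian-style rule
# that lets synaptic weights change with activity. The rule and constants are
# generic textbook choices, not measurements from any circuit.
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((10, 10)) * (rng.random((10, 10)) < 0.2)   # sparse starting wiring

def hebbian_step(W, pre_rates, post_rates, lr=0.01, decay=0.001):
    # Strengthen synapses whose pre- and postsynaptic partners are active
    # together, and let all weights decay slowly so they stay bounded.
    return W + lr * np.outer(pre_rates, post_rates) - decay * W

rates = rng.random(10)
for _ in range(100):
    post = np.tanh(W.T @ rates)       # simple feedforward activity
    W = hebbian_step(W, rates, post)
print(W.round(2))
```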

“To me, that is what makes a brain a brain,” says Adriane Otopalik, a group leader at Janelia who previously worked in Marder’s lab as a graduate student. “It seems odd to me to design a model that totally ignores that level of biology.”





引用網址:https://city.udn.com/forum/trackback.jsp?no=2976&aid=7252214