Neuroscience: General Research – Opening Post
My interest in neuroscience dates back to the 1980s. My earliest motivation was to answer two questions: "Does behavior require norms?" and, if so, "What are those norms?" I gradually came to realize that these are really questions of decision-making. As an engineer, I naturally understood that decisions rest on knowledge. My reading therefore expanded from ethics and sociology to epistemology. Around 1982 I read my first introduction to cognitive science, and from then on neuroscience became my main reading subject. Since 2000, the basic assumptions underlying my arguments have all incorporated my rudimentary understanding of it. 《唯物人文觀》 (2006) was my first attempt to integrate my understanding of neuroscience with that of the humanities and social sciences.

Recently I decided to consolidate the major topics discussed and reported in this forum, and neuroscience is naturally among them. Once I finish the two essays on "culture" that I am currently working on, I will begin a discussion of "consciousness."
Connectomics 2.0 - Laura Dattaro
Index:
connectome: a complete wiring map of a brain's neural network; see 《科學家宣告完成果蠅幼蟲的「大腦地圖」》.
connectomics: the study of connectomes; see 《看見新世界,腦科學連結體計畫Connectome Project引領未來》.

Connectomics 2.0: Simulating the brain

With a complete fly connectome in hand, researchers are taking the next step to model how brain circuits fuel function.

Laura Dattaro, 05/02/25

Form and function: Using simulations based on connectivity maps, researchers are exploring how much they can learn about a circuit's functions from its connections alone. Courtesy of Tory Herman (See the simulation image at the original page.)

In 2012, neuroscientists Sebastian Seung and J. Anthony Movshon squared off at a Columbia University event over the usefulness of connectomes—maps of every connection between every cell in the brain of a living organism. Such a map, Seung argued, could crack open the brain's computations and provide insight into processes such as sensory perception and memory. But Movshon, professor of neural science and psychology at New York University, countered that the relationship between structure and function was not so straightforward—that even if you knew how all of a brain's neurons connect to one another, you still wouldn't understand how the organ turns electrical signals into cognition and behavior.

The debate in the field continues, even though Seung and his colleagues in the FlyWire Consortium completed the first connectome of a female Drosophila melanogaster in 2023, and even though a slew of new computational models built from that and other connectomes hint that structure does, in fact, reveal something about function. "This is just the beginning, and that's what's exciting," says Seung, professor of neuroscience at the Princeton Neuroscience Institute. "These papers are kicking off a beginning to an entirely new field, which is connectome-based brain simulation."

A simulated fruit fly optic lobe, detailed in a September 2024 Nature paper, for example, accurately predicts which neurons in living fruit flies respond to different visual stimuli.
"All the work that's been done in the past year or two feels like the beginning of something new," says John Tuthill, associate professor of neuroscience at the University of Washington. Tuthill was not involved in the optic lobe study but used a similar approach to identify a circuit that seems to control walking in flies. Most published models so far have made predictions about simple functions that were already understood from recordings of neural activity, Tuthill adds. But "you can see how this will build up to something that is eventually very insightful."

And having the connectome has shaved years off the time it takes to, say, identify a neuron involved in a particular behavior, Seung says, and narrowed the field of experiments to only those that align with the way the brain is actually connected. "You don't have to spend months or years chasing down dead ends," he adds. "Simulation is going to improve that even more."

The field of connectomics began in earnest with the mapping of the 302 neurons of the nematode Caenorhabditis elegans in 1986. Around the turn of the millennium, though, advances in electron microscopy made it possible to consider mapping significantly larger nervous systems, such as those of a fruit fly, a mouse or, ultimately, a human. Researchers could slice up a brain, image each slice, reconstruct the brain in a computer and trace each neuron's winding path. But the excitement about that possibility was almost immediately matched by reservations about the time and money involved—and concerns that the payoff might not be worth it.

Some of those concerns were laid out in a seminal 2013 Nature Methods commentary by neuroscientists Eve Marder and Cori Bargmann. They wondered at the time what additional information beyond the brain's synaptic connections—whether those connections are inhibitory or excitatory, for instance—would be needed to make truly informative models.
More than a decade later, fly connectome data still lack that basic information. They also don't account for electrical synapses—connections between neurons via electrical signals shared across a cell membrane. And some cells can be connected both electrically and chemically, creating multiple potential pathways across a single circuit, Marder says. "In the absence of knowing who's electrically coupled to who, you can make some assumptions from a chemical circuit connectome that are going to be missing a lot of parallel pathways."

In its most pared-down form, a connectome represents the connections between neurons mathematically. In the case of the fruit fly, each connection appears in a matrix of 139,000 or so rows and columns, each representing one of the fly's 139,000 neurons. The cells display numbers that indicate how strongly two neurons are connected. Most contain a 0, because most pairs of neurons do not touch.

Simulations based on such matrices must add information—or make assumptions—about types of neurons, where they are located in the brain and what kinds of signals they propagate. That often works: Folding neural-network predictions about neurotransmitter identities into a connectome-based simulation for taste in flies, for example, generated activity in neurons known to help move the proboscis in response to sugar. Silencing those neurons in living flies blocked the behavior, suggesting the model had found the correct cells.

But Srinivas Turaga, a group leader at the Howard Hughes Medical Institute's Janelia Research Campus, is developing a method to incorporate more real-world data into the models' assumptions. In the September study that modeled the fly's visual system, Turaga and his team built some basic assumptions about visual input into 50 models—all based on a fruit fly connectome from Janelia—and then they gave the models a rule: Whatever else they do, they must have the keen vision of a fly.
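The matrix representation described above can be sketched in a few lines of code. This is a toy illustration with made-up neuron IDs and synapse counts, not code from any of the studies discussed; a real fly matrix has roughly 139,000 rows and columns.

```python
def make_connectome(edges, n_neurons):
    """Build an n x n connectivity matrix where entry [i][j] holds the
    number of synapses from neuron i onto neuron j, and 0 means no contact."""
    matrix = [[0] * n_neurons for _ in range(n_neurons)]
    for pre, post, n_synapses in edges:
        matrix[pre][post] = n_synapses
    return matrix

# Three hypothetical connections among five toy neurons.
edges = [(0, 1, 12), (1, 2, 3), (3, 4, 7)]
conn = make_connectome(edges, 5)

# As in the real connectome, most entries are 0, because most neuron
# pairs never touch -- the matrix is overwhelmingly sparse.
n_zero = sum(row.count(0) for row in conn)
print(n_zero)  # 22 of the 25 entries
```

At fly scale a dense matrix like this would be impractical; in practice connectome data are stored in sparse formats that keep only the nonzero entries, which carry the same information the article describes.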
As these models processed simple movies, they spat out the activity from the neurons in the optic lobe's motion pathway, and—across all 50—that activity aligned almost entirely with data recorded in 26 studies of living flies. For example, the models correctly identified a set of neurons called elementary motion detectors, known as T4 and T5 neurons. Neurons outside of the T4 and T5 group also seemed to compute motion, suggesting other as-yet-unknown visual-system cells exist.

"Whether in reality they do this or not, that's an empirical question," says Turaga's co-investigator Jakob Macke, professor of machine learning in science at Tübingen University. "For me, what this development can initiate is a new time in which computation modeling is so accurate that we can use it as a way to drive and plan experiments, rather than to explain experiments after the fact."

In April of this year, Turaga's team published in Nature a whole-body simulation of a fly that uses machine learning to walk and fly. That model is currently constrained by an artificial neural network, but it could instead be guided by the connectome—as well as a map of the nerve cord that connects the fly's brain and body—in a similar way to Turaga's optic lobe simulation.

"Just dreaming of that would have been impossible before the connectome," Turaga says. "Even now it sounds a little crazy. I'm not embarrassed to say that. The methods we've developed are promising enough to say, 'Maybe we can dream that way, and maybe we can start thinking about building those models.'"

Even as these predictions get better, it's unclear if they can be extrapolated across cell types. The medulla in the fly visual system, for example, contains at least 100 different cell types.
And although connectome-based simulations predict to a large extent what cells react to—for example, whether a neuron responds more to dark or light spots—those predictions don't always match real-life recordings, Thomas Clandinin, professor of neurobiology at Stanford University, and his postdoctoral fellow Timothy Currier reported in a preprint posted on bioRxiv in March. For one, many of the models assume that large neurons have a proportionally large visual field—a correlation that did not appear in the flies. What's more, pairs of cell types that shared some aspects of connectivity, such as a connection with the same input cell type, did not necessarily have similar visual responses. "Similar connectivity does not allow you to predict similarities in function," Currier says.

Their findings suggest that it's difficult to make detailed predictions for many cell types when you consider only connectome data in your model. That means connectome-based simulations might provide a "sketch" of what an area of the brain is doing but aren't useful at a finer resolution, Clandinin says—like a pointillist painting in which the picture seems to dissolve when you look closely. "You wouldn't want to stare at an individual dot in the painting and infer, 'This is exactly what happens here,'" Clandinin says. "Some of the papers have tried to stare at every dot."

Incorporating recorded data, as Turaga is doing, will likely make the models better, according to Currier. "There's always going to be a space for connectomics, and there's always going to be a space for physiology," he says. "You need both of them together to make the best predictions."

And creating a perfectly accurate model of the brain from a connectome isn't necessarily the goal, says Benjamin Cowley, assistant professor of computational neuroscience at Cold Spring Harbor Laboratory.
Before the fly connectome existed, Cowley built a model of the optic lobe based on neuronal recordings from flies as they engaged in courtship behavior—males using songs to woo females. His team selectively knocked out a set of neurons and recorded how the flies' behavior changed. They then fed that information to the deep neural network on which their model ran, a method they call knockout training.

As soon as the full female fly connectome preprint came out last year, Cowley found it confirmed some of his model's predictions, and he published his findings in Nature last May. He is now working to update his model using connectome data but says he knows the connectomes are not the final say. They represent individual flies. Although the circuits across individuals are so far strikingly similar, they are not exactly the same. And models that are required to exactly replicate the connectome risk missing general principles of computation, Cowley says. "The more faithful you are to the connectome," he says, "the harder it is to train a model to be faithful to the behavior or the responses that you record."

Improvements to the connectomes—particularly the addition of electrical synapses—could improve the simulations, Marder says, and some scientists at Janelia are creating markers of those synapses that could be used to identify them in the connectome. "I would assume once they have the electrical synapses added to those connectomes, they will discover a lot that they don't know now," she says. "It's fine to be starting [the simulation work]. It's just not going to be the end."

A better understanding of the principles governing circuits could also help. In a map of more than 70,000 synapses among 1,188 excitatory neurons and 164 inhibitory neurons in the mouse visual cortex, distinguishing cells based on their morphology, for example, revealed that different types of inhibitory cells tend to connect with different excitatory cells.
The findings were published in Nature in April. "The pathways mean the information is flowing very differently from one set of cells to another," says investigator Forrest Collman, associate director of informatics at the Allen Institute for Brain Science. "That has to have a bottom-up functional impact on the way information and activity propagates through the system."

But even if you could incorporate every detail about the imaged neurons and their interactions with one another, the connectome would still represent a single moment in time—devoid of information about how these connections change with experience. "To me, that is what makes a brain a brain," says Adriane Otopalik, a group leader at Janelia who previously worked in Marder's lab as a graduate student. "It seems odd to me to design a model that totally ignores that level of biology."
The Molecular Bond That Helps Keep Memories Stable Long-Term -- Ajdina Halilovic
The Molecular Bond That Helps Secure Your Memories

Ajdina Halilovic, 05/07/25

How do memories last a lifetime when the molecules that form them turn over within days, weeks or months? An interaction between two proteins points to a molecular basis for memory.

When Todd Sacktor was about to turn 3, his 4-year-old sister died of leukemia. "An empty bedroom next to mine. A swing set with two seats instead of one," he said, recalling the lingering traces of her presence in the house. "There was this missing person — never spoken of — for which I had only one memory."

That memory, faint but enduring, was set in the downstairs den of their home. A young Sacktor asked his sister to read him a book, and she brushed him off: "Go ask your mother." Sacktor glumly trudged up the stairs to the kitchen.

It's remarkable that, more than 60 years later, Sacktor remembers this fleeting childhood moment at all. The astonishing nature of memory is that every recollection is a physical trace, imprinted into brain tissue by the molecular machinery of neurons. How the essence of a lived moment is encoded and later retrieved remains one of the central unanswered questions in neuroscience.

Sacktor became a neuroscientist in pursuit of an answer. At the State University of New York Downstate in Brooklyn, he studies the molecules involved in maintaining the neuronal connections underlying memory. The question that has always held his attention was first articulated in 1984 by the famed biologist Francis Crick: How can memories persist for years, even decades, when the body's molecules degrade and are replaced in a matter of days, weeks or, at most, months?

In 2024, working alongside a team that included his longtime collaborator André Fenton, a neuroscientist at New York University, Sacktor offered a potential explanation in a paper published in Science Advances.
The researchers discovered that a persistent bond between two proteins is associated with the strengthening of synapses, which are the connections between neurons. Synaptic strengthening is thought to be fundamental to memory formation. As these proteins degrade, new ones take their place in a connected molecular swap that maintains the bond's integrity and, therefore, the memory.

In 1984, Francis Crick described a biological conundrum: Memories last years, while most molecules degrade in days or weeks. "How then is memory stored in the brain so that its trace is relatively immune to molecular turnover?" he wrote in Nature. (See the photo at the original page.)

The researchers present "a very convincing case" that "the interaction between these two molecules is needed for memory storage," said Karl Peter Giese, a neurobiologist at King's College London who was not involved with the work. The findings offer a compelling response to Crick's dilemma, reconciling the discordant timescales to explain how ephemeral molecules maintain memories that last a lifetime.

Molecular Memory

Early in his career, Sacktor made a discovery that would shape the rest of his life. After studying under the molecular memory pioneer James Schwartz at Columbia University, he opened his own lab at SUNY Downstate to search for a molecule that might help explain how long-term memories persist. The molecule he was looking for would be in the brain's synapses.

In 1949, the psychologist Donald Hebb proposed that repeatedly activating neurons strengthens the connections between them, or, as the neurobiologist Carla Shatz later put it: "Cells that fire together, wire together." In the decades since, many studies have suggested that the stronger the connection between neurons that hold memories, the better the memories persist.
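Hebb's proposal is often formalized as a simple weight-update rule: a synapse strengthens in proportion to the product of pre- and postsynaptic activity. A minimal sketch follows, with an illustrative learning rate and activity values that are not drawn from the article.

```python
def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: the weight grows only when presynaptic and
    postsynaptic activity coincide (pre * post > 0)."""
    return w + lr * pre * post

# Start from a hypothetical synaptic weight and apply four activity
# pairs; only the two co-firing events (1, 1) change the weight.
w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.7
```

Co-activation is the only case that changes the weight here, which is the "fire together, wire together" intuition in its barest form; real plasticity rules add complications such as weight decay and spike timing.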
In the early 1990s, in a dish in his lab, Sacktor stimulated a slice of a rat's hippocampus — a small region of the brain linked to memories of events and places, such as the interaction Sacktor had with his sister in the den — to activate neural pathways in a way that mimicked memory encoding and storage. Then he searched for any molecular changes that had taken place. Every time he repeated the experiment, he saw elevated levels of a certain protein within the synapses. "By the fourth time, I was like, this is it," he said.

It was protein kinase M zeta, or PKMζ for short. As the rats' hippocampal tissue was stimulated, synaptic connections strengthened and levels of PKMζ increased. By the time he published his findings in 1993, he was convinced that PKMζ was crucial for memory.

Todd Sacktor has devoted his career to pursuing the molecular nature of memory. (See the photo at the original page.)

Over the next two decades, he would go on to build a body of work showing that PKMζ's presence helps maintain memories long after their initial formation. When Sacktor blocked the molecule's activity an hour after a memory was formed, he saw that synaptic strengthening was reversed. This discovery suggested that PKMζ was "necessary and sufficient" to preserve a memory over time, he wrote in Nature Neuroscience in 2002. In contrast, hundreds of other localized molecules impacted synaptic strengthening only if disrupted within a few minutes of a memory's formation. It appeared to be a singular molecular key to long-term memory.

To test his hypothesis in live animals, he teamed up with Fenton, who worked at SUNY Downstate at the time and had experience training lab animals and running behavioral experiments. In 2006, the duo published their first paper showing that blocking PKMζ could erase rats' memories a day or a month after they had formed. This suggested that the persistent activity of PKMζ is required to maintain a memory.
Individually, PKMζ and KIBRA don't last a lifetime — but by binding to each other, they help ensure your memories might.

The paper was a bombshell. Sacktor and Fenton's star protein PKMζ gained widespread attention, and labs around the world found that blocking it could erase various types of memories, including those related to fear and taste. PKMζ seemed like a sweeping explanation for how memories form and are maintained at the molecular level.

But then their hypothesis lost momentum. Other researchers genetically engineered mice to lack PKMζ, and in 2013, two independent studies showed that these mice could still form memories. This cast doubt on the protein's role and brought much of the ongoing research to a halt.

Sacktor and Fenton were undeterred. "We knew we had to figure it out," Sacktor said. In 2016, they published a rebuttal, demonstrating that in the absence of PKMζ, mice recruit a backup mechanism, involving another molecule, to strengthen synapses.

The existence of a compensatory molecule wasn't a surprise. "The biological system is not such that you lose one molecule and everything goes. That's very rare," Giese said. But identifying this compensatory molecule prompted a new question: How did it know where to go to replace PKMζ? It would take Sacktor and Fenton nearly another decade to find out.

The Maintenance Bond

A classic test of a molecule's importance is to block it and see what breaks. Determined to pin down PKMζ's role once and for all, Sacktor and Fenton set out to design a way to disrupt it more precisely than ever before. They developed a new molecule to inhibit the activity of PKMζ. It "worked beautifully," Sacktor said. But it wasn't clear how.

One day in 2020, Matteo Bernabo, a graduate student from a collaborating lab at McGill University, was presenting findings related to the PKMζ inhibitor when a clue emerged from the audience.
"I suggested that it worked by blocking the PKMζ's interaction with KIBRA," recalled Wayne Sossin, a neuroscientist at McGill.

KIBRA is a scaffolding protein. Like an anchor, it holds other proteins in place inside a synapse. In the brain, it is abundant in regions associated with learning and memory. "It's not a protein that a lot of people work on," Sossin said, but there is considerable "independent evidence that KIBRA has something to do with memory" — and even that it is associated with PKMζ. Most research has focused on KIBRA's role in cancer. "In the nervous system," he said, "there are only three or four of us [studying it]." Sacktor and Fenton joined them.

André Fenton and his team found that an interaction between two proteins is key to keeping memory intact over time. (See the photo at the original page.)

To find out if KIBRA and PKMζ work together in response to synaptic activity, the researchers used a technique that makes interacting proteins glow. When they applied electrical pulses to hippocampal slices, glowing dots of evidence appeared: Following bursts of synaptic activity that produced long-term synaptic strengthening, a multitude of KIBRA-PKMζ complexes formed, and they were persistent.

Then the team tested the bond during real memory formation by giving mice a drug to disrupt the formation of these complexes. They saw that the mice's synaptic strength and task memory were lost — and that once the drug wore off, the erased memory did not return, but the mice could acquire and remember new memories once again.

But are the KIBRA-PKMζ complexes needed to maintain memory over the long term? To find out, the researchers disrupted the complex four weeks after a memory was formed. Doing so did indeed wipe out the memory. This suggested that the interaction between KIBRA and PKMζ is crucial not only for forming memories, but also for keeping them intact over time.
"It's the persistent association between two proteins that maintains the memory, rather than a protein that lasts by itself for the lifetime of the memory," said Panayiotis Tsokas, a researcher working with Sacktor and lead author on the new Science Advances paper.

The KIBRA and PKMζ proteins stabilize each other by forming a bond. That way, when a protein degrades and needs to be replaced, the other remains in place. The bond itself and its location at the specific synapses that were activated during learning are preserved, allowing a new partner to slot itself in, perpetuating the alliance over time.

The discovery helps address the conundrum first identified by Crick, namely how memories persist despite the relatively short lifetimes of all biological molecules. "There had to be a very, very interesting answer, an elegant answer, for how this could come about," Fenton said. "And that elegant answer is the KIBRA-PKMζ interacting story."

This work also answers a question that researchers had put on the shelf. Sacktor's earlier study showed that increasing levels of PKMζ strengthened synapses and memories. But how did the molecule know where to go within the neuron? "We figured, well, one day, maybe we'll understand that," Sacktor said. Now, the researchers think that KIBRA acts as a synaptic tag that guides PKMζ. If true, this would help explain how only the specific synapses involved in a particular physical memory trace are strengthened, when a neuron may have thousands of synapses that connect it to various other cells.

"These experiments very nicely show that KIBRA is necessary for maintaining the activity of PKMζ at the synapse," said David Glanzman, a neurobiologist at the University of California, Los Angeles, who was not involved in the study.
However, he cautioned that this doesn't necessarily translate to maintaining memory, because synaptic strengthening is not the only model for how memory works. Glanzman's own past research on sea slugs at first appeared to show that disrupting a molecule analogous to PKMζ erases memory. "Originally, I said it was erased," Glanzman said, "but later experiments showed we could bring the memory back." These findings prompted him to reconsider whether memory is truly stored as changes in the strength of synaptic connections.

Glanzman, who has worked for 40 years under the synaptic model, is a recent proponent of an alternative view called the molecular-encoding model, which posits that molecules inside a neuron store memories. While he has no doubt that synaptic strengthening follows memory formation, and that PKMζ plays a major role in this process, he remains unsure if the molecule also stores the memory itself. Still, Glanzman emphasized that this study addresses some of the challenges of the synaptic model, such as molecular turnover and synapse targeting, by "providing evidence that KIBRA and PKMζ form a complex that is synapse-specific and persists longer than either individual molecule."

Although Sacktor and Fenton believe that this protein pair is fundamental to memory, they know that there may be other factors yet to be discovered that help memories persist. Just as PKMζ led them to KIBRA, the complex might lead them further still.
How the Brain Distinguishes Self-Motion from External Motion - Sahana Sitaraman
Is the World Spinning, or Is It Me? How the Brain Distinguishes Self and External Motion

Researchers identified a region of the thalamus that predicts and nullifies the effects of movement on visual perception, enabling precise representation of the world.

Sahana Sitaraman, PhD, 05/01/25

Researchers have identified a brain region that could encode information to reduce visual blur while an animal is on the move. iStock, cundra (See the illustration at the original page.)

In the 1860s, physician Hermann von Helmholtz did a simple experiment to understand how the world stays still during eye movements. With a still head, he closed one eye and swiveled the other to look around. Despite the rapid darting of the open eye, he noticed the image of the surroundings appeared stable rather than blurred. Next, instead of moving the eye naturally, he gently pushed it around in the socket with a finger and noticed the chaotic movement of the view. Why does the world shift when an external force is used to move the eye, but not when it swivels on its own?

Von Helmholtz proposed that when an animal decides to move its eyes naturally, certain areas of the brain receive a duplicate of this command called the efference copy, which signals that the upcoming motion of the world is a result of eye movement and not actual shifting of the environment.1 This message is absent when an external force moves the visual field. "When the finger hijacks the movement, it takes away the efference copy and we see what happens in a world without one," said Tomas Vega-Zuniga, a neuroscientist at the Institute of Science and Technology Austria (ISTA).

The efference copy plays a crucial role in an animal's ability to differentiate its own motion from that of the surrounding world, which in turn is essential for coherent perception and behavior.2 Understanding the neural mechanisms of this brain-body coordination has intrigued neuroscientists for decades.
Previous investigations have suggested that the efference copy originates in various thalamic and cortical regions of the brain. Now, researchers at ISTA have shown that a region of the thalamus, called the ventral lateral geniculate nucleus (vLGN), serves as an interface between visual and motor neural circuits and is responsible for correcting self-motion-induced blur. These findings in mice, published in Nature Neuroscience, could help researchers better understand how an organism's senses faithfully represent the world and enable appropriate behavior.3

Visual perception is complex and requires seamless communication between different brain areas. One such region that integrates visual and other sensory perceptions with movement is the superior colliculus.4 This multi-layered structure receives visual information directly from the retina, as well as indirectly via the visual cortex. "The superior colliculus is like a map of the world. It knows where things are in space," said Vega-Zuniga, a study coauthor.

In previous experiments in primates, researchers showed that the superior colliculus sends signals to areas in the cortex that control eye movements. So, Vega-Zuniga and his colleagues hypothesized that neurons that send information to the superior colliculus could serve as a source of the efference copy and correct motion-induced blur.

An important function of the efference copy is to block certain sensory inputs to maintain coherent perception for the organism. For example, auditory signaling in crickets is diminished during their own chirps, so that their hearing is not desensitized at other times. Since this is achieved through suppression or inhibition, the team focused on the vLGN, which forms inhibitory connections with neurons in the superior colliculus.

Neuroscientists Olga Symonova and Tomas Vega-Zuniga, along with their colleagues, used a custom-built imaging setup to visualize mouse brain activity while the animals were awake and behaving.
(Image credit: Institute of Science and Technology Austria; see the photo at the original page.)

Vega-Zuniga and his colleagues wanted to determine how the vLGN modulates the activity of superior colliculus neurons in a mouse that is awake and performing diverse behaviors. They built an experimental setup in which the animal could run freely on a tiny ball while they peered into its brain through a tiny window in its skull. Using calcium imaging to visualize the activity of the vLGN neurons, the researchers observed that the thalamic neurons responded to both visual stimuli and movement, including eye movement, making them well-suited to modulate visual signals in the superior colliculus in response to self-motion.

However, the question remained: Does the vLGN produce an efference copy that helps to differentiate between self and external motion? The researchers recorded vLGN neuronal responses to natural eye movement or eye movement simulated by the movement of a virtual environment around the mice, representing self and external motion, respectively. They found that the vLGN neurons only responded to self-motion. Additionally, when the team blocked vLGN activity, the neuronal responses to eye movement in the superior colliculus became longer and more frequent, suggesting that the vLGN shortens the effective time of visual exposure during movement, thus reducing blur.

To confirm this, Vega-Zuniga and his colleagues tested how well the mice could perceive depth, which not only requires both visual and movement perception, but is detrimentally affected by blurry vision. They observed that mice in which the vLGN output was blocked showed reduced avoidance of a cliff in the behavioral arena, demonstrating difficulty in judging depth. Based on this, the authors suggested that the vLGN is important for visual perception during self-generated movements. However, some researchers think it's too early to infer this with conviction.
“They need to demonstrate that there’s an informative gate at the level of the vLGN that tells the cortex, ‘Look, it’s not the external world that is moving, it’s you that is moving,’” said Maria Morrone, a neuroscientist at the University of Pisa, who was not involved in the study. “That will change the field of active vision.”

Aman Saleem, a behavioral neuroscientist at University College London who did not contribute to the study, expressed similar reservations. “Does [the vLGN] have a modulatory influence or is it actively encoding information?” he asked. While he appreciates the in-depth characterization of the circuits and behaviors related to the vLGN, he is looking forward to data on the detailed computations at each step, from the information coming through the retina to what the animal sees when it makes the movement.

References
1. Sun LD, Goldberg ME. Corollary discharge and oculomotor proprioception: Cortical mechanisms for spatially accurate vision. Annu Rev Vis Sci. 2016;2:61-84.
2. Crapse TB, Sommer MA. Corollary discharge across the animal kingdom. Nat Rev Neurosci. 2008;9(8):587-600.
3. Vega-Zuniga T, et al. A thalamic hub-and-spoke network enables visual perception during action by coordinating visuomotor dynamics. Nat Neurosci. 2025;28:627-639.
4. Massot C, et al. Sensorimotor transformation elicits systematic patterns of activity along the dorsoventral extent of the superior colliculus in the macaque monkey. Commun Biol. 2019;2(1):287.

Sahana Sitaraman, PhD, is a science journalist and an intern at The Scientist, with a background in neuroscience and microbiology. She has previously written for Live Science, Massive Science, and eLife.
|
Concept Neurons ------- Carlyn Zwarenstein
|
|
推薦1 |
|
|
To me, the report below is genuinely news. If things really are as described (they are surely not this simple), then the mystery of “consciousness” may see a decisive breakthrough within a century.

What makes humans intelligent? These unique neurons might hold the key
Research suggests our intelligence may arise from specialized brain cells only found in humans
Carlyn Zwarenstein, 03/25/25

Neuronal network, conceptual illustration. (CHRISTOPH BURGSTEDT/SCIENCE PHOTO LIBRARY / Getty Images)

You probably have a general understanding of the human brain: a network of nerve cells connected by synapses. Complex or abstract ideas emerge as a result of the firing of many of these nerve cells, or neurons. This means that a concept, memory or idea is the result of a distributed pattern of neural activity. In contrast to computer memory, which always follows the same pattern of 1s and 0s, it is more as if the networks in our brain weave a new tapestry every time we think about something. This understanding is common to most of us lay people who know a little about neurology and the brain, or who are interested in AI and the attempts to replicate human intelligence.

Unfortunately, you may be badly out of date. As it turns out, human brains also have a specialized type of cell called a concept neuron, which does what was long thought to be impossible: each of these cells encodes an entire concept, such that a single neuron fires whenever you’re exposed to a stimulus relating to that concept, or even when you think about it without an external stimulus. This would be like having a single neuron that fires when you see a photograph of your grandmother, hear her voice, read her name or perhaps even smell her familiar perfume. This single neuron thus has semantic invariance for the concept of your grandmother: it fires whenever your grandmother is the topic of thought, regardless of the context, the medium or the sense that stimulates your thought of her. Dr.
Florian Mormann, a physician and researcher who heads a working group on cognitive and clinical neurophysiology at the University of Bonn, told Salon in a video interview that “textbooks of neuroscience that still exist today often mention the grandmother neuron as an example of something you would never find in a brain. Because clearly, it would be so much more efficient to simply have a network of eight neurons if it can do the same job as 70 different neurons. So it seemed a no-brainer for everyone that there shouldn’t be grandmother neurons in any brain.”

And yet, the no-brainer is seemingly wrong about the brain. As it turns out, grandmother neurons, or concept cells as they’re now known (or Jennifer Aniston neurons, as they were called for a bit, as we’ll see), have been right there in our brains all along. And some scientists have known they are there, gradually learning more about them and their implications over the past 20 years. It’s not a conspiracy: the news just hasn’t really trickled out to the rest of us. That’s partly because most scientists who physically get right inside skulls to study the brains inside them do so with the brains of non-human animals, so most brain research we hear about still doesn’t involve these neurons, which, it seems, exist exclusively in the human brain.

Dr. Rodrigo Quian Quiroga, the director of the Centre for Systems Neuroscience at the University of Leicester, discovered concept cells twenty years ago. In research published in January, a team led by Quian Quiroga showed for the first time the way they respond to a given concept regardless of the context. This is in contrast to everything that is known about how memories are encoded in non-human animals, and it suggests that this might be a key adaptation behind human intelligence.

It’s not ethical to carry out invasive procedures on human brains.
For this reason, we don’t often conduct single neuron recordings, a type of research that requires access right inside the skull, on humans. “So in the past 50 years [single neuron recordings have] been done on a huge scale in especially rodents and also in monkeys, plus a few other mammals. But it’s mainly those two species,” Mormann, who, like Quian Quiroga, is one of very few researchers to conduct single neuron studies on humans, told Salon. “And of course, the multi-million dollar industry of rodent research does not exist because we’re all so fascinated by what the rodent brain can do. Not at all. It is because we believe it could serve as a valid model of the human brain for human episodic memory,” Mormann added.

Instead, neuroscience has relied on the data gathered from doing such procedures on mouse and primate models. The assumption that underlies this is that a mouse or monkey brain serves as an adequate, if simplified, proxy for the human one. And indeed, that’s been good enough to fuel decades of brain science.

Single neuron recordings in humans

Single neuron recordings are otherwise only done in rodent or primate models because brain surgery is never without risks, and those are not risks we usually ask humans to take without extremely good reason. A diagnostic procedure to ensure the surgeons don’t resect the wrong hippocampus when considering surgery for patients with epilepsy is something most of us would consider an extremely good reason. So Mormann’s team, and a small number of other research groups, study the firing of individual neurons by inserting fine microwires inside the hollow tube of the depth electrodes that have to be implanted anyway in order to diagnose the location of a patient’s seizures in hopes of curing their epilepsy.

Two decades ago, shortly after single neuron recordings started to be used by scientists to take advantage of this rare opportunity to engage in ethical neurological research in human beings, Dr.
Rodrigo Quian Quiroga, who now heads up the Neural Mechanisms of Perception and Memory Research Group at the Hospital del Mar Research Institute in Barcelona, showed that individual neurons could be extremely selective, with a single neuron firing predictably in response to images of a single animal, individual or place.

“Twenty years ago … I was doing experiments with a patient, and then I showed many pictures of Jennifer Aniston, and I found a neuron that responded only to her and to nothing else,” Quian Quiroga told Salon in a video interview. “And as I found this one, I found many others later on, and basically it was very clear that in an area called the hippocampus that is known to be critical for memory, we have neurons that represent, in this case, specific people, or in general, specific concepts. It can be a person, it can be an object, it can be a place — whatever is relevant to the person or to the patient.”

In that first work, described in 2005, UCLA neurosurgeon Itzhak Fried, Quian Quiroga, then his mentee, and other colleagues showed that a subset of neurons in the hippocampal formation, and more generally, in an area of the brain called the medial temporal lobe (MTL) that includes the hippocampus, fire in response to the subjects being shown pictures of a particular person under strikingly different conditions: at different ages, in different poses or contexts. A given neuron would even fire in response to reading the person’s name. Confusion is possible: for example, a Jennifer Aniston neuron might fire if you show the subject a picture of Lisa Kudrow, who played Phoebe on “Friends” alongside Aniston’s Rachel.

“Nobody expected that this neuron type could exist, but we show very clearly that they do exist,” Quian Quiroga said. They dubbed these brain cells Jennifer Aniston neurons.
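The screening logic behind such a "Jennifer Aniston neuron" can be caricatured as a selectivity test over trial-averaged spike counts. The sketch below is illustrative only: the firing numbers, the 5x selectivity ratio and the minimum-rate cutoff are invented for the example, not taken from the published analyses.

```python
def is_concept_cell(spike_counts, target_stimuli, ratio=5.0, min_rate=2.0):
    """spike_counts: dict mapping stimulus name -> mean spikes per presentation.
    Call a unit 'selective' for the target concept if every target stimulus
    (photo, written name, ...) drives it at least min_rate spikes and at least
    `ratio` times harder than the best non-target stimulus. Illustrative criterion."""
    target = [spike_counts[s] for s in target_stimuli]
    others = [v for k, v in spike_counts.items() if k not in target_stimuli]
    return min(target) >= min_rate and min(target) >= ratio * max(others)

# Toy unit: fires to any Aniston-related stimulus, nearly silent otherwise
counts = {
    "aniston_photo_1": 12.0,
    "aniston_photo_2": 10.5,
    "aniston_written_name": 9.0,   # semantic invariance: text drives it too
    "eiffel_tower": 0.4,
    "snake": 0.2,
    "other_actor": 1.1,
}
aniston = ["aniston_photo_1", "aniston_photo_2", "aniston_written_name"]
print(is_concept_cell(counts, aniston))  # True: invariant, selective responses
```

The interesting empirical fact in the article is precisely that units passing a test like this, across photos, names and voices, have so far been reported only in the human MTL.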
Whatever you call them, these neurons have not yet been found in rats, nor in monkeys, nor in any other mammal or organism — only in humans, despite Quian Quiroga challenging anyone, anywhere, to find another creature with concept cells. In the last two decades, no one has found them in any species but ours, where they are reliably easy to find by testing neurons at random in the MTL.

A brief history of concept cells

Quian Quiroga’s team and others, like Mormann’s, conducted further studies that gradually revealed different properties of concept neurons in the MTL, primarily in the hippocampus, and how they behave under different conditions. Semantic concept neurons have also been found in other parts of the MTL, such as the amygdala and the entorhinal cortex. Recently, Mormann and his team identified smell-specific concept cells in the piriform cortex, which they dubbed olfactory concept cells. In fact, of 1,856 neurons tested in the piriform cortex in that study, 66 responded to both images and odors, meaning that they had semantic invariance with respect to the concept of the odor in question, firing in response to an image associated with the smell as well as to the smell itself.

Concepts encoded by concept neurons can be an animal, an article of clothing, a place or a person. The concept can be evoked, and the associated neuron made to fire, by a direct stimulus such as seeing the person or hearing their voice; or, without a stimulus, by imagery, free recall or comparisons that reference the concept.

“We show explicitly that [concept cells are] involved in forming and storing memories,” Quian Quiroga explained. “We had a series of experiments [where] we showed that these neurons are involved in the forming of memories, and they can do this very quickly.
I mean, in just one shot, they can change the way they respond, and with this, they are encoding new experiences.” He further argued that concept cells are fundamental components, or building blocks, of declarative memory. It’s not surprising, he felt, that we have neurons responding to concepts in an area of the brain associated with memory — after all, we do tend to remember concepts and forget details.

The human way of remembering is very abstract. Most of us don’t remember precisely what a person looks like, what they are wearing, or the words they say in a conversation; rather, we focus on the basic ideas. As the authors (including Mormann) of a 2020 study of how single neurons in the MTL are able to code abstract meaning write, “Although semantic abstraction is efficient and may facilitate generalization of knowledge to novel situations, it comes at the cost of a loss of detail and may be central to the generation of false memories.” Still, that cost may well be the price of human intelligence.

“I started arguing,” Quian Quiroga recalled, “that this is a trait of human intelligence, and … one of the key aspects that distinguishes us from other animals: the fact that we just don’t focus on details, but we’re able to extract what is the important information and focus on that. And this is the way we store our memories, and this is the way we think, so we are not bombarded with details.” Humans, unlike our fellow animals, just want the key point, the essential abstraction. That, Quian Quiroga maintains, is the way we remember, and the way we think.

And that’s what he shows in the new study, published in Cell Reports. The research team recorded the activity of individual neurons while patients learned and then recalled two stories that described different situations but featured the same character or place.
Nearly all of the neurons that fired initially did so without regard to the context, such that, as the authors explain, “taking all neurons together it is possible to decode the person/place being depicted in each story, but not the particular story.” The brain cells that fire during learning and memory are firing in response to the concept of that character, and will fire in any context in which that concept features.

Quian Quiroga believes that the development of language involved the adaptation of neurons common to all mammals to this specialized purpose. We don’t know exactly when this arose; perhaps it evolved gradually, Quian Quiroga theorized. “I think in the last 100,000 years, the moment that [Homo] sapiens started uttering words and attributing meaning to things in terms of words,” Quian Quiroga said. “Then the sapiens started thinking in terms of words instead of pictures. I think that created the big phase transition of intelligence, the fact that we started thinking in terms of words. I think this created concept cells, because once you attribute the word to something, then you get completely rid of the details. And that’s exactly what concept cells do.”

A new frontier in neuroscience

“The striking thing that also since 2005 has gone largely ignored [by] rodent electrophysiologists is that this degree of semantic invariance, plus also context independence, is something we observe only in the human. There’s no other species in the animal kingdom where this has been convincingly reported, and even in humans, we only find them in the medial temporal lobe and not in any other brain region,” Mormann explained. “To me, [it’s] one of the most seminal discoveries of the last 50 years, at least, but has been largely ignored.
And the reason why it’s been largely ignored is, in my opinion, because this type of electrophysiological research traditionally cannot be done in humans, for obvious ethical constraints.”

But as we’ve seen, there is one very particular circumstance in which it’s necessary to carry out such invasive procedures. Patients with seizures as a result of epilepsy may require exploratory surgery — invasive seizure diagnostics, it’s called — to determine if they would be good candidates for a neurosurgical resection in which the seizure-generating area of the brain would be removed, taking away the condition and any neurological or cognitive deficits associated with it.

“Our job is to make sure that we found the seizure-generating area so that we can then provide the patient with three pieces of information. One, their chances of becoming permanently seizure-free if we resect that area of the brain, which reflects how certain we are that we’ve identified the epileptic focus. Two is what price they’ve got to pay, because there is often some residual function that might be gone once we remove [the epileptic tissue]. And the third one is the complication risk,” Mormann explained to Salon with evident care. The history of lobotomy is such that no one would want to be associated with reckless brain surgery.

For a small proportion of this group of epilepsy patients, less than 10%, recording seizures using scalp EEG is enough to provide the necessary information. But in cases where they cannot reach a conclusion on those risks and benefits non-invasively, his team offers the patients diagnostic surgery in which they implant electrodes and use them to record seizures exactly where they are happening. The implants may be in place for a week or more.
“The area that is being implanted the most [or used to be] is the medial temporal lobe, simply because it’s very well-shielded from the outside on both sides, and also because that is the region that’s mandatory for episodic memory formation,” Mormann said.

Henry Molaison — known for decades only as H.M. — became one of the most famous patients in neuroscience after losing his ability to form memories entirely in 1953. That incident, so unfortunate for him and so interesting for our understanding of memory, occurred due to a bilateral resection to control epileptic seizures that had blighted his life since the age of 10, possibly resulting from a minor bicycle accident. To say he had a bilateral resection means that the surgeons removed these structures in both hemispheres of the brain. The surgery was successful at curing the epilepsy, but it left him unable to form new memories. (Interestingly, attempts to replicate the effects of Molaison’s surgery on memory in monkeys were unsuccessful at first, revealing that humans and monkeys use different parts of the brain for learning certain tasks.)

So it’s safer to resect just one hemisphere. But if you choose the wrong one to remove, the patient will still have seizures, and now may have impaired memory as well. “That is why … it has become customary to be careful not to resect the wrong hippocampus, because simply, there’s no second attempt,” Mormann explained.
So, Mormann believes, the research in humans has been largely ignored. In his view, semantic concept cells in humans are thought of as “a fancy version of place cells” at best. Place cells are cells, found so far only in rodents, that fire at certain spots as a mouse or rat moves through a linear track, indicating a cell specific to certain locations. (As Mormann’s own recent research has affirmed, humans also have location-specific neurons that play similarly specialized roles in spatial awareness.)

“I think there’s a paradigm shift involved in all this,” said Quian Quiroga. While colleagues have been teaching his work to undergraduates for years now, “there might be some inertia not to take this paradigm shift because … we assume that the human brain is kind of like an extrapolated version of the workings of the animal brain.” He added that neuroscientists, having described certain principles in animal models, “assume that these principles will also apply to humans, although maybe with a bit of a higher complexity. And I think what we’re showing is that … we shouldn’t take this for granted.”

Not that this invalidates the use of animal models in neurology. But it means that, rather than expecting a rat’s brain or a monkey’s brain to tell us all about how things work in humans, the differences between their brains and ours may be what’s really of interest. In fact, Quian Quiroga is busy with experiments aimed at quantifying the differences between the way humans process information in the hippocampus, or in the memory system in general, and what’s been described over the last fifty years of memory research using other animal models. “This paper we just published is the first of what I expect to be a series of studies, because this is just showing the tip of the iceberg,” Quian Quiroga said.
What concept cells could mean for AI

Of course, all those years of focus on mouse and monkey brains as models for how we think have also informed our work in artificial intelligence. Might this explain why, impressive though it is, AI has not yet replicated the way humans really think?

It wasn’t until 2020, after fifteen years of experimental evidence supporting the existence of concept cells, that three researchers writing in Scientific Reports set out a theoretical justification for the possibility of such structures existing, and in fact for the likelihood of the existence of such cells in the hippocampus. “Three fundamental conditions, fulfilled by the human brain, ensure high cognitive functionality of single cells,” the authors write. Until now, though, it was a very different, prevailing understanding of the brain that guided our development of artificial brains: that the coordinated action of countless neurons — a neural network — is what allows for the representation of abstract concepts. Although we do, of course, use a distributed neural network for many aspects of our cognition, it now seems that representing entire concepts using specific cells with semantic invariance might be a key difference between human intelligence and that of other animals.

Perhaps also of interest to those hoping to build artificial brains, Quian Quiroga’s work suggests a possible explanation for how it is that, anatomically, the brain of a human and that of a chimpanzee are not all that different. The human’s is bigger, but not so much as to explain the very considerable difference in intelligence. “So my point is not that the human brain is different,” Quian Quiroga said. “It’s that the human brain must be working differently. It is just the fact that in the chimpanzee brain, you will have a visual stimulus going all the way from visual processing areas into your memory system. So you form memories based on pictures, based on images.
In the human brain, the visual system comes to a point, and then you extract a meaning from it, and it’s only the meaning, and not the stimulus itself, that goes into the memory system.” Our anatomy has barely changed from that of our common ancestor with chimpanzees. But language, it seems, has drastically changed how we use it.

Carlyn Zwarenstein writes about science for Salon. She’s also the author of a book about drugs, pain, and the consolations of art, On Opium: Pain, Pleasure, and Other Matters of Substance.
|
Are Male and Female Brains Different? - N. Lanese
|
|
推薦1 |
|
|
gender: 性向 (whether this rendering is faithful and clear remains to be confirmed); a distinction among people based on “culture” (behavior patterns, self-identification, criteria of judgment, and so on); see 分別1, 分別2.
sex: 性別; a distinction among people based on biological building blocks (such as chromosomes) and physiological structures (such as reproductive organs); see 分別1, 分別2.

Is there really a difference between male and female brains? Emerging science is revealing the answer.
Nicoletta Lanese, 03/07/25

Brain scans, postmortem dissections, artificial intelligence and lab mice reveal differences in the brain that are linked to sex. Do we know what they mean?

(See the original page for the illustration.)

You’re holding two wrinkly human brains, each dripping in formaldehyde. Look at one and then the other. Can you tell which brain is female and which is male? You can’t. Humanity has been hunting for sex-based differences in the brain since at least the time of the ancient Greeks, and it has largely been an exercise in futility. That’s partly because human brains do not come in two distinct forms, said Dr. Armin Raznahan, chief of the National Institute of Mental Health’s Section on Developmental Neurogenomics. “I’m not aware of any measure you can make of the human brain where the male and female distributions don’t overlap,” Raznahan told Live Science.

But the question of how male and female brains differ may still matter, because brain diseases and psychiatric disorders manifest differently between the sexes. Disentangling how much of that difference is rooted in biology versus the environment could lead to better treatments, experts argue. There are many different disorders of the brain — psychiatric and neurologic diseases — that occur with different prevalence and are expressed in different ways between the sexes, said Dr. Yvonne Lui, a clinician-scientist and vice chair of research in NYU Langone’s Department of Radiology. “Trying to understand baseline differences can help us better understand how diseases manifest.”
Now, thanks in part to artificial intelligence (AI), scientists are starting to reliably distinguish male and female brains using subtle differences in their cellular structures and in neural circuits that play a role in a wide range of cognitive tasks, from visual perception to movement to emotional regulation. Other studies point to sex-based differences in human brain structure that may be present from birth, and still other, lab-based research in animals points to sex-based differences in how brain cells fire at a molecular level.

What’s still completely unclear is to what extent these differences matter. Do they change how people’s brains function or how susceptible they are to disease? Should they dictate which treatments doctors offer to each patient? Even as scientists pinpoint subtle brain differences between females and males, their research inevitably runs up against tricky questions of how sex, gender and culture interplay to sculpt human cognition. Right now, it’s impossible to answer these big questions. But ongoing and future research — focused on lab animals, human chromosomes and brain development, and subjects followed from youth through adulthood — could start to reveal how these sex-based differences concretely affect cognition and, ultimately, the development of diseases of the brain.

Why study sex-based brain differences?

Historically, scientists used purported brain differences to make sweeping statements about how men and women think and behave, and to justify sexist beliefs that women were innately less intelligent and less capable than men. While that early research has been discredited, modern studies still find cognitive differences between men and women — at least on average. For example, men reportedly perform better on tests of spatial ability, while women are better at interpreting the facial expressions of others. But men and women are raised and treated very differently in society, so what’s at the root of these differences?
Is it nature or nurture, or both? “It’s actually incredibly difficult in humans to … causally distinguish how much of a sex difference is societally or environmentally driven,” Raznahan said. “We have all of these assumptions and biases that sort of slip into our heads through the back door without us realizing.”

Given the dubious history of studying sex differences in the brain, and the logistical difficulty of doing it the right way, one might wonder why scientists bother. For many, it’s because neurological diseases and psychiatric conditions seem to play out differently in males and females, and both biological and environmental factors could explain why that is. Data suggest women experience higher rates of depression and migraine than men do, while men have higher rates of schizophrenia and autism. About twice as many men as women develop Parkinson’s disease, but women with the condition tend to have faster-progressing disease.

All these data come from studies that don’t necessarily distinguish sex from gender — “sex” describes biology, while “gender” reflects self-identity, as well as societal roles and pressures. Lumping the two concepts together muddies our understanding of why a given difference exists. For instance, pubescent girls are more likely to experience depression than boys are, which may be related to how their maturing brains handle stress, or to the possibility that they encounter more stressful events than boys do at that age. Conversely, do boys’ brains make them resilient against depression, or are they actually going underdiagnosed due to social stigma? The answers to these questions point to different solutions. Scientists argue that understanding the biological factors behind differences in neurological and psychiatric disorders could lead to better, tailored treatments for each sex.
(Image credit: Photo illustration by Marilyn Perkins; source image by hidesy via Getty Images) (See the original page for the photo.)

Large-scale structures, negligible differences

Thanks to brain-scanning techniques like MRI, scientists have found subtle sex differences in the size, shape and thickness of various brain structures, as well as differences in networks that link different parts of the brain. But these differences are small to negligible when you account for the average size difference between males and females, argues Lise Eliot, a professor of neuroscience at the Rosalind Franklin University of Medicine and Science and author of “Pink Brain, Blue Brain” (Houghton Mifflin Harcourt, 2009). Eliot and colleagues recently looked at about 30 years of studies, finding that, on average, male brains are 6% larger than female brains at birth and grow to be 11% larger by adulthood. This makes sense because average brain size scales along with average body size, and male bodies tend to be larger. But when you take this overall size difference into account, the subtler structural differences between male and female brains shrink to the point of negligibility, the researchers concluded. “There are maybe species-wide sex differences in the brain, but so far, they haven’t been proven,” Eliot told Live Science. “And so if they exist, they must be pretty small.”

Nonetheless, some scientists have reported differences that they say don’t scale with body size. Some examples came from a research group that crunched MRI data from over 40,000 adult brains scanned for the UK Biobank, a repository of medical data from 500,000 adults in the United Kingdom.

The best-established structural difference between male and female brains is the average difference in whole-brain volume. Across many, but not all, studies, the putamen tends to be larger in males.
Findings about size differences in other structures — such as the hippocampus, nucleus accumbens and thalamus — have been more variable across studies, Eliot and colleagues argue. The above differences were reported in the UK Biobank study. (Image credit: Marilyn Perkins) (See the original page for the explanatory diagram.)

In that study, males had a larger thalamus, a relay station for sensory information. They also had a larger putamen, which helps control movement and forms part of a feedback loop that tells you whether a movement was well executed. Females, on average, had a larger left-side nucleus accumbens, part of the brain’s reward center, and a bigger hippocampus, the storage site for short-term memories of facts and events that also helps transfer that information to long-term memory.

But neither this nor other studies have revealed a specific feature that reliably distinguishes a given male brain from a female brain, since the size ranges seen in each sex largely overlap, Raznahan and colleagues noted in a letter responding to that study. For the few size differences that do exist, it’s currently impossible to say whether they explain any differences in cognition linked to sex or, alternatively, whether they actually make males’ and females’ cognition more similar, the letter authors noted. Perhaps male and female brains operate slightly differently to reach the same output — to “counterbalance” differences in hormones or genetics that may affect brain function, they wrote. “When we’re just talking about describing a difference in a measurement, that’s not saying anything about whether it’s got any functional relevance at all,” Raznahan emphasized.

AI finds subtle differences

While large-scale structural features might not distinguish male and female brains, AI is helping to uncover other, subtler features that may differentiate the two. Some of these differences appear at the level of the brain’s microstructure, meaning its individual cells and the components of those cells.
For instance, a study published in May 2024 used different AI models to analyze brain scans from 1,030 young adults ages 22 to 37. The research primarily focused on white matter, the bundles of insulated wiring that run between neurons. "I believe ours is the first study to detect brain microstructural differences between sexes," said Lui, who co-authored the study. The AI models analyzed differences in both local landmarks in the brain — such as the corpus callosum, which connects the brain's two halves — and the highways that connect distant cells. They also looked at differences in how the white matter was bundled together, as well as in how dense and well insulated those bundles were. The algorithms accurately predicted the sex of the subject tied to a given scan 92% to 98% of the time. The remaining gap in accuracy likely comes down to the "huge amount of variance in humans," Lui said. No single part of the brain could be used to make predictions; one model relied on 15 distinct regions of white matter. All models showed some consistencies, though, with the largest white matter structure that crosses the midline, the corpus callosum, standing out as key. This figure displays regions of white matter that were important for predicting a given study participant's sex (labeled red). Specifically, it highlights areas that were important due to their distinct "fractional anisotropy," a common measure of white-matter integrity. The labels along the left-hand side correspond with the three AI algorithms used in the study. (Image credit: Chen, et al. (2024) doi: 10.1038/s41598-024-60340-y (CC BY 4.0)) (See the original page for the figure)

From birth

Lui and colleagues' study was not designed to address how an individual's upbringing or environment shapes the brain. Nor did it aim to disentangle biological differences in the brain from those rooted in gender. Sex describes biological differences in anatomy, physiology, hormones and chromosomes.
Sex traits are categorized as male or female, although some people's traits don't fit neatly into either category. Gender, on the other hand, is cultural. It encompasses how people identify and express themselves, as well as how they are treated and expected to behave by others. Genders include man and woman, as well as others, including those that fall under the umbrella term nonbinary or are unique to specific cultures, like the māhū of Hawai'i. Historically, studies have conflated sex and gender. To tease these factors apart and see how each manifests in the brain, it would be helpful to follow people over time as their brains are developing — and new research is beginning to do just that. For example, a 2024 study looked at average brain volume in over 500 newborns: Males' brains were 6% larger overall, even after accounting for differences in birth weight, and females had larger gray-to-white matter ratios. (Gray matter, the cell bodies of neurons, is primarily found in the outer layer of the brain, called the cortex.) That average difference in gray matter is also seen in adults, which makes sense given that larger brains need more white matter to relay signals between far-apart cells. Statistically, these big-picture brain differences were more significant than differences seen in smaller structures. Females had larger corpora callosa, as well as more gray matter around the hippocampus and in a key emotion-processing hub called the left anterior cingulate gyrus (ACG). Males had more gray matter in parts of the temporal lobe involved in sensory processing, as well as in the subthalamic nucleus, key for movement control. But sex could only explain a fraction of the variance seen in these structures. As in adults, whole-brain volume differences have been consistently reported in children of different sexes. Data regarding size differences in smaller features of the brain have been less consistent across studies.
The above graphic reflects the findings of the 2024 study in newborns. (Image credit: Marilyn Perkins) (See the original page for the explanatory graphic)

Some of these brain differences are "present from the earliest stage of postnatal life" and persist into adulthood, the authors noted. This applies mostly to the global differences, but also potentially to some of the smaller ones. For example, some studies — but not all — show that the left ACG is also larger in adult females, not only in babies. Durable differences present from birth are likely sex-based. But differences that emerge or disappear later in life, like those in the hippocampus, may be influenced by the environment, or else reflect sex differences in development, including hormonal shifts in puberty.

Gender and sex

Studies like this can help tease apart the influence of sex and gender on the brain. At present, there's a "massive gap" in our understanding of how these factors shape the brain independently and in tandem, said Elvisha Dhamala, an assistant professor of psychiatry at the Feinstein Institutes for Medical Research in New York. Dhamala and colleagues recently aimed to fill in that gap using data from the Adolescent Brain and Cognitive Development (ABCD) study, an enormous U.S.-based study of brain development and child health. They incorporated functional MRI (fMRI) scans from nearly 4,800 children; fMRI tracks blood flow in the brain to give an indirect measure of brain activity. Each child joined the study at age 9 or 10 and will be followed for 10 years, which will enable follow-up studies. The fMRI scans highlighted linked brain areas, or networks, that lit up as the children did different tasks, including memory tests that required them to recall several images. The children and their parents also answered questions about the kids' feelings about their genders and how they typically play and express themselves. "It's not anything clinical," Dhamala noted. "It's just an aspect of behavior that represents your gender."
These answers were used to generate "scores" for each child that the AI algorithm could use as data points. This figure illustrates associations between brain networks in the cortex, as well as non-cortical structures (top left), and the children's sexes and genders. The heatmap in the top right shows correlations between the various networks and sex, with warmer colors indicating stronger correlations and cooler colors indicating weaker ones. The bottom two heatmaps display correlations with the gender scores generated from the parents' questionnaires. The bottom-left map shows data for children assigned female at birth (AFAB), and the bottom-right map shows data for kids assigned male at birth (AMAB). (Image credit: Dhamala, et al. (2024) doi: 10.1126/sciadv.adn4202) (See the original page for the explanatory graphic)

The algorithm ultimately revealed two largely distinct brain networks tied to sex and gender. The brain differences most strongly tied to sex were found in networks responsible for processing visual stimuli and physical sensations, controlling movement, making decisions and regulating emotions. Differences tied to gender were more widely dispersed, involving connections within and between many areas in the cortex. After pinpointing these networks, the researchers trained their AI algorithms to "predict" a child's sex or gender based on brain activity. They accurately determined most children's sexes, similar to the results of Lui's study. Gender proved trickier: With the children's questionnaire answers, the AI couldn't predict where they landed on a continuum of gender, whereas with the parents' answers, its predictive power exceeded chance but was still "much lower" than the predictions for sex, Dhamala said. Nonetheless, the study highlighted an understudied idea: that gender sculpts the brain in ways that are distinct from sex, she said. Interestingly, some tentative lines can be drawn between Lui's and Dhamala's AI-powered studies.
They can't be directly compared, as the two studies used different types of analyses and focused on different features of the brain. But many of the physical white matter tracts flagged in the former study correspond with functional networks highlighted in the latter, Dhamala told Live Science. As an example, the cingulum — a white-matter tract that encircles the corpus callosum — seemed key for making predictions in Lui's study. It also links together various networks flagged in Dhamala's study, including circuits involved in emotional processing. That hints that sex differences exist in both the physical anatomy of these networks and in their activation patterns, Dhamala said.

The future of the sex-difference field

Scientists have made some progress at teasing out sex differences in the brain, but to truly understand these distinctions, researchers will need to do more animal studies to allow for more experimental control, according to a 2020 paper co-authored by Raznahan. Various studies in lab rats have already revealed differences in how males and females form connections between neurons, and how each sex processes fearful memories, for example. In humans, scientists can collect more brain data right at the time of birth, to pinpoint baseline differences that might exist before a child encounters any cultural influences, and then track the child over time, Raznahan and colleagues added. Another option is to study human genes that are unique to either the X or Y chromosome. By looking at people with extra or missing sex chromosomes, for example, scientists have started to unravel how these genes either inflate or shrink brain structures, contributing to sex differences in size. Chromosomes may also raise or lower the risk of disorders — for instance, carrying an extra Y raises the likelihood that a person has autism, whereas an extra X does not.
That may help to explain why males, who usually carry one X and one Y, have higher autism rates than females, who typically have two Xs. Right now, the fate of such research is uncertain in the U.S. Prompted by executive orders from the new presidential administration, the National Science Foundation has been combing through active research projects to see if they include words that might violate said orders, such as "woman," "female" and "gender," and the National Institutes of Health appeared to archive a long-standing policy requiring both male and female lab animals in studies. "There's just a lot of uncertainty," Dhamala told Live Science. If the worst-case scenario comes to pass, "removing that gender component, or making it harder to study sex differences, is going to push us backward rather than forward." But if the field survives, future work could incorporate gender the way the ABCD study did, using questionnaires to generate composite scores, Dhamala said. As a start, scientists could at least ask study participants what gender they identify as, she added. Other experts agree. By adopting these strategies, scientists could dramatically advance a research field that dates back to Aristotle. Their efforts could lend new talking points to the endless debate of nature versus nurture. They could uncover meaningful sex differences that pave the way to better treatments for depression, Alzheimer's and more. Or they could highlight the ways members of the "opposite sex" are actually more alike than they are different. Nicoletta Lanese is the health channel editor at Live Science and was previously a news editor and staff writer at the site. She holds a graduate certificate in science communication from UC Santa Cruz and degrees in neuroscience and dance from the University of Florida. Her work has appeared in The Scientist, Science News, the Mercury News, Mongabay and Stanford Medicine Magazine, among other outlets.
Based in NYC, she also remains heavily involved in dance and performs in local choreographers' work.

Science Spotlight

Science Spotlight takes a deeper look at emerging science and gives you, our readers, the perspective you need on these advances. Our stories highlight trends in different fields, how new research is changing old ideas, and how the picture of the world we live in is being transformed thanks to science.

Related:
* 'Let's just study males and keep it simple': How excluding female animals from research held neuroscience back, and could do so again
* Babies' brain activity changes dramatically before and after birth, groundbreaking study finds
* Men have a daily hormone cycle — and it's synced to their brains shrinking from morning to night
* Pregnancy shrinks parts of the brain, leaving 'permanent etchings' postpartum
The Brain's Basic Principle for Processing Information ---- Brandon Robert Munn
This article reports more than neuroscience research results: the principle the author distills also applies to business and politics. For a society to flourish, every member needs both a certain level of ability and the freedom to exercise it. The former comes from education and basic economic security; the latter rests on a rational and open social organization.

How do brains coordinate activity? From fruit flies to monkeys, we discovered this universal principle

Brandon Robert Munn, 11/06/24

The brain is a marvel of efficiency, honed by thousands of years of evolution so it can adapt and thrive in a rapidly changing world. Yet, despite decades of research, the mystery of how the brain achieves this has remained elusive. Our new research, published in the journal Cell, reveals how neurons – the cells responsible for your childhood memories, thoughts and emotions – coordinate their activity. It's a bit like being a worker in a high-performing business. Balancing individual skills with teamwork is key to success, but how do you achieve the balance? As it turns out, the brain's secret is surprisingly simple: devote no more than half (and no less than 40%) of each cell's effort to individual tasks. Where does the rest of the effort go? Towards scalable teamwork. And here's the kicker: we found the exact same organisational structure across the brains of five species – from fruit flies and nematodes to zebrafish, mice and monkeys. These species come from different branches of the tree of life that are separated by more than a billion years of evolution, suggesting we may have uncovered a fundamental principle for optimised information processing. It also offers powerful lessons for any complex system today.

The critical middle ground

Our discovery addresses a long-standing debate about the brain: do neurons act like star players (each highly specialised and efficient) or do they prioritise teamwork (ensuring the whole system works even when some elements falter)? Answering this question has been challenging. Until recently, neuroscience tools were limited to either recording the activity of a few cells, or of several million.
It would be like trying to understand a massive company by either interviewing a handful of employees or by only receiving high-level department summaries. The critical middle ground was missing. However, with advances in calcium imaging, we can now record signals from tens of thousands of cells simultaneously. Calcium imaging is a method that lets us watch neural activity in real time by using fluorescent sensors that light up according to calcium levels in the cell. An example of calcium imaging shows neuron activity in a zebrafish brain. (See the original page for this video) Applying insights from my physics training to analyse large-scale datasets, we found that brain activity unfolds according to a fractal hierarchy. Cells work together to build larger, coordinated networks, creating an organisation in which each scale mirrors those above and below. This structure answered the debate: the brain actually does both. It balances individuality and teamwork, and does so in a clever way. Roughly half of the effort goes to "personal" performance, while the rest goes to collaboration within increasingly larger networks. The Sierpiński triangle is an example of a fractal, where the same pattern repeats at infinite scales. Beojan Stanislaus/Wikimedia Commons, CC BY-SA (See the original page for the image)

The brain can rapidly adapt to change

To test whether the brain's structure had unique advantages, we ran computational simulations, revealing that this fractal hierarchy optimises information flow across the brain. It allows the brain to do something crucial: adapt to change. It ensures the brain operates efficiently, accomplishing tasks with minimal resources while staying resilient by maintaining function even when neurons misfire. Whether you are navigating unfamiliar terrain or reacting to a sudden threat, your brain processes and acts on new information rapidly. Neurons continuously adjust their coordination, keeping the brain stable enough for deep thought, yet agile enough to respond to new challenges.
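The self-similar, "each scale mirrors those above and below" organisation described above can be illustrated with a toy example. The sketch below is not from the study; it simply prints the Sierpiński triangle the article mentions, using the number-theoretic fact (Lucas's theorem) that the binomial coefficient C(i, j) is odd exactly when every set bit of j is also set in i:

```python
def sierpinski(n_rows):
    """Return the first n_rows of the Sierpinski triangle as strings.

    Cell (i, j) is filled when C(i, j) is odd, which by Lucas's theorem
    happens exactly when every set bit of j is also a set bit of i.
    """
    return ["".join("*" if (j & i) == j else " " for j in range(i + 1))
            for i in range(n_rows)]

# Print a small triangle; doubling n_rows reproduces the same
# pattern at the next scale up, which is what makes it a fractal.
for row in sierpinski(8):
    print(row)
```

Doubling `n_rows` repeatedly shows the same triangular motif recurring at ever larger scales, a loose visual analogue of the nested neural networks the authors report.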
The multiscale organisation we found allows different strategies – or "neural codes" – to function at different scales. For instance, we found that zebrafish movement relies on many neurons working in unison. This resilient design ensures swimming continues smoothly, even in fast-changing environments. By contrast, mouse vision adapts at the cellular scale, permitting the precision required to extract fine details from a scene. Here, if a few neurons miss key pieces of information, the entire perception can shift – like when an optical illusion tricks your brain. Evolutionary tree of species analysed in our study, each displaying a fractal neural organisation that balances efficiency and resilience. (MYA: million years ago; BYA: billion years ago) Brandon Munn (See the original page for the image) Our findings reveal that this fractal coordination of neuron activity occurs across a vast evolutionary span: from vertebrates, whose last common ancestor lived 450 million years ago, to invertebrates, dating back a billion years. This suggests brains have evolved to balance efficiency with resilience, allowing for optimised information processing and adaptability to new behavioural demands. This evolutionary persistence hints that we've uncovered a fundamental design principle.

A fundamental principle?

These are exciting times, as physics and neuroscience continue interacting to uncover the universal laws of the brain, crafted over aeons of natural selection. Future work will be needed to see how these principles might play out in the human brain. Our findings also hint at something bigger: this simple rule of individual focus and scalable teamwork might not just be a solution for the brain. When elements are organised into tiered networks, resources can be shared efficiently, and the system becomes robust against disruptions.
The best businesses operate in the same way — when a new challenge arises, individuals can react without waiting for instructions from their manager, allowing them to solve the problem rapidly while remaining supported by the organisation. It may be a universal principle for achieving resilience and efficiency in complex systems. It appears basketball legend Michael Jordan was right when he said: "talent wins games, but teamwork and intelligence win championships". Brandon Robert Munn is a postdoctoral research fellow at the University of Sydney.
Rats Driving Tiny Cars: The Three-Way Interplay of Emotion, Brain and Behavior -- Kelly Lambert
See the original page for the related photos and videos.

Glossary:
accumbens: the nucleus accumbens
appetitive: here, motivating or incentive-related; appetite-promoting
Froot Loop: a brand of breakfast cereal
spur: here, to encourage, urge on or accelerate; also a pointed object or a riding spur

Neuroscientists taught rats to drive tiny cars. They took them out on 'joy rides.'

Scientists taught rats to drive to a certain destination, but the rodents took a detour, suggesting they enjoy both the journey and the rewarding destination.

Kelly Lambert, 11/16/24

We crafted our first rodent car from a plastic cereal container. After trial and error, my colleagues and I found that rats could learn to drive forward by grasping a small wire that acted like a gas pedal. Before long, they were steering with surprising precision to reach a Froot Loop treat. As expected, rats housed in enriched environments — complete with toys, space and companions — learned to drive faster than those in standard cages. This finding supported the idea that complex environments enhance neuroplasticity: the brain's ability to change across the lifespan in response to environmental demands. After we published our research, the story of driving rats went viral in the media. The project continues in my lab with new, improved rat-operated vehicles, or ROVs, designed by robotics professor John McManus and his students. These upgraded electrical ROVs — featuring rat-proof wiring, indestructible tires and ergonomic driving levers — are akin to a rodent version of Tesla's Cybertruck. As a neuroscientist who advocates for housing and testing laboratory animals in natural habitats, I've found it amusing to see how far we've strayed from my lab practices with this project. Rats typically prefer dirt, sticks and rocks over plastic objects. Now, we had them driving cars. But humans didn't evolve to drive either. Although our ancient ancestors didn't have cars, they had flexible brains that enabled them to acquire new skills — fire, language, stone tools and agriculture. And some time after the invention of the wheel, humans made cars.
Although cars made for rats are far from anything they would encounter in the wild, we believed that driving represented an interesting way to study how rodents acquire new skills. Unexpectedly, we found that the rats had an intense motivation for their driving training, often jumping into the car and revving the "lever engine" before their vehicle hit the road. Why was that? Some rats training to drive press a lever before their car is placed on the track, as if they're eagerly anticipating the ride ahead.

The new destination of joy

Concepts from introductory psychology textbooks took on a new, hands-on dimension in our rodent driving laboratory. Building on foundational learning approaches such as operant conditioning, which reinforces targeted behavior through strategic incentives, we trained the rats step-by-step in their driver's ed programs. Initially, they learned basic movements, such as climbing into the car and pressing a lever. But with practice, these simple actions evolved into more complex behaviors, such as steering the car toward a specific destination. The rats also taught me something profound one morning during the pandemic. It was the summer of 2020, a period marked by emotional isolation for almost everyone on the planet, even laboratory rats. When I walked into the lab, I noticed something unusual: The three driving-trained rats eagerly ran to the side of the cage, jumping up like my dog does when asked if he wants to take a walk. Had the rats always done this and I just hadn't noticed? Were they just eager for a Froot Loop, or anticipating the drive itself? Whatever the case, they appeared to be feeling something positive — perhaps excitement and anticipation. Behaviors associated with positive experiences are associated with joy in humans, but what about rats? Was I seeing something akin to joy in a rat?
Maybe so, considering that neuroscience research increasingly suggests that joy and positive emotions play a critical role in the health of both human and nonhuman animals. With that, my team and I shifted focus from topics such as how chronic stress influences brains to how positive events — and anticipation of these events — shape neural functions. Working with postdoctoral fellow Kitty Hartvigsen, I designed a new protocol that used waiting periods to ramp up anticipation before a positive event. Bringing Pavlovian conditioning into the mix, we had rats wait 15 minutes after a Lego block was placed in their cage before they received a Froot Loop. They also had to wait in their transport cage for a few minutes before entering Rat Park, their play area. We also added challenges, such as making them shell sunflower seeds before eating. This became our Wait For It research program. We dubbed this new line of study UPERs — unpredictable positive experience responses — in which rats were trained to wait for rewards. In contrast, control rats received their rewards immediately. After about a month of training, we expose the rats to different tests to determine how waiting for positive experiences affects how they learn and behave. We're currently peering into their brains to map the neural footprint of extended positive experiences. Preliminary results suggest that rats required to wait for their rewards show signs of shifting from a pessimistic cognitive style to an optimistic one in a test designed to measure rodent optimism. They performed better on cognitive tasks and were bolder in their problem-solving strategies. We linked this program to our lab's broader interest in behaviorceuticals, a term I coined to suggest that experiences can alter brain chemistry much as pharmaceuticals do. This research provides further support for the idea that anticipation can reinforce behavior.
Previous work with lab rats has shown that rats pressing a bar for cocaine — a stimulant that increases dopamine activation — already experience a surge of dopamine as they anticipate a dose of cocaine.

The tale of rat tails

It wasn't just the effects of anticipation on rat behavior that caught our attention. One day, a student noticed something strange: One of the rats in the group trained to expect positive experiences had its tail straight up with a crook at the end, resembling the handle of an old-fashioned umbrella. I had never seen this in my decades of working with rats. Reviewing the video footage, we found that the rats trained to anticipate positive experiences were more likely to hold their tails high than untrained rats. But what, exactly, did this mean? Curious, I posted a picture of the behavior on social media. Fellow neuroscientists identified this as a gentler form of what's called Straub tail, typically seen in rats given the opioid morphine. This S-shaped curl is also linked to dopamine. When dopamine is blocked, the Straub tail behavior subsides. Natural forms of opiates and dopamine — key players in brain pathways that diminish pain and enhance reward — seem to be telltale ingredients of the elevated tails in our anticipation training program. Observing tail posture in rats adds a new layer to our understanding of rat emotional expression, reminding us that emotions are expressed throughout the entire body. While we can't directly ask rats whether they like to drive, we devised a behavioral test to assess their motivation to drive. This time, instead of only giving rats the option of driving to the Froot Loop Tree, they could also make a shorter journey on foot — or paw, in this case. Surprisingly, two of the three rats chose to take the less efficient path of turning away from the reward and running to the car to drive to their Froot Loop destination. This response suggests that the rats enjoy both the journey and the rewarding destination.
Rat lessons on enjoying the journey

We're not the only team investigating positive emotions in animals. Neuroscientist Jaak Panksepp famously tickled rats, demonstrating their capacity for joy. Research has also shown that desirable low-stress rat environments retune their brains' reward circuits, such as the nucleus accumbens. When animals are housed in their favored environments, the area of the nucleus accumbens that responds to appetitive experiences expands. Alternatively, when rats are housed in stressful contexts, the fear-generating zones of their nucleus accumbens expand. It is as if the brain is a piano the environment can tune. Neuroscientist Curt Richter also made the case for rats having hope. In a study that wouldn't be permitted today, rats swam in glass cylinders filled with water, eventually drowning from exhaustion if they weren't rescued. Lab rats frequently handled by humans swam for hours to days. Wild rats gave up after just a few minutes. If the wild rats were briefly rescued, however, their survival time extended dramatically, sometimes by days. It seemed that being rescued gave the rats hope and spurred them on. The driving rats project has opened new and unexpected doors in my behavioral neuroscience research lab. While it's vital to study negative emotions such as fear and stress, positive experiences also shape the brain in significant ways. As animals — human or otherwise — navigate the unpredictability of life, anticipating positive experiences helps drive a persistence to keep searching for life's rewards. In a world of immediate gratification, these rats offer insights into the neural principles guiding everyday behavior. Rather than pushing buttons for instant rewards, they remind us that planning, anticipating and enjoying the ride may be key to a healthy brain. That's a lesson my lab rats have taught me well.

Dr. Kelly Lambert received her undergraduate degree from Samford University in Birmingham, AL (majoring in psychology and biology) in 1984 and her M.S. and Ph.D. in biopsychology from the University of Georgia in 1988. After spending 28 years at Randolph-Macon College in Ashland, VA, where she served as the Macon and Joan Brock Professor and Chair of the Psychology Department, Co-Director of Undergraduate Research, and Director of the Behavioral Neuroscience Major, she recently joined the faculty at the University of Richmond as Professor of Behavioral Neuroscience. She enjoys teaching courses such as Behavioral Neuroscience, Clinical Neuroscience, Comparative Animal Behavior, Neuroplasticity and Psychobiology of Stress. Dr. Lambert has won several teaching awards, including the 2008 Virginia Professor of the Year.

Related Readings:
* 'A direct relationship between your sense of sight and recovery rate': Biologist Kathy Willis on why looking at nature can speed up healing
* Resilience is a skill that can be cultivated, a psychologist explains
* Scientists breed most human-like mice yet
* These 3 neurons may underlie the drive to eat food
The Marvelous Brain -- Kerri Smith
This article comes with many diagrams, infographics and videos; be sure to view them on the original page.

What's so special about the human brain?

Torrents of data from cell atlases, brain organoids and other methods are finally delivering answers to an age-old question.

Kerri Smith, Infographics by Nik Spencer, Illustrations by Phil Wheeler, 11/2024

There must be something about the human brain that's different from the brains of other animals — something that enables humans to plan, imagine the future, solve crossword puzzles, tell sarcastic jokes and do the many other things that together make our species unique. And something that explains why humans get devastating conditions that other animals don't — such as bipolar disorder and schizophrenia. So, what is that something? In the past few years, new methods for studying the human brain — and those of other species — have started to reveal key differences in greater detail than ever before. Researchers can now snoop on what happens inside millions of brain cells by cataloguing the genes, RNA and proteins they produce. And by studying brain tissue, scientists are learning key lessons about how the organ develops and functions. One is that the differences between human brain cells and those of other species are often subtle. Another is that the human brain develops slowly compared with those of other animals. But how these features give rise to our cognitive skills is still a mystery — although researchers have plenty of promising leads.

Size matters

If there is one thing that stands out about the human brain compared with those of other primates — and even those of some extinct human relatives — it is its size. The human brain is up to three times larger in volume than the brains of chimpanzees, gorillas and many extinct human relatives. Brain size is tightly correlated with body size in most animals. But humans break the mould. Our brains are much larger than expected given our body size. Here are some animals' brains ranked according to size.
Researchers often use a ratio called the encephalization quotient (EQ) to get an idea of how much larger or smaller an animal's brain is compared with what would be expected given its body size. The EQ is 1.0 if the brain-to-body mass ratio meets expectations. Here are their brains scaled according to their EQ, with the actual brain sizes represented by dotted lines. The mouse brain is half as big as expected for its body size. The human brain is more than seven times the expected size. Although evolution has enlarged the human brain, it hasn't done so uniformly: some brain areas have ballooned more than others. One particularly enlarged region is the cortex, an area that carries out planning, reasoning, language and many other behaviours that humans excel at. Other areas, such as the cerebellum — an area at the back of the brain that is densely populated with neurons, and which helps to conduct movement and planning — have expanded too. The prefrontal cortex has a similar structure in both chimps and humans, although it takes up much more real estate in the human brain than in the chimp brain. There is also a big difference in the number of neurons in the human brain compared with those of other animals. The human brain has about 1,000 times more neurons than the mouse brain, for instance, and 13.5 times more than the macaque [1]. But brain size and neuron number aren't everything; some animals whose brains look and develop differently to mammals' — such as ravens and other members of the crow family — can learn or remember impressively. "Brain size alone can't explain human cognition," says Chet Sherwood, an anthropologist and neuroscientist at The George Washington University in Washington DC.

Special recipe

Looking at brain cells closely has shown some interesting patterns.
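The EQ arithmetic described above can be made concrete with a short sketch. It assumes Jerison's classic allometric baseline, in which the expected brain mass (in grams) is roughly 0.12 × (body mass in grams)^(2/3); the constants and species masses below are approximate textbook figures, not values taken from this article:

```python
def encephalization_quotient(brain_g, body_g, k=0.12, exponent=2 / 3):
    """EQ = actual brain mass / expected brain mass for a given body mass.

    Uses Jerison's baseline: expected mass ~= k * body_mass**exponent,
    with masses in grams. An EQ of 1.0 means the brain is exactly as
    large as expected for the body; values above 1.0 mean larger.
    """
    expected_g = k * body_g ** exponent
    return brain_g / expected_g

# Approximate masses: human brain ~1,350 g, human body ~65 kg.
human_eq = encephalization_quotient(1350, 65_000)
print(f"human EQ ~ {human_eq:.1f}")  # roughly 7, matching the text
```

With these inputs the human EQ comes out near 7, consistent with the article's statement that the human brain is more than seven times the expected size; swapping in other species' masses gives their EQs under the same baseline.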
Over the past five years, techniques that enable scientists to catalogue the genes expressed in a single cell have been revealing the many different types of cell that make up a brain, at a level of detail far beyond anything achieved before. Last year, a team based at the Allen Institute for Brain Science in Seattle, Washington, reported the most comprehensive atlases yet of cell types in both the mouse and human brain. As part of an international effort called the BRAIN Initiative Cell Census Network (BICCN), researchers catalogued the whole mouse brain, finding 5,300 cell types [2]; the human atlas is unfinished but so far includes more than 3,300 types from 100 locations [3], and researchers expect to find many more.

Some regions do have distinct cell types — for instance, the human visual cortex contained several types of neuron that were exclusive to that area [4]. But in general, human-specific cell types are rare. The overall impression, when comparing the cell types of the human brain with those of other species, is one of similarity. "I was expecting bigger differences," says Ed Lein, a neuroscientist at the Allen Institute who is involved in efforts to catalogue cells in human, mouse and other brains. "The basic cellular architecture is remarkably conserved until you get down to the finer details," he says.

Most human brain regions differ from those of primates and mice in the relative proportions of the cell types that appear [5], and in the ways those cells express their genes: it's not the ingredients that are different, but the recipe. Take these two comparable regions of the human and mouse cortex, which both process auditory information. The mouse area contains a higher proportion of excitatory neurons, which propagate signals, relative to inhibitory neurons, which dampen activity. The human region had a much greater proportion of non-neuronal cells, such as astrocytes, oligodendrocytes and microglia.
These cells support neurons and also help to prune and refine neuronal connections during development. The ratio of these cells to neurons was five times that of mice. The upshot of these differences still isn't clear, but the atlases provide a way to study these cells and the genes they express, to better understand their function.

The same cell types can also look different in different species. This is the same type of neuron — a pyramidal cell — from the cortex of a mouse, chimp and human. The mouse brain has fewer of these cells, and they are less well connected than in the human brain [6]. Even compared with the chimp, the human neurons are longer and make more connections with each other, and the cortical layers they live in are thicker. (Source: Ref. 6)

Making connections

No neuron is an island, and the networks they form could be a huge part of what gives various brains their different functions and specialisms. One study compared 1.6 million connections between more than 2,000 brain cells in mouse, macaque and human samples taken from the cortex. The human wiring diagram, or 'connectome', had 2.5 times more interneurons — a class of cells that dampen neural activity and control excitation, shown here in two colours — than did the mouse, and those cells made ten times more connections among themselves [7]. A specialized group of interneurons with a preference for connecting to others of the same type (bipolar neurons, in green) was rare in mice but has expanded to make up more than half the interneuron population in humans. A second class of interneurons, called multipolar neurons, did not expand to the same extent. (Source: Ref. 7; M. Sievers et al., 2024)

The finding was "super surprising", says study leader Moritz Helmstaedter at the Max Planck Institute for Brain Research in Frankfurt, Germany.
He thinks that this expanded network of interneurons might help to solve one major puzzle about the human brain: neurons operate quickly, but thoughts and actions take seconds. Larger networks of interneurons could prolong neuronal activity, allowing the brain to generate more complex thoughts and keep things 'in mind' for longer. The team is now looking at larger segments of the human cortex.

The results of Helmstaedter's connectome study are supported by genetic work. When gene expression is compared across species, many differences turn out to relate to how the connections between neurons — called synapses — form and signal to each other. In a study [8] led by researchers at the Allen Institute, a few hundred genes showed expression patterns unique to humans. Often, these specializations were related to circuit function — they were involved in synapse-building or signalling. And they were often seen in non-neuronal cells, such as astrocytes and microglia.

Slow to develop

Some scientists think that there is one key pedal that has been pressed in the human brain that can explain many of the differences between us and other species: the brake. "Whatever you look at, it's happening more slowly in humans," says neuroscientist Madeline Lancaster, who studies human brain development at the MRC Laboratory of Molecular Biology in Cambridge, UK.

The pace of brain development varies a lot across species, but it is exceptionally protracted in humans. The mouse brain, for instance, is fully developed just 5% of the way into the animal's lifespan. Macaque and chimp brains are fully developed about one-third of the way into theirs. Human brains take much longer to grow, mature and refine their connections — about 30 years, or almost half our average lifespan. (Source: Ref. 6)

This sluggish pace could help humans to grow more neurons and foster more diversity and complexity. It also gives the brain more time to be shaped by its environment.
Research suggests that, in humans, neural progenitors — the cells that give rise to neurons — spend longer in a limbo state before assuming their final identities [9]. Human progenitors also have more potential: they can become more than one broad type of neuron, whereas in rodents one type of progenitor tends to develop into just one type of neuron [10].

Here is a typical timeline for chimp neurons: they develop from progenitors; they grow axons and dendrites to reach out to other cells; those outgrowths develop synapses to connect to each other and send signals; and finally they develop a layer of myelin, which insulates neurons and helps signals to travel [6]. The same process in humans takes longer and results in neurons that grow more dendrites, each with more connections. Axons can be longer than those of chimps because they have further to travel, and the resulting neurons are more complex.

Several gene variants have been linked to this slowdown and elaboration. One is a gene duplication seen only in humans; when mice were engineered to have the same duplication, they grew more synapses and their learning improved [11]. Another example is a change in the sequence that codes for a protein called NOTCH, which has been linked to the expansion of the cortex. This change allows human neurons to spend longer proliferating — giving rise to a larger pool of new neurons — than those of non-human primates [12,13]. (Source: Ref. 6)

Although some changes to genes and cells undoubtedly make us who we are, it's too early to leap to conclusions, says Alex Pollen, a geneticist who studies human brain evolution at the University of California, San Francisco. Some changes could be mere side effects of other adaptations — for example, an increase in certain types of neuron so that brain regions could still communicate as the brain expanded. There are downsides, too, to our special abilities.
Sherwood says that, owing to ageing, humans undergo more drastic brain changes than other primates, such as shrinkage of the cortex — in part because we live so much longer. Even the oldest great-ape brains don't seem to change as much with age as human brains do, he says. And some conditions that seem specific to humans could be the price we pay for complexity, says Lancaster. "Even a small defect could have more dramatic consequences," she says.

There's plenty more to discover about how our brains make us so talkative, sociable and intelligent. Scientists are interested in how gene variants act on neurons and the brain; how neural activity during development influences growth; and how parts of the brain other than the cortex might have changed to endow humans with our unique skills. The confluence of technologies has energized researchers to look afresh at a classic question, says Lancaster. "I feel lucky to be doing science at this moment."

References
1. Herculano-Houzel, S. Front. Hum. Neurosci. 3, 31 (2009).
2. Yao, Z. et al. Nature 624, 317–332 (2023).
3. Siletti, K. et al. Science 382, eadd7046 (2023).
4. Jorstad, N. L. et al. Science 382, eadf6812 (2023).
5. Fang, R. et al. Science 377, 56–62 (2022).
6. Lindhout, F. W. et al. Nature 630, 596–608 (2024).
7. Loomba, S. et al. Science 377, eabo0924 (2022).
8. Jorstad, N. L. et al. Science 382, eade9516 (2023).
9. Otani, T. et al. Cell Stem Cell 18, 467–480 (2016).
10. Delgado, R. N. et al. Nature 601, 397–403 (2022).
11. Schmidt, E. R. E. et al. Nature 599, 640–644 (2021).
12. Fiddes, I. T. et al. Cell 173, 1356–1369 (2018).
13. Suzuki, I. K. et al. Cell 173, 1370–1384 (2018).

Author: Kerri Smith. Illustration: Phil Wheeler. Infographics: Nik Spencer. Design: Wes Fernandes. Subeditor: Joanna Beckett. Editor: Richard Monastersky.
© 2024 Springer Nature Limited. All rights reserved.
Studying Memory and Thought with the Methods of Physics - The Physics arXiv Blog
When I was a graduate student in the physics department at Temple University, the statistical physics course was taught by Professor Green. I have forgotten his full name; I only remember that his doctoral adviser was the Dr. Gibbs mentioned in the article below.
The Hunt for the Laws of Physics Behind Memory and Thought

The massive networks of neurons in our brains produce complex behaviors, such as actions and thought. Now physicists want to understand the laws that govern these emergent phenomena.

The Physics arXiv Blog, 10/01/24
One of the curious features of the laws of physics is that many of them seem to be the result of the bulk behavior of many much smaller components. The atoms and molecules in a gas, for example, move at a huge range of velocities. When constrained in a container, these particles continually strike its surface, creating a force. But it is not necessary to know the velocities of all the particles to determine this force. Instead, their influence averages out into a predictable and measurable bulk property called pressure. This and other bulk properties, such as temperature, density and elasticity, are hugely useful because of the laws of physics that govern them.

Over one hundred years ago, physicists such as Willard Gibbs determined the mathematical character of these laws and the statistical shorthand that physicists and engineers now use routinely in everything from laboratory experiments to large-scale industrial processes. The success of so-called statistical physics raises the possibility that other systems consisting of enormous numbers of similar entities might also have their own "laws of physics". In particular, physicists have long hoped that the bulk properties of neurons might be amenable to this kind of approach.

Neural Physics

The behavior of single neurons is well understood. But put them together into networks and much more significant behaviors emerge, such as sensory perception, memories and thought. The hope is that a statistical or mathematical approach to these systems could reveal the laws of neural physics that describe the bulk behavior of nervous systems and brains. "It is an old dream of the physics community to provide a statistical mechanics description for these and other emergent phenomena of life," say Leenoy Meshulam at the University of Washington and William Bialek at Princeton University, who have reviewed progress in this area.
"These aspirations appear in a new light because of developments in our ability to measure the electrical activity of the brain, sampling thousands of individual neurons simultaneously over hours or days."

The nature of these laws is, of course, fundamentally different from that of conventional statistical physics. At the heart of the difference is that neurons link together into complex networks in which the behavior of one neuron can be closely correlated with the behavior of its neighbors. It is relatively straightforward to formulate a set of equations that capture this behavior, but it quickly becomes apparent that these equations cannot be solved in anything other than trivial circumstances. Instead, physicists must consider the correlations between all possible pairs of neurons and then use experimental evidence to constrain which correlations are possible. The problem, of course, is that the number of pairs grows quadratically with the number of neurons, while the number of possible activity patterns grows exponentially. That raises the question of how much more data must be gathered to constrain the model as the number of neurons increases.

One standard system in which this has been well measured is the retina. This consists of a network of light-sensitive neurons in which activity between neighbors is known to be correlated: if one neuron is activated, there is a strong possibility that its neighbor will be too. (This is the reason for the gently evolving, coral-like patterns in vision that people sometimes notice when they first wake up.) Experiments in this area began by monitoring the behavior of a handful of neurons, then a few dozen, a few hundred, and now approach thousands (but not millions). It turns out that the data constrain the model to the point where it gives remarkably accurate predictions of neural behavior when asked, for example, to predict how many neurons are active out of any given set of them.
That suggests the system of equations accurately captures the behavior of retinal networks. In other words, "the models really are the solutions to the mathematical problem that we set out to solve," say Meshulam and Bialek. Of course, the retina is a highly specialized part of the nervous system, so an important question is whether similar techniques can generalize to the higher cognitive tasks that take place in other parts of the brain.

Emergent Behavior

One challenge here is that networks can demonstrate emergent behavior. This is not the result of random or even weak correlations. Instead, the correlations can be remarkably strong and can spread through a network like an avalanche. Networks that demonstrate this property are said to be in a state of criticality and are connected in a special way that allows this behavior. Criticality turns out to be common in nature, which suggests that networks can tune themselves in a special way to achieve it. "Self-organized criticality" has been widely studied over the past two decades, and there has been some success in describing it mathematically. But exactly how this self-tuning works is the focus of much ongoing research.

Just how powerful these approaches will become is not yet clear. Meshulam and Bialek take heart from the observation that some natural behaviors are amenable to the kind of analysis that physicists are good at. "All the birds in a flock agreeing to fly in the same direction is like the alignment of spins in a magnet," they say. The fact that this is merely a metaphor concerns them: metaphors can help understanding, but the real behavior of these systems is often much more complex and subtle. Still, there are reasons to think that mathematical models can go further. "The explosion of data on networks of real neurons offers the opportunity to move beyond metaphor," they say, adding that data from millions of neurons should soon help to inform this debate.
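The avalanche picture above can be caricatured as a branching process: each active neuron triggers, on average, sigma others. Below sigma = 1 activity dies out quickly, with an expected avalanche size of 1/(1 - sigma); at sigma = 1, the critical point, avalanche sizes become heavy-tailed. The sketch below is a standard textbook stand-in for criticality, not the self-tuning mechanism the review discusses, and all parameters are illustrative.

```python
import math
import random

# Branching-process cartoon of a neural avalanche: one seed neuron,
# and each active neuron independently activates Poisson(sigma) others.
# Subcritical (sigma < 1): avalanches die out, mean total size 1/(1 - sigma).
# Critical (sigma = 1): sizes become heavy-tailed (capped here for safety).
random.seed(2)

def poisson(lam: float) -> int:
    """Poisson sample via Knuth's algorithm; fine for small lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def avalanche_size(sigma: float, cap: int = 10_000) -> int:
    """Total number of activations, starting from a single seed neuron."""
    active, size = 1, 1
    while active and size < cap:
        active = sum(poisson(sigma) for _ in range(active))
        size += active
    return size

def mean_size(sigma: float, trials: int = 20_000) -> float:
    return sum(avalanche_size(sigma) for _ in range(trials)) / trials

print(f"sigma=0.5: mean avalanche size {mean_size(0.5):.2f} (theory 2.0)")
print(f"sigma=0.8: mean avalanche size {mean_size(0.8):.2f} (theory 5.0)")
```

As sigma approaches 1 the mean size diverges, which is the signature of the critical regime described above.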
"Our experimentalist friends will continue to move the frontier, combining tools from physics and biology to make more and more of the brain accessible in this way," conclude Meshulam and Bialek. "The outlook for theory is bright."

Ref: Statistical mechanics for networks of real neurons: arxiv.org/abs/2409.00412
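As a closing illustration of the population-activity predictions discussed above, here is a toy model in which binary neurons are driven by a shared input plus private noise. The shared input induces pairwise correlations, which visibly widen the distribution of "how many neurons are active at once" relative to independent neurons. All parameters are invented; real analyses fit pairwise maximum-entropy models to recorded spike trains.

```python
import random

# Toy correlated binary neurons: neuron i is active when a shared
# signal plus its private noise exceeds a threshold. The shared term
# induces pairwise correlations, so the count of active neurons has a
# much wider spread than for independent neurons.
random.seed(1)
N_NEURONS, TRIALS, THRESH = 50, 5000, 1.0

def population_counts(shared_weight):
    """Number of active neurons per trial, for a given shared-input strength."""
    counts = []
    for _ in range(TRIALS):
        shared = random.gauss(0.0, 1.0)
        k = sum(
            1
            for _ in range(N_NEURONS)
            if shared_weight * shared + random.gauss(0.0, 1.0) > THRESH
        )
        counts.append(k)
    return counts

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

m_ind, v_ind = mean_var(population_counts(0.0))  # independent neurons
m_cor, v_cor = mean_var(population_counts(1.0))  # shared-input correlations
print(f"independent: mean {m_ind:.1f}, variance {v_ind:.1f}")
print(f"correlated:  mean {m_cor:.1f}, variance {v_cor:.1f}")
```

The correlated population's variance is far larger than the independent (binomial) case, which is why pairwise correlations must be measured before the population-count distribution can be predicted.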
The Brain in Love -- Jess Thomson
Question: Is the "brain activity" observed in these experiments driven by "emotion", or by "language"?

Scientists Reveal Where the Brain Feels Love—and Which Type Is Strongest

Jess Thomson, 08/26/24
Love might feel like it comes from the heart, but scientists have figured out where love lives inside the brain. Researchers used functional magnetic resonance imaging (fMRI) to measure brain activity while people thought about various types of love, finding that the brain lit up in different areas, according to a new paper in the journal Cerebral Cortex. They discovered that love in different types of relationships produces brain activity of different strengths, but all types activated more or less the same brain areas.

(Stock image of a couple, with a brain inset; see the original page for the photo.)

"We now provide a more comprehensive picture of the brain activity associated with different types of love than previous research," study co-author Pärttyli Rinne, a philosopher and researcher at Aalto University in Finland, said in a statement. "The activation pattern of love is generated in social situations in the basal ganglia, the midline of the forehead, the precuneus and the temporoparietal junction at the sides of the back of the head."

Love comes in many forms, from parental love for children to romantic love, friendship, and even love of animals or nature. The researchers measured brain activity in people who had just heard a description of a type of love, such as: "You see your newborn child for the first time. The baby is soft, healthy and hearty — your life's greatest wonder. You feel love for the little one."

They found that parental love generated the most powerful brain activity, followed by romantic love. While the intensity of the activity varied between types, all types mostly lit up the same regions of the brain, with some exceptions. "In parental love, there was activation deep in the brain's reward system in the striatum area while imagining love, and this was not seen for any other kind of love," said Rinne.
The researchers also tested the brain activity associated with friendships, pets, nature, and strangers. They found that love of nature lit up the brain's reward system but not the areas associated with social cognition, whereas love of people lit up the social areas instead. Interestingly, the brain activity evoked by descriptions involving animals revealed whether or not the person had a pet. "When looking at love for pets and the brain activity associated with it, brain areas associated with sociality statistically reveal whether or not the person is a pet owner. When it comes to pet owners, these areas are more activated than with non-pet owners," said Rinne.

Understanding the physiology of love may seem cold; however, the scientists hope that their research could be used to better treat attachment disorders, depression or relationship issues.

Reference
Rinne, P., Lahnakoski, J., Saarimäki, H., Tavast, M., Sams, M., & Henriksson, L. (2024). Six types of loves differentially recruit reward and social cognition brain areas. Cerebral Cortex, 34(8). https://doi.org/10.1093/cercor/bhae331

Jess Thomson is a Newsweek science reporter based in London, UK. Her focus is reporting on science, technology and healthcare. She has covered weird animal behavior, space news and the impacts of climate change extensively. Jess joined Newsweek in May 2022 and previously worked at Springer Nature. She is a graduate of the University of Oxford.
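The study's core comparison — whether one kind of love evokes stronger activity than another in a given brain region — boils down to comparing two samples of activation values. A toy sketch of that comparison follows; the activation numbers and condition names are invented, and real fMRI analyses fit statistical models per voxel and correct for multiple comparisons, so this only illustrates the shape of the test.

```python
import math
import random

# Toy activation samples (arbitrary units) for one brain region under
# two conditions, and a Welch t-statistic asking whether one condition
# evokes stronger activity than the other. All numbers are invented.
random.seed(3)
parental = [random.gauss(1.0, 0.5) for _ in range(30)]  # assumed stronger response
romantic = [random.gauss(0.6, 0.5) for _ in range(30)]

def welch_t(a, b):
    """Welch's t-statistic for two samples with unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(parental, romantic)
print(f"Welch t = {t:.2f} (positive means 'parental' > 'romantic' in this toy data)")
```

A large positive t here would correspond to the article's claim that parental love produced the strongest activation.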