This forum community has a column themed on the "scientific method" (【Discussions on the Scientific Method】). In this column I repost "academic" papers and research reports that strike me as close to nonsense: examples of how not to do research. To start, here are two commentaries I recently came across, plus one paper.
Why brain scans can lead to misinterpretation - M. Costandi
Bold Assumptions: Why Brain Scans Are Not Always What They Seem
In 2009, researchers at the University of California, Santa Barbara performed a curious experiment. In many ways, it was routine — they placed a subject in the brain scanner, displayed some images, and monitored how the subject's brain responded. The measured brain activity showed up on the scans as red hot spots, like many other neuroimaging studies.
Except that this time, the subject was an Atlantic salmon, and it was dead.
Dead fish do not normally exhibit any kind of brain activity, of course. The study was a tongue-in-cheek reminder of the problems with brain scanning studies. Those colorful images of the human brain found in virtually all news media may have captivated the imagination of the public, but they have also been the subject of controversy among scientists over the past decade or so. In fact, neuro-imagers are now debating how reliable brain scanning studies actually are, and are still mostly in the dark about exactly what it means when they see some part of the brain "light up."
Glitches in reasoning
Functional magnetic resonance imaging (fMRI) measures brain activity indirectly by detecting changes in the flow of oxygen-rich blood, or the blood oxygen-level dependent (BOLD) signal, with its powerful magnets. The assumption is that areas receiving an extra supply of blood during a task have become more active. Typically, researchers would home in on one or a few "regions of interest," using 'voxels,' tiny cube-shaped chunks of brain tissue containing several million neurons, as their units of measurement.
Early fMRI studies involved scanning participants' brains while they performed some mental task, in order to identify the brain regions activated during the task. Hundreds of such studies were published in the first half of the last decade, many of them garnering attention from the mass media.
Eventually, critics pointed out a logical fallacy in how some of these studies were interpreted. For example, researchers may find that an area of the brain is activated when people perform a certain task. To explain this, they may look up previous studies on that brain area, and conclude that whatever function it is reported to have also underlies the current task.
Among many examples of such studies were those that concluded people get satisfaction from punishing rule-breaking individuals, and that for mice, pup suckling is more rewarding than cocaine. In perhaps one of the most famous examples, a researcher diagnosed himself as a psychopath by looking at his own brain scan.
These conclusions could well be true, but they could also be completely wrong, because the area observed to be active most likely has other functions, and could serve a different role than that observed in previous studies.
The brain is not composed of discrete specialized regions. Rather, it's a complex network of interconnected nodes, which cooperate to generate behavior. Thus, critics dismissed fMRI as "neo-phrenology" – after the discredited nineteenth century pseudoscience that purported to determine a person's character and mental abilities from the shape of their skull – and disparagingly referred to it as 'blobology.'
When results magically appear out of thin air
In 2009, a damning critique of fMRI appeared in the journal Perspectives on Psychological Science. Initially titled "Voodoo Correlations in Social Neuroscience" and later retitled "Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition," the article questioned the statistical methods used by neuro-imagers. The authors, Ed Vul of the University of California, San Diego, and his colleagues, examined a handful of social cognitive neuroscience studies, and pointed out that their statistical analyses gave impossibly high correlations between brain activity and behavior.
"It certainly created controversy," says Tal Yarkoni, an assistant professor in the Department of Psychology at the University of Texas, Austin. "The people who felt themselves to be the target ignored the criticism and focused on the tone, but I think a large subset of the neuroimaging community paid it some lip service."
Russ Poldrack of the Department of Psychology at Stanford University says that although the problem was more widespread than the paper suggested, many neuro-imagers were already aware of it. "They happened to pick on one part of the literature, but almost everybody was doing it," he says.
The problem arises from the "circular" nature of the data analysis, Poldrack says. "We usually analyze a couple of hundred thousand voxels in a study," he says. "When you do that many statistical tests, you look for the ones that are significant, and then choose those to analyze further, but they'll have high correlations by virtue of the fact that you selected them in the first place."
Not long after Vul's paper was published, Craig Bennett and his colleagues published their dead salmon study to demonstrate how robust statistical analyses are key to interpreting fMRI data. When stats are not done well enough, researchers can easily get false positive results – or see an effect that isn't actually there, such as activity in the brain of a dead fish.
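A minimal sketch of the dead-salmon problem: run many uncorrected statistical tests on pure noise and count how many come out "significant." The voxel count and threshold below are illustrative assumptions, chosen to mimic an uncorrected whole-brain analysis:

```python
import random

random.seed(1)

n_voxels = 100_000    # rough scale of a whole-brain analysis
z_threshold = 3.09    # one-sided p ≈ 0.001, a common uncorrected cutoff

# A "brain" with no real activity anywhere (the dead salmon):
# every voxel's test statistic is just standard normal noise.
noise = [random.gauss(0, 1) for _ in range(n_voxels)]
false_positives = sum(1 for z in noise if z > z_threshold)

print(f"'active' voxels found in pure noise: {false_positives}")
print(f"expected by chance alone: ~{n_voxels * 0.001:.0f}")
```

Around a hundred spurious "activations" appear by chance, which is why multiple-comparison corrections (e.g. Bonferroni or false-discovery-rate control) are essential in whole-brain analyses.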
The rise of virtual superlabs
The criticisms drove researchers to do better work: to think more deeply about their data, avoid logical fallacies in interpreting their results, and develop new analytical methods.
At the heart of the matter is the concept of statistical power: the probability that a study will detect a genuine effect if one exists. Smaller studies typically have lower power. An analysis published in 2013 showed that underpowered studies are common in almost every area of brain research. This is especially the case in neuroimaging studies, because most of them involve small numbers of participants.
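The power problem can be made concrete with a small simulation. This sketch assumes, for illustration only, a one-sample z-test (known variance) and a true effect of half a standard deviation; it estimates how often studies of different sizes reach significance:

```python
import random
import statistics

random.seed(2)

def detects_effect(n, effect=0.5, alpha_z=1.96, trials=2000):
    """Fraction of simulated studies (one-sample z-test, known sd = 1)
    that reach significance when the true effect is `effect` sd units."""
    hits = 0
    for _ in range(trials):
        sample = [random.gauss(effect, 1) for _ in range(n)]
        z = statistics.mean(sample) * n ** 0.5   # z = mean / (sd / sqrt(n))
        if z > alpha_z:
            hits += 1
    return hits / trials

print(f"power with n = 20:  {detects_effect(20):.2f}")
print(f"power with n = 100: {detects_effect(100):.2f}")
```

With 20 subjects, a perfectly real moderate effect is missed a large fraction of the time; with 100 subjects it is detected almost always, which matches Poldrack's shift toward larger samples.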
"Ten years ago I was willing to publish papers showing correlations between brain activity and behavior in just 20 people," says Poldrack. "Now I wouldn't publish a study that doesn't involve at least 50 subjects, or maybe 100, depending on the effect. A lot of other labs have come around to this idea."
Cost is one of the big barriers preventing researchers from increasing the size of their studies. "Neuroimaging is very expensive. Every lab has a budget and a researcher isn't going to throw away his entire year's budget on a single study. Most of the time, there's no real incentive to do the right thing," Yarkoni says.
Replication – or repeating experiments to see if the same results are obtained – also gives researchers more confidence in their results. But most journals are unwilling to publish replication experiments, preferring novel findings instead, and the act of repeating someone else's experiments is seen as aggressive, as if implying they were not done properly in the first place.
One way around these problems is for research teams to collaborate with each other and pool their results to create larger data sets. One such initiative is the IMAGEN Consortium, which brings together neuro-imaging experts from 18 European research centers, to share their results, integrate them with genetic and behavioral data, and create a publicly available database.
Five years ago, Poldrack started the OpenfMRI project, which has similar aims. "The goal was to bring together data to answer questions that couldn't be answered with individual data sets," he says. "We're interested in studying the psychological functions underlying multiple cognitive tasks, and the only way of doing that is to amass lots of data from lots of different tasks. It's way too much for just one lab."
An innovative way of publishing scientific studies, called pre-registration, could also increase the statistical power of fMRI studies. Traditionally, studies are published in scientific journals after they have been completed and peer-reviewed. Pre-registration requires that researchers submit their proposed experimental methods and analyses early on. If these meet the reviewers' satisfaction, they are published; the researchers can then conduct the experiment and submit the results, which are eventually published alongside the methods.
"The low statistical power and the imperative to publish incentivizes researchers to mine their data to try to find something meaningful," says Chris Chambers, a professor of cognitive neuroscience at Cardiff University. "That's a huge problem for the credibility and integrity of the field."
Chambers is an associate editor at Cortex, one of the first scientific journals to offer pre-registration. As well as demanding larger sample sizes, the format also encourages researchers to be more transparent about their methods.
Many fMRI studies would, however, not be accepted for pre-registration – their design would not stand up to the scrutiny of the first-stage reviewers. "Neuro-imagers say pre-registration consigns their field to a ghetto," says Chambers. "I tell them they can collaborate with others to share data and get bigger samples."
Pushing the field forward
Even robust and apparently straightforward fMRI findings can still be difficult to interpret, because there are still unanswered questions about the nature of the BOLD signal. How exactly does the blood rush to a brain region? What factors affect it? What if greater activation in a brain area actually means the region is working less efficiently?
"What does it mean to say neurons are firing more in one condition than in another? We don't really have a good handle on what to make of that," says Yarkoni. "You end up in this uncomfortable situation where you can tell a plausible story no matter what you see."
To some extent, the problems neuro-imagers face are part of the scientific process, which involves continuously improving one's methods and refining ideas in light of new evidence. When done properly, the method can be extremely powerful, as the ever-growing number of so-called "mind-reading" and "decoding" studies clearly show.
It's likely that with incremental improvements in the technology, fMRI results will become more accurate and reliable. In addition, there are a number of newer projects that aim to find other ways to capture brain activity. For example, one group at Massachusetts General Hospital is working on using paramagnetic nanoparticles to detect changes in blood volume in the brain's capillaries. Such a method would radically enhance the quality of signals and make it possible to detect brain activity in one individual, as opposed to fMRI that requires pooling data from a number of people, according to the researchers. Other scientists are diving even deeper, using paramagnetic chemicals to reveal brain activity at the cell level. If such methods come to fruition, we could find the subtlest activities in the brain, maybe just not in a dead fish.
Scholars are not know-it-alls and should not overreach beyond their fields - M. R. Francis
Quantum and Consciousness Often Mean Nonsense
Lots of things are mysterious. That doesn’t mean they’re connected.
Matthew R. Francis, 05/29/14
Possibly no subject in science has inspired more nonsense than quantum mechanics. Sure, it’s a complicated field of study, with a few truly mysterious facets that are not settled to everyone’s satisfaction after nearly a century of work. At the same time, though, using quantum to mean “we just don’t know” is ridiculous -- and simply wrong. Quantum mechanics is the basis for pretty much all our modern technology, from smartphones to fluorescent lights, digital cameras to fiber-optic communications.
If I had to pick a runner-up in the nonsense sweepstakes, it would be human consciousness, another subject with a lot of mysterious aspects. We are made of ordinary matter yet are self-aware, capable of abstractly thinking about ourselves and of recognizing others (including nonhumans) as separate entities with their own needs. As a physicist, I’m fascinated by the notion that our consciousness can imagine realities other than our own: The universe is one way, but we are perfectly happy to think of how it might be otherwise.
I hold degrees in physics and have spent a lot of time learning and teaching quantum mechanics. Nonphysicists seem to have the impression that quantum physics is really esoteric, with those who study it spending their time debating the nature of reality. In truth, most of a quantum mechanics class is lots and lots of math, in the service of using a particle’s quantum state -- the bundle of physical properties such as position, energy, spin, and the like -- to describe the outcomes of experiments. Sure, there’s some weird stuff and it’s fun to talk about, but quantum mechanics is aimed at being practical (ideally, at least).
Yet the mysterious aspects of quantum physics and consciousness have inspired many people to speculate freely. The worst offenders will even say that because we don’t fully understand either field, they must be related problems. It sounds good at first: We don’t know exactly how some things in quantum physics work, we don’t know exactly how to go from the brain to consciousness, so maybe consciousness is quantum.
The problem with this idea? It’s almost certainly wrong.
Oh, sure: In a sense the brain is quantum, simply because all matter is described by quantum mechanics. However, what people usually mean by quantum isn’t ordinary stuff such as molecules that let brain cells communicate. Instead, the term is usually reserved for the deeper processes that rely on the quantum state. The quantum state is where fun stuff like entanglement lives: the coupling of two widely separated particles that act like parts of a single system. But that level of analysis is not generally helpful for describing the motion of molecules across the gap between cells in the brain.
That’s not to say that quantum effects are entirely ruled out in biology. Some researchers are investigating how photosynthesis or even the human senses of sight and smell might work in part by manipulating quantum states. The retina in the eye is sensitive to small numbers of photons -- particles of light -- and the quantum state of the photon interacts with the quantum state of the retinal cell. But once those signals are translated into something the brain can process, the original quantum state seems to be irrelevant.
The overwhelming success of modern physics does not give physicists the ability to pronounce judgment on other sciences.
I’ll hedge my bets: Maybe there’s room for some small quantum effects in the brain, but I sincerely doubt those will be directly relevant for consciousness. That’s because almost anything involving individual quantum states requires isolation from environmental interference for the weirdness to show up. For example, most particles aren’t entangled in any meaningful way, because interactions with other particles change their quantum state. That process is known as decoherence. (If someone wants to propose a theory of the mind based on decoherence, I might listen, especially on days when I’m distracted.)
However, other people go much further. In his bestselling 1989 book The Emperor's New Mind, mathematical physicist Roger Penrose proposed that the problems of interpreting quantum states imply that the conscious mind will need a new kind of physics to describe it. Penrose is no crackpot in his area of expertise (the mathematics of general relativity, which also happens to be my area), but his foray into the mind and consciousness is a cautionary tale.
Just because you’re a world expert in one branch of science doesn’t qualify you in any other discipline. As Zach Weinersmith’s painfully funny comic points out, this is a particularly bad habit among physicists.
Some of them think that the overwhelming success of modern physics gives them the ability to pronounce judgment on other sciences, from linguistics to paleontology. Celebrity physicist Michio Kaku is a particularly egregious example, getting evolution completely wrong (see this critique) and telling infamous crackpot Deepak Chopra that our actions can have effects in distant galaxies. Then there are the physicists -- including Freeman Dyson, one of the architects of the quantum theory describing interactions between light and matter -- who contradict climate scientists in their own area of expertise.
Physicists aren’t the only culprits, though. A new book by neuroscientist W. R. Klemm implies that the edges of physics could provide answers about human consciousness. Ironically, he writes, “I just hate it when physicists write about biology. They sometimes say uninformed and silly things. But I hate it just as much when I write about physics, for I too am liable to say uninformed and silly things -- as I may well do here.” Nearly everything that follows in the book excerpt is either wrong or misleading. I could write a point-by-point response, but suffice to say: The problems and incompleteness he cites about quantum physics are overblown and frankly incorrect.
I take it back: I will rant briefly about two of his points. First, Klemm writes, "But is mass really identical to energy? True, mass can be converted to energy, as atom bombs prove, and energy can even be turned into mass. Still, they are not the same things." That's an unnecessary obfuscation: Einstein's equation E = mc² does connect mass and energy in a fundamental and entirely unmysterious way. Probably no other single equation has inspired as many popular explanations, so it's safe to say we get it: Mass is a form of energy. To be precise, it's the energy a particle has when it's at rest. Sure, there are complications in particle physics collisions at high speeds, but the basic concept is really simple.
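The "mass is a form of energy" point is easy to make quantitative. Here is a quick sanity check of E = mc² for one gram of matter, using the standard conversion of 4.184e12 joules per kiloton of TNT:

```python
c = 299_792_458.0   # speed of light in vacuum, m/s (exact by definition)

# Rest energy of one gram of matter: E = m * c^2
m = 1e-3            # kg
E = m * c ** 2      # joules

print(f"E = {E:.3e} J")
print(f"≈ {E / 4.184e12:.1f} kt of TNT")  # one kiloton of TNT = 4.184e12 J
```

A single gram of mass corresponds to roughly 9e13 joules, on the order of twenty kilotons of TNT, which is why "mass can be converted to energy, as atom bombs prove" is not some deep mystery but a direct reading of the equation.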
Second, dark energy -- which I have written about for Slate -- does not impart energy to galaxies or anything smaller. If it turns out to be “vacuum energy,” which looks probable, then the only way dark energy could have anything to do with human consciousness would be if our heads were empty.
The problem with Klemm’s assertions, as well as those of many others who misuse the word quantum, is that their speculation is based on a superficial understanding of one or both fields. Physics may or may not have anything informative to say about consciousness, but you won’t make any progress in that direction without knowing a lot about both quantum physics and how brains work. Skimping on either of those will lead to nonsense.
Matthew R. Francis is a physicist, science writer, public speaker, educator, and frequent wearer of jaunty hats. He blogs at Galileo’s Pendulum.
10 incorrect beliefs about psychology - R. Pomeroy
10 of the Greatest Myths in Psychology
Ross Pomeroy, 04/06/14
Myth over Mind
Psychology is rife with misinformation and falsehoods. And sadly, the vast majority of them show no signs of vacating popular culture.
In 2009, Scott Lilienfeld, Steven Jay Lynn, John Ruscio, and Barry Beyerstein assembled a compendium of 50 Great Myths of Popular Psychology, then proceeded to dispel each and every one of them. Their book was a triumph of evidence and reason.
Using 50 Great Myths of Popular Psychology as a guide, we've created a list of 10 of the biggest psychological myths. Don't be ashamed if you believe one, or all, of these.
Subliminal Advertising Works
It's one of the great conspiracies of the television era: that advertisers and influencers are flashing subtle messages across our screens -- sometimes lasting as little as 1/3000th of a second -- and altering how we think and act, as well as what we buy.
Rest assured, however, these advertisements don't work. Your unconscious mind is safe. In a great many carefully controlled laboratory trials, subliminal messages did not affect subjects' consumer choices or voting preferences. When tested in the real world, subliminal messaging failed just as spectacularly. In 1958, the Canadian Broadcasting Corporation informed its viewers that they were going to test a subliminal advertisement during a Sunday night show. They then flashed the words "phone now" 352 times throughout the program. Telephone company records were then examined: there was no upsurge in phone calls whatsoever.
The dearth of evidence for subliminal advertising hasn't stopped influencers from trying it. In 2000, a Republican ad aimed at Vice President Al Gore briefly flashed the word "RATS."
There's an Autism Epidemic
Autism is a "disorder of neural development characterized by impaired social interaction and verbal and non-verbal communication, and by restricted, repetitive or stereotyped behavior."
Prior to the 1990s, the prevalence of autism in the United States was estimated at 1 in 2,500. In 2007, that rate was 1 in 150. In March, the CDC announced new, startling numbers: 1 in 68. What's going on?
The meteoric rise in diagnoses has prompted many to cry "epidemic!" Fearful, they look for a reason, and often latch onto vaccines.
But vaccines are not the cause. The most likely explanation is far less frightening.
Over the past decades, the diagnostic criteria for autism have been significantly loosened. Each of the last three major revisions to the Diagnostic and Statistical Manual of Mental Disorders (DSM) has made it much easier for psychiatrists to diagnose the disorder. When a 2005 study conducted in England tracked autism cases between 1992 and 1998 using identical diagnostic criteria, the rates didn't budge.
We Only Use 10% of Our Brain Power
Oh if only it were true... If we found a way to unlock and unleash the remaining 90%, we could figure out the solution to that pesky problem at work, or become a math genius, or develop telekinetic powers!
But it's not true. Metabolically speaking, the brain is an expensive tissue to maintain, hogging as much as 20% of our resting caloric expenditure, despite constituting a mere 2% of the average human's body weight.
"It’s implausible that evolution would have permitted the squandering of resources on a scale necessary to build and maintain such a massively underutilized organ," Emory University psychologist Scott Lilienfeld wrote.
The myth likely stems back to American psychologist William James, who once espoused the idea that the average person rarely achieves more than 10% of their intellectual potential. Over the years, self-help gurus and hucksters looking to make a buck morphed that notion into the idea that 90% of our brain is dormant and locked away. They have the key, of course, and you can buy it for a pittance!
"Shock" Therapy Is a Brutal Treatment
When you think of electroconvulsive therapy (ECT), what comes to mind? Do you picture a straightjacketed individual being bound to a table against his will, electrodes attached to his skull, and then convulsing brutally on a table as electricity courses through his body?
According to surveys, most people view ECT as a barbaric relic of psychiatry's medieval past. And while ECT may once have been a violent process, it hasn't been like that for over five decades. Yes, it is still in use today.
"Nowadays, patients who receive ECT... first receive a general anesthetic, a muscle relaxant, and occasionally a substance to prevent salivation," Lilienfeld described. "Then, a physician places electrodes on the patient's head... and delivers an electric shock. This shock induces a seizure lasting 45 to 60 seconds, although the anesthetic... and muscle relaxant inhibit the patient's movements..."
There's no scientific consensus on why ECT works, but the majority of controlled studies show that -- for severe depression -- it does. Indeed, a 1999 study found that an overwhelming 91% of people who'd received ECT viewed it positively.
Opposites Attract

The union between two electrical charges, one positive and one negative, is the quintessential love story in physics. Opposites attract!
But the same cannot be said for a flaming liberal and a rabid conservative. Or an exercise aficionado and a professional sloth. People are not electrical charges.
Though Hollywood loves to perpetuate the idea that we are romantically attracted to people who differ from us, in practice, this is not the case.
"Indeed, dozens of studies demonstrate that people with similar personality traits are more likely to be attracted to each other than people with dissimilar personality traits," Lilienfeld wrote. "The same rule applies to friendships."
Lie Detector Tests Are Accurate
Those who operate polygraph -- "Lie Detector" -- tests often boast that they are 99% accurate. The reality is that nobody, not even a machine, can accurately tell when somebody is lying.
Lie detector tests operate under the assumption that telltale physiological signs reveal when people aren't telling the truth. Thus, polygraphs measure indicators like skin conductance, blood pressure, and respiration. When these signs spike out of the test-taker's normal range in response to a question, the operator concludes that a lie has been told.
But such physiological reactions are not universal. Moreover, when one learns to control factors like perspiration and heart rate, one can easily pass a lie detector test.
Dreams Possess Symbolic Meaning
Do you ever dream about hair-cutting, tooth loss, or beheading? You're probably worried about castration, at least according to Sigmund Freud.
About 43% of Americans believe that dreams reflect unconscious desires. Over half agree that dreams can unveil hidden truths. Admittedly, dreaming mostly remains an enigma to science, but the act is almost certainly not a crystal ball of the unconscious mind.
Instead, the theory that has garnered the most scientific support goes a little something like this: dreaming is the jumbled byproduct of the brain's efforts to sort and stitch together information and experience, like a file-sorting system. Thus, as Lilienfeld says, dream interpretation would be "haphazard at best."
"Rather than relying on a dream dictionary to foretell the future or help you make life decisions, it would probably be wisest to weigh the pros and cons of differing courses of action carefully, and consult trusted friends and advisers."
Our Memory Is Like a Recorder
About 36% of Americans believe that our brains perfectly preserve past experiences in the form of memories. This is decidedly not the case.
"Today, there's broad consensus among psychologists that memory isn't reproductive -- it doesn't duplicate precisely what we've experienced -- but reconstructive. What we recall is often a blurry mixture of accurate recollections, along with what jells with our beliefs, needs, emotions, and hunches," Lilienfeld wrote.
Our memory is glaringly fallible, and this is problematic, particularly in the courtroom. Eyewitness testimony has led to the false convictions of a great many innocent people.
Mozart Will Make Your Baby a Genius
In 1993, a study published in Nature found that college students who listened to ten minutes of a Mozart sonata showed augmented spatial reasoning skills. The news media ran wild with it. Lost in translation was the fact that the effects were fleeting. But it was too late. The "Mozart Effect" was born.
Since then, millions of copies of Mozart CDs marketed to boost intelligence have been sold. The state of Georgia even passed a bill to allow every newborn to receive a free cassette or CD of Mozart's music.
More recent studies that attempted to replicate the original study have failed or found minuscule effects. They've also pointed to a much more likely explanation for the original findings: short-term arousal.
"Anything that heightens alertness is likely to increase performance on mentally demanding tasks, but it's unlikely to produce long-term effects on spatial ability or, for that matter, overall intelligence," Lilienfeld explained. "So listening to Mozart's music may not be needed to boost our performance; drinking a glass of lemonade or cup of coffee may do the trick."
Left-Brained and Right-Brained
Some people are left-brained and others are right-brained. Those who use their left hemisphere are more analytical and logical, while those who use their right hemisphere are more creative and artistic.
Except that's not how the brain works.
Yes, certain regions of the brain are specialized and tailored to fulfill certain tasks, but the brain doesn't handicap itself by predominantly using one side or the other -- both hemispheres are used just about equally.
The left-brain/right-brain myth was rampant for decades, perpetuated by New Age thinkers, but the rise of functional MRI has granted us a firsthand look at brain activity. According to Scott Lilienfeld, it's showing us just the opposite.
"The two hemispheres are much more similar than different in their functions."
People's 10 biggest thinking blind spots -- RCS
10 Problems With How We Think
By nature, human beings are illogical and irrational. For most of our existence, survival meant thinking quickly, not methodically. Making a life-saving decision was more important than making a 100% accurate one, so the human brain developed an array of mental shortcuts.
Though not as necessary as they once were, these shortcuts -- called cognitive biases or heuristics -- are numerous and innate. Pervasive, they affect almost everything we do, from the choice of what to wear, to judgments of moral character, to how we vote in presidential elections. We can never totally escape them, but we can be more aware of them, and, just maybe, take efforts to minimize their influence.
Read on to learn about ten widespread faults with human thought.
Sunk Cost Fallacy (the "can't let go" error)
Thousands of graduate students know this fallacy all too well. When we invest time, money, or effort into something, we don't like to see that investment go to waste, even if the task, object, or goal is no longer worth the cost. As Nobel Prize winning psychologist Daniel Kahneman explains, "We refuse to cut losses when doing so would admit failure, we are biased against actions that could lead to regret."
That's why people finish their overpriced restaurant meal even when they're stuffed to the brim, or continue to watch that horrible television show they don't even like anymore, or remain in a dysfunctional relationship, or soldier through grad school even when they decide that they hate their chosen major.
Conjunction Fallacy (the "association" error)
Sit back, relax, and read about Linda:
Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Now, which alternative is more probable?
1. Linda is a bank teller, or
2. Linda is a bank teller and is active in the feminist movement.
If you selected the latter, you've just blatantly defied logic. But it's okay, about 85 to 90 percent of people make the same mistake. The mental sin you've committed is known as a conjunction fallacy. Think about it: it can't possibly be more likely for Linda to be a bank teller and a feminist compared to just a bank teller. If you answered that she was a bank teller, she could still be a feminist, or a whole heap of other possibilities.
A great way to realize the error in thought is to simply look at a Venn diagram. Label one circle as "bank teller" and the other as "feminist." Notice that the area where the circles overlap is always going to be smaller! (See the Venn diagram on the original page.)
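The Venn-diagram argument can also be checked by brute force: in any population, however the traits are distributed, the conjunction can never outnumber either conjunct. The 5% and 30% base rates below are arbitrary, illustrative choices:

```python
import random

random.seed(3)

# Build a toy population with two independent traits.
population = [
    {
        "bank_teller": random.random() < 0.05,   # 5% base rate (arbitrary)
        "feminist": random.random() < 0.30,      # 30% base rate (arbitrary)
    }
    for _ in range(10_000)
]

tellers = sum(p["bank_teller"] for p in population)
teller_feminists = sum(p["bank_teller"] and p["feminist"] for p in population)

print(f"bank tellers:               {tellers}")
print(f"bank tellers AND feminists: {teller_feminists}")

# True for ANY population, by construction: the conjunction is a subset.
assert teller_feminists <= tellers
```

Whatever probabilities you plug in, the "bank teller and feminist" count is a subset of the "bank teller" count, so it can never be the more probable option.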
Anchoring

Renowned psychologists Amos Tversky and Daniel Kahneman once rigged a wheel of fortune, just like you'd see on the game show. Though labeled with values from 0 to 100, it would only stop at 10 or 65. As an experiment, they had unknowing participants spin the wheel and then answer a two-part question:
Is the percentage of African nations among UN members larger or smaller than the number you just wrote? What is your best guess of the percentage of African nations in the UN?
Kahneman described what happened next in his book Thinking, Fast and Slow:
The spin of a wheel of fortune... cannot possibly yield any useful information about anything, and the participants... should have simply ignored it. But they did not ignore it.
The participants who saw the number 10 on the wheel estimated the percentage of African nations in the UN at 25%, while those who saw 65 gave a much higher estimate, 45%. Participants' answers were "anchored" by the numbers they saw, and they didn't even realize it! Any piece of information, however inconsequential, can affect subsequent assessments or decisions. That's why it's in a car dealer's best interest to keep list prices high, because ultimately, they'll earn more money, and when you negotiate down, you'll still think you're getting a good deal!
Availability Heuristic
When confronted with a decision, humans regularly make judgments based on recent events or information that can be easily recalled. This is known as the availability heuristic.
Says Kahneman, "The availability heuristic... substitutes one question for another: you wish to estimate... the frequency of an event, but you report the impression of ease with which instances come to mind."
Cable news provides plenty of fodder for this mental shortcut. For example, viewers of Entertainment Tonight probably think that celebrities divorce each other once every minute. The actual numbers are more complicated, and far less sensational.
It's important to be cognizant of the availability heuristic because it can lead to poor decisions. In the wake of the tragic events of 9/11, with horrific images of burning buildings and broken rubble fresh in their minds, politicians quickly voted to implement invasive policies to make us safer, such as domestic surveillance and more rigorous airport security. We've been dealing with, and griping about, the results of those actions ever since. Were they truly justified? Did we fall victim to the availability heuristic?
Optimism Bias
"It won't happen to me" isn't merely a cultural trope. Individuals are naturally biased toward thinking that they are less at risk of something bad happening to them than others are. The effect, termed optimism bias, has been demonstrated in studies across a wide range of groups. Smokers believe they are less likely to develop lung cancer than other smokers, traders believe they are less likely to lose money than their peers, and everyday people believe they are less at risk of being the victim of a crime.
Optimism bias particularly factors into matters of health, prompting individuals to neglect salubrious behaviors like exercise, regular visits to the doctor, and condom use.
Gambler's Fallacy
On August 13, 1918, during a game of roulette at the Monte Carlo Casino, the ball fell on black 26 times in a row. In the wake of the streak, gamblers lost millions of francs betting against black. They assumed, quite fallaciously, that the streak was caused by an imbalance of randomness in the wheel, and that Nature would correct for the mistake.
No mistake was made, of course. Past random events in no way affect future ones, yet people regularly intuit that they do.
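The independence of successive random events is easy to demonstrate. The sketch below uses a fair coin as a stand-in for the roulette wheel and records what happens immediately after a run of five heads; if past outcomes mattered, the follow-up flips would drift away from 50/50.

```python
import random

random.seed(42)

# One million fair coin flips standing in for spins of the wheel.
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# Whenever we have just seen a run of at least 5 heads, record the
# very next flip. The gambler's fallacy predicts these should lean
# toward tails; independence says they stay near 50/50.
after_streak = []
streak = 0
for i in range(len(flips) - 1):
    streak = streak + 1 if flips[i] else 0
    if streak >= 5:
        after_streak.append(flips[i + 1])

p = sum(after_streak) / len(after_streak)
print(f"P(heads right after a 5-heads streak) = {p:.3f}")  # stays near 0.500
```

The Monte Carlo gamblers were, in effect, betting that this number would fall well below one half. It doesn't.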
Herd Mentality
We humans are social creatures by nature. The innate desire to be "part of the group" often outweighs any consideration of well-being and leads to flawed decision-making. For a great example, look no further than the stock market. When indexes start to tip, panicked investors frantically begin selling, sending stocks even lower, which in turn exacerbates the selling. Herd mentality also spawns cultural fads. In the back of their minds, pretty much everybody knew that pet rocks were a waste of money, but lots of people bought them anyway.
Halo Effect
The halo effect is a cognitive bias in which we judge a person's character based upon our rapid, and often oversimplified, impressions of him or her. The workplace is a haven -- or rather, an asylum -- for this sort of faulty thinking.
"The halo effect is probably the most common bias in performance appraisal," researchers wrote in the journal Applied Social Psychology in 2012. The article goes on:
Think about what happens when a supervisor evaluates the performance of a subordinate. The supervisor may give prominence to a single characteristic of the employee, such as enthusiasm, and allow the entire evaluation to be colored by how he or she judges the employee on that one characteristic. Even though the employee may lack the requisite knowledge or ability to perform the job successfully, if the employee's work shows enthusiasm, the supervisor may very well give him or her a higher performance rating than is justified by knowledge or ability.
Confirmation Bias
Confirmation bias is the tendency of people to favor information that confirms their beliefs. Even those who avow complete and total open-mindedness are not immune. This bias manifests in many ways. When sifting through evidence, individuals tend to value anything that agrees with them -- no matter how inconsequential -- and instantly discount that which doesn't. They also interpret ambiguous information as supporting their beliefs.
Hearing or reading information that backs our beliefs feels good, and so we often seek it out. A great many liberal-minded individuals treat Rachel Maddow or Bill Maher's words as gospel. At the same time, tons of conservatives flock to Fox News and absorb almost everything said without a hint of skepticism.
One place where it's absolutely vital to be aware of confirmation bias is in criminal investigation. All too often, when investigators have a suspect, they selectively search for, or erroneously interpret, information that "proves" the person's guilt.
Though you may not realize it, confirmation bias also pervades your life. Ever searched Google for an answer to a controversial question? When the results come in after a query, don't you click first on the result whose title or summary backs your hypothesis?
Discounting Delayed Rewards
If offered $50 today or $100 in a year, most people take the money and run, even though it's technically against their best interests. However, if offered $50 in five years or $100 in six years, almost everybody chooses the $100! When confronted with low-hanging fruit in the Tree of Life, most humans cannot resist plucking it.
This is best summed up by the Ainslie-Rachlin Law, which states, "Our decisions... are guided by the perceived values at the moment of the decision - not by the potential final value."
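The preference reversal described above is often modeled with a hyperbolic discounting curve, V = A / (1 + kD), where A is the amount, D the delay, and k a discount rate. The sketch below uses a made-up k chosen only to make the numbers vivid; the reversal itself is a property of the curve's shape, not of the particular rate.

```python
def hyperbolic_value(amount, delay_years, k=1.5):
    """Perceived value under hyperbolic discounting: V = A / (1 + k*D).
    The rate k is an assumption chosen purely for illustration."""
    return amount / (1 + k * delay_years)

# Near choice: $50 now is preferred to $100 in a year...
assert hyperbolic_value(50, 0) > hyperbolic_value(100, 1)   # 50.0 > 40.0

# ...but push both options five years out and the preference reverses,
# matching what people actually choose.
assert hyperbolic_value(100, 6) > hyperbolic_value(50, 5)   # 10.0 > ~5.9
```

Both comparisons involve the same one-year gap between the small and large reward; only the distance from "now" changes, and with it the perceived values at the moment of decision.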
The "Black Hole" Nature of Pseudoscience - M. Pigliucci
The Pseudoscience Black Hole
Massimo Pigliucci, 12/23/13
As I’ve mentioned on other occasions, my most recent effort in philosophy of science actually concerns what my collaborator Maarten Boudry and I call the philosophy of pseudoscience. During a recent discussion we had with some of the contributors to our book at the recent congress of the European Philosophy of Science Association, Maarten came up with the idea of the pseudoscience black hole. Let me explain.
The idea is that it is relatively easy to find historical (and even recent) examples of notions or fields that began within the scope of acceptable scientific practice, but then moved (or, rather, precipitated) into the realm of pseudoscience. The classic case, of course, is alchemy. Contra popular perception, alchemists did produce a significant amount of empirical results about the behavior of different combinations of chemicals, even though the basic theory of elements underlying the whole enterprise was in fact hopelessly flawed. Also, let's not forget that first rate scientists - foremost among them Newton - spent a lot of time carrying out alchemical research, and that they thought of it in the same way in which they were thinking of what later turned out to be good science.
Another example, this one much more recent, is provided by the cold fusion story. The initial 1989 report by Stanley Pons and Martin Fleischmann was received with natural caution by the scientific community, given the potentially revolutionary import (both theoretical and practical) of the alleged discovery. But it was treated as science, done by credentialed scientists working within established institutions. The notion was quickly abandoned when various groups couldn't replicate Pons and Fleischmann's results, and moreover given that theoreticians just couldn't make sense of how cold fusion was possible to begin with. The story would have ended there, and represented a good example of the self-correcting mechanism of science, if a small but persistent group of aficionados hadn't pursued the matter by organizing alternative meetings, publishing alleged results, and eventually even beginning to claim that there was a conspiracy by the scientific establishment to suppress the whole affair. In other words, cold fusion had - surprisingly rapidly - moved not only into the territory of discarded science, but of downright pseudoscience.
Examples of this type can easily be multiplied by even a cursory survey of the history of science. Eugenics and phrenology immediately come to mind, as well as - only slightly more controversially - psychoanalysis. At this point I would also firmly throw parapsychology into the heap (research in parapsychology has been conducted by credentialed scientists, especially during the early part of the 20th century, and for a while it looked like it might have gained enough traction to move to mainstream).
But, asked Maarten, do we have any convincing cases of the reverse happening? That is, are there historical cases of a discipline or notion that began as clearly pseudoscientific but then managed to clean up its act and emerge as a respectable science? And if not, why?
Before going any further, we may need to get a bit clearer on what we mean by pseudoscience. Of course Maarten, our contributors, and I devoted an entire book to exploring that and related questions, so the matter is intricate. Nonetheless, three characteristics of pseudoscience clearly emerged from our discussions:
1. Pseudoscience is not a fixed notion. A field can slide into (and maybe out of?) pseudoscientific status depending on the temporal evolution of its epistemic status (and, to a certain extent, of the sociology of the situation).
2. Pseudoscientific claims are grossly deficient in terms of epistemic warrant. This, however, is not sufficient to identify pseudoscience per se, as some claims made within established science can also, at one time or another, be epistemically grossly deficient.
3. What most characterizes a pseudoscience is the concerted efforts of its practitioners to mimic the trappings of science: They want to be seen as doing science, so they organize conferences, publish specialized journals, and talk about data and statistical analyses. All of it, of course, while lacking the necessary epistemic warrant to actually be a science.
Given this three-point concept of pseudoscience, then, is Maarten right that pseudoscientific status, once reached, is a "black hole," a sink from which no notion or field ever emerges again?
The obvious counter example would seem to be herbal medicine which, to a limited extent, is becoming acceptable as a mainstream practice. Indeed, in some cases our modern technology has uncontroversially and successfully purified and greatly improved the efficacy of natural remedies. Just think, of course, of aspirin, whose active ingredient is derived from the bark and leaves of willow trees, the effectiveness of which was well known already to Hippocrates 23 centuries ago.
Maybe, just maybe, we are in the process of witnessing a similar emergence of acupuncture from pseudoscience to medical acceptability. I say maybe because it is not at all clear, as yet, whether acupuncture has additional effects above and beyond the placebo. But if it does, then it should certainly be used in some clinical practice, mostly as a complementary approach to pain management (it doesn't seem to have measurable effects on much else).
But these two counter examples struck both Maarten and me as rather unconvincing. They are better interpreted as specific practices, arrived at by trial and error, which happen to work well enough to be useful in modern settings. The theory, such as it is, behind them is not just wrong, but could never have aspired to be scientific to begin with.
Acupuncture, for instance, is based on the metaphysical notion of Qi energy, flowing through 12 "regular" and 8 "extraordinary" so-called "meridians." Indeed, there are allegedly five types of Qi energy, corresponding to five cardinal functions of the human body: actuation, warming, defense, containment and transformation. Needless to say, all of this is entirely made up, and makes absolutely no contact with either empirical science or established theoretical notions in, say, physics or biology.
The situation is even more hopeless in the case of "herbalism," which originates from a hodgepodge of approaches, including magic, shamanism, and Chinese "medicine" type of supernaturalism. Indeed, one of Hippocrates' great contributions was precisely to reject mysticism and supernaturalism as bases for medicine, which is why he is often referred to as the father of "Western" medicine (i.e., medicine).
Based just on the examples discussed above - concerning once acceptable scientific notions that slipped into pseudoscience and pseudoscientific notions that never emerged into science - it would seem that there is a potential explanation for Maarten's black hole. Cold fusion, phrenology, and to some (perhaps more debatable) extent alchemy were not just empirically based (so is acupuncture, after all!), but built on a theoretical foundation that invoked natural laws and explicitly attempted to link up with established science. Those instances of pseudoscience whose practice, but not theory, may have made it into the mainstream, instead, invoked supernatural or mystical notions, and most definitely did not make any attempt to connect with the rest of the scientific web of knowledge.
Please note that I am certainly not saying that all pseudoscience is based on supernaturalism. Parapsychology and ufology, in most of their incarnations at least, certainly aren't. What I am saying is that either a notion begins within the realm of possibly acceptable science - from which it then evolves either toward full fledged science or slides into pseudoscience - or it starts out as pseudoscience and remains there. The few apparent exceptions to the latter scenario appear to be cases of practices based on mystical or similar notions. In those cases aspects of the practice may become incorporated into (and explained by) modern science, but the "theoretical" (really, metaphysical) baggage is irrevocably shed.
Can anyone think of examples that counter the idea of the pseudoscience black hole? Or of alternative explanations for its existence?
Originally on Rationally Speaking
Scientific Research That Has Lost Its Rigor - The Economist
How science goes wrong
Scientific research has changed the world. Now it needs to change itself
The Economist, 10/19/13
A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better.
But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying -- to the detriment of the whole of science, and of humanity.
Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
What a load of rubbish
Even when flawed research does not put people’s lives at risk -- and much of it is too far from the market to do so -- it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.
One reason is the competitiveness of science. In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control. The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012 -- more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.
Careerism also encourages exaggeration and the cherry-picking of results. In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.
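The arithmetic behind "the odds shorten" is straightforward. If several teams independently test an effect that isn't real, each at the conventional 5% significance threshold, the chance that at least one of them stumbles on a spurious "discovery" is 1 - 0.95^k:

```python
# Chance of at least one false positive when k independent teams each
# test a true null hypothesis at the 5% significance level.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_false_hit = 1 - (1 - alpha) ** k
    print(f"{k:2d} teams -> P(at least one spurious 'discovery') = {p_false_hit:.2f}")
```

With twenty teams on the same problem, a false positive somewhere becomes more likely than not, and it is precisely that one striking result that the journals see.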
Conversely, failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990. Yet knowing what is false is as important to science as knowing what is true. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.
The hallowed process of peer review is not all it is cracked up to be, either. When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.
If it’s broke, fix it
All this makes a shaky foundation for an enterprise dedicated to discovering the truth about the world. What might be done to shore it up? One priority should be for all disciplines to follow the example of those that have done most to tighten standards. A start would be getting to grips with statistics, especially in the growing number of fields that sift through untold oodles of data looking for patterns. Geneticists have done this, and turned an early torrent of specious results from genome sequencing into a trickle of truly significant ones.
Ideally, research protocols should be registered in advance and monitored in virtual notebooks. This would curb the temptation to fiddle with the experiment’s design midstream so as to make the results look more substantial than they are. (It is already meant to happen in clinical trials of drugs, but compliance is patchy.) Where possible, trial data also should be open for other researchers to inspect and test.
The most enlightened journals are already becoming less averse to humdrum papers. Some government funding agencies, including America’s National Institutes of Health, which dish out $30 billion on research each year, are working out how best to encourage replication. And growing numbers of scientists, especially young ones, understand statistics. But these trends need to go much further. Journals should allocate space for “uninteresting” work, and grant-givers should set aside money to pay for it. Peer review should be tightened -- or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments. That system has worked well in recent years in physics and mathematics. Lastly, policymakers should ensure that institutions using public money also respect the rules.
Science still commands enormous -- if sometimes bemused -- respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.
In Where Do Thoughts Occur? (this column's 《思想在那裏發生？》), Ms. Aschwanden surveys experiments conducted by a number of cognitive scientists, along with the conclusion they draw "based on" those experiments: that a person's bodily state is part of the "thinking" process itself.
But what we don't do is think entirely inside our heads. Thoughts aren't confined to our brains -- they course through a network that expands to our bodies, perhaps eliminating, at times, the need for complex thought. (The red text is my emphasis, added to highlight the wording of Ms. Aschwanden or of those she quotes. Likewise below.)
The notion that we think with the body -- the startling conclusion of a field called embodied cognition -- flies in the face of long-standing views. …
But dozens of studies over the past decade challenge that view, suggesting instead that our thoughts are inextricably linked to physical experience. As University of Toronto psychologist Spike Lee puts it, bodily states “aren’t some extraneous thing -- they’re part of the thinking process.”
“It’s not just that our bodies influence thought: It’s that thought itself is a system that simultaneously takes place in the brain, the body and the environment around us.”
If we could pin down exactly what the phrase "part of" refers to in:
"… our thoughts are inextricably linked to physical experience." and
"bodily states 'aren't some extraneous thing -- they're part of the thinking process.'"
"But what we don't do is think entirely inside our heads. Thoughts aren't confined to our brains -- they course through a network that expands to our bodies, …" and
“It’s not just that our bodies influence thought: It’s that thought itself is a system that simultaneously takes place in the brain, the body and the environment around us.”
Human brains differ from computers in many ways. One difference is that the human brain continuously processes "real-time" information and, on the basis of that information, we constantly and immediately "update" our judgments and decisions to cope with ever-changing external and internal environments. The brain's functioning and its "computational" results are therefore indeed "influenced" by the environment and by the media through which information enters (our five senses, limbs, breathing, blood circulation, and so on). The "stimulus-response" theory of the Russian/Soviet psychologist Pavlov in the early twentieth century, and the behaviorist school of psychology that followed, understood and championed this point long ago. But this only shows that our thinking process is influenced by the states of the senses, limbs, and so forth, and makes use of information acquired through them. Their "participation" is passive; the senses and limbs are not themselves a link in the active act of human thinking. Can a knee really think?
In Do You Really Have Free Will? (this column's 《人真的有自由意志嗎?當然有》), Professor Roy F. Baumeister says that if we want to judge, decide, or answer the question "Do humans have 'free will'?", we first need a clear and explicit definition of the concept of "free will."
Professor Baumeister begins by criticizing other scholars' and Professor John Bargh's interpretations of "free will":
The "free" in "free will" means that human behavior is free from the constraints of "causality." (The 'free' in free will means freedom from causation.) (Bargh 2009)
(These arguments leave untouched the meaning of free will that most people understand, which is consciously making choices about what to do in the absence of external coercion, and accepting responsibility for one’s actions.)
"Free will" is merely another kind of "cause." (Free will is just another kind of cause.)
The evolution of free will began when living things began to make choices. The difference between plants and animals illustrates an important early step. Plants don't change their location and don't need brains to help them decide where to go. Animals do. Free will is an advanced form of the simple process of controlling oneself, called agency. (The red text is my emphasis of Professor Baumeister's wording. Likewise below.)
If culture is so successful, why don’t other species use it? They can’t -- because they lack the psychological innate capabilities it requires.
What psychological capabilities are needed to make cultural systems work? To be a member of a group with culture, people must be able to understand the culture’s rules for actions, including moral principles and formal laws.
Self-control counts as a kind of freedom because it begins with not acting on every impulse.
a. We now understand that "evolution" does not arise out of nothing, nor is it some process of "emergence"; it has a physical, chemical, and molecular-biological basis -- genes. The difference between plants and animals is not that the former do not need a "brain," but that the former lack the structures and genes from which a "brain" could evolve.
b. The phrase "psychological innate capabilities" is a bait-and-switch. Humans are able to create culture because we have memory cells and other related brain neurons. Professor Baumeister knows perfectly well that the human capacity to create culture is a neurological and physiological one. By saying "psychological innate capabilities" he is in fact trying to sidestep the concept of physical capabilities while smuggling in something that can be neither seen nor touched, and then calling it "free will."
c. People are able to understand the rules of the game and the social norms within a "culture," first, because the memory cells in our brains give us the capacity to learn, and second, because everyone undergoes the shaping of "social construction" while growing up. What Professor Baumeister calls psychological capabilities are the product of our physiological structure plus learning. Memory cells are an innate, "internal" component of us; learning, however, is an acquired, "externally built" process.
d. I can understand treating "self-control" as one way of thinking about some kind of "will"; but I completely fail to see the logic of regarding "self-control" as (counting it as) "freedom."
* Bargh, J. A. (2009). The Will Is Caused, Not "Free". http://www.psychologytoday.com/blog/the-natural-unconscious/200906/the-will-is-caused-not-free