Methodology: Blind Spots and Wrong Turns in Research -- Opening Post
This city already has a column devoted to the scientific method (【關於科學方法的討論】). The present column instead reposts "academic" papers and research reports that I consider close to nonsense; they are examples of how not to do research. To start, I am reposting two articles I came across recently, along with one paper.
Over the past two years my thinking has not been very fluent and my writing has slowed accordingly, so it may take some time before I finish my critiques of these pieces. Everyone is welcome to join in and take part in the discussion.
The Multiverse and the Inverse Gambler's Fallacy -- Philip Goff
I am reposting the article below for two reasons: 1) Professor Goff's piece discusses the connection between the multiverse hypothesis and the inverse gambler's fallacy, and reading it can be treated as a puzzle or a brain-teaser exercise. 2) It serves as a negative example in the study of methodology.

A brief comment on the second point: if the thinking of scholars who support the multiverse hypothesis tends toward the inverse gambler's fallacy, then Professor Goff tends toward a category mistake: he treats cosmology as a matter of faith.

Index: gambler's fallacy (賭徒謬誤)

Many physicists assume we must live in a multiverse – but their basic maths may be wrong
Philip Goff, 11/09/23

One of the most startling scientific discoveries of recent decades is that physics appears to be fine-tuned for life. This means that for life to be possible, certain numbers in physics had to fall within a certain, very narrow range.

One of the examples of fine-tuning which has most baffled physicists is the strength of dark energy, the force that powers the accelerating expansion of the universe. If that force had been just a little stronger, matter couldn’t clump together. No two particles would have ever combined, meaning no stars, planets, or any kind of structural complexity, and therefore no life.

If that force had been significantly weaker, it would not have counteracted gravity. This means the universe would have collapsed back on itself within the first split-second – again meaning no stars or planets or life. To allow for the possibility of life, the strength of dark energy had to be, like Goldilocks’s porridge, “just right”.

This is just one example, and there are many others. The most popular explanation for the fine-tuning of physics is that we live in one universe among a multiverse. If enough people buy lottery tickets, it becomes probable that somebody is going to have the right numbers to win. Likewise, if there are enough universes, with different numbers in their physics, it becomes likely that some universe is going to have the right numbers for life.

For a long time, this seemed to me the most plausible explanation of fine-tuning. However, experts in the mathematics of probability have identified the inference from fine-tuning to a multiverse as an instance of fallacious reasoning – something I explore in my new book, Why? The Purpose of the Universe. Specifically, the charge is that multiverse theorists commit what’s called the inverse gambler’s fallacy.

Suppose Betty is the only person playing in her local bingo hall one night, and in an incredible run of luck, all of her numbers come up in the first minute. Betty thinks to herself: “Wow, there must be lots of people playing bingo in other bingo halls tonight!” Her reasoning is: if there are lots of people playing throughout the country, then it’s not so improbable that somebody would get all their numbers called out in the first minute.

But this is an instance of the inverse gambler’s fallacy. No matter how many people are or are not playing in other bingo halls throughout the land, probability theory says it is no more likely that Betty herself would have such a run of luck.

It’s like playing dice. If we get several sixes in a row, we wrongly assume that we are less likely to get sixes in the next few throws. And if we don’t get any sixes for a while, we wrongly assume that there must have been loads of sixes in the past. But in reality, each throw has an exact and equal probability of one in six of getting a specific number.

(Image: Dice deceive us. Hlorgeksidin/Shutterstock – see the photo at the original page.)

Multiverse theorists commit the same fallacy. They think: “Wow, how improbable that our universe has the right numbers for life; there must be many other universes out there with the wrong numbers!” But this is just like Betty thinking she can explain her run of luck in terms of other people playing bingo.
When this particular universe was created, as in a die throw, it still had a specific, low chance of getting the right numbers.

At this point, multiverse theorists bring in the “anthropic principle” – that because we exist, we could not have observed a universe incompatible with life. But that doesn’t mean such other universes don’t exist.

Suppose there is a deranged sniper hiding in the back of the bingo hall, waiting to shoot Betty the moment a number comes up that’s not on her bingo card. Now the situation is analogous to real world fine-tuning: Betty could not have observed anything other than the right numbers to win, just as we couldn’t have observed a universe with the wrong numbers for life. Even so, Betty would be wrong to infer that many people are playing bingo. Likewise, multiverse theorists are wrong to infer from fine-tuning to many universes.

What about the multiverse?

Isn’t there scientific evidence for a multiverse though? Yes and no. In my book, I explore the connections between the inverse gambler’s fallacy and the scientific case for the multiverse, something which surprisingly hasn’t been done before.

The scientific theory of inflation – the idea that the early universe blew up hugely in size – supports the multiverse. If inflation can happen once, it is likely to be happening in different areas of space – creating universes in their own right. While this may give us tentative evidence for some kind of multiverse, there is no evidence that the different universes have different numbers in their local physics.

There is a deeper reason why the multiverse explanation fails. Probabilistic reasoning is governed by a principle known as the requirement of total evidence, which obliges us to work with the most specific evidence we have available. In terms of fine-tuning, the most specific evidence that people who believe in the multiverse have is not merely that a universe is fine-tuned, but that this universe is fine-tuned. If we hold that the constants of our universe were shaped by probabilistic processes – as multiverse explanations suggest – then it is incredibly unlikely that this specific universe, as opposed to some other among millions, would be fine-tuned. Once we correctly formulate the evidence, the theory fails to account for it.

The conventional scientific wisdom is that these numbers have remained fixed from the Big Bang onwards. If this is correct, then we face a choice. Either it’s an incredible fluke that our universe happened to have the right numbers. Or the numbers are as they are because nature is somehow driven or directed to develop complexity and life by some invisible, inbuilt principle. In my opinion, the first option is too improbable to take seriously. My book presents a theory of the second option – cosmic purpose – and discusses its implications for human meaning and purpose.

This is not how we expected science to turn out. It’s a bit like in the 16th century when we first started to get evidence that we weren’t in the centre of the universe. Many found it hard to accept that the picture of reality they’d got used to no longer explained the data. I believe we’re in the same situation now with fine-tuning. We may one day be surprised that we ignored for so long what was lying in plain sight – that the universe favours the existence of life.

Philip Goff, Associate Professor of Philosophy, Durham University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
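Goff's bingo and dice points are probabilistic claims that can be checked directly. The sketch below is a minimal simulation of the bingo-hall analogy (my own illustration, not Goff's; the win probability, trial count, and hall counts are arbitrary assumptions): however many other halls happen to be playing, the frequency with which Betty's own card wins in the first minute is unchanged.

```python
import random

# Toy model of the bingo analogy (illustrative numbers only, not from the article).
# Betty's hall either plays alone or alongside many other halls.
P_EARLY_WIN = 1e-3   # assumed chance that a given hall's player wins in the first minute
TRIALS = 100_000

def betty_win_rate(n_other_halls: int) -> float:
    """Fraction of simulated nights on which Betty's own hall sees an early win."""
    wins = 0
    for _ in range(TRIALS):
        # Other halls play too, but their draws never enter Betty's result.
        for _ in range(n_other_halls):
            random.random()
        wins += random.random() < P_EARLY_WIN
    return wins / TRIALS

print(betty_win_rate(0))     # roughly 0.001
print(betty_win_rate(100))   # still roughly 0.001: extra halls do not explain Betty's luck
```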
Reflections on "The Common Personality Trait of High Achievers"
0. Preface

The previous post in this column reports on Professor Duckworth's research. I have never done psychological research, nor am I familiar with the literature on personality traits or on educational psychology. With that limited knowledge, here are my humble views after reading the report: 1) although Professor Duckworth clarifies her own position in the second-to-last section of the report, I still think her research has a methodological blind spot; 2) although the final section adds a caveat, the first three quarters of the report suggest that the journalist, Mr. Smith, lacks basic common sense and logical training. These two criticisms are explained below.

1. Cultural differences

Before getting to the main point, a few words on where I think this research touches on culture. Chinese culture has always celebrated the "spirit" or "mindset" factors the report calls grit (perseverance, willpower, tenacity); think of such familiar idioms, folk tales, and myths as the foolish old man who moved the mountains, Jingwei filling the sea, dripping water wearing through stone, a rope saw cutting through wood, and an iron pestle ground down into an embroidery needle. These may be a kind of consolation prize, or what Nietzsche called slave morality. American society, by contrast, prizes quick results, so people celebrate talents and skills that can be cashed in right away, and educational policy, methods, and assessment tend to neglect the cultivation and role of perseverance or willpower. That, I suspect, is the background against which Professor Duckworth's findings were received as a treasure or as the discovery of a new continent. One aside: when I read in high school that "what others can do in one effort, I will do in a hundred; what others can do in ten, I will do in a thousand" (Zhongyong, 22), it occurred to me that by the time I had done something a hundred times, the person who did it once would have learned 99 other skills.

2. Methodological issues

1) Generalizing a conclusion about "high achievers" in basic military training to all fields, and treating it as common sense or truth, is what logic calls hasty generalization. 2) The premise "all high achievers have the personality trait of grit" does not yield the conclusion "everyone with the personality trait of grit can reach high achievement."

3. Common sense and logic

In the 21st century, ignoring the constraints and influence that genes, human anatomy, brain structure, and the brain's neural wiring exert on human abilities is, in my view, tantamount to ignorance.

4. Conclusion

I believe criticisms 2 and 3 above both stand; that is why I have placed this report on Professor Duckworth's research in this column.
The Common "Personality Trait" of High Achievers - Dave Smith
Index: grit (毅力、意志力、堅毅不拔)

Top psychologist says all elite achievers have one thing in common—and it’s not an innate ability like brains or talent
Dave Smith, 10/16/25

After years of studying high achievers across diverse fields, top psychologist Angela Duckworth has identified what she calls the most reliable predictor of success—and it challenges conventional wisdom about talent and intelligence.

Author Mel Robbins, who has 4.6 million subscribers on YouTube, recently asked Duckworth about her findings during a recording of her podcast, released Monday. “The common denominator of high achievers, no matter what they’re achieving, is this special combination of passion and perseverance for really long-term goals,” Duckworth explains. “And in a word, it’s grit.”

Duckworth, a professor at the University of Pennsylvania and MacArthur Fellow, defines grit as two interconnected components that work together over time. “It’s these two parts, right? Passion for long-term goals, like loving something and staying in love with it. Not kind of wandering off and doing something else, and then something else again, and then something else again, but having a kind of North Star,” she said.

The perseverance component is equally crucial, according to Duckworth. “Partly, it’s hard work, right? Partly it’s practicing what you can’t yet do, and partly it’s resilience. So part of perseverance is, on the really bad days, do you get up again?”

In children or West Point cadets, research shows grit matters most

Duckworth’s research, which dates back to 2007, has pushed the idea that grit outperforms traditional predictors of success. She studied over 11,000 cadets across multiple years at the U.S. Military Academy at West Point, measuring their “grit scores” upon entry and tracking their performance through the notoriously difficult “Beast Barracks” training program.

The results were striking: Grit proved to be the strongest predictor of which cadets would complete the grueling six-week program, outperforming SAT scores, high school GPA, physical fitness assessments, and even West Point’s comprehensive “Whole Candidate Score.” While 3% of new cadets typically leave during Beast Barracks, those with higher grit scores were significantly more likely to persist. The academy’s traditional metrics failed to capture what mattered most: the ability to persist when facing extreme challenges.

Similar patterns emerged in Duckworth’s study of National Spelling Bee contestants. Children with higher grit scores were more likely to advance to later rounds of competition, regardless of their measured intelligence. The research showed that gritty spellers engaged more frequently in what researchers call “deliberate practice”: the effortful, often unenjoyable work of studying and memorizing words alone, rather than more pleasant activities like being quizzed by others.

The effort equation

Duckworth’s research revealed a counterintuitive relationship between grit and traditional measures of ability. “I think that absolutely anything that any psychologist tells you is a good thing to have is partly under control,” she told Robbins during the podcast. “I am not saying there aren’t genes that are at play, because every psychologist will tell you that that’s also part of the story for everything—grit included. But you know, how gritty we are is very much a function of what we know, who we’re around, and the places we go.”

In one study, Duckworth found smarter students actually had less grit than their peers who scored lower on intelligence tests. This finding suggests that individuals who aren’t naturally gifted often compensate by working harder and with greater determination—and their effort pays off. At an Ivy League university, the grittiest students, not the smartest ones, achieved the highest GPAs.

Duckworth believes “effort counts twice” in the achievement equation. Her formula is as follows: Talent × Effort = Skill, and Skill × Effort = Achievement. “Talent is how quickly your skills improve when you invest effort. Achievement is what happens when you take your acquired skills and use them,” she told Forbes in 2017.

An important caveat: Grit isn’t everything

Duckworth’s work has influenced educational policy discussions and military training programs, though she has evolved her thinking about the trait’s role. In 2018, she acknowledged during an interview with EdSurge that “when we are talking about what kids need to grow up and live lives that are happy and healthy and good for other people, it’s a long list of things. Grit is on that list, but it is not the only thing on the list.”

Recent studies have both supported and refined Duckworth’s findings. A 2019 study of West Point cadets, which Duckworth also contributed to, found that while grit remained a significant predictor of graduation, cognitive ability was the strongest predictor of academic and military performance. Other research has questioned whether grit adds substantial predictive power beyond established personality traits like conscientiousness.

Despite ongoing scholarly debate about grit’s uniqueness as a construct, the core insight remains compelling: Sustained effort and commitment to long-term goals often matter more than natural ability alone. As Duckworth put it back in 2017, “Our potential is one thing. What we do with it is quite another.”

You can watch Mel Robbins’s full interview with Angela Duckworth at the original page.

This story was originally featured on Fortune.com
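Since the article quotes Duckworth's formula explicitly (Talent × Effort = Skill; Skill × Effort = Achievement), here is a minimal sketch of the arithmetic; the numbers are invented purely for illustration and are not from her research.

```python
def achievement(talent: float, effort: float) -> float:
    """Duckworth's two-step formula as quoted in the article:
    skill = talent * effort; achievement = skill * effort.
    Effort therefore enters the product twice."""
    skill = talent * effort
    return skill * effort

# Invented numbers: a less talented but harder-working person can come out ahead.
print(achievement(talent=10, effort=2))  # 40
print(achievement(talent=5, effort=4))   # 80
```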
Why Brain Scans Can Be Misread - M. Costandi
Bold Assumptions: Why Brain Scans Are Not Always What They Seem
Moheb Costandi, 04/01/15
In 2009, researchers at the University of California, Santa Barbara performed a curious experiment. In many ways, it was routine — they placed a subject in the brain scanner, displayed some images, and monitored how the subject's brain responded. The measured brain activity showed up on the scans as red hot spots, like many other neuroimaging studies.
Except that this time, the subject was an Atlantic salmon, and it was dead.
Dead fish do not normally exhibit any kind of brain activity, of course. The study was a tongue-in-cheek reminder of the problems with brain scanning studies. Those colorful images of the human brain found in virtually all news media may have captivated the imagination of the public, but they have also been the subject of controversy among scientists over the past decade or so. In fact, neuro-imagers are now debating how reliable brain scanning studies actually are, and are still mostly in the dark about exactly what it means when they see some part of the brain "light up."
Glitches in reasoning
Functional magnetic resonance imaging (fMRI) measures brain activity indirectly by detecting changes in the flow of oxygen-rich blood, or the blood oxygen-level dependent (BOLD) signal, with its powerful magnets. The assumption is that areas receiving an extra supply of blood during a task have become more active. Typically, researchers would home in on one or a few "regions of interest," using 'voxels,' tiny cube-shaped chunks of brain tissue containing several million neurons, as their units of measurement.
Early fMRI studies involved scanning participants' brains while they performed some mental task, in order to identify the brain regions activated during the task. Hundreds of such studies were published in the first half of the last decade, many of them garnering attention from the mass media.
Eventually, critics pointed out a logical fallacy in how some of these studies were interpreted. For example, researchers may find that an area of the brain is activated when people perform a certain task. To explain this, they may look up previous studies on that brain area, and conclude that whatever function it is reported to have also underlies the current task.
Among many examples of such studies were those that concluded people get satisfaction from punishing rule-breaking individuals, and that for mice, pup suckling is more rewarding than cocaine. In perhaps one of the most famous examples, a researcher diagnosed himself as a psychopath by looking at his own brain scan.
These conclusions could well be true, but they could also be completely wrong, because the area observed to be active most likely has other functions, and could serve a different role than that observed in previous studies.
The brain is not composed of discrete specialized regions. Rather, it's a complex network of interconnected nodes, which cooperate to generate behavior. Thus, critics dismissed fMRI as "neo-phrenology" – after the discredited nineteenth century pseudoscience that purported to determine a person's character and mental abilities from the shape of their skull – and disparagingly referred to it as 'blobology.'
When results magically appear out of thin air
In 2009, a damning critique of fMRI appeared in the journal Perspectives on Psychological Science. Initially titled "Voodoo Correlations in Social Neuroscience" and later retitled to "Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition," the article questioned the statistical methods used by neuro-imagers. The authors, Ed Vul of University of California in San Diego and his colleagues, examined a handful of social cognitive neuroscience studies, and pointed out that their statistical analyses gave impossibly high correlations between brain activity and behavior.
"It certainly created controversy," says Tal Yarkoni, an assistant professor in the Department of Psychology at the University of Texas, Austin. "The people who felt themselves to be the target ignored the criticism and focused on the tone, but I think a large subset of the neuroimaging community paid it some lip service."
Russ Poldrack of the Department of Psychology at Stanford University says that although the problem was more widespread than the paper suggested, many neuro-imagers were already aware of it. "They happened to pick on one part of the literature, but almost everybody was doing it," he says.
The problem arises from the "circular" nature of the data analysis, Poldrack says. "We usually analyze a couple of hundred thousand voxels in a study," he says. "When you do that many statistical tests, you look for the ones that are significant, and then choose those to analyze further, but they'll have high correlations by virtue of the fact that you selected them in the first place."
Not long after Vul's paper was published, Craig Bennett and his colleagues published their dead salmon study to demonstrate how robust statistical analyses are key to interpreting fMRI data. When stats are not done well enough, researchers can easily get false positive results – or see an effect that isn't actually there, such as activity in the brain of a dead fish.
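Poldrack's point about "circular" selection can be illustrated with a small simulation: correlate thousands of pure-noise "voxels" with a behavioral score, keep only those passing an uncorrected threshold, and the selected set looks impressively correlated even though nothing real is there. This is a sketch under assumed numbers (20 subjects, 10,000 voxels, a |r| > 0.5 cutoff), not a reanalysis of any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 20        # small sample, as in many early fMRI studies
n_voxels = 10_000      # every voxel here is pure noise
behavior = rng.normal(size=n_subjects)
voxels = rng.normal(size=(n_voxels, n_subjects))

# Correlate each noise voxel with the behavioral score.
r = np.array([np.corrcoef(v, behavior)[0, 1] for v in voxels])

# "Circular" step: select voxels by the very correlation that is then reported.
selected = r[np.abs(r) > 0.5]
print(len(selected), "noise voxels pass |r| > 0.5")
print("mean |r| among selected:", np.abs(selected).mean())  # inflated purely by selection
```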
The rise of virtual superlabs
The criticisms drove researchers to do better work— to think more deeply about their data, avoid logical fallacies in interpreting their results, and develop new analytical methods.
At the heart of the matter is the concept of statistical power, which reflects how likely the results are to be meaningful instead of being obtained by pure chance. Smaller studies typically have lower power. An analysis published in 2013 showed that underpowered studies are common in almost every area of brain research. This is especially the case in neuroimaging studies, because most of them involve small numbers of participants.
"Ten years ago I was willing to publish papers showing correlations between brain activity and behavior in just 20 people," says Poldrack. "Now I wouldn't publish a study that doesn't involve at least 50 subjects, or maybe 100, depending on the effect. A lot of other labs have come around to this idea."
Cost is one of the big barriers preventing researchers from increasing the size of their studies. "Neuroimaging is very expensive. Every lab has a budget and a researcher isn't going to throw away his entire year's budget on a single study. Most of the time, there's no real incentive to do the right thing," Yarkoni says.
Replication – or repeating experiments to see if the same results are obtained – also gives researchers more confidence in their results. But most journals are unwilling to publish replication experiments, preferring novel findings instead, and the act of repeating someone else's experiments is seen as aggressive, as if implying they were not done properly in the first place.
One way around these problems is for research teams to collaborate with each other and pool their results to create larger data sets. One such initiative is the IMAGEN Consortium, which brings together neuro-imaging experts from 18 European research centers, to share their results, integrate them with genetic and behavioral data, and create a publicly available database.
Five years ago, Poldrack started the OpenfMRI project, which has similar aims. "The goal was to bring together data to answer questions that couldn't be answered with individual data sets," he says. "We're interested in studying the psychological functions underlying multiple cognitive tasks, and the only way of doing that is to amass lots of data from lots of different tasks. It's way too much for just one lab."
An innovative way of publishing scientific studies, called pre-registration, could also increase the statistical power of fMRI studies. Traditionally, studies are published in scientific journals after they have been completed and peer-reviewed. Pre-registration requires that researchers submit their proposed experimental methods and analyses early on. If these meet the reviewers' satisfaction, they are published; the researchers can then conduct the experiment and submit the results, which are eventually published alongside the methods.
"The low statistical power and the imperative to publish incentivizes researchers to mine their data to try to find something meaningful," says Chris Chambers, a professor of cognitive neuroscience at the University of Cardiff. "That's a huge problem for the credibility and integrity of the field."
Chambers is an associate editor at Cortex, one of the first scientific journals to offer pre-registration. As well as demanding larger sample sizes, the format also encourages researchers to be more transparent about their methods.
Many fMRI studies would, however, not be accepted for pre-registration – their design would not stand up to the scrutiny of the first-stage reviewers. "Neuro-imagers say pre-registration consigns their field to a ghetto," says Chambers. "I tell them they can collaborate with others to share data and get bigger samples."
Pushing the field forward
Even robust and apparently straight-forward fMRI findings can still be difficult to interpret, because there are still unanswered questions about the nature of the BOLD signal. How exactly does the blood rush to a brain region? What factors affect it? What if greater activation in a brain area actually means the region is working less efficiently?
"What does it mean to say neurons are firing more in one condition than in another? We don't really have a good handle on what to make of that," says Yarkoni. "You end up in this uncomfortable situation where you can tell a plausible story no matter what you see."
To some extent, the problems neuro-imagers face are part of the scientific process, which involves continuously improving one's methods and refining ideas in light of new evidence. When done properly, the method can be extremely powerful, as the ever-growing number of so-called "mind-reading" and "decoding" studies clearly show.
It's likely that with incremental improvements in the technology, fMRI results will become more accurate and reliable. In addition, there are a number of newer projects that aim to find other ways to capture brain activity. For example, one group at Massachusetts General Hospital is working on using paramagnetic nanoparticles to detect changes in blood volume in the brain's capillaries. Such a method would radically enhance the quality of signals and make it possible to detect brain activity in one individual, as opposed to fMRI that requires pooling data from a number of people, according to the researchers. Other scientists are diving even deeper, using paramagnetic chemicals to reveal brain activity at the cell level. If such methods come to fruition, we could find the subtlest activities in the brain, maybe just not in a dead fish.
https://www.braindecoder.com/bold-assumptions-why-brain-scans-are-not-always-what-they-seem-1069949099.html
Why Has the "Language Instinct" Theory Become a Dominant School?
(This post is reposted from the column 【批判「語言本能說」】; the title has been changed to fit the theme of this column.)
I accept materialism and contemporary empiricism. I am therefore skeptical of innatist views of every kind, such as Mencius's doctrine that human nature is good, Kant's doctrine of the categories (of thought), and Chomsky's "language instinct."
I recently read Professor Evans's book criticizing the "language instinct" (see the opening post of the column 【批判「語言本能說」】 in this city). Drawing on findings from genetics, contemporary evolutionary theory, comparative linguistics, cultural anthropology, neuroscience, and the study of language development, he gives the main claims of the "language instinct" thesis a comprehensive and thorough analysis and examination. I believe this will have some effect on the direction and subject matter of future research in linguistics.
I am in no position to add to Professor Evans's arguments, so I will leave that part aside. What I want to discuss is:
why Chomsky's "language instinct" has been able to become one of the dominant schools of linguistic research over the past 40 years.
1. After earning their doctorates, many scholars, whether out of familiarity or laziness, tend to keep writing and doing research on their dissertation topics. Doctoral students under the same adviser usually work in similar fields on similar topics. This is probably one reason why schools of thought, academic cliques, and academic fashions form.
2. Once a school, clique, or fashion has acquired turf and influence, many scholars, in order to obtain an assistant professorship or win promotion, have little choice but to ride the bandwagon and side with the bigger battalions.
3. Once a school, clique, or fashion has acquired turf and influence, its basic assumptions and objects of study naturally become what Kuhn called a "paradigm" of thought and research.
4. Besides the three practical (bread-and-butter) factors above, there is a theoretical factor: the "basic assumptions" I often emphasize. That is, people who accept idealism are probably more able, or more inclined, to accept innatist doctrines of various kinds, and consciously or subconsciously overlook the ways those doctrines are incompatible with findings in genetics, contemporary evolutionary theory, and neuroscience. For example, Professor Evans's criticisms of the "language instinct" from those three fields apply equally to the doctrine that human nature is good and to the doctrine of the categories (of thought).
"It would be better to have no books at all than to believe everything in them." How true that is.
Scholars Are Not Know-It-Alls and Should Not Stray Beyond Their Fields - M. R. Francis
Quantum and Consciousness Often Mean Nonsense
Lots of things are mysterious. That doesn’t mean they’re connected.
Matthew R. Francis, 05/29/14
Possibly no subject in science has inspired more nonsense than quantum mechanics. Sure, it’s a complicated field of study, with a few truly mysterious facets that are not settled to everyone’s satisfaction after nearly a century of work. At the same time, though, using quantum to mean “we just don’t know” is ridiculous -- and simply wrong. Quantum mechanics is the basis for pretty much all our modern technology, from smartphones to fluorescent lights, digital cameras to fiber-optic communications.
If I had to pick a runner-up in the nonsense sweepstakes, it would be human consciousness, another subject with a lot of mysterious aspects. We are made of ordinary matter yet are self-aware, capable of abstractly thinking about ourselves and of recognizing others (including nonhumans) as separate entities with their own needs. As a physicist, I’m fascinated by the notion that our consciousness can imagine realities other than our own: The universe is one way, but we are perfectly happy to think of how it might be otherwise.
I hold degrees in physics and have spent a lot of time learning and teaching quantum mechanics. Nonphysicists seem to have the impression that quantum physics is really esoteric, with those who study it spending their time debating the nature of reality. In truth, most of a quantum mechanics class is lots and lots of math, in the service of using a particle’s quantum state -- the bundle of physical properties such as position, energy, spin, and the like -- to describe the outcomes of experiments. Sure, there’s some weird stuff and it’s fun to talk about, but quantum mechanics is aimed at being practical (ideally, at least).
Yet the mysterious aspects of quantum physics and consciousness have inspired many people to speculate freely. The worst offenders will even say that because we don’t fully understand either field, they must be related problems. It sounds good at first: We don’t know exactly how some things in quantum physics work, we don’t know exactly how to go from the brain to consciousness, so maybe consciousness is quantum.
The problem with this idea? It’s almost certainly wrong.
Oh, sure: In a sense the brain is quantum, simply because all matter is described by quantum mechanics. However, what people usually mean by quantum isn’t ordinary stuff such as molecules that let brain cells communicate. Instead, the term is usually reserved for the deeper processes that rely on the quantum state. The quantum state is where fun stuff like entanglement lives: the coupling of two widely separated particles that act like parts of a single system. But that level of analysis is not generally helpful for describing the motion of molecules across the gap between cells in the brain.
That’s not to say that quantum effects are entirely ruled out in biology. Some researchers are investigating how photosynthesis or even the human senses of sight and smell might work in part by manipulating quantum states. The retina in the eye is sensitive to small numbers of photons -- particles of light -- and the quantum state of the photon interacts with the quantum state of the retinal cell. But once those signals are translated into something the brain can process, the original quantum state seems to be irrelevant.
The overwhelming success of modern physics does not give physicists the ability to pronounce judgment on other sciences.
I’ll hedge my bets: Maybe there’s room for some small quantum effects in the brain, but I sincerely doubt those will be directly relevant for consciousness. That’s because almost anything involving individual quantum states requires isolation from environmental interference for the weirdness to show up. For example, most particles aren’t entangled in any meaningful way, because interactions with other particles change their quantum state. That process is known as decoherence. (If someone wants to propose a theory of the mind based on decoherence, I might listen, especially on days when I’m distracted.)
However, other people go much further. In his bestselling 1989 book The Emperor’s New Mind, mathematical physicist Roger Penrose proposed that the problems of interpreting quantum states imply that the conscious mind will need a new kind of physics to describe it. Penrose is no crackpot in his area of expertise (the mathematics of general relativity, which also happens to be my area), but his foray into the mind and consciousness is a cautionary tale.
Just because you’re a world expert in one branch of science doesn’t qualify you in any other discipline. As Zach Weinersmith’s painfully funny comic points out, this is a particularly bad habit among physicists.
(See the comic at the original page.)
Some of them think that the overwhelming success of modern physics gives them the ability to pronounce judgment on other sciences, from linguistics to paleontology. Celebrity physicist Michio Kaku is a particularly egregious example, getting evolution completely wrong (see this critique) and telling infamous crackpot Deepak Chopra that our actions can have effects in distant galaxies. Then there are the physicists -- including Freeman Dyson, one of the architects of the quantum theory describing interactions between light and matter -- who contradict climate scientists in their own area of expertise.
Physicists aren’t the only culprits, though. A new book by neuroscientist W. R. Klemm implies that the edges of physics could provide answers about human consciousness. Ironically, he writes, “I just hate it when physicists write about biology. They sometimes say uninformed and silly things. But I hate it just as much when I write about physics, for I too am liable to say uninformed and silly things -- as I may well do here.” Nearly everything that follows in the book excerpt is either wrong or misleading. I could write a point-by-point response, but suffice to say: The problems and incompleteness he cites about quantum physics are overblown and frankly incorrect.
I take it back: I will rant briefly about two of his points. First, Klemm writes, “But is mass really identical to energy? True, mass can be converted to energy, as atom bombs prove, and energy can even be turned into mass. Still, they are not the same things.” That’s an unnecessary obfuscation: Einstein’s equation E = mc² does connect mass and energy in a fundamental and entirely unmysterious way. Probably no other single equation has inspired as many popular explanations, so it’s safe to say we get it: Mass is a form of energy. To be precise, it’s the energy a particle has when it’s at rest. Sure, there are complications in particle physics collisions at high speeds, but the basic concept is really simple.
Second, dark energy -- which I have written about for Slate -- does not impart energy to galaxies or anything smaller. If it turns out to be “vacuum energy,” which looks probable, then the only way dark energy could have anything to do with human consciousness would be if our heads were empty.
The problem with Klemm’s assertions, as well as those of many others who misuse the word quantum, is that their speculation is based on a superficial understanding of one or both fields. Physics may or may not have anything informative to say about consciousness, but you won’t make any progress in that direction without knowing a lot about both quantum physics and how brains work. Skimping on either of those will lead to nonsense.
Matthew R. Francis is a physicist, science writer, public speaker, educator, and frequent wearer of jaunty hats. He blogs at Galileo’s Pendulum.
http://www.slate.com/articles/health_and_science/science/2014/05/quantum_consciousness_physics_and_neuroscience_do_not_explain_one_another.html
10 Mistaken Beliefs About Psychology - R. Pomeroy
10 of the Greatest Myths in Psychology
Ross Pomeroy, 04/06/14
Myth over Mind
Psychology is rife with misinformation and falsehoods. And sadly, the vast majority of them show no signs of vacating popular culture.
In 2009, Scott Lilienfeld, Steven Jay Lynn, John Ruscio, and Barry Beyerstein assembled a compendium of 50 Great Myths of Popular Psychology, then proceeded to dispel each and every one of them. Their book was a triumph of evidence and reason.
Using 50 Great Myths of Popular Psychology as a guide, we've created a list of 10 of the biggest psychological myths. Don't be ashamed if you believe one, or all, of these.
Subliminal Advertising Works
It's one of the great conspiracies of the television era: that advertisers and influencers are flashing subtle messages across our screens -- sometimes lasting as little as 1/3000th of a second -- and altering how we think and act, as well as what we buy.
Rest assured, however, these advertisements don't work. Your unconscious mind is safe. In a great many carefully controlled laboratory trials, subliminal messages did not affect subjects' consumer choices or voting preferences. When tested in the real world, subliminal messaging failed just as spectacularly. In 1958, the Canadian Broadcasting Corporation informed its viewers that they were going to test a subliminal advertisement during a Sunday night show. They then flashed the words "phone now" 352 times throughout the program. Records from telephone companies were examined, with no upsurge in phone calls whatsoever.
The dearth of evidence for subliminal advertising hasn't stopped influencers from trying it. In 2000, a Republican ad aimed at Vice President Al Gore briefly flashed the word "RATS."
There's an Autism Epidemic
Autism is a "disorder of neural development characterized by impaired social interaction and verbal and non-verbal communication, and by restricted, repetitive or stereotyped behavior."
Prior to the 1990s, the prevalence of autism in the United States was estimated at 1 in 2,500. In 2007, that rate was 1 in 150. In March, the CDC announced new, startling numbers: 1 in 68. What's going on?
The meteoric rise in diagnoses has prompted many to cry "epidemic!" Fearful, they look for a reason, and often latch onto vaccines.
But vaccines are not the cause. The most likely explanation is far less frightening.
Over the past decades, the diagnostic criteria for autism have been significantly loosened. Each of the last three major revisions to the Diagnostic and Statistical Manual of Mental Disorders (DSM) has made it much easier for psychiatrists to diagnose the disorder. When a 2005 study conducted in England tracked autism cases between 1992 and 1998 using identical diagnostic criteria, the rates didn't budge.
We Only Use 10% of Our Brain Power
Oh if only it were true... If we found a way to unlock and unleash the remaining 90%, we could figure out the solution to that pesky problem at work, or become a math genius, or develop telekinetic powers!
But it's not true. Metabolically speaking, the brain is an expensive tissue to maintain, hogging as much as 20% of our resting caloric expenditure, despite constituting a mere 2% of the average human's body weight.
"It’s implausible that evolution would have permitted the squandering of resources on a scale necessary to build and maintain such a massively underutilized organ," Emory University psychologist Scott Lilienfeld wrote.
The myth likely stems back to American psychologist William James, who once espoused the idea that the average person rarely achieves more than 10% of their intellectual potential. Over the years, self-help gurus and hucksters looking to make a buck morphed that notion into the idea that 90% of our brain is dormant and locked away. They have the key, of course, and you can buy it for a pittance!
"Shock" Therapy Is a Brutal Treatment
When you think of electroconvulsive therapy (ECT), what comes to mind? Do you picture a straightjacketed individual being bound to a table against his will, electrodes attached to his skull, and then convulsing brutally on a table as electricity courses through his body?
According to surveys, most people view ECT as a barbaric relic of psychiatry's medieval past. And while ECT may once have been a violent process, it hasn't been like that for over five decades. Yes, it is still in use today.
"Nowadays, patients who receive ECT... first receive a general anesthetic, a muscle relaxant, and occasionally a substance to prevent salivation," Lilienfeld described. "Then, a physician places electrodes on the patient's head... and delivers an electric shock. This shock induces a seizure lasting 45 to 60 seconds, although the anesthetic... and muscle relaxant inhibit the patient's movements..."
There's no scientific consensus on why ECT works, but the majority of controlled studies show that -- for severe depression -- it does. Indeed, a 1999 study found that an overwhelming 91% of people who'd received ECT viewed it positively.
Opposites Attract
The union between two electrical charges, one positive and one negative, is the quintessential love story in physics. Opposites attract!
But the same cannot be said for a flaming liberal and a rabid conservative. Or an exercise aficionado and a professional sloth. People are not electrical charges.
Though Hollywood loves to perpetuate the idea that we are romantically attracted to people who differ from us, in practice, this is not the case.
"Indeed, dozens of studies demonstrate that people with similar personality traits are more likely to be attracted to each other than people with dissimilar personality traits," Lilienfeld wrote. "The same rule applies to friendships."
Lie Detector Tests Are Accurate
Those who operate polygraph -- "Lie Detector" -- tests often boast that they are 99% accurate. The reality is that nobody, not even a machine, can accurately tell when somebody is lying.
Lie detector tests operate under the assumption that telltale physiological signs reveal when people aren't telling the truth. Thus, polygraphs measure indicators like skin conductance, blood pressure, and respiration. When these signs spike out of the test-taker's normal range in response to a question, the operator interprets that a lie has been told.
But such physiological reactions are not universal. Moreover, when one learns to control factors like perspiration and heart rate, one can easily pass a lie detector test.
Dreams Possess Symbolic Meaning
Do you ever dream about hair-cutting, tooth loss, or beheading? You're probably worried about castration, at least according to Sigmund Freud.
About 43% of Americans believe that dreams reflect unconscious desires. Over half agree that dreams can unveil hidden truths. Admittedly, dreaming mostly remains an enigma to science, but the act is almost certainly not a crystal ball of the unconscious mind.
Instead, the theory that has garnered the most scientific support goes a little something like this: Dreaming is the jumbled representation of our brain's actions to assort and cobble together information and experience, like a file-sorting system. Thus, as Lilienfeld says, dream interpretation would be "haphazard at best."
"Rather than relying on a dream dictionary to foretell the future or help you make life decisions, it would probably be wisest to weigh the pros and cons of differing courses of action carefully, and consult trusted friends and advisers."
Our Memory Is Like a Recorder
About 36% of Americans believe that our brains perfectly preserve past experiences in the form of memories. This is decidedly not the case.
"Today, there's broad consensus among psychologists that memory isn't reproductive -- it doesn't duplicate precisely what we've experienced -- but reconstructive. What we recall is often a blurry mixture of accurate recollections, along with what jells with our beliefs, needs, emotions, and hunches," Lilienfeld wrote.
Our memory is glaringly fallible, and this is problematic, particularly in the courtroom. Eyewitness testimony has led to the false convictions of a great many innocent people.
Mozart Will Make Your Baby a Genius
In 1993, a study published in Nature found that college students who listened to a mere ten seconds of a Mozart sonata were endowed with augmented spatial reasoning skills. The news media ran wild with it. Lost in translation was the fact that the effects were fleeting. But it was too late. The "Mozart Effect" was born.
Since then, millions of copies of Mozart CDs marketed to boost intelligence have been sold. The state of Georgia even passed a bill to allow every newborn to receive a free cassette or CD of Mozart's music.
More recent studies which attempted to replicate the original study have failed or found miniscule effects. They've also pointed to a much more likely explanation for the original findings: short-term arousal.
"Anything that heightens alertness is likely to increase performance on mentally demanding tasks, but it's unlikely to produce long-term effects on spatial ability or, for that matter, overall intelligence," Lilienfeld explained. "So listening to Mozart's music may not be needed to boost our performance; drinking a glass of lemonade or cup of coffee may do the trick."
Left-Brained and Right-Brained
Some people are left-brained and others are right-brained. Those that use their left hemisphere are more analytical and logical, while those that use their right hemisphere are more creative and artistic.
Except that's not how the brain works.
Yes, certain regions of the brain are specialized and tailored to fulfill certain tasks, but the brain doesn't handicap itself by predominantly using one side or the other -- both hemispheres are used just about equally.
The left-brain/right-brain myth was rampant for decades and perpetuated by New Age thinkers, but the rise of functional MRI has granted us a firsthand look at brain activity. According to Scott Lilienfeld, it's showing us just the opposite.
"The two hemispheres are much more similar than different in their functions."
http://www.realclearscience.com/lists/10_myths_psychology/
The 10 Big Blind Spots in Human Thinking -- RCS
10 Problems With How We Think
realclearscience.com, 02/14
Inherently Irrational
By nature, human beings are illogical and irrational. For most of our existence, survival meant thinking quickly, not methodically. Making a life-saving decision was more important than making a 100% accurate one, so the human brain developed an array of mental shortcuts.
Though not as necessary as they once were, these shortcuts -- called cognitive biases or heuristics -- are numerous and innate. Pervasive, they affect almost everything we do, from the choice of what to wear, to judgments of moral character, to how we vote in presidential elections. We can never totally escape them, but we can be more aware of them, and, just maybe, take efforts to minimize their influence.
Read on to learn about ten widespread faults with human thought.
Sunk Cost Fallacy (「放不開」錯誤)
Thousands of graduate students know this fallacy all too well. When we invest time, money, or effort into something, we don't like to see that investment go to waste, even if the task, object, or goal is no longer worth the cost. As Nobel Prize winning psychologist Daniel Kahneman explains, "We refuse to cut losses when doing so would admit failure, we are biased against actions that could lead to regret."
That's why people finish their overpriced restaurant meal even when they're stuffed to the brim, or continue to watch that horrible television show they don't even like anymore, or remain in a dysfunctional relationship, or soldier through grad school even when they decide that they hate their chosen major.
Conjunction Fallacy (「聯想」錯誤)
Sit back, relax, and read about Linda:
Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Now, which alternative is more probable?
1. Linda is a bank teller, or
2. Linda is a bank teller and is active in the feminist movement.
If you selected the latter, you've just blatantly defied logic. But it's okay, about 85 to 90 percent of people make the same mistake. The mental sin you've committed is known as a conjunction fallacy. Think about it: it can't possibly be more likely for Linda to be a bank teller and a feminist compared to just a bank teller. If you answered that she was a bank teller, she could still be a feminist, or a whole heap of other possibilities.
A great way to realize the error in thought is to simply look at a Venn diagram. Label one circle as "bank teller" and the other as "feminist." Notice that the area where the circles overlap is always going to be smaller! (See the Venn diagram at the original page.)
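The Venn-diagram point is just the conjunction rule of probability, P(A and B) ≤ P(A). A minimal numeric check, with arbitrary placeholder probabilities, follows.

```python
# Conjunction rule: P(teller AND feminist) = P(teller) * P(feminist | teller) <= P(teller),
# because a conditional probability is at most 1. The numbers below are arbitrary.
p_teller = 0.05
p_feminist_given_teller = 0.30

p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller
print(p_teller, p_both)  # 0.05 vs 0.015: the conjunction is always the smaller event
```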
Anchoring (「依靠」效應或「稻草」效應)
Renowned psychologists Amos Tversky and Daniel Kahneman once rigged a wheel of fortune, just like you'd see on the game show. Though labeled with values from 0 to 100, it would only stop at 10 or 65. As an experiment, they had unknowing participants spin the wheel and then answer a two-part question:
Is the percentage of African nations among UN members larger or smaller than the number you just wrote? What is your best guess of the percentage of African nations in the UN?
Kahneman described what happened next in his book Thinking, Fast and Slow:
The spin of a wheel of fortune... cannot possibly yield any useful information about anything, and the participants... should have simply ignored it. But they did not ignore it.
The participants who saw the number 10 on the wheel estimated the percentage of African nations in the UN at 25%, while those who saw 65 gave a much higher estimate, 45%. Participants' answers were "anchored" by the numbers they saw, and they didn't even realize it! Any piece of information, however inconsequential, can affect subsequent assessments or decisions. That's why it's in a car dealer's best interest to keep list prices high, because ultimately, they'll earn more money, and when you negotiate down, you'll still think you're getting a good deal!
Availability Heuristic (「可用訊息」原則)
When confronted with a decision, humans regularly make judgments based on recent events or information that can be easily recalled. This is known as the availability heuristic.
Says Kahneman, "The availability heuristic... substitutes one question for another: you wish to estimate... the frequency of an event, but you report the impression of ease with which instances come to mind."
Cable news provides plenty of fodder for this mental shortcut. For example, viewers of Entertainment Tonight probably think that celebrities divorce each other once every minute. The actual numbers are more complicated, and far less exorbitant.
It's important to be cognizant of the availability heuristic because it can lead to poor decisions. In the wake of the tragic events of 9/11, with horrific images of burning buildings and broken rubble fresh in their minds, politicians quickly voted to implement invasive policies to make us safer, such as domestic surveillance and more rigorous airport security. We've been dealing with, and griping about, the results of those actions ever since. Were they truly justified? Did we fall victim to the availability heuristic?
Optimism Bias (「樂觀」偏執)
"It won't happen to me" isn't merely a cultural trope. Individuals are naturally biased to thinking that they are less at risk of something bad happening to them compared to others. The effect, termed optimism bias, has been demonstrated in studies across a wide range of groups. Smokers believe they are less likely to develop lung cancer than other smokers, traders believe they are less likely to lose money than their peers, and everyday people believe they are less at risk of being victimized in a crime.
Optimism bias particularly factors into matters of health (PDF), prompting individuals to neglect salubrious behaviors like exercise, regular visits to the doctor, and condom use.
Gambler's Fallacy (「賭徒」錯誤)
On August 13, 1918, during a game of roulette at the Monte Carlo Casino, the ball fell on black 26 times in a row. In the wake of the streak, gamblers lost millions of francs betting against black. They assumed, quite fallaciously, that the streak was caused by an imbalance of randomness in the wheel, and that Nature would correct for the mistake.
No mistake was made, of course. Past random events in no way affect future ones, yet people regularly intuit (PDF) that they do.
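A short simulation of the Monte Carlo anecdote makes the independence point concrete: even after conditioning on a long run of black, the next spin's chance of black is unchanged. The wheel is modeled here as a European wheel (18 black, 18 red, one green zero); streak length and spin counts are arbitrary choices for illustration.

```python
import random

# European wheel: 18 black, 18 red, 1 green zero.
POCKETS = ["black"] * 18 + ["red"] * 18 + ["green"]

def p_black_after_black_streak(streak_len: int, spins: int = 1_000_000) -> float:
    """Estimate P(next spin is black) given the previous `streak_len` spins were all black."""
    run = hits = trials = 0
    for _ in range(spins):
        outcome = random.choice(POCKETS)
        if run >= streak_len:        # the preceding spins formed the required black streak
            trials += 1
            hits += outcome == "black"
        run = run + 1 if outcome == "black" else 0
    return hits / trials if trials else float("nan")

print(p_black_after_black_streak(0))   # about 18/37, i.e. 0.486
print(p_black_after_black_streak(5))   # still about 0.486: the streak changes nothing
```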
Herd Mentality (「群聚」心理)
We humans are social creatures by nature. The innate desire to be a "part of the group" often outweighs any considerations of well being and leads to flawed decision-making. For a great example, look no further than the stock market. When indexes start to tip, panicked investors frantically begin selling, sending stocks even lower, which, in turn, further exacerbates the selling. Herd mentality also spawns cultural fads. In the back of their minds, pretty much everybody knew that pet rocks were a waste of money, but lots of people still bought them anyway.
Halo Effect (「以偏概全」效應)
The halo effect is a cognitive bias in which we judge a person's character based upon our rapid, and often oversimplified, impressions of him or her. The workplace is a haven -- more an asylum -- for this sort of faulty thinking.
"The halo effect is probably the most common bias in performance appraisal," researchers wrote in the journal Applied Social Psychology in 2012. The article goes on:
Think about what happens when a supervisor evaluates the performance of a subordinate. The supervisor may give prominence to a single characteristic of the employee, such as enthusiasm, and allow the entire evaluation to be colored by how he or she judges the employee on that one characteristic. Even though the employee may lack the requisite knowledge or ability to perform the job successfully, if the employee's work shows enthusiasm, the supervisor may very well give him or her a higher performance rating than is justified by knowledge or ability.
Confirmation Bias (「成見」偏執)
Confirmation bias is the tendency of people to favor information that confirms their beliefs. Even those who avow complete and total open-mindedness are not immune. This bias manifests in many ways. When sifting through evidence, individuals tend to value anything that agrees with them -- no matter how inconsequential -- and instantly discount that which doesn't. They also interpret ambiguous information as supporting their beliefs.
Hearing or reading information that backs our beliefs feels good, and so we often seek it out. A great many liberal-minded individuals treat Rachel Maddow or Bill Maher's words as gospel. At the same time, tons of conservatives flock to Fox News and absorb almost everything said without a hint of skepticism.
One place where it's absolutely vital to be aware of confirmation bias is in criminal investigation. All too often, when investigators have a suspect, they selectively search for, or erroneously interpret, information that "proves" the person's guilt.
Though you may not realize it, confirmation bias also pervades your life. Ever searched Google for an answer to a controversial question? When the results come in after a query, don't you click first on the result whose title or summary backs your hypothesis?
Discounting Delayed Rewards (「即刻報酬」效應)
If offered $50 today or $100 in a year, most people take the money and run, even though it's technically against their best interests. However, if offered $50 in five years or $100 in six years, almost everybody chooses the $100! When confronted with low-hanging fruit in the Tree of Life, most humans cannot resist plucking it.
This is best summed up by the Ainslie-Rachlin Law, which states, "Our decisions... are guided by the perceived values at the moment of the decision - not by the potential final value."
http://www.realclearscience.com/lists/10_problems_with_how_humans_think/
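The Ainslie-Rachlin account quoted above is commonly modeled with hyperbolic discounting, V = A / (1 + kD), where the perceived value V of an amount A falls off with delay D. The sketch below is my illustration, with an arbitrary discount rate k; it reproduces the preference reversal described in the list: $50 now beats $100 in a year, yet $100 in six years beats $50 in five.

```python
def hyperbolic_value(amount: float, delay_years: float, k: float = 1.5) -> float:
    """Perceived present value under hyperbolic discounting: V = A / (1 + k * D).
    k = 1.5 per year is an arbitrary illustrative discount rate."""
    return amount / (1 + k * delay_years)

# Near choice: the immediate $50 wins.
print(hyperbolic_value(50, 0), hyperbolic_value(100, 1))   # 50.0 vs 40.0
# Far choice: the same one-year gap now favors the larger, later $100.
print(hyperbolic_value(50, 5), hyperbolic_value(100, 6))   # about 5.9 vs 10.0
```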
The "Black Hole" Nature of Pseudoscience - M. Pigliucci
The Pseudoscience Black Hole
Massimo Pigliucci, 12/23/13
As I’ve mentioned on other occasions, my most recent effort in philosophy of science actually concerns what my collaborator Maarten Boudry and I call the philosophy of pseudoscience. During a recent discussion we had with some of the contributors to our book at the recent congress of the European Philosophy of Science Association, Maarten came up with the idea of the pseudoscience black hole. Let me explain.
The idea is that it is relatively easy to find historical (and even recent) examples of notions or fields that began within the scope of acceptable scientific practice, but then moved (or, rather, precipitated) into the realm of pseudoscience. The classic case, of course, is alchemy. Contra popular perception, alchemists did produce a significant amount of empirical results about the behavior of different combinations of chemicals, even though the basic theory of elements underlying the whole enterprise was in fact hopelessly flawed. Also, let's not forget that first rate scientists - foremost among them Newton - spent a lot of time carrying out alchemical research, and that they thought of it in the same way in which they were thinking of what later turned out to be good science.
Another example, this one much more recent, is provided by the cold fusion story. The initial 1989 report by Stanley Pons and Martin Fleischmann was received with natural caution by the scientific community, given the potentially revolutionary import (both theoretical and practical) of the alleged discovery. But it was treated as science, done by credentialed scientists working within established institutions. The notion was quickly abandoned when various groups couldn't replicate Pons and Fleischmann's results, and moreover given that theoreticians just couldn't make sense of how cold fusion was possible to begin with. The story would have ended there, and represented a good example of the self-correcting mechanism of science, if a small but persistent group of aficionados hadn't pursued the matter by organizing alternative meetings, publishing alleged results, and eventually even beginning to claim that there was a conspiracy by the scientific establishment to suppress the whole affair. In other words, cold fusion had - surprisingly rapidly - moved not only into the territory of discarded science, but of downright pseudoscience.
Examples of this type can easily be multiplied by even a cursory survey of the history of science. Eugenics and phrenology immediately come to mind, as well as - only slightly more controversially - psychoanalysis. At this point I would also firmly throw parapsychology into the heap (research in parapsychology has been conducted by credentialed scientists, especially during the early part of the 20th century, and for a while it looked like it might have gained enough traction to move to mainstream).
But, asked Maarten, do we have any convincing cases of the reverse happening? That is, are there historical cases of a discipline or notion that began as clearly pseudoscientific but then managed to clean up its act and emerge as a respectable science? And if not, why?
Before going any further, we may need to get a bit clearer on what we mean by pseudoscience. Of course, Maarten, our contributors and I devoted an entire book to exploring that and related questions, so the matter is intricate. Nonetheless, three characteristics of pseudoscience clearly emerged from our discussions:
1. Pseudoscience is not a fixed notion. A field can slide into (and maybe out of?) pseudoscientific status depending on the temporal evolution of its epistemic status (and, to a certain extent, of the sociology of the situation).
2. Pseudoscientific claims are grossly deficient in terms of epistemic warrant. This, however, is not sufficient to identify pseudoscience per se, as some claims made within established science can also, at one time or another, be epistemically grossly deficient.
3. What most characterizes a pseudoscience is the concerted efforts of its practitioners to mimic the trappings of science: They want to be seen as doing science, so they organize conferences, publish specialized journals, and talk about data and statistical analyses. All of it, of course, while lacking the necessary epistemic warrant to actually be a science.
Given this three-point concept of pseudoscience, then, is Maarten right that pseudoscientific status, once reached, is a "black hole," a sink from which no notion or field ever emerges again?
The obvious counterexample would seem to be herbal medicine, which, to a limited extent, is becoming acceptable as a mainstream practice. Indeed, in some cases our modern technology has uncontroversially and successfully purified natural remedies and greatly improved their efficacy. Just think of aspirin, whose active ingredient is derived from the bark and leaves of willow trees, and whose effectiveness was already well known to Hippocrates 23 centuries ago.
Maybe, just maybe, we are in the process of witnessing a similar emergence of acupuncture from pseudoscience to medical acceptability. I say maybe because it is not at all clear, as yet, whether acupuncture has additional effects above and beyond the placebo. But if it does, then it should certainly be used in some clinical practice, mostly as a complementary approach to pain management (it doesn't seem to have measurable effects on much else).
But these two counterexamples struck both Maarten and me as rather unconvincing. They are better interpreted as specific practices, arrived at by trial and error, which happen to work well enough to be useful in modern settings. The theory behind them, such as it is, is not just wrong, but could never have aspired to be scientific to begin with.
Acupuncture, for instance, is based on the metaphysical notion of Qi energy, flowing through 12 "regular" and 8 "extraordinary" so-called "meridians." Indeed, there are allegedly five types of Qi energy, corresponding to five cardinal functions of the human body: actuation, warming, defense, containment and transformation. Needless to say, all of this is entirely made up, and makes absolutely no contact with either empirical science or established theoretical notions in, say, physics or biology.
The situation is even more hopeless in the case of "herbalism," which originates from a hodgepodge of approaches, including magic, shamanism, and the supernaturalism of Chinese "medicine." Indeed, one of Hippocrates' great contributions was precisely to reject mysticism and supernaturalism as bases for medicine, which is why he is often referred to as the father of "Western" medicine (i.e., medicine).
Based just on the examples discussed above - concerning once acceptable scientific notions that slipped into pseudoscience and pseudoscientific notions that never emerged into science - it would seem that there is a potential explanation for Maarten's black hole. Cold fusion, phrenology, and to some (perhaps more debatable) extent alchemy were not just empirically based (so is acupuncture, after all!), but built on a theoretical foundation that invoked natural laws and explicitly attempted to link up with established science. Those instances of pseudoscience whose practice, but not theory, may have made it into the mainstream, instead, invoked supernatural or mystical notions, and most definitely did not make any attempt to connect with the rest of the scientific web of knowledge.
Please note that I am certainly not saying that all pseudoscience is based on supernaturalism. Parapsychology and ufology, in most of their incarnations at least, certainly aren't. What I am saying is that either a notion begins within the realm of possibly acceptable science - from which it then either evolves toward full-fledged science or slides into pseudoscience - or it starts out as pseudoscience and remains there. The few apparent exceptions to the latter scenario appear to be cases of practices based on mystical or similar notions. In those cases, aspects of the practice may become incorporated into (and explained by) modern science, but the "theoretical" (really, metaphysical) baggage is irrevocably shed.
*****
Can anyone think of examples that counter the idea of the pseudoscience black hole? Or of alternative explanations for its existence?
Originally on Rationally Speaking
http://www.science20.com/rationally_speaking/pseudoscience_black_hole-126943
Scientific Research That Has Lost Its Rigour - The Economist
How science goes wrong
Scientific research has changed the world. Now it needs to change itself
The Economist, 10/19/13
A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better.
But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying -- to the detriment of the whole of science, and of humanity.
Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
What a load of rubbish
Even when flawed research does not put people’s lives at risk -- and much of it is too far from the market to do so -- it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.
One reason is the competitiveness of science. In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control. The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012 -- more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.
Careerism also encourages exaggeration and the cherry-picking of results. In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts. The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of the statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.
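The point about shortening odds can be made concrete with a back-of-the-envelope calculation (my own sketch, not from the article): if each of n independent teams tests an effect that does not actually exist, each with the conventional 5% false-positive rate, the chance that at least one of them reports a "discovery" is 1 - 0.95^n.

# Sketch: probability that at least one of n independent teams obtains a
# false positive at the conventional 5% significance level.
alpha = 0.05
for n in (1, 5, 10, 20, 50):
    p_any = 1 - (1 - alpha) ** n
    print(f"{n:2d} teams -> {p_any:.0%} chance of a spurious 'discovery'")

With 20 teams the chance is already about 64%, and with 50 it exceeds 90% -- before any careerist cherry-picking even enters the picture.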
Conversely, failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990. Yet knowing what is false is as important to science as knowing what is true. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.
The hallowed process of peer review is not all it is cracked up to be, either. When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.
If it’s broke, fix it
All this makes a shaky foundation for an enterprise dedicated to discovering the truth about the world. What might be done to shore it up? One priority should be for all disciplines to follow the example of those that have done most to tighten standards. A start would be getting to grips with statistics, especially in the growing number of fields that sift through untold oodles of data looking for patterns. Geneticists have done this, and turned an early torrent of specious results from genome sequencing into a trickle of truly significant ones.
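One concrete way a field can "get to grips with statistics" is to correct its significance threshold for the number of hypotheses tested. The sketch below shows a plain Bonferroni correction; the figure of a million tests is an illustrative assumption, though it is roughly how the genome-wide significance threshold of 5 x 10^-8 used in genetics arises.

# Sketch of a Bonferroni correction: to keep the family-wise error rate at
# alpha across m independent tests, each test must clear alpha / m instead.
def bonferroni_threshold(alpha, m):
    return alpha / m

print(bonferroni_threshold(0.05, 1))          # 0.05   -- one test, the usual cut-off
print(bonferroni_threshold(0.05, 1_000_000))  # 5e-08  -- about the genome-wide threshold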
Ideally, research protocols should be registered in advance and monitored in virtual notebooks. This would curb the temptation to fiddle with the experiment’s design midstream so as to make the results look more substantial than they are. (It is already meant to happen in clinical trials of drugs, but compliance is patchy.) Where possible, trial data also should be open for other researchers to inspect and test.
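A minimal sketch of what registering a protocol in advance can mean in practice: the analysis plan is written down and fingerprinted before any data are collected, so later deviations are at least visible. The field names and the hashing step below are my own illustration, not a description of any actual registry.

import hashlib
import json

# Hypothetical pre-registration record, frozen before data collection.
protocol = {
    "hypothesis": "Treatment X reduces symptom score versus placebo",
    "primary_outcome": "symptom score at 12 weeks",
    "sample_size": 200,
    "analysis": "two-sided t-test, alpha = 0.05, no interim looks",
}

# Fingerprint the frozen plan; any later change to the plan changes the hash.
digest = hashlib.sha256(json.dumps(protocol, sort_keys=True).encode()).hexdigest()
print("registered protocol fingerprint:", digest[:16], "...")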
The most enlightened journals are already becoming less averse to humdrum papers. Some government funding agencies, including America’s National Institutes of Health, which dish out $30 billion on research each year, are working out how best to encourage replication. And growing numbers of scientists, especially young ones, understand statistics. But these trends need to go much further. Journals should allocate space for “uninteresting” work, and grant-givers should set aside money to pay for it. Peer review should be tightened -- or perhaps dispensed with altogether, in favour of post-publication evaluation in the form of appended comments. That system has worked well in recent years in physics and mathematics. Lastly, policymakers should ensure that institutions using public money also respect the rules.
Science still commands enormous -- if sometimes bemused -- respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.
http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong