Discussions on the Scientific Method

胡卜凱

I have reposted and written a number of discussions about the scientific method in this forum, but they are scattered across different related threads. I am now opening a dedicated thread for them; when I have time later, I will put together an index.

 



Does the current scientific method need revision? - S. Hossenfelder


胡卜凱

Does the Scientific Method need Revision?          

 

Does the prevalence of untestable theories in cosmology and quantum gravity require us to change what we mean by a scientific theory?

 

Sabine Hossenfelder, 12/17/14

 

Theoretical physics has problems. That’s nothing new  --  if it wasn’t so, then we’d have nothing left to do. But especially in high energy physics and quantum gravity, progress has basically stalled since the development of the standard model in the mid 70s. Yes, we’ve discovered a new particle every now and then. Yes, we’ve collected loads of data. But the fundamental constituents of our theories, quantum field theory and Riemannian geometry, haven’t changed since that time.

 

Everybody has their own favorite explanation for why this is so and what can be done about it. One major factor is certainly that the low hanging fruits have been picked, and progress slows as we have to climb farther up the tree. Today, we have to invest billions of dollars into experiments that are testing new ranges of parameter space, build colliders, shoot telescopes into orbit, have superclusters flip their flops. The days in which history was made by watching your bathtub spill over are gone.

 

Another factor is arguably that the questions are getting technically harder while our brains haven’t changed all that much. Yes, now we have computers to help us, but these are, at least for now, chewing and digesting the food we feed them, not cooking their own.

 

Taken together, this means that return on investment must slow down as we learn more about nature. Not so surprising.

 

Still, it is a frustrating situation, and it makes you wonder whether there are other reasons for the lack of progress, reasons that we can do something about. Especially in a time when we really need a game changer, some breakthrough technology, clean energy, that warp drive, a transporter! Anything to get us off the road to Facebook, sorry, I meant self-destruction.

 

It is our lack of understanding of space, time, matter, and their quantum behavior that prevents us from better using what nature has given us. And it is this frustration that has led people inside and outside the community to argue that we’re doing something wrong, that the social dynamics in the field are troubled, that we’ve lost our path, that we are not making progress because we keep working on unscientific theories.

 

Is that so?

 

It’s not like we haven’t tried to make headway on finding the quantum nature of space and time. The arxiv categories hep-th and gr-qc are full every day with supposedly new ideas. But so far, not a single one of the existing approaches towards quantum gravity has any evidence speaking for it.

 

To me the reason this has happened is obvious: We haven’t paid enough attention to experimentally testing quantum gravity. One cannot develop a scientific theory without experimental input. It’s never happened before and it will never happen. Without data, a theory isn’t science. Without experimental test, quantum gravity isn’t physics.

 

If you think that more attention is now being paid to quantum gravity phenomenology, you are mistaken. Yes, I’ve heard it too, the lip service paid by people who want to keep on dwelling on their fantasies. But the reality is there is no funding for quantum gravity phenomenology and there are no jobs either. On the rare occasions that I have seen quantum gravity phenomenology mentioned on a job posting, the position was filled by somebody working on the theory, I am tempted to say, working on mathematics rather than physics.

 

It is beyond me that funding agencies invest money into developing a theory of quantum gravity, but not into its experimental test. Yes, experimental tests of quantum gravity are farfetched. But if you think that you can’t test it, you shouldn’t put money into the theory either. And yes, that’s a community problem because funding agencies rely on experts’ opinion. And so the circle closes.

 

To make matters worse, philosopher Richard Dawid has recently argued that it is possible to assess the promise of a theory without any experimental test whatsoever, and that physicists should thus revise the scientific method by taking into account what he calls “non-empirical facts”. By this he seems to mean what we often loosely refer to as internal consistency: theoretical physics is math-heavy and thus has a very stringent logic. This allows one to deduce a lot of, often surprising, consequences from very few assumptions. Clearly, these must be taken into account when assessing the usefulness or range-of-validity of a theory, and they are being taken into account. But the consequences are irrelevant to the use of the theory unless some aspects of them are observable, because what makes up the use of a scientific theory is its power to describe nature.

 

Dawid may be confused on this matter because physicists do, in practice, use empirical facts that we do not explicitly collect data on. For example, we discard theories that have an unstable vacuum, singularities, or complex-valued observables. Not because this is an internal inconsistency  --  it is not. You can deal with this mathematically just fine. We discard these because we have never observed any of that. We discard them because we don’t think they’ll describe what we see. This is not a non-empirical assessment.

 

A huge problem with the lack of empirical fact is that theories remain axiomatically underconstrained. In practice, physicists don’t always start with a set of axioms, but in principle this could be done. If you do not have any axioms you have no theory, so you need to select some. The whole point of physics is to select axioms to construct a theory that describes observation. This already tells you that the idea of a theory for everything will inevitably lead to what has now been called the “multiverse”. It is just a consequence of stripping away axioms until the theory becomes ambiguous.

 

Somewhere along the line many physicists have come to believe that it must be possible to formulate a theory without observational input, based on pure logic and some sense of aesthetics. They must believe their brains have a mystical connection to the universe and pure power of thought will tell them the laws of nature. But the only logical requirement to choose axioms for a theory is that the axioms not be in conflict with each other. You can thus never arrive at a theory that describes our universe without taking into account observations, period. The attempt to reduce axioms too much just leads to a whole “multiverse” of predictions, most of which don’t describe anything we will ever see.

 

(The only other option is to just use all of mathematics, as Tegmark argues. You might like or not like that; at least it’s logically coherent. But that’s a different story and shall be told another time.)

 

Now if you have a theory that contains more than one universe, you can still try to find out how likely it is that we find ourselves in a universe just like ours. The multiverse-defenders therefore also argue for a modification of the scientific method, one that takes into account probabilistic predictions. But we have nothing to gain from that. Calculating a probability in the multiverse is just another way of adding an axiom, in this case for the probability distribution. Nothing wrong with this, but you don’t have to change the scientific method to accommodate it.

 

In a Nature comment out today, George Ellis and Joe Silk argue that the trend of physicists to pursue untestable theories is worrisome.

 

I agree with this, though I would have said the worrisome part is that physicists do not care enough about the testability  --  and apparently don’t need to care because they are getting published and paid regardless.

 

See, in practice the origin of the problem is senior researchers not teaching their students that physics is all about describing nature. Instead, the students are taught by example that you can publish and live off outright bizarre speculations as long as you wrap them in enough math. I cringe every time a string theorist starts talking about beauty and elegance. Whatever made them think that the human sense for beauty has any relevance for the fundamental laws of nature?

 

The scientific method is often described as a cycle of formulating and testing hypotheses, but I find this misleading. There isn’t any one scientific method. The only thing that matters is that you honestly assess the use of a theory to describe nature. If it’s useful, keep it. If not, try something else. This method doesn’t have to be changed, it has to be more consistently applied. You can’t assess the use of a scientific theory without comparing it to observation.

 

A theory might have other uses than describing nature. It might be pretty, artistic even. It might be thought-provoking. Yes, it might be beautiful and elegant. It might be too good to be true, it might be forever promising. If that’s what you are looking for that’s all fine by me. I am not arguing that these theories should not be pursued. Call them mathematics, art, or philosophy, but if they don’t describe nature don’t call them science.

 

Please see the original page for the related images.

 

https://medium.com/starts-with-a-bang/does-the-scientific-method-need-revision-d7514e2598f3



The mindset of those who are skeptical of scientific findings - A. B. Berezow


胡卜凱

 

Is Anything Certain in Science?       

 

Alex B. Berezow, 09/08/14

 

Last week, I was at a coffee shop working when a lady approached me and invited me to attend a science discussion group. The topic was the "limits of science." Intrigued, I put away my laptop and joined the group, which consisted mainly of elderly people who were thoughtful, well-spoken, and seemingly intelligent. I had no idea what to expect in terms of the tone of the conversation, so I listened eagerly as the discussion leader (who has a master's degree in geology) started the meeting.

 

"Science is subjective, though we like to think of it as objective," he began. "When I speak of 'facts,' I put them in quotation marks." He elaborated that things we once thought to be true were later overturned by further study.

 

Right away, I knew I was going to be in for a ride. While the geologist didn't clarify exactly what he meant, we can deduce one of two things: Either

 

(1) he does not believe facts are real or

(2) he believes facts are not accessible to scientific investigation.

 

Both of these beliefs are problematic from a scientific viewpoint. The first implies that there is no such thing as a fact, and hence, no such thing as truth. My favorite philosophy professor, former mentor, and (I'm honored to say) friend, Robert Hahn of Southern Illinois University, once quipped, "If the ultimate truth about the universe is that there is no truth, what sort of truth is that?" I would add that if there is no such thing as truth, then science is merely chasing after the wind. Science would be pointless. As fictitious Tottenham Hotspur coach Ted Lasso would say, "Why do you even do this?"

 

The second belief poses a much bigger challenge to science because there is no convincing response to it. Philosopher Immanuel Kant wrote of the noumenon (actual reality) and the phenomenon (our experience of reality). Because we experience reality through our imperfect senses, we do not have direct access to it. For instance, we perceive plants as green, but that is simply the result of our eyes and brains processing photons and interpreting them as the color green. How do we know that perception is reliable? Isn't it possible that plants are actually some other color? Given that we are limited by our sensory capabilities, we can never know the answer to that question. Our experience of the greenness of a plant (phenomenon) is separate from the underlying reality of a plant's color (noumenon).

 

Humans in general, and scientists specifically, ignore this philosophical challenge. We assume that our perception of reality matches actual reality. Do we have any other option? How could we live daily life or accept the findings of scientific research if we believed otherwise?

 

The point of that lengthy aside is that the geologist's comment was at odds with a practical scientific worldview. But, things got even weirder after that.

 

When our conversation turned to the reliability of the scientific method, I commented,

 

"Scientific laws are generalized in such a way that if you perform an experiment like a chemical reaction on Earth or on Mars, you should get the same result."

 

One of the ladies asked, "But how do we know? We've never been to Mars."

 

I answered, "We have a basic understanding of how chemical reactions work. To our knowledge, they aren't affected by gravity.* So, we should get the same reaction on Mars."

 

"In theory."

 

Well, yes, in theory. But this sort of extreme skepticism is difficult to address. Chemistry is a mature science whose basic principles are well understood. Until we have sufficient reason to believe otherwise, we should expect chemical reactions to be identical whether they are performed on Earth or on Mars.

 

Strangely, a bit later on, the same skeptical lady asked me, "How do you explain telepathy?" She added that there have been times when, as she was speaking to another person, she knew what the other person was going to say before they said it.

 

"Scientists don't believe telepathy is real. That's how I explain telepathy," I responded.

 

"Some scientists do believe in it," retorted the geologist.

 

True. But, some scientists believe that HIV doesn't cause AIDS. That doesn't mean we should take them seriously. I decided to elaborate: "Think of all the times that you thought of words, but nobody said them. Or all the times you thought of somebody, but they didn't call. You forget all of those, but you remember the few times where a coincidence occurred. That's called confirmation bias."

 

Unsurprisingly, I didn't win her over. The conversation then took one final turn.

 

The skeptical lady believed the future would be run entirely by robots and machines. This is referred to as the "singularity" and has been popularized by Ray Kurzweil. It is also probably bunk. Not only are we unable to model a worm's brain accurately, but the scientific knowledge and sheer computing power necessary to properly replicate a human brain -- with its 86 billion neurons and some 100 trillion synapses -- are massive. Besides, there is no compelling reason to believe that computing power will grow exponentially forever. Eventually, some mundane physical factor will limit our technological progress. If (and that's a big if) the "singularity" is even possible, it is likely centuries away.
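
(A back-of-the-envelope sketch, added for illustration and not part of Berezow's article; the bytes-per-synapse figure below is an assumption chosen only to convey scale.)

```python
# Illustrative arithmetic only (not from the article): storage needed just
# to hold one number per synapse, using the figures quoted above.
# The 4 bytes per synapse is an assumed placeholder, not a neuroscience fact.
SYNAPSES = 100e12        # ~100 trillion synapses
BYTES_PER_SYNAPSE = 4    # assumed: one 32-bit weight per connection

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"{total_bytes / 1e12:.0f} TB for bare connection weights alone")
# ~400 TB before modelling any dynamics, plasticity, or brain chemistry.
```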

 

Our evening ended there. Over the next 24 hours, I pondered what could make otherwise intelligent people embrace pseudoscience and science fiction. Moreover, what could make a person doubtful of chemistry, but accepting of telepathy?

 

I'm still not sure, but I have a clue. Conspiracy theorists are known to believe contradictory ideas. For instance, as Live Science reported, "people who believed [Osama] bin Laden was already dead before the raid were more likely to believe he is still alive." Similarly, the lady who believed that science wasn't advanced enough to fully understand chemistry yet also somehow so advanced that it could build Earth-conquering robots may be engaging in conspiracy-like thinking. She had no awareness that her skepticism of chemistry and credulity toward telepathy were, in many ways, fundamentally incompatible.

 

Extreme skepticism and extreme credulity are anathema to the scientific mindset. Successful scientists accept the reliability of the scientific method but question extraordinary claims that are not founded upon extraordinary evidence. That is healthy skepticism, and it was curiously absent from the science discussion group.

 

*Note: The kinetics of chemical reactions could possibly vary under different gravitational conditions. See an interesting discussion here.

 

http://www.realclearscience.com/blog/2014/09/is_anything_certain_in_science.html

 



Scholars need the magnanimity to admit their mistakes - D. W. Drezner


胡卜凱

 

The Uses of Being Wrong

 

Daniel W. Drezner, 07/07/14

 

My new book has an odd intellectual provenance -- it starts with me being wrong. Back in the fall of 2008, I was convinced that the open global economic order, centered on the unfettered cross-border exchange of goods, services, and ideas, was about to collapse as quickly as Lehman Brothers.

 

A half-decade later, the closer I looked at the performance of the system of global economic governance, the clearer it became that the meltdown I had expected had not come to pass. Though the advanced industrialized economies suffered prolonged economic slowdowns, at the global level there was no great surge in trade protectionism, no immediate clampdown on capital flows, and, most surprisingly, no real rejection of neoliberal economic principles. Given what has normally transpired after severe economic shocks, this outcome was damn near miraculous.

 

Nevertheless, most observers have remained deeply pessimistic about the functioning of the global political economy. Indeed, scholarly books with titles like No One’s World: The West, The Rising Rest, and the Coming Global Turn and The End of American World Order have come to a conclusion the opposite of mine. Now I’m trying to understand how I got the crisis so wrong back in 2008, and why so many scholars continue to be wrong now.

 

Confessions of wrongness in academic research should be unsurprising. (To be clear, being wrong in a prediction is different from making an error. Error, even if committed unknowingly, suggests sloppiness. That carries a more serious stigma than making a prediction that fails to come true.) Anyone who has a passing familiarity with the social sciences is aware that, by and large, we do not get an awful lot of things right. Unlike that of most physical and natural scientists, the ability of social scientists to conduct experiments or rely on high-quality data is often limited. In my field, international relations, even the most robust econometric analyses often explain a pathetically small amount of the data’s statistical variance. Indeed, from my first exposure to the philosopher of mathematics Imre Lakatos, I was taught that the goal of social science is falsification. By proving an existing theory wrong, we refine our understanding of what our models can and cannot explain.

 

And yet, the falsification enterprise is generally devoted to proving why other scholars are wrong. It’s rare for academics to publicly disavow their own theories and hypotheses. Indeed, a common lament in the social sciences is that negative findings -- i.e., empirical tests that fail to support an author’s initial hypothesis -- are never published.

 

Even in the realm of theory, there are only a few cases of scholars’ acknowledging that the paradigms they’ve constructed do not hold. In 1958, Ernst Haas, a political scientist at the University of California at Berkeley, developed a theory of political integration, positing that as countries cooperated on noncontroversial issues, like postal regulations, that spirit of cooperation would spill over into contentious areas, like migration. Haas used this theory -- he called it "neofunctionalism" -- to explain European integration a half-century ago. By the 1970s, however, Europe’s march toward integration seemed to be going into reverse, and Haas acknowledged that his theory had become "obsolete." This did not stop later generations of scholars, however, from resurrecting his idea once European integration was moving forward again.

 

Haas is very much the exception and not the rule. I’ve read a fair amount of international-relations theory over the years, from predictions about missing the great-power peace of the Cold War to the end of history to the rise of a European superpower to the causes of suicide terrorism. Most of these sweeping hypotheses have either failed to come true or failed to hold up over time. This has not prevented their progenitors from continuing to advocate them. Some of them echo the biographer who, without a trace of irony, proclaimed that "proof of Trotsky’s farsightedness is that none of his predictions have come true yet."

 

The persistence of so-called "zombie ideas" is something of a problem in the social sciences. Even if a theory or argument has been discredited by others in the moment, a stout defense can ensure a long intellectual life. When Samuel P. Huntington published his "clash of civilizations" argument, in the 1990s, the overwhelming scholarly consensus was that he was wrong. This did not stop the "clash" theory from permeating policy circles, particularly after 9/11.

 

Why is it so hard for scholars to admit when they are wrong? It is not necessarily concern for one’s reputation. Even predictions that turn out to be wrong can be intellectually profitable -- all social scientists love a good straw-man argument to pummel in a literature review. Bold theories get cited a lot, regardless of whether they are right.

 

Part of the reason is simple psychology; we all like being right much more than being wrong. As Kathryn Schulz observes in Being Wrong, "the thrill of being right is undeniable, universal, and (perhaps most oddly) almost entirely undiscriminating … . It’s more important to bet on the right foreign policy than the right racehorse, but we are perfectly capable of gloating over either one."

 

Furthermore, as scholars craft arguments and find supporting evidence, they persuade themselves that they are right. And the degree of relative self-confidence a scholar projects has an undeniable effect on how others perceive the argument. As much as published scholarship is supposed to count über alles, there is no denying that confident scholars can sway opinions. I know colleagues who make fantastically bold predictions, and I envy their serene conviction that they are right despite ample evidence to the contrary.

 

There can be rewards for presenting a singular theoretical framework. In Expert Political Judgment, Philip Tetlock notes that there are foxes (experts who adapt their mental models to changing circumstances) and hedgehogs (experts who keep their worldviews fixed and constant). Tetlock, a professor of psychology at the University of Pennsylvania, found that foxes are better than hedgehogs at predicting future events -- but that hedgehogs are more likely to make the truly radical predictions that turn out to be right.

 

That said, the benefits of being wrong are understated. Schulz argues in Being Wrong that "the capacity to err is crucial to human cognition. Far from being a moral flaw, it is inextricable from some of our most humane and honorable qualities: empathy, optimism, imagination, conviction, and courage. And far from being a mark of indifference or intolerance, wrongness is a vital part of how we learn and change."

 

Indeed, part of the reason the United States embraced more-expansionary macroeconomic policies in response to the 2008 financial crisis is that conservative economists like Martin Feldstein and Kenneth Rogoff went against their intellectual predilections and endorsed (however temporarily) a Keynesian approach.

 

It is possible that scholars will become increasingly likely to admit being wrong. Blogging and tweeting encourage the airing of contingent and tentative arguments as events play out in real time. As a result, far less stigma attaches to admitting that one got it wrong in a blog post than in peer-reviewed research. Indeed, there appears to be almost no professional penalty for being wrong in the realm of political punditry. Regardless of how often pundits make mistakes in their predictions, they are invited back again to pontificate more.

 

As someone who has blogged for more than a decade, I’ve been wrong an awful lot, and I’ve grown somewhat more comfortable with the feeling. I don’t want to make mistakes, of course. But if I tweet or blog my half-formed supposition, and it then turns out to be wrong, I get more intrigued about why I was wrong. That kind of empirical and theoretical investigation seems more interesting than doubling down on my initial opinion. Younger scholars, weaned on the Internet, more comfortable with the push and pull of debate on social media, may well feel similarly.

 

For all the intellectual benefits of being incorrect, however, how one is wrong matters. It is much less risky to predict doom and gloom than to predict that things will work out fine. Warnings about disasters that never happen carry less cost to one’s reputation than asserting that all is well just before a calamity. History has stigmatized optimistic prognosticators who, in retrospect, turned out to be wrong. From Norman Angell (who, in 1909, argued that war among European powers was unlikely) onward, errant optimists have been derided for their naïveté. If the global economy tanks or global economic governance collapses in the next few years, I’ll be wrong again -- and in the worst way possible.

 

Daniel W. Drezner is a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University. His latest book, The System Worked: How the World Stopped Another Great Depression, is just out from Oxford University Press.

 

http://chronicle.com/article/The-Uses-of-Being-Wrong/147459/



10 scientific concepts that are frequently misused - A. Newitz


胡卜凱

10 Scientific Ideas That Scientists Wish You Would Stop Misusing

 

Annalee Newitz, 06/17/14

 

Many ideas have left the world of science and made their way into everyday language — and unfortunately, they are almost always used incorrectly. We asked a group of scientists to tell us which scientific terms they believe are the most widely misunderstood. Here are ten of them.

 

1.     Proof

 

Physicist Sean Carroll says:

 

I would say that "proof" is the most widely misunderstood concept in all of science. It has a technical definition (a logical demonstration that certain conclusions follow from certain assumptions) that is strongly at odds with how it is used in casual conversation, which is closer to simply "strong evidence for something." There is a mismatch between how scientists talk and what people hear because scientists tend to have the stronger definition in mind. And by that definition, science never proves anything! So when we are asked "What is your proof that we evolved from other species?" or "Can you really prove that climate change is caused by human activity?" we tend to hem and haw rather than simply saying "Of course we can." The fact that science never really proves anything, but simply creates more and more reliable and comprehensive theories of the world that nevertheless are always subject to update and improvement, is one of the key aspects of why science is so successful.

 

2.     Theory

 

Astrophysicist Dave Goldberg has a theory about the word theory:

 

Members of the general public (along with people with an ideological axe to grind) hear the word "theory" and equate it with "idea" or "supposition." We know better. Scientific theories are entire systems of testable ideas which are potentially refutable either by the evidence at hand or an experiment that somebody could perform. The best theories (in which I include special relativity, quantum mechanics, and evolution) have withstood a hundred years or more of challenges, either from people who want to prove themselves smarter than Einstein, or from people who don't like metaphysical challenges to their world view. Finally, theories are malleable, but not infinitely so. Theories can be found to be incomplete or wrong in some particular detail without the entire edifice being torn down. Evolution has, itself, adapted a lot over the years, but not so much that it wouldn't still be recognizable. The problem with the phrase "just a theory" is that it implies a real scientific theory is a small thing, and it isn't.

 

3.     Quantum Uncertainty and Quantum Weirdness

 

Goldberg adds that there's another idea that has been misinterpreted even more perniciously than "theory." It's when people appropriate concepts from physics for new age or spiritual purposes:

 

This misconception is an exploitation of quantum mechanics by a certain breed of spiritualists and self-helpers, and epitomized by the abomination, [the movie] What the Bleep Do We Know? Quantum mechanics, famously, has measurement at its core. An observer measuring position or momentum or energy causes the "wavefunction to collapse," non-deterministically. (Indeed, I did one of my first columns on "How smart do you need to collapse a wavefunction?") But just because the universe isn't deterministic doesn't mean that you are the one controlling it. It is remarkable (and frankly, alarming) the degree to which quantum uncertainty and quantum weirdness get inextricably bound up in certain circles with the idea of a soul, or humans controlling the universe, or some other pseudoscience. In the end, we are made of quantum particles (protons, neutrons, electrons) and are part of the quantum universe. That is cool, of course, but only in the sense that all of physics is cool.

 

4. Learned vs. Innate

 

Evolutionary biologist Marlene Zuk says:

 

One of my favorite [misuses] is the idea of behavior being "learned vs. innate" or any of the other nature-nurture versions of this. The first question I often get when I talk about a behavior is whether it's "genetic" or not, which is a misunderstanding because ALL traits, all the time, are the result of input from the genes and input from the environment. Only a difference between traits, and not the trait itself, can be genetic or learned — like if you have identical twins reared in different environments and they do something different (like speak different languages), then that difference is learned. But speaking French or Italian or whatever isn't totally learned in and of itself, because obviously one has to have a certain genetic background to be able to speak at all.

 

5.     Natural

 

Synthetic biologist Terry Johnson is really, really tired of people misunderstanding what this word means:

 

"Natural" is a word that has been used in so many contexts with so many different meanings that it's become almost impossible to parse. Its most basic usage, to distinguish phenomena that exist only because of humankind from phenomena that don't, presumes that humans are somehow separate from nature, and our works are un- or non-natural when compared to, say, beavers or honeybees.

 

When speaking of food, "natural" is even slipperier. It has different meanings in different countries, and in the US, the FDA has given up on a meaningful definition of natural food (largely in favor of "organic", another nebulous term). In Canada, I could market corn as "natural" if I avoid adding or subtracting various things before selling it, but the corn itself is the result of thousands of years of selection by humans, from a plant that wouldn't exist without human intervention.

 

6.     Gene

 

Johnson has an even bigger concern about how the word gene gets used, however:

 

It took 25 scientists two contentious days to come up with: "a locatable region of genomic sequence, corresponding to a unit of inheritance, which is associated with regulatory regions, transcribed regions and/or other functional sequence regions." Meaning that a gene is a discrete bit of DNA that we can point to and say, "that makes something, or regulates the making of something". The definition has a lot of wiggle room by design; it wasn't long ago that we thought that most of our DNA didn't do anything at all. We called it "junk DNA", but we're discovering that much of that junk has purposes that weren't immediately obvious.

 

Typically "gene" is misused most when followed by "for". There's two problems with this. We all have genes for hemoglobin, but we don't all have sickle cell anemia. Different people have different versions of the hemoglobin gene, called alleles. There are hemoglobin alleles which are associated with sickle cell diseases, and others that aren't. So, a gene refers to a family of alleles, and only a few members of that family, if any, are associated with diseases or disorders. The gene isn't bad - trust me, you won't live long without hemoglobin - though the particular version of hemoglobin that you have could be problematic.

 

I worry most about the popularization of the idea that when a genetic variation is correlated with something, it is the "gene for" that something. The language suggests that "this gene causes heart disease", when the reality is usually, "people that have this allele seem to have a slightly higher incidence of heart disease, but we don't know why, and maybe there are compensating advantages to this allele that we didn't notice because we weren't looking for them".

 

7.     Statistically Significant

 

Mathematician Jordan Ellenberg wants to set the record straight about this idea:

 

"Statistically significant" is one of those phrases scientists would love to have a chance to take back and rename. "Significant" suggests importance; but the test of statistical significance, developed by the British statistician R.A. Fisher, doesn't measure the importance or size of an effect; only whether we are able to distinguish it, using our keenest statistical tools, from zero. "Statistically noticeable" or "Statistically discernable" would be much better.

 

8.     Survival of the Fittest

 

Paleoecologist Jacquelyn Gill says that people misunderstand some of the basic tenets of evolutionary theory:

 

Topping my list would be "survival of the fittest." First, these are not actually Darwin's own words, and secondly, people have a misconception about what "fittest" means. Relatedly, there's major confusion about evolution in general, including the persistent idea that evolution is progressive and directional (or even deliberate on the part of organisms; people don't get the idea of natural selection), or that all traits must be adaptive (sexual selection is a thing! And so are random mutations!).

 

Fittest does not mean strongest, or smartest. It simply means an organism that fits best into its environment, which could mean anything from "smallest" or "squishiest" to "most poisonous" or "best able to live without water for weeks at a time." Plus, creatures don't always evolve in a way that we can explain as adaptations. Their evolutionary path may have more to do with random mutations, or traits that other members of their species find attractive.

 

9.     Geologic Timescales

 

Gill, whose work centers on Pleistocene environments that existed over 15,000 years ago, says that she's also dismayed by how little people seem to understand the Earth's timescales:

 

One issue I often run into is that the public lacks an understanding of geologic timescales. Anything prehistoric gets compressed in people's minds, and folks think that 20,000 years ago we had drastically different species (nope), or even dinosaurs (nope nope nope). It doesn't help that those little tubes of plastic toy dinosaurs often include cave people or mammoths.

 

10.  Organic

 

Entomologist Gwen Pearson says that there's a constellation of terms that "travel together" with the word "organic," such as "chemical-free," and "natural." And she's tired of seeing how profoundly people misunderstand them:

 

I'm less upset about the way that they are technically incorrect [though of course all food is organic, because it contains carbon, etc.]. [My concern is] the way they are used to dismiss and minimize real differences in food and product production.

 

Things can be natural and "organic", but still quite dangerous.

 

Things can be "synthetic" and manufactured, but safe. And sometimes better choices. If you are taking insulin, odds are it's from GMO bacteria. And it's saving lives.

 

Annalee Newitz is the editor-in-chief of io9. She's also the author of Scatter, Adapt and Remember: How Humans Will Survive a Mass Extinction.

 

http://io9.com/10-scientific-ideas-that-scientists-wish-you-would-stop-1591309822

 

-- Please see the original page for the related images.



The father of the scientific method - R. Pomeroy


胡卜凱

 

Ibn al-Haytham: The Muslim Scientist Who Birthed the Scientific Method

 

Ross Pomeroy, 03/25/14

 

If asked who gave birth to the modern scientific method, how might you respond? Isaac Newton, maybe? Galileo? Aristotle?

 

A great many students of science history would probably respond, "Roger Bacon." An English scholar and friar, and a 13th century pioneer in the field of optics, he described, in exquisite detail, a repeating cycle of observation, hypothesis, and experimentation in his writings, as well as the need for independent verification of his work.

 

But dig a little deeper into the past, and you'll unearth something that may surprise you: The origins of the scientific method hearken back to the Islamic World, not the Western one. Around 250 years before Roger Bacon expounded on the need for experimental confirmation of his findings, an Arab scientist named Ibn al-Haytham was saying the exact same thing.

 

Little is known about Ibn al-Haytham's life, but historians believe he was born around the year 965, during a period marked as the Golden Age of Arabic science. His father was a civil servant, so the young Ibn al-Haytham received a strong education, which assuredly seeded his passion for science. He was also a devout Muslim, believing that an endless quest for truth about the natural world brought him closer to God. Sometime around the dawn of the 11th Century, he moved to Cairo in Egypt. It was here that he would complete his most influential work.

 

The prevailing wisdom at the time was that we saw what our eyes, themselves, illuminated. Supported by revered thinkers like Euclid and Ptolemy, emission theory stated that sight worked because our eyes emitted rays of light -- like flashlights. But this didn't make sense to Ibn al-Haytham. If light comes from our eyes, why, he wondered, is it painful to look at the sun? This simple realization catapulted him into researching the behavior and properties of light: optics.

 

In 1011, Ibn al-Haytham was placed under house arrest by a powerful caliph in Cairo. Though unwelcome, the seclusion was just what he needed to explore the nature of light. Over the next decade, Ibn al-Haytham proved that light only travels in straight lines, explained how mirrors work, and argued that light rays can bend when moving through different mediums, like water, for example.

 

But Ibn al-Haytham wasn't satisfied with elucidating these theories only to himself, he wanted others to see what he had done. The years of solitary work culminated in his Book of Optics, which expounded just as much upon his methods as it did his actual ideas. Anyone who read the book would have instructions on how to repeat every single one of Ibn al-Haytham's experiments.

 

"His message is, 'Don’t take my word for it. See for yourself,'" Jim Al-Khalili, a professor of theoretical physics at the University of Surrey noted in a BBC4 Special.

 

"This, for me, is the moment that Science, itself is summoned into existence and becomes a discipline in its own right," he added.

 

Apart from being one of the first to practice the scientific method, Ibn al-Haytham was also a progenitor of critical thinking and skepticism.

 

"The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and... attack it from every side,"

 

he wrote.

 

"He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency."

 

It is the nature of the scientific enterprise to creep ahead, slowly but surely. In the same way, the scientific method that guides it was not birthed in a grand eureka moment, but slowly tinkered with and notched together over generations, until it resembled the machine of discovery that we use today. Ibn al-Haytham may very well have been the first to lay out the cogs and gears. Hundreds of years later, other great thinkers would assemble them into a finished product.

 

http://www.realclearscience.com/blog/2014/03/the_muslim_scientist_who_birthed_the_scientific_method.html

Are scientists really close to answering all the questions? - J. Achenbach


胡卜凱

Are we nearing the end of science?

 

Joel Achenbach, 02/11/14

 

Are we nearing the end of science? That is, are we running out of answerable questions, leaving us with only some mop-up duty, working around the edges of the great scientific achievements of Darwin, Einstein, Copernicus, et al.?

 

This was the provocative thesis nearly two decades ago of John Horgan, the Scientific American writer who had spent years interviewing luminaries in a variety of fields and had come away with a decidedly jaundiced view. His book “The End of Science” introduced the reader to superstars and geniuses, most of whom seemed slightly smaller in stature by the time Horgan left the room.

 

Naturally the professionals in the world of science were aghast. Horgan all but said they were wasting their time on marginalia. A delightful romp of a book, it nonetheless suffered from the declarative nature of the title, which had the loud ping of overstatement. There is the provocative and then there is the insupportable.

 

There’s a somewhat related line of argument that has been advanced by professor of medicine John Ioannidis, who says most scientific studies are wrong, their results not reproducible and likely fatally skewed by the unconscious desires of the researcher for a certain result.

 

And he may be onto something. Last month, Francis Collins, head of the National Institutes of Health, and Lawrence Tabak, NIH’s principal deputy director, published a column in the journal Nature stating that the scientific community needs to take steps to address “a troubling frequency of published reports that claim a significant result, but fail to be reproducible.”

 

They list a variety of factors that lead to the lack of reproducibility, including the way some scientists use a “secret sauce” to make their experiments work and the way they withhold crucial details from publication.

 

But it’s unlikely that science, as a whole, is going to run out of legitimate discoveries, and certainly there are questions to keep everyone in business into the distant future.

 

Nearly 20 years ago I typed up a list of “five simple questions that many scientists might accept as a core curriculum of the unknown” (The Washington Post, Aug. 11, 1996):

 

1. Why does the universe exist?

2. What is matter made of?

3. How did life originate?

4. How does consciousness emerge from the brain?

5. Is there intelligent life on other worlds?

 

The simplest questions are the hardest. For example, the origin of the universe isn’t something you can reproduce experimentally, to test the “why” of it -- and so even if you could detect the first sparks of the big bang you would know merely what happened and not necessarily why it happened. You would have correlation/causation issues.

 

The Large Hadron Collider, near Geneva, has been smashing particles in an effort to discern what the universe is made of, but the physicists are nowhere close to nailing that down. In fact, just the other day they announced, in effect, “We’re gonna need a bigger collider.”

 

The origin of life is tricky because by the time you get anything big enough and robust enough and complex enough to form a fossil, you’re already way, way past the point of origin. How do you get the first cell? How do you get rolling with this life business?

 

Consciousness may be an innately murky enterprise, and the issue of intelligent aliens remains completely speculative. Our big radio telescopes have heard not a peep so far. Maybe the Voyager spacecraft will bump into something out there in interstellar space (but don’t count on it).

 

And there are so many smaller questions, too, for scientists to work on:

 

What exactly caused the Permian mass extinction some 250 million years ago (Siberian vulcanism?) and

why did some species survive that while so many died out?

 

Moving to more recent times:

 

How and when did humans migrate around the world?

Did they populate the Americas in a single migration or in multiple waves by land and boat?

 

Talk about unknowns: Most of human existence was, and is, prehistoric -- lost in the fog, like my senior year in high school.

 

This is an abridged version of an article that appeared on Achenblog.

 

http://www.washingtonpost.com/national/health-science/are-we-nearing-the-end-of-science/2014/02/07/5541b420-89c1-11e3-a5bd-844629433ba3_story.html

What is pseudoscience? - P. Ellerton


胡卜凱

 

Where Is the Proof in Pseudoscience?                       

 

Peter Ellerton, 01/31/14

 

The word “pseudoscience” is used to describe something that is portrayed as scientific but fails to meet scientific criteria.

 

This misrepresentation occurs because actual science has credibility (which is to say it works), and pseudoscience attempts to ride on the back of this credibility without subjecting itself to the hard intellectual scrutiny that real science demands.

 

A good example of pseudoscience is homoeopathy, which presents the façade of a science-based medical practice but fails to adhere to scientific methodology.

 

Other things typically branded pseudoscience include astrology, young-Earth creationism, iridology, neuro-linguistic programming and water divining, to name but a few.

 

What’s the difference?

 

Key distinctions between science and pseudoscience are often lost in discussion, and sometimes this makes the public acceptance of scientific findings harder than it should be.

 

For example, those who think the plural of anecdote is data may not appreciate why this is not scientific (indeed, it can have a proper role to play as a signpost for research).

 

Other misconceptions about science include what the definition of a theory is, what it means to prove something, how statistics should be used and the nature of evidence and falsification.

 

Because of these misconceptions, and the confusion they cause, it is sometimes useful to discuss science and pseudoscience in a way that focuses less on operational details and more on the broader functions of science.

 

What is knowledge?

 

The first and highest level at which science can be distinguished from pseudoscience involves how an area of study grows in knowledge and utility.

 

The philosopher John Dewey in his Theory of Inquiry said that we understand knowledge as that which is “so settled that it is available as a resource in further inquiry”.

 

This is an excellent description of how we come to “know” something in science. It shows how existing knowledge can be used to form new hypotheses, develop new theories and hence create new knowledge.

 

It is characteristic of science that our knowledge, so expressed, has grown enormously over the last few centuries, guided by the reality check of experimentation.

 

In short, the new knowledge works and is useful in finding more knowledge that also works.

 

No progress made

 

Contrast this with homeopathy, a field that has generated no discernible growth in knowledge or practice. While the use of modern scientific language may make it sound more impressive, there is no corresponding increase in knowledge linked to effectiveness. The field has flat-lined.

 

At this level of understanding, science produces growth; pseudoscience does not.

 

To understand this lack of growth we move to a lower, more detailed level, in which we are concerned with one of the primary goals of science: to provide causal explanations of phenomena.

 

Causal explanations

 

Causal explanations are those in which we understand the connection between two or more events, where we can outline a theoretical pathway whereby one could influence the others.

 

This theoretical pathway can then be tested via the predictions it makes about the world, and stands or falls on the results. Classic examples of successful causal explanations in science include our explanation of the seasons, and of the genetic basis of some diseases.

 

While it’s true that homoeopathy supporters try very hard to provide causal explanations, such explanations are not linked to more effective practice, do not provide new knowledge or utility, and so do not lead to growth.

 

In the same way, supporters of neuro-linguistic programming claim a causal connection between certain neurological processes and learned behaviour, but fail to deliver, and astrologers offer no coherent attempt to provide an explanation for their purported predictive powers.

 

The lack of testable causal explanations (or models, if you will) that characterises pseudoscience gives us a second level of discrimination: science provides causal explanations that lead to growth; pseudoscience does not.

 

Operational aspects of science

 

The third level of discrimination is where most of the action between science and pseudoscience actually takes place, over what I earlier called the operational details of science. Getting these details right helps deliver useful causal explanations.

 

This is where battles are fought over what constitutes evidence, how to properly use statistics, instances of cognitive biases, the use of proper methodologies and so on.

 

It is where homeopathy relies on confirmation bias, where the anti-vaccine lobby is energised by anecdotes, and where deniers of climate science selectively highlight agreeable data.

 

This level is also where the waters are muddiest in terms of understanding science for much of the population, as seen in comments on social media posts, letters to the editor, talkback, television, media articles and political posturing.

 

The knowledge is out there

 

It is important to address these basic operational understandings, but we must also highlight, in both science education and science communication, the causal explanations science provides about the world and the link between these explanations and growth in knowledge and utility.

 

This understanding gives us better tools to recognise pseudoscience in general, and also helps combat anti-science movements (such as young-earth creationism) that often masquerade as science in their attempt to play in the same rational arena.

 

A vigorous, articulate and targeted offence against pseudoscience is essential to the project of human progress through science, which, as Einstein reminds us, is “the most precious thing we have”.

 

Peter Ellerton is a lecturer in critical thinking at the University of Queensland.

 

http://theconversation.com/where-is-the-proof-in-pseudoscience-22184



New practices and new attitudes in psychological research -- C. Chambers


胡卜凱

The Changing Face of Psychology         

 

Chris Chambers, 01/24/14

 

After 50 years of stagnation in research practices, psychology is leading reforms that will benefit all life sciences

 

In 1959, an American researcher named Ted Sterling reported something disturbing. Of 294 articles published across four major psychology journals, 286 had reported positive results – that is, a staggering 97% of published papers were underpinned by statistically significant effects. Where, he wondered, were all the negative results – the less exciting or less conclusive findings? Sterling labelled this publication bias a form of malpractice. After all, getting published in science should never depend on getting the “right results”.
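
(An illustrative sketch, not from Chambers' article: a standard-library Python simulation of what publication bias does to a literature, with all numbers made up.)

```python
# Simulate many two-group studies of a treatment with ZERO true effect,
# then "publish" only those reaching p < .05 (crude z-test cutoff of 1.96).
import random
import statistics

def one_study(n=30):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return diff, abs(diff / se) > 1.96       # effect estimate, "significant?"

results = [one_study() for _ in range(10_000)]
published = [d for d, significant in results if significant]
print(f"'positive' studies: {len(published) / len(results):.1%}")
print(f"mean |published effect|: {statistics.mean(abs(d) for d in published):.2f}")
# About 5% of studies cross the threshold by chance alone, yet each of them
# reports an effect of roughly half a standard deviation or more. A journal
# that prints only these shows nothing but positive results -- the pattern
# Sterling flagged.
```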

 

You might think that Sterling’s discovery would have led the psychologists of 1959 to sit up and take notice. Groups would be assembled to combat the problem, ensuring that the scientific record reflected a balanced sum of the evidence. Journal policies would be changed, incentives realigned.

 

Sadly, that never happened. Thirty-six years later, in 1995, Sterling took another look at the literature and found exactly the same problem: negative results were still being censored. Fifteen years after that, Daniele Fanelli from the University of Edinburgh confirmed it yet again. Publication bias had turned out to be the ultimate bad car smell, a prime example of how irrational research practices can linger on and on.

 

Now, finally, the tide is turning. A growing number of psychologists – particularly the younger generation – are fed up with results that don’t replicate, journals that value story-telling over truth, and an academic culture in which researchers treat data as their personal property. Psychologists are realising that major scientific advances will require us to stamp out malpractice, face our own weaknesses, and overcome the ego-driven ideals that maintain the status quo.

 

Here are five key developments to watch in 2014

 

1.     Replication

 

The problem: The best evidence for a genuine discovery is showing that independent scientists can replicate it using the same method. If it replicates repeatedly then we can use it to build better theories. If it doesn't then it belongs in the trash bin of history. This simple logic underpins all science – without replication we’d still believe in phlogiston and faster-than-light neutrinos.

 

In psychology, attempts to closely reproduce previous methods are rarely attempted. Psychologists tend to see such work as boring, lacking in intellectual prowess, and a waste of limited resources. Some of the most prominent psychology journals even have explicit policies against publishing replications, instead offering readers a diet of fast food: results that are novel, eye catching, and even counter-intuitive. Exciting results are fine provided they replicate. The problem is that nobody bothers to try, which litters the field with results of unknown (likely low) value.

 

How it’s changing: The new generation of psychologists understands that independent replication is crucial for real advancement and to earn wider credibility in science. A beautiful example of this drive is the Many Labs project led by Brian Nosek from the University of Virginia. Nosek and a team of 50 colleagues located in 36 labs worldwide sought to replicate 13 key findings in psychology, across a sample of 6,344 participants. Ten of the effects replicated successfully.

 

Journals are also beginning to respect the importance of replication. The prominent outlet Perspectives on Psychological Science recently launched an initiative that specifically publishes direct replications of previous studies. Meanwhile, journals such as BMC Psychology and PLOS ONE officially disown the requirement for researchers to report novel, positive findings.

 

2.     Open access

 

The problem: Strictly speaking, most psychology research isn’t really “published” – it is printed within journals that expressly deny access to the public (unless you are willing to pay for a personal subscription or spend £30+ on a single article). Some might say this is no different to traditional book publishing, so what's the problem? But remember that the public being denied access to science is the very same public that already funds most psychology research, including the subscription fees for universities. So why, you might ask, is taxpayer-funded research invisible to the taxpayers that funded it? The answer is complicated enough to fill a 140-page government report, but the short version is that the government places the business interests of corporate publishers ahead of the public interest in accessing science.

 

How it’s changing: The open access movement is growing in size and influence. Since April 2013, all research funded by UK research councils, including psychology, must now be fully open access – freely viewable to the public. Charities such as the Wellcome Trust have similar policies. These moves help alleviate the symptoms of closed access but don’t address the root cause, which is market dominance by traditional subscription publishers. Rather than requiring journals to make articles publicly available, the research councils and charities are merely subsidising those publishers, in some cases paying them extra for open access on top of their existing subscription fees. What other business in society is paid twice for a product that it didn’t produce in the first place? It remains a mystery who, other than the publishers themselves, would call this bizarre set of circumstances a “solution”.

 

3.     Open science

 

The problem: Data sharing is crucial for science but rare in psychology. Even though ethical guidelines require authors to share data when requested, such requests are usually ignored or denied, even when they come from other psychologists. Failing to share data publicly makes it harder to conduct meta-analyses and easier for unscrupulous researchers to get away with fraud. The most serious fraud cases, such as that of Diederik Stapel, would have been caught years earlier if journals had required the raw data to be published alongside research articles.

 

How it’s changing: Data sharing isn’t yet mandatory, but it is gradually becoming unacceptable for psychologists not to share. Evidence shows that studies which share data tend to be more accurate and less likely to make statistical errors. Public repositories such as Figshare and the Open Science Framework now make the act of sharing easy, and new journals including the Journal of Open Psychology Data have been launched specifically to provide authors with a way of publicising data sharing.
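To make the mechanics concrete, here is a minimal sketch (my own illustration, with invented trial data and file names, not taken from the article) of what a shareable deposit can look like: a tidy, trial-level CSV plus a machine-readable codebook describing each column. Uploading the resulting files to a service such as Figshare or the Open Science Framework is then done through that service's own website or tools.

import csv, json

# Invented example data: one row per experimental trial.
trials = [
    {"participant": 1, "condition": "congruent",   "rt_ms": 442, "correct": 1},
    {"participant": 1, "condition": "incongruent", "rt_ms": 517, "correct": 1},
    {"participant": 2, "condition": "congruent",   "rt_ms": 468, "correct": 0},
]

# Write the data in a plain, reusable format.
with open("experiment1_trials.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(trials[0].keys()))
    writer.writeheader()
    writer.writerows(trials)

# A codebook so that strangers (and your future self) can interpret the columns.
codebook = {
    "participant": "anonymised participant number",
    "condition": "stimulus condition (congruent / incongruent)",
    "rt_ms": "response time in milliseconds",
    "correct": "1 = correct response, 0 = error",
}
with open("experiment1_codebook.json", "w") as f:
    json.dump(codebook, f, indent=2)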

 

Some existing journals are also introducing rewards to encourage data sharing. From 2014, authors who share data at the journal Psychological Science will earn an Open Data badge, printed at the top of the article. Coordinated data sharing carries all kinds of other benefits too – for instance, it allows future researchers to run meta-analyses on huge volumes of existing data, answering questions that simply can’t be tackled with smaller datasets.

 

4.     Bigger data

 

The problem: We’ve known for decades that psychology research is statistically underpowered. What this means is that even when genuine phenomena exist, most experiments don’t have sufficiently large samples to detect them. The curse of low power cuts both ways: not only is an underpowered experiment likely to miss finding water in the desert, it’s also more likely to lead us to a mirage.
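To see what “underpowered” means in practice, here is a minimal simulation (my own sketch, not from the article): a two-group experiment chasing a genuine but modest effect (Cohen's d = 0.4) detects it at p < .05 only about a quarter of the time with 20 participants per group, but roughly 80% of the time with 100 per group.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def detection_rate(n_per_group, d=0.4, n_sims=5000, alpha=0.05):
    """Proportion of simulated experiments that reach p < alpha."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(d, 1.0, n_per_group)   # a true effect of size d
        _, p = stats.ttest_ind(treatment, control)
        hits += p < alpha
    return hits / n_sims

for n in (20, 100):
    print(f"n = {n:3d} per group -> power ~ {detection_rate(n):.2f}")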

 

How it’s changing: Psychologists are beginning to develop innovative ways to acquire larger samples. An exciting approach is Internet testing, which enables easy data collection from thousands of participants. One recent study managed to replicate 10 major effects in psychology using Amazon’s Mechanical Turk. Psychologists are also starting to work alongside organisations that already collect large amounts of useful data (and no, I don’t mean GCHQ). A great example is collaborative research with online gaming companies. Tom Stafford from the University of Sheffield recently published an extraordinary study of learning patterns in over 850,000 people by working with a game developer.

 

5.     Limiting researcher “degrees of freedom”

 

The problem: In psychology, discoveries tend to be statistical. This means that to test a particular hypothesis, say, about motor actions, we might measure the difference in reaction times or response accuracy between two experimental conditions. Because the measurements contain noise (or “unexplained variability”), we rely on statistical tests to provide us with a level of certainty in the outcome. This is different to other sciences where discoveries are more black and white, like finding a new rock layer or observing a supernova.
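As a toy illustration of that inference step (the numbers below are invented, not data from any real study), the comparison usually boils down to something like this:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical reaction times (ms) for two experimental conditions.
condition_a = rng.normal(450, 60, 30)   # e.g. congruent trials
condition_b = rng.normal(480, 60, 30)   # e.g. incongruent trials

# The statistical test turns a noisy difference into a level of certainty.
t, p = stats.ttest_ind(condition_b, condition_a)
print(f"mean difference = {condition_b.mean() - condition_a.mean():.1f} ms, "
      f"t = {t:.2f}, p = {p:.3f}")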

 

Whenever experiments rely on inferences from statistics, researchers can exploit “degrees of freedom” in the analyses to produce desirable outcomes. This might involve trying different ways of removing statistical outliers, or testing different statistical models, and then reporting only the approach that “worked” best in producing attractive results. Just as buying all the tickets in a raffle guarantees a win, exploiting researcher degrees of freedom can guarantee a false discovery.
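The raffle analogy can be made concrete with a small simulation (again my own sketch, not from the article): on pure noise, a single pre-specified test comes out “significant” about 5% of the time, as it should, but reporting the best of several outlier-removal rules pushes the false-positive rate noticeably higher.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
ALPHA = 0.05
CUTOFFS = (None, 3.0, 2.5, 2.0)   # candidate outlier-removal rules
N_SIMS = 5000

def best_p(a, b):
    """Smallest p-value across the candidate outlier-removal rules."""
    ps = []
    for c in CUTOFFS:
        if c is None:
            xa, xb = a, b
        else:  # drop values more than c standard deviations from each group mean
            xa = a[np.abs(a - a.mean()) < c * a.std()]
            xb = b[np.abs(b - b.mean()) < c * b.std()]
        ps.append(stats.ttest_ind(xa, xb).pvalue)
    return min(ps)

honest = flexible = 0
for _ in range(N_SIMS):
    a = rng.normal(0, 1, 30)   # both groups drawn from the SAME distribution:
    b = rng.normal(0, 1, 30)   # any "significant" difference is a false positive
    honest += stats.ttest_ind(a, b).pvalue < ALPHA
    flexible += best_p(a, b) < ALPHA

print(f"false positives with one pre-specified analysis: {honest / N_SIMS:.3f}")
print(f"false positives when the best of {len(CUTOFFS)} analyses is reported: {flexible / N_SIMS:.3f}")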

 

The reason we fall into this trap is because of incentives and human nature. As Sterling showed in 1959, psychology journals select which studies to publish not based on the methods but on the results: getting published in the most prominent, career-making journals requires researchers to obtain novel, positive, statistically significant effects. And because statistical significance is an arbitrary threshold (p<.05), researchers have every incentive to tweak their analyses until the results cross the line. These behaviours are common in psychology – a recent survey led by Leslie John from Harvard University estimated that at least 60% of psychologists selectively report analyses that “work”. In many cases such behaviour may even be unconscious.

 

How it’s changing: The best cure for researcher degrees of freedom is to pre-register the predictions and planned analyses of experiments before looking at the data. This approach is standard practice in medicine because it helps prevent the desires of the researcher from influencing the outcome. Among the basic life sciences, psychology is now leading the way in advancing pre-registration. The journals Cortex, Attention Perception & Psychophysics, AIMS Neuroscience and Experimental Psychology offer pre-registered articles in which peer review happens before experiments are conducted. Not only does pre-registration put the reins on researcher degrees of freedom, it also prevents journals from selecting which papers to publish based on the results.
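For readers who have never seen one, a pre-registration is conceptually very simple; the hypothetical sketch below (the field names, numbers and file name are my own invention – journals and the Open Science Framework use their own forms) just records the hypothesis, the sample-size justification, the exclusion rules and the analysis plan before any data are seen.

import json
from datetime import date

prereg = {
    "title": "Congruency effects on motor reaction times",
    "date_registered": str(date.today()),
    "hypothesis": "Incongruent trials yield slower reaction times than congruent trials.",
    "sample_size": {
        "n_per_group": 100,
        "justification": "~80% power for d = 0.4 at alpha = .05 (two-sided)",
    },
    "exclusion_rule": "drop trials with RT < 150 ms or > 2000 ms, decided in advance",
    "planned_analysis": "two-sided independent-samples t-test, alpha = .05",
}

# Time-stamp and freeze the plan before the data exist.
with open("preregistration.json", "w") as f:
    json.dump(prereg, f, indent=2)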

 

Journals aren’t the only organisations embracing pre-registration. The Open Science Framework invites psychologists to publish their protocols, and the 2013 Declaration of Helsinki now requires public pre-registration of all human research “before recruitment of the first subject”.

 

We’ll continue to cover these developments at HQ as they progress throughout 2014.

 

http://www.theguardian.com/science/head-quarters/2014/jan/24/the-changing-face-of-psychology



The Purpose and Function of the "Philosophy of Science" - M. Pigliucci


胡卜凱

 

Doing Philosophy Of Science, An Example

 

Massimo Pigliucci, 10/08/13

 

I have recently been to the European Philosophy of Science Association meeting, where my colleague Maarten Boudry and I have hosted a symposium on our recently published book on the Philosophy of Pseudoscience.

I have, of course, attended several other sessions and talks, as is customary on these occasions (it is also customary to enjoy the local sights, food and drinks, which I dutifully subjected myself to...).

One of these talks was entitled "Explanatory fictions and fictional explanations," by Sorin Bangu, of the University of Bergen (Norway). I want to use it as a stimulating example of one way of doing philosophy of science. Before we get into it, however, a couple of crucial caveats. As you probably know, some scientists (Lawrence Krauss immediately comes to mind as a major offender) declare philosophy of science to be useless. By this they mean useless to scientists, as apparently their limited imagination cannot conceive of how something could possibly be interesting if it doesn't contribute to science (Shakespeare, anyone? Jazz?? Soccer???). I have argued for a time now that philosophy of science is interesting in at least three senses:

1. It is a self-contained exercise in reconstructing and understanding the logic of science. (E.g., discussions of paradigms and scientific revolutions, or what you are about to read below.)

2. It is useful to science theorizing when it deals with issues at the borderlines between science and philosophy. (E.g., discussions of species concepts in biology, or of interpretations of quantum mechanics in physics.)

3. It is socially useful either as science criticism or in defense of science, whenever science either makes questionable claims or is under attack by reactionary forces. (E.g., criticism of exaggerated claims by evolutionary psychologists or fMRI enthusiasts, defense against creationism and Intelligent Design so-called "theory.")

Now, nos. 2 and 3 should be pretty obvious (ok, not to Krauss, but still). The first mode of doing philosophy of science, however, is naturally a bit more obscure to the outsider, as is the case for pretty much any intellectual endeavor (trust me, there is a lot of science being handsomely funded about which you would scratch your head and ask "who cares?"). So what follows is just a taste of philosophy of science done as an intellectual activity in its own right, aiming at reconstructing the logic of how science works. To paraphrase Groucho Marx, this is my example; if you don't like it, I have others...

The question which got Bangu started is that of how fiction can have explanatory power. And by "fiction" Bangu means pretty much any scientific theory or model, which are by definition human imaginative inventions, i.e., fictions. Scientists, of course, are fine with a positive answer to that question, indeed my bet is that they would scoff at it as a non-question. Traditionally, however, many philosophers have answered in the negative for a variety of reasons. Bangu, however, is cautiously optimistic that one can positively deal with the problem. If you are still with me, let's be clear on what exactly is being attempted here: no philosopher is suggesting that somehow scientists have been wrong all along in using "fictional" accounts in their understanding of the world. The question is logical, not practical: how can a notion that is, strictly speaking, false (a theoretical model, which is always approximate) successfully account for something that is true (the world as it really is)? If this isn't your cup of tea (fair enough), you may want to skip to a more interesting post. If your intellect is even slightly tingled, read on...

If the way I framed the issue so far still sounds bizarre (and it might), then consider clear cases in which fictions don't, in fact, explain facts. For instance: no, Santa (a fiction) didn't bring the presents (a truth) last Christmas. The general logical point is that fictions cannot explain because falsehoods do not explain. But of course in science we are talking about idealizations and approximations, not outright falsehoods, i.e., fictions "concerned with the truth." Bangu's project, then, is to unpack in what (logical) sense the Santa falsehood differs from the type of falsehood-concerned-with-truth that scientists traffic in.

There are several ways of tackling this problem, but the particular starting point considered by Bangu is that in science not just the explanans (i.e., the thing that does the explaining) but also the explanandum (the thing to be explained) has fictional content. But, wait, what does that mean? Are we sliding toward some form of idealism in metaphysics, where reality itself is somebody's (God? The Big Programmer in the Sky?) mental construction? Nothing of the sort (besides, an idealist would simply reply that mental constructions are real, just not physically so!). Instead, Bangu reminded us that data - the raw starting point of any scientific analysis - is immediately shaped by scientists into phenomena, that is, phenomena are constructed from data, they are not "out there," they are posited. To put it into more formal language: fictions in the explananda are what allow the successful use of fictions in the explanans. Bangu refers to this idea as the "Monopoly principle": you can't buy real property with fictional money (well, unless you are Goldman Sachs, of course), but there is no problem in buying fictional property with fictional money...

Okay, enough with the preliminaries; let's consider an actual example of scientific practice. The one Bangu picked was the answer to the deceptively simple question:

 

why does water boil?

 

The explanandum is water's (or other substances') capacity to undergo "phase transitions." The explanans these days is couched in terms of statistical mechanics. In current practice, a phase transition can be explained by invoking a role for (mathematical) singularities of the function describing the temperature curve of the system transitioning between phases, assuming that the system contains an infinite number of particles. But singularities are "fictional," and of course no real system actually contains an infinite number of particles. Nevertheless, the role of singularities is to represent the phenomenon to be explained, and they do a very good job at it. Moreover, physicists - at least for now - simply do not have a definition of phase transition that doesn't invoke singularities/infinities.
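For readers who want the standard textbook statement of the point (my own summary, not part of Pigliucci's text): for any finite number of particles the partition function is a finite sum of smooth, positive terms, so the free energy is an analytic function of temperature and nothing mathematically "sharp" can happen; the singularity identified with boiling appears only in the idealized (fictional) limit of infinitely many particles.

\[
Z_N(\beta) = \sum_i e^{-\beta E_i}, \qquad
F_N(\beta) = -\frac{1}{\beta}\,\ln Z_N(\beta) \quad \text{(analytic for any finite } N\text{)},
\]
\[
f(\beta) = \lim_{\substack{N, V \to \infty \\ N/V \ \text{fixed}}} \frac{F_N(\beta)}{N},
\]
with phase transitions corresponding to the non-analytic points of the limiting free-energy density \( f \) - points which simply do not exist at any finite \( N \).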

There are of course a number of further issues raised by Bangu's talk. I have already mentioned that a scientist would immediately point out the difference between idealizations and fictions. It turns out, however, that this only kicks the can a bit further down the road without solving the problem, since now we would have to unpack the (perceived) difference between idealizations and fictions. One could, for instance, think of idealizations as a sub-class of fictions; or maybe one can cash out the idea of idealization in terms of verisimilitude (truth-likeness, which is another philosophically more-difficult-than-you-think idea). And of course there is the broader question of how widely applicable Bangu's account of the relationship between truth and fiction in science actually is:

 

are all scientific theories "fictional" in the philosophical sense of the term?

However you go about it, two things are important to keep in mind:

 

a) no, this isn't the type of philosophy of science that should concern or worry scientists (who can go on using their tools without having to deal with how those tools logically work); but

b) yes, this is an interesting intellectual puzzle in its own right, if your intellectual curiosity happens to be stimulated by logical puzzles and epistemic problems. If not, you can always go back to types 2 and 3 philosophy of science described above.

Originally on Rationally Speaking, September 4, 2013

 

http://www.science20.com/rationally_speaking/doing_philosophy_science_example-121805



The "Black Hole" Nature of "Pseudoscience" - M. Pigliucci


胡卜凱

 

The Pseudoscience Black Hole

 

Massimo Pigliucci, 12/23/13

 

As I’ve mentioned on other occasions, my most recent effort in philosophy of science actually concerns what my collaborator Maarten Boudry and I call the philosophy of pseudoscience. During a recent discussion we had with some of the contributors to our book at the recent congress of the European Philosophy of Science Association, Maarten came up with the idea of the pseudoscience black hole. Let me explain.

 

The idea is that it is relatively easy to find historical (and even recent) examples of notions or fields that began within the scope of acceptable scientific practice, but then moved (or, rather, precipitated) into the realm of pseudoscience. The classic case, of course, is alchemy. Contra popular perception, alchemists did produce a significant amount of empirical results about the behavior of different combinations of chemicals, even though the basic theory of elements underlying the whole enterprise was in fact hopelessly flawed. Also, let's not forget that first rate scientists - foremost among them Newton - spent a lot of time carrying out alchemical research, and that they thought of it in the same way in which they were thinking of what later turned out to be good science.

 

Another example, this one much more recent, is provided by the cold fusion story. The initial 1989 report by Stanley Pons and Martin Fleischmann was received with natural caution by the scientific community, given the potentially revolutionary import (both theoretical and practical) of the alleged discovery. But it was treated as science, done by credentialed scientists working within established institutions. The notion was quickly abandoned when various groups couldn't replicate Pons and Fleischmann's results, and moreover given that theoreticians just couldn't make sense of how cold fusion was possible to begin with. The story would have ended there, and represented a good example of the self-correcting mechanism of science, if a small but persistent group of aficionados hadn't pursued the matter by organizing alternative meetings, publishing alleged results, and eventually even beginning to claim that there was a conspiracy by the scientific establishment to suppress the whole affair. In other words, cold fusion had - surprisingly rapidly - moved not only into the territory of discarded science, but of downright pseudoscience.

 

Examples of this type can easily be multiplied by even a cursory survey of the history of science. Eugenics and phrenology immediately come to mind, as well as - only slightly more controversially - psychoanalysis. At this point I would also firmly throw parapsychology into the heap (research in parapsychology has been conducted by credentialed scientists, especially during the early part of the 20th century, and for a while it looked like it might have gained enough traction to move to mainstream).

 

But, asked Maarten, do we have any convincing cases of the reverse happening? That is, are there historical cases of a discipline or notion that began as clearly pseudoscientific but then managed to clean up its act and emerge as a respectable science? And if not, why?

 

Before going any further, we may need to get a bit clearer on what we mean by pseudoscience. Of course, Maarten, I, and our contributors devoted an entire book to exploring that and related questions, so the matter is intricate. Nonetheless, three characteristics of pseudoscience clearly emerged from our discussions:

 

1. Pseudoscience is not a fixed notion. A field can slide into (and maybe out of?) pseudoscientific status depending on the temporal evolution of its epistemic status (and, to a certain extent, of the sociology of the situation).

 

2. Pseudoscientific claims are grossly deficient in terms of epistemic warrant. This, however, is not sufficient to identify pseudoscience per se, as some claims made within established science can also, at one time or another, be epistemically grossly deficient.

 

3. What most characterizes a pseudoscience is the concerted efforts of its practitioners to mimic the trappings of science: They want to be seen as doing science, so they organize conferences, publish specialized journals, and talk about data and statistical analyses. All of it, of course, while lacking the necessary epistemic warrant to actually be a science.

 

Given this three-point concept of pseudoscience, then, is Maarten right that pseudoscientific status, once reached, is a "black hole," a sink from which no notion or field ever emerges again?

 

The obvious counter example would seem to be herbal medicine which, to a limited extent, is becoming acceptable as a mainstream practice. Indeed, in some cases our modern technology has uncontroversially and successfully purified and greatly improved the efficacy of natural remedies. Just think, of course, of aspirin, whose active ingredient is derived from the bark and leaves of willow trees, the effectiveness of which was well known already to Hippocrates 23 centuries ago.

 

Maybe, just maybe, we are in the process of witnessing a similar emergence of acupuncture from pseudoscience to medical acceptability. I say maybe because it is not at all clear, as yet, whether acupuncture has additional effects above and beyond the placebo. But if it does, then it should certainly be used in some clinical practice, mostly as a complementary approach to pain management (it doesn't seem to have measurable effects on much else).

 

But these two counter examples struck both Maarten and me as rather unconvincing. They are better interpreted as specific practices, arrived at by trial and error, which happen to work well enough to be useful in modern settings. The theory, such as it is, behind them is not just wrong, but could never have aspired to be scientific to begin with.

 

Acupuncture, for instance, is based on the metaphysical notion of Qi energy, flowing through 12 "regular" and 8 "extraordinary" so-called "meridians." Indeed, there are allegedly five types of Qi energy, corresponding to five cardinal functions of the human body: actuation, warming, defense, containment and transformation. Needless to say, all of this is entirely made up, and makes absolutely no contact with either empirical science or established theoretical notions in, say, physics or biology.

 

The situation is even more hopeless in the case of "herbalism," which originates from a hodgepodge of approaches, including magic, shamanism, and Chinese "medicine" type of supernaturalism. Indeed, one of Hippocrates' great contributions was precisely to reject mysticism and supernaturalism as bases for medicine, which is why he is often referred to as the father of "Western" medicine (i.e., medicine).

 

Based just on the examples discussed above - concerning once acceptable scientific notions that slipped into pseudoscience and pseudoscientific notions that never emerged into science - it would seem that there is a potential explanation for Maarten's black hole. Cold fusion, phrenology, and to some (perhaps more debatable) extent alchemy were not just empirically based (so is acupuncture, after all!), but built on a theoretical foundation that invoked natural laws and explicitly attempted to link up with established science. Those instances of pseudoscience whose practice, but not theory, may have made it into the mainstream, instead, invoked supernatural or mystical notions, and most definitely did not make any attempt to connect with the rest of the scientific web of knowledge.

 

Please note that I am certainly not saying that all pseudoscience is based on supernaturalism. Parapsychology and ufology, in most of their incarnations at least, certainly aren't. What I am saying is that either a notion begins within the realm of possibly acceptable science - from which it then evolves either toward full fledged science or slides into pseudoscience - or it starts out as pseudoscience and remains there. The few apparent exceptions to the latter scenario appear to be cases of practices based on mystical or similar notions. In those cases aspects of the practice may become incorporated into (and explained by) modern science, but the "theoretical" (really, metaphysical) baggage is irrevocably shed.

 

*****

 

Can anyone think of examples that counter the idea of the pseudoscience black hole? Or of alternative explanations for its existence?

 

Originally on Rationally Speaking

 

http://www.science20.com/rationally_speaking/pseudoscience_black_hole-126943


