General Methodology -- Opening Post
Confucius said: “A craftsman who wishes to do his work well must first sharpen his tools” (Analects, “Wei Ling Gong” 10); the “tools” here presumably mean implements. But to do something “right”, one must attend not only to “tools” but also to “method”. “Though one may not hit the mark, one will not be far off” (Great Learning, Commentary 10) speaks, strictly, of “attitude”, but the same point applies to “method” as well. This is what the study of “scientific method” within the foundations of science is concerned with. Around junior high school, my father gave me a book on “scientific method”; I have forgotten its title. It was my first encounter with the subject, and it naturally left a deep impression. As I grew up I continued to pay close attention to it. Besides logic, I have also read Descartes and Popper. This blog has carried quite a few commentaries on the subject over the years. Following the spirit of the other “opening posts” here, I am opening this separate column.
When Physicists Play Philosopher -- Paul Austin Murphy
Physicists have a tradition of leaning toward “supra-rational ideas”. Some forty-odd years ago I read Ken Wilber's Quantum Questions; that book gives a fairly detailed account of the views held in this regard by the great physicists we know well, such as Heisenberg, Schrödinger, and Einstein. I can understand the reason for this leaning: these physicists spent their whole lives pursuing “truth” and the ultimate nature of the universe (though I do not presume to say whether they also pursued “meaning”); in their later years, they found they had not caught so much as the shadow of “truth”, let alone the hem of its skirt. Turning onto another path, even a somewhat unorthodox one, was therefore itself a rational move. See the piece on Paul Davies below.

Physicist Paul Davies’s Mysticism

The English physicist Paul Davies deems himself to be so rational that he’s realised that rational thought itself has its limitations. (At least this is how Davies’s views can be interpreted.) That critical position relates to Davies’s fixation on “the meaning of the universe”. He doesn’t believe that “the universe is absurd or meaningless”. Nor does he want to live in an “absurd” universe. However, Davies also believes that science and rational thought cannot show us the meaning of the universe. What can show us its meaning is what “lies outside the usual categories of rational human thought”.

Paul Austin Murphy, 12/03/25

Paul Davies states that the belief that “the universe exists, and exists in the form it does, reasonlessly” is no better than “many metaphysical and theistic theories”. Of course, readers will need to know exactly what Davies means by the word “reasonlessly”. (Davies does go into detail elsewhere.) What Davies desires is for someone (perhaps himself!?) “to construct a metaphysical theory that reduces some of the arbitrariness of the world”. It seems that such a metaphysical theory must include some kind of mysticism. However, if we don’t construct this theory, then “[w]e are barred from ultimate knowledge, from ultimate explanation, by the very rules of reasoning that prompt us to seek such an explanation in the first place”. Davies believes that we are barred partly or even primarily because of the findings of such people as Cantor and Gödel. However, we are not barred if we embrace “a different concept of ‘understanding’ from that of rational explanation”. That different concept is supplied by some kind of mysticism.

All that said, Davies often stresses that he has “never had a mystical experience myself, but I keep an open mind about the value of such experiences”. In various of Davies’s books, the statement that “I have never had a mystical experience myself” has been used a few times. Readers may get the impression that it’s important to Davies that he tells his readers that he hasn’t had a mystical experience himself. Why is that? Sceptically, if Davies had mystical experiences, then he’d be one of the mystics he praises so often in his books. That would muddy the water somewhat. For one, Davies wouldn’t be treated seriously by many of his fellow scientists, at least scientists as they’re portrayed by Davies himself. So Davies settles for simply keeping an open mind about the value of such experiences.

To be sceptical again, this may remind some readers of the believers in UFOs, astral travelling, etc. who claim that they have an “open mind”, and that their critics have closed minds. The logic of this type of stance is that the person with the most open mind is the person who believes almost everything. Of course, this is partially unfair to Davies in that he’s a physicist who also provides technical — and sometimes convincing — arguments to back up his metaphysical and mystical positions… unlike most people with open minds. In any case, Davies believes that mystical experiences may “provide the only route beyond the limits to which science and philosophy can take us, the only possible path to the Ultimate”.
We can ask questions about the nature of “the Ultimate”, how anyone knows that mystics have discovered it, and many more questions.

The Quick Fixes of Self-Important Mystics

Despite the self-image of many mystics (as well as the devotion they receive from their followers), there’s something self-important about them — or at least about their claims. Davies himself says that “mystics claim that they can grasp ultimate reality in a single experience”. (This sounds rather close to Zen Buddhism.) Can they? Who says so? Well, they do. Yet even if they could grasp ultimate reality, how would anyone else know that they have done so? Davies finished that sentence by saying “in contrast to the long and tortuous deductive sequence (petering out in turtle trouble) of the logical-scientific method of inquiry”. Perhaps that gets to the heart of the matter. Mystics (or at least some mystics) don’t want to do the hard work. They don’t want to spend any time on a “long and tortuous deductive sequence”. Instead, Ultimate Reality (whatever that is) is grasped in a single experience. Of course, not all mystics claim to find Ultimate Reality. According to Davies, some simply find “an inner passionate, joyful stillness that lies beyond the activity of busy minds”.

Mystical Experience and Culture

Here’s something very interesting. Davies happily admits that “[t]he language used to describe these experiences usually reflects the culture of the individual concerned”. What Davies doesn’t admit is that the language of the culture may determine the actual content of these experiences too, not only the later descriptions. In other words, Davies might have assumed a languageless and concept-free experience which was only later described in the language of a specific culture. But what if the words, terms, concepts, ideas, etc. of this natural language permeated the actual experience itself? (Add to that the memories of the subject who has the experiences, which were themselves determined by the language of a specific culture.)

Davies tells his readers that “Einstein spoke of a ‘cosmic religious feeling’ that inspired his reflections on the order and harmony of nature”. He then quotes science writer David Peat. The passage goes as follows:

“‘[A] remarkable feeling of intensity that seems to flood the whole world around us with meaning. . . . We sense that we are touching something universal and perhaps eternal, so that the particular moment in time takes on a numinous character and seems to expand in time without limit. We sense that all boundaries between ourselves and the outer world vanish, for what we are experiencing lies beyond all attempts to be captured in logical thought.”

Note Peat’s uses of the words “seems” and “we sense”. This is an acknowledgement that Peat may not have touched something universal, and that even though he sensed that all boundaries between himself and the outer world vanished, they may not have actually done so. The influence of language was mentioned earlier. Carrying on from that, perhaps the passage above simply reflects all the books and pieces on mysticism which Peat has read. It certainly comes across as almost clichéd. Peat’s description also reads like one under the influence of hallucinogenic drugs. Yet even here it can be argued that psychedelic experiences are at least partially determined by what the tripper has previously read about other trips, mystical experiences, what “hippies” in the 1960s have said, etc.
(Personally, I once noted that those trippers who had no interest in mysticism, hippy culture, Carlos Castaneda, Ken Wilber, Timothy Leary, Aldous Huxley, Rudy Rucker, etc. didn’t have experiences like the one described above.) More specifically, take Peat’s statement that “[w]e sense that all boundaries between ourselves and the outer world vanish”. Did Peat have this precise experience because he’d previously read about other people who had experienced all boundaries between themselves and the outer world vanish? Again, it can be conceded that experiences like this do occur when people trip. Yet that may be because they’ve read about such things. That said, there’s a chicken and egg situation here. In other words, people have read about such things because people have experienced such things. So surely there must have been a time when people experienced these things even before they could have read about such things. This leads to the possibility that such experiences may occur in cultures without books. Yet they still have an oral culture. So pretty much the same arguments will apply in these cases too. Davies also quotes Ken Wilber on mystical “Eastern” experiences. The passage goes as follows: “In the mystical consciousness, Reality is apprehended directly and immediately, meaning without any mediation, any symbolic elaboration, any conceptualization, or any abstractions; subject and object become one in a timeless and spaceless act that is beyond any and all forms of mediation. Mystics universally speak of contacting reality in its ‘suchness’, its ‘isness’, its ‘thatness’, without any intermediaries; beyond words, symbols, names, thoughts, images.” How did Wilber know all this? More specifically, how did he know that such experiences didn’t involve any symbolic elaboration, any conceptualization, or any abstractions? How did he know that the mystic moved beyond words, symbols, names, thoughts, images? More strongly, how did Wilber know that Reality is apprehended directly and immediately? Paul Davies vs Other Scientists Throughout his many books, Davies often tells his readers what other scientists believe about religion, mysticism and certain metaphysical beliefs. He’s not too happy with what they believe. That said, the following passage isn’t judgemental: “Most scientists have a deep mistrust of mysticism. This is not surprising, as mystical thought lies at the opposite extreme to rational thought, which is the basis of the scientific method.” Apart from the first sentence (which would need a survey of some kind to be demonstrated), I couldn’t have put it better myself. In this chapter, Davies is very keen to show us the limitations of rational thought when it comes to both physics and mathematics. He writes: “My own feeling is that the scientific method should be pursued as far as it possibly can. Mysticism is no substitute for scientific inquiry and logical reasoning so long as this approach can be consistently applied.” Yet Davies clearly believes that it can’t be consistently applied in such cases. And that’s where he takes a dive into mysticism. Or, it should be said, he takes a dive into “espousing mysticism”. According to Davies, scientific inquiry and logical reasoning hit a brick wall when it comes to “ultimate questions”. Then “science and logic may fail us”. (At least here Davies isn’t saying that science and logic will fail us.) Davies relies on the work of Cantor and Gödel to show his readers that there can’t be a literal theory of everything. 
Gödel’s theorem “is nevertheless full of paradox and uncertainty”. In short, “[t]here will always be truth that lies beyond, that cannot be reached from a finite collection of axioms”. This may be a classic example of a person stretching the meaning of Gödel’s theorems beyond their proper boundaries. Davies then put the cream on the cake when he stated the following: “And here we encounter once more the Gödelian limits to rational thought — the mystery at the end of the universe. We cannot know Cantor’s Absolute, or any other Absolute, by rational means, for any Absolute, being a Unity and hence complete within itself, must include itself.” … But we can experience (if not know) the Absolute through mystical means… at least according to Davies. In terms of Davies himself, he seems to be saying that it isn’t doomed to failure if one takes the mystical route. The mystical route even trumps Stephen Hawking’s “theory of everything”, which isn’t about everything at all. It’s about all the fundamental forces and particles in the universe. The mystical experience of everything, on the other hand, really does include everything. (There are indeed broader theories of everything, and even Hawking himself went further.) The question is whether these limitations have the consequences Davies believes they have. In addition, it doesn’t follow that even if there are a multitude of limitations to rational thought that this should lead us to mysticism. Indeed, mystics — as well as Davies himself — have it that there actually are no limits to what human persons can achieve. It’s just rational thought that has various limits. Thus, the mystic realises that rational thought has its limits, and then goes beyond it to grasp Ultimate Reality. Yet most mystics, both today and historically, have “stretched thought to its limits” without practicing science, mathematics or even philosophy. Instead, they taken what Davies himself calls a “short cut”. That’s why Davies is keen to point out those physicists who didn’t take a short cut. He tells his readers that “many of the world’s finest thinkers, including notable scientists such as Einstein, Pauli, Schrodinger, Heisenberg, Eddington, and Jeans, have also espoused mysticism”. These scientists were human beings. So it’s not much of surprise that some scientists embraced mysticism, just as many have embraced religion. There’s also a difference between espousing mysticism and actually indulging in mysticism. It’s not clear that all the named scientists above actually experienced any mystical states (unless those two words are interpreted very loosely). Yet they did espouse mysticism in various ways and to various degrees. Davies Makes the How-Why Distinction In Davies’s nutshell, science and logic can address the “how questions”, but not the “why questions”. [See note.] This how-why binary opposition has become a bit of cliché when it comes to the critics of science, so it’s worth unpacking. Firstly, we have Google AI mode on this subject: “The criticism of the rigid ‘how-why’ binary opposition in the philosophy of science centers on the argument that it is a false dichotomy and an oversimplification of scientific explanation. Critics contend that ‘why’ and ‘how’ questions are deeply intertwined, with the answers to ‘how’ questions often providing the substance of ‘why’ explanations. “[ ] Many ‘why’ questions in a scientific context (e.g., ‘Why is the sky blue?’) can be answered by providing a detailed ‘how’ explanation (e.g., how light scatters in the atmosphere). 
The distinction often depends on the level of analysis or the specific context of the inquiry, rather than an inherent difference in the nature of the questions themselves.” Now let’s specifically tackle why-questions. As the American-English philosopher Gordon Park Baker (1938–2002) put it: “The unexamined question is not worth answering.” Baker added: “To accept a question as making good sense and embark on building a philosophical theory to answer it is already to make the decisive step in the whole investigation.” Another problem is summed up by Gordon Baker: “Questions, just as much as assertions, carry presuppositions.” Baker’s questions about questions are partly Wittgensteinian in nature. Thus, readers can certainly note his Wittgensteinian points in the following: “[T]o suppose that the answers to philosophical questions await discovery is to presuppose that the questions themselves make sense and stand in need of answers (not already available). Why should this not be a fit subject for philosophical scrutiny?” Indeed, Wittgenstein did have things to say on the nature of many philosophical questions (both in his “early” and “late” periods). His position is partly summed up in this passage from Robert W. Angelo. (This ends with a quote from Wittgenstein himself.) Thus: “[N]onsense in the form of a question is still nonsense. Which is to say that the question-sign [] can only be rejected, not answered: ‘What is undefined is without meaning; this is a grammatical remark.’ [].” Another good way of summing up the problem with these philosophical why-questions is also cited by Gordon Baker. He wrote: “To pose a particular question is to take things for granted, to put some things beyond question or doubt, to treat some things as matters of course.” One obvious “presupposition” to a question is that there’s an answer to it — or at least a possible answer. (這句話顯然有問題) To sum up. Aren’t the askers of these types of philosophical why-questions “taking certain things for granted”? That is, aren’t they taking for granted that their questions are legitimate and that there are answers? Moreover, aren’t these questioners also “put[ting] some things beyond question or doubt”, as well as “treat[ing] some things as matters of course”? Note: I’ve commented on why-questions many times throughout the years. The last section, admittedly, is almost a copy-and-paste of previous uses. Written by Paul Austin Murphy MY PHILOSOPHY: https://paulaustinmurphypam.blogspot.com/ My Flickr Account: https://www.flickr.com/photos/193304911@N06/
The “Science by the Masses” Thesis -- Steve Fuller
Although I hold a master's degree in physics, I am not a physicist, let alone a scholar in the foundations of science. Based on my half-baked grasp of the latter, and on a layman's common-sense knowledge of the natural and social sciences, I do not think Kuhn's theory of “paradigm shifts” holds up. On the principles that “many heads are better than one” and “many hands make light work”, the view of Professor Fuller, the author of the article below, should have some merit. On the other hand, “science” is, after all, a professional field: beyond extensive “observation” and “experiment”, the production of a “theory” must also follow a set of rigorous procedures and strict standards. “Science by the masses” can be encouraged, and it will contribute some helpful data and inspiring “ideas”; but to expect it to produce “breakthrough theories” would be naive. Perhaps Professor Fuller has applied the idea that “quantitative change leads to qualitative change” in the wrong setting.

The next scientific revolution won’t come from scientists
80% of scientific studies are ignored, but that's about to change
Steve Fuller, 12/19/25

Editor’s Notes: Thomas Kuhn taught us that scientific revolutions arrive only in moments of a crisis of the paradigm. Now, as philosopher Steve Fuller argues, we may be able to intervene without having to wait for a Kuhnian paradigm shift. In a scientific world dominated by computer simulations and unread research, generative AI offers a radical solution. By mining the entirety of scientific knowledge and placing it in the hands of non-experts, AI could trigger a metascientific revolution -- one that finally delivers on science’s promise of collective empowerment.

The most influential work on the nature of science for at least the past fifty years has been The Structure of Scientific Revolutions, first published in 1962 by a young physicist-turned-historian, Thomas Kuhn. Although influential, the book has also been widely misunderstood. It is quite common to think -- certainly based on the title -- that Kuhn was providing a formula for producing scientific revolutions. On the contrary, he was arguing that revolutions only happen once scientists confront insurmountable obstacles in attempting to solve their own research problems. In the Kuhnian jargon, the “paradigm” is then in “crisis.” Such crises typically involve the presence of phenomena that cannot be explained within the terms set by the paradigm, even after much research has been devoted for many decades -- if not centuries -- to the matter. Kuhn’s own case in point was the persistent difficulties that physicists working in the Newtonian paradigm faced with accounting for the nature of light, which eventuated in the relativity and quantum revolutions in the early twentieth century. The paradigm that was formed after those revolutions continues to dominate physics research today.

Nowadays, many authors -- including accredited scientists -- believe that physics is once again in crisis and that a revolution is required to establish a new paradigm. Interestingly, the first call came in 1996 from an editor at Scientific American magazine, John Horgan, whose book The End of Science predicted -- accurately, I believe -- that the increasing use of computer simulations in cutting-edge research across the sciences would shift the site of validation from hard empirical fact to more aesthetic criteria, such as beauty and elegance, which are normally associated with pure mathematics. Horgan had interviewed Kuhn himself but went beyond him to suggest that scientific research was becoming the collective realization of an artistic vision that might then be imposed as the lens through which everyone sees the world. This reading of Kuhn makes sense if you think about “paradigm” as meaning “worldview” or “world-picture.” Imagine Newton as being like Rembrandt, both master artists who first sketch a vision and fill in much of the detail but then leave it to others to follow their example and complete the final work. Horgan was vehemently opposed by the scientific establishment. Nevertheless, he had history on his side.
What we now regard as the first ‘scientific revolution’ in seventeenth century Europe started a shift in the source of privileged evidence from the field to the lab. It was ultimately about not trusting your senses until they were systematically mediated, starting with telescopic observation but quickly incorporating all the other instruments that are commonly found in scientific laboratories today – not least computers. In this respect, physics was the vanguard science, followed by chemistry and then biology and the social sciences. We can regard what began four hundred years ago as a kind of technical solution to the profoundly fallen nature of humanity, a common belief shared by these early scientists, due to an exceptionally strong reading of the Christian doctrine of “Original Sin.” The secular upshot was that they felt they had to “steelman” (to make the strongest possible argument for a claim) everything they proposed about the world because their compromised minds made any naked observations and intuitions fundamentally unreliable. Whereas Descartes proposed that a strict adherence to deductive reasoning as a mental discipline could perform that corrective function, most scientists adopted a version of Francis Bacon’s “experimental method,” whereby technology is taken to provide an independent arbiter of human judgment. This is the trajectory that leads us from Galileo’s original telescope to science’s omnipresent reliance on computers today. Moreover, the fallen state of humanity extended to natural languages. Over the subsequent centuries, it has inspired various projects of linguistic renewal, most of which devolved into the jargons that have rendered scientific writing impenetrable even to educated non-experts. Nevertheless, for Kuhn, the combination of specialized discourse and instrumentation served to ringfence scientific inquiry from those who might want to exploit prematurely its novel and powerful insights. He invoked the 1660 Charter of the Royal Society of London as the institutional origin of modern science for its official insulation of scientific from non-scientific concerns. From that moment, Kuhnian paradigms could flourish, initially by a self-selecting group of correspondents but eventually by academically trained professionals. It is perhaps no accident that the person who coined “scientist” in the 1830s to mean a scientific professional, William Whewell, was himself a theologian. He understood science as a “vocation,” a kind of secular priesthood. This helps to explain why even today “lay people” may refer to either those who have not taken Holy Orders or those who have not received advanced formal training in science. In both cases, the laity participates through involvement in public demonstrations of the faith. Thus, Whewell was also a founder of the British Association for the Advancement of Science. However, there is a downside to thinking about science as this autonomous and somewhat exalted form of inquiry, which has in turn motivated the periodic calls for revolution. In a famous 1965 debate in London, Kuhn’s only serious contender as a twentieth-century science influencer, Karl Popper, declared that science should be subject to “permanent revolution,” a phrase he provocatively adapted from the Trotskyites of the time. He worried that science’s institutional autonomy undermines the sort of cognitive autonomy that inquirers need to advance the frontiers of knowledge. In short, paradigms are potential incubators of groupthink. 
But is permanent revolution the solution? Not even Popper’s admirers, who were generally more sensitive to political matters than Kuhn, could follow him on this point. Calls for permanent revolution have tended to result in repeated purges of the sort all too familiar from the French and Russian Revolutions. Epistemologically speaking, Popper’s signature critical stance to taken-for-granted assumptions in science would quickly dissolve into a self-devouring skepticism, which eventually would call into question the very legitimacy of scientific inquiry. Many who fear that we inhabit a “post-truth condition” believe that we have already landed in Popper’s nightmare scenario. Be that as it may, does it follow that Kuhn is correct that scientific revolutions should be postponed as long as possible? The latest development in computer-based technology -- generative artificial intelligence -- sheds an interesting light on this question. Sociologists from Robert Merton onward have long observed that the collective attention span of the scientific community is highly skewed. The referencing habits of scientists suggest that up to 80% of the published scientific literature is effectively ignored, notwithstanding the exponential growth in the number of scientists and their publications over the past half-century. Indeed, nowadays most scientists publish not to be read by their colleagues, but to be promoted in their universities. Nevertheless, it does not follow that these unread publications are not worth reading. On the contrary, they are a repository for what the information scientist Don Swanson forty years ago called “undiscovered public knowledge.” After all, the main reason such publications go unread is their failure to directly address the research frontier of the scientific paradigm in which they are presumed to be located. However, the history of science teaches that the whole of reality always exceeds the grasp of the sum of current paradigms. While it may be difficult to alter the reading habits of scientists whose careers depend on surfing the latest research wave, a computer programmed with all the published scientific literature is, in principle, only burdened by the interests of its users. In the case of generative AI, whose cardinal virtue is the ability to match and combine texts in very large databases, one can pose to it novel questions that cross disciplinary boundaries and reveal the bases for answers that require relatively little additional original research. Thus, Swanson -- an educated person with no medical background -- could discover a treatment for a mysterious disease, namely that fish oil could reduce blood viscosity and improve circulation in Raynaud’s disease, using the relatively primitive search and retrieval technologies available the 1980s. Generative artificial intelligence has the potential to kill several epistemic birds with one stone. Equipped with all the published scientific literature, it can provide equal access to users who are educated but not necessarily expert. In this way, the “laity” can redress the biases of the professionals in science, resulting in alternative scientific cultures that hybridize the default academic trajectories of paradigms and contemporary interests in knowledge. Yes, it would break the monopoly that academics have in knowledge production, but it would also allow all the available knowledge to be accessible to the widest range of people. 
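As an aside (this is my illustration, not Fuller's), Swanson's fish-oil finding mentioned above is the textbook case of “literature-based discovery”: two literatures that never cite one another nevertheless share intermediate terms, and spotting that overlap is all that the primitive search tools of the 1980s had to support. The sketch below is a toy version only; the paper snippets and term lists are invented.

```python
# Toy sketch of Swanson-style "literature-based discovery" (the ABC model).
# Two bodies of literature that never cite each other can still share
# intermediate "B" terms, hinting at a hidden A-C link. All snippets are invented.
RAYNAUD_PAPERS = [
    "raynaud's disease involves high blood viscosity and reduced circulation",
    "platelet aggregation and blood viscosity worsen raynaud's symptoms",
]
FISH_OIL_PAPERS = [
    "dietary fish oil lowers blood viscosity in healthy subjects",
    "fish oil reduces platelet aggregation and improves circulation",
]

def vocabulary(docs):
    """Collect the set of words appearing in a small corpus."""
    return {word for doc in docs for word in doc.split()}

STOPWORDS = {"and", "in", "the", "of"}

# Intermediate terms shared by both literatures, minus trivial words.
bridge_terms = (vocabulary(RAYNAUD_PAPERS) & vocabulary(FISH_OIL_PAPERS)) - STOPWORDS
print(sorted(bridge_terms))
# ['aggregation', 'blood', 'circulation', 'platelet', 'viscosity']
# The overlap suggests the hypothesis Swanson published: fish oil may help Raynaud's.
```

Generative AI, on Fuller's telling, promises to do this kind of cross-literature matching at scale and on demand for non-expert users.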
This “metascientific revolution” would finally deliver the promise of science to be a vehicle of mass empowerment.

Steve Fuller is the Professor of Social Epistemology at the University of Warwick. Among his many books are Kuhn vs Popper: The Struggle for the Soul of Science (Icon and Columbia) and Reforming the Governance of Science (Springer).
Why Evangelicals Reject Evolution - Tanner on Truth & Myths
Why Evangelicals Still Don’t Understand Evolution
The stubborn refusal to accept basic science — and the absurdity of the arguments Evangelicals still throw at evolution. Tanner on Truth & Myths, 08/09/25 The misunderstanding of evolution is widespread, but Evangelicals in the United States take it to a new level. From willful ignorance of dinosaurs to outright denial of science, it seems they’re determined to settle back into the Stone Age. Modern science and evolutionary theory make it clear: the Earth is not 6,000 years old. Evolution is not — and has never been — a conspiracy to take down Christianity. It’s simply the scientifically backed answer to the question of why and how there is so much life on Earth. Yet the tireless resistance from Evangelicals is mind-boggling. For the sake of sanity, let’s slice it all up. 1. The Adam and Eve Obsession The figure of Adam and Eve captivates many Evangelicals. For them, their existence is a cornerstone of the doctrine of original sin. If evolution is true, then Adam and Eve weren’t historical figures — which renders Christianity, in their view, fundamentally flawed. Without a literal Fall of Man, salvation becomes unnecessary. So then, what is Jesus there for? Historians and scholars — like Peter Enns in The Evolution of Adam — argue that the story is symbolic, not literal. Genesis is best understood as part of Ancient Near Eastern literature, not a factual account. When analyzed using historical and critical biblical methods, the narrative does not support a literal interpretation of Adam, Eve, or original sin. Evangelicals are right to sense that evolution conflicts with a literal reading of Genesis. But Genesis is not a scientific textbook. It’s an ancient creation myth. There’s no need for science and religion to be at war here. Yet for Evangelicals, accepting evolution feels like pulling the first thread in a theological unraveling. They cannot fathom a faith that doesn’t begin with Adam and Eve in a literal garden. That’s a problem — but it’s not one evolution caused. 2. The Literal Interpretation of the Bible It’s not just Adam and Eve — it’s the entire Bible. Evangelicals cling to a literal interpretation of scripture, and that’s where they stumble. Evolution directly contradicts this approach. But the Bible was never meant to be read as a scientific manual. The ancient authors didn’t know about DNA, fossils, or the Earth’s age. They were writing in a different time, with different tools and intentions. Researchers like Bart Ehrman have shown how biblical authors used cultural narratives and mythological language to convey meaning — not scientific truths. The literalism imposed by modern readers stems from theological frameworks developed long after the texts were written. Ironically, Evangelicals refuse to consider that the Bible might contain symbolic depth. For them, abandoning literalism means their entire worldview collapses. That’s how they end up disregarding scientific discoveries that don’t fit their simplistic lens. 3. Evolution Is “Just” a Theory This classic argument — “it’s just a theory” — offers Evangelicals an easy exit. But in science, a theory is not a guess. It’s a well-supported explanation based on evidence and experimentation. Evolution is backed by overwhelming data: fossil records, genetic studies, observable natural selection. Philosophers like Karl Popper have emphasized that a theory is science’s highest form of knowledge. Evolution, rigorously tested and refined, is a robust framework for understanding life. 
Calling it “just a theory” reveals a fundamental misunderstanding of how science works. Evangelicals often misuse the term, showing a lack of grasp on critical scientific concepts. A theory is not weak — it’s strong. Dismissing evolution as “just a theory” is not skepticism; it’s ignorance.

4. The “Missing Link” Myth

Evangelicals often argue that because the “missing link” hasn’t been found, evolution must be false. This is a flawed notion. Evolution is not a linear process — it’s a branching tree. There is no single fossil that connects everything. Instead, we have a cumulative body of evidence from countless species and populations. In The Greatest Show on Earth, Richard Dawkins explains that the “missing link” concept doesn’t make sense in evolutionary biology. Evolution is a branching process, not a straight line. The so-called “missing link” is not one fossil — it’s a mosaic of transitional forms that show gradual development over millions of years. Despite thousands of fossils and genetic evidence, Evangelicals cling to this outdated argument. It’s baffling — and it reflects a refusal to engage with the actual science.

5. Denying the Evidence

Evangelicals actively deny the evidence. We’re not just talking about centuries of fossil records — we’re talking about mountains of genetic data and observable examples of natural selection. Yet to them, it’s all a lie. Some even claim it’s the Devil’s work. Why? Because their belief in the literal truth of the Bible overrides everything. Science becomes impossible to accept. There’s no reasoning with that. Scholars like Jerry Coyne, in Why Evolution Is True, insist the evidence is overwhelming. From fossils to genetics, the conclusion is clear: all life shares common ancestry. Ignoring this evidence isn’t just anti-science — it’s intellectually dishonest. But science doesn’t care what you believe. It’s not out to destroy religion. It’s out to understand reality. Thanks to fossil archaeologists and geneticists, the facts are in. When Evangelicals refuse to accept evolution, they’re not just rejecting science — they’re choosing to live in a world where evidence doesn’t matter.

6. The Wrong Choice: Faith and Science

Evangelicals and evolutionists have a problem which boils down to a false choice: it is either faith or science. This is a wholly ungrounded position. Each of these approaches addresses something different. Science takes on the task of explaining the physical world and all life within it. Faith handles the unseen, dealing with purpose and ethics. They do not stand at odds with each other. This ungrounded position is addressed by some, such as Francis Collins. The American physician, geneticist, and devout Christian published The Language of God: A Geneticist’s Quest for Truth, in which he argues that faith and science can coexist. Collins, who led the Human Genome Project, states that scientific breakthroughs only enrich his love for God, not lessen it. The idea that one has to pick between the two is primarily a byproduct of fear, not reason. Creationists assume this choice is real because believing in evolution would force them to part with faith altogether. This is false. The evidence for evolution can coexist with God. Many Christians, including some Evangelicals, have figured out that accepting one does not mean forfeiting the other. Science has not destroyed belief in God.
7. Reason for Denial of Evolution

To be frank, evolution is rejected not because people have genuinely weighed it and found it wanting; rather, the denial is driven by political ideology. Evangelists, for example, have a strong interest in preserving the ignorance of their followers about evolution, because it helps them manage the narrative. The idea that science poses a threat to faith is a narrative that helps them push their agenda without any questions. Evolution is simply the convenient scapegoat that helps them further their narrative. Evangelists have adopted evolution as a symbol in the culture war. The battle is no longer about science. Rather, it is about the ideology that maintains their control. The culture war is a political tool; thus, they have to cling to outdated arguments even when the evidence is right in front of them.

The Verdict

We can firmly conclude that for Evangelicals, the understanding of evolution remains out of reach, not because of a lack of information, but rather because of an unwillingness to access it. Denial of reality is far more convenient than confronting one’s deeply rooted belief system. Evolution does not endanger Christianity, but it certainly challenges the unconditional, fear-peddling interpretation of the holy text. Rather than finding peace in reality, they choose to oppose it, which is truly tragic. Any discussion surrounding evolution should not include options or personal preference, because it is, and always will be, a fact. It is high time Evangelicals came to terms with that. And now, feel free to follow for more insights and leave a comment to join the conversation. See you next time!

Sources and Further Reading
* Why Evolution Is True (Jerry Coyne, 2009)
* The Greatest Show on Earth (Richard Dawkins, 2009)
* The Evolution of Adam (Peter Enns, 2012)
* The Bible: A Historical and Literary Introduction (Bart D. Ehrman, 2014)
* The Logic of Scientific Discovery (Karl Popper, 1959)
* The Language of God (Francis Collins, 2006)
* The Republican War on Science (Chris Mooney, 2005)
* Why People Believe Weird Things (Michael Shermer, 1997)

Written by Tanner on Truth & Myths
I write about the myths that shape society, culture, and politics. Blunt takes, sharp history, no sacred cows. Read with curiosity. Leave with better questions. If you enjoyed this post, subscribe here to get notified whenever a new one comes out.
The “Inverted Thinking” Model of Decision-Making - Tom Addison
Although Murphy's Law says that “anything that can go wrong will go wrong” (1), thorough consideration beforehand, such as thinking in terms of “risk assessment” and “putting oneself in an unassailable position”, should lower the probability of error or failure. Mr. Addison's reading, in the article below, of the Munger aphorism he quotes can be understood through the common saying that “two negatives make a positive.”

Note:
1. “Anything that can go wrong will go wrong.”

The Incredible Decision-Making Mental Model You’ve Never Heard Of
Practical wisdom from a personal hero of mine
Tom Addison, 06/22/25

Charlie Munger famously said: “All I want to know is where I’m going to die so that I don’t go there.” On the surface, this seems like a typical quick-witted, funny Charlie Munger thing to say. However, Munger isn’t being literal when he talks about dying. What he’s doing is using it as a metaphor for avoiding things that almost certainly guarantee personal disaster, failure, and ruin. It’s only once we delve a bit deeper that we see how these 17 words reflect one of the key mental models Munger lived by throughout his long life. He’s using the model of inversion. He’s trying to solve problems in reverse by looking at them backwards. Instead of asking “how can this go right?”, he’s asking “how can this go terribly?”

To use a real-world example: before asking “how do I make sure I become financially stable in the future?”, start by asking, “How can I guarantee I stay poor?” Here’s a list of things you might come up with:
* Carrying on gambling my money away.
* Buying things I don’t need and can’t afford.
* Investing in things I don’t understand.
* Taking financial advice from people who are already poor.
* Marrying someone who doesn’t share the same values as me.
* Taking on unnecessary debt.
* Having no financial plan or goal to aim for.

Asking how things can go wrong first, rather than last, makes problems significantly easier to solve. Asking questions in reverse allows us to jump straight to the heart of the problem and immediately pinpoint the worst-case scenario, rather than encountering it later on. And ultimately (hopefully) avoid total disaster. Munger argues that by consciously avoiding terrible things, we’re way more likely to succeed in the long term. It makes a lot of sense, and at the end of the day, who are we to argue otherwise? It worked phenomenally well for Munger throughout his long life and career, so why shouldn’t it work for us? What do you think? Will you be using the power of inversion in the future?

Thank you for reading this article and spending your most precious asset on me — your time. I appreciate it, and I hope to see you again soon! Want to be notified whenever I publish a new article? Click here. Also, become part of a growing community and subscribe to my Substack for absolutely free!

Written by Tom Addison
I write about personal development, books, and key life lessons I learn. Please, feel free to subscribe. Email me on addisontom2@gmail.com to connect with me.

Published in ILLUMINATION
We curate & disseminate outstanding stories from diverse domains to create synergy. Apply: https://digitalmehmet.com https://substackmastery.com Subscribe to content marketing strategy: https://drmehmetyildiz.substack.com/ External: https://illumination-curated.com
7 Small Writing Tips for Non-Fiction -- Mental Garden
7 Harsh Writing Lessons I Learned After 300 Articles — So You Don’t Waste Years How to dramatically accelerate your growth Mental Garden, 11/21/25 I want you to know this: I wish I had read this when I first started. If you want to start writing and you feel like your words could go further — that they lack impact or fail to resonate with anyone… I’ll share with you 7 keys I’ve learned and applied during more than 500 days of writing and over 300 published articles. Here are the 7 keys you can use right now to strengthen your writing. 1. Ignore the word count A text isn’t better because it has 800, 1,000, or 2,000 words. The only thing that matters is: Did you answer what the reader came looking for? A title is nothing more than a promise. It tells the reader what they’ll find once they begin reading. Your only real job is to deliver that valuable information as clearly as possible. * A Twitter thread won’t have 20,000 words, but it might have 200. * A newsletter won’t have 200 words, but it might have 1,200. * A book won’t have 1,200, but it might have 50,000. The medium you write for defines what’s acceptable — nothing more. Just write within that range; it doesn’t matter if today’s newsletter is 700 or 1,500 words. When I started writing on Substack, I’d think: “Too short, it should have at least 800 words.” Now I tell myself: “If the idea is resolved in 700, then it’s 700; if it needs 1,500, then it’s 1,500.” The metric is not the word count. The metric is: Did the reader leave with what they came for? If yes, they’ll come back. 2. Make an “anti-list” People are tired of hearing the same things. * The perfect diet. * The perfect study routine. * The 10 secrets of productivity. They’re always the same clichés: sleep 8 hours, drink water, plan, meditate. Everyone repeats them. And when everyone repeats the same thing, you add nothing. Imagine your writing is a fruit shop. Apples are the typical advice: easy to sell, everyone likes them. But here’s the problem: there are fifty fruit shops on the same street selling the exact same apples. No one will remember yours. The solution is simple: offer something different. Yes, sell apples — every fruit shop needs some clichés — but add the specialized fruit that makes you unique: passion fruit, dragon fruit, lychee, papaya. Those unexpected fruits make customers say: “I haven’t seen this anywhere else — I’ll try it!” Want to stand out as a writer? Do the opposite of what everyone else is doing. Example: If you want to write “The perfect productivity routine,” you think of waking up early, organizing schedules, taking breaks, drinking coffee, sleeping 8 hours. Sound cliché? Good. Now the interesting part: you can’t use any of that. That’s your anti-list. What else can you offer? When you eliminate the obvious, the unexpected emerges. This technique comes from option suppression — look it up if you want to go deeper. For example, you could start your real productivity list like this: * Batch errands into a single day: groceries, pharmacy, paperwork. * Decide your weekly menu on Sunday: no more “What do we eat today?” * Write your tasks on paper, not your phone: avoid digital distractions. These productivity tips are far less common. That’s the power of the anti-list. Want an example from me? Please See:How aerodynamic drag in Formula 1 applies to productivity. 3. Start strong and end strong The first and last lines are the two pillars of the text — if they’re weak, everything collapses. 
Readers mostly remember what grabbed them at the beginning and what you concluded at the end. What’s in the middle matters, of course, but the introduction and the ending carry the maximum tension; the middle is just the development. This applies both at the article level and within each block. That’s what makes the text effortless to read and brings key ideas to the foreground. If you look back at what you just read, you’ll see the structure: first a striking statement — “The first and last lines are the two pillars of the text” — then the development that expands it, and finally an unforgettable closing: “That’s what makes the text effortless to read and brings key ideas to the foreground.” You’ve just seen how effective this structure is. When you write online — where attention is fragile and readers can close your article in a second — writing this way ensures they stay. This is due to the “F-pattern effect.” Look it up to understand how we read on the internet and how to adapt your text to that reading pattern. Then you’ll understand why this “strong start and strong finish” is so effective. 4. Don’t write only long paragraphs Not everything has to be dense text blocks. Newsletters are read on small screens, amid distractions, sometimes standing on the subway or bus. Readers need breathing room and text that’s effortlessly legible. Give them exactly that. * Use lists when you enumerate ideas, descriptions, or steps. * Don’t hesitate to include visuals, images, or supporting elements. * Insert short sentences when you want to emphasize something — like in tip #3. This makes your writing more readable and aligned with how people read online. 5. Delete the “I…” “I think…”, “I believe…”. These openings are redundant. You’re the author — obviously the ideas are yours, you don’t need to clarify it constantly. I realized this after re-reading my articles many times: those filler phrases became stones in the path. They tired the reader and added nothing. Deleting “I…” makes the sentence stronger. Instead of “I think writing is a muscle,” write: “Writing is a muscle.” The second hits harder, more directly. The exception? When quoting others. If I mention Hemingway, García Márquez, or a scientific study, I clarify. That’s where attribution matters. In all other cases, readers already know the voice is yours. Anything not attributed to someone else is yours. Clear the stones and weeds from the reader’s path. 6. Warm up before you start No one walks into the gym and lifts 100 kilos without warming up. Why do we do that when we write? Julia Cameron recommended what she called “morning pages,” meaning writing pages about anything that comes to mind to wander, generate ideas, and prepare the mind for focused writing. Spend 10 minutes writing freely. Write about what you dreamed, what you think of the day, what you ate for breakfast. It doesn’t matter. The result doesn’t matter; the goal is to clear mental noise and enter flow. Press enter or click to view image in full size 7. Use few adverbs — avoid the ones ending in -ly Hemingway knew it and García Márquez said it clearly: “Adverbs ending in -mente are an impoverishing vice. I haven’t used any in my books for a long time. I force myself to find richer, more expressive ways of writing.” — Gabriel García Márquez, Living to Tell the Tale Every time you write quickly, subtly, delicately, the text loses strength. A good verb does the job better than any adverb. 
Don’t say “walked slowly,” say “ambled.” Don’t say “spoke passionately,” say “exclaimed.” The message becomes clearer — therefore more memorable. When revising your text, do this exercise: look for adverbs and delete them. You’ll be surprised how little you needed them. Writing better is having the right tools for each situation: * Ignore the word count * Make an anti-list * Start and end strong * Don’t rely only on long paragraphs * Delete the “I…” * Warm up before writing * Use few adverbs — avoid the -ly ones Small practices that, together, make your writing resonate and leave an impression.
Nothing more to add. Your turn: What other writing key has worked for you and deserves to be on this list?

Quote of the day: “Writing as a poet is one thing; writing as a historian, another.” — Miguel de Cervantes, Don Quixote.

Here I plant ideas. In the newsletter, I make them grow. Daily insights on self-development, writing, and psychology — straight to your inbox. If you liked this, you’ll love the newsletter. Join 43,000+ readers: Mental Garden. See you in the next letter, take care!

References
Márquez, G. G. (2002). Vivir para contarla.

Written by Mental Garden
Productivity and psychology insights in useful life lessons +3M monthly views and +300 articles

Published in Change Your Mind Change Your Life
Read short and uplifting articles here to help you shift your thought, so you can see real change in your life and health.
Be Sure to Use the “Root Method” When Reading ---- Mental Garden
請參見拙作《淺談「讀書方法」》(本欄2025/03/18),以及此文(該欄2025/03/17)。 The Root Method: How to Absorb Books Like a Genius (Without Highlighting a Single Page) Stop collecting information — start crafting your own knowledge Mental Garden, 04/30/25 We immerse ourselves in books because we seek to understand the world, to develop new skills, to change the way we think. But if you’re like me — or like so many voracious readers — have you ever finished a book that inspired you, that you even talked about with a friend, but a few weeks later… You can barely remember anything. Maybe you feel like you could get more out of your reading — all those hours and all that wisdom that end up going nowhere, unorganized, not put into practice… Today, I want to share with you the most effective minimalist method I know to fix this. The Root Method. You’ll retain more, understand better, and most importantly: you’ll start connecting ideas across books, authors, and disciplines. Same readings, deeper learning. What is the Root Method? It’s based on a fundamental principle: shifting from passive reading to active learning. 1. Before starting a book, take a blank sheet of paper and write down everything you already know about the topic you’re about to read about. Even if you think you know very little. 2. During your reading session, add what you learn in a different color. 3. Before your next session, review your sheet. During the session, return to step 2. 4. Store the sheets in a binder and study them periodically. That’s it. But what happens inside you while you do this… that’s where the magic lies. Let’s break down each of the 4 steps: Step 1: Before starting a book Take a blank sheet of paper and write down everything you know (or think you know) about the topic. It doesn’t matter if it’s just three words or half a page. The important thing is to force your brain to recall and organize your ideas before consuming new ones. Don’t know anything? Perfect. Write down the questions you have. Or create a mind map with scattered words. The key here is to establish your starting point. It’s a quality filter to see what beliefs you hold, what preconceptions you carry, and it opens you up to the possibility of correcting them later. Step 2: During your reading session After reading a section of the book (a chapter, a block of ideas, or even just a dense page), return to your sheet and add what you learned — using a different color. This detail is crucial. Each color represents a different reading session. Visually, you can see how your knowledge evolves. It’s not just “read and turn the page.” It’s about extracting the best from each session. Documenting your growth. Building a structure of ideas you didn’t have before. Never copy a paragraph or idea word for word. You must explain it in your own words, in a summarized form. This forces you to process the information, not just store it. And while doing so, you’ll likely notice errors in what you wrote at the beginning or in previous sessions. Awesome, right? Correct them, cross things out, jot down the updated version. This comparison makes you think critically about what you wrote, sharpening your learning more and more. Step 3: Before your next session Before starting your next reading session, review your blank sheet. It’ll only take a few minutes, but this simple act refreshes your memory about what you’ve already learned and prepares you to connect the new ideas with what you already know. 
It transforms every learning into building blocks, session after session, allowing you to see connections that previously seemed invisible. * Recurrent keywords or technical terms. * Core ideas that structure the whole topic. * Cause-effect patterns or parallels. In other words: you read more deeply. Step 4: After finishing the book You’ve finished the book and now have several sheets filled with colors, ideas, corrections, and notes. Store them in a folder. That sheet is a map of your learning. A valuable resource you can review later. I recommend rewriting it neatly. Doing so forces you to synthesize again, organize your ideas definitively, and filter out the essential now that you have the full perspective. Then, every few months, review it. Spend a few minutes. This simple, repeated act embeds the information into your long-term memory — it’s called spaced repetition, and it’s the most effective technique for retaining knowledge for years. Why does it work so well? Because it combines 3 powerful learning principles in one simple action: writing. 1. It naturally applies the Feynman Technique When you write and structure what you learn in your own words, you force yourself to explain it clearly. You don’t just repeat — you truly understand what you read. And that’s harder than regular note-taking… but also far more effective. Every connection between concepts, every outline, every review is an opportunity to detect mistakes or gaps. Rewriting is learning because it pushes you to make sense of what’s inside you, just as Feynman recommended. 2. It leverages spaced repetition Spaced repetition is the most effective way to memorize long-term. Every time you review your sheet before a new session, you’re refreshing your memory right when you need it. You fight the forgetting curve with small doses of active recall. And when you rewrite the sheet cleanly at the end, you create a distilled version of the essentials — a synthesis your mind can retain. That’s knowledge you’ve worked for, connected, and internalized. 3. It avoids the collector’s fallacy The collector’s fallacy is the idea that hoarding PDFs, highlighting everything, and taking endless notes equals learning. No. That just fills your folders with unprocessed data. With the blank sheet method, you can’t hide. You only write down what you truly understand, and you only keep what makes sense to you. You store less information, but you learn more. You don’t just collect ideas: you craft them. And that extra effort makes each sheet packed with real knowledge. That’s why it works. Because it simplifies, repeats, and refines. It’s not enough to know: you have to understand. The Root Method isn’t just for books; it works for any learning process: courses, conferences, YouTube videos, podcasts. You don’t need complicated apps or anything fancy — just paper and colored pens. Don’t stay at the surface of knowledge — dig down to the root of every topic. You’ll notice the change instantly. Your turn: What methods do you use to learn from your reading? Quote of the day: “The skill I was learning was crucial: the patience to read things I could not yet understand.” — Tara Westover, Educated: A Memoir. Here I plant ideas. In the newsletter, I make them grow. Daily insights on self-development, writing, and psychology — straight to your inbox. If you liked this, you’ll love the newsletter. Join 11,000+ readers: Mental Garden See you next time! 
Written by Mental Garden
Productivity and psychology insights in useful life lessons +3M monthly views and +300 articles

Published in Change Your Mind Change Your Life
Read short and uplifting articles here to help you shift your thought, so you can see real change in your life and health.
Deduction and Induction in the Reasoning Process -- David Kyle Johnson
Deductive vs. Inductive Reasoning
Dr. David Kyle Johnson, 10/10/25
Why Aristotle (and your science textbook) is wrong about deduction and induction — and why it matters.

[Figure: A common misconception about the nature of deductive and inductive reasoning. See the original page for the diagram illustrating how “deduction” and “induction” are commonly misunderstood.]

It’s commonly said that deductive reasoning goes from the universal to the particular, while inductive reasoning goes from the particular to the universal. You’ve probably seen these definitions in a science textbook; indeed, this notion was endorsed by “the rogues” on episode 1055 of my favorite science podcast, the Skeptics’ Guide to the Universe. But this understanding of deduction and induction is inaccurate — or, more specifically, outdated. (The SGU graciously had me on to correct the record.) And the point is not just semantic; the philosophical development of our understanding of the difference between induction and deduction completely changed the world, and is vitally important for understanding the nature of science (and debunking pseudoscience).

The Common “Aristotelian” Understanding

The common understanding of deduction and induction is usually attributed to Aristotle. A classic example of the kinds of arguments that he said, in the Prior Analytics (e.g., I.1 (24b18–20)), capture deduction goes like this:

All men are mortal. (Universal)
Socrates is a man.
So, Socrates is mortal. (Particular)[i]

In the Posterior Analytics (II.19 (99b35–100b5)), as an example of induction, he gives us “posing the skilled pilot is the most effective, and likewise the skilled charioteer, then in general the skilled man is the best at his particular task.” In the Rhetoric, Aristotle offers an even clearer example:

Socrates was wise and just. (Particular)
So, all wise men are just. (Universal)

Based on these examples, it looks like deduction reasons from universals, while induction reasons to universals. But this is not how modern logicians define these terms — and for very good reason.

What’s Wrong with the “Aristotelian” Understanding of Deduction?

The first problem is this. Aristotle does not actually define deduction in this way. He says that deduction captures “syllogistic reasoning,” which he defined (in the Prior Analytics) as “a discourse [logos] in which, certain things being stated, something other than what is stated follows of necessity from them.” In other words, deduction happens when one statement (or statements) guarantees another. Now, many of the examples of this kind of reasoning Aristotle gave went from the universal to a particular, but there are other kinds of syllogistic reasoning that do not. For example:

Obama was president from 2009 to 2017.
Therefore, all persons born during his presidency were born in the 21st century.

This actually goes from a particular (fact about when Obama was president), to a universal (a statement about when a whole category of people were born). Yet, in this example, the premise guarantees the conclusion.[ii] What’s more, one can reason from the universal to the particular without the conclusion being guaranteed. For example, in 2019, someone might have said: “All the previous seasons of Game of Thrones were good. Therefore, the last season will be good.” While that conclusion, given the evidence, was likely — as we all know, it was not a guarantee.[iii] What’s more, there are many more kinds of syllogisms — arguments where the premises guarantee the conclusion. There are hypothetical syllogisms, like modus ponens (If P then Q. P.
What's more, there are many more kinds of syllogisms — arguments where the premises guarantee the conclusion. There are hypothetical syllogisms, like modus ponens (If P then Q. P. Therefore Q.), and disjunctive syllogisms (Either A or B. Not A. Thus B.). And these don't have to go from universals to particulars, or from particulars to universals, at all. If the alarm is armed, then the light will be red. The alarm is armed. Thus the light is red. Either the light is on or off. It's not off. So it's on. So "from universals to particulars" doesn't fully capture the kind of syllogistic "the conclusion is guaranteed" reasoning that Aristotle was trying to describe.

What's Wrong with the "Aristotelian" Understanding of Induction?

Interestingly, Aristotle didn't really conceive of induction as a method of reasoning; for him, it was more of a psychological or epistemic process by which we come to grasp universal truths from particular experiences — the way we learn what he called "first principles." To paraphrase a famous passage from his Posterior Analytics, "the soul sees the universal through induction." Since this is a habit of mind, rather than an argument form or even a form of reasoning, it's obviously not very useful for classifying arguments or kinds of reasoning.

What's more, something that Aristotle seems to have failed to recognize (at least in places) is what sets induction apart from deduction. The important difference is not that it is "a passage from individuals to universals," but that it is not syllogistic; it doesn't guarantee its conclusion. Sure, Socrates being wise and just somewhat raises the probability that all wise men are just. But it by no means guarantees it. While Aristotle recognized this fact about this argument, he seemed to fail to recognize that this was induction's defining feature (not the fact that it went from the particular to the universal).

But there are lots of different ways to raise the probability of conclusions. Arguments from analogy do this. Thing 1 has properties A, B, and C. Thing 2 has properties A and B. So Thing 2 has property C as well. In certain circumstances, such a conclusion can be likely — but it is never guaranteed; yet this kind of reasoning does not involve going from the particular to the universal.

Or consider inference to the best explanation. You take two or more explanations, compare them to certain criteria — criteria that explicate what good explanations are supposed to do — and then accept the explanation that adheres to them best. By no means is such a conclusion guaranteed — although, if done correctly, it can be beyond all reasonable doubt — but it does not involve reasoning from the particular to the universal.

Arguments from authority raise the probability of their conclusion but don't always argue to the universal. Statistical arguments raise the probability of their conclusion but don't always argue to the universal. Again, it seems Aristotle's understanding of induction — or at least what is commonly thought to be Aristotle's understanding of induction — is shortsighted.

The Modern Definitions

Although I am taking a shortcut through the work of Francis Bacon, David Hume, John Stuart Mill, C.S. Peirce (pronounced "purse"), John Maynard Keynes, Hans Reichenbach, and Rudolf Carnap here, what modern logicians eventually realized was that the following is a better understanding of the distinction at which Aristotle was seemingly pointing:

Deduction: Reasoning that guarantees its conclusion.
Induction: Reasoning that (merely)[iv] supports its conclusion.
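(Editor's aside.) These definitions have a standard formal gloss; the notation below is the usual one from modern logic and probability theory, not the article's own. A deductive argument is valid when its premises semantically entail its conclusion, while one common probabilistic reading of inductive strength is that the premises raise the conclusion's probability without driving it to 1.

```latex
% Deductive validity: no case makes every premise true and the conclusion false
% (the premises guarantee the conclusion).
P_1, P_2, \dots, P_n \;\models\; C
\quad\Longleftrightarrow\quad
\text{no interpretation makes } P_1,\dots,P_n \text{ all true and } C \text{ false.}

% Inductive strength (one common probabilistic reading): the premises raise
% the conclusion's probability, but do not guarantee it.
\Pr(C \mid P_1,\dots,P_n) \;>\; \Pr(C),
\qquad
\Pr(C \mid P_1,\dots,P_n) \;<\; 1
```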
To summarize this, we might say that deductive arguments are those with premises that guarantee their conclusion, while inductive arguments are those with premises that raise their conclusion's probability. And if you are looking for a "short takeaway" from this article, that is it.

A Nuanced Point

That can't be the whole story, however. Why? Because, if we accept the italicized phrases above at face value, we run into a problem: there can't be "bad" deductive or inductive arguments (e.g., a deductive argument which fails to guarantee its conclusion). For example, most people consider arguments which deny the antecedent to be deductive. "If P then Q. Not P. Therefore not Q." When people give this argument, they often think the conclusion necessarily follows. But it doesn't; denying the antecedent is a fallacy. Sure, if I am in Philly then I am in PA. But the fact that I am not in Philly does not guarantee that I am not in PA — I could be in Pittsburgh, for example. So, although such an argument is clearly deductive, it does not guarantee its conclusion.

Likewise, someone could give a bad inductive argument. "Bob is blonde and plays football. Steve is blonde. So Steve probably plays football too." The premises don't raise the probability of the conclusion at all — yet clearly the argument is inductive (it's an argument from analogy, and analogies aren't deductive).

To avoid this problem, modern logic textbooks usually suggest that whether an argument is deductive or inductive is determined by the intentions of the one who gives it. If the speaker intends for the premises of their argument to guarantee its conclusion, it is considered deductive; if they merely intend for them to provide probable support, then it is inductive. Once such intentions are clear, we can categorize the argument and bring the appropriate logical tools to bear to determine whether the argument does what the speaker intended it to do.

Same Form, Different Categorization

Interestingly, this means that two arguments which take exactly the same form could fall under different categories. One can be deductive and the other inductive even though they look the same. For example, another classic deductive but invalid form of reasoning is affirming the consequent. (If P then Q. Q. Therefore P.) "If I am in Denver, then I am in Colorado. I am in Colorado. Therefore I am in Denver." Obviously, this is not guaranteed; there are ways of being in Colorado without being in Denver. But clearly this argument is deductive.

But now consider a scientist running an experiment to test a hypothesis; they will consider an experimental result that a hypothesis predicted to be evidence for that hypothesis. (If H then R. R. Thus H.) Clearly, this follows the same pattern — yet we would not consider this form of reasoning deductive. Indeed, the good scientist will know and admit that the experimental result doesn't prove the hypothesis true — it merely supports it or raises its probability. So this is an example of inductive reasoning. And we wouldn't consider it "fallacious." If the experiment is done correctly, it could provide a good amount of support for the hypothesis. So whether an argument is deductive or inductive can be sensitive to context too; it depends on the intentions of the person giving it.
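(Editor's aside.) The "guarantee" talk can be made concrete by brute-forcing truth tables. The small illustrative script below is ours, not the article's; the helper names are made up for this sketch. It confirms that modus ponens is valid, while denying the antecedent and affirming the consequent each have a counterexample row: precisely the Philly/Pittsburgh and Denver/Colorado situations described above.

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

def entails(premises, conclusion):
    """Check every truth assignment of P and Q.
    Return (True, None) if all assignments that satisfy the premises also satisfy
    the conclusion; otherwise return (False, counterexample_assignment)."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False, {"P": p, "Q": q}
    return True, None

forms = {
    # Modus ponens: If P then Q. P. Therefore Q.  (valid)
    "modus ponens": ([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q),
    # Denying the antecedent: If P then Q. Not P. Therefore not Q.  (invalid)
    "denying the antecedent": ([lambda p, q: implies(p, q), lambda p, q: not p], lambda p, q: not q),
    # Affirming the consequent: If P then Q. Q. Therefore P.  (invalid)
    "affirming the consequent": ([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p),
}

for name, (premises, conclusion) in forms.items():
    valid, counterexample = entails(premises, conclusion)
    if valid:
        print(f"{name}: valid (the premises guarantee the conclusion)")
    else:
        # With P = "I am in Philly" and Q = "I am in PA", the row P=False, Q=True
        # is the "I am in Pittsburgh" case that blocks the guarantee.
        print(f"{name}: invalid, counterexample {counterexample}")
```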
Why This Is Important

There are two reasons you should care about this. First, this issue is not just semantic; it's not just about the definitions of words. Logicians replacing the old interpretation laid the groundwork for formal logic (propositional logic and the predicate calculus), systems that can mathematically determine whether the premises of an argument guarantee its conclusion. And formal logic laid the groundwork for all modern computing. You can't build logic gates with categorical logic. Everything that uses a computer — which, today, is practically everything — is impossible without realizing that deduction is reasoning that guarantees its conclusion (rather than reasoning that argues to universals).

Second, science is inductive. Both scientists and philosophers of science recognize that, given the very nature of what scientific reasoning is, science never guarantees its conclusions. Although it can involve many kinds of reasoning, some of which are deductive, they are all in service of finding the best explanation — and inference to the best explanation is an inductive form of reasoning.

This is important to realize because one of the most common arguments of pseudoscientists, and of those who deny what science has revealed, is rooted in what science has and has not "proven." "You can't 100% prove how much damage climate change will do, so we don't have to worry." "You can't prove that vaccines don't cause autism, so they do." As any good logician will recognize, these arguments commit a fallacy called "appeal to ignorance." The fact that you can't 100% prove something true doesn't mean that it is false. But understanding that the nature of science is inductive reveals clearly why such arguments, especially when leveled against science, are fallacious. They are asking science to do something that, by definition, by its very nature, it cannot do.

Science is inductive, so it doesn't deal in proof. It, instead, raises the probability of certain theories and statements; it shows them to be more likely to be true. So, if we hold out for proof, it will never come. When deciding what we should believe regarding scientific matters, we must simply, as David Hume put it, proportion our belief to the evidence. Or, to paraphrase a point Aristotle made in Book I, paragraph 3 of the Nicomachean Ethics: "It is the mark of an informed mind to be satisfied with the degree of precision that the nature of the subject admits, and not to seek exactness where only an approximation to the truth is possible."

When something is beyond any reasonable doubt, we should accept it — and science can and does, quite often, establish things beyond any reasonable doubt. The vast majority of scientific evidence suggests that climate change will be catastrophic and that vaccines are safe and effective. Those who refuse to believe such things are simply holding onto straggling threads of evidence, thinking that gives them the "right" to just believe whatever they want, because of a fundamental misunderstanding of what science is.
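(Editor's aside.) "Proportion our belief to the evidence" has a simple probabilistic illustration. The sketch and its numbers below are ours and purely illustrative: each independent piece of supporting evidence raises the probability of a hypothesis, and that probability can climb toward 1 without ever reaching it, which is why demanding "100% proof" from science misunderstands what induction can deliver.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Illustrative, made-up numbers: each study is 4x more likely to come out
# positive if the hypothesis is true than if it is false.
belief = 0.5  # start undecided
for study in range(1, 11):
    belief = bayes_update(belief, p_evidence_given_h=0.8, p_evidence_given_not_h=0.2)
    print(f"after study {study:2d}: P(H | evidence so far) = {belief:.6f}")

# The posterior climbs rapidly toward 1 but never equals 1:
# the accumulated evidence supports the hypothesis; it does not prove it.
```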
They don’t make observations of particulars and generalize about the entire universe; instead, he argued, they generate new theories through acts of inspiration. The process of theory creation is a bit like art. While the latter may be how he came up with relativity, I’m not convinced that is how all physicists (and certainly not all scientists) generate new theories. I think generalizing from particular observations can and does play a role in some cases. Now, clearly, particular to universal reasoning doesn’t prove anything; it doesn’t even help establish it. It’s just a way of coming up with a hypothesis. But it doesn’t matter how theories are generated; what matters is whether they survive being tested and turn out to be better than competing theories. Which brings me to my second point. What Einstein meant by “deduction” is actually what I and other modern logicians would call an inductive reasoning process. Once a theory is hypothesized, Einstein argued, physicists gather support for them by “deducing” (deriving) what the theory would entail (what one would expect to observe if the theory were true), and then performing experiments to see if those observational predictions are correct. Does what the theory say would happen actually happen? (A famous example of relativity being confirmed in this way is when relativity correctly predicted where astronomers would observe stars in the Hyades Cluster during the famous 1919 eclipse.) But notice that, while it does involve what we might call a “deduction” (by yet another definition of the word), if this whole process is actually deductive reasoning, it is invalid. “If my theory is right then I should expect result R. Result R did happen. Therefore my theory is true.” (If H then R. R. Therefore H.) That’s the fallacy of affirming the consequent! But as an inductive argument, this reasoning is strong. Relativity correctly predicting where stars in the Hyades Cluster would be observed during the 1919 eclipse is what made many physicists realize that relativity was correct. So, while Einstein was pretty accurately describing the reasoning process behind physics, he was using the wrong terminology. Perhaps he can’t be blamed for this; the logic textbooks (e.g., by Copi) that made non-philosophers aware of the updated more accurate deduction/induction distinction were still a few decades away. But still, recognizing the distinction that Einstein missed is important: Deduction is reasoning that guarantees its conclusion; induction is reasoning that provides conclusions with probable support. Notes: [i] Aristotle did not give this example himself, but it is a classic example (given by medieval scholars) to demonstrate what he had in mind. [ii] I believe Aristotle actually recognizes this basic fact in the Prior Analytics when he says “Similarly too, if the premise is particular. For if some B is A, then some of the As must be B. For if none were, then no B would be A. But if some B is not A, there is no necessity that some of the As should not be B; e.g. let B stand for animal and A for man. Not every animal is a man; but every man is an animal.” [iii] Aristotle may have also recognized this when he said, in Rhetoric, “The other kind of Sign, that which bears to the proposition it supports the relation of universal to particular, might be illustrated by saying, ‘The fact that he breathes fast is a sign that he has a fever’. 
Notes:

[i] Aristotle did not give this example himself, but it is a classic example (given by medieval scholars) to demonstrate what he had in mind.

[ii] I believe Aristotle actually recognizes this basic fact in the Prior Analytics when he says, "Similarly too, if the premise is particular. For if some B is A, then some of the As must be B. For if none were, then no B would be A. But if some B is not A, there is no necessity that some of the As should not be B; e.g. let B stand for animal and A for man. Not every animal is a man; but every man is an animal."

[iii] Aristotle may have also recognized this when he said, in the Rhetoric, "The other kind of Sign, that which bears to the proposition it supports the relation of universal to particular, might be illustrated by saying, 'The fact that he breathes fast is a sign that he has a fever.' This argument also is refutable, even if the statement about the fast breathing be true, since a man may breathe hard without having a fever."

[iv] I put the word "merely" in parentheses here because I do not wish to imply that inductive reasoning is less powerful or valuable; indeed, most knowledge is acquired through inductive reasoning. But since guaranteeing a conclusion is one thing and providing support is another, a distinction needs to be made between how inductive and deductive arguments support their conclusions.

(** The original of this passage appears to have had its text out of order. The light-blue text is the editor's emendation, made so that the passage reads smoothly.)

Written by Dr. David Kyle Johnson
I am a philosophy and Great Courses (Wondrium) professor who publishes on religion, metaphysics, logic, and the intersection of philosophy with popular culture.
A Brief Note on "Reading Methods": After Reading "Seven Ways to Strengthen Reading Memory"
0. Introduction

After reading Mr. 卡羅史's piece (posted in that column on 03/17/2025, hereafter "Memory"), I could not help recalling some of the "methods" I used more than a decade ago, when I was reading a great deal. They happen to coincide with the "methods" Memory recommends, although at the time I did not know that doing these things would help me remember what I read. Below I discuss the three of my practices that come closest to those seven methods, for younger readers' reference; the number in parentheses corresponds to the method's number among the seven in Memory.

The subtitle of this piece is "After Reading 'Seven Ways to Strengthen Reading Memory'"; since that article sits under the "Self-Improvement" section while the theme of this piece really belongs to "Methodology" under "Foundations of Science," it is posted in this column.

1. "Write down the interesting ideas in the book" (Method 3)

Between the ages of 23 and 49 I studied and worked in the United States, where Chinese books were hard to come by. As a result, well over 80% of what I read during that period was in English. After I returned home, out of habit, I continued to read mainly in English. To make sure I understood what I was reading, I would translate technical terms, keywords, important concepts, and sometimes whole passages into Chinese and write them in the margins. I did not know at the time that this would aid memory, but looking back now, it certainly had that effect.

2. "Converse with the author" (Method 4)

The two suggestions under Method 4 of Memory were also my practice. I used yellow and pink highlighters to mark the passages I considered important, and stars, diamonds, triangles, and the like to flag important paragraphs and keywords. I did this not because I knew it would strengthen memory, but so that I could easily find those ideas or concepts later when I wanted to cite them.

In addition, when I thought the author was stating or stressing the same point in different ways in different places, or when the author's statements in different places were incompatible, I would make a cross-referencing note in the margin; for example, "see the star on page 102," or "compare the triangle on page 69."

I have another habit close to this method. When I come across a concept or theory, I think back over the Chinese classics and the works of other scholars I have read, asking whether there is a similar or an opposing view. I would not claim that I try to integrate everything I read into a coherent whole with every book, but for understanding a book's content, comparison is a highly effective method; it can be regarded as one way of "conversing with the author."

3. "Put what you learn to use" (Method 5)

Here I can give three concrete examples:

1) When I Say No, I Feel Guilty (1975). After finishing this book, I had "no" constantly on my lips, until one day I found I had overdone it. I then made some adjustments; as the saying goes, "It is better to have no books than to believe everything in them."

2) How to Read a Person Like a Book (1973). After reading this book, I began to pay attention to the "position" of my hands, feet, chest, shoulders, and so on, and to my sitting and standing posture. The Chinese expressions "shifty-browed and rat-eyed" and "what is sincere within will show without" (Great Learning, commentary ch. 7) are probably not far from "body language."

3) What Do You Say After You Say Hello? (1973). This book helps people understand some of the obstacles that can arise in communication. I read it in 1983, and more than 40 years later I still occasionally find occasion to use "transactional analysis" theory.

Because, when writing, I hold myself to the standards of making a reasoned case and backing every claim with a source, I frequently have to cite concepts, viewpoints, and theories I have read. This is "putting what you learn to use" in thinking rather than in everyday conduct, but it should also have the effect of deepening memory. Using the concepts and theories I have learned, I have built some "viewpoints" of my own; for example, "A Materialist View of Humanity," "A Discussion of 'Free Will'," and "Exchange Value and Resource Allocation."

4. Confucian methods of learning

The three methods above are ones I worked out for myself; at the time their purpose was not to strengthen memory but to help me understand what I was reading. Below are the Confucian "methods of learning" that I think are worth putting into practice.

1) "Study it broadly, inquire into it thoroughly, think it over carefully, distinguish it clearly, and practice it earnestly." (Book of Rites, Doctrine of the Mean 22)

2) "Learn, and review what you have learned at due times" (Analects, Xue Er 1); "Revisit the old and so come to know the new" (Analects, Wei Zheng 11)

3) "Learning without thinking leaves one lost" (Analects, Wei Zheng 15)

These three are of a piece with "conversing with the author," "putting what you learn to use," and "Review! Review! Review!" (Method 7) discussed above, so I will not dwell on them here.

Readers might practice "putting what you learn to use" and "conversing with the author" right here: compare the three "homegrown" methods above with the seven methods in Memory, and then judge for yourselves whether my claim that they come to the same thing holds up.