Philosophy – Opening Post
Durant's The Story of Philosophy was not the first philosophy book I read; I recall reading the Laozi before it. But it is one of the books that have influenced me most. The main reasons are: a. it sparked my interest in philosophy; b. it laid the foundation of my admittedly partial understanding of philosophy; and c. taken as a whole, it strengthened my resolve to pursue knowledge (see section 3 of that column's opening post). Incidentally, I would not presume to call myself an "intellectual," but I take some pride in being a "reader" (see that column's opening post and my essay "Purpose, Action, and Method"), and so I hold myself to the expectations of both (section 4 of that essay). I did not study philosophy formally; but because I am deeply interested in the two questions of how one should conduct oneself and how one should treat others, I inevitably came into contact with books on such "fundamental questions" (see the second article in this column). Now that I am old and no longer have the mental stamina for sustained reading, all I can do is organize my past reflections; call it drawing in the net.
Facing the "Post-Truth" Era -- Manuel Delaflor
I am reposting this article not because I think it makes sense, but because I consider it an example of sheer nonsense: a cautionary illustration that even a philosopher's mind can tie itself in knots.

A way forward for a world where truth has died
A framework for a Post-Truth World
Manuel Delaflor, 12/19/24

Editor’s Notes: As the prophets of certainty wage their wars in our fractured age, a radical new approach from philosopher Manuel Delaflor shows us how to dance with the uncertain.

Consider a curious phenomenon: When passionate fans discuss their favourite shows online, they rarely debate what happened in a scene. For example, in online forums dedicated to Game of Thrones, viewers spent countless hours dissecting a brief moment where Cersei Lannister lingered near a dimly lit archway during a crucial negotiation scene, debating whether the shadows implied secret motives, inner turmoil, or a looming betrayal. Through this collective endeavour, fans weave intricate webs of interpretation that reflect their values, their beliefs, and even their shadows. The very meaning of what they see on screen becomes a mirror, reflecting not just the content itself, but the deeper patterns of how humans create understanding from their personal experiences.
For millennia, truth served as humanity's foundation - a way to end arguments, establish facts, and build knowledge. It stood as a beacon of certainty in an uncertain world, promising that through divine foretelling, or later with proper methods and careful reasoning, we could arrive at unshakeable conclusions about the world, our lives, and our place in the universe. This faith in truth shaped our institutions, our education systems, and our very way of thinking. It provided the bedrock upon which we built our understanding of ourselves and the cosmos. From time to time, a handful of dissenting thinkers queried truth, from the Pyrrhonian skeptics of Ancient Greece to figures like David Hume, who questioned the reliability of causation and empirical certainty, and Immanuel Kant, who argued that our understanding of the world is mediated by the structures of human cognition. Nietzsche's rejection of absolute truth in the 19th century further shook these foundations. Then, in the 20th century, a particular kind of thinker started to dig deeper; geniuses like Derrida, Foucault and Deleuze delivered what might have been the final blow: truth itself, they argued, was nothing more than a construct shaped by power and language. The certainties upon which our institutions rest began to crumble, and as a result, we've been entering a kind of second dark age. Nonetheless, a lay belief in truth by the countless masses prevailed. What constitutes evidence, what a fact is, or used to be, has lately taken completely unexpected twists. We are witnessing a profound shift: the rejection of established frameworks has become widespread, fuelled not just by past institutional failures, but by a deepening social distrust of expertise itself. What's remarkable isn't just the rejection or dismissal of traditional ways of establishing shared understanding, but the incredible refusal to even accept sets of facts.
For instance, debates over whether the Earth's curvature can be observed directly or whether atmospheric distortion explains such observations highlight how even well-established scientific principles can be dismissed to fit alternative narratives in the blink of an eye.

The social media landscape amplifies and accelerates this crisis in ways we're only beginning to comprehend. These platforms don't simply create echo chambers - they actively reward content that triggers emotional responses, which puts humans into animal states of fight or flight, triggering the amygdala and bypassing the frontal lobe. Research confirms what we intuitively feared: social media rewires our very brains, with studies like this in JAMA Pediatrics showing how it fundamentally alters neural pathways, particularly in developing minds. Unlike traditional media, platforms like Facebook and X use algorithms designed to maximize engagement by reinforcing pre-existing beliefs. Research, such as Cass Sunstein's Republic, highlights how these mechanisms entrench divisions, transforming disagreements into unbridgeable chasms. The machinery of engagement runs on the fuel of outrage and confirmation bias, fragmenting our shared ways of making sense of experiences into increasingly isolated islands of understanding. Each click, each share, each angry reaction further entrenches these divisions. Like a fever spreading through a weakened body, this machinery of fragmentation has created perfect conditions for a more dangerous infection.

In this fragmenting landscape, we're not really seeing the end of truth, but rather its splintering into opposite sets of competing absolutisms. Each group, each echo chamber, each populist movement claims to possess not just an interpretation, but the "real truth" that "the others" have been hiding.
The irony is striking - in our supposed "post-truth" era, we're witnessing not an absence of truth claims, but their multiplication, with each claiming absolute validity while rejecting all others. Today, digital platforms take these philosophical insights to their most extreme consequences, bypassing reasoned debate. Conspiracy theories like QAnon thrive in these environments, fuelled by algorithms that amplify sensationalist narratives over nuanced ones, reflecting the splintering of shared understanding into echo chambers.

But the problem is exactly the same: we've discovered that our beloved traditional solutions have failed us. Better education? Facts bounce off hardened beliefs. Improved communication? Our most articulate arguments fall on deaf ears. Enhanced access to information? It only seems to deepen the divide.

I believe that we've been approaching our “post-truth” problem with a fundamental misunderstanding about human nature. Traditional approaches, such as Karl Popper's emphasis on falsifiability, Thomas Kuhn’s exploration of paradigm shifts, and Festinger's cognitive dissonance theory, assume we're primarily seeking accurate descriptions of the world but are led astray by biases, emotions, or faulty reasoning. While these frameworks have profoundly influenced our understanding, they presuppose a rationalist ideal that often fails to address the deeper human craving for meaning and coherence. This assumption shapes how we try to address disagreements - with more facts, better arguments, clearer evidence. And this is precisely what leads opportunists with enough audience to force their own “truths” on others. Despite unprecedented access to information, our divides are just deepening, to the extent that rational discourse has become impossible, culminating in people embracing the weirdest possible beliefs, like flat earthism, because they seem somehow preferable to the perceived “mainstream lies” (as they call them).
And who is to tell them that they don't have the "real truth" when facts become whatever they want them to be? And this is how old explanatory frameworks persist despite contrary evidence, how scientific findings, no matter how robust, fail to convince those who've found reassurance in alternative interpretations. Take religious frameworks - they persist not because they better explain natural phenomena, but because they provide profound meaning about life's purpose, death's mystery, and human suffering. Their power lies not in their empirical accuracy but in their meaning-making capacity. These aren't just failures of education or reasoning. They point to a deeper pattern in human cognition, a profound need we've been blind to all along.

We have now reached a crossroads: either continue this sacrificial dance of competing “absolute truths”, each more strident than the last, or recognize that we have failed and that we've been asking the wrong question all along. As Nietzsche once suggested, humanity’s obsession with absolute truth often blinds us to deeper existential needs: meaning and coherence. This insight resonates with my personal conclusion: what we crave is meaning. Not truth. What we seek are frameworks that illuminate our experiences, a way to make sense of our struggles, narratives which give purpose to our pain. We hunger for stories that help us navigate the vast complexity of existence, which make our world coherent, comprehensible, meaningful.

If I am correct, this revelation changes everything. It was in part this realization that led me to develop Model Dependent Ontology (MDO), an epistemic framework that addresses how humans actually engage with their experiences rather than how we think they should. At its core, MDO recognizes that meaning-making isn't just one aspect of human cognition - it's fundamental to how we live. It touches our innermost core.
Our understandings, explanations and beliefs aren't attempts to capture some “external reality” or “eternal truths”; they're tools we use to make sense of what we live through, of how we engage with others to create coherent narratives that guide our actions and decisions in a hyper-complex world. Consider how even our most successful scientific theories - quantum mechanics and relativity - demonstrate that what we observe fundamentally depends on our framework of observation, not on some eternal “truth” of any kind, one that is valid for everyone at every point. Different frameworks yield different but equally valid descriptions of the same phenomena. MDO takes this insight further: it suggests that even our most basic concepts about what exists or what counts as evidence emerge from specific models with their own assumptions and limitations.

This isn't about denying the importance of evidence or observation - quite the opposite. It's about recognizing that any statement, any fact, any concept we employ is necessarily interpreted within some model or framework. This is a point that Rudolf Carnap made in a seminal paper entitled Empiricism, Semantics, and Ontology (1950). Understanding this opens up more sophisticated ways of evaluating and working with multiple models while maintaining their pragmatic utility. Rather than attempting to determine which model is "true," rather than enforcing new "truths" like some figures do, it examines how different ways of understanding serve different purposes, create different meanings, and lead to different outcomes.

Consider how differently this approach works in practice. Instead of treating economic frameworks as competing claims, where one and only one can be "the right one," MDO examines their use as tools for making sense of different scales and contexts of human endeavours.
MDO then acknowledges that the breathtaking complexity of global markets cannot be reduced to a single model any more than human political beliefs can be reduced to imagined "lefts" or "rights." MDO actually encourages generating a potentially unlimited quantity of different models along different dimensional axes, as a way to overcome the stubbornly persistent human insistence on using bipolar, one-dimensional axes. This represents a radical shift in how we approach understanding itself, and the irony is exquisite: by abandoning our obsession with finding the truth, we might finally develop better ways of understanding our world than we ever could through the lens of absolute certainty.

Instead of imposing single "correct" ways of seeing the world, we might do better by cultivating multiple-model competency, teaching people to work with different frameworks while understanding their contexts and limitations. Imagine an education system that taught not what to think, but how to move fluently between different ways of making sense of the world, recognizing that different contexts require different tools for understanding. More crucially, imagine having practical tools to evaluate these models based not on their claim to any truth, but on their ability to illuminate our experiences while remaining consistent with observed evidence.

The challenge ahead isn't choosing between truth and relativism - that's the old game, the one that led us into our current crisis. The real challenge is learning to dance more skilfully with the multiple frameworks through which humans create meaning. To find the greater meaning, not the greater truth. This isn't just an academic exercise - it's about survival in an age where competing absolutisms threaten to tear societies apart. MDO offers not just an epistemic framework for understanding this dance, but the practical tools for performing it with greater awareness, flexibility, and effectiveness.
Perhaps then we can move beyond the endless war of "truths" toward something more sophisticated: a world where meaning-making becomes not a source of division, but a shared human adventure in understanding.

Manuel Delaflor is Director of the Metacognition Institute and the author of the forthcoming book, Model Dependent Ontology.
Hegel and Heidegger -- R. Pippin
Hegel vs Heidegger: can we uncover reality?
Reason, reality, and the fate of philosophy
Robert Pippin, 11/15/24

Editor’s Notes: For most of its history, Western philosophy tried to use pure reason to know reality. But, argues Robert Pippin, Heidegger showed that this entire philosophical tradition was doomed, due to its mistaken assumption that what it is to be a feature of reality is to be available to rational thought. This assumption, which culminated in Hegel, led philosophy to forget the meaningfulness of reality for humans, and so left us lost.

1. What is forgotten in the Western philosophical tradition

Heidegger claimed that German Idealism and especially Hegel’s philosophy was the “culmination” (Vollendung) of the entire Western philosophical tradition. This meant that one could most clearly see in the work of Kant and Hegel the decisive, underlying assumption guiding that tradition from its inception in Plato and Aristotle to its final fate. Because of that assumption, philosophy had exhausted its possibilities; all that was left for it was to recount its own past moments either in some triumphalist mode (Hegel) or in some deflationary irony (Derrida). This failure, Heidegger hoped, might tell us something crucial for the possibility of a renewal of a philosophy that had something to do with human life as it is actually lived.
The heart of that prior tradition was metaphysics, the attempt by empirically unaided pure reason to know the “really real,” traditionally understood as “substance.” This most basic question in philosophy was taken to be the question of “the meaning of being qua being,” but in reality, Heidegger claimed, this question had never been properly addressed; indeed, it had been “forgotten.” Instead, the major philosophers in the Western tradition simply assumed that the primary availability of any being is as material for cognition, that “to be” was “to be a detectable substance enduring through time and intelligible as just what it is and not anything else.” Moreover, Heidegger claimed that this assumption about the primary availability of being to discursive thinking, in all the developing variations in later philosophy and especially in modernity, had set in place by its implications various notions of primacy, significance, orders of importance, social relations and relations with the natural world that had led to a disastrous self-estrangement in the modern West, a forgetfulness and lostness that ensured a permanent and ultimately desperate homelessness. This assumption was that being – anything at all – was primarily manifest as a detectable substance enduring over time, something merely present before us. He called this the “metaphysics of presence.” So, if the tradition takes itself to be answering the question of the meaning of being – from Platonic Ideas, to Aristotelian forms, to atomism, to materialism, to Leibnizian monads, to Cartesian mental representations, to whatever the most advanced physical sciences say – what in the question of the meaning of being has been forgotten?

2. Heidegger’s corrective: we first encounter beings through their mattering for us

The heart of Heidegger’s answer concerns how we understand the question itself: what does it mean to be, in what way does anything at all come to mean anything for us?
The problem of “the meaning of Being” is the problem of the meaningfulness of beings; that is, beings in the way they matter. Their way of mattering is their original way of being available; they become salient in a familiarity permeated by degrees of significance; it is how beings originally show up for us in our experience. The source of that meaningfulness is the possibility of meaningfulness as such, the meaningfulness of Being as such, that beings can matter at all. We immediately assume that this is all a matter of “subjective projection,” that individuals somehow determine what matters to them and project that onto the world and others. This is what Heidegger most of all wants to contest. He wants to relocate the possible sources of meaningfulness in a shared historical world, a horizon of possible meaningfulness into which we are “thrown,” in his famous term.

So, in his most well-known account in his 1927 Being and Time, Heidegger wanted to convince his readers of two initial claims, along the way to a much longer project that he had planned for the book. One was that entities are available for experience in their significance (Bedeutsamkeit), salient in experience because of the way they matter, given various comportments, practical undertakings in our engagements with beings and with others. In making this claim, he was concerned with the issue he called primordiality or fundamentality. While various sensible and material properties of objects could be attended to, his phenomenological claim was that this sort of attentiveness was secondary, “founded,” an abstraction from what was our original, practical engagement. The second followed from that claim of primordiality. It was that this primary availability could not be understood as a matter of discursive discrimination, as if the objects’ significance were a function of or result of our judging or even being able to judge the objects to be significant.
His now famous examples involved the use of tools or “equipment.” While we obviously have reasons to grab a hammer by the wooden handle and not the metal top, our understanding of how to use the hammer was not a matter of those reasons guiding or directing our use: the know-how involved in hammer competency need have no basis in prior beliefs or implicit beliefs about proper hammering. The hammer came to matter as some task or other arose, and it could so matter because of a nondiscursive familiarity with hammers and the equipmental context assumed as a background for that significance, a context itself not appealed to or invoked in any discursive way. That context was itself a component of a general horizon of possible meaningfulness, a source of comportments that would make sense to engage in, a world. Our general orientation in any such equipmental context, our knowing our way around in a given historical world, is much more a matter of what he called “attunement,” a way of being onto, appreciating, registers of significance in experience, rather than rule-following or conscious directedness. This meant that there was a primordial normative dimension in the availability of entities, significances, meaningfulness, mattering, that was not properly understood as the product of or even as subject to rational assessment. We are oriented from such possible meaningfulness non-discursively by this “attunement”, in the way friends or orchestra members might be said to be attuned to one another. This also speaks to the “critical” potential of Heidegger’s approach.
What he wants to claim about the ultimate groundlessness and dogmatism of the traditional metaphysical orientation has a normative consequence that Heidegger clearly thinks is catastrophic, a dimension brought out best by “Heideggerians” such as Herbert Marcuse on “One-Dimensionality” and Hannah Arendt on “thoughtlessness.” It is also manifest in his explicit linking of consumerist capitalism with the implications of the metaphysics of presence. This has an important bearing on what might be possible if we manage to recover, to remember, what we have forgotten, the question we need to ask. The absence of any acknowledged, genuine source of meaningfulness has a political dimension, even, perhaps especially, when it is not acknowledged. Such a dimension is manifest in such ever more common pathologies as boredom, anxiety, depression, “deaths of despair,” resentment, and alienation, and these express themselves in the rage we see coursing through contemporary politics.

Robert Pippin is the Evelyn Stefansson Nef Distinguished Service Professor in the Committee on Social Thought, the Department of Philosophy, and the College at the University of Chicago, and author of the recent book The Culmination: Heidegger, German Idealism and the Fate of Philosophy.
Reality Contains Certain Contradictions -- Graham Priest/Omari Edwards
I lack the competence to comment on the lines of thought and views presented below, but I believe they are worth pondering. Happy reading and contemplating. This piece may be read alongside my essay 《淺談dialectic》 and the passages on the "dialectical view" in my dialogues with Professor Sun Lung-kee; for Professor Sun's work, see here. In addition to the glossary below, please also use the hyperlinks in the main text.

Glossary:
begs the question: in argument or writing, the premises already presuppose the unproven conclusion to be true.
Dialetheism: the line of thought and/or theory that *some* pairs "A" and "not-A" can both hold, where "A" denotes some proposition. The Chinese rendering in Wikipedia may mislead. Note the word "some" in this explanation; the view does not claim universal scope.
formal logic: also called "mathematical logic" or "symbolic logic".
paradox: a seeming contradiction; two-sidedness, duality.
per se (see the detailed usage note under that entry): in itself; as such, with respect to the thing, concept, or theory that "per se" points to.
Principle of explosion: in logic, the principle that from a pair of mutually contradictory propositions, any proposition whatsoever can be derived. The Chinese rendering in Wikipedia is hardly faithful or clear; see "Why the principle of explosion works? [duplicate]".
ramifications: possible consequences; derivative effects.
realism (philosophical): philosophical realism.
self-reference: referring to oneself or itself.
sensible: reasonable; judicious.
soundness (logic): an argument is "sound" when it meets two conditions at once: a. its reasoning is formally "valid" (it conforms to the rules of inference); and b. its premises are true (or at least generally accepted). See this explanation.
transitivity (philosophy): the transitivity of a relation; a relation that carries over across its terms.
validity (logic): an argument is "valid" when its reasoning, in form, conforms to the rules of inference.
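To make the glossary's "principle of explosion" entry concrete: the standard textbook derivation moves from a contradiction to an arbitrary proposition B via disjunction introduction and disjunctive syllogism. A sketch in formal notation:

```latex
% From a contradiction $A \land \neg A$, any proposition $B$ follows:
\begin{align*}
1.\;& A \land \neg A && \text{premise (the contradiction)}\\
2.\;& A              && \text{from 1, conjunction elimination}\\
3.\;& A \lor B       && \text{from 2, disjunction introduction}\\
4.\;& \neg A         && \text{from 1, conjunction elimination}\\
5.\;& B              && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```

Paraconsistent logics block this argument; in Priest's Logic of Paradox, for instance, disjunctive syllogism (step 5) is not universally valid.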
The paradoxes at the heart of reality
An interview with Graham Priest

We believe things can’t be both true and false: it can’t be both raining and not raining at the same time. Philosopher Graham Priest, however, thinks differently. In this interview, he argues that true contradictions are an intrinsic part of reality.

Most of us think in binaries: either it’s raining or it’s not, things are black or white, true or false. The philosophies of Frege and Russell in the 20th century formalised these binaries into the very logic of our thinking. Graham Priest pushes back against this. There is a huge variety of paradoxes which have puzzled philosophers throughout the ages, and that old view that contradictions are always false has yet to be justified. In this interview with the leading theorist of dialetheism we discover how it can be raining and not raining at the same time.

So, Graham, what is the argument? Some commentators have argued that you seem to be saying that paradoxes are not just linguistic or conceptual issues, but that they exist in the world itself?

I don’t think I have ever said that. What I have said is that some contradictions are true (dialetheism, as it is now called). What makes a statement true is, in general, a combination of what words mean and how the world is. So ‘Australia is in the Southern Hemisphere’ is true partly in virtue of the meaning of ‘Southern Hemisphere’, and partly in virtue of some geographical facts. Many true contradictions are no exception to this general rule.

What was your intellectual journey towards dialetheism? Was there a specific moment or experience that convinced you that true contradictions are a fundamental aspect of reality?

I was trained as a mathematician. My doctorate is in mathematical logic, and I was as classical a logician as anyone could be. Any logician must engage with some of the most profound mathematical results of the 20th century, such as Gödel’s Incompleteness Theorems. These are closely related to the paradoxes of self-reference.
So I became interested in these. These paradoxes have been discussed by logicians for nearly two and a half thousand years, and there has been no success in solving them—at least if success is judged by consensus. These paradoxes are arguments for certain contradictions. Most have assumed that there must be something wrong with the arguments. I started to think ‘maybe this isn’t true: these arguments just establish their contradictory conclusions’. So I started to investigate this possibility, its ramifications and applications. After a period of time, I became persuaded that, despite being highly unorthodox, this is a very sensible view.

How do you define a paradox? Are there criteria that distinguish a "true" paradox from one that merely appears paradoxical due to linguistic confusion or logical error?

A paradox is an argument which appears to be sound, but which ends in a contradiction. If the argument really is sound, the contradiction is true. If not, there is no reason to believe so. Of course, all the hard work has to go in determining whether the argument is sound. There is no magic bullet to determine this.

The Law of Non-Contradiction, the principle that something cannot be both true and false, is a foundational principle of classical logic. How do you respond to those who argue that abandoning this law leads to an incoherent or meaningless view of reality? And how do you counter the explosion argument in favour of the Law of Non-Contradiction, the idea that if we allow contradictions to be true we can reach any conclusion we want? Can you provide a concrete example of a true paradox in the world?

Well, what is called classical logic is a logic invented by logicians such as Frege and Russell at the turn of the 20th century—though it has some things (but only some things) in common with really classical logics, those of Ancient Greece, for instance.
If someone claims that abandoning the Principle of Non-Contradiction leads to an incoherent or meaningless view of reality (whatever that is supposed to mean), the onus is on them to make good this claim. I have never seen a successful argument for this. The argument for Explosion presupposes that truth and falsity are exclusive. Hence an argument against true contradictions on the basis of Explosion begs the question. There are now many well understood formal logics which do not have this presupposition. They are called paraconsistent.

There are many possible examples of the kind you ask for (though of course they are all contentious). One is the sorites paradox. Take a long sequence of colour strips such that the colour of each is indistinguishable from that of the strips immediately adjacent to it, but such that the first strip is red and the last is not (say, blue). The strips in the middle of the sequence are symmetrically poised between being red and not being red. One may argue that they are both.

So, taking an example like "It's raining and it's not raining." How would you defend the claim that this could be a true paradox? What are the implications for our understanding of truth and reality?

This is a standard example taken from another sorites paradox, concerning a slow but continuous transition from a heavy downpour to the rain having stopped. Like the paradoxes of self-reference, there is no consensus about how this should be solved. A dialetheic solution is one of them, but of course there are others. One just has to engage in the discussion of which solution is best. For what it is worth, when ordinary people (not philosophers!) are interviewed about what to say of such borderline cases, many are quite happy to say that it is raining and not raining. What are the implications? That some contradictions are true, and that reality is such as to make them so.
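The answer above mentions paraconsistent logics, which lack the presupposition that drives Explosion. As a concrete sketch (my illustration, not part of the interview), the standard three-valued truth tables of Priest's Logic of Paradox (LP) can be encoded in a few lines, and a brute-force check then confirms that Explosion (A, not-A, therefore B) fails in LP:

```python
from itertools import product

# Truth values of Priest's LP: t (true), b (both true and false), f (false).
# Ordering f < b < t; conjunction is min, negation swaps t/f and fixes b.
ORDER = {"f": 0, "b": 1, "t": 2}
DESIGNATED = {"t", "b"}  # values that "count as true" for defining validity

def neg(x):
    return {"t": "f", "b": "b", "f": "t"}[x]

def conj(x, y):
    return min(x, y, key=ORDER.get)

def valid(premises, conclusion, variables):
    """An inference is LP-valid iff every valuation that designates
    all premises also designates the conclusion."""
    for values in product("tbf", repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) in DESIGNATED for p in premises) \
                and conclusion(v) not in DESIGNATED:
            return False  # counterexample found
    return True

# Explosion: A, not-A, therefore B. Counterexample: A = b, B = f.
print("Explosion valid in LP?",
      valid([lambda v: v["A"], lambda v: neg(v["A"])],
            lambda v: v["B"], ["A", "B"]))

# Sanity check: conjunction elimination (A and B, therefore A) survives.
print("Conjunction elimination valid?",
      valid([lambda v: conj(v["A"], v["B"])],
            lambda v: v["A"], ["A", "B"]))
```

With A assigned the value b, both premises of Explosion are designated while the conclusion can be f, so the inference is invalid; yet ordinary inferences such as conjunction elimination remain valid, which is the sense in which there is "no logical chaos" in a paraconsistent logic.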
You’ve argued against the transitivity of identity, the idea that if a is identical to b, and b is identical to c, then a must be identical to c. How do you address the criticism that rejecting this principle undermines the coherence of identity itself? Could you explain how this view aligns with, or contradicts, your commitment to realism?

If someone claims that rejecting transitivity undermines the coherence of identity (whatever that means), the onus is on them to justify the claim. The transitivity of identity is certainly a standard assumption, but that is not good enough. One thing we have learned from the history of philosophy, logic, and science, is that standard assumptions are often false. Once such an assumption is challenged a case needs to be made for it. Whether or not I’m a realist might depend on what you mean by ‘realism’. The word gets used to mean many different things. But in any case, I see no obvious connection between the transitivity of identity and realism—whatever that might mean.

In embracing paradoxes as real, what are the ontological commitments that follow? How does this impact your view of metaphysics, particularly regarding concepts like objecthood and identity?

I’m not sure there is anything much more to say about this. Since I am a dialetheist, I think that some objects are contradictory. That says nothing about their objecthood per se. Dialetheism does not, in itself, imply that identity is non-transitive. It is quite compatible with a standard view of identity. However, in one of my books I applied it to give a non-transitive theory of identity.

So, what are the practical consequences of accepting that true paradoxes exist? How should this influence our everyday reasoning, decision-making, and scientific inquiry?

This is a big question, but the outline of an answer is as follows. Whenever we hold a view on some matter, it is usually embedded in some general theory or other—maybe a confused one.
The rational thing to do is to accept whichever is the best theory, and so the answer it gives. If one is not a dialetheist, all inconsistent theories are off the table; but with dialetheism, some may well be on it. We still choose the best theory, but now this may be an inconsistent one.

Mathematics and science are fields heavily reliant on consistency and non-contradiction. How do you see dialetheism affecting these domains? Are there areas where embracing paradoxes could lead to new insights or breakthroughs?

No, this is not true. First, we now know that there are coherent mathematical theories based on non-classical logics. Those based on a paraconsistent logic will be inconsistent. Moreover, inconsistent theories have been accepted by scientists. The most obvious example of this is classical dynamics. For about 200 years, this was based on the infinitesimal calculus, which was well known to be inconsistent. Scientists will accept whatever theory produces the right empirical results. If this is inconsistent, so be it. True, for the last 200 years scientists have not deliberately constructed inconsistent theories; but now that we have well-established non-classical mathematics, perhaps they will.

If contradictions can be true, what does this mean for ethical reasoning and moral philosophy? Could there be true moral paradoxes, and if so, how should we navigate them?

Yes, there would appear to be normative dialetheias, cases where you ought to bring something about, and it is not the case that you ought to bring it about. If this is so, you just have to live with the contradiction, and take the consequences.

What are the most common objections you face regarding dialetheism, and how do you typically respond to them? For instance, how do you defend against the claim that accepting contradictions leads to logical and practical chaos?
The traditional arguments against dialetheism are those provided by Aristotle in his Metaphysics, and they are pretty hopeless, as most modern commentators now agree. Probably the most common objection now is that dialetheism implies that everything is true. This depends on the principle of Explosion. The validity of Explosion presupposes that truth and falsity are exclusive, and so begs the question. There is no logical chaos in a paraconsistent logic. Such logics are as precise and mathematically articulated as any other contemporary system of formal logic.

Perhaps the other most common objection is: ‘I just don’t see how a contradiction can be true’. That says more about the speaker than the view itself. Many people found the Special Theory of Relativity hard to accept at first because they just ‘couldn’t see how time could run at different rates in different frames of reference’. In both cases, you just have to get used to the new theoretical framework.

Looking forward, what do you see as the future of dialetheism in philosophy? Do you believe that your views will gain wider acceptance, and if so, what shifts in philosophical thought or practice do you anticipate?

I certainly hope so, but of course the future of philosophy—as of so much else—is inherently unpredictable. Philosophical thinking (at least in the West) has been constrained by the dogmatism of consistency since Aristotle. Once these blinkers are off, who knows where philosophy could go?

How has your commitment to paradoxes and dialetheism influenced your personal worldview? Do you find that it has changed the way you approach life’s uncertainties and contradictions?

Not really. Perhaps the fact that I have had to come to reject something I took to be obvious has made me more suspicious of views which many people (including myself) are wont to take for granted.

What advice would you give to young philosophers who are grappling with the idea of true paradoxes? 
How should they approach the study of logic and metaphysics in light of your theories?

Approach things with an open mind. Don’t believe something simply because tradition says it is so. Don’t believe something simply because you read it in a book—mine included. Explore the ideas and look at the evidence for yourself; make your own mind up.

Graham Priest is a philosopher and logician at CUNY, and a key defender of dialetheism, the view that there are true contradictions. Omari Edwards is a contributing editor for IAI News, the online magazine of the Institute of Art and Ideas. Twitter: @edwardsomari1

Join Priest alongside other speakers such as Slavoj Zizek, Roger Penrose, and Philippa Gregory at the HowTheLightGetsIn festival on September 21st-22nd, debating topics from consciousness to quantum mechanics, politics to beauty.
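Priest’s claims above, that Explosion “begs the question” and that “there is no logical chaos in a paraconsistent logic”, can be checked mechanically. The sketch below (my illustration, not part of the interview) encodes the truth tables of Priest’s own Logic of Paradox (LP) and verifies by brute force that Explosion fails while an ordinary inference such as conjunction elimination still holds:

```python
# Truth tables for Priest's Logic of Paradox (LP). Three values:
# T (true only), B (both true and false, a "glut"), F (false only).
# The designated values, the ones valid inference must preserve, are T and B.
T, B, F = 2, 1, 0                 # ordered F < B < T
DESIGNATED = {T, B}

def neg(a):
    return 2 - a                  # not-T = F, not-B = B, not-F = T

def conj(a, b):
    return min(a, b)              # conjunction takes the lesser value

def valid(premise, conclusion):
    """LP-valid: every valuation making the premise designated
    also makes the conclusion designated."""
    vals = (T, B, F)
    return all(conclusion(a, b) in DESIGNATED
               for a in vals for b in vals
               if premise(a, b) in DESIGNATED)

# Explosion: from A and not-A, infer an arbitrary B.
explosion = valid(lambda a, b: conj(a, neg(a)), lambda a, b: b)
print("Explosion valid in LP?", explosion)                  # False

# An ordinary inference, conjunction elimination, still goes through.
print("A and B entails A?", valid(conj, lambda a, b: a))    # True
```

The counterexample to Explosion is exactly a dialetheia: assign A the glut value B and the conclusion the value F; the premise A-and-not-A is then designated while the conclusion is not, so a contradiction does not entail everything.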
Chaos and Cause (混沌與原因) - Erik Van Aken
This article is somewhat thought-provoking; it is worth reading, and worth pushing the author's line of thought a step further or a level deeper. Since its theme is more philosophical than natural-scientific, it is posted in this column.

Chaos and cause

Can a butterfly’s wings trigger a distant hurricane? The answer depends on the perspective you take: physics or human agency

Erik Van Aken, Edited by Pam Weintraub, 06/13/24

A slight shift in Cleopatra’s beauty, and the Roman Empire unravels. You miss your train, and an unexpected encounter changes the course of your life. A butterfly alights from a tree in Michoacán, triggering a hurricane halfway across the globe. These scenarios exemplify the essence of ‘chaos’, a term scientists coined during the middle of the 20th century to describe how small events in complex systems can have vast, unpredictable consequences.

Beyond these anecdotes, I want to tell you the story of chaos and answer the question: ‘Can the simple flutter of a butterfly’s wings truly trigger a distant hurricane?’ To uncover the layers of this question, we must first journey into the classical world of Newtonian physics. What we uncover is fascinating – the Universe, from the grand scale of empires to the intimate moments of daily life, operates within a framework where chaos and order are not opposites but intricately connected forces.

In his bestselling book Chaos: Making a New Science (1987), James Gleick observes that 20th-century science will be remembered for three things: relativity, quantum mechanics (QM), and chaos. These theories are distinctive because they shift our understanding of classical physics toward a more complex, mysterious and unpredictable world. Classical physics, which reached its pinnacle in the work of Isaac Newton, painted a universe ruled by determinism and order. It was a world akin to a perfectly designed machine, where each action, like the fall of a domino, inevitably triggered a predictable effect. This absolute predictability – a world where understanding the present means knowing the future – became the essence of Newtonian mechanics. 
Classical physics not only presented an orderly universe to Newton’s followers, it also instilled a profound sense of mastery over the natural world. Newton’s discoveries fostered the belief that the Universe, previously shrouded in mystery, was now laid bare, sparking an unprecedented optimism in the power of science. Armed with Newton’s laws and revolutionary mathematics, leading thinkers felt they had finally unlocked the secrets of reality. In this atmosphere of scientific triumph, Alexander Pope, the great poet of the Enlightenment, wrote a fitting epitaph for Newton that captured the monumental impact of his contribution:

Nature and Nature’s laws lay hid in night.
God said, Let Newton be! and all was light.

Not everyone was excited. In his beautiful work Lamia (1820), John Keats poignantly expressed concern over the loss of mystery and wonder in the face of empirical scrutiny:

Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an Angel’s wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine –
Unweave a rainbow, as it erewhile made
The tender-person’d Lamia melt into a shade.

The ‘cold philosophy’ of classical physics seemed to ‘unweave a rainbow’, stripping the natural world of its enchantment and mystery. Keats resented the process of scientific rationalisation, which could ‘clip an Angel’s wings’ and reduce the world’s wonders to simple entries in ‘the dull catalogue of common things’.

And yet, the 20th century witnessed a dramatic shift with the emergence of relativity, which redefined our understanding of space and time; QM, which revolutionised our understanding of the subatomic world; and chaos theory. 
The orderly and predictable world of Newtonian physics, the dream of a mechanical universe ready to unveil her innermost workings, was, happily or not, something of an illusion. In the 20th century, science revealed a far more intricate, less predictable and, indeed, chaotic universe.

Like the other two pillars Gleick identified, chaos theory challenges our understanding of classical physics. However, unlike QM and relativity, chaos theory operates within a Newtonian framework – it assumes a deterministic reality governed by specific laws. Yet chaos theory reveals a beguiling level of unpredictability, particularly at a macroscopic level.

The unpredictability revealed by chaos theory, seemingly at odds with a deterministic worldview, arises from the complex nature of nonlinear systems. In dynamical systems, behaviour changes over time. The concept of determinism implies that future states are precisely determined by current conditions, without any randomness or chance involved. However, when dynamical systems exhibit nonlinearity, their behaviour becomes more complex and less predictable. This complexity arises from a disproportionate relationship between input or cause and output or effect.

Consider a simple faucet. At low pressure, water flows in a smooth, or laminar, pattern. As pressure increases, the flow remains steady but broadens slightly. At a critical point, however, marked by no more than a tiny pressure change, we see a ‘phase transition’ – the orderly flow suddenly becomes turbulent, exemplifying chaos: the sensitivity of nonlinear systems like fluids to minor changes, leading to unpredictable outcomes.

Think about the movement of a small pebble rolling down a mountain slope. Tiny variations in its starting point, uneven terrain, soil density, even wind direction can drastically alter its path and final position. For instance, imagine we drop a pebble at a specific location and it comes to rest in another location. 
Imagine we run a simple experiment, dropping the pebble one millimetre away from where we dropped it in the first place. If the pebble’s movement is slightly altered by external factors like wind, hitting a patch of highly dense soil or a large rock, its speed could increase dramatically, ultimately stopping in an unexpected location 5,000 mm away from where it landed in the first drop.

A parallel in celestial mechanics is the so-called three-body problem, recently popularised by the Netflix series 3 Body Problem. Consider two bodies in space: Earth and the Moon. Newtonian mechanics allows us to predict the orbital motions of these two bodies perfectly. Yet, when we add a third body, the Sun, we discover a level of complexity that defies Newtonian predictability. The gravitational interactions among these three bodies create a dynamic, nonlinear system where slight variations in initial conditions, for example, minor variations in the distances or velocities of any one body, can lead to vastly different outcomes; the long-term positions of the three bodies become practically impossible to predict.

In broader mathematical and scientific terms, ‘chaos’ refers to systems that appear random yet are inherently deterministic. Take the example of a roulette wheel, commonly perceived as a game of chance. While we might assume the outcome is purely random, the underlying mechanics of the roulette wheel, including the motion, friction and the force of the spin, adhere to deterministic physical laws. The true source of unpredictability stems from its extreme sensitivity to initial conditions: how forcefully the ball is dropped, the speed at which the wheel spins, subtle vibrations from environmental factors like an air conditioner, and even the movement of patrons around the table. These factors, often unnoticed, can significantly influence the outcome of each spin. 
Chaos theory teaches us that even seemingly insignificant variations in initial conditions – a fraction of a millimetre difference in the ball’s drop point – can lead to disproportionately large effects. This sensitivity, colloquially known as the butterfly effect, can shatter our common notion of cause and effect. It suggests that predicting the long-term future is incredibly complex because even tiny, seemingly irrelevant events can have significant consequences.

The term ‘butterfly effect’ is often attributed to the meteorologist Edward Lorenz, who used the now-familiar example to describe chaos: a butterfly flapping its wings in Brazil could set off a chain of events leading to a hurricane in Texas three weeks later. This seemingly outlandish scenario underscores the counterintuitive nature of chaos theory. While the idea of small causes having large effects might feel familiar, chaos theory challenges our common assumptions about how the world works. The surprising lesson isn’t that small events can have significant consequences but, rather, the profound difficulty in predicting those consequences.

This core principle – the difficulty in prediction – has a technical definition: ‘sensitive dependence’ on initial conditions – for instance, a roulette ball’s position before it is dropped, the speed of the roulette wheel, etc. But sensitive dependence is not a novel concept. It has a place in history:

For want of a nail the shoe was lost.
For want of a shoe the horse was lost.
For want of a horse the rider was lost.
For want of a rider the battle was lost.
For want of a battle the kingdom was lost.

‘For want of a nail’ captures a familiar notion about causality – small events can cascade into significant consequences. Yet, within the framework of chaos theory, we can take this idea further. Consider the potential for sudden changes due to phase transitions, as when the swirling water goes from smooth to turbulent. 
Tiny variations in a system’s conditions, like a seemingly insignificant missing nail, can accumulate and trigger an unexpected shift – the shoe falling off, the horse being injured, the battle being lost. These sudden changes, surprising transitions within the system, are driven by underlying physical laws, yet they reveal the inherent unpredictability and complexity embedded in what might seem like straightforward events. Just as a missing nail leads to the loss of a kingdom, could the flutter of a distant insect trigger catastrophic events? The answer, perhaps surprisingly, depends on perspective – how we choose to look at the world and how we understand cause and effect. Before we consider the two distinct perspectives, it is critical to note that the butterfly effect is a metaphor for a theory, namely, chaos – the idea that small changes in conditions can have large, unexpected effects. While the butterfly effect is a powerful image, it’s important to remember the scientific foundation Lorenz’s work provided.
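Sensitive dependence has a simple quantitative face. A standard textbook illustration (mine, not from Van Aken's essay) is the logistic map x → 4x(1 − x): start two copies of the system one part in a billion apart and watch the gap between them grow until the trajectories are unrelated.

```python
# The logistic map x -> 4x(1 - x), fully chaotic at r = 4. Two copies of
# the system start a billionth apart; the gap between them grows roughly
# exponentially until the trajectories bear no resemblance to each other.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9            # identical but for one part in a billion
gaps = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gaps.append(abs(x - y))

print(f"gap after 10 steps: {gaps[9]:.2e}")         # still tiny
print(f"largest gap in 60 steps: {max(gaps):.2f}")  # no longer small
```

Since the gap roughly doubles each step, a measurement error in the ninth decimal place dominates the state after a few dozen iterations: this is the 'impenetrable barrier' to long-term prediction in miniature.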
I mentioned that Lorenz was a meteorologist. Indeed, he studied the weather and tried to find ways to improve forecasting – predicting when a storm might arise, where it would turn, when it would die down, and so on. During his investigations at MIT, Lorenz developed a simple computer model to track hypothetical weather systems in a targeted environment (the actual world). As the story goes, Lorenz entered some numbers into his computer program and left his office to get a coffee. When he returned, he discovered a shocking result.

His model was relatively simple. It used a set of differential equations to represent how air moves and temperatures fluctuate. Lorenz was repeating a simulation he had run earlier – but he had rounded off one variable from .506127 to .506, a seemingly inconsequential alteration. To Lorenz’s surprise, that tiny alteration drastically transformed the model’s output. Lorenz’s groundbreaking work uncovered a startling phenomenon: small changes can have enormous, unforeseen consequences, leading to impenetrable barriers in long-term prediction. We call this phenomenon the butterfly effect, but its scientific foundation lies in the sensitivity of nonlinear systems to initial conditions.

The chaotic nature of nonlinear systems impacts more than just mathematics. For instance, small genetic mutations or environmental changes in biological evolution can lead to significant evolutionary shifts over time. The path of evolution is not linear or predictable; instead, it is full of unexpected twists and turns, like the movement of a pebble down the mountain. Similarly, in economics, markets function as complex, nonlinear systems. Rumours about a company or slight changes in interest rates can act as triggers, setting off substantial and unanticipated shifts. The 2007-08 financial crisis provides a sobering reminder that minor perturbations in one sector can ripple into a global meltdown. 
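Lorenz's accident is easy to re-enact. His 1961 program had twelve variables; the sketch below (an illustration, not Lorenz's actual model) uses the famous three-variable Lorenz-63 system he published later, with standard parameters and a crude fixed-step Euler integration, running one copy from x = 0.506127 and another from the rounded x = 0.506:

```python
# Two runs of the Lorenz-63 system (sigma = 10, rho = 28, beta = 8/3).
# The only difference between them is the rounding Lorenz performed:
# x = 0.506127 versus x = 0.506.
def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (0.506127, 1.0, 1.05)   # full-precision start
b = (0.506,    1.0, 1.05)   # rounded start, 0.000127 away
max_gap = 0.0
for _ in range(20000):      # 40 time units at dt = 0.002
    a, b = lorenz_step(a), lorenz_step(b)
    gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_gap = max(max_gap, gap)

print(f"initial separation: {0.506127 - 0.506:.6f}")
print(f"largest separation over the run: {max_gap:.1f}")
```

The two runs shadow each other for a while, then end up on different wings of the attractor: a difference in the fourth decimal place grows to the size of the system itself, which is the barrier to long-range weather forecasting Lorenz identified.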
Perhaps the point about small events is best stated by Terry Pratchett and Neil Gaiman in their book Good Omens (1990):

It used to be thought that the events that changed the world were things like big bombs, maniac politicians, huge earthquakes, or vast population movements, but it has now been realised that this is a very old-fashioned view held by people totally out of touch with modern thought. The things that really change the world, according to Chaos theory, are the tiny things. A butterfly flaps its wings in the Amazonian jungle, and subsequently a storm ravages half of Europe.

Tiny things matter. But can the movement of a butterfly, weighing roughly the same as a penny, cause a sizeable storm? The answer is rather complex: both yes and no – yes, from the perspective of classical physics, and no, from our perspective as human agents. Allow me to explain.

Consider the act of lighting a match. Conventionally, this act is perceived in a simple, linear fashion – the striking of the match (event A) leads to ignition (event B), ostensibly illustrating what 19th-century philosophers called the ‘law of causality’ – given event A, event B will follow. Simple enough. Until we learn that the law of causality breaks down when scrutinised through the lens of classical physics.

Physics informs us that igniting a match is not solely an outcome of its striking but rather the aggregate effect of a vast multitude of elements. These include the match’s chemical composition, the force exerted in the strike, the presence of oxygen, and many other factors. The critical point is that, from a physical perspective, causation is not a simplistic sequence but a complex interplay of myriad factors, each contributing more or less subtly to the final event. Thus, in the realm of classical physics, the concept of cause is dramatically broadened, suggesting that nearly every event within an event’s ‘past light cone’ – everything in its past – could be considered causal. 
To illustrate, consider the example of a tree falling in a forest. Here, the event’s past light cone encompasses all preceding events that could have influenced this particular tree’s fall; the concept, the ‘past light cone of an event’, indicates that information or influence travels at or below the speed of light. For the falling tree, the past light cone includes immediate factors like wind, the tree’s health, and soil conditions, as well as a multitude of more distant events – from the formation of weather patterns to ecological changes and even distant solar activity impacting Earth’s climate. No matter how seemingly unrelated or remote, each event converges within the tree’s past light cone, contributing to a complex web of causality.

The philosopher Alyssa Ney summarises the above point with notable clarity. In ‘Physical Causation and Difference-Making’ (2009), Ney writes, assuming we look to physics to ground or understand causality:

[T]here are a lot of causal relations at this world, perhaps a lot more than we ordinarily assume. The fields of our best physical theories are spread out across the entire universe and interact with everything in their reach. They link small events like your leaving the house this morning with those more significant ones transpiring in Iraq a little later and more distant ones farther away in the galaxy. It is not quite true on this picture that ‘everything causes everything’, but things come close.

Bertrand Russell’s arguments in ‘On the Notion of Cause’ (1912-13) complicate the picture of causality in physics even further. Russell attacks the idea of cause and effect altogether. In essence, he argues that if A produces B, and A encompasses the environment (the past light cone of A), this broadens the scope of event A to such an extent that it becomes essentially unrepeatable. 
Russell’s argument leads us to a dilemma: to uphold the law of causality, we must define events by noting invariable uniformities and by abstracting away most of the physical influences on A. Yet, this abstraction may inadvertently exclude causal influences, undermining the principle of causality. Thus, Russell asserts two significant conclusions: first, that our conventional notion of causality is not grounded in physics; and second, if notions like ‘cause’ must be reducible to physics, we should eliminate our use of the term ‘cause’. According to Russell, there is no cause and effect at all.

What does this mean for the butterfly effect? Quite simply, it means that when we look at causality through the lens of physics, the flapping of a butterfly’s wings counts as a contributing cause of a later storm. But so too does everything else within the storm’s past light cone. All flapping butterflies, a breaching whale in the Pacific, a young child playing football in Edinburgh, and the Moon’s gravitational effect all count as causal. The tension compels us closer to Russell’s radical conclusion – if nearly everything influences everything else, the word ‘cause’ begins to lose its meaning.

Yet, there is a line in the philosophy of causation, traceable through thinkers like R G Collingwood, Nancy Cartwright, Huw Price and James Woodward, which posits that we must locate the notion of cause in human practice by focusing on things like manipulation and control. In this view, ‘causes’ are seen as ‘handles’, things in nature that provide us with a measure of control. This framework emphasises the role of human perspectives in shaping, framing or limiting events, and compels us to consider the extent of our influence in complex systems. It also highlights the distinction between physical causation and how we use the concept of cause to understand and navigate the world. 
Consider my efforts to prevent a common cold: I focus on controllable factors like diet, sleep and who I interact with, and I disregard seemingly irrelevant factors like butterflies and distant whale breaches. The hook is this: while remote and uncontrollable factors like the movement of a butterfly can have some minor physical influence, the movement of the butterfly does not make a difference to my physical health. Philosophers often spell this out in terms of probability: I can alter the probability of catching a cold by ensuring I get sufficient sleep, while the probability is unaltered by catching a butterfly and keeping it safely in a jar; or counterfactuals: had I not stayed awake until 4am, I would not have become ill. We reject as absurd the counterfactual: had this particular butterfly not moved from one flower to another, I would not have become ill.

But notice the minor tension even here. If it is true that the movement of a butterfly (or anything in the past light cone) has some effect on my health, is it arbitrary to focus on controllable factors like how much I sleep? No, because we enable basic causal reasoning once we shift our focus from physics to a more practical, human-level perspective. Indeed, it seems to be a central aspect of our ordinary use of ‘cause’. We may want to avoid being injured or getting ill, and our interest leads us to ask a specific set of questions; in turn, this leads us to the fact that much of the world becomes irrelevant. If we want to avoid getting lung cancer or the flu, for example, we will not be interested in the current migration patterns of monarch butterflies or the number of universities in California.

Consider E H Carr’s seminal work What Is History? (1961). In a chapter titled ‘Causation in History’, Carr admits that determinism introduces serious complications in historical analysis. However, he emphasises that historians focus on fruitful generalisations, or what Carr calls ‘real’ causes. 
To illustrate, imagine that Smith, walking to buy a pack of cigarettes, is killed by a drunk driver speeding around a blind corner. While it is true that had Smith not been a smoker, he would not have died, we cannot generalise the proposition, ‘smoking caused Smith’s death.’ It is much more useful, certainly in the context of history and everyday life, to say that the real cause of Smith’s death was the drunk driver, the speed of the vehicle, or the blind corner. This is why historians cite the Treaty of Versailles or the Nazi invasion of Poland in 1939 as a cause of the Second World War and not Hitler’s being born.

A basic thesis emerges: we can overcome Russell’s problem, the problem of causation in physics, by shifting our perspective. If we look at the world through the ordinary lens of human agency, rather than the lens of physics, we can talk about causes as handles, events within nature that make a difference to some effect and provide us with a sense of control.

Imagine that Sam and Suzy are standing near a fire. Each bystander desires to extinguish the flame. Imagine further that Suzy decides to spray the fire with a hose, and that Sam decides to pray for the fire to go out. From a physical perspective, Sam and Suzy – one spraying and one praying – affect the fire by their mere presence and, thus, by their actions. Yet, from a macro-level human perspective, only one individual affects the fire. That is, only Suzy’s spraying makes a difference to the flames.

When we shift our perspective from physics to agency and difference-making, we land on the most intuitive assessment of the butterfly effect. From our perspective, the butterfly is not a cause of the storm because we cannot affect storms by manipulating butterflies. And while the butterfly could have an effect on a storm, it does not make a difference to the occurrence of storms in a way that we can predict or control. 
Exploring the dichotomy between the perspectives of physics and human agency uncovers a paradox: our actions are simultaneously bound by the determinism of physical laws and enriched with intention, purpose and meaning that go beyond them. To fully appreciate what this means, heed a lesson from Fyodor Dostoyevsky’s great novel, The Brothers Karamazov (1880), which asks how a benevolent God could allow suffering. There is just one virtuous character in the novel, the monk Father Zosima, whose simple teaching, dictated through the genius of Dostoyevsky, sheds light on chaos, causation and difference-making:

See, here you have passed by a small child, passed by in anger, with a foul word, with a wrathful soul; you perhaps did not notice the child, but he saw you, and your unsightly and impious image has remained in his defenceless heart. You did not know it, but you may thereby have planted a bad seed in him, and it may grow, and all because you did not restrain yourself before the child, because you did not nurture in yourself a heedful, active love … for one ought to love not for a chance moment but for all time. Anyone, even a wicked man, can love by chance. My young brother asked forgiveness of the birds: it seems senseless, yet it is right, for all is like an ocean, all flows and connects; touch it in one place and it echoes at the other end of the world.

Erik Van Aken is an instructor of philosophy and religious studies at Rocky Mountain College in Montana, US. This Essay was made possible through the support of a grant to Aeon Media from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Foundation. Funders to Aeon Media are not involved in editorial decision-making.
How Metaphysical Idealism Can Benefit Society (唯心論觀點對社會的助益) ---- James Tartaglia
I accept materialism, so naturally I am unpersuaded by both the views and the arguments of this article. But in the pursuit of knowledge one must, first, be humble, and second, not fence oneself in. I post it here; perhaps I can find time later to "spar" with it a little.

The numbers in the text below are not in the original; I added them to mark the paragraphs, which will make it easier to refer to them when discussing Professor Tartaglia's piece later.

How metaphysical idealism can benefit society

Why the modern world needs idealism

James Tartaglia, 05/29/24

Many in the 20th century abandoned idealism. James Tartaglia now advocates for a revival of metaphysical idealism, arguing that it is misunderstood and often unfairly dismissed by the scientific establishment. By clarifying common misconceptions, Tartaglia reveals how idealism could offer significant social benefits, encouraging a more philosophical society and one focused on the primacy of experience. His new book Inner Space Philosophy: Why the Next Stage of Human Development Should Be Philosophical, Explained Radically (Suitable for Wolves) comes out on the 28th June 2024. – Editor’s Notes

1. These days, metaphysical idealism is an immediate turn-off for most secular, scientifically minded people, who start to think of gods and spirits, maybe even Ouija boards. I think that’s a shame, because it’s a prejudice that results from some straightforward misunderstandings – misunderstandings which have greatly benefited the fortunes of idealism’s ancient metaphysical rival, materialism. To understand the potential social benefits of idealism you need to be able to take the view seriously, so let’s start by clearing away some of those misunderstandings.

2. Firstly, idealism has no commitment to gods or spirits, only to the existential primacy of conscious experience – or, at least, that’s the kind of idealism I’m talking about; there are others. The 19th-century German philosopher Arthur Schopenhauer was no less ardent in his commitment to atheism than to idealism. I’m not denying that if you’re a believer then you’re better off with idealism than materialism – a physical God is a weird idea – but my point is simply that the two don’t necessarily need to go together. 
I think the idealist interpretation of reality is the best we’ve got, and I have no religious beliefs and do not believe in anything supernatural; never have, and unless something very unexpected happens, never will.

3. Secondly, materialism is not science, it’s not even close – materialism provides a metaphysical interpretation of what science tells us about the world, just as idealism does. Materialism originated in the 5th century B.C. in Greece and India, back in the days when you were allowed to define your own “atoms”, the materialist’s building blocks of reality. Materialism and idealism are competing metaphysical interpretations of our reality, as are dualism, panpsychism and all the other more esoteric options. Maybe you’re suspicious of metaphysical theorizing in general (in which case I think that on closer examination you’d find your suspicions were ungrounded) – but in that case you should be just as suspicious of materialism as of idealism.

4. Thirdly, idealism is not saying that solid things, like rocks and pebbles, are wispy and immaterial things, akin to clouds or puffs of smoke. For a start, that comparison doesn’t even make sense, since clouds and puffs of smoke exist in space, and according to idealists, experience doesn’t. And if idealism really were saying that rocks and pebbles are clouds of immaterial stuff, then it would be making a scientific hypothesis, one which would need to be empirically tested. Idealism simply isn’t in that kind of business.

5. OK then, so what is idealism saying about rocks and pebbles? It’s saying we are part of a universe of pure conscious experience, and that believing in the physical existence of rocks and pebbles allows humans to get their bearings in that ocean of sentience. 
Rocks and pebbles are a way of making sense of experience; they are posits within an explanatory model – the experience exists, it’s independently there, and we understand it by thinking of it as experiences of things like rocks and pebbles. Rocks and pebbles don’t independently exist, but the nature of experience is such that they seem to – that’s why the explanatory model works, that’s why we came up with a specifically physical model.

The materialists instead say that the rocks and pebbles independently exist, but then they have a problem with what to say about experience, a problem they’ll never solve, in my view – instead they’ll endlessly oscillate between the two nonsensical options of saying it doesn’t exist or that it mysteriously arises from the brain. But that’s another matter – if you want to know what convinces me of idealism you can look at my book, Philosophy in a Technological World: GODS AND TITANS (the warring gods and titans of the story I tell in that book are idealism and materialism, by the way). What I want to concentrate on here are some possible benefits of a societal turn to idealism.

6. The main benefit I’m going to talk about, one which might have all kinds of knock-on effects, is that idealism tells us we live in a reality consisting of all we ever really cared about: experience. What do I mean by saying that experience is all we care about? And why would believing that have societal benefits? To start with the first question, consider the following. Three things that people might very seriously care about are: owning a new Italian sportscar, becoming a famous YouTuber, and not getting seriously ill. 
But suppose you knew that the moment you first climbed into your shiny new Lamborghini Huracán (yellow), you would fall into the most terrible depression ever, one that would last for as long as you owned the car – you wouldn’t want it then, because all you ever really wanted was the experiences associated with owning it, elation, pride, excitement, that kind of thing, none of which is available when you are seriously depressed. The same applies to becoming a famous YouTuber – if it made you feel awful then you’d regret it immediately. And in the case of getting seriously ill it’s even more obvious – you don’t want the pain, you don’t want the fear, and you certainly don’t want your experiences to be brought to an end by death. Experience, I maintain, is all we really care about: love, contentment, excitement, interest, satisfaction, tingles, all that kind of business. And there is nothing remotely selfish about that either, since we can care about each other’s feelings; love would be impossible otherwise. 7. Our attraction to experience makes perfect sense, according to idealism, because we’re experiential beings. But even if that’s true, why does it matter? Why would being more in touch with the metaphysically ultimate nature of reality benefit us practically? Well, the way I see it is that if the idealist view is indeed correct, but you don’t believe it, perhaps because you’re a materialist, then you’ll end up with a strange mismatch between what you say you believe and what you act as if you believe. You’ll be someone who spends their life in pursuit of experiences even though experience has little or no place in their conception of reality. Ask most people about their conception of reality and their thoughts quickly turn to outer space or infinitesimally small particles - experience completely drops out of the picture. But then, after they stop thinking about “reality” and return to their everyday lives, experience once again becomes their main focus. 8. 
I don’t think letting our officially sanctioned conception of reality become so completely out of kilter with our lives is a good idea. To be fair, it may be unavoidable, because materialism might be true even though science cannot explain experience at present – but I don’t think this is the case. But either way, the societal effect of this mismatch is that general, philosophical reflection on the nature of the reality you’ve found yourself born into has been seriously disincentivised. Reality has become something for experts to concern themselves with, something interesting to hear about on science podcasts, perhaps, but remote from your everyday concerns with experience. So, people become less inclined to actively, creatively and critically reflect on their existential situation, that is, they become less philosophical. 9. Now, suppose that while people were becoming less philosophical, their technological capabilities were rapidly developing, and that this development wasn’t directed by a philosophical vision of a desirable human future, but rather by the ingenuity of scientists and technologists making whatever new, previously unmakeable things they could. Is that not essentially the situation we find ourselves in? I think it is, and yet look where our technology is heading, however unintentionally – to experience! Our technological development, driven by the market, is chasing after experience, just as we chase after it in our everyday lives. 10. We have very rapidly gone from the passive experience of television to the active control over non-natural experience provided by video games; and now virtual reality is developing fast, when they get that right how are we ever going to stay outside of it for long? As these “experience machines” have become better and better, they have taken up more and more of our lives; as far as our younger generations are concerned they seem to be completely taking over. 
And now we are trying to make autonomous experience machines too, artificial intelligences, because whether through the fog of materialism or just lack of reflection, they seem like created minds – and to be able to create minds suggests godlike control over experience. 11. These directions of technological travel – natural to us, an idealist would say, but not reflective of any wisdom – have not been decided through philosophical reflection on how we want human life to develop. That is not how it happens at all. What happens is that scientists and technologists compete to make breakthroughs, then the breakthroughs get commercialized and people buy into the new tech – society then benefits from the upsides while trying to deal with the downsides, until the next big tech development comes along. 12. But think what might happen if idealism starts to catch on. For more and more people, what they care most about is what they consider to be most real. A population like that, of the kind that has not yet existed, would be a lot more philosophical. Imagine yourself believing that idealism is true – if this were a genuine new belief for you, then the world you thought you were familiar with would suddenly seem very odd indeed, you’d think about it a lot! After all, you’ve just realised you’re swimming in that ocean of sentience I spoke of earlier. While you’re trying to get to grips with the enormity of it all, your thoughts will likely spin off in all kinds of new philosophical directions. 13. As our population becomes increasingly philosophical, thanks to idealism, people could be expected to take much more interest in technological development and how it is being used to shape the human future. They might start taking a view on how technological development ought to be happening, views which might feed into democratic politics. 
Then, the next thing you know, the human race is coordinating their technological development, which has become firmly focused on refining and improving our experiences, so that there’s more love, beauty, ecstasy, and cleverness around, but less hate, ugliness, boredom and stupidness. Since we now think we’re experiential beings, identities such as gender and race seem less important; at the ultimate level we know we’re all alike and we find cooperation easier. When we first make contact with extraterrestrials, they remark that humans are a remarkably philosophical species. They’re impressed by the kind of experiences we have and emulate us. Our good influence starts to spread around the galaxy. James Tartaglia is Professor of Metaphysical Philosophy at Keele University. His latest book is Inner Space Philosophy: Why the Next Stage of Human Development Should Be Philosophical, Explained Radically (Suitable for Wolves)
The Bayesian Balance -- Ed Gibney/Zafir Ivanov
The subtitle of the article below is: “How a tool for Bayesian thinking can guide us between relativism and the truth trap.” Please look up the relevant technical terms on Google yourself. The authors’ discussion in the “Introduction” (the first seven paragraphs below) concerns the nature of concepts/terms such as “truth,” “fact,” and “belief” as used in everyday life. If you don’t feel like reading such a long article, or have little interest in the “technical” discussion, I strongly recommend reading at least this part. It basically supports my own understanding of knowledge (subsection 1.1-1) of that earlier essay of mine, and that essay’s discussion of “relativism”). BAYESIAN BALANCE: HOW A TOOL FOR BAYESIAN THINKING CAN GUIDE US BETWEEN RELATIVISM AND THE TRUTH TRAP Ed Gibney & Zafir Ivanov, 04/19/24. 0. Introduction (the original article has no section breaks, so this subheading is my addition; likewise below) On October 17, 2005 the talk show host and comedian Stephen Colbert introduced the word “truthiness” in the premiere episode of his show The Colbert Report:1 “We’re not talking about truth, we’re talking about something that seems like truth — the truth we want to exist.”2 Since then the word has become entrenched in our everyday vocabulary but we’ve largely lost Colbert’s satirical critique of “living in a post-truth world.” Truthiness has become our truth. Kellyanne Conway opened the door to “alternative facts”3 while Oprah Winfrey exhorted you to “speak your truth.”4 And the co-founder of Skeptic magazine, Michael Shermer, has begun to regularly talk to his podcast guests about objective external truths and subjective internal truths, inside of which are historical truths, political truths, religious truths, literary truths, mythical truths, scientific truths, empirical truths, narrative truths, and cultural truths.5 It is an often-heard complaint to say that we live in a post-truth world, but what we really have is far too many claims for it. Instead, we propose that the vital search for truth is actually best continued when we drop our assertions that we have something like an absolute Truth with a capital T. Why is that? Consider one of our friends who is a Young Earth creationist. He believes the Bible is inerrant. He is convinced that every word it contains, including the six days of creation story of the universe, is Truth (spelled with a capital T because it is unquestionably, eternally true). 
From this position, he has rejected evidence brought to him from multiple disciplines that all converge on a much older Earth and universe. He has rejected evidence from fields such as biology, paleontology, astronomy, glaciology, and archeology, all of which should reduce his confidence in the claim that the formation of the Earth and every living thing on it, together with the creation of the sun, moon, and stars, all took place in literally six Earth days. Even when it was pointed out to him that the first chapter of Genesis mentions liquid water, light, and every kind of vegetation before there was a sun or any kind of star whatsoever, he claimed not to see a problem. His reply to such doubts is to simply say, “with God, all things are possible.”6 Lacking any uncertainty about the claim that “the Bible is Truth,” this creationist has only been able to conclude two things when faced with tough questions: (1) we are interpreting the Bible incorrectly, or (2) the evidence that appears to undermine a six-day creation is being interpreted incorrectly. These are inappropriately skeptical responses, but they are the only options left to someone who has decided beforehand that their belief is Truth. And, importantly, we have to admit that this observation could be turned back on us too. As soon as we become absolutely certain about a belief—as soon as we start calling something a capital “T” Truth—then we too become resistant to any evidence that could be interpreted as challenging it. After all, we are not absolutely certain that the account in Genesis is false. Instead, we simply consider it very, very unlikely, given all of the evidence at hand. We must keep in mind that we sample a tiny sliver of reality, with limited senses that only have access to a few of possibly many dimensions, in but one of quite likely multiple universes. Given this situation, intellectual humility is required. 
Some history and definitions from philosophy are useful to examine all of this more precisely. Of particular relevance is the field of epistemology, which studies what knowledge is or can be. A common starting point is Plato’s definition of knowledge as justified true belief (JTB).7 According to this JTB formulation, all three of those components are necessary for our notions or ideas to rise to the level of being accepted as genuine knowledge as opposed to being dismissible as mere opinion. And in an effort to make this distinction clear, definitions for all three of these components have been developed over the ensuing millennia. For epistemologists, beliefs are “what we take to be the case or regard as true.”8 For a belief to be true, it doesn’t just need to seem correct now; “most philosophers add the further constraint that a proposition never changes its truth-value in space or time.”9 And we can’t just stumble on these truths; our beliefs require some reason or evidence to justify them.10 Readers of Skeptic will likely be familiar with skeptical arguments from Agrippa (the problem of infinite regress11), David Hume (the problem of induction12), Rene Descartes (the problem of the evil demon13), and others that have chipped away at the possibility of ever attaining absolute knowledge. In 1963, however, Edmund Gettier fully upended the JTB theory of knowledge by demonstrating—in what has come to be called “Gettier problems”14—that even if we managed to actually have a justified true belief, we may have just gotten there by a stroke of good luck. And the last 60 years of epistemology have shown that we can seemingly never be certain that we are in receipt of such good fortune. This philosophical work has been an effort to identify an essential and unchanging feature of the universe—a perfectly justified truth that we can absolutely believe in and know. This Holy Grail of philosophy surely would be nice to have, but it makes sense that we don’t. 
Ever since Darwin demonstrated that all of life could be traced back to the simplest of origins, it has slowly become obvious that all knowledge is evolving and changing as well. We don’t know what the future will reveal and even our most unquestioned assumptions could be upended if, say, we’ve actually been living in a simulation all this time, or Descartes’ evil demon really has been viciously deluding us. It only makes sense that Daniel Dennett titled one of his recent papers, “Darwin and the Overdue Demise of Essentialism.”15 So, what is to be done after this demise of our cherished notions of truth, belief, and knowledge? Hold onto them and claim them anyway, as does the creationist? No. That path leads to error and intractable conflict. Instead, we should keep our minds open, and adjust and adapt to evidence as it becomes available. This style of thinking has become formalized and is known as Bayesian reasoning. Central to Bayesian reasoning is a conditional probability formula that helps us revise our beliefs to be better aligned with the available evidence. The formula is known as Bayes’ theorem. It is used to work out how likely something is, taking into account both what we already know as well as any new evidence. As a demonstration, consider a disease diagnosis, derived from a paper titled, “How to Train Novices in Bayesian Reasoning:” 1. Applying the Bayesian Balance (the original article has no section breaks, so this subheading is my addition) 1.0 Overview 10 percent of adults who participate in a study have a particular medical condition. 60 percent of participants with this condition will test positive for the condition. 20 percent of participants without the condition will also test positive. Calculate the probability of having the medical condition given a positive test result.16 Most people, including medical students, get the answer to this type of question wrong. Some would say the accuracy of the test is 60 percent. 
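Before walking through the answer in prose, the figures can be checked directly with Bayes' theorem. The short script below is a minimal sketch of that calculation (the variable names are mine, not the authors'):

```python
# Bayes' theorem check for the medical-test example:
# 10% prevalence, 60% sensitivity, 20% false-positive rate.
prevalence = 0.10
sensitivity = 0.60        # P(positive | condition)
false_positive = 0.20     # P(positive | no condition)

# P(positive) by the law of total probability
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# P(condition | positive) via Bayes' theorem
posterior_pos = sensitivity * prevalence / p_positive
print(posterior_pos)  # 0.25

# P(no condition | negative): the "about 95 percent" chance of being in the clear
posterior_clear = (1 - false_positive) * (1 - prevalence) / (1 - p_positive)
print(round(posterior_clear, 3))  # 0.947
```

Only 6 of 24 positives are true positives, so the posterior is 25 percent, as the article goes on to explain.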
However, the answer must be understood in the broader context of false positives and the relative rarity of the disease. Simply putting actual numbers on the face of these percentages will help you visualize this. For example, since the rate of the disease is only 10 percent, that would mean 10 in 100 people have the condition, and the test would correctly identify six of these people. But since 90 of the 100 people don’t have the condition, yet 20 percent of them would also receive a positive test result, that would mean 18 people would be incorrectly flagged. Therefore, 24 total people would get positive test results, but only six of those would actually have the disease. And that means the answer to the question is only 25 percent. (And, by the way, a negative result would only give you about 95 percent likelihood that you were in the clear. Four of the 76 negatives would actually have the disease.) Now, most usages of Bayesian reasoning won’t come with such detailed and precise statistics. We will very rarely be able to calculate the probability that an assertion is correct by using known weights of positive evidence, negative evidence, false positives, and false negatives. However, now that we are aware of these factors, we can try to weigh them roughly in our minds, starting with the two core norms of Bayesian epistemology: thinking about beliefs in terms of probability and updating one’s beliefs as conditions change.17 We propose it may be easier to think in this Bayesian way using a modified version of a concept put forward by the philosopher Andy Norman, called Reason’s Fulcrum.18 1.1 Figure 1. A Simple Lever. Balancing a simple lever can be achieved by moving the fulcrum so that the ratio of the beam is the inverse of the ratio of mass. Here, an adult who is three times heavier than the child is balanced by giving the child three times the length of beam. The mass of the beam is ignored. Illustrations in this article by Jim W.W. 
Smith (see the original page for the figure) Like Bayes, Norman asserts that our beliefs ought to change in response to reason and evidence, or as David Hume said, “a wise man proportions his belief to the evidence.”19 These changes could be seen as the movement of the fulcrum lying under a simple lever. Picture a beam or a plank (the lever) with a balancing point (the fulcrum) somewhere in the middle, such as a playground teeter-totter. As in Figure 1, you can balance a large adult with a small child just by positioning the fulcrum closer to the adult. And if you know their weight, then the location of that fulcrum can be calculated ahead of time because the ratio of the beam length on either side of the fulcrum is the inverse of the ratio of mass between the adult and child (e.g., a three times heavier person is balanced by a distance having a ratio of 1:3 units of distance). If we now move to the realm of reason, we can imagine substituting the ratio of mass between an adult and child by the ratio of how likely the evidence is to be observed between a claim and its counterclaim. Note how the term in italics captures not just the absolute quantity of evidence but the relative quality of that evidence as well. Once this is considered, then the balancing point at the fulcrum gives us our level of credence in each of our two competing claims. 1.2 Figure 2. Ratio of 90–10 for People Without–With the Condition. A 10 percent chance of having a condition gives a beam ratio of 1:9. The location of the fulcrum shows the credence that a random person should have about their medical status. (see the original page for the figure) To see how this works for the example previously given about a test for a medical condition, we start by looking at the balance point in the general population (Figure 2). Not having the disease is represented by 90 people on the left side of the lever, and having the disease is represented by 10 people on the right side. This is a ratio of 9:1. 
So, to get our lever to balance, we must move the fulcrum so that the length of the beam on either side of the balancing point has the inverse ratio of 1:9. This, then, is the physical depiction of a 10 percent likelihood of having the medical condition in the general population. There are 10 units of distance between the two populations and the fulcrum is on the far left, 1 unit away from all the negatives. 1.3 Figure 3. Ratio of 18 False Positives to 6 True Positives. A 1 to 3 beam ratio illustrates a 25 percent chance of truly having this condition. The location of the fulcrum shows the proper level of credence for someone if they receive a positive test. (see the original page for the figure) Next, we want to see the balance point after a positive result (Figure 3). On the left: the test has a 20 percent false positive rate, so 18 of the 90 people stay on our giant seesaw even though they don’t actually have the condition. On the right: 60 percent of the 10 people who have the condition would test positive, so this leaves six people. Therefore, the new ratio after the test is 18:6, or 3:1. This means that in order to restore balance, the fulcrum must be shifted to the inverse ratio of 1:3. There are now four total units of distance between the left and right, and the fulcrum is 1 unit from the left. So, after receiving a positive test result, the probability of having the condition (being in the group on the right) is one in four or 25 percent (the portion of beam on the left). This confirms the answer we derived earlier using abstract mathematical formulas, but many may find the concepts easier to grasp based on the visual representation. To recap, the position of the fulcrum under the beam is the balancing point of the likelihood of observing the available evidence for two competing claims. This position is called our credence. As we become aware of new evidence, our credence must move to restore a balanced position. 
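The inverse-ratio rule amounts to one small formula: the fulcrum's distance from the left end, as a fraction of the beam, equals the share of people sitting on the right side. A minimal sketch (the function name is mine):

```python
def fulcrum_position(left_count, right_count):
    """Fraction of the beam length, measured from the left end, at which
    the lever balances. Distances are in the inverse ratio of the masses,
    so the fraction equals the right side's share of the total."""
    return right_count / (left_count + right_count)

# General population: 90 without the condition (left) vs 10 with it (right)
print(fulcrum_position(90, 10))  # 0.1 -> 1 unit from the left on a 10-unit beam

# After a positive test: 18 false positives (left) vs 6 true positives (right)
print(fulcrum_position(18, 6))   # 0.25 -> 1 unit from the left on a 4-unit beam
```

The fulcrum fraction is exactly the credence the article describes: 10 percent before the test, 25 percent after.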
In the example above, the average person in the population would have been right to hold a credence of 10 percent that they had a particular condition. And after getting a positive test, this new evidence would shift their credence, but only to a likelihood of 25 percent. That’s worse for the person, but actually still pretty unlikely. Of course, more relevant evidence in the future may shift the fulcrum further in one direction or another. That is the way Bayesian reasoning attempts to wisely proportion one’s credence to the evidence. 1.4 Figure 4. Breaking Reason’s Fulcrum. Absolute certainty makes Bayes’ theorem unresponsive to evidence in the same way that a simple lever is unresponsive to mass when it becomes a ramp. (see the original page for the figure) What about our Young Earth creationist friend? When using Bayes’ theorem, the absolute certainty he holds starts with a credence of zero percent or 100 percent and always results in an end credence of zero percent or 100 percent, regardless of what any possible evidence might show. To guard against this, the statistician Dennis Lindley proposed “Cromwell’s Rule,” based on Oliver Cromwell’s famous 1650 quip: “I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”20 This rule simply states that you should never assign a probability of zero percent or 100 percent to any proposition. Once we frame our friend’s certainty in the Truth of biblical inerrancy as setting his fulcrum to the extreme end of the beam, we get a clear model for why he is so resistant to counterevidence. Absolute certainty breaks Reason’s Fulcrum. It removes any chance for leverage to change a mind. When beliefs reach the status of “certain truth” they simply build ramps on which any future evidence effortlessly slides off (Figure 4). So far, this is the standard way of treating evidence in Bayesian epistemology to arrive at a credence. 
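Cromwell's Rule can be seen directly in the update formula itself: a prior of exactly zero or one passes through Bayes' theorem unchanged, no matter how lopsided the evidence. A minimal sketch (the function and variable names are mine):

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior probability of hypothesis H after observing evidence E,
    where likelihood_h = P(E|H) and likelihood_not_h = P(E|not-H)."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

# A moderate prior responds to strong evidence...
print(bayes_update(0.5, 0.9, 0.1))      # 0.9

# ...but certainty is immovable, whatever the evidence shows (Cromwell's Rule):
print(bayes_update(1.0, 0.001, 0.999))  # 1.0
print(bayes_update(0.0, 0.999, 0.001))  # 0.0
```

With `prior = 1.0` the term for the rival hypothesis is multiplied by zero, so no conceivable evidence can budge the result: the broken fulcrum, in arithmetic form.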
The lever and fulcrum depictions provide a tangible way of seeing this, which may be helpful to some readers. However, we also propose that this physical model might help with a common criticism of Bayesian epistemology. In the relevant academic literature, Bayesians are said to “hardly mention” sources of knowledge, the justification for one’s credence is “seldom discussed,” and “Bayesians have hardly opened their ‘black box’, E, of evidence.”21 We propose to address this by first noting it should be obvious from the explanations above that not all evidence deserves to be placed directly onto the lever. In the medical diagnosis example, we were told exactly how many false negatives and false positives we could expect, but this is rarely known. Yet, if ten drunken campers over the course of a few decades swear they saw something that looked like Bigfoot, we would treat that body of evidence differently than if it were nine drunken campers and footage from one high-definition camera of documentarians working for the BBC. How should we depict this difference between the quality of evidence versus the quantity of evidence? We don’t yet have firm rules or “Bayesian coefficients” for how to precisely treat all types of evidence, but we can take some guidance from the history of the development of the scientific method. Evidential claims can start with something very small, such as one observation under suspect conditions given by an unreliable observer. In some cases, perhaps that’s the best we’ve got for informing our credences. Such evidence might feel fragile, but…who knows? The content could turn out to be robust. How do we strengthen it? Slowly, step by step, we progress to observations with better tools and conditions by more reliable observers. 
Eventually, we’re off and running with the growing list of reasons why we trust science: replication, verification, inductive hypotheses, deductive predictions, falsifiability, experimentation, theory development, peer review, social paradigms, incorporating a diversity of opinions, and broad consensus.22 We can also bracket these various knowledge generating activities into three separate categories for theories. The simplest type of theory we have explains previous evidence. This is called retrodiction. All good theories can explain the past, but we have to be aware that this is also what “just-so stories” do, as in Rudyard Kipling’s entertaining theory for how Indian rhinoceroses got their skin—cake crumbs made them so itchy they rubbed their skin until it became raw, stretched, and all folded up.23 Even better than simply explaining what we already know, good theories should make predictions. Newton’s theories predicted that a comet would appear around Christmas time in 1758. When this unusual sight appeared in the sky on Christmas day, the comet (named for Newton’s close friend Edmund Halley) was taken as very strong evidence for Newtonian physics. Theories such as this can become stronger the more they explain and predict further evidence. 1.5 Finally, beyond predictive theories, there are ones that can bring forth what William Whewell called consilience.24 Whewell coined the term scientist and he described consilience as what occurs when a theory that is designed to account for one type of phenomenon turns out to also account for another completely different type. The clearest example is Darwin’s theory of evolution. It accounts for biodiversity, fossil evidence, geographical population distribution, and a huge range of other mysteries that previous theories could not make sense of. And this consilience is no accident—Darwin was a student of Whewell’s and he was nervous about sharing his theory until he had made it as robust as possible. Figure 5. 
The Bayesian Balance. Evidence is sorted by sieves of theories that provide retrodiction, prediction, and consilience. Better and better theories have lower rates of false positives and require a greater movement of the fulcrum to represent our increased credence. Evidence that does not yet conform to any theories at all merely contributes to an overall skepticism about the knowledge we thought we had. (see the original page for the figure) Combining all of these ideas, we propose a new way (Figure 5) of sifting through the mountains of evidence the world is constantly bombarding us with. We think it is useful to consider the three different categories of theories, each dealing with different strengths of evidence, as a set of sieves by which we can first filter the data to be weighed in our minds. In this view, some types of evidence might be rather low quality, acting like a medical test with false positives near 50 percent. Such poor evidence goes equally on each side of the beam and never really moves the fulcrum. However, other evidence is much more likely to be reliable and can be counted on one side of the beam at a much higher rate than the other (although never with 100 percent certainty). And evidence that does not fit with any theory whatsoever really just ought to make us feel more skeptical about what we think we know until and unless we figure out a way to incorporate it into a new theory. 2. Conclusion We submit that this mental model of a Bayesian Balance allows us to adjust our credences more easily and intuitively. Also, it never tips the lever all the way over into unreasonable certainty. To use it, you don’t have to delve into the history of philosophy, epistemology, skepticism, knowledge, justified true beliefs, Bayesian inferences, or difficult calculations using probability notation and unknown coefficients. You simply need to keep weighing the evidence and paying attention to which kinds of evidence are more or less likely to count. 
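The sieve idea can be put in Bayesian terms: evidence with a false-positive rate near 50 percent is about as likely under either claim, so repeated helpings of it never move the fulcrum, while a single piece of reliable evidence shifts it sharply. A rough sketch (the names and the particular numbers are mine, for illustration only):

```python
def weigh_evidence(credence, p_e_if_true, p_e_if_false):
    # One Bayesian update of credence in a claim, given how likely
    # the evidence would be if the claim were true vs. if it were false.
    num = p_e_if_true * credence
    return num / (num + p_e_if_false * (1 - credence))

credence = 0.5

# Ten reports from unreliable observers: the evidence is equally likely
# either way (a 50 percent false-positive "sieve"), so the fulcrum
# never moves, no matter how many such reports pile up.
for _ in range(10):
    credence = weigh_evidence(credence, 0.5, 0.5)
print(credence)  # still 0.5

# One piece of high-quality footage: far more likely if the claim is
# true, so a single observation shifts the fulcrum sharply.
credence = weigh_evidence(credence, 0.95, 0.05)
print(credence)  # 0.95
```

This is the Bigfoot contrast in miniature: quantity of low-quality evidence does nothing, while quality does the work.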
Remember that observations can sometimes be misleading, so a good guiding principle is, “Could my evidence be observed even if I’m wrong?” Doing so fosters a properly skeptical mindset. It frees us from the truth trap, yet enables us to move forward, wisely proportioning our credences as best as the evidence allows us. Zafir Ivanov is a writer and public speaker focusing on why we believe and why it’s best we believe as little as possible. His lifelong interests include how we form beliefs and why people seem immune to counterevidence. He collaborated with the Cognitive Immunology Research Initiative and The Evolutionary Philosophy Circle. Watch his TED talk. Ed Gibney writes fiction and philosophy while trying to bring an evolutionary perspective to both of those pursuits. He has previously worked in the federal government trying to make it more effective and efficient. He started a Special Advisor program at the U.S. Secret Service to assist their director with this goal, and he worked in similar programs at the FBI and DHS after business school and a stint in the Peace Corps. His work can be found at evphil.com. References 1. https://rb.gy/ms7xw 2. https://rb.gy/erira 3. https://rb.gy/pjkay 4. https://rb.gy/yyqh0 5. https://rb.gy/96p2g 6. https://rb.gy/f9rj3 7. https://rb.gy/5sdni 8. https://rb.gy/zdcqn 9. https://rb.gy/3gke6 10. https://rb.gy/1no1h 11. https://rb.gy/eh2fl 12. https://rb.gy/2k9xa 13. Gillespie, M. A. (1995). Nihilism Before Nietzsche. University of Chicago Press. 14. https://rb.gy/4iavf 15. https://rb.gy/crv9j 16. https://rb.gy/zb862 17. https://rb.gy/dm5qc 18. Norman, A. (2021). Mental Immunity: Infectious Ideas, Mind-Parasites, and the Search for a Better Way to Think. Harper Wave. 19. https://rb.gy/2k9xa 20. Jackman, S. (2009). The Foundations of Bayesian Inference. In Bayesian Analysis for the Social Sciences. John Wiley & Sons. 21. Hajek, A., & Lin, H. (2017). A Tale of Two Epistemologies? Res Philosophica, 94(2), 207–232. 22. Oreskes, N. 
(2019). Why Trust Science? Princeton University Press. 23. https://rb.gy/2us27 24. Whewell, W. (1847). The Philosophy of the Inductive Sciences, Founded Upon Their History. London: J.W. Parker. This article appeared in Skeptic magazine 28.4.
"Idealism" Is a Form of "Realism" - Jeremy Dunham
I can hardly claim to be even an “armchair philosopher”; perhaps that is why I fail to find, in the article below, the “arguments” supporting any of three things: the thesis stated in the author’s title, the claims in his first paragraph, and the claims in his second paragraph. Perhaps the article’s title should instead be: “A Brief Introduction to Hegelian Idealism.” Idealism is Realism Thoughts are more real than objects Jeremy Dunham, 04/19/24 Editor’s Preface Idealism is often met with some ridicule; surely the world doesn't just exist in our heads. Jeremy Dunham argues this view of idealism is a misconception. Idealism is a much more realist worldview than we think, and more realist than its alternatives, as it does not deny the existence of the most real things there are: thoughts. What is idealism? Throughout history, in most cases, philosophical idealism is a metaphysical position. The idealist is concerned with reality’s fundamental nature. It is often mistakenly thought to be a reductive theory of the fundamental nature of reality. Many critics have supposed that the idealist tries to reduce reality to the subjective states of individual minds. According to this form of subjective idealism there is no world outside our minds. This view is often associated with the British empiricist Bishop Berkeley (1685-1753). Famously, when told that such idealism was irrefutable, the English author Samuel Johnson (1709-1784) shouted, “I refute it thus” and kicked a stone. The idealist therefore is thought to be the one who denies. They are anti-realist, anti-materialist, anti-naturalist, and certainly anti-stones. This way of thinking about philosophical idealism is misleading. Many kinds of philosophers have both voluntarily and involuntarily been referred to as idealists. However, they are united by an understanding of idealism as a form of realism. Idealism is not a reductive philosophy. It argues for the real existence of elements of reality often dismissed. It is a realism about ideas. Even Berkeley frames his position as a realism. He wrote that “the real things are those things I see, and feel, and perceive by my senses”. 
Berkeley opposed his view to those who regard our rich conscious phenomenal world, the world of tastes, feels, colours, and sounds, as in some way less real than the physical world. For Berkeley, the real stone is the coloured object which we see and feel and that resists us when kicked. If Johnson kicked the stone as hard as I imagine, he entered a world of pain. For Berkeley, this world is the real world. His idealism is ampliative, not reductive. Its aim is to account for the full extent of our reality. Berkeley, then, is not anti-stone. He argued that only idealism can do justice to stones. Although idealism may refer to a doctrine that affirms the reality of our ideas in this subjective sense, there is another sense of the word ‘idea’. This is the Platonic Idea, often referred to as ‘Form’ or ‘Universal’. Idealism shares something in common with the modern philosophical view known as Platonism. But there are significant differences. Platonists defend the existence of universals in addition to particular properties. A Platonist about properties, for example, believes that in addition to the individual things in the world that have redness amongst their properties, such as the red pen in front of me and the red symbols on my computer screen, there is the universal redness. This universal isn’t in front of me. It doesn’t exist anywhere in space or time. It is an abstract object. An abstract object is neither physical nor mental. It is causally inert, fixed, and unchanging. Yet, when we see redness in the world, this redness is an exemplification or instantiation of that universal. Particular red things are united by the fact that they instantiate this universal. Accordingly, the modern Platonist seems to postulate two worlds. One of abstract objects and another in which they are instantiated. However, since the abstract objects are causally inert, the relationship between these worlds is mysterious. 
One of the most important schools of idealism in its history is that known as absolute idealism. It originates with Hegel in Germany, but flourished towards the end of the nineteenth-century with many adherents in the Oxbridge philosophy departments and worldwide. Here, the idea in idealism explicitly refers to Plato’s ideas. However, the absolute idealist attempts to bring the two worlds described above together into one. Consequently, the abstract universal is made concrete. In several places, Plato suggests that things have the properties they do in virtue of participating in the Idea (or universal). A beautiful thing is beautiful in virtue of the fact that it participates in the Idea of beauty. However, this suggests that the particulars stand in a causal relationship with the universals. Ideas are causally responsible for the existence of properties in the concrete world. Perhaps we are wrong to think of Plato’s Ideas as abstract objects after all? Abstract universals are causally inert, so whatever relationship there is between them and their instantiating particulars, it cannot be causal. This is the absolute idealist’s starting point. The universals do not exist outside of our world. They are immanent to it. They are not abstract, rather they are concrete. As Hegel claimed, since the living world is concrete not abstract, those who consider universals as abstract kill the living thing. This kind of idealist argues that our world has the structure or form that it does because of the universals immanent to it. Hegel wrote that “The universal is the essential, true nature of things” and that “through thinking these over we become acquainted with the true nature of things”. Any individual bear, for Hegel, has a universal nature. It’s that aspect of its nature it shares with any other bear and thus enables us to identify it as a bear, even if we’ve never seen this individual bear before. But it is also different to every other bear. 
It has particular features that distinguish it from any other bear and make it an individual. Crucially, in the case of the concrete universal, the particular features that make an individual the individual it is are not external to the universal but rather contained within it. You do not get the individual bear by bundling a bunch of extra particulars to the universal bear. Hegel dedicates much of his famous Phenomenology of Spirit to demonstrating that if you start with properties that are only externally related, it’s impossible to combine them together into the kind of unities that make up our world. A bear isn’t a bundle of qualities. It’s a self-preserving organism for which the parts depend on the whole as much as the whole depends on the parts. Its particular properties, like the thickness of its fur, are different in the winter than in the summer because they are internally related to the organism as a whole and sensitive to its survival needs. What does it mean to say that the concrete universal contains particulars within itself? It means that the individual bear becomes the individual bear not by addition, but by negation. To think the abstract universal, you abstract away all the properties that differentiate one bear from another and the universal is whatever is left. The concrete universal, on the other hand, includes all those differences. The particularisation of the bear is the process by means of which it negates the properties that do not belong to it, leaving behind just those that make it the individual bear. This is the meaning behind Hegel’s often quoted phrase: all determination is negation. ‘The true, infinite universal’, Hegel writes, ‘determines itself… it is creative power as self-referring absolute negativity. As such, it differentiates itself internally’. This points to an important characteristic of the concrete universal: it determines the development of the individual. The universal guides the bear’s ideal development. 
It should develop from a cub to a yearling and then from a young adult to a mature adult. However, it develops in its own particular way. Although all bears develop from cub to yearling, only this individual cub developed in this particular way. The thought is that if you took away from the universal every particular way that the bear might develop leaving us with the abstract universal consisting of just the features all bears share, you’re actually left with nothing. Certainly, you’re left with nothing living. You’ve murdered the living thing. According to the most prominent contemporary metaphysical readings of Hegel, such as Robert Stern’s, the concrete universals should be understood as similar to Aristotelian substance kinds. This means that there are as many concrete universals as there are individuals to instantiate them. Emily is the individual human she is because she is a self-particularising concrete universal. However, the absolute idealists who dominated the British philosophical world towards the end of the nineteenth century believed that all these concrete universals were ultimately interrelated as parts of one all-encompassing concrete universal. For the nineteenth-century British idealist Bernard Bosanquet, the perverse thing about abstract universals is that the wider their extension is, the less there is to them. This is because you get the universal giant panda when you abstract everything particular away from every individual giant panda. Then, to get the universal bear, you must abstract all the features that particularise it as one of its particular species of bear, like giant panda. To get the universal mammal you then abstract all the features that make each animal a mammal rather than a reptile, bird, or fish. The more things that supposedly instantiate a universal, the sparser the features of that universal are. 
On the contrary, the logic of the concrete universal, Bosanquet says, does violence to the 'inverse ratio of intension to extension'. There is not less to the universal animal than there is to the universal bear, rather there is more because the universal animal contains bear within it and a whole host of other animals too. It's the most substantial Noah's ark you can imagine. However, if bears are part of a higher universal of mammals and mammals are part of a higher universal of animals, why stop there? Couldn't there be a universal 'living thing'? And perhaps one above that? For Bosanquet, this is exactly right. We keep going until we end up with just one concrete universal, the absolute Idea, the world as a whole. For Bosanquet this is 'a system of members, such that every member, being ex hypothesi distinct, nevertheless contributes to the unity of the whole in virtue of the peculiarities which constitute its distinctness'. In agreement with the Aristotelian reading of idealism, each individual is the self-particularising of the concrete universal, but, ultimately, it's one and the same concrete universal self-particularising in various different ways. The result of this is that we owe our individuality to a larger whole in which we are all systematically related and which relates us to each other in a fundamental way. Earlier I claimed that many people incorrectly regard idealism as a philosophy that is characterised by the things that it is against. However, here we find something that this kind of idealist really is anti: the idea of fundamental separateness. This has significant ethical implications. The most important absolute idealist of the twenty-first century, Timothy Sprigge (1932-2007), wrote that absolute idealism's main message is that 'we are nearer the core of things when we partly transcend it [our separateness] in cooperative ethical, cultural, and intellectual endeavours and in mutual aid'. 
Idealism is a label that has been used to refer to a huge variety of different philosophical positions. I've focused on metaphysical versions to show how different idealism is from its common misconceptions. Idealism is not a reductive philosophy but an inflationary one. Idealism aims to do justice to the full extent of the characteristics of the world in which we live. Any thorough-going realism, any realism that takes every feature of our world seriously, must be a realism about the idea. Many of the ideas in the article were developed in collaboration with Iain Hamilton Grant and Sean Watson when we wrote our 2011 Idealism book together. I'm also very grateful for many conversations with Robert Stern since then which have improved my understanding of the concrete universal. Thanks also to Joe Saunders and Emily Thomas for their comments on an earlier draft. Jeremy Dunham is an Associate Professor in Philosophy at the University of Durham.
This post was edited 4 times.
|
Reflections on "7 Ancient Philosophy Works That Shaped Western Thought"
|
|
Recommended: 1 |
|
|
0. Foreword
Of the 7 works "that shaped Western thought" introduced by Mr. Hendricks (see this column), I have no impression at all of the Dialogue of Pessimism. As for Poems, On Nature, and Lives and Opinions of the Eminent Philosophers, I must have seen them mentioned or quoted in the course of my reading on ancient Greek thought (Ancient Philosophy-1956, Ancient Greece 1-1979, Ancient Greece 2-1981). I have not read the Discourses of Epictetus, but I did read Marcus Aurelius's Meditations at fifteen (see section 1.1 of that article), so I know a little about Stoic thought. The Republic is somewhat long-winded; I probably read only a third to a half of it, and selectively at that. Of the 7 works, the only one I read to the end is The Nicomachean Ethics, and that was more than 40 years ago.

1. Reflections
1) Apart from the Code of Hammurabi, my knowledge of Babylon more or less came down to the Hanging Gardens, the parable of the Tower of Babel, and some history and anecdotes recorded in the Old Testament. Only on reading this article did I learn that Babylonian culture was also a birthplace of philosophy, and the original pioneer of conducting "discourse" in "dialogue form".
2) Xenophanes' observation that "man creates gods in his own image" is quite wise; his idea that "God is the universe" should have points in common with Confucian and Daoist teachings.
3) My acquaintance with Parmenides' thought comes mainly from the "Parmenides" in Plato's Dialogues. Given my English and my level of philosophy at the time, I probably could not claim even a half-baked understanding. Mr. Hendricks's article attributes to Parmenides the view that "The world we interact with is not the 'true' reality but only a set of appearances." I think it sees much the same as the Consciousness-Only school's doctrine that "all dharmas are consciousness only" and the Diamond Sutra's "all appearances are illusory".
4) In introducing Aristotle's ethics, Mr. Hendricks mentions the three major schools: virtue ethics, consequentialism, and deontology. I personally lean toward consequentialism. At fifteen, drawing on a magazine article introducing consequentialism, I wrote an essay, "On Consequences", advocating it as a code of conduct. A month later I discovered a serious difficulty with that essay: "consequences" divide into long-term and short-term, and into those for the individual and those for the collective. So "consequences" (or "utility") cannot stand as the supreme guiding principle of conduct. I will offer further views when I come to discuss ethics.

2. Conclusion
"Learning without thought is labour lost; thought without learning is perilous."
This post was edited 1 time.
|
11 Philosophers You Don't Know About but Should -- Martha Nussbaum et al.
|
|
Recommended: 1 |
|
|
11 philosophers you don't know about, but should Martha Nussbaum, Carlo Rovelli and other philosophers draw the short list. IAI Editorial, 11/14/23 For World Philosophy Day 2023, eleven leading thinkers nominate philosophers you probably haven't heard of, but you should know about, from Ancient Greece all the way to the present day. Martha Nussbaum, Carlo Rovelli, Cheryl Misak, Peter Adamson, Andrew Bowie, Tommy J. Curry, Emily Thomas, Paul Giladi, Maria Balaska, Sara Heinämaa, Hugo Drochon and Sophie-Grace Chappell put forward their choices. Martha C. Nussbaum Porphyry (c. 234 - c. 305 CE) A Neoplatonic philosopher born in Tyre, and later living at Rome, who wrote in Greek. He wrote the best work in the entire history of Western philosophy before the twentieth century on the intelligence and complex sentience of nonhuman animals and the cogent reasons for not killing and eating them. Called "On Abstaining from Eating Animals," it is a very lengthy work and survives in its entirety. An English translation by Gillian Clark was published in 2000 by Cornell University Press. Martha C. Nussbaum is Distinguished Service Professor of Law and Ethics at the University of Chicago and one of the world’s most celebrated moral philosophers, according to the Financial Times. Carlo Rovelli Anaximander of Miletus, 6th century BCE Anaximander is a giant of human thinking, standing at one of the deep roots of modernity, yet he is very little known. He was the one who achieved the first cosmological revolution, understanding that the sky above us continues below us and the Earth is a big stone floating in the void. This is the discovery of the worldview that will characterize the West for centuries, the birth of cosmology, and the first scientific revolution. It is the discovery that scientific revolutions are possible: in order for us to understand the world, we must be aware that our current worldview may be mistaken, and we can redraw it. 
He is the first geographer: the first to draw geographical maps. The first biologist, contemplating the possibility that living beings evolved over time. He is the first astronomer, making a rational study of the movements of heavenly bodies and seeking to reproduce them with a geometrical model. He is the first to propose two conceptual tools that would prove fundamental to scientific activity: the idea of natural law, guiding the unfolding of events over time and by necessity; and the use of theoretical terms to postulate new entities, hypostases used to make sense of the observable world. He starts the critical tradition that forms the basis of today’s scientific thinking: he follows his master Thales's path while at the same time searching for his master’s mistakes. He created what the Greeks called Peri phuseos istoria (hence “physics”), the “inquiry into nature,” giving birth to a tradition that would form a deep basis for the entire scientific development to come. Even the literary form of this tradition, a treatise in prose, starts with him. Carlo Rovelli is an Italian theoretical physicist and loop quantum gravity pioneer. He is the author of many bestselling books, including The Order of Time, Reality Is Not What It Seems, and Anaximander. Maria Balaska Cora Diamond (1937 - ) Asking ‘What do women want in a moral theory?’, Annette Baier in 1985 included Cora Diamond in her list of women philosophers who spoke 'in a different voice from a standard moral philosopher's voice'. Since then many years have passed and Cora Diamond has been giving us more and more samples of that voice. At a time when so many philosophers still put forward a view of ethics as a series of calculations, mapped on endless versions of trolley problems, it is imperative that we listen to a voice that can teach us differences, as Wittgenstein had hoped for. 
From her early thought-provoking 'Eating Meat and Eating People' to her more recent discussions on the nature of truth in ethics (what should we make of the statement 'Slavery is wrong'?), Diamond keeps bringing philosophy face to face with what she calls 'difficulties of reality'. This is where philosophy begins. Maria Balaska is a research fellow at the University of Hertfordshire and at Åbo Akademi University. She is the author of Wittgenstein and Lacan at the Limit: Meaning and Astonishment, and the editor of Cora Diamond on Ethics. Peter Adamson Fakhr al-Din al-Razi (1150 - 1210 CE) A formidable theologian and philosopher who responded in great detail to the ideas of the more famous Ibn Sina (Avicenna, who died in 1037). Al-Razi is well known in the Islamic world as the author of a massive commentary on the Quran but also wrote many works detailing his disagreements, and occasional agreements, with Ibn Sina. He had innovative things to say on pretty much every area of philosophy, for instance by challenging Ibn Sina on points of logic, developing a new conception of metaphysics, and arguing for a consequentialist ethics. It's especially interesting to see how he weaves together ideas from Avicennan philosophy with doctrines from Islamic theology. He was also very influential, and was quoted for centuries in later Islamic philosophy. Peter Adamson is Professor of Ancient and Medieval Philosophy at King’s College London, and Professor of Philosophy in Late Antiquity and the Islamic World at Ludwig Maximilian University of Munich. He is the host of the podcast History of Philosophy Without Any Gaps. Cheryl Misak Frank Ramsey (1903 – 1930) Frank Ramsey died at the age of 26, too young to have personally influenced generations of students. But his importance is legendary. He was at least Wittgenstein’s equal in philosophy and a major influence on his difficult friend. In philosophy, we have Ramsey Sentences, Ramsey Conditionalization, and much more. 
A fruitful branch of combinatoric mathematics is named after him. He founded two branches of economics and, in figuring out how to measure degrees of belief, laid the basis for contemporary economics and social science. His life was just as exciting. He began his Cambridge undergraduate degree just as the Great War was ending; he was part of the race to be psychoanalyzed by Freudians in Vienna; and he was in with the Bloomsbury set of writers and artists. Virginia Woolf described him ‘as something like a Darwin, broad, thick, powerful, & a great mathematician, & clumsy to boot.’ As if that weren’t enough, he had important things to say about (socialist) politics and the (atheist) meaning of life. Cheryl Misak is University Professor and Professor of Philosophy, University of Toronto, and author of, among other books, Frank Ramsey: A Sheer Excess of Powers. Andrew Bowie Friedrich Daniel Ernst Schleiermacher (1768 - 1834) Schleiermacher is well known as a key founder of modern Protestant theology, but in the Anglophone world he is still not widely known for his philosophical insights into language, which presage key ideas present in Wittgenstein, Habermas, Brandom, and others, let alone his views on aesthetics and ethics. These ideas emerge particularly through his concern with interpretation as fundamental to human existence. He sees interpretation as an ‘art’, which is ‘that for which there admittedly are rules, but the combinatory application of these rules cannot in turn be rule-bound’. This leads him to a rejection of foundational forms of epistemology: the ‘completion’ of episteme is ‘coming to an understanding’, which is ‘an art, techne’. It is ‘an activity’ and episteme and techne ‘are the same’. As such, ‘the art of finding principles of knowledge can be none other than our art of carrying on conversation’. This leads him to a rejection of the analytic/synthetic distinction: the ‘difference therefore just expresses a different state of the formation of concepts’. 
By taking issues present in aesthetics as central to philosophy he offers an alternative approach to modern philosophy which has not been adequately appreciated. Andrew Bowie is Emeritus Professor of Philosophy and German at Royal Holloway, University of London, and author of, most recently, Aesthetic Dimensions of Modern Philosophy. Tommy J. Curry Jean-Louis Vastey (1781 - 1820) Also known as Pompée Valentin, Baron de Vastey, he authored The Colonial System Unveiled in 1814. This book analysed the colonial endeavours of Europe broadly as a system of ideological claims driven by greed. Vastey explained: “When Europeans came to the new world, their first steps were accompanied by crimes on a grand scale, massacres, the destruction of empires, the obliteration of entire nations from the ranks of the living”. Baron de Vastey argued that Europe’s economic and intellectual advances were not based on its superior white civilization but on its unbridled genocide and violence against darker races. Baron de Vastey claimed that Europe’s Enlightenment was rooted in racist pseudo-sciences dedicated to advancing slavery and colonialism against the humanity of African and Indigenous peoples, and in greed, not knowledge. Baron de Vastey’s philosophy explored topics of historiography, ethnology, and Black self-determination that laid the foundation of normative Black political theory and decolonial ethics well into the early 20th century. Tommy J. Curry is an American scholar, author and professor of philosophy, holding a Personal Chair in Africana philosophy and Black male studies at the University of Edinburgh. He is the author, most recently, of Another White Man's Burden: Josiah Royce's Quest for a Philosophy of White Racial Empire. Emily Thomas Constance Naden (1858 – 1889) Constance Naden is well-known as a Victorian poet, but she was also a philosopher. During the 1880s, she drew on the science of her day to controversially defend atheism and materialism. 
I read Naden as arguing that, whilst we only directly know the contents of our own minds, we should believe that outside our minds lies matter. And, as neuroscience shows our brains deeply affect our minds, we should believe our minds are made of matter. To give a flavour of Naden’s work, take her 1882 “Animal Automatism”. It attacks evolutionist T. H. Huxley for advocating a kind of ‘phenomenalism’, on which matter depends for its existence on our minds. Against this, Naden turns Huxley’s own evolutionary theories against him. “Unless his views have recently undergone a marvellous change,” Naden writes, “the earth was in being very long before the appearance of any sentient organism”, and “mind is developed from matter”. Mind, indeed life itself, evolved in the distant past from non-living matter. So matter must exist independently of us. Emily Thomas is Professor of Philosophy at the University of Durham, and author, most recently, of a book on Victoria Welby. Paul Giladi Enrique Dussel (1934 - 2023) Dussel was arguably the most important philosopher from South America born in the 20th century. Though rarely (if at all) featured and discussed in mainstream Anglo-American and continental European philosophical circles, he was a prolific writer and world-leading figure in the articulation and propagation of liberation theology. Dussel is also known for developing the radical and challenging concept of ‘transmodernity’. Throughout his life’s work on this difficult subject, Dussel argued that transmodernity is a way of producing both a lifeworld (a set of values that are formative for developing society, personality, and culture) and economic systems that transcend the Global North-centric conflict between modernity and postmodernity. 
Transmodern thinking and institutional design operate under an ‘ana-dialectic’ or ‘analectic’ logic, which moves critical theory and political economy beyond the Frankfurt School and Foucault, and instead centres the voices, epistemologies, and vocabularies of ideologically-marginalised Global South communities. Dussel famously wrote that a “project of ‘trans’-modernity, a ‘beyond’ that transcends Western modernity (since the West has never adopted it but, rather, has scorned it and valued it as “nothing”) … will have a creative function of great significance in the twenty-first century”. Given how the Global South is currently posing concrete challenges to the Global North’s hegemony, Frantz Fanon’s vision of a ‘critical humanism’ produced by the Global South may start to materialise in no small part due to Dussel’s work. Paul Giladi is a university lecturer in philosophy at the School of Oriental and African Studies, co-founder of the Naturalism, Modernity, Civilization and International Research Network. He is the co-editor of Epistemic Injustice and the Philosophy of Recognition. Sara Heinämaa Aurel Kolnai (1900 - 1973) Aurel Kolnai’s phenomenology of disgust is still the best philosophical analysis of the emotion that we have. Via various readings, it influenced many 20th century theories of revulsion and nausea, but also ingenious artistic presentations of the emotion in painting and cinema. Most importantly, Salvador Dali and, via Dali, also Alfred Hitchcock, drew from Kolnai’s insights. Similarly, Kolnai’s analysis of game-behavior still operates as a secret source of many 21st century contributions in philosophy of games. Its distinctions help us understand what is at issue in ice hockey and boxing but also in virtual games, war games and chess. Finally, Kolnai’s early analysis of Nazism, The War Against the West (1938), illuminates new totalitarian ideologies that are mushrooming in our own time. 
The unifying topic of all Kolnai’s contributions is the question of how willing and feeling combine in human behavior. We would do well to attend to his warnings. Sara Heinämaa is Professor of Philosophy, University of Jyväskylä, and President of the Philosophical Society of Finland. She is the author of Toward A Phenomenology of Sexual Difference. Hugo Drochon Vilfredo Pareto (1848 - 1923) Italian economist, sociologist, and philosopher, Pareto is perhaps best known for the 80/20 rule - the thesis that 80% of wealth will always belong to 20% of the population, an early rendering of the 1% vs the 99%. He is worth revisiting today to help us understand the political era of populism we are still in the midst of. Populism pits a pure people against a corrupt elite. Yet most populist leaders are themselves elites. Trump is part of the financial 1% and was a TV star (The Apprentice) before coming to the White House. Boris Johnson is as establishment as they get: Eton, Oxford. Here, Pareto’s ‘circulation of elites’ thesis can come in handy. Pareto saw politics as the competition between ‘lions’ and ‘foxes’ (both expressions of the elite): lions more inclined towards centralised and faith-based rule, foxes in support of decentralisation and scepticism. Maybe another way to interpret the politics of recent years is to think of it as a circulation from foxes to lions? Hugo Drochon is Associate Professor of Political Theory at the University of Nottingham, and author of the book Nietzsche's Great Politics. Sophie-Grace Chappell Aurel Kolnai (1900 - 1973) I don’t know if people haven’t heard of Aurel Kolnai (1900-1973)—perhaps they have—but he is certainly someone whose work deserves more attention. He was an ethical philosopher who began with experience, not with systematic theories or grands récits, and whose influences included Aquinas, Husserl, G.K. Chesterton, Scheler, and British “common sense” philosophy. 
He himself was an influence on Bernard Williams, David Wiggins, Isaiah Berlin, and Vaclav Havel. As someone outside the “analytic mainstream”, and as a Jewish Catholic philosopher from Austria-Hungary in the time of the Nazis, he had great difficulty throughout his life in securing a conventional job in academic philosophy. But his writings, with their scepticism both about totalitarianism in politics and about what Levinas calls totalisation in philosophy, are of permanent value, and given that he wrote an anti-Nazi reflection with the title The War Against The West, perhaps have a particular resonance at our present unfortunate point in history. Sophie-Grace Chappell is a Professor of Philosophy at the Open University, who writes about ethics, politics, feminism and epistemology. Her books include Ethics Beyond the Limits, Knowing What To Do, and Ethics and Experience. Her most recent book is the edited collection Intuition, Theory, and Anti-Theory in Ethics, published in 2015.
This post was edited 1 time.
|
7 Ancient Philosophy Works That Shaped Western Thought -- Scotty Hendricks
|
|
Recommended: 1 |
|
|
7 philosophy books that shaped Western thought Dive into seven texts that continue to shape Western philosophy, from ancient Mesopotamia to Greece's brightest minds. Scotty Hendricks, 09/11/23 KEY TAKEAWAYS * These seven philosophical texts have shaped the contours of Western thought, delving into questions of justice, existence, and human nature. * While celebrated works such as Plato's Republic offer insights into justice and reality, lesser-known pieces like the Mesopotamian Dialogue of Pessimism illuminate ancient perspectives on life's absurdities. * Collectively, these writings underscore the rich tapestry of ideas that have laid the foundation for contemporary philosophical discourse. Books possess a unique magic: With only ink and paper, they can communicate thoughts from a person who is separated from you by thousands of years and unfathomable cultural space. For some particularly original thinkers, this has allowed mere fragments of text to influence the course of human thought for thousands of years. The following seven influential philosophy books have helped shape the intellectual history of the Western world and, more recently, the entire planet. Dialogue of Pessimism — Unknown author Ancient Greece is the culture most associated with philosophy. However, it is wrong to think that nobody else studied it. Plato himself wrote about Egypt’s long philosophical history, for example. Unfortunately, however, there is precious little surviving philosophy that originated across the rest of the Mediterranean and Mesopotamian worlds. One of the surviving texts is the Dialogue of Pessimism. It exists in two similar forms: one Assyrian and one Babylonian. While parts of both are fragmentary, it is the best-preserved example of Mesopotamian “wisdom texts.” Framed as a dialogue between two characters, an Aristocrat and his Slave, the text consists of the Aristocrat proposing ideas for things to do to the Slave, who provides good reasons for them. 
The Aristocrat then proposes opposing ideas, yet the Slave is just as easily able to defend those. The last few lines reflect on the absurdity of life. Many interpretations of this exist. Some suggest it is a precursor to modern existentialist thought, particularly that of Camus or Søren Kierkegaard. It is easy to see this in the final lines of the text: “Slave, listen to me! Here I am, master, here I am! What then is good? To have my neck and yours broken, or to be thrown into the river, is that good? Who is so tall as to ascend to heaven? Who is so broad as to encompass the entire world? O well, slave! I will kill you and send you first! Yes, but my master would certainly not survive me for three days!” Babylonian thought is so foundational that its influence is often overlooked. We still use Babylonian units to measure time. Their astronomers laid the foundations for both modern astronomy and science itself. And it is speculated that many Greek thinkers, such as Thales, were influenced by Babylonian thought. The Dialogue of Pessimism is thought to have influenced biblical texts, particularly Ecclesiastes, and can be viewed as a forerunner of Plato’s Socratic Dialogues. Poems — Xenophanes The first pre-Socratic philosopher with a considerable amount of extant writing for us to review is Xenophanes. Unlike many of his contemporaries, he wrote many different books and poems. Enough fragments of his work survive to give us something beyond later commentary. While a full picture of his thoughts is impossible to form, what does exist demonstrates why he was one of the most influential pre-Socratic philosophers. Xenophanes is principally known for his theology. He argued that common conceptions about the gods in the Greek world were mistaken. He conceived of God as spherical, lacking human traits, and perhaps directly identifiable with the Universe. 
While there is some debate about his exact wording, he may have been the first Western monotheist, or arguably even pantheist. He mused that humans tend to give their gods familiar traits:

"Ethiopians say that their gods are snub-nosed and black; Thracians that theirs are blue-eyed and red-haired. But if horses or oxen or lions had hands or could draw with their hands and accomplish such works as men, horses would draw the figures of the gods as similar to horses, and the oxen as similar to oxen, and they would make the bodies of the sort which each of them had."

His principal philosophical legacy lies in his approach to epistemology and skepticism. While he argued for the existence of objective truths, he doubted the ability of humans to ascertain them. He noted that our beliefs are limited by our experience and used this as evidence for how little we can truly know: "If god had not made yellow honey, [we] would think that figs were much sweeter." The skeptics of the ancient world claimed him as a critical influence, though recent interpretations read Xenophanes as warning against dogmatism and claims to certainty rather than as a hard-line skeptic. In either case, his writings are among the first to consider how we can claim to know anything at all — a problem people still grapple with today.

On Nature — Parmenides

Parmenides is one of the most important ancient philosophers you've never heard of. Working in Elea, a Greek colony in what is now southern Italy, he wrote a single book that exists only in fragmentary quotations and later authors' commentaries. Through these, he has influenced virtually all subsequent Western philosophy. While the name was probably a later invention — On Nature was a commonly applied title for works describing the Universe — Parmenides' poem is one of the most important texts in Greek philosophy. In it, he invented metaphysics and contributed to logic by laying out his arguments with deductive rigor.
Unlike his predecessors — famed for arguing that the world was made of a single, physical element — Parmenides argued that the world is a single, unchanging substance and that our notions of motion, change, creation, and destruction are all mistaken. The world we interact with is not the "true" reality but only a set of appearances. He also maintained that empty space is impossible, since the idea of "nothing" is contradictory. In his words:

"…the only routes of inquiry that are for thinking: the one, that it is and that it is not possible for it not to be, is the path of Persuasion (for it attends upon Truth), the other, that it is not and that it is right that it not be, this indeed I declare to you to be a path entirely unable to be investigated: For neither can you know what is not (for it is not to be accomplished) nor can you declare it. For the same thing is for thinking and for being."

Parmenides' legacy is vast. His work directly influenced Plato, who argued that the world we engage with is a mere copy of the world of "forms." Through Plato, Parmenides affected nearly all subsequent Western philosophy, and his ideas on time and space continue to inform modern debates.

Discourses of Epictetus — Flavius Arrian

Epictetus was a Stoic philosopher in the Roman Empire of the first and second centuries. Born in what is now Turkey, he was enslaved and at one point owned by Emperor Nero's secretary. While in servitude, he received an education in Stoic philosophy from Musonius Rufus. After gaining his freedom, he was banished from Rome and founded a well-regarded school in Nicopolis, Greece. Known for teaching Stoicism as a way of life rather than as pure theory, he was famous in his own day — some sources suggest more famous than Plato was during his lifetime. Discourses is a series of polished notes from post-lecture discussions, likely written down by his student Flavius Arrian.
While the exact length of the original remains unknown, some sources suggest there were eight books in the complete set; today we have four. These cover a wide range of topics relatable to anyone at any time and present Stoicism as a guide to life rather than a dry philosophy. One of the more famous quotes explains why a person should bother to study at all:

"For on these matters we should not trust the multitude who say that none ought to be educated but the free, but rather to philosophers, who say that the educated alone are free."

Discourses is one of the earliest records we have of the thought of a Stoic teacher. Marcus Aurelius held it in high regard and quoted it in his Meditations. It was also the source for The Handbook, an introduction to Stoic philosophy aimed at popular audiences and also likely penned by Arrian. The book has proven enduringly popular, especially during periodic revivals of Stoic thought.

Republic — Plato

Plato's Republic is arguably the most famous work in philosophy. Framed as a discussion between Socrates and several others about the nature of justice, it provides some of the discipline's most enduring arguments and images. Socrates addresses justice by analogy, using the concept of a "just city" to understand how justice operates in the soul. His perfect city has attracted a great deal of attention over the millennia. Along the way, he considers how acquiring knowledge is like leaving a dark cave, what exactly love is, the difference between reality and the world we engage with, and what would happen if you gave a man a magic ring that turned him invisible. Many lines from Republic have become widely known. One particularly famous example is:
"The punishment which the wise suffer who refuse to take part in the government, is to live under the government of worse men."

The influence of Republic cannot be overstated. It has shaped thinkers from Plato's student Aristotle to those working in the field today, and it reportedly remains the most widely assigned book at American universities. The "utopia" it describes has served as a framework for later works, including Thomas More's eponymous Utopia. It has also been argued, though not proven, that Plato's Ring of Gyges may have influenced Tolkien's One Ring.

The Nicomachean Ethics — Aristotle

One of the most important books on ethics ever written, The Nicomachean Ethics is Aristotle's attempt to determine what the good life is and how to live it. His answer is a system of virtue ethics. His notion of virtue is a mean between two vices: courage, for example, is the midpoint between rashness and cowardice. Exactly what the virtuous act looks like in the moment will vary, meaning that virtue requires serious study, practice, and work. Aristotle admits this and goes so far as to suggest that a good life requires making habits of the virtues so they can be practiced regularly. This is necessary because, as he puts it:

"…one swallow does not make a summer, nor does one day; and so too one day, or a short time, does not make a man blessed and happy."

While other systems long eclipsed Aristotelian ethics, virtue ethics is currently enjoying a major resurgence, as philosophers reconsider it to avoid problems in utilitarian and deontological ethical systems.

Lives and Opinions of the Eminent Philosophers — Diogenes Laërtius

The last inclusion on this list is the strangest. Lives and Opinions of the Eminent Philosophers is a text by Diogenes Laërtius, written in the 3rd century CE, covering the personal lives and ideas of many famous Greek philosophers.
Modern scholars tend to agree that it is not the most reliable source: its author dwells on minor details to the detriment of telling us what his subjects actually thought, and its internal contradictions make clear that parts of it must be wrong. Yet while the book might be of limited value on its own merits, it is crucial given the loss of so many primary ancient texts. Diogenes Laërtius documented the lives and thoughts of Greek philosophers without much critical filtering, offering a relatively unvarnished glimpse into their worlds. Our modern understanding of many Greek philosophers owes much to this text, making it indispensable to the study of ancient thought.