Ethics -- Opening Post for This Column
My interest in reading, and the focus of my thinking, lie in trying to answer two questions: how to "conduct oneself" and how to "treat others." My forays into literature, philosophy, psychology, political science, sociology, cognitive science, cultural studies, and other fields have all been motivated by the attempt to answer these two questions. Over the past twenty-odd years, in essays on various topics, I have expressed my views on "morality" as the context required (I prefer the concept of "social norms"). From now on I will collect the articles related to it in this column. The second article in this column is an old piece from 2002. It discusses a case and, in the course of criticizing another gentleman's essay, clarifies some related concepts and blind spots; it can serve as a basis for discussing and thinking about "morality" or "social norms." It is therefore republished here.
The Basis of Conduct: Human Nature or the Human Condition - Steven Gambardella
Human Nature is a Dangerous Myth Why the Human Condition Is a Better Basis for Our Ethics Steven Gambardella, 12/27/25 Michelangelo’s sculpture, in which David is depicted in a contrapposto (twisting) stance, is still yet conveys energy and even intent. We reach for “human nature” when we want simplicity. A headline breaks. Someone does something cruel, kind, foolish, wise, heroic, inexplicable — you name it. “That’s human nature”, we say. The phrase has the finality of a door closing. Nothing more to ask, nothing more to say, nothing more to know. Human nature explains everything, which is precisely why it explains nothing at all. It’s a concept that pretends to name an essence, something buried behind our actions like a blueprint — something stable and pre-written, waiting to be revealed. But the moment you look at how the phrase is used, the game is up. Violence is human nature. Kindness is human nature. Greed, generosity, self-sacrifice, betrayal, love, apathy — everything gets folded up into the same bag. A concept that can absorb all contradictions has no discriminating power at all. It doesn’t illuminate our understanding of what it is to be human, it is so diffuse as to not explain anything at all. Worse still, it flatters us into thinking we’ve reached bedrock when we’ve barely scratched a surface. The idea of human nature promises certainty. It tells us there is something we are, prior to what we do or what happens to us. An essence we share, and therefore an essence we can appeal to. Human Nature and Dehumanisation But this is exactly where the danger lies. Beyond being a trite and hollow explanation for anything a human being has done, human nature is a weapon to be wielded. Once you believe there is a fixed human essence, it becomes tempting — almost irresistible — to decide what counts as validly human. To declare which behaviours are “natural”, which lives are “proper”, which deviations need correcting. If there is a human nature, then there is a way humans ought to be. And if there is a way humans ought to be, then there will always be someone ready to enforce it. We can trace this dynamic back to Aristotle, who is largely responsible for our understanding of human nature. For Aristotle, human beings are distinct in a number of ways. We are rational — using abstraction to solve generalised problems, we love and form households to nurture families, we are political — we develop complex communities, and we are mimetic — we create likenesses of natural things in art and invention. The problem with this understanding of human nature is that there are so many instances where none of the above apply to individual human beings or they are at least highly variable. On the whole, human beings have some tendencies, but tendencies are not essential. So Aristotle made reason a goal (telos) of being rightly human. This deterministic way of conceiving of nature makes some more human and others less human. From there, moral absolutism is only a short step away. And we know very well that history has never struggled to supply volunteers for that step. Behaviors that contradict acceptable norms become manifestations of inhuman traits. Aristotle himself separated Greeks from the barbarians, who were deemed to be less human and thus their enslavement was morally justified. Even today we associate “barbarism” with a kind of uncultured, unintelligent sub-humanity. In reality, barbarian cultures of the ancient world were perfectly fine and sophisticated. 
Barbarians were simply a fantasy of otherness concocted to fuel xenophobia with the resulting land-grabs, subjugations and slave trade. With Aristotle’s rationalisation, the Greeks made themselves the apex predators of fellow human beings. We see the same pattern repeating through history — racism needs human nature to make sense. Human nature’s implicit telos is used to rob some people of their dignity to justify forms of exploitation and eradication. This is why we should be suspicious of the idea at the outset. Not only because it’s conceptually fuzzy, but because it quietly licenses coercion. The Human Condition The human condition offers a different starting point. Where human nature asks what are we, essentially? the human condition asks what happens to us? The shift sounds subtle, but it changes everything. The human condition doesn’t look for a hidden core behind our lives, it looks at the shared circumstances. And we find ourselves thrown into these circumstances without having chosen them. We are born into lives we had no choice over, even less design. You can’t choose your parents, your nationality, your race. We move from dependency to competence and — if we’re lucky — back again. We age, and we live with the knowledge that we will die, and that everyone we love will too — even those we brought into the world. We’ll be frightened without consenting to it. Desire arrives unannounced. Grief doesn’t ask permission, and joy interrupts us just as abruptly. None of this is an essence, but rather the terrain on which we tread. Even reason belongs in this category. We often talk about rationality as if it were a law we are bound to follow, or a nature we must express. But reason isn’t a commandment written into us; it’s something that has happened to most of us (not all of us). We find ourselves able to reflect, to anticipate consequences, to imagine alternatives. That capacity doesn’t dictate what we must value or how we must live. It merely opens the space in which valuation and choice become possible. The same is true of emotion. We share a broad emotional spectrum not because we are governed by a single emotional nature, but because of what it is like to be self-aware. Anger, love, envy, hope — these aren’t proofs of an essence lurking behind us. They are responses to exposure and vulnerability — to being affected by a world that does not revolve around us. Human nature, as it’s used in most contexts, is a universal — it suggests that all human beings have something in common in essence. In contrast, the human condition is what we’d call a “conditional necessity”. A conditional necessity is where one thing happens because another has. For example, you might say that “when it rains the pavement gets wet”. The human condition, then, is not a universal in the way human nature pretends to be. It doesn’t say this is what you must be. It says this is what you will be subject to. It is conditional, not essential — but the condition is unavoidable. To be human is to find yourself inside these shared constraints, whether you like them or not. Stoicism and the Ethics of Condition And this is precisely why the human condition is a far better foundation for ethical thinking. Ethics grounded in human nature tends to harden into doctrine. If you believe there is a right way to be human, ethics becomes the enforcement arm of that belief. Ethics grounded in the human condition, by contrast, begins with exposure and limitation. 
It starts from the fact that we are finite, fragile, dependent, and aware of all three. It asks not how to conform to an essence, but how to live well within circumstances none of us chose. This kind of ethics is inherently non-totalitarian. It cannot appeal to an absolute model of the human being, because there is none. It can only appeal to shared experience: suffering, mortality, uncertainty, the need for meaning, the capacity for understanding. It remains open, revisable, and responsive — because the condition itself is not static. Each generation encounters it differently and each individual inhabits it uniquely. In Stoicism we have a central tenet which is to “act in accordance with nature”. Reason is the means by which most of us can hope to achieve that, and reason is natural. But that’s not to say there’s a human nature. The ancient Stoics believed that the gods and other beings possessed reason, so reason did not define the human species. Human beings in fact “share” in reason, which exists independently of human beings. Modern Stoics should likewise resist the temptation to spin yarns about essences. The most practical way to conceive of “acting in accordance with nature” is to understand and conform to the structure of our situation. “Nature” is that structure. If you care about outcomes outside of your control, you will experience anxiety; if you interpret gestures and words as insults, you will experience anger; if you integrate possessions into your identity, you will experience grief. In every case here, we are identifying conditions and tracing the necessity that follows. Stoic ethics works with situations, not essences. Very little Stoic writing is concerned with what humans are by nature, but on what happens to human beings. Stoicism is more a technology than a theory — it’s a practical set of concepts to thrive in the situation we are thrown into. Abandoning the idea of human nature doesn’t leave us unmoored. Quite the opposite. It anchors us in what is actually there, rather than in a metaphysical fiction we keep projecting. It allows us to talk about patterns without pretending they are laws, about tendencies without mistaking them for destinies. Most importantly, it restores responsibility. If there is no essence pulling our strings, no nature to hide behind, then what we do matters in a way it otherwise wouldn’t. “That’s just human nature” becomes unavailable as an excuse. We are left instead with the harder, more honest truth — this is what happened, this is how we responded, and we must answer for it. The human condition doesn’t absolve us, it situates us. And that’s a far more interesting place to begin to think about right and wrong. Written by Steven Gambardella The lessons of history & philosophy made clear, concise and relevant to your life. Illustrated with great works of art. Newsletter: https://gambardella.carrd.co Published in The Sophist Lessons from philosophy, history and culture
Lessons from the Death of Socrates -- Stoicminds Channel
I have read about ten of Plato's dialogues as individual works, including the Phaedo; I know a little of Socrates' method, thought, and conduct, and I hold him in great esteem.

I have nothing against "singing a high tune," but the author below pitches his so high that he cracks his voice and blows the roof off. I know nothing about him personally, so I have no grounds for charging him with hypocrisy or insincerity. Still, however you look at it, the piece comes across as rather preachy. At the very least, an "ethic" like his can only be demanded of oneself; to use it to regulate others, or to promote it as a universal rule, is what we call being hidebound.

The Final Secret of Socrates — And Why It Still Matters Today
Stoicminds Channel, 11/07/25

At various points throughout history there have been moments when a single decision has forever changed the course of history. I believe that the account of Socrates' last days is one such moment. To me, the account of Socrates' final days is not simply an anecdote from ancient Greece; rather, it serves as a template for how one should live with integrity, courage, and an unshakeable commitment to the truth.

I think the first time I heard of Socrates' death, what impacted me most was not the poison, nor the trial, nor the political maneuvering surrounding his death. Rather, it was his composure. That he accepted the ramifications of his actions based solely upon his principles. That he did not flee, nor hide, nor compromise. These elements of his behavior prompted me to ponder a question: What is truly meant by living a fearless life?

Socrates' story began much like many other philosophical stories begin: with questions. Questions that challenged the comfortable position of the elite. Questions that inspired young people. Questions that forced the truth out into the light of day despite societal preference for the darkness. In ancient Athens, this made Socrates a threat to those in power.

However, the true turning point was not the charges brought against Socrates, nor his trial, but rather his response. When given the opportunity to escape death, Socrates chose not to take advantage of it. He told his friends that to break the law, no matter how unfair the application of the law, would be a betrayal of the very values that he had spent his entire life teaching. Ultimately, his decision to drink the hemlock became the ultimate embodiment of his philosophy: Virtue is more important than mere survival.

As Socrates drank the hemlock, he illustrated a phenomenon that we rarely witness today: complete congruence between one's beliefs and one's actions. Not one shred of fear. Not one moment of panic. No bitterness. Simply a profound acceptance of the truth he lived. To me, this moment represents a significant example of how we can reflect on our lives today.

Today, we live in a noisy, pressured, and expectation-filled environment. It is simple to compromise, to avoid discomfort, to quiet the voice within us. However, Socrates reminds us that wisdom is not validated by comfort, but rather by crisis.

Socrates' death represented a new beginning. A beginning of courageous stoicism, Greek philosophy, and generations of teachers whose philosophies are influenced by Socrates' choice of truth over safety. Plato, Aristotle, the Stoics — all were influenced by a man who chose the truth above his own safety.

Socrates' final secret is deceptively simple, yet profoundly impactful: How you respond to your final moments is reflective of how you have lived. This lesson transcends time and is applicable universally. While we may not experience a trial similar to that experienced by Socrates, we will experience fear, we will experience doubt, and we will experience moments when our principles will be tested. When that day arrives, recall Socrates — at peace, centered, and unwavering. Not because he desired to die, but because he refused to live a lie.

Written by Stoicminds Channel
https://bit.ly/stoicmindschannel
Thoreau on "Success" ----- Mental Garden
The year I graduated from junior high, I borrowed and read, at the Legislative Yuan library, the Selected Essays of Emerson published by the 協志工業振興會; that was where I first came across Thoreau's name. Around 1980 I read his Walden, though it left no deep impression on me. From books on political science I learned of his claim that citizens have a right to resist (1), and I also read that famous essay of his. I strongly support this principle (see the two posts of 2022/06/03 and 06/04 in that column), and I am posting this article in memory of the thinker.

Note:
1. I used to translate this concept as "civil disobedience" (the citizen's right to disobey).

Henry David Thoreau and the Idea of Success We've Forgotten
What happens when you stop optimizing your life
Mental Garden, 01/09/26

No one told us that success might go unnoticed. We live optimizing a biography we were told we should already be building at our age: professional achievements, income, reputation… All measurable, all comparable to other lives. And yet, the data reveal a paradox: in developed countries, once a certain threshold is reached, more income and more education stop translating into greater well-being (Easterlin et al., 2010). We are trained to keep moving forward, but not necessarily to live better.

This question — what it means to live better — is not new. In the mid-19th century, Henry David Thoreau decided to take it seriously. He withdrew for two years to a small cabin by a lake to observe the world with less noise and to see, firsthand, what remained of life once the superfluous was stripped away. From that experience came his book Walden. A book that is an invitation to live intentionally before life slips away.

If you also feel that existential fatigue, that sense of doing “the right thing” without feeling good about it, this text is for you. We will explore Thoreau’s view of success: an intimate and countercultural perspective, based on coherence with your values, a minimalist life, and a rhythm measured in depth rather than speed. Perhaps, in the end, success is not something you achieve. Perhaps it is simply something you feel, when life, on the inside, finally fits.

1. Success is living in harmony with yourself

“If the day and the night are such that you greet them with joy, and life emits a fragrance like flowers and sweet herbs, if it is more elastic, more starry, more immortal — that is success.” — Henry David Thoreau, Walden

True success is inner coherence. For Thoreau, success has nothing to do with what you produce, accumulate, or display; success lies in how you inhabit your own days. This view aligns with modern findings: scientific studies that followed people over the course of their lives found that alignment between personal values and actual behavior is more strongly related to well-being than achieved accomplishments (Sheldon & Elliot, 1999). If you have to pretend in order to get there, that place is not yours.

“If one listens to the faintest but constant suggestions of his genius — which are certainly true — he will see that they do not lead to extremes or madness; and yet that is the very path along which he advances. No one ever followed his genius and was deceived.” — Henry David Thoreau, Walden

Here appears his idea of the inner “genius.” A little voice impossible to silence, one that reminds us again and again of what we truly love — and which we often ignore due to the demands of what we “should be doing.” That is the voice that should guide us. Studies have shown that goals pursued due to social pressure create more anxiety and less satisfaction than those chosen freely, even when they are achieved (Ryan & Deci, 2000). Success has more to do with not betraying yourself than with standing out. It is an internal matter, not an external one, as Warren Buffett rightly said.

2. Simplify so the essential can emerge

“As you simplify your life, the laws of the universe will appear less complex.” — Henry David Thoreau, Walden

Simplifying makes life breathable.
For Thoreau, complexity is interference — it prevents us from enjoying life. And today we know this well. When the environment becomes too dense (too many options, plans, expectations, goals…), the mind becomes overloaded. At the level of choice, excess has a clear cost. The more options we have, the less satisfied we feel and the more we suffer from analysis paralysis, even when we choose well (Iyengar & Lepper, 2000). That is FOMO.

But there is an even more revealing finding when it comes to pace of life. Kasser and Sheldon (2009) studied what was more strongly related to well-being: earning more money or feeling that one has enough time to live. The result was clear: it was the people who perceived themselves as “time-rich” who fared better. Once basic material needs are met, additional income barely improves well-being, whereas the perception of being rich in time does — and dramatically so. Feeling rich in time is the greatest fortune.

Spend a week reflecting on this idea. Each day, eliminate one source of noise or one area that steals time without giving anything back: an object, a commitment, an app, a habit… Then observe what changes you notice in your pace of life and mental clarity. When there is space in your schedule, space begins to bloom within you.

3. Kindness as an investment that never fails

“All our life is astonishingly moral. There is never an instant’s truce between virtue and vice. Goodness is the only investment that never fails.” — Henry David Thoreau, Walden

Kindness is solid ground; it supports everything else. Thoreau intuited something that is now proven: engaging in altruistic acts of kindness and volunteering reduces negative emotions and provides a strong sense of meaning in life (Thoits & Hewitt, 2001). You do not only improve the outer world — you improve your own life. Helping someone restores a feeling that is hard to obtain in any other way.

Each day, try to perform at least one act of kindness, no matter how small and even if no one sees it. Observe how that gesture returns a sense of coherence and inner calm — the sense that you are building the identity you want for yourself. Nothing you achieve by betraying yourself feels good at the end of the day.

“Why should we be in such desperate haste to succeed and in such desperate enterprises? If a man does not keep pace with his companions, perhaps it is because he hears a different drummer. Let him step to the music which he hears, however measured or far away it may be.” — Henry David Thoreau, Walden

This is the final reflection: the obsession with speed. While writing this text, I could not stop thinking about Perfect Days. In another letter I explained how that film changed my life, precisely because it reminded me — with its calm — that a simple life, coherent with oneself and kind to others, can be a very elevated form of success. Perhaps that was the point all along… Learning to live without haste, without masks, and without unnecessary noise that distracts us.

Want to go deeper? Here are 3 related ideas:
1. Why I chose to live a boring life: Do you live a full life or a busy one?
2. Perfect Days: the film that changed my life
3. No, you’re not missing much: Kill FOMO

Your turn: What has your inner voice been asking of you for a long time that you keep postponing out of inertia or fear?

Quote of the day: “In proportion as you simplify your life, the laws of the universe will appear less complex.” — Henry David Thoreau, Walden

Here I plant ideas. In the newsletter, I make them grow.
Daily insights on self-development, writing, and psychology — straight to your inbox. If you liked this, you’ll love the newsletter. Join 49,000+ readers: Mental Garden. See you in the next letter, take care!

References
1. Easterlin, R. A., McVey, L. A., Switek, M., Sawangfa, O., & Zweig, J. S. (2010). The happiness–income paradox revisited. Proceedings of the National Academy of Sciences, 107(52), 22463–22468. URL
2. Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79(6), 995–1006. URL
3. Kasser, T., & Sheldon, K. M. (2008). Time affluence as a path toward personal happiness and ethical business practice: Empirical evidence from four studies. Journal of Business Ethics, 84(S2), 243–255. URL
4. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. URL
5. Sheldon, K. M., & Elliot, A. J. (1999). Goal striving, need satisfaction, and longitudinal well-being: The self-concordance model. Journal of Personality and Social Psychology, 76(3), 482–497. URL
6. Thoits, P. A., & Hewitt, L. N. (2001). Volunteer work and well-being. Journal of Health and Social Behavior, 42(2), 115. URL
7. Thoreau, H. D. Walden.

Written by Mental Garden
Productivity and psychology insights in useful life lessons
+3M monthly views and +300 articles
Other People's "Idle Talk" and One's Own "Action"
0. Foreword

This piece began simply as a short supplement after reading Mr. Manson's "The Five Levels of 'Fuck It!'" (see this column, 2026/01/09), with the side purpose of introducing, once again, the lyrics of the song "I'd Rather Be Sorry" (see Section 4, "Conclusion," below). Unexpectedly, as the rambling went on, it turned in part into a memoir for venting my own pent-up feelings. Since its main theme is "action," I changed the title and moved it to this column.

This blog has quite a few articles related to this theme, for example: "Camus on Getting Up and Doing" (that column, 2025/12/22), "Dostoevsky: Nothing Is Really the Matter with the World" (that column, 2025/12/04), and "Don't Treat Yourself as the Center of the World" (that column, 2025/09/29).

I am well aware that my thinking these days can hardly meet the standards I used to set for "argumentation" (see that column's opening post); if you finish reading and cannot tell what I am driving at, please bear with me.

1. Speaking from Experience

When I was at Jingmei Elementary School I thought very highly of myself: I graduated second in the whole school (first place went to Peng Chu-chun); only the two of us in our year got into Chien Kuo High School; Liu Yung-chi got into the Normal University Affiliated High School; none of the girls in our class got into Taipei First or Second Girls' High School. After entering Chien Kuo, I found I was nothing special, at best of middling ability. After entering National Taiwan University, I realized I had only ever been "a king among dwarfs"; what little swagger remained was gone before my sophomore year. After that I harbored no great ambitions, and the habit of reaching for things beyond my grasp faded away without my noticing. Looking back, I am fairly satisfied with this life and have nothing weighing on my mind. If there is any regret, it is probably this: that I did not encounter "existentialism" in high school, and that only after thirty did I gradually grasp the essence of that current of thought. Hence a faint, helpless sense that "perhaps there is no longer any perhaps" (see the second-to-last paragraph of Section 2.1 of "A Brief Introduction to Kafka").

2. Theoretical Basis

The views in the two sections above are of course not meant to promote "self-righteousness" or "blind leftist adventurism." On the contrary, I have always held that action should rest, as far as possible, on "thorough deliberation" or on "pooling the wisdom of many" (1). Besides, I am past eighty and have no motive to challenge maxims of the "ignore your elders and you will soon pay for it" variety. Below, I discuss the import of Mr. Manson's piece at the theoretical level.

2.1 Human sociality

Humans are "gregarious animals"; accordingly, to improve our odds of survival, we quickly learn two unwritten rules:

1) From the neighborhood up to society as a whole, "go along to get along" probably ranks among the top three rules of survival (item 4 on that web page).
2) Beyond such "fitting in," for most people "gaining approval" is another key to getting by.

These two points are probably why the great majority of us care so much about what other people say.

2.2 Social constructionism

Our values and opinions are all products of "social construction" during our respective upbringings. Since everyone's formative experiences differ, there is no generally accepted "standard" in the world by which to judge whether other people's values and opinions are better than one's own, or even whether they apply to the situation at hand (2). The fable of the father, the son, and the donkey vividly conveys why "other people's idle talk" is not worth heeding.

2.3 Speech as action

Since "talk" is itself a form of "action," and "action" usually carries a "purpose," we need to recognize that other people's idle talk does not merely express their "opinions" or "values"; it often also serves to protect their own "interests" or "status." Once this is clear, we have even less reason to "care about" the gossip of every Tom, Dick, and Harry.

2.4 Agency

I have not studied theories of "agency" in any depth; here I only borrow the concept to discuss a few issues related to the theme of this essay.

When we act, we usually have to weigh factors of many kinds: interests, principles, values, long-term goals, and so on. When these factors conflict with one another, most of us weigh them and make trade-offs. Ideally, such "weighing" and "trading off" should be rational, consistent, and integrated.

From the concept of the "agent," we must constantly remind ourselves that the one acting is "I"; the basis of the action must therefore be "my" interests, "my" principles, "my" values, and "my" goals.

On the other hand, realizing the principles of "rationality," "consistency," and "integration" presupposes that the interests, principles, values, or goals themselves are rational, consistent, and integrated.

Hence, if a person takes "other people's" opinions or gossip as the basis of his own actions, he cannot satisfy the requirement of "agency."

3. After Reading "The Five Levels of 'Fuck It!'"

The gist of Mr. Manson's piece can be read as "existentialism" for ordinary people (see this column, 2026/01/09). What I mean is that he does not build his case on lofty or abstract concepts such as "being," "time," "nothingness," and "absurdity"; rather, he talks about "action," and about pressing on though thousands stand in the way, from situations we all run into in everyday life. The key point is this:

Each of us living in this world should stand by our own "rights," "values," or "preferences," and make them the basis of our decisions. Other people's views and opinions may be taken under advisement, but we must not let them easily sway our "decisions" or our "behavior."

If you find "Fuck it!" a bit indelicate or too crude, feel free to substitute "Stop second-guessing yourself!" or "Stop caring so much about what other people say!"

4. Conclusion

Once again I recommend the following song, which chimes with the spirit of Mr. Manson's piece; pick the voice/interpretation you like best (3):
I'd rather be sorry (Anita Carter)
I'd rather be sorry (Rita Coolidge & Kris Kristofferson; this version comes with the lyrics)
I'd rather be sorry (Patti Page)
I'd rather be sorry (Ray Price)
I'd rather be sorry (the Statler Brothers)

Three excerpts from the lyrics:

“But I won't spend tomorrow regretting the past For the chances that I didn't take.”
“But I'd gamble whatever tomorrow might bring For the love that I'm living today.”
“But I'd rather be sorry for something I've done Than for something that I didn't do.” (4)

Notes:
1. Bismarck has a famous saying: "Fools learn from their own experience; I would rather grow wise from the experience of others." Bismarck was Prussian, and the remark circulates in many English versions. In any case, other people's hard-won lessons can of course be listened to, and should be; but they should be adopted only after a process of careful inquiry, reflection, and discernment.
2. Not only does every person see things differently; the same person's view also changes with age and with whether the matter touches him personally. For example, in my university days I made free use of the old line "to grow old and not die is to be a pest"; yet after I retired at fifty-seven, when friends asked why, I would often say: "Because my boss at the time didn't understand 'respect for one's elders,' I told him, 'Take this job and shove it.'"
3. I am partial to the Anita Carter and the Statler Brothers versions. If you have already started down the far side of the hill, this song should stir plenty of feeling and leave a long aftertaste. If you are not yet thirty, you may need a few more listens before you slowly come to appreciate: “But I'd rather be sorry for something I've done; Than for something that I didn't do.”
4. After all, a fall that leaves you black and blue heals in a week, three months, at most the better part of a year; the bitter wine of "if only I had ..., perhaps now ..." takes a lifetime to taste.
Personalism (「『人』為主體論」) - Bennett Gilbert
請參見本欄上一篇《引言》。 All that we are The philosophy of personalism inspired Martin Luther King’s dream of a better world. We still need its hopeful ideas today Bennett Gilbert, Edited by Sam Dresser, 07/23/24 0. 前言 -- 金恩博士 On 25 March 1965, the planes out of Montgomery, Alabama were delayed. Thousands waited in the terminal, exhausted and impassioned by the march they had undertaken from Selma in demand of equal rights for Black people. Their leader, Martin Luther King, Jr, waited with them. He later reflected upon what he’d witnessed in that airport in Alabama: As I stood with them and saw white and Negro, nuns and priests, ministers and rabbis, labor organizers, lawyers, doctors, housemaids and shopworkers brimming with vitality and enjoying a rare comradeship, I knew I was seeing a microcosm of the mankind of the future in this moment of luminous and genuine brotherhood. In the faces of the exhausted marchers, King saw the hope that sustained their hard work against the violence and cruelty that they had faced. It is worth asking: why was King moved to try to create a better world? And what sustained his hope? A clue can be found in the PhD dissertation he wrote at Boston University Divinity School in 1955: Only a personal being can be good … Goodness in the true sense of the word is an attribute of personality. The same is true of love. Outside of personality loves loses its meaning … What we love deeply is persons – we love concrete objects, persistent realities, not mere interactions. A process may generate love, but the love is directed primarily not toward the process, but toward the continuing persons who generate that process. King subordinates everything to the flourishing of human persons because goodness in this world has no home other than that of persons. Their wellbeing is what makes the events of our lives and of our collective history worthy of effort and care. In order to demonstrate that we are worth the struggle within and among ourselves, King sought to find love between the races and classes on the basis of philosophical claims about personhood. A decade after his dissertation, he was at the forefront of the Civil Rights movement, marching to Montgomery. Can we still grasp and live the hope that King found? Capitalism, imperialism, nationalism, racism – like iron filings near a magnet, all these historical forces seem to be pulled together today into one fatal, immiserating direction. They teach us hateful ways to behave and promote heinous vices such as pride and greed. Desires flee beyond prudent limits and rush toward disaster. It seems we are not worth all that we used to think we are worth. Can we replace our narcissism with a virtuous self-regard? The philosophical tradition of personalism tells us that we can and do have hope for our future. 1. 「『人』為主體論」溯源 King’s hope came from his understanding of Christianity through the philosophy of personalism. He largely acquired this line of thought during his graduate studies at Boston. His advisors in Divinity School had been students of Borden Parker Bowne (1847-1910), the first philosophy professor at Boston University. Bowne founded Boston personalism, which, with William James’s pragmatism, was one of the two earliest American schools of philosophy. For Bowne, personhood is not the bundles of characteristics we call ‘personality’ (人格). Instead, it is the intelligence that makes reality coherent and meaningful. 
The core of his thought is that personhood is ‘the deepest thing in existence … [with] intellect as the concrete realisation and source’ of being and causality. Bowne says that if we dismiss abstractions because they are static and have no force in the world, what is left is solely the ‘power of action’. Action for Bowne is intelligence understood as a force that activates the concrete reality of things. This reality is not static substance but the ceaseless business of the effect that entities have on other entities. Personhood is the non-material and non-biological power of relations among things, which activates all the processes of the world. Reality itself is thus deeply personal. Without personhood, it would be atomised and inactive – and therefore unintelligible. In Bowne’s view, only the concept of intelligent selves is adequate for explaining how things are constituted and inter-related. Being is nothing without causality; causality is nothing without intelligence. Reality is nothing without idea; idea is nothing without reality. This intimate connection of mind and the world means that nothing can be understood apart from the intelligence that perceives and understands it, replacing inert substances with the ever-flowing labours of our human need to find meaning in life as we encounter it. Bowne’s ideas had many predecessors, from Latin Christianity through Immanuel Kant, using many different theories and concepts, about what a human being is and about the personhood of God in its relation to our own personhood. His forceful argumentation influenced James, who helped found the American philosophical tradition of pragmatism shortly after Bowne’s first books were published and who drew increasingly close to personalism, as did the idealist philosopher Josiah Royce. Bowne was at the centre of this troika of canonical American philosophers at the turn of the 20th century. His teaching rippled out through personalist philosophers on the West Coast and through his students at Boston, notably Edgar S Brightman and Harold DeWulf, both of whom later became teachers of King. 2. 「『人』為主體論」在歐洲 Many other forms of personalism had been developed in Europe in the previous century: theistic and non-theistic, socialist or communitarian and libertarian, abstractly metaphysical and concretely ethical. It is more an approach to thinking than a method, doctrine or school. Personalism always begins its analysis of reality with the person at the centre of consciousness, to which it attaches the most profound worth. Some versions develop this through ontology or metaphysics; some, through theologies associated with most denominations of the Abrahamic religions; and some, through the intersubjective and communitarian nature of human life. My own version makes the structure of moral meaningfulness the first step and first philosophy, as I will explain below. All versions seek an integrated, ethically strong comprehension of personhood as the heart of the life of humankind. 3. 各類 人種提升論及其根源 Though personalism continues to be a field of robust philosophical research, in American academic philosophy after the Second World War it faded under the hegemony of analytic philosophy. But in King’s hands it became forceful as a practice for justice and other moral ends. Its resources have not been exhausted. Careful revision and updating can make it a source of illumination and hope in the circumstances we face a half-century after King. Why should we update personalism, and what useful purpose will this serve? 
Our ideas about the nature of human beings are today undergoing a severe challenge by the new philosophies of transhumanism. Through personalism, we can understand and appreciate our purposes and obligations, as well as the dangers posed by transhumanism. The best known of these transhumanist philosophies is effective altruism (EA). The Centre for Effective Altruism was founded at the University of Oxford in 2012 by Toby Ord and William MacAskill; largely inspired by Peter Singer’s utilitarianism, EA has been an influential movement of our time. As MacAskill defines it in Doing Good Better (2015): Effective altruism is about asking, ‘How can I make the biggest difference I can?’ And using evidence and careful reasoning to try to find an answer. It takes a scientific approach to doing good. This is not as clear cut as it might seem, and it has often led to the uncomfortable conclusion that the accumulation of capital by the wealthy is morally necessary in order to affect the world for the better in the future, largely regardless of the consequences for living persons. Its proponents argue that society does not sufficiently plan for the distant future and fails to store up the wealth that our successors will need to solve social and existential challenges. Other transhumanist theories include longtermism, the idea that we have a moral obligation to provide for the flourishing of successor bioforms and machinic entities in the very distant future, at times regardless of consequences for those now living and their proximate next generations. There is also a kind of rationalism that justifies the moral calculations on which provision for the future instead of for the living is based; cosmism, the vision for exploration and colonisation of other worlds; and transhumanism, which aspires to assemble technologies for the evolution of humankind into successor species or for our replacement by other entities as an inevitable and thereby moral duty. All of these, including the various versions, are sometimes named by the acronym TESCREAL (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, longtermism). Here I refer to these as ‘transhumanism’. The core argument common to these lines of thinking, according to the philosopher Émile Torres writing in 2021, is that: [W]hen one takes the cosmic view, it becomes clear that our civilisation could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. From this point of view, human suffering today matters little by the numbers. Nuclear war, environmental collapse, injustice and oppression, tyranny, and oppression by intelligent technology are mere ripples on the surface of the ocean of history. 4. 闡述「『人』為主體論」 Each element of these transhumanist ideologies regards human personhood as a thing that is expiring and therefore to be replaced. As the longtermist Richard Sutton told the World Artificial Intelligence Conference in 2023: ‘it behooves us [humans] … to bow out … We should not resist succession.’ Their proponents argue for the factual truth of their predictions as a way to try to ensure the realisations of their prophecies. 
According to the theorist Eliezer Yudkowsky, by ‘internalising the lessons of probability theory’ to become ‘perfect Bayesians’, we will have ‘reason in the face of uncertainty’. Such calculations will open a ‘vastly greater space of possibilities than does the term “Homo sapiens”.’ A personalist approach deflates these transhumanist claims. As the historian of science Jessica Riskin has argued, a close examination of the science of artificial intelligence demonstrates that the only intelligence in machines is what people put into them. It is really a sleight-of-hand; there is always a human behind the curtain turning the wizard wheels. As she put it in The New York Review of Books in 2023: Turing’s literary dialogues seem to me to indicate what’s wrong with Turing’s science as an approach to intelligence. They suggest that an authentic humanlike intelligence resides in personhood, in an interlocutor within, not just the superficial appearance of an interlocutor without; that intelligence is a feature of the world and not a figment of the imagination. Longtermists’ notions of future entities lack everything we know about conscious intelligence because they use consciousness or living beings as empty black-box words into which even meaningless notions will fit. Effective altruists dismiss the worth attributable to every human, squashing it by calculations that cannot prescribe moral value, whatever these proponents claim. As we can see in the theories of longtermists such as Nick Bostrom and effective altruists such as Sam Bankman-Fried, instead of working with human ethical values, they work with numerical values, ignoring the massive body of thought from anthropologists such as Webb Keane and from phenomenologists such as Rasmus Dyring, Cheryl Mattingly and Thomas Wentzer showing that values are neither empirical nor quantifiable but nonetheless real forces in human affairs. Transhumanism as a whole assigns agency to alien beings and electronic entities that do not exist – and perhaps are inconceivable. This idea of the agency of the inorganic is one of the key arguments for decentring the human. Consider, for example, salt. Salt affords certain effects in certain conditions: it produces a specific taste, it corrodes other materials, it serves certain functions in organisms. But it is humans who organise these events under the concept of causality. What salt does, it does without consciousness. Consciousness neither starts nor halts its effects, broadly speaking. What sense is there, then, in saying that salt has agency when it is more illuminating to say that it is a cause of effects under some conditions? In ordinary language, we frequently speak of machinery or ideas ‘doing’ things in our lives. But they do nothing. People – human persons – produce, operate and apply their creations. The problem with assigning agency, even informally, to the nonhuman is that this disguises the strength of human control, limited though it is in other respects. It leaves us unaware when a more toxic and cunning human drives to take control because we are busy trying to control the world rather than ourselves. Although some people think that machines or ideas are in control of them, it is really other humans. If we overlook this truth, we accept an untruth – an untruth that condemns us to the mercy of our worst drives and behaviours. When we devalue humanity, we unleash our self-destructive drives, thereby turning reason into destructive irrationality. 
In this way, we are in fact governed by our own human drive for self-destruction. This drive seems to differentiate us from other animals as much as language or historicity do. If we provoke this drive too much, we shall have nowhere else to turn in our struggle to flourish in the natural world. We must, instead, search out our integrity and worth because the alternative is despair. The great and encompassing thing that humans create is our story: human history, the sum of our behaviour and our deeds. We create it with and amid the world around us out of our need to make sense of the world. This need, which builds our moral life, is part of what drives everything we do. It drives the ways we pursue survival, for, without a sense of meaning, we have little will to survive. The pursuit of survival can lead us to meaningfulness but, if it fails to do so, the pursuit itself ceases. We guide ourselves by the stories we choose, for storytelling inhabits all ways of knowing and acting. If the meaning we seek as human persons is overtaken by the story that our self-destructive drive presents in the form of transhumanism, we shall not survive. Persons are worth more than even justice and goodness are, because it is for the sake of persons that we fight for justice and goodness. In the face of possible profound changes, it often seems we must choose between being good and just to ourselves, and being good and just toward nature. The possibility of these radical changes legitimately requires that we profoundly deflate our anthropocentrism, since overblown self-regard has served us poorly. But how do we do this while encouraging our fraught capabilities and appreciating the worth of our flawed species? The kind of personalism that I have developed out of Bowne’s ideas as a response to this and other questions I call moral agency personalism. Moral agency is the activity of judging and choosing between good and evil, right and wrong, justice and injustice. In my view, every thing that has such moral agency is a person, and all persons are moral agents. (The evidence that some nonhuman species make moral choices, sometimes based on memory and history, has been accumulating.) Adding this possibility to personalism formally recognises worth in all persons, nonhuman as well as human. As a belief and a practice, it can ground a virtuous, as opposed to vicious, self-regard that human and nonhuman persons can exercise for themselves and for other persons. This kind of self-regard is distinct from self-importance. We can develop a moral agency personalism that has some of the resources we need in facing the human future. We can find these by altering some fundamental concepts of personalism. These updates include: accepting the fact of nonhuman moral agents or persons; including the body in our understanding of individual lives and of interpersonal relations; and rethinking the idealist ontology in personalism in order to make it an ethics-as-first-philosophy approach, with less emphasis on ontology. The guiding idea of these changes is that, in making moral sense out of experience, personal moral agency enlarges our relations to the whole range of our lives and our care for all beings. Personalism gives us robust resources for identifying our worth and for believing in it. It can encourage us to enhance our worth by our acts in seeking goodness, compassion and justice, and guide us to the richest possible moral life. 
Because our personhood is the home base of our point of view, there is no way forward other than to maintain our integrity while learning what we must in order to thrive. The initial and most basic of these resources we should tap is the strength not to do more harm. We are the ones who deploy transhumanist projects into the only world that sustains us. We are the ones degrading the environment. And we are the only ones who can stop us from doing both. For this, we need to respect ourselves as persons with the power to decide not to continue to harm. This is the minimum we must do. Respecting the moral worth of persons also ignites our capacity to care for others. We respond with aid to calls for help when we learn to recognise moral obligation pertaining to every person, including ourselves, and toward every other person. Furthermore, our humanitarian disposition is frequently a sure way to developing sympathy for the natural world and the life within it. Understanding our personal moral agency enables a wise combination of the two general forces of moral action: power and compassion. Power is the logic by which we carry ideas and lines of thought to fulfilment in activity. Compassion is the potentially unbounded lovingkindness with which we temper power and extend love to widening spheres in our lives. So far as we know, we are the only living beings who can use these forces in moral decision-making. But even if other beings have moral personhood, nothing of the sort relieves us of the moral obligation that our possession of these two capabilities makes it possible to accept and to follow. We possess our history, just as we make it – another resource that is unique to us, so far as we know. History is the engine of self-awareness. As the substance of all that we have done and the actual conditions for the possibility of all that is and will be, historical consciousness serves us as the indispensable locus of reflection and deliberation. No unchanging and antiquated images of ourselves restrain our understanding of history because we create the past anew whenever we study it and reflect on it. It is therefore the great endowment for a renewed humanistic extension of personhood to all humankind and to all life. There are two more resources, pointing to opposite ends of the spectrum of our concerns. The first is that the personalist grasp of what we are worth supports democracy. Democracy has depended on a powerful conception of personal agency and responsibility that cultural and political changes now challenge, in addition to the material issues of human life in the Anthropocene era. These social and natural developments closely reflect each other. Learning to live together is the worthy goal of democracy. But if we are to pursue concord and peace by that road, we must value ourselves, accept our moral nature with its obligations, submit our desires to what the moral worth of every living being requires of us, and work in response to present and patent human suffering and real human joy. At the opposite end, on the cosmic scale, lies another possibility for virtuous human self-regard afforded us by personalism. Simply put, it is this: it might become clear to us that the universe is constitutively pervaded by consciousness, or is conscious in all its parts, or is inside of a super-consciousness. These are versions of the notion of cosmic consciousness called panpsychism. Panpsychism is not just about what we can know or do but about reality itself. 
This appeals to those who have for a moment felt the life of the universe in a small experience and do not want to dismiss what that feeling says and means to them just because it is not empirically verifiable. In our best moments, our lives feel epiphanous. At the same time, however, panpsychism can conflict with the empiricism that is so valuable because it is used to make things that work well for us. And yet other kinds of things, such as erotic love and spirituality, also work well for us and are not conducive to the usual demands of empiricism. For now, it is easy to think that a universal consciousness makes our consciousness unimportant, but there might be ways of getting the opposite outcome. Current advances in physics and biology are starting to support the belief that our consciousness affects reality by working with reality as a consciousness that includes ours. That is, our observing and predicting are inside, not outside, the phenomena we encounter. We are not the crown jewels of creation, but our self-referentiality, our critical awareness and our moral lives form personhood as an important part of a universe that is thereby less alien and cold. If a suitable form of panpsychism is true, human personhood means more to reality than is usually thought. This kind of personalism puts us into a community or, rather, into many communities made up of conscious beings capable of moral responsibility. The moral agency of persons thrives when agents reflectively act in obligation to their individual and collective selves rather than in seeing themselves through the needs of imagined others in the undetermined future. What King observed in Montgomery airport in 1965 was actual persons developing their moral purchase with each other. He saw this as the processes of goodness and love at work in their proper sphere: our common existence. King wanted us not only to recognise the unique and infinite value of every person, but to understand it so powerfully that we would feel ourselves obliged to take the action that this recognition requires. As he wrote, we need only look around us at the struggles for a decent and free life that others wage to sense the profundity of human worth and to see that we all depend on one another. That this has the power to inspire us to fight for change sustained his hopes. We face an urgent present choice. We might prefer that algorithms or despots act for us because our own power of judgment is too explosive to manage. That would suit the purposes of infomaniacal hypercapitalism, which seeks to control consumers rather than to enrich persons. But turning over our judgment to machines does not lock away our power to destroy ourselves and others. We must govern ourselves even as we evolve. This requires an enduring connection to our humanity and a willingness to work hard with one another. This can be successful only if and when we hold fast to all that we are. Bennett Gilbert is adjunct assistant professor of history and philosophy at Portland State University, US. He is the author of A Personalist Philosophy of History (2019) and Power and Compassion: On Moral Force Ethics and Historical Change (forthcoming from Amsterdam University Press), as well as numerous papers, and is co-editor of Ethics and Time in the Philosophy of History: A Cross-Cultural Approach (2023). 相關資訊 History of ideasThinkers and theoriesValues and beliefs
Introduction to "Personalism" (「『人』為主體論」)
I came across this article more than a year ago (see the next post in this column); since it runs to nearly 4,000 words, I was not able to read it through carefully at the time. A few days ago, while tidying up my folders, I skimmed it again and decided it was worth introducing. Although I do not at present have the time to study the ideas its author presents in any depth, it has to go onto my "priority reading list" at the very least.

First, a word on how to render the subject of Professor Gilbert's article, "personalism," in Chinese. Since 「人文主義」 (humanism; or, depending on its content, 「人本論」) is already taken, I have had to translate this ethical school, as I understand it, as 「『人』為主體論」 (roughly, "the person as subject"). Whether this is faithful and clear, I welcome correction.

The original has no section divisions. For ease of reference and discussion, I have, as is my practice, grouped its paragraphs into sections by theme and added subheadings and numbers (including a "Foreword"). If I have misread anything, please point it out.

The article uses many technical philosophical terms; here is a gloss of two of them:
a. effective altruism: 效益利他論 (altruism that takes effectiveness as its guide to action)
b. transhumanism: 人種提升論; "species" here is used in the biological or taxonomic sense, not in the sense of political rhetoric or social conflict.

Finally, three brief comments (to be expanded on at length later):
1) I agree with, and support, the basic idea of personalism; there are, of course, points in Professor Gilbert's article open to debate.
2) Some transhumanists (especially those with professional expertise) use the concept as a foxhole or a fig leaf because they dare not, or will not, face their present responsibilities; they use "transcending the species" to cover up their own gutlessness.
3) I accept methodological individualism and have applied it several times to political analysis (Section 4 of that essay). I therefore agree entirely with Professor Gilbert's comment in the eighth paragraph of Section 4 of his article (1).

Note:
1. “In ordinary language, we frequently speak of machinery or ideas ‘doing’ things in our lives. But they do nothing. People – human persons – produce, operate and apply their creations. The problem with assigning agency, even informally, to the nonhuman is that this disguises the strength of human control, limited though it is in other respects. It leaves us unaware when a more toxic and cunning human drives to take control because we are busy trying to control the world rather than ourselves. …”
Protestant Teaching: A Third Way in the Face of Violence - Paul Ian Clarke
I know nothing about the culture and society of the ancient Hebrew world, so I cannot vouch for Rev. Clarke's "interpretations" below, nor, of course, am I in a position to refute them.

Nietzsche was very learned and trained as a philologist; he must have known ancient Hebrew culture and society in considerable detail. If that understanding of mine holds, then on Rev. Clarke's "interpretation" Nietzsche's critique of Christian ethics would have to be judged one-sided. See the introduction to On the Genealogy of Morality (video, about 42 minutes). I am not equipped to argue so large a question; I only offer a modest, common-sense observation here. This is one of the reasons I find Rev. Clarke's "interpretation" less than persuasive.

On the other hand, reading and interpreting classic texts requires understanding, and working from, the context in which they made their case (period, society, culture, purpose, ...); that is the foundation of the sequence of "careful inquiry," "careful reflection," and "clear discernment."

Turning the Other Cheek Isn’t What You Think
Why Jesus’ words were never about being a doormat — and how they reveal a creative path to justice
Paul Ian Clarke, 08/30/25

We’ve all heard the phrases: “an eye for an eye,” “turn the other cheek,” “go the extra mile.” They’ve slipped into our everyday speech, but do we really understand what they meant in Jesus’ day? And what they might mean for us now?

Let’s start with that old phrase, “an eye for an eye.” To modern ears, it sounds harsh, even barbaric. But when it was first written into the law of Moses 3,000 years ago, it was groundbreaking. Before then, tribal feuds could last for years, spiralling from one insult into bloodshed that consumed entire communities. This law was about limiting revenge: if someone wronged you, the punishment could go this far and no further. It drew a line under endless cycles of violence.

But then Jesus comes along, on that Galilean hillside, and says: “You’ve heard it said, an eye for an eye… but I tell you, do not resist an evildoer. If anyone strikes you on the right cheek, turn the other also.” On the surface, that sounds like submission. Keep quiet, let people walk all over you, be a doormat for the glory of God. But is that really justice?

The False Choice: Violence or Submission

We tend to think our only options are to fight back with violence or give in completely.
* If a child is falsely accused by a teacher, should she rage in the head’s office, or quietly accept a punishment she doesn’t deserve?
* If an employee is unfairly dismissed, does he shout at his boss, or just go home in silence?
* If a woman is trapped in an abusive relationship, does she lash out, or remain a victim of violence?
* If two nations face off on the brink of war, must they either fight, or surrender their moral ground?

Violence or submission. Neither looks much like justice. Yet, Jesus offers a third way, a way that resists injustice without mirroring it, and reclaims dignity without perpetuating harm.

Turning the Other Cheek

In first-century Palestine, striking someone with the back of the right hand wasn’t random violence, it was a way of asserting dominance over someone considered “beneath you,” like a servant or slave. If the victim “turned the other cheek,” the aggressor faced a dilemma. They couldn’t use their left hand (that was reserved for unclean tasks), and to slap with the open hand was to treat the other as an equal. By offering the other cheek, the victim wasn’t submitting, they were demanding dignity. “Hit me again if you must,” the gesture says, “but do it as an equal.”

Hand Over Your Cloak

Jesus then adds: “If anyone sues you for your coat, give them your cloak as well.” In first-century terms, your cloak was what kept you from being exposed, literally. To offer your cloak as well as your coat would leave you naked, but it also flipped the shame onto the accuser. In Jewish law, seeing another’s nakedness brought dishonour. By giving everything, the victim shifted the power dynamic and the oppressor was the one left shamed.

Going the Extra Mile

We often hear “go the extra mile” as a motivational slogan. Work harder, do more. But in Jesus’ time, it had a far sharper edge. Under Roman law, a soldier could force a civilian to carry his gear, but only for one mile. Going further wasn’t an act of generosity; it put the soldier himself at risk of punishment. Suddenly the power shifts. The centurion, once so commanding, is begging you to stop. This is not passive submission. This is creative resistance.

From Jesus to Gandhi to King

History shows us how powerful this “third way” can be. Gandhi drew on these very teachings in his movement for Indian independence. Martin Luther King Jr. embraced them in the Civil Rights Movement. Both refused to return violence with violence. Both refused to be silent victims. Both stood firm with dignity, and in doing so, changed the course of history.

The Justice Jesus Offers

As theologian Tom Wright puts it, “Jesus offers a new sort of justice, a creative, healing, restorative justice.” Turning the other cheek, handing over the cloak, walking the second mile, these aren’t calls to submission. They are acts of bold, nonviolent defiance. They are ways to stand firm against injustice without becoming part of its endless cycle. Maybe that’s what our world needs now more than ever: people brave enough to choose neither violence nor silence, but a third way, the way of restoration.

Written by Paul Ian Clarke
I’m an Anglican Priest and the Curator of Sacred and Secular, who loves exploring how we navigate faith in our modern lives. If you’d like more reflections like this, I write daily for my Sacred & Secular newsletter here. I also write about the strange and beautiful places where faith meets ordinary life. Join my newsletter at www.sacredandsecular.co.uk
Published in Backyard Church
Thoughts on applying a 2000 year old religion to 21st Century life.
Another Artist Who Won't Play Along with Trump -- Aleena Fayaz
Kennedy Center president rebukes performer who called off Christmas Eve show over addition of Trump’s name Aleena Fayaz, CNN, 12/27/25 Kennedy Center president Richard Grenell lambasted a performer’s decision to cancel an annual Christmas Eve jazz concert, following the addition of President Donald Trump’s name to the Washington, DC, arts venue. In a letter, which the Kennedy Center shared a copy of with CNN, Grenell sharply criticizes jazz artist Chuck Redd’s actions and praises Trump for his leadership as the center’s chairman — a role the president’s handpicked board elected him to early in his second term after he ousted his predecessor. “Your decision to withdraw at the last moment—explicitly in response to the Center’s recent renaming, which honors President Trump’s extraordinary efforts to save this national treasure—is classic intolerance and very costly to a non-profit Arts institution,” Grenell, a longtime Trump confidant, wrote to Redd on letterhead bearing the new “Trump Kennedy Center” logo. The Associated Press first reported on the letter. Redd told CNN on Wednesday that he canceled the holiday jazz concert, which he has hosted for nearly two decades, after seeing the board’s move to rename the building last week. “I’ve been performing at the Kennedy Center since the beginning of my career and I was saddened to see this name change,” Redd said. Grenell goes on to fault Redd for financial fallout relating to what he called a “political stunt” and said the center will seek $1 million in damages.
CNN has reached out to Redd for comment on the letter.

Roma Daravi, vice president of public relations at the Kennedy Center, echoed Grenell's sentiment, arguing that Redd "failed to meet the basic duty of a public artist: to perform for all people." "Art is a shared cultural experience meant to unite, not exclude," Daravi said in a statement to CNN. "The Trump Kennedy Center is a true bipartisan institution that welcomes artists and patrons from all backgrounds—great art transcends politics, and America's cultural center remains committed to presenting popular programming that inspires and resonates with all audiences."

The cancellation of the free "Jazz Jam" show followed a vote by the John F. Kennedy Center for the Performing Arts' board of trustees to rename the cultural institution for both the Democratic former president and Trump last week. In the hours after the vote, the center updated its website, and the following day it installed new signage on the facade of the building bearing Trump's name.

The move quickly sparked outrage from the Kennedy family, lawmakers and patrons of the historic center, including a lawsuit from one Democratic congresswoman challenging whether the board has the authority to rename the facility, which Congress designated in 1964 as a memorial to the 35th president.

Prior to the renaming, Trump's overhaul of the center was already raising concerns about lost revenue as both artists and audiences fled for other venues. Artists including Issa Rae, Renée Fleming, Shonda Rhimes and Ben Folds resigned from their leadership roles or canceled events at the space. And Jeffrey Seller, producer of the hit musical "Hamilton," canceled the show's planned run earlier this year.

This holiday season, lagging ticket sales have also impacted "The Nutcracker," historically one of the center's most popular events. Approximately 10,000 seats were sold for this year's production across seven performances, compared with around 15,000 seats in each of the 2021 through 2024 runs, according to internal sales data reviewed by CNN. The Kennedy Center comped (that is, gave away free of charge) approximately five times more tickets for the performances this year than in the past four years, the data showed. And this year's show has fallen about half a million dollars short of its $1.5 million budgeted revenue goal.

CNN's Betsy Klein contributed to this report.
Folk musician refuses to perform at the Trump-Kennedy Center -- Geoff Herbert
This is how a person of principle conducts herself; it is what Mencius meant by: "Riches and honours cannot corrupt him, poverty and lowliness cannot sway him, power and force cannot bend him" (Mencius, Teng Wen Gong II, 7).

Singer cancels Kennedy Center concert after Trump name change
Geoff Herbert, 12/24/25

A singer-songwriter is refusing to perform at the Trump-Kennedy Center after the venue's name changed.

Kristy Lee announced Monday that she canceled her performance at the Washington, D.C., arts institution, scheduled for Jan. 14, 2026. The move came days after President Donald Trump's name was added to the former John F. Kennedy Center for the Performing Arts.

"I don't have much power, and I don't run with the big dogs who do. I'm just a folk singer from Alabama, slinging songs for a living," Lee wrote on social media. "Hell, my songs are really just my own diary set to music. They're not polished or hit songs, but they're my truth and nobody can take that from me. I'm proud of that."

"I believe in the power of truth, and I believe in the power of people. And I'm gonna stand on that side forever. I won't lie to you, canceling shows hurts. This is how I keep the lights on. But losing my integrity would cost me more than any paycheck," she continued. "When American history starts getting treated like something you can ban, erase, rename, or rebrand for somebody else's ego, I can't stand on that stage and sleep right at night. America didn't get built by branding. It got built by people showing up and doing the work. And the folks who carry it don't need their name on it, they just show up. That's all I'm doing here. I'm showing up."

Her post went viral, generating hundreds of thousands of likes, shares and comments. She thanked fans for their support and said she plans to instead perform virtually at home on Jan. 14.

Congress designated the John F. Kennedy Center for the Performing Arts in 1964 as a memorial to JFK, who was assassinated. Last week, the Kennedy Center Board of Trustees — of which Trump is the chair — voted to change its official name to The Donald J. Trump and The John F. Kennedy Memorial Center for the Performing Arts. The move was criticized by the Kennedy family and Democratic lawmakers, who questioned whether the board can legally change the venue's name without Congressional approval.

The Broadway musical "Hamilton" also canceled a performance at the Kennedy Center earlier this year when Trump forced out its leadership and took over as chair of the board of trustees.

The naming controversy also came as Trump hosted the "Kennedy Center Honors," which aired on CBS Tuesday night under its original name. 2025 honorees included the rock band KISS, theater star Michael Crawford, country legend George Strait, actor Sylvester Stallone, and singer Gloria Gaynor. "Tell me what you think of my 'Master of Ceremony' abilities. If really good, would you like me to leave the Presidency in order to make 'hosting' a full time job?" Trump joked on Truth Social before the broadcast.

In a statement, Lee elaborated Tuesday that she believes "efforts to impose political branding" on the former Kennedy Center compromise its original mission to be a nonpartisan national cultural institution, honoring JFK's "belief that the arts are essential to democracy, free expression, and the public good." Lee added that she believes "publicly funded cultural spaces must remain free from political capture, self-promotion, or ideological pressure."

Read the original article on syracuse.com.
The applicability of artificial intelligence in the field of ethics - Elad Uzan
The article below is, for one thing, too long and, for another, too specialised: all told, it invokes Gödel's great name and his "incompleteness theorems" some sixty times. I have read a few papers discussing Gödel and the incompleteness theorems, but unfortunately I have never managed to grasp the theory's key points. Accordingly, I do not consider myself capable of following Dr. Uzan's argument, and I did not spend the time to read the whole article through.
Even so, with the "AI craze" blowing at gale force as the zeitgeist of the day, a discussion of AI's "applicability" to handling issues in ethics should be worth a look. In addition, Dr. Uzan also discusses how AI actually operates, which readers who, like me, are very much laymen in this field may find a useful reference.
The incompleteness of ethics
Many hope that AI will discover ethical truths. But as Gödel shows, deciding what is right will always be our burden
Elad Uzan, edited by Nigel Warburton, 08/05/25
Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. Unlike human judges or policymakers, a machine would not be swayed by personal interests or lapses in reasoning. It does not lie. It does not accept bribes or pleas. It does not weep over hard decisions. Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context – qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place. Still, many have tried to formalise ethics, by treating certain moral claims not as conclusions, but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived, for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well-suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations. But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth’s centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates – assumptions about motion, force or mass – and derive increasingly complex consequences. Isaac Newton’s laws of motion and James Clerk Maxwell’s equations are classic examples: compact, elegant formulations from which wide-ranging predictions about the physical world can be deduced. Ethical theories have a similar structure. Like physical theories, they attempt to describe a domain – in this case, the moral landscape. They aim to answer questions about which actions are right or wrong, and why. 
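To make the idea of a formalised ethical starting point concrete, the utilitarian axiom mentioned above can be written as a small optimisation problem. The notation below is a sketch in standard textbook style, not something taken from Uzan's essay:

\text{Axiom (utility maximisation):}\quad a^{*} \in \arg\max_{a \in A} W(a), \qquad W(a) = \sum_{i=1}^{n} u_i(a),

where A is the set of available actions and u_i(a) is the wellbeing of person i if action a is performed. The more specific principles then fall out mechanically: judging actions by their consequences amounts to comparing W(a) across A, and benefiting the greatest number amounts to preferring actions with a larger aggregate W.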
These theories also diverge and, even when they recommend similar actions, such as giving to charity, they justify them in different ways. Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems. A consequentialist begins with the idea that actions should maximise wellbeing; a deontologist starts from the idea that actions must respect duties or rights. These basic commitments function similarly to their counterparts in physics: they define the structure of moral reasoning within each ethical theory. Just as AI is used in physics to operate within existing theories – for example, to optimise experimental designs or predict the behaviour of complex systems – it can also be used in ethics to extend moral reasoning within a given framework. In physics, AI typically operates within established models rather than proposing new physical laws or conceptual frameworks. It may calculate how multiple forces interact and predict their combined effect on a physical system. Similarly, in ethics, AI does not generate new moral principles but applies existing ones to novel and often intricate situations. It may weigh competing values – fairness, harm minimisation, justice – and assess their combined implications for what action is morally best. The result is not a new moral system, but a deepened application of an existing one, shaped by the same kind of formal reasoning that underlies scientific modelling. But is there an inherent limit to what AI can know about morality? Could there be true ethical propositions that no machine, no matter how advanced, can ever prove? These questions echo a fundamental discovery in mathematical logic, probably the most fundamental insight ever to be proven: Kurt Gödel's incompleteness theorems. They show that any logical system powerful enough to describe arithmetic is either inconsistent or incomplete. In this essay, I argue that this limitation, though mathematical in origin, has deep consequences for ethics, and for how we design AI systems to reason morally.
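Before following the essay into the mechanics of Gödel's theorems, it may help to pin down what "applying existing principles to novel situations" and "weighing competing values" could look like in code. The following is a minimal Python sketch; the value names, weights and per-action scores are hypothetical placeholders, not anything proposed in the essay:

# A minimal sketch: apply a fixed ethical framework to candidate actions.
# The values, weights and per-action scores are illustrative placeholders only.
FRAMEWORK_WEIGHTS = {"fairness": 0.4, "harm_minimisation": 0.4, "justice": 0.2}

def moral_score(scores: dict) -> float:
    """Combine per-value scores (each in 0..1) using the framework's fixed weights."""
    return sum(FRAMEWORK_WEIGHTS[v] * scores[v] for v in FRAMEWORK_WEIGHTS)

candidate_actions = {
    "allocate_by_need":    {"fairness": 0.9, "harm_minimisation": 0.7, "justice": 0.8},
    "allocate_by_lottery": {"fairness": 0.8, "harm_minimisation": 0.5, "justice": 0.6},
}

best = max(candidate_actions, key=lambda a: moral_score(candidate_actions[a]))
print(best)  # the framework's "morally best" action, given these weights

The point of such a sketch is only that the framework itself (which values count, and how much) is an input the computation cannot justify from within; that is exactly where the Gödelian worry introduced above will bite.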
Suppose we design an AI system to model moral decision-making. Like other AI systems – whether predicting stock prices, navigating roads or curating content – it would be programmed to maximise certain predefined objectives. To do so, it must rely on formal, computational logic: either deductive reasoning, which derives conclusions from fixed rules and axioms, or else on probabilistic reasoning, which estimates likelihoods based on patterns in data. In either case, the AI must adopt a mathematical structure for moral evaluation. But Gödel’s incompleteness theorems reveal a fundamental limitation. Gödel showed that any formal system powerful enough to express arithmetic, such as the natural numbers and their operations, cannot be both complete and consistent. If such a system is consistent, there will always be true statements it cannot prove. In particular, as applied to AI, this suggests that any system capable of rich moral reasoning will inevitably have moral blind spots: ethical truths that it cannot derive. Here, ‘true’ refers to truth in the standard interpretation of arithmetic, such as the claim that ‘2 + 2 = 4’, which is true under ordinary mathematical rules. If the system is inconsistent, then it could prove anything at all, including contradictions, rendering it useless as a guide for ethical decisions. Gödel’s incompleteness theorems apply not only to AI, but to any ethical reasoning framed within a formal system. The key difference is that human reasoners can, at least in principle, revise their assumptions, adopt new principles, and rethink the framework itself. AI, by contrast, remains bound by the formal structures it is given, or operates within those it can modify only under predefined constraints. In this way, Gödel’s theorems place a logical boundary on what AI, if built on formal systems, can ever fully prove or validate about morality from within those systems. Most of us first met axioms in school, usually through geometry. One famous example is the parallel postulate, which says that if you pick a point not on a line, you can draw exactly one line through that point that is parallel to the original line. For more than 2,000 years, this seemed self-evident. Yet in the 19th century, mathematicians such as Carl Friedrich Gauss, Nikolai Lobachevsky and János Bolyai showed that it is possible to construct internally consistent geometries in which the parallel postulate does not hold. In some such geometries, no parallel lines exist; in others, infinitely many do. These non-Euclidean geometries shattered the belief that Euclid’s axioms uniquely described space. This discovery raised a deeper worry. If the parallel postulate, long considered self-evident, could be discarded, what about the axioms of arithmetic, which define the natural numbers and the operations of addition and multiplication? On what grounds can we trust that they are free from hidden inconsistencies? Yet with this challenge came a promise. If we could prove that the axioms of arithmetic are consistent, then it would be possible to expand them to develop a consistent set of richer axioms that define the integers, the rational numbers, the real numbers, the complex numbers, and beyond. As the 19th-century mathematician Leopold Kronecker put it: ‘God created the natural numbers; all else is the work of man.’ Proving the consistency of arithmetic would prove the consistency of many important fields of mathematics. 
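For reference, the result the next paragraphs build up to can be stated schematically as follows; this is the standard textbook formulation in my own notation, not a quotation from the essay:

\text{(G1)}\quad \text{If } F \text{ is consistent, effectively axiomatised and can express elementary arithmetic, then there is a sentence } G_F \text{ with } F \nvdash G_F \text{ and } F \nvdash \neg G_F.

\text{(G2)}\quad \text{For the same } F,\; F \nvdash \mathrm{Con}(F), \text{ where } \mathrm{Con}(F) \text{ is the arithmetised sentence asserting that } F \text{ is consistent.}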
The method for proving the consistency of arithmetic was proposed by the mathematician David Hilbert. His approach involved two steps. First, Hilbert argued that, to prove the consistency of a formal system, it must be possible to formulate, within the system’s own symbolic language, a claim equivalent to ‘This system is consistent,’ and then prove that claim using only the system’s own rules of inference. The proof should rely on nothing outside the system, not even the presumed ‘self-evidence’ of its axioms. Second, Hilbert advocated grounding arithmetic in something even more fundamental. This task was undertaken by Bertrand Russell and Alfred North Whitehead in their monumental Principia Mathematica (1910-13). Working in the domain of symbolic logic, a field concerned not with numbers, but with abstract propositions like ‘if x, then y’, they showed that the axioms of arithmetic could be derived as theorems from a smaller set of logical axioms. This left one final challenge: could this set of axioms of symbolic logic, on which arithmetic can be built, prove its own consistency? If it could, Hilbert’s dream would be fulfilled. That hope became the guiding ambition of early 20th-century mathematics. It was within this climate of optimism that Kurt Gödel, a young Austrian logician, introduced a result that would dismantle Hilbert’s vision. In 1931, Gödel published his incompleteness theorems, showing that the very idea of such a fully self-sufficient mathematical system is impossible. Specifically, Gödel showed that if a formal system meets several conditions, it will contain true claims that it cannot prove. It must be complex enough to express arithmetic, include the principle of induction (which allows it to prove general statements by showing they hold for a base case and each successive step), be consistent, and have a decidable set of axioms (meaning it is possible to determine, for any given statement, whether it qualifies as an axiom). Any system that satisfies these conditions, such as the set of logical axioms developed by Russell and Whitehead in Principia Mathematica, will necessarily be incomplete: there will always be statements that are expressible within the system but unprovable from its axioms. Even more strikingly, Gödel showed that such a system can express, but not prove, the claim that it itself is consistent. Gödel’s proof, which I simplify here, relies on two key insights that follow from his arithmetisation of syntax, the powerful idea of associating any sentence of a formal system with a particular natural number, known as its Gödel number. First, any system complex enough to express arithmetic and induction must allow for formulas with free variables, formulas like S(x): ‘x = 10’, whose truth value depends on the value of x. S(x) is true when x is, in fact, 10, and false otherwise. Since every statement in the system has a unique Gödel number, G(S), a formula can refer to its own Gödel number. Specifically, the system can form statements such as S(G(S)): ‘G(S) = 10’, whose truth depends on whether S(x)’s own Gödel number equals 10. Second, in any logical system, a proof of a formula S has a certain structure: starting with axioms, applying inference rules to produce new formulas from those axioms, ultimately deriving S itself. Just like every formula S has a Gödel number G(S), so every proof of S is assigned a Gödel number, by treating the entire sequence of formulas in the proof as one long formula. 
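The "arithmetisation of syntax" just described can be toy-modelled in a few lines of Python. The alphabet and coding scheme below are illustrative stand-ins, not Gödel's original numbering:

# Toy Goedel numbering: map each symbol to a positive integer, then encode a
# formula (a sequence of symbols) as a product of prime powers.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}

def primes(n):
    """Return the first n primes (simple trial division; fine for a toy)."""
    found, cand = [], 2
    while len(found) < n:
        if all(cand % p for p in found):
            found.append(cand)
        cand += 1
    return found

def goedel_number(formula: str) -> int:
    """Encode a formula as p1^c1 * p2^c2 * ..., where ci codes the i-th symbol."""
    codes = [SYMBOLS[ch] for ch in formula]
    g = 1
    for p, c in zip(primes(len(codes)), codes):
        g *= p ** c
    return g

def decode(n: int) -> str:
    """Recover the formula from its number by reading off prime exponents."""
    inverse = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes(64):          # more than enough primes for short formulas
        if n == 1:
            break
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        if exponent == 0:         # not a valid encoding beyond this point
            break
        out.append(inverse[exponent])
    return "".join(out)

g = goedel_number("S(0)=x")
print(g, decode(g))               # the number round-trips back to the formula

Real arithmetisation goes much further, coding entire proof sequences as single numbers, as the next paragraph describes, but the principle is the same: syntax becomes number theory, so a system that can talk about numbers can talk about its own formulas.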
So we can define a proof relation P(x, y), where P(x, y) holds if and only if x is the Gödel number of a proof of S, and y is the Gödel number of S itself. The claim that x encodes a proof of S becomes a statement within the system, namely, P(x, y). Third, building on these ideas, Gödel showed that any formal system capable of expressing arithmetic and the principle of induction can also formulate statements about its own proofs. For example, the system can express statements like: ‘n is not the Gödel number of a proof of formula S’. From this, it can go a step further and express the claim: ‘There is no number n such that n is the Gödel number of a proof of formula S.’ In other words, the system can say that a certain formula S is unprovable within the system. Fourth, Gödel ingeniously constructed a self-referential formula, P, that asserts: ‘There is no number n such that n is the Gödel number of a proof of formula P.’ That is, P says of itself, ‘P is not provable.’ In this way, P is a formal statement that expresses its own unprovability from within the system. It immediately follows that if the formula P were provable within the system, then it would be false, because it asserts that it has no proof. This would mean the system proves a falsehood, and therefore is inconsistent. So if the system is consistent, then P cannot be proved, and therefore P is indeed unprovable. This leads to the conclusion that, in any consistent formal system rich enough to express arithmetic and induction, there will always be true but unprovable statements, most notably, the system’s own claim of consistency. The implications of Gödel’s theorems were both profound and unsettling. They shattered Hilbert’s hope that mathematics could be reduced to a complete, mechanical system of derivation and exposed the inherent limits of formal reasoning. Initially, Gödel’s findings faced resistance, with some mathematicians arguing that his results were less general than they appeared. Yet, as subsequent mathematicians and logicians, most notably John von Neumann, confirmed both their correctness and broad applicability, Gödel’s theorems came to be widely recognised as one of the most significant discoveries in the foundations of mathematics. Gödel’s results have also initiated philosophical debates. The mathematician and physicist Roger Penrose, for example, has argued that they point to a fundamental difference between human cognition and formal algorithmic reasoning. He claims that human consciousness enables us to perceive certain truths – such as those Gödel showed to be unprovable within formal systems – in ways that no algorithmic process can replicate. This suggests, for Penrose, that certain aspects of consciousness may lie beyond the reach of computation. His conclusion parallels that of John Searle’s ‘Chinese Room’ argument, which holds that this is so because algorithms manipulate symbols purely syntactically, without any grasp of their semantic content. Still, the conclusions drawn by Penrose and Searle do not directly follow from Gödel’s theorems. Gödel’s results apply strictly to formal mathematical systems and do not make claims about consciousness or cognition. Whether human minds can recognise unprovable truths as true, or whether machines could ever possess minds capable of such recognition, remains an open philosophical question. 
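Returning to the formal core, the construction walked through above compresses into a few lines of standard notation, where ⌜G⌝ stands for the Gödel number of G (again my summary, not the essay's wording):

\mathrm{Prf}_F(x, y):\; x \text{ codes an } F\text{-proof of the formula with Gödel number } y
\mathrm{Prov}_F(y) := \exists x\, \mathrm{Prf}_F(x, y)
\text{Diagonalisation yields a sentence } G \text{ with } F \vdash \bigl(G \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)\bigr)
\text{If } F \text{ is consistent, then } F \nvdash G, \text{ so } G \text{ is true but unprovable in } F.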
However, Gödel's incompleteness theorems do reveal a deep limitation of algorithmic reasoning, in particular AI, one that concerns not just computation, but moral reasoning itself. Without his theorems, it was at least conceivable that an AI could formalise all moral truths and, in addition, prove them from a consistent set of axioms. But Gödel's work shows that this is impossible. No AI, no matter how sophisticated, could prove all moral truths it can express. The gap between truth claims and provability sets a fundamental boundary on how far formal moral reasoning can go, even for the most powerful machines. This raises two distinct problems for ethics. The first is an ancient one. As Plato suggests in the Euthyphro, morality is not just about doing what is right, but understanding why it is right. Ethical action requires justification, an account grounded in reason. This ideal of rational moral justification has animated much of our ethical thought, but Gödel's theorems suggest that, if moral reasoning is formalised, then there will be moral truths that cannot be proven within those systems. In this way, Gödel did not only undermine Hilbert's vision of proving mathematics consistent; he may also have shaken Plato's hope of fully grounding ethics in reason. The second problem is more practical. Even a high-performing AI may encounter situations in which it cannot justify or explain its recommendations using only the ethical framework it has been given. The concern is not just that AI might act unethically but also that it could not demonstrate that its actions are ethical. This becomes especially urgent when AI is used to guide or justify decisions made by humans. Even a high-performing AI will encounter a boundary beyond which it cannot justify or explain its decisions using only the resources of its own framework. No matter how advanced it becomes, there will be ethical truths it can express, but never prove. The development of modern AI has generally split into two approaches: logic-based AI, which derives knowledge through strict deduction, and large language models (LLMs), which predict meaning from statistical patterns. Both approaches rely on mathematical structures. Formal logic is based on symbolic manipulation and set theory. LLMs are not strictly deductive-logic-based but rather use a combination of statistical inference, pattern recognition, and computational techniques to generate responses. Just as axioms provide a foundation for mathematical reasoning, LLMs rely on statistical relationships in data to approximate logical reasoning. They engage with ethics not by deducing moral truths but by replicating how such debates unfold in language. This is achieved through gradient descent, an algorithm that minimises a loss function by updating weights in the direction that reduces error; in doing so, the model approximates complex functions that map inputs to outputs, allowing it to generalise patterns from vast amounts of data. They do not deduce answers but generate plausible ones, with 'reasoning' emerging from billions of neural network parameters rather than explicit rules. While they primarily function as probabilistic models, predicting text based on statistical patterns, computational logic plays a role in optimisation, rule-based reasoning and certain decision-making processes within neural networks.
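For readers to whom "gradient descent" is just a phrase, here is a bare-bones illustration on a one-parameter least-squares problem. The data, model and learning rate are toy choices with no connection to any particular language model, which updates billions of parameters against a cross-entropy loss, but the update rule has the same shape:

# Minimal gradient descent: fit a single weight w so that w*x approximates y.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (input, target) pairs
w = 0.0                                        # the one parameter to learn
learning_rate = 0.05

for step in range(200):
    # loss(w) = mean of (w*x - y)^2; its derivative is the mean of 2*(w*x - y)*x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                  # step against the gradient
print(round(w, 3))                             # settles near 2.0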
But probability and statistics are themselves formal systems, grounded not only in arithmetic but also in probabilistic axioms, such as those introduced by the Soviet mathematician Andrey Kolmogorov, which govern how the likelihood of complex events is derived, updated with new data, and aggregated across scenarios. Any formal language complex enough to express probabilistic or statistical claims can also express arithmetic and is therefore subject to Gödel’s incompleteness theorems. This means that LLMs inherit Gödelian limitations. Even hybrid systems, such as IBM Watson, OpenAI Codex or DeepMind’s AlphaGo, which combine logical reasoning with probabilistic modelling, remain bound by Gödelian limitations. All rule-based components are constrained by Gödel’s theorems, which show that some true propositions expressible in a system cannot be proven within it. Probabilistic components, for their part, are governed by formal axioms that define how probability distributions are updated, how uncertainties are aggregated, and how conclusions are drawn. They can yield plausible answers, but they cannot justify them beyond the statistical patterns they were trained on. At first glance, the Gödelian limitations on AIs in general and LLMs in particular may seem inconsequential. After all, most ethical systems were never meant to resolve every conceivable moral problem. They were designed to guide specific domains, such as war, law or business, and often rely on principles that are only loosely formalised. If formal models can be developed for specific cases, one might argue that the inability to fully formalise ethics is not especially troubling. Furthermore, Gödel’s incompleteness theorems did not halt the everyday work of mathematicians. Mathematicians continue to search for proofs, even knowing that some true statements may be unprovable. In the same spirit, the fact that some ethical truths may be beyond formal proof should not discourage humans, or AIs, from seeking them, articulating them, and attempting to justify or prove them. But Gödel’s findings were not merely theoretical. They have had practical consequences in mathematics itself. A striking case is the continuum hypothesis, which asks whether there exists a set whose cardinality lies strictly between that of the natural numbers and the real numbers. This question emerged from set theory, the mathematical field dealing with collections of mathematical entities, such as numbers, functions or even other sets. Its most widely accepted axiomatisation, the Zermelo-Fraenkel axioms of set theory with the Axiom of Choice, underlies nearly all modern mathematics. In 1938, Gödel himself showed that the continuum hypothesis cannot be disproven from these axioms, assuming they are consistent. In 1963, Paul Cohen proved the converse: the continuum hypothesis also cannot be proven from the same axioms. This landmark result confirmed that some fundamental mathematical questions lie beyond formal resolution. The same, I argue, applies to ethics. The limits that Gödel revealed in mathematics are not only theoretically relevant to AI ethics; they carry practical importance. First, just as mathematics contains true statements that cannot be proven within its own axioms, there may well be ethical truths that are formally unprovable yet ethically important – the moral equivalents of the continuum hypothesis. These might arise in systems designed to handle difficult trade-offs, like weighing fairness against harm. 
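The probabilistic axioms alluded to at the start of this passage are, in their usual form (my summary, not Uzan's wording):

\text{(K1)}\quad P(E) \ge 0 \text{ for every event } E
\text{(K2)}\quad P(\Omega) = 1 \text{ for the sample space } \Omega
\text{(K3)}\quad P\Bigl(\bigcup_i E_i\Bigr) = \sum_i P(E_i) \text{ for pairwise disjoint events } E_1, E_2, \ldots

Updating on new data is then governed by conditional probability, P(H \mid D) = P(D \mid H)\,P(H)/P(D), which follows from the definition of conditional probability together with these axioms. Since stating and manipulating such quantities already requires arithmetic, the essay's conclusion follows: any system rich enough to reason this way inherits Gödel's limits.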
We cannot foresee when, or even whether, an AI operating within a formal ethical framework will encounter such limits. Just as it took more than 30 years after Gödel’s incompleteness theorems for Cohen to prove the independence of the continuum hypothesis, we cannot predict when, if ever, we will encounter ethical principles that are expressible within an AI’s ethical system yet remain unprovable. Second, Gödel also showed that no sufficiently complex formal system can prove its own consistency. This is especially troubling in ethics, in which it is far from clear that our ethical frameworks are consistent. This is not a limitation unique to AI; humans, too, cannot prove the consistency of the formal systems they construct. But this especially matters for AI because one of its most ambitious promises has been to go beyond human judgment: to reason more clearly, more impartially, and on a greater scale. Gödel’s results set a hard limit on that aspiration. The limitation is structural, not merely technical. Just as Albert Einstein’s theory of relativity places an upper speed limit on the Universe – no matter how advanced our spacecraft, we cannot exceed the speed of light – Gödel’s theorems impose a boundary on formal reasoning: no matter how advanced AI becomes, it cannot escape the incompleteness of the formal system it operates within. Moreover, Gödel’s theorems may constrain practical ethical reasoning in unforeseen ways, much as some important mathematical conjectures have been shown to be unprovable from standard axioms of set theory, or as the speed of light, though unreachable, still imposes real constraints on engineering and astrophysics. For example, as I write this, NASA’s Parker Solar Probe is the fastest human-made object in history, travelling at roughly 430,000 miles (c700,000 km) per hour, just 0.064 per cent of the speed of light. Yet that upper limit remains crucial: the finite speed of light has, for example, shaped the design of space probes, landers and rovers, all of which require at least semi-autonomous operation, since radio signals from Earth take minutes or even hours to arrive. Gödel’s theorems may curtail ethical computation in similarly surprising ways. There is yet another reason why Gödel’s results are especially relevant to AI ethics. Unlike static rule-based systems, advanced AI, particularly large language models and adaptive learning systems, may not only apply a predefined ethical framework, but also revise elements of it over time. A central promise of AI-driven moral reasoning is its ability to refine ethical models through learning, addressing ambiguities and blind spots in human moral judgment. As AI systems evolve, they may attempt to modify their own axioms or parameters in response to new data or feedback. This is especially true of machine learning systems trained on vast and changing datasets, as well as hybrid models that integrate logical reasoning with statistical inference. Yet Gödel’s results reveal a structural limit: if an ethical framework is formalised within a sufficiently expressive formal system, then no consistent set of axioms can prove all true statements expressible within it. To illustrate, consider an AI tasked with upholding justice. It may be programmed with widely accepted ethical principles, for example fairness and harm minimisation. 
While human-made models of justice based on these principles are inevitably overly simplistic, limited by computational constraints and cognitive biases, an AI, in theory, has no such limitations. It can continuously learn from actual human behaviour, refining its understanding and constructing an increasingly nuanced conception of justice, one that weaves together more and more dimensions of human experience. It can even, as noted, change its own axioms. But no matter how much an AI learns, or how it modifies itself, there will always be claims about justice that, while it may be able to model, it will never be able to prove within its own system. More troubling still, AI would be unable to prove that the ethical system it constructs is internally consistent – that it does not, somewhere in its vast web of ethical reasoning, contradict itself – unless it is inconsistent, in which case it can prove anything, including falsehood, such as its own consistency. Ultimately, Gödel’s incompleteness theorems serve as a warning against the notion that AI can achieve perfect ethical reasoning. Just as mathematics will always contain truths that lie beyond formal proof, morality will always contain complexities that defy algorithmic resolution. The question is not simply whether AI can make moral decisions, but whether it can overcome the limitations of any system grounded in predefined logic – limitations that, as Gödel showed, may prevent certain truths from ever being provable within the system, even if they are recognisable as true. While AI ethics has grappled with issues of bias, fairness and interpretability, the deeper challenge remains: can AI recognise the limits of its own ethical reasoning? This challenge may place an insurmountable boundary between artificial and human ethics. The relationship between Gödel’s incompleteness theorems and machine ethics highlights a structural parallel: just as no formal system can be both complete and self-contained, no AI can achieve moral reasoning that is both exhaustive and entirely provable. In a sense, Gödel’s findings extend and complicate the Kantian tradition. Kant argued that knowledge depends on a priori truths, fundamental assumptions that structure our experience of reality. Gödel’s theorems suggest that, even within formal systems built on well-defined axioms, there remain truths that exceed the system’s ability to establish them. If Kant sought to define the limits of reason through necessary preconditions for knowledge, Gödel revealed an intrinsic incompleteness in formal reasoning itself, one that no set of axioms can resolve from within. There will always be moral truths beyond its computational grasp, ethical problems that resist algorithmic resolution. So the deeper problem lies in AI’s inability to recognise the boundaries of its own reasoning framework – its incapacity to know when its moral conclusions rest on incomplete premises, or when a problem lies beyond what its ethical system can formally resolve. While humans also face cognitive and epistemic constraints, we are not bound by a given formal structure. We can invent new axioms, question old ones, or revise our entire framework in light of philosophical insight or ethical deliberation. AI systems, by contrast, can generate or adopt new axioms only if their architecture permits it and, even then, such modifications occur within predefined meta-rules or optimisation goals. They lack the capacity for conceptual reflection that guides human shifts in foundational assumptions. 
Even if a richer formal language, or a richer set of axioms, could prove some previously unprovable truths, no finite set of axioms that satisfies Gödel’s requirements of decidability and consistency can prove all truths expressible in any sufficiently powerful formal system. In that sense, Gödel sets a boundary – not just on what machines can prove, but on what they can ever justify from within a given ethical or logical architecture. One of the great hopes, or fears, of AI is that it may one day evolve beyond the ethical principles initially programmed into it and simulate just such self-questioning. Through machine learning, AI could modify its own ethical framework, generating novel moral insights and uncovering patterns and solutions that human thinkers, constrained by cognitive biases and computational limitations, might overlook. However, this very adaptability introduces a profound risk: an AI’s evolving morality could diverge so radically from human ethics that its decisions become incomprehensible or even morally abhorrent to us. This mirrors certain religious conceptions of ethics. In some theological traditions, divine morality is considered so far beyond human comprehension that it can appear arbitrary or even cruel, a theme central to debates over the problem of evil and divine command theory. A similar challenge arises with AI ethics: as AI systems become increasingly autonomous and self-modifying, their moral decisions may become so opaque and detached from human reasoning that they risk being perceived as unpredictable, inscrutable or even unjust. Yet, while AI may never fully master moral reasoning, it could become a powerful tool for refining human ethical thought. Unlike human decision-making, which is often shaped by bias, intuition or unexamined assumptions, AI has the potential to expose inconsistencies in our ethical reasoning by treating similar cases with formal impartiality. This potential, however, depends on AI’s ability to recognise when cases are morally alike, a task complicated by the fact that AI systems, especially LLMs, may internalise and reproduce the very human biases they are intended to mitigate. When AI delivers a decision that appears morally flawed, it may prompt us to re-examine the principles behind our own judgments. Are we distinguishing between cases for good moral reasons, or are we applying double standards without realising it? AI could help challenge and refine our ethical reasoning, not by offering final answers, but by revealing gaps, contradictions and overlooked assumptions in our moral framework. AI may depart from human moral intuitions in at least two ways: by treating cases we see as similar in divergent ways, or by treating cases we see as different in the same way. In both instances, the underlying question is whether the AI is correctly identifying a morally relevant distinction or similarity, or whether it is merely reflecting irrelevant patterns in its training data. In some cases, the divergence may stem from embedded human biases, such as discriminatory patterns based on race, gender or socioeconomic status. But in others, the AI might uncover ethically significant features that human judgment has historically missed. It could, for instance, discover novel variants of the trolley problem, suggesting that two seemingly equivalent harms differ in morally important ways. In such cases, AI may detect new ethical patterns before human philosophers do. 
The challenge is that we cannot know in advance which kind of departure we are facing. Each surprising moral judgment from AI must be evaluated on its own terms – neither accepted uncritically nor dismissed out of hand. Yet even this openness to novel insights does not free AI from the structural boundaries of formal reasoning. That is the deeper lesson. Gödel’s theorems do not simply show that there are truths machines cannot prove. They show that moral reasoning, like mathematics, is always open-ended, always reaching beyond what can be formally derived. The challenge, then, is not only how to encode ethical reasoning into AI but also how to ensure that its evolving moral framework remains aligned with human values and societal norms. For all its speed, precision and computational power, AI remains incapable of the one thing that makes moral reasoning truly possible: the ability to question not only what is right, but why. Ethics, therefore, must remain a human endeavour, an ongoing and imperfect struggle that no machine will ever fully master. Elad Uzan is a departmental lecturer at the Blavatnik School of Government, as well as a member of the Faculty of Philosophy, University of Oxford. He was awarded the American Philosophical Association’s Baumgardt Memorial Fellowship in 2023 and will present the Baumgardt Memorial Lectures at the Uehiro Centre for Practical Ethics in 2025.