網路城邦 (udn City Forums)
Current Affairs Forum (時事論壇)
Mayor: 胡卜凱
Language and Linguistics – Opening Post

胡卜凱
The importance of language needs no emphasis from me. To date, little is known about the origin and evolution of language, and its influence on the human mind and on thinking puzzles scientists all the more. This city has published a number of research reports on these topics in the past. This column opens by reposting two related articles.

The first discusses the origin of language and the prerequisites for using it (see the second article in this thread); these include aspects such as bodily structure and brain function.

This year marks the 40th anniversary of the release of Microsoft's word-processing software. The second article discusses that software's subtle influence on how ordinary people use language (see the third article in this thread). Read alongside "The Origin and Evolution of the Internet," it shows how technology, as a part of culture, imperceptibly shapes human life.

I plan to find time to consolidate the major issues this city has discussed and reported on. The first step is to collect the titles of the relevant articles, with hyperlinks added, to make them searchable; the second is to set out my views on them systematically.


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7216153
Replies
How Words Secure a Place in the Brain -- Cody Cottier
胡卜凱

Glossary

countervailing: counteracting, offsetting, counterbalancing, compensating
gauntlet: an open challenge (as to a duel, bout, or fight); originally a glove or piece of hand armor
idiosyncratic: peculiar to an individual; distinctive
iteration: a repetition (usually repeating something in order to improve it); in computing, one pass of a repeated process
PNAS: Proceedings of the National Academy of Sciences
primed: induced (a term from psychology); the process is called priming
provocative: (of behavior) provoking; (of clothing) suggestive
quirk: a peculiarity or oddity; an idiosyncrasy


How Words Struggle For Existence in Our Brains

Why are some words forgotten over time? Researchers investigate how words secure their place in the vocabulary of the future.

Cody Cottier, 03/15/24

(Credit: Kittyfly/Shutterstock; see the original page for the image)

Words, like biological species, are engaged in what Charles Darwin called a “struggle for existence.” Some have what it takes, earning the right to roll off the next generation of tongues, while others get consigned to the pages of Merriam-Webster — or become forgotten entirely.

What sets the survivors apart? A recent study in the journal PNAS, by a team of international researchers, found that many successful English words have three crucial traits: they’re acquired early in life, they refer to something concrete, and they’re emotionally arousing. (They offer “sex” and “fight” as two notable examples.)

Playing the “Telephone” Game

(Credit: ESB Professional/Shutterstock; see the original page for the image)

To figure that out, they asked some 12,000 people to retell short stories. That is, they essentially ran a giant game of “telephone,” where one person whispers something to the person beside them, they repeat it to the next, and so on. As every 8-year-old knows, it’s an object lesson in the challenge of preserving a message across multiple retellings. With enough intervening speakers, “The dog chews shoes” easily transforms into “Which blog do you use?”

Yet certain patterns emerge from the inconsistency, revealing which words are likely to make it through the gauntlet. “The beauty of this approach,” says Fritz Breithaupt, a cognitive scientist at Indiana University Bloomington and a lead author of the study, “is that it shows a transition of the original story to something that is more optimally suited to our own cognitive apparatus.”

To make that more concrete, the point is that we shape language (often without realizing it) to fit our mental abilities. We pick and choose from the countless words vying for space in our brains. If one is too hard to understand and recall, or if it just doesn’t grab our attention, then we’re likely to discard it, sometimes in favor of an alternative. You don’t hear “pulchritudinous” much these days, because “beautiful” does a better job.
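To make the selection dynamics more tangible, here is a toy simulation of my own (not the study's actual model): each invented word carries the three traits the study found predictive, scored on made-up 0–1 scales, and a word survives each retelling with probability tied to its combined "cognitive fitness."

```python
import random

random.seed(42)

WORDS = {
    # word: (age_of_acquisition, concreteness, arousal), invented 0-1 scores
    "dog":             (0.1, 0.9, 0.5),
    "beautiful":       (0.3, 0.4, 0.6),
    "pulchritudinous": (0.9, 0.4, 0.2),
    "hypothesis":      (0.8, 0.2, 0.2),
    "fight":           (0.2, 0.7, 0.9),
}

def fitness(traits):
    """Combined fitness: earlier-learned, more concrete, more arousing
    words score higher. Equal weights are an arbitrary assumption."""
    age, concrete, arousal = traits
    return ((1 - age) + concrete + arousal) / 3

def retell(vocab):
    """One link in the 'telephone' chain: each word survives
    with probability equal to its fitness."""
    return {w for w in vocab if random.random() < fitness(WORDS[w])}

vocab = set(WORDS)
for _ in range(5):  # five retellings
    vocab = retell(vocab)
print(sorted(vocab))  # typically "dog" and "fight" outlast "pulchritudinous"
```

Run repeatedly, the chain tends to keep the early, concrete, arousing words and drop the rest, mirroring the pattern the study reports.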

Baby Talk

(Credit: LeManna/Shutterstock; see the original page for the image)

Unsurprisingly, the words we learn first are some of the best adapted to the environment of our minds. As the speakers retold their stories, they quickly reverted to what they’d learned at a young age. (Of course, we don’t all learn the same words at the exact same moment in life, but there are well-established averages).

This suggests that no matter how large our lexicon grows, the sophisticated, technical language of adulthood can’t compete with basic vocabulary. “Baby language is not something we just shed and forget,” Breithaupt says. “It's the core we go back to.”

But if that were the only force at work, we’d all be babbling like infants in the most rudimentary terms, never getting far beyond “mama” and “cookie.” There are countervailing (bet that word wouldn’t last two retellings) pressures, social and cultural processes that nudge language in different directions.

Technological advances, for example, introduce all sorts of new words (or neologisms), like “television” and “Bluetooth.” They can also originate in the never-ending need to express new ideas, as well as reframe old ones that have “lost their ability to engage the listener,” as the researchers put it. And existing but difficult words may take refuge in subcultures that keep them alive for idiosyncratic purposes, like “hypothesis” in scientific communities and “acquittal” in legal circles.

Words We Can Picture

Another common characteristic of words learned late in life is abstractness. “Hypothesis” may have called some image to mind, perhaps glass beakers and white lab coats, but it probably didn’t summon anything as distinct as the word “dog.” Research has shown that when language evokes something accessible to our senses, we find it more interesting and understandable.

Breithaupt is quick to note that we need abstractions. “Truth,” “love” and “kindness” don’t refer to physical entities, but that doesn’t diminish their importance. In fact, every word is to some degree abstracted from reality. “But ultimately,” he says, “the concrete words, the things we can picture, they have an advantage.”

Emotion Rules, Good or Bad

The words that stand the test of time also tend to bring out strong emotions. Interestingly, it doesn’t matter whether those feelings are positive or negative — “sex” and “terrorist” are both provocative in their own way. They jump out at us, almost as if seizing cognitive territory by force.

This fits with psychological studies showing that emotional arousal enhances memory. The idea is that because we can’t possibly remember everything, we preferentially pay attention to and remember whatever is most significant. And what’s arousing tends to be significant, regardless of its positive or negative associations (snake in the grass, mate in the bed).

Words for a Better World

(Credit: PeopleImages.com - Yuri A/Shutterstock; see the original page for the image)

To see if these factors scale up, influencing language change over the course of not just a few retellings but entire human generations, Fritz and his colleagues also analyzed a vast set of text from the past 200 years. Incredibly, the many differences between spoken and written language notwithstanding, they found the same three trends toward words that are acquired early, are concrete, and that arouse feeling.

There was one unexpected discrepancy, though: Both positively and negatively arousing words had a leg up on neutral ones in the “telephone” experiment, but over long spans of time there seems to be a stronger bias toward the positive. As one potential explanation, Breithaupt points to the work of cognitive psychologist and public intellectual Steven Pinker (who coincidentally edited the paper) on the rise of global wellbeing over the past century.

In spite of widespread pessimism about the future of humanity and its home planet, Pinker has argued that the world is in fact a happier, safer, more peaceful place than it’s ever been. “And if that is true,” Breithaupt says, “you would expect language to reflect that somewhat. If you have a lot of suffering and pain and so on, you need the vocabulary that expresses that.”

Agents of Creativity

(Credit: Kittyfly/Shutterstock; see the original page for the image)

If this all makes us sound a bit like mindless vehicles of linguistic evolution, speaking in words we’re cognitively primed to select, Breithaupt has a more optimistic take. He describes his participants’ retellings as powerfully transformative acts: “We actually are agents of change, agents of creativity. Every single one of us.”

In another recent study, published in Scientific Reports in January, he and several colleagues at Indiana University Bloomington found that when you ask the AI system ChatGPT to repeatedly retell a story, it introduces almost no novelty. Humans, by contrast, replace as much as 60 percent of the words and concepts with each iteration.
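A replacement rate like the one described can be approximated with a simple set comparison. This sketch is my own (the sentences are made up, and the metric is a crude proxy, not the study's measure): any distinct word of the original that fails to reappear in the retelling counts as replaced.

```python
def novelty(original: str, retelling: str) -> float:
    """Fraction of the original's distinct words that do not
    reappear in the retelling -- a rough proxy for how much
    a reteller replaced."""
    a = set(original.lower().split())
    b = set(retelling.lower().split())
    if not a:
        return 0.0
    return len(a - b) / len(a)

v1 = "the dog chews shoes in the yard"
v2 = "a dog was chewing shoes outside"
print(f"{novelty(v1, v2):.0%} of words replaced")  # prints: 67% of words replaced
```

A fuller measure would stem words ("chews"/"chewing") and track concepts rather than surface forms, which is why this toy overstates novelty.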

So, amid our collective anxiety over the mushrooming capabilities of artificial intelligence, Breithaupt believes we can take solace in the quirks of human cognition and the innovations they enable. “I think we don't have to be completely afraid of ChatGPT,” he says, “because it will not take that away from us, at least not in an easy, direct way.”

Related reading

How Learning a Language Changes Your Brain
Words Seem to Lose Their Meaning When We Repeat Them Over and Over. Why?
The Biology of Baby Talk

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7225652
When Did the Egyptians Start Using Hieroglyphs? - Owen Jarus
胡卜凱

When did the Egyptians start using hieroglyphs?

Owen Jarus, 02/13/24

The earliest known Egyptian hieroglyphic writings appear fully formed, either because they were developed on perishable, now-lost materials or because they were quickly "invented by an unknown genius."

Ancient Egyptian hieroglyphs carved into sandstone at the Temple of Kom Ombo in Aswan. (Image credit: skaman306 via Getty Images; see the original page for the image)

For thousands of years, the ancient Egyptians inscribed hieroglyphs on tombs, papyri and, in some cases, pyramids.

But when were hieroglyphs invented? Research shows that they emerged about 5,200 years ago, at around the same time another writing system, called cuneiform, was being invented in Mesopotamia.

"German excavations at Abydos in Egypt have revealed hieroglyphic inscriptions from [circa] 3200 BC," James Allen, a professor emeritus of Egyptology at Brown University, told Live Science in an email. Similarly, Ludwig Morenz, an Egyptology professor at the University of Bonn in Germany, told Live Science in an email that Egyptian hieroglyphs were created "around 3300/3200 BC."

Allen said "the hieroglyphic system first appears pretty much fully formed, either because its beginnings were inscribed on perishable materials [that have not survived] or because it was invented by an unknown genius."

Why were hieroglyphs invented?

Why hieroglyphs were invented is a source of debate, Marc Van De Mieroop, a history professor at Columbia University, wrote in the second edition of his book "A History of Ancient Egypt" (John Wiley & Sons Ltd., 2021).

At the time hieroglyphs were invented, Egypt was unifying into a single state, and administration may have been a reason for their invention. It "is logical that a state of Egypt's size and complexity required a flexible system of accounting that could keep information on the nature of goods, their quantities, provenance and destination, the people in charge of them and the date of transaction," Van De Mieroop wrote in his book.

Another theory is that hieroglyphs were invented to help glorify gods and the king, Van De Mieroop wrote, noting that some early carvings showing kings contain hieroglyphs. "The glorification of the king may have been one of the driving forces in the script's invention," he wrote.

What is the oldest living writing system?

Cuneiform script on a clay tablet that dates to the first millennium B.C. (Image credit: benedek via Getty Images)

The Egyptians created hieroglyphs at around the same time as cuneiform was invented in Mesopotamia. Which system was invented first is a matter of debate among scholars.

Allen argues that Egyptian hieroglyphs were invented first, saying that the earliest cuneiform inscriptions date to around 2900 B.C. However, many scholars disagree.

For instance, Orly Goldwasser, an Egyptology professor at The Hebrew University of Jerusalem, wrote that cuneiform was likely developed first. "Based on the evidence at hand, it seems most likely that writing was born in Mesopotamia," Goldwasser wrote in a chapter of the book "Pharaoh's Land and Beyond: Ancient Egypt and Its Neighbors" (Oxford University Press, 2017).

In either case, cuneiform and hieroglyphs are quite different, and the two systems appear to have developed independently of each other. "Cuneiform and hieroglyphic are too dissimilar for the one to have influenced the other directly," Allen said. Cuneiform signs "represent whole words or syllables," while hieroglyphs "represent words or individual consonants" and don't represent vowels, Allen noted.

While there was some contact between the people of Egypt and Mesopotamia, hieroglyphs were developed within the Nile Valley, Morenz said. Goldwasser wrote that while the two systems are quite different, it's possible that the invention of cuneiform in Mesopotamia helped inspire Egyptians to invent hieroglyphs.

The last known Egyptian hieroglyphic inscription dates to A.D. 394, according to the University of Memphis in Tennessee. By that point, other writing systems, such as Coptic, were being used in Egypt. Knowledge of how to read and write hieroglyphs was lost, and they were not read again until their decipherment in the 19th century.

Related: 

How old is ancient Egypt?
How old are the Egyptian pyramids?
Was ancient Egypt a desert?
Why did ancient Egyptian pharaohs stop building pyramids?

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7222937
A New Method and New Theory in Research on the Origins of the Indo-European Language Family -- Kurt Kleiner
胡卜凱

Two questions:

1) Is Chinese related to the Indo-European language family?
2) Can this method be used to study the origins and development of the various Chinese dialects?


A new look at our linguistic roots

Linguists and archaeologists have argued for decades about where, and when, the first Indo-European languages were spoken, and what kind of lives those first speakers led. A controversial new analytic technique offers a fresh answer.

Kurt Kleiner, 02/12/24

Almost half of all people in the world today speak an Indo-European language, one whose origins go back thousands of years to a single mother tongue. Languages as different as English, Russian, Hindustani, Latin and Sanskrit can all be traced back to this ancestral language.

Over the last couple of hundred years, linguists have figured out a lot about that first Indo-European language, including many of the words it used and some of the grammatical rules that governed it. Along the way, they’ve come up with theories about who its original speakers were, where and how they lived, and how their language spread so widely.


Most linguists think that those speakers were nomadic herders who lived on the steppes of Ukraine and western Russia about 6,000 years ago. Yet a minority put the origin 2,000 to 3,000 years before that, with a community of farmers in Anatolia, in the area of modern-day Turkey. Now a new analysis, using techniques borrowed from evolutionary biology, has come down in favor of the latter, albeit with an important later role for the steppes.

The computational technique used in the new analysis is hotly disputed among linguists. But its proponents say it promises to bring more quantitative rigor to the field, and could possibly push key dates further into the past, much as radiocarbon dating did in the field of archaeology.

“I think that linguistics might be in for a sort of equivalent of the radiocarbon revolution,” says Paul Heggarty, a historical linguist at the Pontificia Universidad Católica del Perú in Lima, and a coauthor of the new study; he described the computational approach in the 2021 Annual Review of Linguistics.

Revealing dead languages

To understand what’s going on, it helps to look at how the study of Indo-European languages developed.

During the 16th century, as travel and trade put Europeans in touch with more foreign languages, scholars became increasingly interested in how languages related to one another, and where they might have originated.

In the late 18th century, Sir William Jones, a British judge in India, noticed similarities in vocabulary and grammar in Sanskrit, Latin and Greek that couldn’t have been coincidental.

Historical linguists have reconstructed much of the grammar and vocabulary of the ancestor to Indo-European languages, to the point where we can piece together what conversations might have sounded like. Turn on closed captions to see a translation of the reconstruction presented here.

(See the original page for the video. Credit: AB ALPHA BETA)

For instance, the English word “father” is “pitar” in Sanskrit and “pater” in Latin and Greek. “Brother” is “bhratar” in Sanskrit and “frater” in Latin. Although Jones wasn’t actually the first to notice the similarities, his pronouncement that there must be a common origin helped to spur on a movement to compare languages and trace their relationships.

A major advance came in 1822, when Jacob Grimm formulated what would later be called Grimm’s Law. Grimm is best known today as one half of the Brothers Grimm, who collected and published Grimm’s Fairy Tales. But in addition to being a folklorist, Jacob Grimm was also an important linguist.

Grimm showed that as languages developed, sounds changed in regular ways that could help make sense of how languages were related. For instance, the Indo-European word for “two” was “dwo.” But “dwo” was one of a number of words whose initial “d” changed to “t” as it passed into the common ancestor of English and German. Later, the “t” sound became “ts” in an ancestor to modern German. So the Indo-European word “dwo” became “two” in English and “zwei” (pronounced “tsvai”) in modern German. Other words starting with the “d” sound behaved similarly. Scholars discovered a lot of these sound shift patterns, each obeying different rules, as one language gave birth to another.
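The chain of shifts above can be sketched as ordered rewrite rules. This toy of mine operates on spelling rather than on phonemes in context, so it is only an illustration of the idea that each stage applies its own regular substitutions:

```python
def apply_shifts(word: str, shifts: list[tuple[str, str]]) -> str:
    """Apply each (old, new) substitution in order, modeling a
    sequence of regular sound changes."""
    for old, new in shifts:
        word = word.replace(old, new)
    return word

PIE = "dwo"  # reconstructed Proto-Indo-European "two"

# d -> t on the way into the common ancestor of English and German (Grimm's Law)
to_proto_germanic = [("d", "t")]
# later t -> ts, spelled "z" in German; the vowel step is a spelling gloss,
# not a sound law
to_german = [("t", "z"), ("wo", "wei")]

proto_germanic = apply_shifts(PIE, to_proto_germanic)  # "two"
german = apply_shifts(proto_germanic, to_german)       # "zwei"
print(proto_germanic, german)  # prints: two zwei
```

Because each rule applies across the whole vocabulary, other d-initial roots fed through the same lists would shift the same way, which is exactly the regularity linguists exploit.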

Together with these sound shifts, linguists also study how words are formed, such as the way that English adds an “s” to make a word plural. They also look at how words are arranged, such as the way that English puts subjects before verbs and verbs before objects. And, of course, they look at shared vocabulary. By comparing all these features of different languages, linguists are able to map how languages descended from one another, and to place them in family trees that show their relationships.

Grimm’s Law describes the regularity of how sounds change in languages. The chart shows how some sounds from proto-Indo-European shifted in Germanic languages, such as English, while remaining the same in non-Germanic languages, such as French.

Grimm’s Law (see the original page for the chart)

Today, linguists are in broad agreement on the basics of Indo-European language groupings and how they are related to one another. They agree that the original language, which they call Proto-Indo-European, split into 10 or 11 main branches, two of which are now extinct.

They also generally agree on where to put languages within the main branches. For instance, they know that the Italic branch gave us Latin, which itself developed into the Romance languages such as French, Spanish and Italian. The Germanic branch developed into languages including German, Dutch and English. And the Indo-Iranian branch resulted in languages like Hindi, Bengali, Persian and Kurdish.

Ancestral lifestyles

By tracing changes in language backwards towards their sources, linguists have deduced many of the basic characteristics of the original Proto-Indo-European language, including some vocabulary, how words were formed and some idea of how they were pronounced. And many linguists think they have even found hints of how the first Proto-Indo-Europeans might have lived.

For example, the Proto-Indo-European language had a word for axle, two words for wheel, a word for harness-pole and a verb that meant “to transport by vehicle.” Archaeologists know that wheel and axle technology was invented about 6,000 years ago, which suggests that Proto-Indo-European can’t be any older than that. If it was older — in other words, if it had started to split into other languages before it had words for axles and harness-poles — then its daughter languages would have had to invent their own words for these things. The fact that they use the same words suggests that the split started after these technologies were developed.

Other words in the language suggest that the first Indo-European speakers were probably familiar with horses, cattle- and sheepherding, dairy, wool, honey and mead. They seem to have had chiefs (the word “reg” gave us our English word “regal”) and may have been patriarchal (they had words for “in-laws” that applied only to the bride’s side of the family, suggesting that the husband’s family was considered primary).

Many linguists think the vocabulary paints a picture of pastoralists (nomadic herders) who used horses and wagons. Combined with genetic evidence that people dispersed rapidly out of the steppes into central Europe about 5,000 years ago, they conclude that Indo-European languages moved out of the steppes and spread with the pastoralists.

According to one theory, Indo-European languages might have been spread by pastoralists traveling in wagons like this Early Bronze Age copper model from Anatolia.

(See the original page for the image. Credit: Edith Perry Chapman Fund, 1966 / public domain)

In 1987, though, the Cambridge archaeologist Colin Renfrew rejected a pastoralist origin for Indo-European. Renfrew reasoned that the dramatic spread of Indo-European languages must have required a bigger push than could be provided by contact with ragtag groups of nomadic herders. For a major shift in which a single language grew to dominate a region stretching from Ireland to India, Renfrew argued, you needed a more powerful force.

He found it in the spread of farming. Simply put, as people took up farming their population grew more quickly than that of their hunting and gathering neighbors. As farming expanded, the languages moved with it. Archaeological evidence shows that farming had begun moving out of Anatolia about 3,000 years earlier than the spread of pastoralists out of the steppe. So, Renfrew concluded, farmers were the real force behind the spread of Indo-European. By the time the pastoralists started migrating, the farmers they met were already speaking an Indo-European language.

Renfrew largely dismissed the linguistic reasoning that the steppe hypothesis was based on. The commonality of words for wheel, wagon-pole and the like, he said, can be explained by parallel shifts in which different languages draw on the same base meaning when creating a new word.

For instance, the original meaning of the Proto-Indo-European word for wheel seems to have meant something like circle, or turn. Different languages might have inherited that basic meaning and drawn on it independently when creating their own words for wheel.

Likewise, if the word “thill” for wagon-pole had a more general meaning of stick or pole, it could have been adopted to mean wagon-pole by more than one language.

Searching for rigor

Arguments like these led a few linguists to try a more quantitative approach to reconstructing the history of Indo-European. For this, they borrowed a technique often used in biology to build evolutionary trees based on measurable traits. Their approach, called computational phylogenetics, treats languages as evolving systems, similar to biological organisms. But instead of tracing changes in DNA, as computational phylogenetics in biology does, the technique in linguistics traces words. Specifically, most analyses have looked at patterns in words that mean the same thing in different languages, and that can be traced back to the same Proto-Indo-European root. The more similar those patterns are, the more closely related languages are generally thought to be.

While this may sound like the language trees long used by linguists, the trees produced by computational phylogenetics are far less subjective: The method is governed by strict algorithms and explicitly stated rules.

In essence, the computer program works by drawing a language tree and estimating the probability that it is correct given all the data and assumptions. Then the program makes a single change to that tree and compares the probability scores, keeping whichever tree is more probable. The process is repeated, sometimes millions of times, resulting in a set of most-probable trees.
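That search loop can be shown in miniature. This sketch of mine uses invented cognate vectors for four languages and a crude pair-similarity score in place of a real likelihood; like the description above, it proposes one change at a time and keeps whichever tree scores better:

```python
import random

random.seed(1)

# Toy cognate data: 1 = the language keeps a reflex of that root, 0 = not.
# The vectors are invented for illustration.
COGNATES = {
    "English": (1, 1, 0, 1, 0, 1),
    "German":  (1, 1, 0, 1, 1, 1),
    "French":  (0, 1, 1, 0, 1, 0),
    "Italian": (0, 1, 1, 0, 1, 1),
}

def similarity(a, b):
    """Number of roots on which two languages agree."""
    return sum(x == y for x, y in zip(COGNATES[a], COGNATES[b]))

def score(tree):
    """Tree = ((a, b), (c, d)). Higher when each pair groups the more
    similar languages -- a stand-in for a real tree likelihood."""
    (a, b), (c, d) = tree
    return similarity(a, b) + similarity(c, d)

def propose(tree):
    """Single change: swap two leaves of the current tree."""
    (a, b), (c, d) = tree
    leaves = [a, b, c, d]
    i, j = random.sample(range(4), 2)
    leaves[i], leaves[j] = leaves[j], leaves[i]
    return ((leaves[0], leaves[1]), (leaves[2], leaves[3]))

tree = (("English", "French"), ("German", "Italian"))  # deliberately bad start
for _ in range(200):
    candidate = propose(tree)
    if score(candidate) >= score(tree):  # keep whichever tree is more probable
        tree = candidate
print(tree, score(tree))  # converges on English+German and French+Italian
```

Real phylogenetic software explores vastly larger tree spaces with Bayesian sampling rather than greedy acceptance, but the propose-score-keep skeleton is the same.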

These trees show how closely related languages are to one another. To estimate timings — when languages originated and diverged from one another — the researchers also provide the computer program with dates for when they think different languages existed, based on the best estimates of experts. Latin, for instance, existed around 2,050 years ago, Old Icelandic about 800 years ago, and Mycenaean Greek about 3,350 years ago. The computer program uses these anchor dates to create its timing estimates, including a date for the ultimate origin of Indo-European.
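The role of anchor dates can be illustrated with a glottochronology-style calculation, a far cruder model than the Bayesian method described above, and with hypothetical numbers: one pair of languages whose split date is known from history fixes a decay rate for shared core vocabulary, and that rate then dates other splits.

```python
import math

# Hypothetical anchor: suppose a pair of languages known to have
# diverged 2,000 years ago still shares 70% of the core vocabulary.
ANCHOR_SHARED = 0.70
ANCHOR_YEARS = 2000.0

# Assume shared cognates decay exponentially at a constant rate;
# the anchor fixes that rate.
rate = -math.log(ANCHOR_SHARED) / ANCHOR_YEARS

def divergence_years(shared_fraction: float) -> float:
    """Date a split from the fraction of still-shared core vocabulary."""
    return -math.log(shared_fraction) / rate

# A pair sharing 49% (= 0.7 squared) dates to twice the anchor depth.
print(round(divergence_years(0.49)))  # prints: 4000
```

The constant-rate assumption is exactly what modern Bayesian methods relax, which is one reason their date estimates differ from older glottochronological ones.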

The results can be combined with the historical record of where languages were spoken to help figure out a likely map of how they spread geographically. And the dates can be combined with the archaeological record and studies of ancient human DNA to see if the Indo-European language lines up with an early farming origin, or a later steppe origin.

Contradictory results

One such analysis, published in 2012, pointed to an origin of Indo-European about 9,000 years ago in Anatolia, supporting the theory that Indo-European originated with farmers. But just three years later, a different team used much the same data to conclude that the origin was just 6,000 years ago on the steppes, supporting the opposite view that pastoralists were the first Indo-European speakers. How could the two teams reach such different conclusions from such a similar list of words?

Two possible origins of Indo-European languages. Most historical linguists favor the origin illustrated in the top map, where the languages originated in the steppes about 6,000 years ago. A minority favor an origin among farmers about 9,000 years ago. (See the original page for the maps.)

Heggarty delved into the problem and discovered that the issue lay with the dataset used for both of these earlier analyses, which was largely based on one originally put together in the 1960s by Isidore Dyen, a linguist at Yale University. Dyen’s dataset had not been a problem for the research Dyen was doing, but when used for the new computational technique, it was throwing off the findings. Computational phylogeny works best when there is a single word for every root meaning researchers are interested in tracing. But the meaning “dirty,” for instance, can have a number of synonyms in English, including “filthy” and “unclean.” The Dyen dataset included synonyms like these for some words in some languages, but not for others.

Including any synonyms at all, Heggarty realized, made the dataset harder for the new computational technique to use. But having an inconsistent number of synonyms — more for some languages, fewer for others — really threw the calculations off. “I said, ‘Look, we have got to do this database completely again, from scratch. We have got to do much better,’” Heggarty says.

So he and his colleagues chose 170 core meanings they wanted to trace: basic words you would expect languages to preserve, such as words for counting numbers, body parts, colors and things like house, mountain, laugh and night. Then they brought together a team of more than 80 linguists and had them determine, for each of 161 Indo-European languages, the primary word for each concept. Only that word, and none of the synonyms, went into the analysis.

“We made a highly consistent database out of it, in a way that nobody has ever done before,” Heggarty says. “And we did a lot of analysis to make sure we chose the most appropriate meanings. If you don’t do your due diligence, your results won’t be valid.”

When Heggarty’s team reran the analysis with this new database, their findings broadly agreed with the earlier, farmer-origin theory, locating the origin squarely in Anatolia about 8,000 years ago. From there, some branches of the language moved eastward and gave rise to languages including Persian and Hindustani. Other branches moved west to eventually develop into Greek and Albanian.

But the analysis also recognizes the steppes as playing an important role as a secondary homeland for most European languages: After one branch traveled northward from Anatolia to the steppes, it radiated from there into northern Europe, giving birth to Germanic, Italic, Gaelic and other European language families.

Not convinced

Mainstream historical linguists remain skeptical, however — of computational phylogenetics in general and the new result in particular. The main criticism is that the approach relies mostly on vocabulary and ignores word sounds and structures, such as the stems, prefixes and suffixes that make up a word. And the critics say that word meanings by themselves don’t give enough information to draw firm conclusions, no matter how sophisticated the computation is.

Thomas Olander, a historical linguist at the University of Copenhagen, says that the problem with depending on related words is that languages borrow words from one another all the time. Just seeing that there are words in common between two languages, then, doesn’t mean the languages come from the same parent. The fact that English speakers now use the word “sushi,” for example, doesn’t mean that English and Japanese are related languages.

Instead, most linguists tend to trust sound shifts — such as the “dwo” – “two” – “zwei” shift — along with similarities in the structures of words that can indicate which language they originated in. Word meanings can also be part of that mix, but they can’t do it alone, Olander says.

Heggarty’s tree has other problems, as well. For instance, it shows Celtic languages as being closely related to Germanic languages. But Olander says most historical linguists think Celtic languages are much more closely related to Italic languages.

“It’s something that, again, is surprising,” Olander says. “I think ‘surprising’ could be translated to ‘It probably means that their method is wrong.’”

Olander thinks it is far more likely that Celtic and Germanic branches coexisted closely for a long time and loaned one another words. An analysis based solely on shared word meanings shows them as more closely related than they actually are, he says.

James Clackson, a linguist at Cambridge University, also finds the early date for Proto-Indo-European, and other details of the tree, unconvincing. But he thinks computational phylogenetics is worth pursuing. And if nothing else, he says, the most recent research created a very high-quality new dataset that will be important to historical linguists in general as they seek to solve many unsettled issues in their field.

In the meantime, advocates of computational phylogenetics are likely to continue to promote their methods and seek legitimacy from the wider discipline. Heggarty thinks that as mainstream linguists get more comfortable with the method and the high-quality data it uses, they may give it more of a hearing.

Clackson, for one, says he’s willing to be convinced. “It’s a developing field, and it’s worth keeping an eye on,” he says.

10.1146/knowable-021224-1

Kurt Kleiner is a freelance writer living in Toronto.


Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7222897
Microsoft Word's Subtle Influence on How People Use Language ------- Victoria Woollaston
胡卜凱

The surprisingly subtle ways Microsoft Word has changed how we use language


As Microsoft Word turns 40, we look at the role the software has played in four decades of language and communication evolution.

Victoria Woollaston, 10/25/23

For 40 years there’s been an invisible hand guiding the way many of us write, work, and communicate. Its influence has been pervasive, yet its impact has been subtle to the extent that you’ve likely never noticed. That invisible hand is Microsoft Word.


At its launch in October 1983, this influential software was known as Multi-Tool Word, and not long after, changed to Microsoft Word for DOS. Back then, there were more than 300 word-processing programs across multiple platforms. People of a certain age will remember WordStar or WordPerfect, yet in a little over a decade Word eclipsed these rivals. By 1994, Microsoft says it had claimed a 90% share of the word-processing market, making it one of the most successful, well-known software products in history.

While establishing how many people use Word is tricky, recent filings show there are 1.4 billion Windows devices in use each month, and more than 90% of the Fortune 500 use the software. If only a third of those people used Word, it would still be more than the population of North America.

This context is important because it helps to explain why, and how, Word has had such influence on our lives.

Ironically, given its ubiquity, Word has rarely been a pioneer when it comes to features. As mentioned, it was far from the first word processor. It’s often credited with introducing grammar tools, despite the fact these were developed decades earlier. And the idea behind "track changes" – where you can see edits to a document – wasn’t a Microsoft invention.

Yet Word’s superpower was using smart, simple design choices to make such features accessible to a global audience, not just techies. Its "What You See is What You Get" (WYSIWYG) design philosophy is now commonplace in software and on the internet. Word introduced line breaks, along with bold and italic fonts on screen. It revolutionised typeset-quality printing, as well as the use of templates. And it was in these templates that Word’s early impact on communication emerged.

"Word templates led people to use the same formatting in communications, and eventually, this has become instantiated as a norm," says Gloria Mark, a professor of informatics at the University of California, Irvine, where she studies human-computer interaction. If you work in finance, there's a specific way reports are expected to be laid out. Letters follow a set pattern, memos are largely formatted in the same way. "Users know where to find information in these standardised documents; they don’t need to spend time trying to find what they need."

If you take this idea of professional conformity a step further, Word has also been significant in helping establish English as the global language of business. While it would be an overstatement to say Word alone made English the dominant language, as a US firm, Microsoft's mother-tongue is American-English. When this is coupled with Word’s ubiquity, it at least reinforces this dominance.

"Word primarily operates in English," says Noël Wolf, a linguistic expert at the language learning platform Babbel. "As businesses become increasingly global, the widespread use of Word in professional and technical fields has led to the borrowing of English terms and structures, which contribute to the trend of linguistic homogenisation."

Word's spell-checker and grammar features have become subtle arbiters of language, too. Although seemingly trivial, these tools "promote a sense of consistency and correctness", says Wolf, and this uniformity comes at the cost of writing diversity.  "Writers, when prompted by the software's automated norms, might unintentionally forsake their unique voices and expressions."

This becomes even more invasive when you look at the role and impact of autocorrect and predictive text. Today, when typing on Word, the software can automatically correct your spelling, and make suggestions for what to write next. These suggestions aren't (yet) based on your personal writing style and tone – they're rule-based. The suggestions you see will be the same as millions of others. Again, this may feel innocuous but it's another example of how Word standardises language by loosely guiding everyone down the same path. 

"Without auto-completion, a person might check a thesaurus when searching for a word," says Mark. "With auto-completion, it can lead to more rote use of language and may not encourage original writing."
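Mark's point about rote, identical suggestions can be made concrete with a toy model. The sketch below is my own illustrative assumption, not Microsoft's actual algorithm: a simple bigram table trained on one shared corpus hands every user the identical next-word suggestion.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions in a shared training corpus."""
    words = corpus.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def suggest(table, prev_word):
    """Return the single most frequent follower: the same for every user."""
    followers = table.get(prev_word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A hypothetical shared corpus of stock business phrases.
corpus = ("please find attached the report . "
          "please find attached the invoice . "
          "please let me know if you have any questions .")

table = train_bigrams(corpus)
print(suggest(table, "find"))      # -> attached
print(suggest(table, "attached"))  # -> the
```

Because the table is built from a common corpus rather than from each writer's own prose, `suggest()` is deterministic and identical for everyone, which is exactly the homogenising pull described above.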

For those trying to learn a language, or even get better at their native tongue, features such as autocorrect have been found to have a negative impact on students' writing abilities and spelling skills. There is also some evidence that, in adults at least, autocorrect features, rather than making users lazy spellers, can reduce exposure to misspellings and so avoid the disruptive effect misspellings can have on our memory of how a word should be spelt.

Wolf adds that by promoting uniformity in written communication, grammar and spelling features in word processors such as Word "enforce established language norms".

"Word may not recognise vocabulary or grammar conventions that are part of local dialects, and will try to correct them," she says. This can effectively marginalise regional nuance, she adds. On an individual scale, these impacts may seem minimal, but they become magnified when repeated across software that has millions, if not billions, of users.

Such tools play a broader role in the evolution of language more generally, too. Because Word defaults to US-English, so too do its spellchecking features. Write a word ending in "-ise" and it will suggest changing it to "-ize", unless you've taken the time to change the default settings.

Of course, it is worth remembering that Word isn't just available in English. It supports around 100 languages, including European Spanish and Latin-American Spanish.

But when the differences between dialects are subtle, the suggestions made by the software become less noticeable and more persuasive.

Take the word "trialing/trialling", for instance. "Trialing" is considered the US-English spelling. The Oxford English Dictionary (OED) doesn't even recognise "trialing" as a word, opting instead for the double-L spelling common in British-English. Despite such a clear distinction by the OED, the American spelling regularly makes its way into British-English publications and style guides. There's an argument that this is because of increasingly global audiences, and it would again be an overstatement to lay the increasing Americanisation of English entirely at the feet of a piece of word-processing software. But as part of the influence American culture has had on language more generally, Word may have played a role.

Similarly, the efficiency brought about by standardisation can shape how we write, not just what we write. When clarity is put ahead of stylistic or poetic flair – Word's grammar checker has a specific "clarity" refinement option – it can have implications for how we value forms of creativity.

Based on a quick, albeit arbitrary, experiment, if Harper Lee had used Word to write To Kill a Mockingbird, the software's clarity refinement would have suggested changing: "I never loved to read. One does not love breathing," to "I never loved to read. Breathing is necessary." Does this remove the poetry and depth of the original? The example is somewhat facetious, but it illustrates the effects using such tools can have. In an age where clarity and readability are prioritised by search engines, and in which the glut of online content is said to be shrinking our attention span, such seemingly subtle shifts could potentially have far-reaching impact on creative writing.

Yet there's no denying Word has improved accessibility and diversity in myriad other ways. By bringing word processing into people's homes, it has empowered more people to write, create and contribute. This increases diversity of voice, rather than impeding it. In 2018, Microsoft added dictation features to Word to enable people with dyslexia and dysgraphia to type easily with their voice. It is one of a number of accessibility features in the software.

In the early days of its adoption, word processing was even linked with promoting greater, not worse, essay-writing prowess. In a 1992 study, essays written on a word processor were rated "significantly higher" than those written by hand. Word processing allowed students to make "microstructural rather than macrostructural" changes to their work, and by being able to continuously revise their writing, the end result was seemingly improved.

According to Wolf, word processing has taken some of the cognitive load out of writing, which allows more space for creativity. For example, while spell-check may not promote spelling retention, "when used mindfully, spell-checkers serve as valuable aids by enhancing language proficiency and communication", says Wolf. "They enable users to focus on word choice and strategy of communication rather than spending time and energy pinning down the correct spelling."

This has the potential to become even more impactful as artificial intelligence (AI) is integrated into word processing. But opinion is divided. On one hand, by removing technical aspects of spelling, sentence construction and so on, AI may free people up to be even more creative. Its ability to learn and understand a person's unique writing style could lead to an era of highly personal, unencumbered imagination. "[Generative AI] cannot be entirely relied upon to produce accurate prose every time, but it can reduce the process of creating text to editing," according to Ed Challis, head of AI strategy at UiPath, which develops robotic automation software. He believes this will "lead to innovations across all areas of content creation and communication".

That said, a potential over-reliance on automation could discourage users from actively learning and improving their linguistic and written skills, believes Wolf. If large language models are being trained on decades of increasingly homogenised content, there's the risk this will make things worse, not better.

Whatever happens, the next 40 years are likely to continue the push-pull effect Word has had on language and society over the previous 40.

"Microsoft Word's impact on linguistic evolution is a complex interplay between standardisation and diversification," says Wolf. "It can homogenise language but also enables expression in various languages. The ultimate impact depends on how individuals and communities choose to use this powerful tool in the evolving landscape of global communication."




When did humans first develop the capacity for language ---- Avery Hurt

When Did Humans Evolve Language?

When did language start? Find out why the exact timeline for the evolution of language remains up for debate among researchers.

Avery Hurt, 10/27/23

Earth is home to more than 7,000 languages, and we use those languages to express ideas as straightforward as the desire for a cup of coffee and as intricate as the details of quantum physics. But how did language start and when did humans first evolve this ability to use language?

When Did Humans Develop Language?

The development of human language has long fascinated scholars and linguists. These experts have various perspectives and theories for when humans started speaking and the reasons language evolved in the way that it did. There is much to learn from the laryngeal descent theory, to the arguments made for neurological and intelligence-based speech development.

The Laryngeal Descent Theory

The laryngeal descent theory (LDT) posits that language became possible only after anatomically modern Homo sapiens evolved, around 200,000 to 300,000 years ago. In H. sapiens, the larynx sits lower in the throat than in our pre-H. sapiens ancestors or in modern non-human primates.

This position of the larynx makes the vocal tract longer, making it possible to produce a variety of speech sounds, particularly the subtle distinctions among vowel sounds that our ancestors could not and other primates cannot make. Scientists call this the LDT and for many years, it was the most widely accepted view.

Challenging the Laryngeal Descent Theory

However, in 2019, a study in Science Advances called that dogma into question. The researchers looked at decades of research on primate vocalizations and anatomy, including, for example, research finding that the macaque, an Old-World monkey, has the necessary anatomy to support spoken language and is quite capable of distinctly producing the vowel sounds heard in "bit," "bet," "bat," "but," and "bought."

How Did Language Start?

Based on this research and other data, the authors argue that the ability to make speech sounds goes back to the time when humans and Old-World monkeys last shared a common ancestor, as long as 27 million years ago.  

Alternate Views on the Origins of Language

As might be expected with such a dramatic shift in dogma, not everyone agrees with that timeline. In an article for The Conversation, George Poulos, a linguist at the University of South Africa, says that the first speech sounds came along a mere 70,000 years ago, and that the ability to produce vowel and consonant sounds didn't evolve until around 50,000 years ago.

What Was the First Human Language?

The first signs of human language were clicks followed by more elaborate language as the tongue, mouth, pharynx, nasal passages, and larynx gradually evolved.

The Evolution of Brain and Language

Another theory is that neurological changes may have been the driver of the ability to produce speech. That’s the conclusion of the authors of the 2019 paper. The reason Old-World monkeys can’t talk, the researchers say, is not because of the anatomy of their vocal tracts but because they don’t have the necessary neural structures. The neurological language theory seems to be holding up better than LDT.

How Neurological Development Drives Spoken Language

Richard Futrell, assistant professor at the University of California Irvine, who studies language processing in humans and machines, explains why a certain level of neurological development is necessary for spoken language. “Speaking requires fine-grained motor control, and it has to be fast,” he says. 

The Need for Fine-Grained Motor Control

That high-speed control was provided by more direct connections between the motor cortex and the vocal tract. Futrell says that neurons are like telephone poles sending signals from your brain to other parts of your body.

If the signal has to make too many hops before it can control your mouth, it will be too slow for much precision. “There seems to be a special fast path for control of the vocal apparatus in humans that enables humans to produce more different sounds reliably,” Futrell says.

How Did Language Develop?

Another possible explanation is that language is simply a consequence of increasing general intelligence. The more information processing capacity you have, the more complex patterns you can understand and produce.

Intelligence and Language

Futrell describes a line of research that correlates intelligence and language based on computer simulations. The basic finding, he says, is that if you simulate agents communicating with each other using signals, they have to coordinate with each other to come up with a set of signals that enable them to communicate well. This naturally produces a kind of linguistic structure, but the languages have to be simple so that they’re easy to learn.
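The kind of agent simulation Futrell describes can be sketched as a minimal Lewis signalling game. The set-up below (the meanings, signals, and reinforcement scheme) is my own illustrative assumption, not the research code itself: two agents reinforce whichever meaning-signal pairings happen to succeed, and a shared code tends to emerge.

```python
import random

random.seed(0)

MEANINGS = ["predator", "food", "mate"]
SIGNALS = ["A", "B", "C"]

# Each agent keeps reinforcement weights over meaning<->signal pairings,
# all starting equal (no convention exists yet).
sender = {m: {s: 1.0 for s in SIGNALS} for m in MEANINGS}
receiver = {s: {m: 1.0 for m in MEANINGS} for s in SIGNALS}

def weighted_choice(weights):
    """Sample a key in proportion to its current reinforcement weight."""
    options = list(weights)
    return random.choices(options, [weights[o] for o in options])[0]

for _ in range(5000):
    meaning = random.choice(MEANINGS)          # something to communicate
    signal = weighted_choice(sender[meaning])  # sender picks a signal
    guess = weighted_choice(receiver[signal])  # receiver interprets it
    if guess == meaning:
        # Communication succeeded: strengthen this pairing on both sides.
        sender[meaning][signal] += 1.0
        receiver[signal][meaning] += 1.0

# After training, each meaning usually maps to one dedicated signal.
code = {m: max(sender[m], key=sender[m].get) for m in MEANINGS}
print(code)
```

No one designs the code in advance; the coordination pressure alone pushes the agents toward a stable, structured mapping, which is the basic finding Futrell describes.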

The Development of Linguistic Efficiency

It may come as a surprise if you’ve ever tried to master Spanish imperfect tenses or figure out why German verbs keep leaping to the ends of sentences, but language systems are actually pretty simple. Futrell explains: “Imagine if you had to memorize 20 million different words, each of which expresses a very precise meaning. That would be really hard. Instead, we have languages with 10,000 to 50,000 words and a simple set of rules that allows us to combine words to form sentences.”
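Futrell's arithmetic is easy to check. The sentence length below is my own assumption for illustration, but it shows why a modest vocabulary plus a combination rule beats rote memorisation:

```python
# Holistic alternative: memorise one dedicated word per precise meaning.
holistic_lexicon = 20_000_000

# Compositional system: a modest vocabulary plus a combination rule.
vocab = 10_000          # lower end of Futrell's 10,000-50,000 range
sentence_length = 5     # assumed for illustration

combinations = vocab ** sentence_length  # 10**20 possible five-word sequences
print(f"{combinations:.1e} sequences from only {vocab:,} learned words")
print(f"{combinations // holistic_lexicon:,} times the holistic lexicon")
```

Even this crude count ignores grammar's constraints on which sequences are well formed, but the gap of many orders of magnitude makes the point: combinatorial structure escapes the memorisation bottleneck.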

The Simplicity Bottleneck of Language

When all we needed to communicate was a few distinct calls to warn of predators, attract a mate or threaten a rival, we didn't need much linguistic structure. But as we got smarter and found more things we wanted to communicate, we ran into what Futrell calls a "simplicity bottleneck": we couldn't just keep adding more words.

“Our brains aren’t big enough; our lives aren’t long enough to learn them all,” he says.

At that point, if the computer models are correct, linguistic structure was inevitable. This may also, Futrell says, have led to a runaway evolutionary dynamic where an increase in the complexity of culture meant that people who had better communication had more evolutionary success; meanwhile, better communication led to even greater cultural complexity. Before you know it, you have 7,000 languages and mind-twisting conversations about quantum physics.



