網路城邦 udn City – 時事論壇 (Current Affairs Forum) | Moderator: 胡卜凱
Natural Science: Popular Science – Opening Post
Views: 2,086 | Replies: 15 | Recommendations: 3

胡卜凱 (Level 8)

Recommended by (3): 亓官先生, 嵩麟淵明, 胡卜凱

I am a physics graduate, so I have a basic grounding in the natural sciences; I find reports of new scientific findings easy to follow, and I enjoy reading them.

Although 《中華雜誌》 was a journal of political commentary and humanistic scholarship, it occasionally introduced research results from the natural sciences, and every year it carried news of the Nobel Prize winners. While still in college I translated for it a report on pulsars in astronomy.

Around 1980, my classmate and good friend 王家堂 introduced me to the world of popular-science writing on high-energy physics, and I have often read books in this area ever since. I have thus kept up my interest in physics, which later led me, naturally enough, into cosmology.

These three points are the background to this blog's long-standing habit of reposting reports and papers on the natural sciences.



(Edited 3 times.)
Trackback: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7219386
The Risks of Gain-of-Function Research – Ross Pomeroy
Recommended: 1



Over the past two or three years I have noticed the term "gain-of-function" beginning to appear in science and technology reporting. Because it is so specialized, I never bothered to take the time to understand it. The article below has a rather sensational title but is plain and easy to follow; it may be a good opportunity to get acquainted with the term.

The article mentions that the novel coronavirus may have come from this kind of experiment, though the author stresses that this is "unlikely"; that is interesting in itself. The author also hints that it could become a biological weapon, which deserves vigilance and attention.

Glossary

airborne: here, transmitted through the air; also: in the air, carried by air, in flight
antigenic: 具抗原性的 (my translation; corrections from experts welcome)
dank: cold and damp
Gain-of-Function Research: (基因)機能擴張研究
lyssavirus: 麗沙病毒屬
pathogen: 病原體
rabies: 狂犬病
spelunking: cave exploration


The Gain-of-Function Experiment That Could 'Eliminate Humans From the Face of the Earth'

Ross Pomeroy, 06/15/24

A Google search for "Frio Cave" makes the Uvalde County, Texas destination look like a tourists' dream. One quickly learns that the cave is home to tens of millions of Mexican free-tailed bats, and that you can sometimes witness the flapping horde streaming out of their dark, dank home just before sunset, clouding the sky in a "once in a lifetime experience."

But Frio Cave has a darker history that visitor websites don't mention. More than fifty years ago, two humans contracted rabies while spelunking there.

That humans would get infected with rabies while visiting a bat-infested cave isn't altogether surprising. Bats are a reservoir for the terrifying disease – 99% fatal to humans once symptoms like hyperactivity, hallucinations, seizures, and fear of water develop. A simple bite from one of the millions of bats could have transmitted a lyssavirus that triggers rabies. However, in this instance, the spelunkers apparently weren't bitten. Rather, it seems they caught the virus from the air itself.

A team of scientists subsequently investigated. They found that rabies virus could be transmitted to animals housed in empty cages within the cave, apparently just via the atmosphere itself. Moreover, the virus was isolated from samples collected via air condensation techniques.

The episode raised a disturbing prospect. Had rabies, the deadliest virus for humankind, gone airborne?

To be clear, it had not, at least not in a manner that would result in ultra-contagious, human-to-human spread. The sheer number of rabies-carrying bats in the cave likely transformed it into a "hot-box" of infection. Rabies remains transmitted almost entirely through bites and scratches from infected animals, and it is rapidly inactivated by sunlight and heat. However, for safety, members of the general public are now only allowed to enter Frio Cave on guided tours that remain near the mouth of the cave.

That doesn't mean that rabies virus couldn't mutate to become transmitted through the air. It's an RNA virus, and these are known to have high mutation rates. Indeed, scientists have found "a vast array of antigenic variants of this pathogen in a wide range of animal hosts and geographic locations."

Moreover, as two Italian scientists wrote in a 2021 article, "Even single amino acid mutations in the proteins of Rabies virus can considerably alter its biological characteristics, for example increasing its pathogenicity and viral spread in humans, thus making the mutated virus a tangible menace for the entire mankind."

Another possible route for this to occur would be through a "gain-of-function" experiment, in which researchers employ gene-editing to tweak the rabies virus, making it evade current vaccines and endowing it with the ability to spread through the air like measles or influenza. Gain-of-function research has earned increased public scrutiny of late, as there's a small, outside chance it may have produced SARS-CoV-2, the virus that causes Covid-19.

Paul Offit, a professor of pediatrics at the Children's Hospital of Philadelphia and co-inventor of a rotavirus vaccine, commented on the potential to augment rabies through gain-of-function in a recent Substack post.

"In the absence of an effective vaccine, it could eliminate humans from the face of the earth. The good news is that no one has tried to make rabies virus more contagious. But that doesn’t mean that it’s not possible or that no one would be willing to try."


(Edited 1 time.)
Trackback: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7231232
What Is the "Three-Body Problem," and Is It Really Unsolvable? – Skyler Ware
Recommended: 1



Continuing the popular-science coverage of the "three-body (interaction) problem." See the 2024/04/13 post in this thread, as well as the related posts Article 1 and Article 2 (in that thread, 2024/04/12).


What is the 3-body problem, and is it really unsolvable?

Skyler Ware, 06/06/24

The three-body problem is a physics conundrum that has boggled scientists since Isaac Newton's day. But what is it, why is it so hard to solve and is the sci-fi series of the same name really possible?

The real-life "Tatooine" planet Kepler-16b orbits two suns at once, illustrating the infamous three-body problem. (Image credit: NASA/JPL-Caltech; see the photo at the original page.)

A rocket launch. Our nearest stellar neighbor. A Netflix show. All of these things have something in common: They must contend with the "three-body problem." But exactly what is this thorny physics conundrum?

The three-body problem describes a system containing three bodies that exert gravitational forces on one another. While it may sound simple, it's a notoriously tricky problem and "the first real worry of Newton," Billy Quarles, a planetary dynamicist at Valdosta State University in Georgia, told Live Science.

In a system of only two bodies, like a planet and a star, calculating how they'll move around each other is fairly straightforward: Most of the time, those two objects will orbit roughly in a circle around their center of mass, and they'll come back to where they started each time. But add a third body, like another star, and things get a lot more complicated. The third body attracts the two orbiting each other, pulling them out of their predictable paths.

The motion of the three bodies depends on their starting state — their positions, velocities and masses. If even one of those variables changes, the resulting motion could be completely different.

"I think of it as if you're walking on a mountain ridge," Shane Ross, an applied mathematician at Virginia Tech, told Live Science. "With one small change, you could either fall to the right or you could fall to the left. Those are two very close initial positions, and they could lead to very different states."

There aren't enough constraints on the motions of the bodies to solve the three-body problem with equations, Ross said.

But some solutions to the three-body problem have been found. For example, if the starting conditions are just right, three bodies of equal mass could chase one another in a figure-eight pattern. Such tidy solutions are the exception, however, when it comes to real systems in space.

Certain conditions can make the three-body problem easier to parse. Consider Tatooine, Luke Skywalker's fictional home world from "Star Wars" — a single planet orbiting two suns. Those two stars and the planet make up a three-body system. But if the planet is far enough away and orbiting both stars together, it's possible to simplify the problem.

This artist's image illustrates Kepler-16b, the first directly detected circumbinary planet – a planet that orbits two stars. (Image credit: NASA/JPL-Caltech; see the photo at the original page.)

"When it's the Tatooine case, as long as you're far enough away from the central binary, then you think of this object as just being a really fat star," Quarles said. The planet doesn't exert much force on the stars because it's so much less massive, so the system becomes similar to the more easily solvable two-body problem. So far, scientists have found more than a dozen Tatooine-like exoplanets, Quarles told Live Science.

But often, the orbits of the three bodies never truly stabilize, and the three-body problem gets "solved" with a bang. The gravitational forces could cause two of the three bodies to collide, or they could fling one of the bodies out of the system forever — a possible source of "rogue planets" that don't orbit any star, Quarles said. In fact, three-body chaos may be so common in space that scientists estimate there may be 20 times as many rogue planets as there are stars in our galaxy.

When all else fails, scientists can use computers to approximate the motions of bodies in an individual three-body system. That makes it possible to predict the motion of a rocket launched into orbit around Earth, or to predict the fate of a planet in a system with multiple stars.
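As a concrete illustration of that kind of numerical approximation (not from the article), here is a minimal Python sketch: a leapfrog integrator for Newtonian gravity applied to a hypothetical Tatooine-like system – a tight equal-mass binary plus a light, distant circumbinary planet. All masses, distances, and the G = 1 units are invented for illustration.

```python
import math

def accelerations(pos, masses, G=1.0):
    """Newtonian gravitational acceleration on each body (2-D, pairwise)."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = math.hypot(dx, dy) ** 3
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def leapfrog(pos, vel, masses, dt, steps, observe):
    """Velocity-Verlet (leapfrog) integration: symplectic, so orbital
    energy errors stay bounded instead of accumulating."""
    acc = accelerations(pos, masses)
    for k in range(steps):
        pos = [[p[0] + v[0] * dt + 0.5 * a[0] * dt * dt,
                p[1] + v[1] * dt + 0.5 * a[1] * dt * dt]
               for p, v, a in zip(pos, vel, acc)]
        new_acc = accelerations(pos, masses)
        vel = [[v[0] + 0.5 * (a[0] + b[0]) * dt,
                v[1] + 0.5 * (a[1] + b[1]) * dt]
               for v, a, b in zip(vel, acc, new_acc)]
        acc = new_acc
        observe(k, pos)
    return pos, vel

# Hypothetical "Tatooine" setup (G = 1 units): two 0.5-mass suns on a
# circular mutual orbit of radius 0.5 (speed 0.5 each), plus a light
# planet far enough out that the binary acts like one "fat star".
masses = [0.5, 0.5, 0.001]
pos = [[-0.5, 0.0], [0.5, 0.0], [5.0, 0.0]]
vel = [[0.0, -0.5], [0.0, 0.5], [0.0, math.sqrt(1.0 / 5.0)]]

radii = []
leapfrog(pos, vel, masses, dt=0.002, steps=35_000,
         observe=lambda k, p: radii.append(math.hypot(p[2][0], p[2][1])))

# Over roughly one full planetary orbit, the planet's distance from the
# binary stays close to 5: the hierarchical system is stable.
rmin, rmax = min(radii), max(radii)
```

Move the planet in close to the binary, or make the three masses comparable, and the same integrator instead shows the chaotic divergence and ejections described above – which is exactly why these systems are studied numerically rather than with closed-form solutions.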


With all this tumult, you might wonder if anything could survive on a planet like the one featured in Netflix's "3 Body Problem," which — spoiler alert — is trapped in a chaotic orbit around three stars in the Alpha Centauri system, our solar system's nearest neighbor.

"I don't think in that type of situation, that's a stable environment for life to evolve," Ross said. That's one aspect of the show that remains firmly in the realm of science fiction.
 


Skyler Ware is a freelance science journalist covering chemistry, biology, paleontology and Earth science. She was a 2023 AAAS Mass Media Science and Engineering Fellow at Science News. Her work has also appeared in Science News Explores, ZME Science and Chembites, among others. Skyler has a Ph.D. in chemistry from Caltech.


(Edited 1 time.)
Trackback: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7230651
Genes and Intellectual Disability – Emily Cooke
Recommended: 1



New genetic cause of intellectual disability potentially uncovered in 'junk DNA'

Emily Cooke, 06/01/24

Mutations in "junk DNA" could be responsible for rare genetic cases of intellectual disability, new research hints.

Scientists have uncovered a rare genetic cause of intellectual disability in a historically overlooked part of the human genome: so-called junk DNA.

This knowledge could someday help to diagnose some patients with these disorders, the researchers say.

An intellectual disability is a neurodevelopmental disorder that appears during childhood and is characterized by intellectual difficulties that impact people's learning, practical skills and ability to live independently. Such conditions affect approximately 6.5 million Americans.

Factors such as complications during birth can trigger intellectual disabilities. However, in most cases, the disorders have an underlying genetic cause. So far, around 1,500 genes have been linked with various intellectual disabilities — but clinicians are still not always able to identify the specific cause of every patient's condition.

One possible explanation for this gap in knowledge is that previous approaches for reading DNA have only focused on a tiny portion of it. Specifically, they've looked at the roughly 2% of the genome that codes for proteins, known as coding DNA. About 98% of the genome contains DNA that doesn't code for proteins. This DNA was once considered "junk DNA," but scientists are now discovering that it actually performs critical biological functions.

In a new study, published Friday (May 31) in the journal Nature Medicine, scientists used whole-genome sequencing technology to identify a rare genetic mutation within non-coding DNA that seems to contribute to intellectual disability.

The team compared the whole genomes of nearly 5,530 people who have a diagnosed intellectual disability to those of about 46,400 people without the conditions. These data were gathered from the U.K.-based 100,000 Genomes Project.

The researchers discovered that 47 of the people with intellectual disabilities — about 0.85% — carried mutations in a gene called RNU4-2. They then validated this finding in three additional large, independent genetic databases, bringing the total number of cases to 73.

RNU4-2 doesn't code for proteins but rather for an RNA molecule, a cousin of DNA; RNA's code can either be translated into proteins or stand on its own as a functional molecule. The RNA made by RNU4-2 makes up part of a molecular complex called the spliceosome. The spliceosome helps to refine RNA molecules after their codes are copied down from DNA by "splicing" out certain snippets of the code.

To further determine the prevalence of this new disorder, the team then launched a separate analysis where they looked at the genomes of another 5,000 people in the U.K. who'd been diagnosed with "neurodevelopmental abnormality." This is a term that refers to any deviation from "normal" in the neurodevelopment of a child.

The team's analysis revealed that, out of those 5,000 people, 21 carried mutations in RNU4-2. That made the mutations the second most common type seen in the overall group, following mutations on the X chromosome known to cause a disorder called Rett syndrome. If changes in RNU4-2 can be confirmed as a cause of intellectual disability, this finding hints that the mutations may contribute significantly to a variety of conditions.

The new study joins a second that also linked RNU4-2 to intellectual disabilities. The research has opened up "an exciting new avenue in ID [intellectual disability] research," Catherine Abbott, a professor of molecular genetics at the University of Edinburgh in the U.K. who was not involved in either study, told Live Science in an email.

"These findings reinforce the idea that ID can often result from mutations that have a cumulative downstream effect on the expression of hundreds of other genes," Abbott said. RNA molecules that don't make proteins often help control the activity of genes, turning them on or off. The findings also stress the importance of sequencing the whole genome rather than just coding DNA, she said.

The scientists behind the new study say the findings could be used to diagnose certain types of intellectual disability.

The team now plans to investigate the precise mechanism by which RNU4-2 causes intellectual disabilities — for now, they've only uncovered a strong correlation.


Emily Cooke is a health news writer based in London, United Kingdom. She holds a bachelor's degree in biology from Durham University and a master's degree in clinical and therapeutic neuroscience from Oxford University. She has worked in science communication, medical writing and as a local news reporter while undertaking journalism training. In 2018, she was named one of MHP Communications' 30 journalists to watch under 30. (emily.cooke@futurenet.com)



(Edited 1 time.)
Trackback: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7230231
A New Theory of Gravity for Cosmology – Claudia de Rham
Recommended: 1



If I have not misunderstood Professor de Rham, she is attempting to use the (at present) hard-to-measure concept of a "massive graviton" to replace the (at present) unmeasurable concept of "dark energy," in order to build a new theory of gravity that explains the accelerating expansion of the universe.

I am not a physicist; I will only say that, if I have not misunderstood it, this piece reads somewhat like a play on words.


New theory of gravity solves accelerating universe

Massive Gravity and the end of dark energy

Claudia de Rham, 05/03/24

The universe is expanding at an accelerating rate, but Einstein's theory of General Relativity and our knowledge of particle physics predict that this shouldn't be happening. Most cosmologists pin their hopes on Dark Energy to solve the problem. But, as Claudia de Rham argues, Einstein's theory of gravity is incorrect over cosmic scales; her new theory of Massive Gravity limits gravity's force in this regime, explains why the acceleration is happening, and eliminates the need for Dark Energy.


The beauty of cosmology is that it often connects the infinitely small with the infinitely big – but within this beauty lies the biggest embarrassment in the history of physics.

According to Einstein’s theory of General Relativity and our knowledge of particle physics, the accumulated effect of all infinitely small quantum fluctuations in the cosmos should be so dramatic that the Universe itself should be smaller than the distance between the Earth and the Moon. But as we all know, our Universe spans over tens of billions of light years: it clearly stretches well beyond the moon.

This is the "Cosmological Constant Problem". Far from being only a small technical annoyance, this problem is the biggest discrepancy in the whole history of physics. The theory of Massive Gravity, developed by my colleagues and me, seeks to address this problem.

For this, one must do two things. First, we need to explain what leads to this cosmic acceleration; second, we need to explain why it leads to the observed rate of acceleration – no more, no less. Nowadays it is quite popular to address the first point by postulating a new kind of Dark Energy fluid to drive the cosmic acceleration of the Universe. As for the second point, a popular explanation is the anthropic principle: if the Universe were accelerating at a different rate, we wouldn't be here to ask ourselves the question.

To my mind, both these solutions are unsatisfactory. In Massive Gravity, we don’t address the first point by postulating a new form of as yet undiscovered Dark Energy but rather by relying on what we do know to exist: the quantum nature of all the fundamental particles we are made out of, and the consequent vacuum energy, which eliminates the need for dark energy. This is a natural resolution for the first point and would have been adopted by scientists a long time ago if it wasn’t for the second point: how can we ensure that the immense levels of vacuum energy that fill the Universe don’t lead to too fast an acceleration? We can address this by effectively changing the laws of gravity on cosmological scales and by constraining the effect of vacuum energy.

The Higgs Boson and the Nature of Nothingness

At first sight, our Universe seems to be filled with a multitude of stars within galaxies. These galaxies are gathered in clusters surrounded by puffy “clouds” of dark matter. But is that it? Is there anything in between these clusters of galaxies plugged in filaments of dark matter? Peeking directly through our instruments, most of our Universe appears to be completely empty, with empty cosmic voids stretching between clusters of galaxies. There are no galaxies, nor gas, nor dark matter nor anything else really tangible we can detect within these cosmic voids. But are they completely empty and denuded of energy? To get a better picture of what makes up “empty space,” it is useful to connect with the fundamental particles that we are made of.

I still vividly remember watching the announcement of the discovery of the Higgs boson in 2012. By now most people have heard of this renowned particle and how it plays an important role in our knowledge of particle physics, as well as how it is responsible for giving other particles mass. However, what’s even more remarkable is that the discovery of the Higgs boson and its mechanism reveals fundamental insights into our understanding of nothingness.

To put it another way, consider “empty space" as an area of space where everything has been wiped away down to the last particle. The discovery of the Higgs boson indicates that even such an ideal vacuum is never entirely empty: it is constantly bursting with quantum fluctuations of all known particles, notably that of the Higgs.

This collection of quantum fluctuations I'll refer to as the “Higgs bath.” The Higgs bath works as a medium, influencing other particles swimming in it. Light or massless particles, such as photons, don’t care very much about the bath and remain unaffected. Other particles, such as the W and Z bosons that mediate the Weak Force, interact intensely with the Higgs bath and inherit a significant mass. As a result of their mass the Weak Force they mediate is fittingly weakened.

Accelerating Expansion

When zooming out to the limits of our observable Universe we have evidence that the Universe is expanding at an accelerating speed, a discovery that led to the 2011 Nobel Prize in Physics. This is contrary to what we would have expected if most of the energy in the Universe was localized around the habitable regions of the Universe we are used to, like clusters of galaxies within the filaments of Dark Matter. In this scenario, we would expect the gravitational attraction pulling between these masses to lead to a decelerating expansion.

So what might explain our observations of acceleration? We seem to need something which fights back against gravity but isn’t strong enough to tear galaxies apart, which exists everywhere evenly and isn't diluted by the expansion of the cosmos. This “something” has been called Dark Energy.

A different option is vacuum energy. We’ve long known that the sea of quantum fluctuations has dramatic effects on other particles, as with the Higgs Bath, so it’s natural to ask about its effect on our Universe. In fact, scientists have been examining the effect of this vacuum energy for more than a century, and long ago realized that its effects on cosmological scales should lead to an accelerated expansion of the Universe, even before we had observations that indicated that acceleration was actually happening. Now that we know the Universe is in fact accelerating, it is natural to go back to this vacuum energy and estimate the expected rate of cosmic acceleration it leads to.

The bad news is that the rate of this acceleration would be way too fast. The estimated acceleration rate would be wrong by at least twenty-eight orders of magnitude! This is the “Cosmological Constant Problem” and is also referred to as the “Vacuum Catastrophe.”

Is our understanding of the fundamental particles incorrect? Or are we using Einstein’s theory of General Relativity in a situation where it does not apply?

The Theory of Massive Gravity

Very few possibilities have been suggested. The one I would like to consider is that General Relativity may not be the correct description of gravity at large cosmological scales where gravity remains untested.

In Einstein’s theory of gravity, the graviton like the photon is massless and gravity has an infinite reach. This means objects separated by cosmological scales, all the way up to the size of the Universe, are still under the gravitational influence of each other and of the vacuum energy that fills the cosmos between them. Even though locally the effect of this vacuum energy is small, when you consider its effect accumulated over the whole history and volume of the Universe, its impact is gargantuan, bigger than everything else we can imagine, so that the cosmos would be dominated by its overall effect. Since there is a lot of vacuum energy to take into account, this leads to a very large acceleration, much larger than what we see, which again is the Cosmological Constant Problem. The solution my colleagues and I have suggested is that perhaps we don’t need to account for all this vacuum energy. If we only account for a small fraction of it, then it would still lead to a cosmic acceleration but with a much smaller rate, compatible with the Universe in which we live.

Could it be that the gravitational connection that we share with the Earth, with the rest of the Galaxy and our local cluster only occurs because we are sufficiently close to one another?

Could it be that we do not share that same gravitational connection with very distant objects, for instance with distant stars located on the other side of the Universe some 10 thousand million trillion km away? If that were the case, there would be far less vacuum energy to consider and this would lead to a smaller cosmic acceleration, resolving the problem.

In practice, what we need to do is understand how to weaken the range of gravity. But that's easy: nature has already shown us how to do it. We know that the Weak Force is weak and has a finite range because the W and Z bosons that carry it are massive particles. So, in principle, all we have to do is give a mass to the graviton. Just as Einstein himself tried to include a Cosmological Constant in his equations, what we need to do is add another term which acts as the mass of the graviton, dampening the dynamics of gravitational waves and limiting the range of gravity. By making the graviton massive we now have ourselves a theory of "massive gravity." Easy!
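The standard way a mediator's mass weakens a force (a textbook result, not specific to the authors' particular construction) is by turning the Newtonian 1/r potential into a Yukawa potential, cut off beyond the graviton's Compton wavelength:

```latex
V(r) \;=\; -\,\frac{G M}{r}\, e^{-r/\lambda_g},
\qquad
\lambda_g \;=\; \frac{\hbar}{m_g c} .
```

Beyond r ≈ λ_g the attraction switches off exponentially; for vacuum energy on the far side of the Universe to stop contributing, λ_g would have to be of the order of the Hubble radius, i.e. an extraordinarily tiny graviton mass.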

The possibility that gravity could have a finite range is not a question for science fiction.

In fact Newton, Laplace and many other incredible scientists after them contemplated the possibility. Even following our development of quantum mechanics and the Standard Model, many including Pauli and Salam considered the possibility of gravitons with mass.

But that possibility was always entirely refuted! Not because it potentially contradicts observations – quite the opposite, it could solve the vacuum catastrophe and explain why our Universe's expansion is accelerating – but rather because models of massive gravity appeared to be haunted by "ghosts." Ghosts are particles with negative energies that would cause everything we know, including you, me, the whole Universe, and possibly the structure of space and time, to decay instantaneously. So if you want a theory of massive gravity, you need to either find a way to get rid of these ghosts or to "trap" them.

For decades, preventing these supernatural occurrences seemed inconceivable. That is, until Gregory Gabadadze, Andrew Tolley and I found a way to engineer a special kind of "ghost trap" that allowed us to trick the ghosts into living in a constrained space and doing no harm. One can think of this as an Escherian impossible staircase – an infinite loop in which the ghosts may exist and move but ultimately end up nowhere.

Coming up with a new trick was one thing, but convincing the scientific community required even more ingenuity and mental flexibility. Even in science, there are many cultures and mathematical languages or scientific arguments that individuals prefer. So, throughout the years, whenever a new colleague had a different point of view, we were bound to learn their language, translate, and adjust our reasoning to their way of thinking. We had to repeat the process for years until no stone remained unturned. Overcoming that process was never the goal, though: it was only the start of the journey that would allow us to test our new theory of gravity.

If gravitons have a mass, this mass should be tiny, smaller than the mass of all the other massive particles, even lighter than the neutrino. Consequently, detecting it may not be straightforward. Nevertheless, different features will appear which may make it possible to measure it.

The most promising way to test for massive gravity involves observations of gravitational waves. If gravitons are massive, then we'd expect that low-frequency gravitational waves will travel ever so slightly slower than high-frequency ones. Unfortunately, this difference would be too slight to measure with current ground-based observatories. However, we should have better luck with future observatories. Missions like the Pulsar Timing Array, LISA and the Simons Observatory will detect gravitational waves with smaller and smaller frequencies, making possible the observations we need.
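The frequency-dependent speed follows from the relativistic dispersion relation for any massive particle, applied here to a graviton of mass m_g:

```latex
E^{2} = p^{2}c^{2} + m_g^{2}c^{4}
\quad\Longrightarrow\quad
v_{\mathrm{group}} = \frac{\partial E}{\partial p}
= c\,\sqrt{1 - \left(\frac{m_g c^{2}}{E}\right)^{2}},
\qquad E = \hbar\omega .
```

Lower-frequency waves (smaller E) fall further below c, so comparing arrival times across frequencies in a single event places a bound on m_g.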

Whether the massive gravity theory developed by my collaborators and me will survive future tests is of course presently unknown, but the possibility is now open. After all, even if the outcome isn't certain, when it comes to challenging the biggest discrepancy of the whole history of science, addressing the Cosmological Constant Problem, eliminating the need for dark energy, and reconciling the effect of vacuum energy with the evolution of the Universe, some risks may be worth taking.


Claudia de Rham is a theoretical physicist working at the interface of gravity, cosmology and particle physics at Imperial College London. She has recently published her first book, The Beauty of Falling: A Life in Pursuit of Gravity, with Princeton University Press.

This article is presented in partnership with Closer To Truth, an esteemed partner for the 2024 HowTheLightGetsIn Hay Festival. Dive deeper into the profound questions of the universe with thousands of video interviews, essays, and full episodes of the long-running TV show at their website: www.closertotruth.com.

You can see Claudia de Rham live, debating in ‘Dark Energy and The Universe’ alongside Priya Natarajan and Chris Lintott and ‘Faster Than Light’ with Tim Maudlin and João Magueijo at the upcoming HowTheLightGetsIn Festival on May 24th-27th in Hay-on-Wye.



(Edited 2 times.)
Trackback: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7228963
Unravelling the Mystery of Life's Origin: Five Breakthrough Discoveries from the Past Five Years – S. Jordan / L. G. de Chalonge
Recommended: 1



Unravelling life’s origin: five key breakthroughs from the past five years

Seán Jordan/Louise Gillet de Chalonge, 05/02/24

There is still so much we don’t understand about the origin of life on Earth.

The definition of life itself is a source of debate among scientists, but most researchers agree on the fundamental ingredients of a living cell. Water, energy, and a few essential elements are the prerequisites for cells to emerge. However, the exact details of how this happens remain a mystery.

Recent research has focused on trying to recreate in the lab the chemical reactions that constitute life as we know it, in conditions plausible for early Earth (around 4 billion years ago). Experiments have grown in complexity, thanks to technological progress and a better understanding of what early Earth conditions were like.

However, far from bringing scientists together and settling the debate, the rise of experimental work has led to many contradictory theories. Some scientists think that life emerged in deep-sea hydrothermal vents, where the conditions provided the necessary energy. Others argue that hot springs on land would have provided a better setting because they are more likely to hold organic molecules from meteorites. These are just two possibilities which are being investigated.

Here are five of the most remarkable discoveries over the last five years.

Reactions in early cells

What energy source drove the chemical reactions at the origin of life? This is the mystery that a research team in Germany has sought to unravel. The team delved into the feasibility of 402 reactions known to create some of the essential components of life, such as nucleotides (a building block of DNA and RNA). They did this using some of the most common elements that could have been found on the early Earth.

These reactions, present in modern cells, are also believed to be the core metabolism of LUCA, the last universal common ancestor, a single-cell, bacterium-like organism.

For each reaction, they calculated the changes in free energy, which determines if a reaction can go forward without other external sources of energy. What is fascinating is that many of these reactions were independent of external influences like adenosine triphosphate, a universal source of energy in living cells.

The synthesis of life’s fundamental building blocks didn’t need an external energy boost: it was self-sustaining.
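The energetic criterion behind these calculations can be stated compactly. A textbook thermodynamics sketch (standard relations, not equations taken from the paper itself):

```latex
% A reaction can proceed on its own when its Gibbs free-energy
% change is negative:
\Delta G = \Delta H - T\,\Delta S < 0
% Away from standard conditions, the criterion involves the
% reaction quotient Q:
\Delta G = \Delta G^{\circ} + RT\,\ln Q
```

A reaction with ΔG < 0 is exergonic and needs no coupling to an external energy source such as ATP, which is the sense in which the team called these networks self-sustaining.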

Volcanic glass

Life relies on molecules to store and convey information. Scientists think that RNA (ribonucleic acid) strands were precursors to DNA in fulfilling this role, since their structure is simpler.

The emergence of RNA on our planet has long confused researchers. However, some progress has been made recently. In 2022, a team of collaborators in the US generated stable RNA strands in the lab. They did it by passing nucleotides through volcanic glass. The strands they made were long enough to store and transfer information.

Volcanic glass was present on the early Earth, thanks to frequent meteorite impacts coupled with high volcanic activity. The nucleotides used in the study are also believed to have been present at that time in Earth’s history. Volcanic rocks could have facilitated the chemical reactions that assembled nucleotides into RNA chains.

Hydrothermal vents

Carbon fixation is a process in which CO₂ gains electrons. It is necessary to build the molecules that form the basis of life.

An electron donor is necessary to drive this reaction. On the early Earth, H₂ could have been the electron donor. In 2020, a team of collaborators showed that this reaction could spontaneously occur and be fuelled by environmental conditions similar to deep-sea alkaline hydrothermal vents in the early ocean. They did this using microfluidic technology, devices that manipulate tiny volumes of liquids to perform experiments by simulating alkaline vents.
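As a concrete example of the chemistry involved, vent-simulation studies of this kind typically report hydrogen-dependent reduction of CO₂ to formate; a schematic net reaction (my illustration of the general chemistry, not necessarily the exact product reported in this particular study):

```latex
% CO2 is the electron acceptor, H2 the electron donor:
\mathrm{CO_{2} + H_{2} \;\longrightarrow\; HCOO^{-} + H^{+}}
```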

This pathway is strikingly similar to how many modern bacterial and archaeal cells (single-cell organisms without a nucleus) operate.

The Krebs Cycle

In modern cells, carbon fixation is followed by a cascade of chemical reactions that assemble or break down molecules, in intricate metabolic networks that are driven by enzymes.

But scientists are still debating how metabolic reactions unfolded before the emergence and evolution of those enzymes. In 2019, a team from the University of Strasbourg in France made a breakthrough. They showed that ferrous iron, a type of iron that was abundant in early Earth’s crust and ocean, could drive nine out of 11 steps of the Krebs Cycle. The Krebs Cycle is a biological pathway present in many living cells.

Here, ferrous iron acted as the electron donor for carbon fixation, which drove the cascade of reactions. The reactions produced all five of the universal metabolic precursors – five molecules that are fundamental across various metabolic pathways in all living organisms.

Building blocks of ancient cell membranes

Understanding the formation of life’s building blocks and their intricate reactions is a big step forward in comprehending the emergence of life.

However, whether they unfolded in hot springs on land or in the deep sea, these reactions would not have gone far without a cell membrane. Cell membranes play an active role in the biochemistry of a primitive cell and its connection with the environment.

Modern cell membranes are mostly composed of compounds called phospholipids, which contain a hydrophilic head and two hydrophobic tails. They are structured in bilayers, with the hydrophilic heads pointing outward and the hydrophobic tails pointing inward.

Research has shown that some components of phospholipids, such as the fatty acids that constitute the tails, can self-assemble into those bilayer membranes in a range of environmental conditions. But were these fatty acids present on the early Earth? Recent research from Newcastle University, UK, gives an interesting answer. Researchers recreated the spontaneous formation of these molecules by combining H₂-rich fluids, likely present in ancient alkaline hydrothermal vents, with CO₂-rich water resembling the early ocean.

This breakthrough aligns with the hypothesis that stable fatty acid membranes could have originated in alkaline hydrothermal vents, potentially progressing into living cells. The authors speculated that similar chemical reactions might unfold in the subsurface oceans of icy moons, which are thought to have hydrothermal vents similar to terrestrial ones.

Each of these discoveries adds a new piece to the puzzle of the origin of life. Regardless of which ones are proved correct, contrasting theories are fuelling the search for answers. As Charles Darwin wrote:

False facts are highly injurious to the progress of science for they often long endure: but false views, if supported by some evidence, do little harm, for everyone takes a salutary pleasure in proving their falseness; and when this is done, one path towards error is closed and the road to truth is often at the same time opened.


Seán Jordan, Associate professor, Dublin City University
Louise Gillet de Chalonge, PhD Student in Astrobiology, Dublin City University

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7228540
The Three-Body Problem: From Astrophysics to Interpersonal Relationships – Avi Loeb

胡卜凱

Probably because my own grounding is insufficient, I finished this article feeling quite baffled. The title mentions "human interactions," but the text that actually touches on that topic amounts to at most one-eighth of the whole piece.

Dr. Loeb probably wrote this piece mainly in order to say the following:

Four years ago, I recommended this novel to the creators of a new series on Netflix, which recently came to fruition.

Mr. Liu Cixin's (劉慈欣) acclaimed novel The Three-Body Problem, and the screen adaptation based on it, were also cited in this column's 04/12/24 post on the Cultural Revolution, as the lead-in to that talk program. So although I have not figured out exactly what point the author is trying to make, I am reposting the article to join the fun, and to give Mr. Liu and Netflix a little free publicity.


THE THREE-BODY PROBLEM: FROM CELESTIAL MECHANICS TO HUMAN INTERACTIONS

AVI LOEB, 04/04/24

There are striking analogies between the interpersonal relationships of humans and the gravitational interaction of physical bodies in space. Consider a two-body system. In both realms, the systems can have stable configurations, leading to long-lived marriages or stellar binaries. But when a third body interacts strongly with these systems, a non-hierarchical three-body system often displays chaos with one of the members ejected and the other two remaining bound. This brings up analogies with interpersonal relationships when a third body is added to a non-hierarchical two-body system.

The chaotic gravitational dynamics in a system of three stars inspired the storyline for the novel “The Three-Body Problem” by the Chinese science fiction writer Cixin Liu. The book describes a planet in the triple star system, Alpha-Centauri, whose unpredictable chaotic dynamics motivate a civilization born there to travel towards Earth, which possesses a stable orbit around the Sun. Four years ago, I recommended this novel to the creators of a new series on Netflix, which recently came to fruition.
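The chaotic dynamics described above is easy to reproduce numerically. A minimal sketch in pure Python (units with G = 1; the classic "Pythagorean" initial conditions are my illustrative choice, not taken from the article):

```python
from math import sqrt

G = 1.0  # gravitational constant in simulation units

def accelerations(pos, masses):
    """Newtonian pairwise gravitational accelerations in 2-D."""
    n = len(masses)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def total_energy(pos, vel, masses):
    """Kinetic plus potential energy; conserved by the exact dynamics."""
    kin = sum(0.5 * m * (v[0] ** 2 + v[1] ** 2) for m, v in zip(masses, vel))
    pot = 0.0
    for i in range(len(masses)):
        for j in range(i + 1, len(masses)):
            r = sqrt((pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2)
            pot -= G * masses[i] * masses[j] / r
    return kin + pot

# Pythagorean three-body problem: masses 3, 4, 5 start at rest at the
# vertices of a 3-4-5 right triangle -- a classic chaotic configuration.
masses = [3.0, 4.0, 5.0]
pos = [[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]]
vel = [[0.0, 0.0] for _ in range(3)]

e0 = total_energy(pos, vel, masses)
dt = 1e-4
for _ in range(10_000):  # leapfrog (kick-drift-kick) integration to t = 1
    acc = accelerations(pos, masses)
    for i in range(3):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]
    acc = accelerations(pos, masses)
    for i in range(3):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]

drift = abs(total_energy(pos, vel, masses) - e0) / abs(e0)
print(f"relative energy drift after t = 1: {drift:.2e}")
```

Run long enough (the first close encounters occur after t ≈ 2 in these units) and one body is eventually ejected while the other two settle into a tight binary, the generic outcome Loeb describes.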

The restricted three-body problem involves a stable orbit of two large bodies accompanied by a small third body. In this case, the satellite resembles a child living with two parents, a configuration that often, but not always, displays stability.

In 1975, the Scottish astronomer Douglas C. Heggie wrote a paper in which he simulated the evolution of pairs of stars embedded in a star cluster. Heggie compared the binding energy per unit mass of each stellar binary to the characteristic energy of the background cluster members. He found that binaries, which are more tightly bound than the background average, tend to get tighter as a result of interactions with the background stars. Conversely, binaries which are more loosely bound than the background, get wider and eventually detach. This resulted in Heggie’s law: “Hard binaries get harder, and soft binaries get softer.” This law rings a bell regarding married couples in a closed society of background people who interact intensely with them.

The above-mentioned analogies are surprising, given that gravity is attractive, whereas human interactions are both attractive and repulsive. In electromagnetism, charges of equal sign repel each other, whereas charges of opposite sign are attracted to each other. This is different from human interactions, where people with aligned views are attracted to each other, and those with opposite views repel each other.

The main difference between a collection of charged particles, a so-called plasma, and a collection of gravitating bodies is that electric interactions can be screened. An embedded charge tends to attract opposite charges around it, resulting in the so-called Debye sphere, outside of which this charge has no influence. The neutralization of embedded charges makes a plasma behave like a neutral fluid on scales much larger than the Debye scale. In contrast, gravity cannot be screened because all known gravitating masses are positive.  

The long-range nature of gravity, with no screening, allows it to dominate the evolution of the Universe. All other known forces, including electromagnetism and the weak and strong interactions, are much stronger than gravity on small scales, but they do not reach the cosmic scales on which gravity is most effective.

Another difference between a plasma and a collection of gravitating bodies is that the latter is dynamically unstable. The core of a star cluster, with more binding energy per star than its envelope, tends to transport energy outwards, just like a hot object embedded in a cold environment. As energy is drained from the core, it condenses to a higher density where it becomes even “hotter”. This results in a gravothermal instability, during which the collapse process accelerates as the interaction time among the stars gets shorter as the cluster core gets denser.  

In 1957, the Austrian-British astrophysicist Hermann Bondi wrote a paper in which he considered the existence of negative masses in Albert Einstein’s theory of gravity. A negative mass would repel a positive mass away from it and attract another negative mass towards it. Given that, a pair of positive and negative masses of equal magnitude could accelerate together up to the speed of light. The negative mass would push away the positive mass, which in turn would pull the negative mass for the ride. The runaway pair would accelerate indefinitely without any need for fuel or a propulsion system. Energy conservation would not be violated because the sum of the two masses is zero. Does the real Universe contain runaway pairs of positive and negative masses that accelerate close to the speed of light over billions of years?
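The runaway behavior can already be seen in a Newtonian sketch with masses +m and −m a distance r apart (a simplified illustration; Bondi worked in general relativity):

```latex
% Newton: a body's acceleration toward a companion of mass M is
% G M / r^2.  With a negative companion the sign flips, so both
% accelerations point from the negative mass toward the positive one:
a_{+} = \frac{G(-m)}{r^{2}}\,\hat{r}_{+\to-}, \qquad
a_{-} = \frac{G(+m)}{r^{2}}\,\hat{r}_{-\to+}
% Equal in magnitude and direction: the separation r stays fixed
% and the pair self-accelerates indefinitely.  Momentum and kinetic
% energy remain zero because the masses cancel:
p = (+m)v + (-m)v = 0, \qquad
E_{k} = \tfrac{1}{2}(+m)v^{2} + \tfrac{1}{2}(-m)v^{2} = 0
```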

A runaway pair of equal and opposite-sign masses would not exert a net-gravitational influence at large distances because the two components sum up to a zero total mass. However, if the runaway pair passes close to a gravitational wave observatory, like LIGO-Virgo-KAGRA, it could induce a brief gravitational signal that could be detected at distances comparable to the separation between the positive and negative masses. The signal will be characterized by a pulse of gravitational attraction followed by a pulse of gravitational repulsion or the other way around.

Given that the net gravitational effect of runaway pairs is zero, they have no effect on the mass budget of the Universe or its expansion history. However, it would be intriguing to search for them. If we ever find material with a negative mass, we could use it for gravitational propulsion. Alternatively, if we ever encounter an alien spacecraft that maneuvers with no associated engine or fuel, we should check whether its creators used negative mass to propel it.

After all, we know that the expansion of the universe is accelerating due to the repulsive gravity generated by an unknown substance called “dark energy.” If we could bottle this substance in a thin enclosure, we might possess a negative mass object that could enable our future exploration of interstellar space.  


Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011-2020). He is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. His new book, titled “Interstellar”, was published in August 2023.

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7226659
New Data on Dark Energy and What They Imply -- Dennis Overbye

胡卜凱

A Tantalizing ‘Hint’ That Astronomers Got Dark Energy All Wrong

Scientists may have discovered a major flaw in their understanding of that mysterious cosmic force. That could be good news for the fate of the universe.

An interactive flight through millions of galaxies mapped using coordinate data from the Dark Energy Spectroscopic Instrument, or DESI. Credit: Fiske Planetarium, University of Colorado Boulder, and DESI Collaboration. (See the original page for this media.)

Dennis Overbye, 04/04/24

On Thursday, astronomers who are conducting what they describe as the biggest and most precise survey yet of the history of the universe announced that they might have discovered a major flaw in their understanding of dark energy, the mysterious force that is speeding up the expansion of the cosmos.

Dark energy was assumed to be a constant force in the universe, both currently and throughout cosmic history. But the new data suggest that it may be more changeable, growing stronger or weaker over time, reversing or even fading away.

“As Biden would say, it’s a B.F.D.,” said Adam Riess, an astronomer at Johns Hopkins University and the Space Telescope Science Institute in Baltimore. He shared the 2011 Nobel Prize in Physics with two other astronomers for the discovery of dark energy, but was not involved in this new study. “It may be the first real clue we have gotten about the nature of dark energy in 25 years,” he said. (Note by 胡卜凱: B.F.D. = big fucking deal)

That conclusion, if confirmed, could liberate astronomers — and the rest of us — from a longstanding, grim prediction about the ultimate fate of the universe. If the work of dark energy were constant over time, it would eventually push all the stars and galaxies so far apart that even atoms could be torn asunder, sapping the universe of all life, light, energy and thought, and condemning it to an everlasting case of the cosmic blahs. Instead, it seems, dark energy is capable of changing course and pointing the cosmos toward a richer future.

The key words are “might” and “could.” The new finding has about a one-in-400 chance of being a statistical fluke, a degree of uncertainty called three sigma, which is far short of the gold standard for a discovery, called five sigma: one chance in 1.7 million. In the history of physics, even five-sigma events have evaporated when more data or better interpretations of the data emerged.  
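The sigma-to-odds conversion quoted here is a standard Gaussian tail calculation; a quick check using only the standard library (two-sided probabilities, which is the convention behind the quoted odds):

```python
from math import erfc, sqrt

def fluke_probability(n_sigma: float) -> float:
    """Two-sided probability that a Gaussian fluctuation reaches n sigma."""
    return erfc(n_sigma / sqrt(2.0))

p3 = fluke_probability(3.0)  # about 0.0027, i.e. roughly 1 in 370
p5 = fluke_probability(5.0)  # about 5.7e-7, i.e. roughly 1 in 1.7 million
print(f"3 sigma: 1 in {1 / p3:,.0f}")
print(f"5 sigma: 1 in {1 / p5:,.0f}")
```

The article’s “one-in-400” is this three-sigma figure quoted loosely; five sigma works out to the one-in-1.7-million gold standard.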

This news comes in the first progress report, published as a series of papers, by a large international collaboration called the Dark Energy Spectroscopic Instrument, or DESI. The group has just begun a five-year effort to create a three-dimensional map of the positions and velocities of 40 million galaxies across 11 billion years of cosmic time. Its initial map, based on the first year of observations, includes just six million galaxies. The results were released today at a meeting of the American Physical Society in Sacramento, Calif., and at the Rencontres de Moriond conference in Italy.

DESI has generated the largest-ever 3-D map of the universe. Earth is depicted at the bottommost point of one magnified section. Credit: Claire Lamman/DESI Collaboration; custom colormap package by cmastro. (See the original page for the image.)

“So far we’re seeing basic agreement with our best model of the universe, but we’re also seeing some potentially interesting differences that could indicate that dark energy is evolving with time,” Michael Levi, the director of DESI, said in a statement issued by the Lawrence Berkeley National Laboratory, which manages the project.

The DESI team had not expected to hit pay dirt so soon, Nathalie Palanque-Delabrouille, an astrophysicist at the Lawrence Berkeley lab and a spokeswoman for the project, said in an interview. The first year of results was designed to simply confirm what was already known, she said: “We thought that we would basically validate the standard model.”

But the unknown leaped out at them.

When the scientists combined their map with other cosmological data, they were surprised to find that it did not quite agree with the otherwise reliable standard model of the universe, which assumes that dark energy is constant and unchanging. A varying dark energy fit the data points better.

“It’s certainly more than a curiosity,” Dr. Palanque-Delabrouille said. “I would call it a hint. Yeah, it’s not yet evidence, but it’s interesting.”

But cosmologists are taking this hint very seriously.

Wendy Freedman, an astrophysicist at the University of Chicago who has led efforts to measure the expansion of the universe, praised the new survey as “superb data.” The results, she said, “open the potential for a new window into understanding dark energy, the dominant component of the universe, which remains the biggest mystery in cosmology. Pretty exciting.”

Michael Turner, an emeritus professor at the University of Chicago who coined the term “dark energy,” said in an email: “While combining data sets is tricky, and these are early results from DESI, the possible evidence that dark energy is not constant is the best news I have heard since cosmic acceleration was firmly established 20-plus years ago.”

In an artist’s rendering, light from quasars passes through intergalactic clouds of hydrogen gas. The light offers clues to the structure of the distant cosmos. Credit: NOIRLab/NSF/AURA/P. Marenfeld and DESI Collaboration. (See the original page for the image.)

Dark energy entered the conversation in 1998, when two competing groups of astronomers, including Dr. Riess, discovered that the expansion of the universe was speeding up rather than slowing, as most astronomers had expected. The initial observations seemed to suggest that this dark energy was acting just like a famous fudge factor — denoted by the Greek letter Lambda — that Einstein had inserted into his equations to explain why the universe didn’t collapse from its own gravity. He later called it his worst blunder.

But perhaps he spoke too soon. As formulated by Einstein, Lambda was a property of space itself: The more space there was as the universe expanded, the more dark energy there was, pushing ever harder and eventually leading to a runaway, lightless future.

Dark energy took its place in the standard model of the universe known as L.C.D.M., composed of 70 percent dark energy (Lambda), 25 percent cold dark matter (an assortment of slow-moving exotic particles) and 5 percent atomic matter. So far that model has been bruised but not broken by the new James Webb Space Telescope. But what if dark energy were not constant as the cosmological model assumed? (Note by 胡卜凱: L.C.D.M. = Lambda cold dark matter)

At issue is a parameter called w, which is a measure of the density, or vehemence, of the dark energy. In Einstein’s version of dark energy, this number remains constant, with a value of –1, throughout the life of the universe. Cosmologists have been using this value in their models for the past 25 years.

But this version of dark energy is merely the simplest one. “With DESI we now have achieved a precision that allows us to go beyond that simple model,” Dr. Palanque-Delabrouille said, “to see if the density of dark energy is constant over time, or if it has some fluctuations and evolution with time.”
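What w controls can be written down in two lines; a standard textbook sketch (units with c = 1):

```latex
% Dark-energy equation of state:
p = w\,\rho
% Continuity equation for an expanding universe with scale factor
% a(t) and Hubble rate H = \dot{a}/a:
\dot{\rho} + 3H\,(\rho + p) = 0
\quad\Longrightarrow\quad
\rho \propto a^{-3(1+w)}
% w = -1 gives a constant density, Einstein's Lambda; any other
% (or time-varying) value makes the dark-energy density evolve.
```

Surveys like DESI probe time variation through parametrizations such as w(a) = w₀ + wₐ(1 − a), which reduces to a cosmological constant when w₀ = −1 and wₐ = 0.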

The DESI project, 14 years in the making, was designed to test the constancy of dark energy by measuring how fast the universe was expanding at various times in the past. To do that, scientists outfitted a telescope at Kitt Peak National Observatory with 5,000 fiber-optic detectors that could conduct spectroscopy on that many galaxies simultaneously and find out how fast they were moving away from Earth.

An animated 3-D model of DESI’s focal plane. The movement of the 5,000 robotic positioners is coordinated so that they don’t bump into one another. Credit: David Kirkby/DESI Collaboration. (See the original page for the animation.)

As a measure of distance, the researchers used bumps in the cosmic distribution of galaxies, known as baryon acoustic oscillations. These bumps were imprinted on the cosmos by sound waves in the hot plasma that filled the universe when it was just 380,000 years old. Back then, the bumps were a half-million light-years across. Now, 13.5 billion years later, the universe has expanded a thousandfold, and the bumps — which are now 500 million light-years across — serve as convenient cosmic measuring sticks.
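The stretching of the acoustic "measuring stick" quoted above is simple arithmetic; a quick check (numbers taken from the paragraph):

```python
# Size of the acoustic bumps at recombination vs. today (light-years).
size_then = 0.5e6   # half a million light-years
size_now = 500e6    # 500 million light-years

expansion_factor = size_now / size_then
redshift = expansion_factor - 1  # 1 + z equals the expansion factor
print(f"expanded {expansion_factor:,.0f}-fold, i.e. redshift z ~ {redshift:,.0f}")
```

This matches the article’s “expanded a thousandfold”; the measured redshift of recombination, about z ≈ 1090, is close to this rounded estimate.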

The DESI scientists divided the past 11 billion years of cosmic history into seven spans of time. (The universe is 13.8 billion years old.) For each, they measured the size of these bumps and how fast the galaxies in them were speeding away from us and from each other. (Note by 胡卜凱: on the age of the universe, see this column's 2024/03/19 post.)

When the researchers put it all together, they found that the usual assumption — a constant dark energy — didn’t work to describe the expansion of the universe. Galaxies in the three most recent epochs appeared closer than they should have been, suggesting that dark energy could be evolving with time.

“And we do see, indeed, a hint that the properties of dark energy would not correspond to a simple cosmological constant” but instead may “have some deviations,” Dr. Palanque-Delabrouille said. “And this is the first time we have that.” But, she emphasized again, “I wouldn’t call it evidence yet. It’s too, too weak.”

Time and more data will tell the fate of dark energy, and of cosmologists’ battle-tested model of the universe.

“L.C.D.M. is being put through its paces by precision tests coming at it from every direction,” Dr. Turner said. “And it is doing well. But, when everything is taken together, it is beginning to appear that something isn’t right or something is missing. Things don’t fit together perfectly. And DESI is the latest indication.”

Dr. Riess of Johns Hopkins, who had an early look at the DESI results, noted that the “hint,” if validated, could pull the rug out from other cosmological measurements, such as the age or size of the universe. “This result is very interesting and we should take it seriously,” he wrote in his email. “Otherwise why else do we do these experiments?”


Dennis Overbye is the cosmic affairs correspondent for The Times, covering physics and astronomy.

Trackback URL: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7226441
The ABCs of Cosmology, and Why the "First Cause" Problem Is Unsolvable – Marcelo Gleiser

胡卜凱

The Big Bang’s mysteries and unsolvable “first cause” problem

The “first cause” problem may forever remain unsolved, as it doesn’t fit with the way we do science.

Marcelo Gleiser, 04/05/24


KEY TAKEAWAYS

*  The quest to understand the Universe’s origin has evolved from mythic narratives to the quantitative insights of modern cosmology, sparked by Einstein’s theory of relativity and its implications for understanding the cosmos’s structure and expansion.
*  Key discoveries, such as the expanding Universe through Hubble’s observations and the Big Bang theory’s predictive success, have grounded our cosmic understanding in observable phenomena, revealing a Universe that was once hotter, denser, and more uniform.
*  Despite significant advances, the earliest moments and the fundamental cause of the Universe’s inception remain shrouded in mystery — perhaps forever so.

If there’s one question that has been present throughout human history across all cultures, it’s the question of the origin of all things. Why is there a Universe? How come we exist in it to be able to ask this question? Across millennia, different cultures offered mythic narratives to address the mystery of existence. But with the development of modern science, the focus has shifted to a more quantitative approach: a scientific narrative of the origin and history of the Universe, the focus of modern cosmology.

It all started in 1915 when Albert Einstein proposed his new theory of gravity, the general theory of relativity. Einstein’s brilliant innovation was to treat gravity not as a force acting at a distance, as did Newton, but as the curvature of space due to the presence of mass. Thus, according to Einstein, the orbital motions of celestial objects are caused by the spatial curvature of their surroundings. A way of visualizing this is by throwing marbles across a mattress. If no weight bends the mattress, the marbles will move along straight lines. But if you place a heavy lead ball on the mattress, the marbles that roll nearby will trace curved paths. If you practice your throws, you can get the marbles to circle the lead ball, somewhat like planets circle the Sun. Einstein’s theory allows physicists to calculate the geometry of the bent space around an object. He demonstrated his theory’s validity both by showing how Mercury’s orbit wobbles about the Sun (the precession of Mercury’s perihelion, the point of its orbit closest to the Sun) and by computing how starlight gets bent as it travels near the Sun.
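The perihelion test mentioned here can be checked with the standard general-relativistic formula Δφ = 6πGM/(c²a(1−e²)) per orbit; a sketch with textbook values for the Sun and Mercury (the constants are my insertions, not figures from the article):

```python
from math import pi

GM_SUN = 1.327e20      # gravitational parameter of the Sun, m^3/s^2
C = 2.998e8            # speed of light, m/s
A = 5.791e10           # Mercury's semi-major axis, m
E = 0.2056             # Mercury's orbital eccentricity
PERIOD_YEARS = 0.2408  # Mercury's orbital period in years

# GR perihelion advance per orbit, in radians.
dphi = 6 * pi * GM_SUN / (C**2 * A * (1 - E**2))

# Convert to arcseconds per century.
arcsec_per_orbit = dphi * 180 / pi * 3600
orbits_per_century = 100 / PERIOD_YEARS
advance = arcsec_per_orbit * orbits_per_century
print(f"predicted perihelion advance: {advance:.1f} arcsec/century")
```

The result, about 43 arcseconds per century, is exactly the anomalous wobble that Newtonian gravity could not explain and Einstein’s theory did.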

Two years after he launched his new theory, Einstein took a bold step, moving from solar system applications to the whole Universe. He figured that he could solve his equations to estimate the geometry of the Universe as it is bent by the matter inside. To do this, he made three simplifying assumptions: that the Universe is spherical, that it is static, and that matter is, on average, distributed equally everywhere in space (this latter assumption became known as the “cosmological principle”). To his surprise, his imaginary Universe wasn’t stable: perturb it a bit and the whole thing collapses into a point due to its own gravity. Disappointed but not defeated, Einstein added an extra term to his equations to act as a counterbalance to gravity’s attraction: the so-called “cosmological constant.” Adjusting its value, Einstein managed to find his static spherical solution. Hence modern cosmology was born.

Einstein’s pioneering work inspired many theoretical physicists to build their own desktop Universes: mathematical models that changed his assumptions to see what would happen. In 1917, the same year as Einstein’s model, the Dutch astronomer Willem de Sitter solved the equations for an empty Universe (one with no matter at all) with the cosmological constant. His solution, not surprisingly, described a space where two points would exponentially move away from one another. In 1922, the Russian physicist Alexander Friedmann abandoned Einstein’s assumption of a static Universe and found, to his delight, solutions for a Universe that would grow with time. How fast it would grow depended on the value of the cosmological constant and on the type of matter that filled all of space.

While theoreticians were imagining different desktop Universes, the American astronomer Edwin Hubble was pointing the 100-inch telescope at Mount Wilson to determine whether the Milky Way was the only galaxy in the Universe or whether there were many galaxies out there, “island Universes” spread across space. In 1924, 100 years ago this year, he hit on the solution: The Milky Way is but one of billions of galaxies out there. The Universe suddenly became enormous, beyond what we humans could contemplate. Five years later, Hubble dropped the real bomb: Not only were there countless galaxies in the cosmos, but the vast majority were moving away from one another. The Universe, Hubble concluded, is expanding. This remarkable discovery changed everything. If the Universe is growing in volume and galaxies are moving apart, that means that in the past they were closer together. Using some coarse approximations, Hubble estimated that some 2 billion years back, galaxies would all be squeezed into a very tiny volume. This would represent the beginning of cosmic history. The Universe, it turns out, had a beginning at some distant point in the past.
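Hubble's "some 2 billion years" follows from inverting his expansion rate; a sketch using his original, much-too-high 1929 value of roughly 500 km/s per megaparsec (the constants here are my insertions):

```python
H0 = 500.0               # Hubble's 1929 estimate, km/s per megaparsec
KM_PER_MPC = 3.086e19    # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

# The "Hubble time" 1/H0 is a rough age for a uniformly expanding universe.
hubble_time_s = KM_PER_MPC / H0
hubble_time_yr = hubble_time_s / SECONDS_PER_YEAR
print(f"Hubble time: {hubble_time_yr:.2e} years")
```

Today's measured rate of about 70 km/s per megaparsec gives roughly 14 billion years instead, consistent with the modern age of 13.8 billion years.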

The Big Bang model

In the late 1940s, the Russian American physicist George Gamow worked with Ralph Alpher and Robert Herman to figure out what could be the story of an expanding Universe. The essential point is that if the Universe was expanding now, then in the past it was smaller, hotter, and denser. Before galaxies, stars, and planets existed, matter was crunched up and broken down into its most basic constituents. Mixing atomic and nuclear physics, the trio arrived at a few stunning conclusions. The first was that a Universe originating in a hot, dense state should now be suffused with microwave radiation, a relic from the era of recombination when hydrogen atoms first formed as protons and electrons joined, thereby releasing photons to traverse the cosmos freely. In modern numbers, that happened about 380,000 years after the Big Bang, the event that marked the beginning of time, the time that clocks the expansion of the Universe. In 1965, the radiation predicted by Gamow, Alpher, and Herman was discovered by Robert Wilson and Arno Penzias, two radio astronomers working for Bell Labs in New Jersey.

The second prediction from Gamow, Alpher, and Herman was that going even further back in time, the Universe would be so hot and dense that protons and neutrons would roam free, unable to combine to form the first atomic nuclei. As recounted in Steven Weinberg’s classic book The First Three Minutes, the synthesis of the lightest nuclei, or primordial nucleosynthesis, happened when the Universe was about one second to three minutes old, as protons and neutrons combined to form deuterium and tritium, isotopes of hydrogen with one and two neutrons attached to the proton, respectively, and also helium-4 (two protons and two neutrons) and its isotope helium-3 (two protons and one neutron), and, finally, lithium-7 (three protons and four neutrons). Combining the rules of nuclear physics with the thermodynamics of an expanding and cooling Universe, it was possible to estimate the abundances of these elements and compare them to observations. The concordance was another great success for the Big Bang model. By the late 1960s, there was no doubt that the narrative of a Universe that started hot and dense and with matter dissociated into its simplest constituents correctly described the cosmic infancy. The question in everyone’s minds then was: What about earlier times? How much closer to the beginning of time could physics get? This is where things start to get both more exciting and murkier.

Toward the beginning of time

As we try to push the reach of our theories to earlier times, we move up in energy from atomic and nuclear physics to particle physics. After all, the earlier you look at the Universe, the hotter and denser it was, and so the particles that filled up space had higher energies. Thus, to dive deeply into the cosmic infancy, physicists must use concepts from high-energy physics, going beyond experimental results. With some confidence, we could push the clock back to one-hundred-thousandth of a second after the bang, when the Universe had energies comparable to those when protons and neutrons got broken down into a quark-gluon plasma, a state of matter that has been studied in the past two decades or so with great success but also with some serious conceptual limitations. Still, we have good reason to believe that this state actually existed in the early Universe, presenting a wall behind which matter was dissociated into its simplest known constituents. That is, before this, we can really talk about a primordial soup of elementary particles filling up space.

The spectacular discovery of the Higgs boson in 2012 confirmed that we could study different forces of nature under the same framework. We know of four fundamental forces: the gravitational, the electromagnetic, and the strong and weak nuclear forces. The latter two forces are only active at subnuclear distances, explaining why we are not familiar with them in our everyday lives. What the discovery of the Higgs confirmed was that the electromagnetic and weak forces tend to behave in similar ways at very high energies, at the limit of what our current experiments can probe. If we map these energies to the early Universe, we are talking about one-trillionth of a second after the bang, or 10⁻¹² seconds. We don’t understand exactly how particles interact at these energies, but what we do understand, summarized in the Standard Model of particle physics, indicates that the whole Universe went through a sort of phase transition. This transition resembles the process whereby cooling liquid water crystallizes into ice, breaking the fluid’s uniform symmetry — where molecules are evenly dispersed — into a rigid, ordered lattice. Similarly, we say that the Universe went through a phase transition at about this early time, when the “electroweak” force split into the electromagnetic and the weak forces. Details are missing, but that’s the overall picture.

At this juncture, cosmology hits a conceptual barrier, as it needs to navigate backward in time without experimental guidance. Despite decades of efforts across the globe, we have not gathered any information from the very early Universe that could guide us. The solution, of course, is to extrapolate and propose models of the early Universe that are compelling for different reasons. For example, they may offer explanations to current challenges to the Big Bang model, as in the
inflationary model; they may open new avenues for research in very high energy physics — as in theories of quantum gravity; or they may inspire experimental work in novel directions — as in, for instance, searches for dark matter and primordial gravitational waves.

Inflationary models were proposed during the early 1980s as potential solutions to several issues plaguing the standard Big Bang model. For example, observations tell us that the geometry of space is very nearly flat, and we don’t know why. We also don’t know why the temperature of the microwave background mentioned above is so homogeneous, to one part in one hundred thousand. Nor do we know how the matter clumping needed to form galaxies and large cosmic structures originated. Somehow, matter gathered in various spots and attracted more matter. Inflation was proposed originally by MIT’s Alan Guth to address these issues. To do so, it invokes a sort of Higgs-like field assumed to be part of some unknown particle physics model describing the physics of the very early Universe. In the same way that the Higgs field drove the split between the electromagnetic and weak forces at 10⁻¹² seconds, the “inflaton field” drove the dynamics of the very early Universe at 10⁻³⁵ seconds. That’s a lot of zeroes into the unknown.

Guth’s model, along with the many alternatives proposed since his pioneering work, relies on this extrapolation, assuming that a primordial field caused the Universe to expand exponentially fast for a very short period of time. As the field loses energy, it relaxes to its lowest energy state and decays into a plethora of particles with explosive effectiveness, causing an ultrafast heating of the Universe. Some cosmologists call this the real Big Bang, though this is a matter of taste. One of the challenges here is to uncover an even earlier history of the Universe, one that actually determines what kind of field this hypothetical inflaton was and where it came from. Compelling as cosmic inflation is, we have neither a believable model nor any evidence that it truly happened, apart from concordance with current observations. A handful of models nicely describe the Universe that we see (flat, homogeneous, clumped up in the right way), but we still need a mother theory to give it more fundamental validity. What this theory might be requires even more adventurous extrapolations.
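For a sense of the scale of this extrapolation, the exponential stretch that inflation posits is usually measured in "e-folds," N = ln(a_end/a_start), where a is the cosmic scale factor; the figure commonly quoted as necessary to solve the flatness and homogeneity problems is about 60 e-folds. A minimal sketch, with illustrative numbers that are not from the article:

```python
import math

# Inflation stretches the cosmic scale factor by e^N after N e-folds.
# N ≈ 60 is the figure usually quoted as the minimum needed to solve
# the flatness and horizon problems (illustrative textbook value).

def expansion_factor(n_efolds: float) -> float:
    """Total stretch of the scale factor after n_efolds e-folds."""
    return math.exp(n_efolds)

if __name__ == "__main__":
    stretch = expansion_factor(60)
    print(f"60 e-folds stretch space by a factor of {stretch:.3e}")
    # prints roughly 1.142e+26
```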

Even if we assume that we can push back our models even further in time, we soon hit a monumental conceptual barrier. As we approach the highest energies that we can still make sense of, the Universe may require a different way to be described, borrowing from quantum physics: the physics of the very small. The point is that in the
quantum world, everything is jittery and everything fluctuates. If we carry this notion into the force of gravity — and its relation to space and time — we need to consider the possibility that at very early times, there was no time and space as we know them but some sort of vague quantum foam, where space and time bubbled in different ways here and there (though “here” and “there” become very murky ideas). Unfortunately, our current attempts to describe such quantum spacetime foam in theories known as string theories and loop quantum gravity have been only partially successful or not at all successful, at least in providing a compelling scenario for the origin of the Universe. We seem to be fundamentally stuck when it comes to the question of the origin of all things.

The problem of the first cause

And this is not surprising, once we pay attention to the history of philosophy and the nature of science. The origin of the Universe pushes the boundaries of what we can understand. Simply put, most of science is based on two things: objectivity and causality. Objectivity asks for a clear separation between the observer and what is being observed. Causality assumes an ordering in time whereby an effect is preceded by a cause. As I pointed out in
a recent book with my colleagues Adam Frank and Evan Thompson, the origin of the Universe brings both causality and objectivity to a halt. And it does so in a very different way from quantum physics, where both principles are also challenged. Quantum mechanics blurs the separation between observer and observed and replaces deterministic evolution with probabilistic inference. It is, however, still a causal theory, since an electron will respond to, say, an electromagnetic force in ways dictated by well-known dynamical causes (with, for a technical example, a Coulomb potential in Schrödinger’s equation). There are known forces at play that will induce specific dynamical behaviors.

But when it comes to the origin of the Universe, we don’t know what forces are at play. We actually can’t know, since to know such force (or better, such fields and their interactions) would necessitate knowledge of the initial state of the Universe. And how could we possibly glean information from such a state in some uncontroversial way? In more prosaic terms, it would mean that we could know what the Universe was like as it came into existence. This would require a god’s eye view of the initial state of the Universe, a kind of objective separation between us and the proto-Universe that is about to become the Universe we live in. It would mean we had a complete knowledge of all the physical forces in the Universe, a final theory of everything. But how could we ever know if what we call the theory of everything is a complete description of all that exists? We couldn’t, as this would
assume we know all of physical reality, which is an impossibility. There could always be another force of nature, lurking in the shadows of our ignorance.

At the origin of the Universe, the very notions of cause and objectivity become entangled into a single unknowable, since we can’t possibly know the initial state of the Universe. We can, of course, construct models and test them against what we can measure of the Universe. But concordance is not a criterion for certainty. Different models may lead to the same concordance — the Universe we see — but we wouldn’t be able to distinguish between them, since they come from an unknowable initial state. The first cause — the cause that must itself be uncaused and that unleashed all other causes — lies beyond the reach of scientific methodology as we know it. This doesn’t mean that we must invoke supernatural causes to fill the gap of our ignorance. A supernatural cause doesn’t explain in the way that scientific theories do; supernatural divine intervention is based on faith and not on data. It’s a personal choice, not a scientific one. It only helps those who believe.

Still, through a sequence of spectacular scientific discoveries, we have pieced together a cosmic history of exquisite detail and complexity. There are still many open gaps in our knowledge, and we shouldn’t expect otherwise. The next decades will see us making great progress in understanding many of the open cosmological questions of our time, such as the nature of dark matter and dark energy, and whether gravitational waves can tell us more about primordial inflation. But the problem of the first cause will remain open, as it doesn’t fit with the way we do science. This fact must, as Einstein wisely remarked, “fill a thinking person with a feeling of humility.” Not all questions need to be answered to be meaningful.


Permalink: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7226279
Origins of the term "Big Bang" -- Helge Kragh

What makes the article below interesting is not only that it recounts the "Big Bang" theory and how its name evolved, but also that it offers an example of how language — here, "signs" and "concepts" — evolves in everyday life.

Glossary:

denigrate: to smear, belittle, disparage
derogatory: disparaging, belittling, contemptuous
entail: to imply, involve
entrenched: firmly established, deep-rooted, hard to change
harpoon: a barbed spear or hooked lance, as used in whaling
misnomer: an inappropriate or erroneous name
nomenclature: terminology; a naming system
primeval: of the earliest ages; primordial
sitcom: short for situation comedy: a comedic radio or TV series following the same cast of characters through a succession of humorous situations and plots


How did the Big Bang get its name? Here’s the real story

Astronomer
Fred Hoyle supposedly coined the catchy term to ridicule the theory of the Universe’s origins — 75 years on, it’s time to set the record straight.

Helge Kragh, 03/25/24

“Words are like
harpoons,” UK physicist and astronomer Fred Hoyle told an interviewer in 1995. “Once they go in, they are very hard to pull out.” Hoyle, then 80 years old, was referring to the term Big Bang, which he had coined on 28 March 1949 to describe the origin of the Universe. Today, it is a household phrase, known to and routinely used by people who have no idea of how the Universe was born some 14 billion years ago. Ironically, Hoyle deeply disliked the idea of a Big Bang and remained, until his death in 2001, a staunch critic of mainstream Big Bang cosmology.

Several
misconceptions linger concerning the origin and impact of the popular term. One is that Hoyle introduced the nickname to ridicule or denigrate the small community of cosmologists who thought that the Universe had a violent beginning — a hypothesis that then seemed irrational. Another is that this group adopted ‘Big Bang’ eagerly, and that it then migrated to other sciences and to everyday language. In reality, for decades, scientists ignored the catchy phrase, even as it spread in more-popular contexts.

The first cosmological theory of the Big Bang type dates back to
1931, when Belgian physicist and Catholic priest Georges Lemaître proposed a model based on the radioactive explosion of what he called a “primeval atom” at a fixed time in the past. He conceived that this primordial object was highly radioactive and so dense that it comprised all the matter, space and energy of the entire Universe. From the original explosion caused by radioactive decay, stars and galaxies would eventually form, he reasoned. Lemaître spoke metaphorically of his model as a “fireworks theory” of the Universe, the fireworks consisting of the decay products of the initial explosion.

However, Big Bang cosmology in its
modern meaning — that the Universe was created in a flash of energy and has expanded and cooled down since — took off only in the late 1940s, with a series of papers by the Soviet–US nuclear physicist George Gamow and his US associates Ralph Alpher and Robert Herman. Gamow hypothesized that the early Universe must have been so hot and dense that it was filled with a primordial soup of radiation and nuclear particles, namely neutrons and protons. Under such conditions, those particles would gradually come together to form atomic nuclei as the temperature cooled. By following the thermonuclear processes that would have taken place in this fiery young Universe, Gamow and his collaborators tried to calculate the present abundance of chemical elements in an influential 1948 paper [1].

Competing ideas

The same year, a radically different picture of the Universe was announced by Hoyle and Austrian-born cosmologists
Hermann Bondi and Thomas Gold. Their steady-state theory assumed that, on a large scale, the Universe had always looked the same and would always do so, for eternity. According to Gamow, the ideas of an ‘early Universe’ and an ‘old Universe’ were meaningless in a steady-state cosmology that posited a Universe with no beginning or end.

Over the next two decades, an epic controversy between these two
incompatible systems evolved. It is often portrayed as a fight between the Big Bang theory and the steady-state theory, or even personalized as a battle between Gamow and Hoyle. But this is a misrepresentation.

Both parties, and most other physicists of the time, accepted that the Universe was
expanding — as US astronomer Edwin Hubble demonstrated in the late 1920s by observing that most galaxies are rushing away from our own. But the idea that is so familiar today, of the Universe beginning at one point in time, was widely seen as irrational. After all, how could the cause of the original explosion be explained, given that time only came into existence with it? In fact, Gamow’s theory of the early Universe played almost no part in this debate.

Rather, a bigger question at the time was whether the Universe was
evolving in accordance with German physicist Albert Einstein’s general theory of relativity, which predicted that it was either expanding or contracting, not steady. Although Einstein’s theory doesn’t require a Big Bang, it does imply that the Universe looked different in the past than it does now. And an ever-expanding Universe does not necessarily entail the beginning of time. An expanding Universe could have blown up from a smaller precursor, Lemaître suggested in 1927.

An apt but innocent phrase

On 28 March 1949, Hoyle — a well-known popularizer of science — gave a radio talk to the BBC Third Programme, in which he contrasted these
two views of the Universe. He referred to “the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past”. This lecture was indeed the origin of the cosmological term ‘Big Bang’. A transcript of the talk was reproduced in full in the BBC’s The Listener magazine, and Hoyle mentioned it in his 1950 book The Nature of the Universe, which was based on a series of BBC broadcasts he made earlier the same year.


Although Hoyle resolutely dismissed the idea of a sudden origin of the Universe as unacceptable on both scientific and philosophical grounds, he later said that he did not mean the term in a ridiculing or mocking way, as was often stated. None of the few cosmologists in favour of the exploding Universe, such as Lemaître and Gamow, was offended by the term. Hoyle later explained that he needed visual metaphors in his broadcast to get across technical points to the public, and the casual coining of ‘Big Bang’ was one of them. He did not mean it to be derogatory or, for that matter, of any importance.

Hoyle’s ‘Big Bang’ was a new term as far as cosmology was concerned, but it was not in general contexts. The word ‘bang’ often refers to an ordinary explosion, say, of gunpowder, and a big bang might simply mean a very large and noisy explosion, something similar to Lemaître’s fireworks. And indeed, before March 1949, there were examples in the scientific literature of meteorologists and geophysicists using the term in their publications. Whereas they referred to real explosions, Hoyle’s Big Bang was purely metaphorical, in that he did not actually think that the Universe originated in a blast.

The Big Bang was not a big deal

For the next
two decades, the catchy term that Hoyle had coined was largely ignored by physicists and astronomers. Lemaître never used ‘Big Bang’ and Gamow used it only once in his numerous publications on cosmology. One might think that at least Hoyle took it seriously and promoted his coinage, but he returned to it only in 1965, after a silence of 16 years. It took until 1957 before ‘Big Bang’ appeared in a research publication [2], namely in a paper on the formation of elements in stars in Scientific Monthly by the US nuclear physicist William Fowler, a close collaborator of Hoyle and a future Nobel laureate.

 

Before 1965, the cosmological Big Bang seems to have been referenced just a few dozen times, mostly in popular-science literature. I have counted 34 sources that mentioned the name and, of these, 23 are of a popular or general nature, 7 are scientific papers and 4 are philosophical studies. The authors include 16 people from the United States, 7 from the United Kingdom, one from Germany and one from Australia. None of the scientific papers appeared in astronomy journals.

Among those that used the term for the origin of the Universe was the US philosopher Norwood Russell Hanson, who in 1963 coined his own word for advocates of what he called the ‘Disneyoid picture’ of the cosmic explosion. He called them ‘big bangers’, a term which still can be found in the popular literature — in which the ultimate big banger is sometimes identified as God.


A popular misnomer

A watershed moment in the history of modern cosmology soon followed. In 1965, US physicists 
Arno Penzias and Robert Wilson’s report of the discovery of the cosmic microwave background — a faint bath of radio waves coming from all over the sky — was understood as a fossil remnant of radiation from the hot cosmic past. “Signals Imply a ‘Big Bang’ Universe” announced the New York Times on 21 May 1965. The Universe did indeed have a baby phase, as was suggested by Gamow and Lemaître. The cosmological battle had effectively come to an end, with the steady-state theory as the loser and the Big Bang theory emerging as a paradigm in cosmological research. Yet, for a while, physicists and astronomers hesitated to embrace Hoyle’s term.

It took until March 1966 for the name to turn up in a Nature research article [3]. The Web of Science database lists only 11 scientific papers in the period 1965–69 with the name in their titles, followed by 30 papers in 1970–74 and 42 in 1975–79. Cosmology textbooks published in the early 1970s showed no unity with regard to the nomenclature. Some authors included the term Big Bang, some mentioned it only in passing and others avoided it altogether. They preferred to speak of the ‘standard model’ or the ‘theory of the hot universe’, instead of the undignified and admittedly misleading Big Bang metaphor.

Nonetheless, by the 1980s, the misnomer had become firmly entrenched in the literature and in common speech. The phrase has been adopted in many languages other than English, including French (théorie du Big Bang), Italian (teoria del Big Bang) and Swedish (Big Bang teorin). Germans have constructed their own version, namely Urknall, meaning ‘the original bang’, a word that is close to the Dutch oerknal. Later attempts to replace Hoyle’s term with alternative and more-appropriate names have failed miserably.


The many faces of the metaphor

By the 1990s, ‘Big Bang’ had migrated to commercial, political and artistic uses. During the 1950s and 1960s, the term frequently alluded to the danger of nuclear warfare, as it did in UK playwright John Osborne’s play Look Back in Anger, first performed in 1956. The association of nuclear weapons and the explosive origin of the Universe can be found as early as 1948, before Hoyle coined his term. As its popularity increased, ‘Big Bang’ began being used to express a forceful beginning or radical change of almost any kind — such as the Bristol Sessions, a series of recording sessions in 1927, being referred to as the ‘Big Bang’ of modern country music.


In the United Kingdom, the term was widely used for a major transformation of the London Stock Exchange in 1986. “After the Big Bang tomorrow, the City will never be the same again,” wrote Sunday Express Magazine on 26 October that year. That use spread to the United States. In 1987, the linguistic journal American Speech included ‘Big Bang’ in its list of new words and defined ‘big banger’ as “one involved with the Big Bang on the London Stock Exchange”.


Today, searching online for the ‘Big Bang theory’ directs you first not to cosmology, but to a popular US sitcom. Seventy-five years on, the name that Hoyle so casually coined has indeed metamorphosed into a harpoon-like word: very hard to pull out once in.


Nature 627, 726-728 (2024)
doi: https://doi.org/10.1038/d41586-024-00894-z


References

1. Gamow, G. Nature 162, 680–682 (1948).
2. Fowler, W. A. Sci. Mon. 84, 84–100 (1957).
3. Hawking, S. W. & Tayler, R. J. Nature 209, 1278–1279 (1966).

COMPETING INTERESTS
The author declares no competing interests.

(See the original webpage for hyperlinks to other related popular-science topics.)

Permalink: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7225484
Dark matter does not exist ---- Bernard Rizk

Below is a research report released by the University of Ottawa's media office (the university's name, which I render in Chinese as 奧他瓦, is more commonly transliterated as 渥太華). I left school more than 50 years ago, so it is no surprise that I can't fully follow it.

My guess: Professor Gupta has proposed a theory that can explain certain observed cosmic phenomena without assuming the existence of "dark matter." As for whether his theory suffices to rule out dark matter altogether, or can account for all observed cosmic phenomena, I believe the jury is still out.

As for the age of the universe: as reported below, Professor Gupta's calculation puts it at about 26.7 billion years (03/2024), whereas before 2022 the accepted figure was about 13.787 billion years. In another three to five years, before I kick the bucket, there will very likely be yet another new number.


New research suggests that our universe has no dark matter

Bernard Rizk, Media Relations Office, Univ. of Ottawa, 03/15/24

The current theoretical model for the composition of the universe is that it’s made of ‘normal matter,’ ‘dark energy’ and ‘dark matter.’ A new uOttawa study challenges this.

A University of Ottawa study published today challenges the current model of the universe by showing that, in fact, it has no room for dark matter.

In cosmology, the term “dark matter” describes all that appears not to interact with light or the electromagnetic field, or that can only be explained through gravitational force. We can’t see it, nor do we know what it’s made of, but it helps us understand how galaxies, planets and stars behave.

Rajendra Gupta, a physics professor at the Faculty of Science, used a combination of the covarying coupling constants (CCC) and “tired light” (TL) theories (the CCC+TL model) to reach this conclusion. This model combines two ideas: that the forces of nature decrease over cosmic time, and that light loses energy when it travels a long distance. It has been tested and shown to match up with several observations, such as how galaxies are spread out and how light from the early universe has evolved.

This discovery challenges the prevailing understanding of the universe, which suggests that roughly 27% of it is composed of dark matter and less than 5% of ordinary matter, with the remainder being dark energy.

Challenging the need for dark matter in the universe

“The study's findings confirm that our previous work (“JWST early Universe observations and ΛCDM cosmology”) about the age of the universe being 26.7 billion years has allowed us to discover that the universe does not require dark matter to exist,” explains Gupta. “In standard cosmology, the accelerated expansion of the universe is said to be caused by dark energy but is in fact due to the weakening forces of nature as it expands, not due to dark energy.”

Redshifts” refer to when light is shifted toward the red part of the spectrum. The researcher analyzed data from recent papers on the distribution of galaxies at low redshifts and the angular size of the sound horizon in the literature at high redshift.
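The redshift mentioned here has a simple operational definition: z = (λ_observed − λ_emitted)/λ_emitted. A minimal sketch of that formula (the hydrogen-alpha rest wavelength is just an illustrative choice, not from the press release):

```python
# Redshift z from an emitted and an observed wavelength.
# Example line: hydrogen-alpha, rest wavelength 656.28 nm.

def redshift(lambda_emitted_nm: float, lambda_observed_nm: float) -> float:
    """z = (observed - emitted) / emitted; z > 0 means shifted to the red."""
    return (lambda_observed_nm - lambda_emitted_nm) / lambda_emitted_nm

if __name__ == "__main__":
    z = redshift(656.28, 1312.56)   # observed at twice the rest wavelength
    print(f"z = {z:.1f}")           # z = 1.0
```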

“There are several papers that question the existence of dark matter, but mine is the first one, to my knowledge, that eliminates its cosmological existence while being consistent with key cosmological observations that we have had time to confirm,” says Gupta.

By challenging the need for dark matter in the universe and providing evidence for a new cosmological model, this study opens up new avenues for exploring the fundamental properties of the universe.

The study, “Testing CCC+TL Cosmology with Observed Baryon Acoustic Oscillation Features,” was published in the peer-reviewed Astrophysical Journal.



Permalink: https://city.udn.com/forum/trackback.jsp?no=2976&aid=7225007