Before the Big Bang? -- A New Theory in Cosmology -- C. Moskowitz
Glimpse Before Big Bang Possible
Clara Moskowitz, Special to SPACE.com
The universe appears to be lopsided, and a new model
that aims to explain this anomaly could offer a glimpse of
what happened before the birth of it all.
When astronomers look out at the cosmos, the view in
one direction is turning out to be different from the view in
the other. Specifically, fluctuations in the density and
temperature of the radiation left over from the theoretical
Big Bang, called the Cosmic Microwave Background,
seem to be strangely larger on one side of the sky than on the other.
A new model suggests this unevenness could be caused
by an imprint left over from before the beginning of the
universe, that is, before the cosmos ballooned almost
instantaneously from less than the size of an atom to
about golf-ball size. This process is called inflation.
Blowing up the balloon
"Inflation theory does predict that we have these density
and temperature fluctuations, but they should look the
same everywhere across the sky," said Caltech
astrophysicist Sean Carroll, who worked on the new
model, detailed in the Dec. 16 issue of the journal
Physical Review D. "But people who look at the data say
they see one side of the universe has bigger fluctuations,
and that's what we're trying to get a handle on."
Scientists think the normal variations in temperature and
density predicted by inflation became the seeds for the
structure we see today throughout the universe. Soon
after inflation, the denser areas would have attracted
more matter and eventually grown into the clusters and
galaxies we see today, while less dense regions would
have become voids mostly devoid of galaxies and stars.
But the normal model of inflation can't account for the
asymmetry now noted. To try to explain that, Carroll,
astrophysicist Marc Kamionkowski and graduate student
Adrienne Erickcek (all at Caltech) tested a new version of
inflation theory, in which two fields are responsible for the
universe's early bloom of expansion.
In the standard theory of inflation, one field called the
inflaton (not inflation) caused both the rapid expansion of
the universe and its density fluctuations. But
Kamionkowski and team found that an unevenness in the
density fluctuations could arise if inflation is caused by
two fields instead of one. In the new model, the inflaton is
responsible for ballooning the size of the universe, while a
second, previously proposed field called the curvaton
introduces the density variations.
Before the Big Bang?
The model also intriguingly hints at what might have come
before inflation, since it suggests that the universe's
lopsidedness may be an aftereffect of a great fluctuation
that occurred before inflation began.
"It's no longer completely crazy to ask what happened
before the Big Bang," Kamionkowski said. "All of that stuff
is hidden by a veil, observationally. If our model holds up,
we may have a chance to see beyond this veil."
The next step is to gather better data about the Cosmic
Microwave Background, to confirm that the unevenness
seen so far really holds up.
"So far it seems to be in the data, but that doesn't mean
it's in the universe," Carroll told SPACE.com. "There's a
chance this asymmetry is coming from errors in the data."
A new European Space Agency satellite called Planck,
designed to map the background radiation with
unprecedented sensitivity and resolution, is set to launch
in 2009. If Planck finds the radiation densities to be off-
balance, too, then cosmologists must really come to terms
with this puzzling aspect of inflation. Though it would
require some serious amendments to current theories,
many physicists would relish the challenge.
"That's what everyone wants, it's much more interesting
that way," Carroll said.
A Big Bang Theory Without Inflation - M. McKee
Ingenious: Paul J. Steinhardt
The Princeton physicist on what’s wrong with inflation theory and his view of the Big Bang.
Maggie McKee, 09/24/14
Paul J. Steinhardt does not look like a firebrand. With his wiry spectacles and buttoned-up bearing, he would not seem out of place in an office of accountants. But the Director of the Princeton Center for Theoretical Science is an academic agitator, vocally criticizing the leading theory of the universe’s infancy, a theory that he himself helped create more than 30 years ago. According to this picture, called inflation, space itself expanded faster than the speed of light just after the universe’s birth in the big bang, doubling in size 100,000 times in less than a billionth of a billionth of a billionth of a second.
But once started, inflation is hard to stop entirely, so pockets of space should constantly be budding off into new universes with different properties. In such a multiverse, anything that can happen will happen somewhere, and that is a fatal flaw for Steinhardt -- a theory that cannot rule anything out is not scientific, he argues. He has been pursuing an alternative scenario where our universe cycles between periods of expansion and contraction, so that the big bang was really a big bounce. Most other researchers are skeptical of the approach, but Steinhardt is undeterred.
And his search for alternative schemes is not limited to cosmology. For decades, he has been pondering the different ways atoms might be arranged in crystals, discovering that arrangements previously thought to be impossible were actually allowed. In recent years he even struck out into the wilderness of the Russian Far East to look for the rarest arrangements in nature, an expedition that yielded minerals new to science, including one dubbed “steinhardtite.”
Both Steinhardt’s passion for unsolved puzzles and his critiques of overly accommodating scientific theories are on display in our video interview.
What does the term “Big Bang” mean?
According to the theory of inflation, what was the early expansion of the universe like?
What caused the expansion of the universe?
You have become a critic of inflation. Why?
Why is it so unsettling to believe we might live in an accidental universe?
What do you think of recent findings supporting the existence of gravitational waves?
You have been working on alternative theories to inflation. What are they?
What is a cyclic universe?
What is the main criticism of the cyclic universe picture?
Can we ever know the full history of the universe?
What does the Higgs boson have to do with cosmology?
How did you get into science?
Can you share some stories about the Nobelist Richard Feynman?
What are quasicrystals?
A quasicrystal is named after you. How did that happen?
How did an inflation researcher like you come to study quasicrystals?
What would you be if you weren’t a scientist?
What does the term “Big Bang” mean?
Physicists mean two things when they talk about the Big Bang. What cosmologists usually mean is the idea that the universe was once hot and dense, and has been expanding and cooling. So when people who are non-scientists ask us do we believe in the Big Bang theory, that’s usually what we’re talking about. Is there evidence that the universe was once hot and dense and has been expanding and cooling? And the answer is: There’s overwhelming evidence for that. When the public generally asks us about the Big Bang theory though, they have a different idea in mind. They have the idea of this big bang itself, the big bang beginning, the idea that the universe, you know, at one time didn’t exist and suddenly sprang from nothingness into something-ness and that’s the Big Bang. And if you ask physicists are they confident in that idea, the answer is no. There are different ideas about what might have happened as we go back to that moment in time. People have had ideas over, you know, the last century, of what might have occurred during that time and that’s a subject which is central to a lot of the debates we have in cosmology today.
According to the theory of inflation, what was the early expansion of the universe like?
So if the space between you and me, for example, were stretching at this rate at the present time, you’d be either trying to speak to me by sending a sound wave or we could send light signals, [and] the space between us would be stretching so fast that the light would be having to make up, or the sound would be having to make up the new distance. There would be [space] being created so fast, that [sound waves/light signals] would never get to you, or yours to me. We’d lose sight of one another and lose communication with one another. That’s the kind of expansion we’re talking about when we’re talking about inflation.
And just to put some numbers on it, in typical examples, the inflation begins when the universe is about a billionth, billionth, billionth, billionth of a second old, and it doubles in size roughly every billionth, billionth, billionth, billionth of a second -- for maybe a hundred thousand doublings, or a million doublings, or maybe a billion doublings. That means it doubles in size or multiplies by eight in volume, you know, every billionth, billionth, billionth, billionth of a second and after a short time, a region which is smaller than a nucleus blows up to a size which is much larger than the space that we observe today when we look around the universe. We only see a finite patch of the universe. It’d be enormously bigger than that and we’d just be a tiny patch; what we would observe would just be a tiny patch of that piece of space that was once smaller than a nucleus, that blew up and inflated at that time.
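The arithmetic behind that claim is easy to check. Here is a small Python sketch; the starting size (roughly a nucleus), the doubling time, and the diameter of today's observable universe are illustrative round numbers, not figures from the interview:

```python
import math

# Illustrative assumptions (rough orders of magnitude, not interview figures):
nucleus_size_m = 1e-15           # characteristic size of an atomic nucleus
observable_universe_m = 8.8e26   # rough diameter of the observable universe today
doubling_time_s = 1e-36          # "a billionth, billionth, billionth, billionth of a second"

# How many doublings carry a nucleus-sized region past the observable universe?
doublings_needed = math.ceil(math.log2(observable_universe_m / nucleus_size_m))

# Total time those doublings take -- still a minuscule fraction of a second.
elapsed_s = doublings_needed * doubling_time_s

print(doublings_needed)  # about 140 doublings
print(elapsed_s)         # about 1.4e-34 seconds
```

So it takes only about 140 doublings for a nucleus-sized region to outgrow everything we can see; the hundred thousand to a billion doublings Steinhardt mentions overshoot that enormously, which is why our observable patch would be a tiny speck of the inflated region.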
What caused the expansion of the universe?
We do not know what caused the inflation. Over the last 30 years, there are probably hundreds, maybe thousands of papers with people with different proposals for what is the precise field or the precise form of energy responsible -- they all have to have the property that they self-repel, they all have to produce this accelerating effect, but as for their precise identity, there are lots and lots of different ideas: some of which involve quantum fields like the original idea; some of which involve the use of extra dimensions; some of which use ideas from string theory, quantum strings, or quantum branes -- membranes; many, many different ideas, and whichever one you choose… What we observe in the universe today doesn’t help us distinguish very keenly which one of these ideas is correct. We can eliminate some possibilities, but there are many different options.
They all have to have the property that somehow the inflation ends. And one thing that has bothered me about the story since the very first example is that getting this inflation and having it end has always involved some degree of tuning or tinkering of the model -- fine-tuning. Every model has some sort of parameters or coefficients or features in it that have to be finely adjusted in order to get what you want. If you don’t adjust them right, you get something entirely different, which you don’t want, which is inconsistent with what we observe. So we don’t have what I would call a pretty theory, a theory that naturally explains this process, and that’s one of the problems at the present time -- to find a natural explanation for what we observe. And as we observe more properties of the universe, that becomes a more and more stringent constraint on our models.
You have become a critic of inflation. Why?
What we discovered is that it’s possible, and then eventually we realized it’s almost hard to avoid, that inflation, once it starts, is really eternal -- it can end in some patches, but it will always continue in yet other patches of the universe, and where it continues, it blows up in volume so much that it occupies the vast majority of the universe. And although it continues to produce patches where it ends, the patches that are inflating are always outrunning the regions where it ends, and so you end up with patch after patch after patch where inflation has ended being tiny little specks in a universe where it’s continuing.
Now those patches where it’s ended are pretty darn large -- they’re large enough to contain us -- so maybe you shouldn’t be, you might not be concerned at first. But the problem is, due to the effects of quantum physics these patches are not all the same. The effects of quantum physics, when you include them properly, lead to a situation where some patches are like us, but some patches are not like us; and in fact, every conceivable possible outcome of the universe can occur if you look from patch to patch to patch and there’s no particular reason why ours is more likely than any other. So in a sense we would live in this picture, in an accidental universe. We’re trying to explain the universe in a simple, forcefully deterministic way, and instead in this inflation universe, it looks like it’s an accident that we live in the universe such as we do. It could have many widely different properties.
Why is it so unsettling to believe we might live in an accidental universe?
First of all, the fact that the universe is so simple on large scales. If you observe something which could be complicated, but it turns out to be very simple, it’s screaming at you that there’s some explanation for why it is so. Now the problem with an accidental universe is that it’s not an explanation at all. It’s not even a scientific theory in the form we’re talking about -- in the sense that it allows every conceivable possibility. If you allow every conceivable possibility, then there’s no test or combination of tests that can disprove such a concept. You’re allowed to have that idea if you like, but it’s no longer in the realm of science -- it’s some kind of metaphysics or philosophy, which is outside the realm of science.
So the problem with inflation is that it began as an idea which seemed to have definite predictions and properties and, and with the discovery of eternal inflation, the multiverse, it moved to this accidental universe picture where it no longer has any particular test or combination of tests that can disprove it. It’s so flexible -- and this is just one form of flexibility we’re talking about right now, it has other forms of flexibility -- but it’s so flexible just because of this multiverse that there’s nothing… anything you would observe you’d say, “Oh, that could happen in a multiverse. That could happen too,” you know. You could just go on and on. There’d be nothing that would tell you that the theory could possibly be wrong. And such ideas, as I say, lie outside the domain of normal sciences that’s been practiced for the last 400 years. So I think it’s a very… It’s a kind of, I would call it, a failure mode. You know, usually we’re used to theories failing because they make a definite prediction, you go to make the observation, and it disagrees. That’s science as we normally understand it. It makes a prediction, it gets tested, and it fails. This is different. This is a theory which you thought made definite predictions and now you’ve discovered that it has this sort of infinite way out and so that means it’s just no longer, you know, an ordinary scientific idea, which is a different kind of failure mode than what we’re used to.
What do you think of recent findings supporting the existence of gravitational waves?
You can’t be sure what’s causing that signal. Is it really a signal from the deep part of the sky beyond the galaxies, the thing you’re trying to detect? Is it a signal caused by the dust within our galaxy, twisting the light as it scatters from that dust? Is it a signal that’s caused by the atmosphere, which is constantly fluctuating and distorting the light as it comes to my detector? Is it light that bounces off the ground and comes into my detector and is distorted that way, or is it light that is distorted by the lens in my detector? There are many sources you’d have to disentangle with just a single frequency to look at. So what they were trying to do was extremely difficult -- more than could be done, I would say -- and so there were reasons to be concerned right off the bat, and the biggest concern was: had they taken proper account of the dust in our own galaxy? That’s the issue that people have been focusing on most up to this point.
Because we know there is, in our own galaxy, dust which has the property that light which scatters off of it becomes polarized -- that is to say, when the light gets scattered off of the dust, instead of the electric fields oscillating in every possible direction, some directions are preferred over others, depending upon the particular dust particle that it scatters off of. Now that is what BICEP2 was trying to measure -- the polarization of light, caused not by the dust but in the early universe by gravitational waves. But they can’t distinguish by themselves which caused it. The dust? Or was it the gravitational waves?
Now, various groups have tried to improve on what they did and conclude that most [likely], that dust is a large contributor and perhaps the entire source of the signal they were seeing. And we’re waiting now for results from the Planck Satellite experiment, which should be presenting us with a detailed map of that particular region of the sky that the BICEP2 team measured and then we’ll be able to say more about the likelihood that they saw these gravitational waves. (See the related blog post, “Excitement About Gravity Waves Comes Crashing Down,” which reports on the Planck team’s finding that the polarization signal could be entirely explained by dust, rather than gravitational waves.)
You have been working on alternative theories to inflation. What are they?
What if we didn’t start from the Big Bang? Maybe that’s not the beginning of space and time; and maybe what we think of as a bang, is really a bounce: a transition from a preexisting phase -- let’s say of contraction -- [and then] a bounce into expansion. Now suddenly there’s a whole new domain of time, before the bounce, before the bang, [with] which you can introduce processes that would naturally smooth and flatten the universe.
So the theories I’ve been working on have that property. They transform the bang to a bounce, and they introduce processes that would just naturally occur in a contracting universe and would automatically tend to flatten and smooth it. And then you add the quantum physics into it -- different regions of space contract at different times due to these random quantum fluctuations. You can’t keep things completely in sync -- quantum physics doesn’t allow it -- so the slight non-uniformity in the rate of contraction will translate into fluctuations, variations in temperature and density after the bounce, that would produce the fluctuations you see in the microwave background. But because this process of contraction is very gentle and slow compared to the very rapid inflationary expansion, it doesn’t produce the violent effects that produce the big, high-amplitude gravitational waves that inflation does. Instead it produces gravitational waves which are much, much weaker, far too weak to be observed.
So we have the more realistic, contemporary version of inflation, which produces a multiverse in which anything can happen and is completely unpredictive. And then we have the bouncing theory, which says that in this kind of bouncing theory, you shouldn’t see these gravitational waves. That’s the spectrum of models which we know at present, and there may be other models yet to be found.
What is a cyclic universe?
The bouncing model which I was just describing is one in which I only talked about a single bounce -- taking the most recent bang and saying, suppose it’s a bounce. In that case it opens up the possibility that the smoothness we see today was produced during the period of contraction before that bounce.
Expand the story a bit. Was that the only bounce? Could there have been a sequence of bounces? Could there have been a kind of episodic or cyclic universe? Yes, all those things are natural possibilities. But I should say that during each period of contraction preceding each such bounce, there’s always going to be this smoothing process, this flattening process, which in a sense erases information, or spreads out information so thinly from what preceded it, that there is almost no trace of it in the universe you can look at today. You have to look for indirect evidence of this process.
So you don’t see direct evidence of earlier cycles, but you could infer they might exist based on the fact that you see the smoothness and the flatness and the absence of gravitational waves and maybe other properties explained by this sort of episodic or cyclic universe. Now once you have that possibility around theoretically you can also ask the question well, how did it begin? Did it have a beginning? Maybe. It could have had a beginning and then kind of settled into a regular pattern, or as far as we can tell theoretically, it may have continued forever into the past and forever into the future. So there is… the way you get around the problem of beginning is that there is no beginning. It was always there doing this, forever in the past and forever in the future.
What is the main criticism of the cyclic universe picture?
The one remaining issue is the bounce itself. What exactly happens at the bounce, and what physics describes it? There are several working ideas that people have. In some cases, people are thinking about bounces in which the universe contracts and then reverses itself and begins to expand before reaching zero size -- before having to worry about the effects of quantum gravity -- and we’ve constructed examples like that. And then there are also examples where others say no, let’s go ahead and push on and see if we can explore whether quantum gravity would naturally lead to a bounce. Both those ideas are under development.
And my view is this: this is the key problem. Whether or not we can have this bounce is the key problem of fundamental physics and cosmology. It relates the fundamental physics of quantum gravity to the problems of cosmology. Could the smoothing have occurred before the bang? Can we avoid the multiverse problem? All these things are tied up together, and I think it’s the key problem we should be focusing on as we enter the 21st century. Because if we can show the bounce is impossible, then we definitely have to win out over the multiverse -- get control of it. If it’s possible, then I just think it’s a much simpler idea than inflation and the multiverse: discard those, and this bouncing idea is a much simpler way of explaining the simple universe which we observe.
Can we ever know the full history of the universe?
I’m optimistic about our being able to figure out the history of the universe at this point, because what we’ve observed about it on a large scale is this extraordinary simplicity. If it were complicated, if it looked like it came out of some complicated sausage-making machine, then you’d say, well, the fact that I can only observe one part of it -- that I’m only seeing a little piece of the sausage -- makes it pretty hard for me to figure out the machine that produced it. But that’s not what we’re observing. We’re not observing some complicated sausage -- we’re observing something extraordinarily simple, uniform, featureless, with very few degrees of freedom needed to describe the universe on large scales.
It’s also true of our fundamental physics: recent discoveries about the Higgs have just shown it to be simpler than many theorists thought it should be. So at the present time, I’m saying there’s fascinating simplicity observed on large scales, and fascinating simplicity observed on small scales. That makes me optimistic that we should be looking for a very simple solution, with so few degrees of freedom that you’d immediately be able to recognize it as a very sensible, compelling model to explain what we observe.
What does the Higgs boson have to do with cosmology?
If we assume for the moment that the Large Hadron Collider has seen all the particles to be seen up to reasonably high energies, there’s a surprising result that emerges from this analysis. And that is that our present universe is in a metastable state. Instead of being at the lowest energy state, it’s actually at a state of relatively high energy compared to what would be the minimum. It’s separated from that minimum by a large energy barrier, which is why we are in the state we are in and aren’t immediately jumping to a state of low energy. But ultimately, if this picture is correct, we can’t be in a stable state. Eventually, some sort of quantum fluctuation or thermal fluctuation is going to kick us out, and we’re no longer going to be in the present vacuum state.
So that means that instead of being in a universe in which the energy in the vacuum is relatively small and positive -- which is the way it is today -- and instead of being in a universe which is accelerating its expansion, it’s going to jump at some point into a state in which the universe is going to begin to contract.
This kind of idea is interesting because in the kind of cyclic universe I was describing, this is exactly what has to be the case. If the universe is going to cycle, it can’t remain in the present accelerating universe; it has to eventually end its acceleration and enter a phase of contraction, and here’s the Higgs, maybe providing us with a hint that that could occur. Then if it turns out that when you contract you bounce, that would lead to the Higgs coming back to the current vacuum, but now in a universe which is hot and expanding again, and the process of expanding and cooling and forming galaxies and stars could begin again.
So this work on microphysics at the Large Hadron Collider, which we don’t normally think of as cosmology -- it was really just designed to see if there was a Higgs at all -- has turned out to be potentially very interesting for cosmology, maybe even more interesting than for particle physics, because it may be pointing us to new possibilities for the past and future of our universe that we didn’t dream were possible.
How did you get into science?
I think ever since I was a toddler I always wanted to be a scientist. My father -- he was not a scientist, he was a lawyer -- for some reason used to tell me stories about scientists and discovering things in science, and it just sounded so exciting to discover something new that no one had ever known before. I found that extremely thrilling, so I always wanted to be a scientist of some sort. From the first books I remember, science was always a big part of my life. As a kid growing up I had a chemistry lab, a biology laboratory, a telescope -- doing what research I could, getting involved in research as young as I could.
The one area which I had very little exposure to up to that point was physics. I took physics in high school, but those were pretty prosaic courses; I first realized that physics was really interesting when I was an undergraduate at Caltech, where I was required to take physics for the first two years. Within weeks I’d met very exciting people, including Richard Feynman, and I was completely sold: that was the science I wanted to do. And then I began to explore different areas of physics, because I didn’t know much about physics when I started in it, and the very last one I came to was cosmology. It really was as a post-doc, when I happened to walk into a lecture by Alan Guth -- really never having taken a course in cosmology -- that I was first exposed to it, and it has occupied a big part of my research life ever since.
Can you share some stories about the Nobelist Richard Feynman?
I had several interactions with Feynman. I started a course with him called “Physics X.” My roommate and I asked him if he’d be willing to teach a “pseudo-course” -- a false course -- called “Physics X,” in which he would come every week and answer any questions you might throw at him. And that was a real thrill, because the discussion literally ranged all over the map. It wasn’t just the obvious things about particle physics you could ask about -- he didn’t even particularly like that kind of question. He wanted you to bring in some mysterious phenomenon, and we would discuss what might explain that phenomenon. So it was a really important, influential experience for me.
And then I also did my senior thesis project with him, so that was another set of experiences. It left a real mark on my thinking, including my thinking about science, which has been coming back to me since BICEP2. BICEP2 has brought in a lot of interesting debate that you wouldn’t think scientists would have to have -- about what is the nature of science, this issue of whether it’s important that science be testable or not testable, falsifiable or not falsifiable. These were issues which I think in Feynman’s mind were extremely clear and, I would have said, conventional -- and certainly in my own mind, conventional and very clear. But I’ve been hearing some very interesting views that having a theory which is not falsifiable may be okay in science -- I find that very strange, and actually rather dangerous -- and it has brought me back to rethinking some of my experience with Feynman from those days.
What are quasicrystals?
Back in the 1980s, my student and I had been hypothesizing that there could exist forms of matter in which the atoms and molecules could organize themselves into patterns that were impossible for crystals, but that weren’t random either. In fact, they would have symmetries, as crystal patterns do -- but symmetries which crystals aren’t allowed to have.
So it’s been known for nearly 200 years that atoms can organize themselves like building blocks into certain patterns where the atoms, or clusters of atoms, regularly repeat. That’s what makes a crystal a crystal. And if you make things out of building blocks that way, it’s been known for nearly 200 years that there are only certain symmetries which are possible. So all the crystals observed in nature until recently conform to one of the 32 symmetry possibilities established in the 19th century. Everything we’d known up to that point lived that way.
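The restriction Steinhardt describes is the crystallographic restriction theorem, and it can be checked directly: a rotation by 2π/n can map a repeating lattice to itself only if the trace of its rotation matrix, 2·cos(2π/n), is an integer (the rotation must act as an integer matrix in a lattice basis). A minimal Python sketch of that check:

```python
import math

def lattice_compatible(n):
    """A rotation by 2*pi/n preserves a 2D repeating lattice only if its
    matrix trace, 2*cos(2*pi/n), is an integer -- because in a lattice
    basis the rotation must be expressible as an integer matrix."""
    trace = 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < 1e-9

# Which rotational symmetries can a true crystal have?
allowed = [n for n in range(1, 13) if lattice_compatible(n)]
print(allowed)  # [1, 2, 3, 4, 6] -- five-fold symmetry never appears
```

This is exactly the sense in which five-fold symmetry is "mathematically impossible" for crystals: 2·cos(72°) ≈ 0.618 is not an integer, so no repeating lattice can have it, while quasicrystals evade the theorem by giving up a single repeating unit.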
But what we showed -- my student Dov Levine and I -- is that if you get away from the idea of just a single repeating unit, if you allow yourself, let’s say, two repeating units which repeat at different frequencies, suddenly symmetries which were impossible become possible. So, for example, crystals can never organize themselves into any kind of structure which has five-fold symmetry. It’s forbidden for crystals -- mathematically, it’s impossible. But the systems we were thinking about, which we call quasicrystals, could. In fact, they could arrange themselves to form a solid with the symmetry of a soccer ball -- which has many pentagons on it, many different axes of five-fold symmetry -- we could even get that kind of structure. And while we were working on this idea, there was a group at the National Bureau of Standards led by Dan Shechtman, which was looking at various aluminum alloys, and they stumbled across one which produced a diffraction pattern with five-fold symmetry, inconsistent with the laws of crystallography. They had no explanation for it, but they said, “Here it is! We don’t understand it, but here’s a possibility.” And it turned out the patterns they were getting conformed precisely with the kinds of patterns we had predicted hypothetically. That’s how the discovery of quasicrystals was made -- the realization that the hypothetical idea and the experimental result were actually related -- and in 2011, Dan Shechtman won the Nobel Prize in Chemistry for his discovery of what we now call the first quasicrystal.
A quasicrystal is named after you. How did that happen?
All the quasicrystals that have been discovered since 1984, up until recently, were discovered in the laboratory, synthetically -- and people even argued that they required that; they were such delicate forms of matter that they could only form that way. My own thinking, based on, you know, theoretical reasoning, was that there was no reason why that had to be so. Some quasicrystals might be energetically stable, and if so, maybe they’d be found in nature. So I launched a worldwide search for natural quasicrystals around 1998, and there’s a long story that goes with it, but about 10 years later we actually found a sample in a museum in Florence, thanks to a mineralogist there, Luca Bindi, who helped us search. We found a sample of quasicrystal in a very complicated rock, and there’s no question it was a quasicrystal, so that could have been the end of the story. But what happened was that when we began to show this rock, and our results, to geologists, they became very skeptical that it could possibly be natural. Not because it was a quasicrystal, but because of the particular chemistry of our quasicrystal. It had metallic aluminum in it, and aluminum has a strong affinity for oxygen -- so in nature there’s lots of aluminum, but there’s no metallic aluminum unless you go to an aluminum foundry. So they said this must come from an aluminum foundry, not from nature.
So that then launched a quest to try to figure out where this guy came from -- where the sample from Florence came from -- and over the next two years we were eventually able to show that it came from a very obscure region of far eastern Russia, was found in the ground, was not formed in a foundry, and was actually part of a meteorite that fell there, probably about 10,000 years ago -- a meteorite that dates from the very beginning of the solar system, about 4 and a half billion years ago. So our quasicrystal is about 4 and a half billion years old. And then I put together a geological expedition to go there to look for more samples, because we only had the one in the museum to begin with. We found more, and the material not only had the quasicrystal but other new minerals that had never been seen before. One of them is a mixture of aluminum, iron, and nickel, and the team -- when you find a new mineral, you have to write a paper explaining its properties and then propose a name for it -- did me the honor of calling it Steinhardtite. So that’s the Steinhardtite mineral. It’s one of the minerals found in this meteorite that’s 4 and a half billion years old, and that includes the first known natural quasicrystal.
How did an inflation researcher like you come to study quasicrystals?
I came to physics rather late, so when I decided I was interested in physics, I had to find out what area of physics I wanted to investigate. So what I decided to do was spend, you know, each of my undergraduate years exploring some area of science -- of physics, rather -- to decide which one I would want to choose, figuring at the end I would choose one. But what actually happened was that I didn’t choose. Every one of those experiences led, by some trajectory or another, to other projects that continued, almost all of them, up to the present day. That included spending a summer at Yale University studying the structure of amorphous silicon -- silicon, when you cool it rapidly, will form a random network, and its properties weren’t at that time, and even today aren’t, really fully understood -- so I started on that project. That got me interested in thinking about what kinds of structures atoms and molecules can form. Do they really have to conform to the rules of crystallography? And then, like most of these stories, there’s a long circuitous story -- trying different things, failing -- that eventually led to the idea of quasicrystals.
I’m always looking around for good problems to work on, so I don’t have any rules about what problems I work on -- but I need an idea. So I’m always listening to lots of different areas of science in hopes that I’ll find a good puzzle.
What would you be if you weren’t a scientist?
Hmm. That’s tough because really, that’s the only thing I’ve been thinking about. What would I be doing if I were not a scientist? Well I’d probably be teaching something about science. Yeah, I wouldn’t be a scientist but I’d probably be a teacher of some sort. At least I could… you know, I enjoy learning about it as well as doing research in it. But it’s hard for me to believe that I wouldn’t be doing research in it -- at least tinkering on my own.
Maggie McKee is a freelance science writer focusing mainly on astronomy and physics. Previously an editor at New Scientist and Astronomy, she lives near Boston with her husband.
The Big Bang and Inflation Theory - Nautilus Editors
What If the Universe Didn’t Start With the Big Bang?
Nautilus Editors, 10/01/14
Last week, researchers using the Planck spacecraft to study the skies announced that the polarization of light spotted by the BICEP2 experiment could be entirely explained by dust swirling around the Milky Way. The news threw a bucket of cold water on the theories of many cosmologists: it meant that BICEP2 might not have been detecting gravitational waves, which had been interpreted as proof of cosmic inflation, the long-held theory that parts of the Universe began growing astonishingly fast very soon after the Big Bang. (Read Maggie McKee’s excellent, more detailed summary of the news.)
But for Paul Steinhardt, the lack of evidence for gravitational waves was good news. Steinhardt advances an alternate theory of cosmology, doing away with inflation in favor of slower expansion of the Universe. This more gentle growth would not produce strong gravitational waves, so the lack of evidence for them means his theory is still a candidate to explain the cosmos.
Steinhardt is also, appropriately, the Nautilus Ingenious for this month. Here is a key part of McKee’s interview with him, where he describes why the Big Bang may have been a transition from a previous phase rather than an absolute beginning. You can see the entire interview and transcript here.
The “Scale Symmetry” Theory of Cosmology - N. Wolchover
At Multiverse Impasse, a New Theory of Scale
Mass and length may not be fundamental properties of nature, according to new ideas bubbling out of the multiverse.
Natalie Wolchover, Quanta Magazine, 08/18/14
Though galaxies look larger than atoms and elephants appear to outweigh ants, some physicists have begun to suspect that size differences are illusory. Perhaps the fundamental description of the universe does not include the concepts of “mass” and “length,” implying that at its core, nature lacks a sense of scale.
This little-explored idea, known as scale symmetry, constitutes a radical departure from long-standing assumptions about how elementary particles acquire their properties. But it has recently emerged as a common theme of numerous talks and papers by respected particle physicists. With their field stuck at a nasty impasse, the researchers have returned to the master equations that describe the known particles and their interactions, and are asking:
What happens when you erase the terms in the equations having to do with mass and length?
Nature, at the deepest level, may not differentiate between scales. With scale symmetry, physicists start with a basic equation that sets forth a massless collection of particles, each a unique confluence of characteristics such as whether it is matter or antimatter and has positive or negative electric charge. As these particles attract and repel one another and the effects of their interactions cascade like dominoes through the calculations, scale symmetry “breaks,” and masses and lengths spontaneously arise.
Similar dynamical effects generate 99 percent of the mass in the visible universe. Protons and neutrons are amalgams -- each one a trio of lightweight elementary particles called quarks. The energy used to hold these quarks together gives them a combined mass that is around 100 times more than the sum of the parts. “Most of the mass that we see is generated in this way, so we are interested in seeing if it’s possible to generate all mass in this way,” said Alberto Salvio, a particle physicist at the Autonomous University of Madrid and the co-author of a recent paper on a scale-symmetric theory of nature.
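The "around 100 times" figure above checks out with simple arithmetic. The sketch below uses approximate current-quark and proton masses (round values I've assumed, not numbers from the article):

```python
# Rough check: a proton (two up quarks + one down quark) weighs roughly
# 100x the sum of its constituent quark masses; the rest is dynamically
# generated binding energy.
m_up, m_down = 2.2, 4.7          # MeV, approximate current-quark masses
m_proton = 938.3                 # MeV

quark_sum = 2 * m_up + m_down    # u + u + d, about 9 MeV
print(f"proton / quark sum ~ {m_proton / quark_sum:.0f}x")  # ~ 103x
```

That factor of roughly 100 is the dynamical mass generation Salvio describes, and the motivation for asking whether all mass could arise the same way.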
In the equations of the “Standard Model” of particle physics, only a particle discovered in 2012, called the Higgs boson, comes equipped with mass from the get-go. According to a theory developed 50 years ago by the British physicist Peter Higgs and associates, it doles out mass to other elementary particles through its interactions with them. Electrons, W and Z bosons, individual quarks and so on: All their masses are believed to derive from the Higgs boson -- and, in a feedback effect, they simultaneously dial the Higgs mass up or down, too.
Alessandro Strumia of the University of Pisa, pictured speaking at a conference in 2013, has co-developed a scale-symmetric theory of particle physics called “agravity.”
“The idea is that maybe even the Higgs mass is not really there,” said Alessandro Strumia, a particle physicist at the University of Pisa in Italy. “It can be understood with some dynamics.”
The concept seems far-fetched, but it is garnering interest at a time of widespread soul-searching in the field. When the Large Hadron Collider at CERN Laboratory in Geneva closed down for upgrades in early 2013, its collisions had failed to yield any of dozens of particles that many theorists had included in their equations for more than 30 years. The grand flop suggests that researchers may have taken a wrong turn decades ago in their understanding of how to calculate the masses of particles.
“We’re not in a position where we can afford to be particularly arrogant about our understanding of what the laws of nature must look like,” said Michael Dine, a professor of physics at the University of California, Santa Cruz, who has been following the new work on scale symmetry. “Things that I might have been skeptical about before, I’m willing to entertain.”
The Giant Higgs Problem
The scale symmetry approach traces back to 1995, when William Bardeen, a theoretical physicist at Fermi National Accelerator Laboratory in Batavia, Ill., showed that the mass of the Higgs boson and the other Standard Model particles could be calculated as consequences of spontaneous scale-symmetry breaking. But at the time, Bardeen’s approach failed to catch on. The delicate balance of his calculations seemed easy to spoil when researchers attempted to incorporate new, undiscovered particles, like those that have been posited to explain the mysteries of dark matter and gravity.
Instead, researchers gravitated toward another approach called “supersymmetry” that naturally predicted dozens of new particles. One or more of these particles could account for dark matter. And supersymmetry also provided a straightforward solution to a bookkeeping problem that has bedeviled researchers since the early days of the Standard Model.
In the standard approach to doing calculations, the Higgs boson’s interactions with other particles tend to elevate its mass toward the highest scales present in the equations, dragging the other particle masses up with it. “Quantum mechanics tries to make everybody democratic,” explained theoretical physicist Joe Lykken, deputy director of Fermilab and a collaborator of Bardeen’s. “Particles will even each other out through quantum mechanical effects.”
This democratic tendency wouldn’t matter if the Standard Model particles were the end of the story. But physicists surmise that far beyond the Standard Model, at a scale about a billion billion times heavier known as the “Planck mass,” there exist unknown giants associated with gravity. These heavyweights would be expected to fatten up the Higgs boson -- a process that would pull the mass of every other elementary particle up to the Planck scale. This hasn’t happened; instead, an unnatural hierarchy seems to separate the lightweight Standard Model particles and the Planck mass.
With his scale symmetry approach, Bardeen calculated the Standard Model masses in a novel way that did not involve them smearing toward the highest scales. From his perspective, the lightweight Higgs seemed perfectly natural. Still, it wasn’t clear how he could incorporate Planck-scale gravitational effects into his calculations.
Meanwhile, supersymmetry used standard mathematical techniques, and dealt with the hierarchy between the Standard Model and the Planck scale directly. Supersymmetry posits the existence of a missing twin particle for every particle found in nature. If for each particle the Higgs boson encounters (such as an electron) it also meets that particle’s slightly heavier twin (the hypothetical “selectron”), the combined effects would nearly cancel out, preventing the Higgs mass from ballooning toward the highest scales. Like the physical equivalent of x + (–x) ≈ 0, supersymmetry would protect the small but non-zero mass of the Higgs boson. The theory seemed like the perfect missing ingredient to explain the masses of the Standard Model -- so perfect that without it, some theorists say the universe simply doesn’t make sense.
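The x + (−x) ≈ 0 cancellation can be made concrete with a toy numerical sketch. This is not a real quantum field theory calculation -- the couplings below are invented, and the Planck energy is just a conventional round value -- but it shows how pairing each huge contribution with an equal-and-opposite superpartner contribution keeps the Higgs mass small.

```python
import math

# Toy sketch of supersymmetric cancellation (NOT a real loop calculation):
# each particle's quantum correction to the Higgs mass-squared is huge,
# of order the Planck scale squared, but its superpartner twin contributes
# the same amount with opposite sign.
planck = 1.22e19       # GeV, assumed Planck-scale cutoff
bare_higgs2 = 125.0**2 # GeV^2, the observed Higgs mass squared

corrections = []
for coupling in (0.7, -0.4, 1.0):          # made-up couplings
    fermion_loop = coupling * planck**2    # e.g. an electron's contribution
    boson_loop = -coupling * planck**2     # its twin (e.g. the "selectron")
    corrections += [fermion_loop, boson_loop]

higgs2 = bare_higgs2 + sum(corrections)    # the pairs cancel exactly here
print(f"{math.sqrt(higgs2):.1f} GeV")      # -> 125.0 GeV: hierarchy protected
```

With the twins removed, any one of those Planck-squared terms would drag the Higgs mass up to the Planck scale -- which is the hierarchy problem the article goes on to describe.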
Yet decades after their prediction, none of the supersymmetric particles have been found. “That’s what the Large Hadron Collider has been looking for, but it hasn’t seen anything,” said Savas Dimopoulos, a professor of particle physics at Stanford University who helped develop the supersymmetry hypothesis in the early 1980s. “Somehow, the Higgs is not protected.”
The LHC will continue probing for convoluted versions of supersymmetry when it switches back on next year, but many physicists have grown increasingly convinced that the theory has failed. Just last month at the International Conference on High Energy Physics in Valencia, Spain, researchers analyzing the largest data set yet from the LHC found no evidence of supersymmetric particles. (The data also strongly disfavors an alternative proposal called “technicolor.”)
The implications are enormous. Without supersymmetry, the Higgs boson mass seems as if it is reduced not by mirror-image effects but by random and improbable cancellations between unrelated numbers -- essentially, the initial mass of the Higgs seems to exactly counterbalance the huge contributions to its mass from gluons, quarks, gravitational states and all the rest. And if the universe is improbable, then many physicists argue that it must be one universe of many: just a rare bubble in an endless, foaming “multiverse.” We observe this particular bubble, the reasoning goes, not because its properties make sense, but because its peculiar Higgs boson is conducive to the formation of atoms and, thus, the rise of life. More typical bubbles, with their Planck-size Higgs bosons, are uninhabitable.
“It’s not a very satisfying explanation, but there’s not a lot out there,” Dine said.
As the logical conclusion of prevailing assumptions, the multiverse hypothesis has surged in begrudging popularity in recent years. But the argument feels like a cop-out to many, or at least a huge letdown. A universe shaped by chance cancellations eludes understanding, and the existence of unreachable, alien universes may be impossible to prove. “And it’s pretty unsatisfactory to use the multiverse hypothesis to explain only things we don’t understand,” said Graham Ross, an emeritus professor of theoretical physics at the University of Oxford.
The multiverse ennui can’t last forever.
“People are forced to adjust,” said Manfred Lindner, a professor of physics and director of the Max Planck Institute for Nuclear Physics in Heidelberg who has co-authored several new papers on the scale symmetry approach. The basic equations of particle physics need something extra to rein in the Higgs boson, and supersymmetry may not be it. Theorists like Lindner have started asking,
“Is there another symmetry that could do the job, without creating this huge amount of particles we didn’t see?”
Picking up where Bardeen left off, researchers like Salvio, Strumia and Lindner now think scale symmetry may be the best hope for explaining the small mass of the Higgs boson. “For me, doing real computations is more interesting than doing philosophy of multiverse,” said Strumia, “even if it is possible that this multiverse could be right.”
For a scale-symmetric theory to work, it must account for both the small masses of the Standard Model and the gargantuan masses associated with gravity. In the ordinary approach to doing the calculations, both scales are put in by hand at the beginning, and when they connect in the equations, they try to even each other out. But in the new approach, both scales must arise dynamically -- and separately -- starting from nothing.
“The statement that gravity might not affect the Higgs mass is very revolutionary,” Dimopoulos said.
A theory called “agravity” (for “adimensional gravity”) developed by Salvio and Strumia may be the most concrete realization of the scale symmetry idea thus far. Agravity weaves the laws of physics at all scales into a single, cohesive picture in which the Higgs mass and the Planck mass both arise through separate dynamical effects. As detailed in June in the Journal of High-Energy Physics, agravity also offers an explanation for why the universe inflated into existence in the first place. According to the theory, scale-symmetry breaking would have caused an exponential expansion in the size of space-time during the Big Bang.
However, the theory has what most experts consider a serious flaw: It requires the existence of strange particle-like entities called “ghosts.” Ghosts either have negative energies or negative probabilities of existing -- both of which wreak havoc on the equations of the quantum world.
“Negative probabilities rule out the probabilistic interpretation of quantum mechanics, so that’s a dreadful option,” said Kelly Stelle, a theoretical particle physicist at Imperial College, London, who first showed in 1977 that certain gravity theories give rise to ghosts. Such theories can only work, Stelle said, if the ghosts somehow decouple from the other particles and keep to themselves. “Many attempts have been made along these lines; it’s not a dead subject, just rather technical and without much joy,” he said.
Strumia and Salvio think that, given all the advantages of agravity, ghosts deserve a second chance. “When antimatter particles were first considered in equations, they seemed like negative energy,” Strumia said. “They seemed nonsense. Maybe these ghosts seem nonsense but one can find some sensible interpretation.”
Meanwhile, other groups are crafting their own scale-symmetric theories. Lindner and colleagues have proposed a model with a new “hidden sector” of particles, while Bardeen, Lykken, Marcela Carena and Martin Bauer of Fermilab and Wolfgang Altmannshofer of the Perimeter Institute for Theoretical Physics in Waterloo, Canada, argue in an Aug. 14 paper that the scales of the Standard Model and gravity are separated as if by a phase transition. The researchers have identified a mass scale where the Higgs boson stops interacting with other particles, causing their masses to drop to zero. It is at this scale-free point that a phase change-like crossover occurs. And just as water behaves differently than ice, different sets of self-contained laws operate above and below this critical point.
To get around the lack of scales, the new models require a calculation technique that some experts consider mathematically dubious, and in general, few will say what they really think of the whole approach. It is too different, too new. But agravity and the other scale symmetric models each predict the existence of new particles beyond the Standard Model, and so future collisions at the upgraded LHC will help test the ideas.
In the meantime, there’s a sense of rekindling hope.
“Maybe our mathematics is wrong,” Dine said. “If the alternative is the multiverse landscape, that is a pretty drastic step, so, sure -- let’s see what else might be.”
The Physicist Challenging the Big Bang Theory - M. Anderson
Do We Have the Big Bang Theory All Wrong?
One physicist’s radical reinterpretation of the cosmic microwave background.
Mark Anderson, 07/24/14
All that Hans-Jörg Fahr wants is for someone to prove him wrong. A professor of astrophysics at the University of Bonn in Germany, he has taken a stand against nearly the entire field of cosmology by claiming that the diffuse glow of background microwave radiation which bathes the sky is not, as is commonly believed, a distant echo of the Big Bang, the universe’s fiery moment of creation. The idea held by the cosmology community that tiny temperature fluctuations in this microwave background tell us about the clumpiness of the early universe, he says, is wrong. The rank and file cosmologist may as well be doing Rorschach tests.
Understandably, his ideas have met with skepticism among many. Glenn Starkman, a professor of physics and astronomy at Case Western Reserve University, puts it this way: “If you seek to replace a successful theory with an alternative, then [you] must demonstrate that your alternative explains a similarly full range of phenomena… In this task [Fahr and his colleagues] have not done due diligence.” But at the same time, Fahr’s ideas are rooted in physics that has already been proven in other systems, and they make falsifiable predictions. Pressed to defend his controversial position, the unorthodox theorist stands his ground. Whether he likes it or not, Fahr has become a cosmological iconoclast.
It didn’t start this way for Fahr. Throughout the 1970s and ’80s, Fahr says, he wholeheartedly supported the conventional Big Bang models of the universe while he pursued his own research into space physics. He’s made important contributions to the study of the solar wind (the stream of electrons and protons issuing from the sun) and the far solar system, where the solar wind slams into the gas and dust of interstellar space. He coined the term “heliopause” to describe this border region, which the Voyager spacecraft are exploring today. When he turned 65 in 2005, Fahr’s colleagues organized a symposium in his honor that focused on unsolved problems in solar wind physics. A colleague of Fahr’s at the University of Bonn describes him as “one of the cleverest people around here.”
In parallel with his successes in the physics of the solar wind, Fahr also pursued a more unorthodox line of inquiry. In the 1990s he became aware of what were, in his opinion, curious gaps in the standard interpretation of the cosmic microwave background. The universe is a clumpy place, filled with vast voids interspersed with narrow, stringy filaments of galaxies and galaxy clusters. Yet the microwave background is staggeringly uniform in temperature, to one part in 1,000. Cosmologists usually assume that the microwave background’s homogeneity reflects the homogeneity of the universe as it was shortly after the Big Bang. To get from this smooth-as-cream beginning to today’s spotty universe full of voids and filaments, cosmologists add a clumping agent to their model: mysterious dark matter particles, whose existence remains unconfirmed.
Fahr objects that this is just using one unknown to explain another unknown, and that there has to be a simpler solution. “If you take it seriously that you have a structured universe, then you need different models than used in [mainstream] cosmology,” says Fahr. “You need to pay attention to the fact that you have void and wall structures in the universe. And the expansion of the void structures is different from the expansion of the wall structures. And all of that makes the cosmos very much more complicated.”
With this in mind, Fahr set off to find a phenomenon that would naturally cause the universe to emanate a smooth microwave glow from all directions in space, like a glowing ember at a few degrees above absolute zero. He says he found one. “There was never a recombination event,” Fahr says of his model of the microwave background. “In my view [the microwave background] is just a kind of entropy feature of the cosmos as it is.”
In debating the interpretation of the cosmic microwave background, Fahr joins a long and distinguished line of heterodox astrophysicists, including the celebrated astronomers Halton Arp, Sir Fred Hoyle, and the Nobel Prize winner Hannes Alfvén. These skeptics have ascribed the microwave background to assortments of glowing clouds of gas, dust, and charged particles throughout the galaxy and nearby universe. These clumps of molecular interlopers, they claim, translate starlight bouncing around the universe into a quiet and dim bath of microwave light, a little bit like how the Earth’s atmosphere scatters blue sunlight to produce the daytime sky.
The problem with these alternative models has been that the cosmic microwave background is not patchy, like gas, dust, and charged particles are. It’s hard to see how patchwork quilts of clouds and plasmas can add up to a smooth, omnidirectional microwave glow.
In a controversial 2009 paper in the journal Annalen der Physik, Fahr suggested an answer to this problem, drawing on his own deep expertise in the solar wind. Space probes voyaging throughout the solar system for the past five decades have detected unexpected hot and cold spots in the solar wind as it works its way past the planets and toward interstellar space. These result from a kind of turbulent interaction of photons with other photons—an interaction which is usually impossible, but is enabled by the mediation of charged particles inside the solar wind.
In 2009 Fahr says he began to realize that the vacuum of space itself has a kind of remote kinship to a plasma. After all, modern physics describes the vacuum as frothy with virtual electric charges blipping into existence only to annihilate and blip back out again. Typically, though not always, these virtual particles are electrons and their antimatter counterparts positrons. So Fahr wondered: If the vacuum is an electron-positron plasma, then why wouldn’t it also enable the same photon-photon interactions that occur inside the solar wind?
If this were happening, then empty space itself could be the source of the microwave background. The photons of starlight that have been streaming through the universe over millions and billions of years interact with each other over time, gradually achieving a kind of thermal equilibrium, and translating hot point-sources of starlight into a dull all-sky glow. “It’s a very slow process which is operating,” says Fahr. “However, assuming you have time enough, then the diffusion is bringing you from stellar emissions to background emissions.”
Fahr says the effect should be observable in the lab. If laser light of a single wavelength were bounced back and forth in a vacuum for a half-year or more, its color should begin to smear, with some photons slipping into slightly higher wavelengths and others into slightly lower ones. “It is like a simulation of free space—like photons passing through cosmic space,” Fahr says. “I am predicting that the photons are not independent of each other in the long run. They interact with each other and redistribute their energies to other energies and other wavelengths.”
Fahr also suggests another experimental test that could decide between standard and alternative interpretations of the microwave background. According to conventional cosmology, the microwave background harkens back to when the universe had cooled enough to become transparent to light for the first time, about 300,000 years after the Big Bang. Previous to this cosmic epoch of “recombination,” the universe had been a dense and opaque plasma through which light could not propagate. When plasmas recombine, they produce a burst of light at a set of wavelengths characteristic of the energy levels of the hydrogen atom. This so-called “Lyman series” of spectral lines is a familiar landmark for anyone studying the behavior of plasmas in astronomy. But no evidence of a Lyman series has been observed in measurements of the microwave background.
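To see why any relic Lyman lines would now sit in the infrared, one can compute the rest-frame Lyman series from the Rydberg formula and shift it by the conventional redshift of recombination. This is my own hedged sketch -- z ≈ 1100 is an assumed round number for the recombination epoch, not a figure from the article:

```python
# Rest-frame Lyman series (transitions n -> 1 in hydrogen) from the
# Rydberg formula, then redshifted by z ~ 1100 to show the lines land
# in the far infrared today.
R = 1.0973731568e7   # Rydberg constant, 1/m
z = 1100             # assumed approximate redshift of recombination

for n in range(2, 6):
    lam_rest = 1 / (R * (1 - 1 / n**2))      # metres
    lam_obs = lam_rest * (1 + z)
    print(f"n={n}: rest {lam_rest*1e9:6.1f} nm -> observed {lam_obs*1e6:6.1f} um")
```

The Lyman-alpha line, for instance, starts at about 121.5 nm in the ultraviolet and ends up at roughly 134 µm, deep in the infrared where, as the article notes, galactic foregrounds make observations difficult.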
That doesn’t mean that such a series doesn’t exist. Fahr notes that any cosmic Lyman spectral lines would be strongly Doppler shifted over the past 13.5 billion years, and so would be strongest in the infrared part of the spectrum. No one has yet tried to observe the cosmic background radiation in the infrared, in part because it would be very difficult. The Milky Way galaxy is even noisier in the infrared than it is in the microwave, making cosmic signals even harder to tease out from contaminating foreground galactic noise. This year’s big cosmic microwave background discovery—claiming to uncover evidence of gravitational waves practically from the moment of the universe’s genesis, but potentially contaminated by foreground signals—offers a cautionary tale in this regard.
But if scientists looked for a Lyman spectrum in the infrared, and didn’t find it, it would be another chink in modern cosmology’s armor.
Joan Solà, a cosmologist at the University of Barcelona, gives points to Fahr for the ingenuity of his theory, but isn’t convinced. “His playing around with numbers is entertaining, but he cannot provide a closed story that is internally consistent in itself,” Solà says.
For example, one of the arguments Fahr makes for his vacuum microwave background theory is that it can explain the observed ratio of photons to matter particles in the universe (it’s 1 billion to one). But Solà points out that one of the numbers Fahr uses for this calculation (the ratio of hydrogen to helium in the universe) comes right out of standard Big Bang theory itself, making the argument internally inconsistent.
Fahr counters that, while Big Bang theories correctly predict helium-to-hydrogen ratios, some recent studies have found much less lithium in the universe than they predict, whereas some non-Big Bang models have claimed a better fit. By questioning the ratios of elements created through nucleosynthesis in the early universe, and the interpretation of the microwave background, Fahr is attacking two of the three main pillars of evidence supporting standard Big Bang theory. The third pillar is based on the observation that the farther away a galaxy is, the greater its redshift, which suggests that our universe is expanding. Yet in his 2009 paper, Fahr cites one study from 1993 that argues for a similar distance-redshift relationship in a non-expanding universe—one which had no Big Bang.
From Solà’s perspective, such doubting of the standard Big Bang model can quickly devolve into crackpot science. But he does not count Fahr as a crackpot. “I cannot stand kooks and illuminated fools,” Solà says. “Of course Fahr is nothing at all of this sort… He is a real scientist, and a good one by the way. But this is one thing, and the other is to buy all his ideas.” Even though he is a skeptic of Fahr’s unorthodox cosmology, Solà says the debate itself has value. “Science makes progress only because we disagree from time to time with the ancient ideas. So it is good to keep trying.”
Mark Anderson is a science and technology journalist who has written for Discover, Technology Review, Scientific American, Science, Wired, IEEE Spectrum, New Scientist, and Rolling Stone.
A New Take on the Cosmological Constant - C. Lee
Getting the math of the Universe to cancel out
New modification to gravity may explain the cosmological constant.
Chris Lee, 03/13/14
The vacuum of space isn't actually "empty"; it teems with particles that pop in and out of existence, giving the vacuum an energy of its own. But here's an embarrassing fact about that energy: it predicts that the cosmological constant (which provides a measure of the rate of the expansion of the Universe) should be 10^120 times larger than we think it actually is.
Most scientists prefer things to be a bit more accurate than this. Still, the main question on cosmologists' minds is not why the predicted and real values appear to be so different, but how it is that the vacuum energy does so little. An answer of sorts has recently appeared in Physical Review Letters. But before we get to the paper, let's delve into the nature of the problem it's trying to solve.
An expanding Universe
When Einstein was first formulating a new theory of gravity, his solutions predicted that the Universe was expanding. At the time, the Universe was widely regarded to be static, so Einstein added a constant that counteracted the expansion and kept the Universe unchanging. Everyone rejoiced -- electromagnetism, space, time, and gravity could all live together in harmony.
Later, Edwin Hubble took advantage of a new generation of telescopes to measure the speed at which distant galaxies were moving. He found that the further away a galaxy was, the faster it was moving away from us. The conclusion was inescapable: the Universe was expanding. Everyone chuckled over Einstein's big goof and got on with the business of crashing the economy and going to live in Hooverville.
Fast forward to the turn of the century, where yet another generation of telescopes -- combined with an excellent understanding of how a particular type of supernova worked -- allowed scientists to measure whether the rate at which the Universe expands is constant or not. Turns out it's not; every day, the Universe expands a bit faster than it did the day before. Inflation, it seems, is a physical as well as an economic universal, and Einstein's cosmological constant was back (albeit in altered form).
Funnily enough, it wouldn't have mattered whether the new cosmological constant was positive, negative, or zero -- problems were going to arise. This is because Einstein's work had also established that mass and energy are two sides of the same coin. Since mass causes space and time to warp, so too should energy. At the time, no one gave the issue a second thought because we thought that most of space was empty vacuum.
Unfortunately, it turns out that the vacuum is anything but empty. And since it has energy, it should curve space and time. In other words, the vacuum of space should contain enough energy to curl the Universe up into a tight little ball or blow it apart so fast that no stars could ever form (it depends on whether the energy is positive or negative).
Given our current data, there's no argument over the approximate value of the cosmological constant: it is small and positive. So why doesn't the vacuum energy bend space and time? When physicists bolt the quantum vacuum energy on to general relativity, they get absurd results unless some kind of correction factor (to the tune of 10^120) is carefully added to counteract the vacuum. This fine-tuning bothers people because there is simply no way to obtain these numbers naturally.
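The size of that mismatch can be reproduced with a back-of-envelope estimate. The sketch below is not the calculation from the paper: it simply cuts the vacuum energy off at the Planck scale (an energy density of order c^7/ħG^2) and divides by a commonly quoted observed dark-energy density. The exponent it yields (~123) depends on conventions and cutoff choices, which is why the figure is usually quoted as "about 10^120".

```python
import math

# Naive vacuum-energy estimate: cut quantum fluctuations off at the
# Planck scale, giving an energy density of order c**7 / (hbar * G**2),
# and compare with the observed dark-energy density. All values are
# rounded textbook constants; the result is an order-of-magnitude check.
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

planck_density = c**7 / (hbar * G**2)   # roughly 1e113 J/m^3
observed_density = 6.0e-10              # J/m^3, a rough observed value

# How many orders of magnitude apart are prediction and observation?
print(math.log10(planck_density / observed_density))
```

The precise exponent is not the point; the point is that no known mechanism naturally supplies a cancellation of a hundred-plus orders of magnitude.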
A new idea
Enter the new work by Nemanja Kaloper (UC-Davis) and Antonio Padilla (University of Nottingham), who have proposed a modification to general relativity that naturally generates a small cosmological constant. According to the researchers, the cosmological constant should be treated as the average of the vacuum contribution over all space and time. When this happens, the local vacuum energy contributions appear twice in the equations with opposite signs. No matter what energy the vacuum has right now, it can't bend space and time -- think of it as pushing with one hand and pulling with the other.
The residual cosmological constant is a kind of historical average. That is, all the fluctuations in the vacuum from the beginning of time up to this moment contribute to the cosmological constant we observe now. In the early Universe, this created a large cosmological constant that drove inflation. Later, as the Universe cooled, the cosmological constant became small. Even later, it may change sign, causing the Universe to begin contracting.
One other implication is that the Universe has to be finite in both space and time.
Now, it might be possible to separate this idea out from all the others by trying to piece together the history of the expansion of the Universe in more detail. Alternatively, we can wait about 100 billion years to see if the expansion of the Universe begins to slow. Otherwise, it's going to be very difficult to generate observational evidence to support the idea.
Physical Review Letters, 2014, DOI: 10.1103/PhysRevLett.112.091304
The Probability that the Universe Exists - R. Thomas
Why are we here?
Rachel Thomas, 02/24/14
David Sloan has a great response for someone asking him what he does for a living:
"I calculate how likely it is that the Universe exists."
This impressive job description comes from Sloan’s role as a post-doctoral researcher at the University of Cambridge working on the project, Establishing the Philosophy of Cosmology. The project aims to bring together philosophers and cosmologists to engage with the big philosophical questions in the area. Sloan is working on the measure problem in cosmology:
"How likely it is, given a random set of initial conditions, that you would end up with a universe that looks rather like ours."
This simple idea immediately conjures a whole range of questions, philosophical and cosmological:
How do we predict what universe we will end up with, starting from some initial conditions?
How do we calculate the probability of such a universe looking like ours? And
just what do we mean by a "universe like ours" anyway?
How did we get here?
The evolution of the Universe is described by the Friedmann equations developed in 1922 by Alexander Friedmann. He used Einstein's theory of general relativity to predict that the Universe was expanding. At the time this was a revolutionary idea, leading Einstein to dismiss Friedmann's equations as a "mathematical curiosity" but today this is part of our standard picture of cosmology. (You can read more about this in What happened before the Big Bang?.)
The equations describe how the Universe evolved during the early phase of inflation, the brief period of massive expansion that occurred shortly after the Big Bang. Echoes of the resulting structure were encoded in an initial burst of radiation which is known as the cosmic microwave background radiation (CMB). "My work was to look at what range of initial parameters led to observations, such as those of the CMB as seen by the WMAP and Planck satellites, and how we can put a probability measure on this." (You can read more about this in What Planck saw.)
In Friedmann's equation for a flat universe:

H² = (8πG/3c²)ρ

ρ is taken to be the energy density of the Universe (consisting of energy stored in fields and matter, thanks to Einstein's E = mc²), which changes over time.
H is the Hubble parameter, which indicates that the Universe is expanding whenever H is non-zero.
When Hubble observed the redshift of distant galaxies (redshift is the light equivalent of the Doppler effect you hear when an ambulance passes you), he showed that H is, in fact, positive. The remaining parts of the equation are the familiar constants π, G (the gravitational constant) and c (the speed of light).
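Hubble's observation is usually summarised as the linear law v = H0·d: recession velocity grows in proportion to distance. A minimal sketch, using the commonly quoted round value H0 ≈ 70 (km/s)/Mpc (an approximation chosen for illustration, not a precise measurement):

```python
# Hubble's law: a galaxy's recession velocity is proportional to its
# distance. H0 is taken as a round illustrative value of the Hubble
# parameter, in (km/s) per megaparsec.
H0 = 70.0

def recession_velocity(d_mpc):
    """Recession velocity in km/s for a galaxy d_mpc megaparsecs away."""
    return H0 * d_mpc

for d in (10, 100, 1000):
    print(d, recession_velocity(d))  # farther galaxies recede faster
```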
There are many possible solutions to the Friedmann equations, each representing a possible universe. Some may be like ours; others may be quite different, say one lasting only a few moments, or one where galaxies and stars never form. Together all these form a space of possible solutions. Universes with different properties form different regions in this space, and the relative volumes of these regions represent the relative probability, according to our model, that each type of universe will evolve. Sloan works out what these solution spaces look like, identifying these different regions, and how to calculate their volumes to give the probability that universes with particular sets of properties will evolve.
The actual calculations he works with are very complicated. But you can picture it, vastly simplified, in terms of the possible values some parameter could take. Suppose a parameter, let's call it λ, goes into our model of universe evolution, and that λ can take any value between 0 and 1. Then the properties of the resulting universe might depend on the value of λ: say, if λ falls below some tiny critical value, there is not enough inflation to match the observations we see from satellites in our Universe, while above that value enough inflation occurs, resulting in a universe producing observations like those we've seen in ours. So when considering the range of values that our parameter λ can take, it is overwhelmingly likely (with a probability of something like 0.99999) that this model gives a universe like ours. (You can read more of the details in this article in New Scientist and the paper it's based on. And you can find out more about how our Universe evolved in the Plus article, From planets to universes.)
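The volume-counting idea can be sketched in a few lines of code. The toy model below is purely illustrative: the parameter lam, its uniform measure, and the threshold 0.00001 are assumptions chosen only to reproduce the 0.99999 figure quoted above. The probability of a "universe like ours" is then just the fraction of parameter space clearing the threshold, estimated here by Monte Carlo sampling.

```python
import random

# Toy measure calculation: sample the hypothetical parameter lam
# uniformly on [0, 1] and count the fraction of sampled "universes"
# that produce enough inflation. The threshold is an illustrative
# value chosen to match the 0.99999 probability quoted in the text.
THRESHOLD = 0.00001

def universe_like_ours(lam):
    """A sampled universe matches our observations if lam clears the threshold."""
    return lam >= THRESHOLD

def probability_like_ours(n_samples=1_000_000, seed=0):
    rng = random.Random(seed)
    hits = sum(universe_like_ours(rng.random()) for _ in range(n_samples))
    return hits / n_samples

print(probability_like_ours())  # an estimate close to 1 - 0.00001 = 0.99999
```

In Sloan's actual work the "parameter space" is the space of solutions to the Friedmann equations and choosing the measure on it is the hard part; this sketch shows only the counting step that follows.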
You are here
But what exactly do we mean by a "universe like ours"? The pragmatic way to answer this is to say it is a universe that results in observations like those we have made in our own. Data like the cosmic microwave background radiation (the CMB) have dramatically shaped the picture we have of our own Universe, and any universe "like ours" should generate the same observations we see in the CMB and other data sources. One complication, though, is that we only have our Universe to look at: we don't have any alternative examples to compare it to, or probe for information.
"One of the things that cosmology has that's different from the rest of physics, and actually puts us much more in line with, say, the social sciences, is that we can only really do natural experiments," says Sloan. "We can't design a universe. We can't build a machine, put some initial conditions in, fire it up in a lab and see what happens. We can only observe the Universe as it is."
This is a similar situation to astronomy, but with one significant difference: "Astronomers don’t get to build stars and see what happens, they can only observe them. But they have lots and lots of stars to look at. In some sense, we have precisely one data point: the one universe we live in. And as anyone who has done GCSE science will tell you, one data point is pretty terrible for doing science."
This lack of other data points requires a different approach and it is here that a philosophical perspective can be so useful. The problem of having such a convincing piece of evidence – that our Universe exists – and a lack of any other evidence, could lead to the interpretation that the Universe is fine-tuned for life. If you consider the abundance of carbon in the Universe you might think that abundance is a miracle as carbon is necessary for life, as we know it, to exist. "If we tweaked the parameters this way or that way there wouldn’t have been any carbon and we wouldn’t be here," says Sloan. But this interpretation can start to look shaky with a change of perspective. "There's a good point made by [Douglas Adams] which is that suppose there's a puddle sitting in a hole. It would ask, 'Wow! Isn't it amazing that there's a hole just this shape that I fit into it? This hole must have been designed for me.' " So rather than explaining our perfectly tuned Universe by invoking some benevolent creator, you can instead call on something called the anthropic principle: that our Universe has to be favourable to life, as otherwise we wouldn't be observing it.
Instead it is necessary to take a step back, and consider how broad a spectrum of possible universes resulting as solutions of the Friedmann equations would be acceptable. Are we only interested in universes that we could exist in, or would any universe that allows for some intelligent observer be one of those we should allow for, humanity being just one particular example of such an observer? "If I do a calculation and [find that if] this number were three instead of seven then everything would be the same but we would be green, would we say that's acceptable or not? Or if different numbers had come out but in those cases the living things would have been built out of silicon instead of carbon. Of course, these kinds of questions are phenomenally difficult to actually get a handle on, we don't have the technology to evolve things to the point of life and so on. But what you can say is, under [certain] conditions there would be far too much radiation, and so we could rule out, at least, life as we know it. And under [other] conditions stars would only last for 15 seconds, so we can rule out anything having evolved under those.”
Why are we here?
This all sounds like a good game for philosophers and for those seeking to explain our place in the Universe, but Sloan's research also provides us with a practical tool for testing our theories of physics. "[Exploring the measure problem] actually gives you a degree of confidence, or lack of confidence, in the explanatory power of your models of physics," says Sloan. For example, suppose his calculations said that, given a particular model of physics, it is incredibly unlikely that a universe like ours evolved (ie, that the probability of such a universe evolving was very close to zero). Then we can ask ourselves, is it more likely that the model is right and we ended up here despite this tiny probability, or is it more likely that our model was wrong in the first place?
"If I said, either physics is wrong or I rolled a die six billion times and it came up six every time, then which of these is more likely? It's more likely that physics is wrong and at that point, as a scientist, you would throw away the physics." So these probability calculations open a window on questions that we otherwise might not be able to interrogate, providing a way to test the untestable.
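Sloan's die analogy can be made concrete with a one-line order-of-magnitude estimate (the six billion rolls are his rhetorical figure, not a measured quantity):

```python
import math

# Log-probability of a fair six-sided die coming up six on every one
# of six billion rolls -- the rhetorical figure from the quote above.
n_rolls = 6_000_000_000
log10_p = n_rolls * math.log10(1 / 6)
print(log10_p)  # about -4.7e9: a probability of ~10^(-4.7 billion)
```

No confidence we could ever place in a physical model survives comparison with a number that small, which is exactly the point of the argument.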
"The nature of these things is normally that the probabilities turn out to either be very close to zero or very close to one," says Sloan. If the probability turns out to be close to one, then you can rest easy that your model of physics provides a good explanation of the evolution of the Universe. For example, Sloan has used this approach to examine theories of loop quantum gravity, where the Universe bounces from contracting to expanding instead of starting from an initial big bang. The advantage of this theory over others is that you can do calculations at the point of the bounce, where other theories break down. "There you find there is an incredibly high probability of the Universe looking the way it does currently. [Given the assumptions of the model] then the probability you get the WMAP observations is about one minus something like one part in a million. So it's a really, really, really, highly likely thing," says Sloan. This is a great relief to proponents of this theory: "It says that, yes, if this is the right theory of physics, don't worry. Observations haven't told you to throw away [this theory of] physics."
Eventually, Sloan would like to develop a kind of black box to tackle the measure problem. "One thing that would be very nice, but will take a lot of work, is to create a method of taking in fundamental models of physics – your string theories, your causal set theories, your no-boundary proposal... all of these extensions to general relativity – taking these in, turning the handle, and it spits out a number for you. It tells you 'This is incredibly probable' or 'This is not incredibly probable.' Because then you could use [this box] as a test of some of the theories that [otherwise] are very much un-testable."
Investigating the measure problem allows us a glimpse of an answer to the biggest question of all:
why are we here?
Who knows when we will answer that question, but Sloan's research will definitely give us a hint.
Theories of Why the Universe Exists - T. Maudlin
The calibrated cosmos
Is our universe fine-tuned for the existence of life – or does it just look that way from where we’re sitting?
Some things occur just by chance. Mark Twain was born on the day that Halley’s comet appeared in 1835 and died on the day it reappeared in 1910. There is a temptation to linger on a story like that, to wonder if there might be a deeper order behind a life so poetically bracketed. For most of us, the temptation doesn’t last long. We are content to remind ourselves that the vast majority of lives are not so celestially attuned, and go about our business in the world. But some coincidences are more troubling, especially if they implicate larger swathes of phenomena, or the entirety of the known universe. During the past several decades, physics has uncovered basic features of the cosmos that seem, upon first glance, like lucky accidents. Theories now suggest that the most general structural elements of the universe -- the stars and planets, and the galaxies that contain them -- are the products of finely calibrated laws and conditions that seem too good to be true. What if our most fundamental questions, our late-at-night-wonderings about why we are here, have no more satisfying answer than an exasperated shrug and a meekly muttered ‘Things just seem to have turned out that way’?
It can be unsettling to contemplate the unlikely nature of your own existence, to work backward causally and discover the chain of blind luck that landed you in front of your computer screen, or your mobile, or wherever it is that you are reading these words. For you to exist at all, your parents had to meet, and that alone involved quite a lot of chance and coincidence. If your mother hadn’t decided to take that calculus class, or if her parents had decided to live in another town, then perhaps your parents never would have encountered one another. But that is only the tiniest tip of the iceberg. Even if your parents made a deliberate decision to have a child, the odds of your particular sperm finding your particular egg are one in several billion. The same goes for both your parents, who had to exist in order for you to exist, and so already, after just two generations, we are up to one chance in 10^27. Carrying on in this way, your chance of existing, given the general state of the universe even a few centuries ago, was almost infinitesimally small. You and I and every other human being are the products of chance, and came into existence against very long odds.
And just as your own existence seems, from a physical point of view, to have been wildly unlikely, the existence of the entire human species appears to have been a matter of blind luck. Stephen Jay Gould argued in 1994 that the detailed course of evolution is as chancy as the path of a single sperm cell to an egg. Evolutionary processes do not innately tend toward Homo sapiens, or even mammals. Rerun the course of history with only a slight variation and the biological outcome might have been radically different. For instance, if the asteroid hadn’t struck the Yucatán 66 million years ago, dinosaurs might still have the run of this planet, and humans might never have evolved.
It can be emotionally difficult to absorb the radical contingency of humanity. Especially if you have been culturally conditioned by the biblical creation story, which makes humans out to be the raison d’être of the entire physical universe, designated lords of a single, central, designed, habitable region. Nicolaus Copernicus upended this picture in the 16th century by relocating the Earth to a slightly off-centre position, and every subsequent advance in our knowledge of cosmic geography has bolstered this view -- that the Earth holds no special position in the grand scheme of things. The idea that the billions of visible galaxies, to say nothing of the expanses we can’t see, exist for our sake alone is patently absurd. Scientific cosmology has consigned that notion to the dustbin of history.
So far, so good, right? As tough as it is to swallow, you can feel secure in the knowledge that you are an accident and that humanity is, too. But what about the universe itself? Can it be mere chance that there are galaxies at all, or that the nuclear reactions inside stars eventually produce the chemical building blocks of life from hydrogen and helium? According to some theories, the processes behind these phenomena depend on finely calibrated initial conditions or unlikely coincidences involving the constants of nature. One could always write them off to fortuitous accident, but many cosmologists have found that unsatisfying, and have tried to find physical mechanisms that could produce life under a wide range of circumstances.
Ever since the 1920s when Edwin Hubble discovered that all visible galaxies are receding from one another, cosmologists have embraced a general theory of the history of the visible universe. In this view, the visible universe originated from an unimaginably compact and hot state. Prior to 1980, the standard Big Bang models had the universe expanding in size and cooling at a steady pace from the beginning of time until now. These models were adjusted to fit observed data by selecting initial conditions, but some began to worry about how precise and special those initial conditions had to be.
For example, Big Bang models attribute an energy density -- the amount of energy per cubic centimetre -- to the initial state of the cosmos, as well as an initial rate of expansion of space itself. The subsequent evolution of the universe depends sensitively on the relation between this energy density and the rate of expansion. Pack the energy too densely and the universe will eventually recontract into a big crunch; spread it out too thin and the universe will expand forever, with the matter diluting so rapidly that stars and galaxies cannot form. Between these two extremes lies a highly specialised history in which the universe never recontracts and the rate of expansion eventually slows to zero. In the argot of cosmology, this special situation is called Ω = 1. Cosmological observation reveals that the value of Ω for the visible universe at present is quite near to 1. This is, by itself, a surprising finding, but what’s more, the original Big Bang models tell us that Ω = 1 is an unstable equilibrium point, like a marble perfectly balanced on an overturned bowl. If the marble happens to be exactly at the top it will stay there, but if it is displaced even slightly from the very top it will rapidly roll faster and faster away from that special state.
This is an example of cosmological fine-tuning. In order for the standard Big Bang model to yield a universe even vaguely like ours now, this particular initial condition had to be just right at the beginning. Some cosmologists balked at this idea. It might have been just luck that the Solar system formed and life evolved on Earth, but it seemed unacceptable for it to be just luck that the whole observable universe should have started so near the critical energy density required for there to be cosmic structure at all.
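The marble analogy can be checked numerically. A toy sketch, assuming a matter-dominated universe, in which the standard flatness relation reads dΩ/dN = Ω(Ω − 1), with N the logarithm of the scale factor; the initial offsets below are arbitrary illustrative values:

```python
# Toy Euler integration of the flatness relation for a matter-dominated
# universe: dOmega/dN = Omega * (Omega - 1), where N = ln(scale factor).
# Omega = 1 is a fixed point, but any small offset is amplified as the
# universe expands -- the "marble on an overturned bowl".
def evolve_omega(omega0, n_efolds, steps=100_000):
    omega = omega0
    dn = n_efolds / steps
    for _ in range(steps):
        omega += omega * (omega - 1.0) * dn
    return omega

print(evolve_omega(1.0, 5))     # exactly flat stays exactly flat
print(evolve_omega(1.001, 5))   # a small excess grows
print(evolve_omega(0.999, 5))   # a small deficit drifts further below 1
```

After only five e-folds of expansion the one-part-in-a-thousand offsets have grown by two orders of magnitude, which is why starting "near Ω = 1" demands such delicate initial tuning.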
And that’s not the only fine-tuned initial condition implied by the original Big Bang model. If you train a radio-telescope at any region of the sky, you observe a cosmic background radiation, the so-called ‘afterglow of the Big Bang’. The strange thing about this radiation is that it is quite uniform in temperature, no matter where you measure it. One might suspect that this uniformity is due to a common history, and that the different regions must have arisen from the same source. But according to the standard Big Bang models they don’t. The radiation traces back to completely disconnected parts of the initial state of the universe. The uniformity of temperature would therefore already have had to exist in the initial state of the Big Bang and, while this initial condition was certainly possible, many cosmologists feel this would be highly implausible.
In 1980, the American cosmologist Alan Guth proposed a different scenario for the early universe, one that ameliorated the need for special initial conditions in accounting for the uniformity of background radiation and the energy density of the universe we see around us today. Guth dubbed the theory ‘inflation’ because it postulates a brief period of hyper-exponential expansion of the universe, occurring shortly after the Big Bang. This tremendous growth in size would both tend to ‘flatten’ the universe, driving Ω very close to 1 irrespective of what it had been before, and would imply that the regions from which all visible background radiation originated did, in fact, share a common history.
At first glance, the inflationary scenario seems to solve the fine-tuning problem: by altering our story about how the universe evolved, we can make the present state less sensitive to precise initial conditions. But there are still reasons to worry, because, after all, inflation can’t just be wished into existence; we have to postulate a physical mechanism that drives it. Early attempts to devise such a mechanism were inspired by the realisation that certain sorts of field -- in particular, the hypothesised Higgs field -- would naturally produce inflation. But more exact calculations showed that the sort of inflation that would arise from this Higgs field would not produce the universe we see around us today. So cosmologists cut the Gordian knot: instead of seeking the source of the inflation in a field already postulated for some other reason, they simply assume a new field -- the ‘inflaton’ field -- with just the characteristics needed to produce the phenomena.
Unfortunately, the phenomena to be explained, which include not just the present energy density and background radiation but also the formation and clustering of galaxies and stars, require that the inflation take a rather particular form. This ‘slow-roll’ inflation in turn puts very strict constraints on the form of the inflaton field. The constraints are so severe that some cosmologists fear one form of fine-tuning (exact initial conditions in the original Big Bang theory) has just been traded for another form (the precise details of the inflaton field). But the inflationary scenario fits so well with the precise temperature fluctuations of the background radiation that an inflationary epoch is now an accepted feature of the Big Bang theory. Inflation itself seems here to stay, even while the precise mechanism for inflation remains obscure, and worryingly fine-tuned.
Here we reach the edge of our understanding, and a deep, correlative uncertainty about whether there is a problem with our current explanations of the universe. If the origin of the inflaton field is unknown, how can one judge whether its form is somehow ‘unusual’ and ‘fine-tuned’ rather than ‘completely unsurprising’? As we have seen, the phenomena themselves do not wear such a designation on their sleeves. What is merely due to coincidence under one physical theory becomes the typical case under another and, where the physics itself is unclear, judgments about how ‘likely’ or ‘unlikely’ a phenomenon is become unclear as well. This problem gets even worse when you consider certain ‘constants of nature’.
Just as the overall history and shape of the visible universe depends upon special initial conditions in the original Big Bang model, many of the most general features of the visible universe depend quite sensitively on the precise values of various ‘constants of nature’. These include the masses of the fundamental particles (quarks, electrons, neutrinos, etc) as well as physical parameters such as the fine-structure constant that reflect the relative strength of different forces. Some physicists have argued that, had the values of these ‘constants’ been even slightly different, the structure of the universe would have been altered in important ways. For example, the proton is slightly lighter than the neutron because the down quark is slightly heavier than the up, and since the proton is lighter than the neutron, a proton cannot decay into a neutron and a positron. Indeed, despite intensive experimental efforts, proton decay has never been observed at all. But if the proton were sufficiently heavier than the neutron, protons would be unstable, and all of chemistry as we know it would be radically changed.
Similarly, it has been argued that if the fine-structure constant, which characterises the strength of the electromagnetic interaction, differed by only 4 per cent, then carbon would not be produced by stellar fusion. Without a sufficient abundance of carbon, carbon-based life forms could not exist. This is yet another way that life as we know it could appear to be radically contingent. Had the constants of nature taken slightly different values, we would not be here.
The details of these sorts of calculations should be taken with a grain of salt. It might seem like a straightforward mathematical question to work out what the consequences of twiddling a ‘constant’ of nature would be, but think of the tremendous intellectual effort that has had to go into figuring out the physical consequences of the actual values of these constants. No one could sit down and rigorously work out an entirely new physics in a weekend. And even granting the main conclusion, that many of the most widespread structures of the universe and many of the more detailed physical structures that support living things depend sensitively on the values of these constants -- what follows?
Some physicists simply feel that the existence of stars and planets and life ought not to require so much ‘luck’. They would prefer a physical theory that yields the emergence of these structures as typical and robust phenomena, not hostage to a fortunate throw of the cosmic dice that set the values of the constants. Of course, the metaphor of a throw of the cosmic dice is unfortunate: if a ‘constant of nature’ really is a fixed value, then it was not the product of any chancy process. It is not at all clear what it means to say, in this context, that the particular values that obtain were ‘improbable’ or ‘unlikely’.
If, however, we think that the existence of abundant carbon in the universe ought not to require a special and further unexplained set of values for the constants of nature, what options for explanation do we have? We have seen how changing the basic dynamics of the Big Bang can make some phenomena much less sensitive to the initial conditions, and so appear typical rather than exceptional. Could any sort of physics provide a similar explanation for the values of the ‘constants of nature’ themselves?
One way to counter the charge that an outcome is improbable is to increase the number of chances it has to occur. The chance that any particular sperm will find an egg is small, but the large number of sperm overcomes this low individual chance so that offspring are regularly produced. The chance of a monkey writing Hamlet by randomly hitting keys on a typewriter is tiny, but given enough monkeys and typewriters, the hypothetical probability of producing a copy of the play approaches 100 per cent. Similarly, even if the ‘constants of nature’ have to fall in a narrow range of values for carbon to be produced, make enough random choices of values and at least one choice might yield the special values. But how could there be many different ‘choices’ of the constants of nature, given that they are said to be constant?
String theory provides a possibility. According to it, space-time has more dimensions than are immediately apparent, and these extra dimensions beyond four are ‘compactified’ or curled up at microscopic scale, forming a Calabi-Yau manifold. The ‘constants of nature’ are then shown to be dependent on the exact form of the compactification. There are hundreds of thousands, and possibly infinitely many, distinct possible Calabi-Yau manifolds, and so correspondingly many ways for the ‘constants of nature’ to come out. If there is a mechanism for all of these possibilities to be realised, it will be likely that at least one will correspond to the values we observe.
One theory of inflation, called eternal inflation, provides a mechanism that would lead to all possible manifolds. In this theory, originally put forth by the cosmologists Andrei Linde at Stanford and Alexander Vilenkin at Tufts, the universe is much, much larger and more exotic than the visible universe of which we are aware. Most of this universe is in a constant state of hyper-exponential inflation, similar to that of the inflationary phase of the new Big Bang models. Within this expanding region, ‘bubbles’ of slowly expanding space-time are formed at random, and each bubble is associated with a different Calabi-Yau compactification, and hence with different ‘constants of nature’. As with the monkeys and the typewriters, just the right combination is certain to arise given enough tries, and so, it’s no wonder that life-friendly universes such as ours exist, and it’s also no wonder that living creatures such as ourselves would be living in one.
There is one other conceptual possibility for overcoming fine-tuning that is worth our consideration, even if there is no explicit physics to back it up yet. In this scenario, the universe’s dynamics do not ‘aim’ at any particular outcome, nor does the universe randomly try out all possibilities, and yet it still tends to produce worlds in which physical quantities might appear to have been adjusted to one another. The name for this sort of physical process is homeostasis.
Here is a simple example. When a large object starts falling through the atmosphere, it initially accelerates downward due to the force of gravity. As it falls faster, air resistance increases, and that opposes the gravitational force. Eventually, the object reaches a terminal velocity where the drag exactly equals the force of gravity, the acceleration stops, and the object falls at a constant speed.
Suppose intelligent creatures evolved on such a falling object after it had reached the terminal velocity. They develop a theory of gravity, on the basis of which they can calculate the net gravitational force on their falling home. This calculation would require determining the exact composition of the object through its whole volume in order to determine its mass. They also develop a theory of drag. The amount of drag produced by part of the surface of the object would be a function of its precise shape: the smoother the surface, the less drag. Since the object is falling at a constant speed, the physics of these creatures would include a ‘constant of nature’ relating shapes of the surface to forces. In order to calculate the total drag on the object, the creatures would have to carefully map the entire shape of the surface and use their ‘constant of nature’.
Having completed these difficult tasks, our creatures would discover an amazing ‘coincidence’: the total gravitational force, which is a function of the volume and composition of the object, almost exactly matches the total drag, which is a function of the shape of the surface! It would appear to be an instance of incredible fine-tuning: the data that go into one calculation would have nothing to do with the data that go into the other, yet the results match. Change the composition without changing the surface shape, or change the surface shape without changing the composition, and the two values would no longer be (nearly) equal.
But this ‘miraculous coincidence’ would be no coincidence at all. The problem is that our creatures would be treating the velocity of the falling object as a ‘constant of nature’ -- after all, it has been a constant as long as they have existed -- even though it is really a variable quantity. When the object began to fall, the force of gravity did not balance the drag. The object therefore accelerated, increasing the velocity and hence increasing the drag, until the two forces balanced. Similarly, we can imagine discovering that some of the quantities we regard as constants are not just variable between bubbles but variable within bubbles. Given the right set of opposing forces, these variables could naturally evolve to stasis, and hence appear later as constants of nature. And stasis would be a condition in which various independent quantities have become ‘fine-tuned’ to one another.
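The balance the creatures would marvel at can be seen in a few lines of simulation. This is a minimal sketch with made-up parameter values (mass, drag coefficient, quadratic drag law), not anything drawn from the text: the velocity, left to evolve, settles exactly where drag equals gravity.

```python
# Sketch of the falling-object "homeostasis": gravity is constant, drag
# grows with speed, and the velocity evolves until the two forces balance.
# All parameter values are illustrative assumptions.
import math

g = 9.8   # gravitational acceleration (m/s^2)
m = 10.0  # mass of the object (kg), assumed
k = 0.5   # drag coefficient (N per (m/s)^2), assumed quadratic drag

def simulate(t_end=60.0, dt=0.01):
    """Integrate m*dv/dt = m*g - k*v^2 with a simple Euler step."""
    v = 0.0
    for _ in range(int(t_end / dt)):
        a = g - (k / m) * v * v
        v += a * dt
    return v

v_final = simulate()
v_terminal = math.sqrt(m * g / k)  # speed at which drag exactly equals gravity
print(v_final, v_terminal)         # the two agree closely
```

Change the mass without changing the drag coefficient, or vice versa, and the system simply settles at a different terminal velocity: the "fine-tuning" re-establishes itself automatically, which is the whole point of the homeostasis proposal.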
The problem of cosmological fine-tuning is never straightforward. It is not clear, in the first place, when it is legitimate to complain that a physical theory treats some phenomenon as a highly contingent ‘product of chance’. Where the complaint is legitimate, the cosmologist has several different means of recourse. The inflationary Big Bang illustrates how a change in dynamics can convert delicate dependence on initial conditions to a robust independence from the initial state. The bubble universe scenario demonstrates how low individual probabilities can be overcome by multiplying the number of chances. And homeostasis provides a mechanism for variable quantities to naturally evolve to special unchanging values that could easily be mistaken for constants of nature.
But our modern understanding of cosmology does demote many facts of central importance to humans -- in particular the very existence of our species -- to mere cosmic accident, and none of the methods for overcoming fine-tuning hold out any prospect for reversing that realisation. In the end, we might just have to accommodate ourselves to being yet another accident in an accidental universe.
Tim Maudlin is professor of philosophy at New York University. His latest book is Philosophy of Physics: Space and Time (2012).
When Will the Universe End? - D. Goldberg
When will the universe end?
Dave Goldberg, 10/30/13
Robert Frost famously noted that "Some say the world will end in fire / Some say in ice." Lucky us! We're pretty sure we know the answer: it's ice. But how long do we have until the end of time, and what will it look like? In this week's "Ask a Physicist," we'll find out.
Reader Tony Phan asks:
In an article a few months ago, you said that, if the Big Rip theory is valid, the universe will undergo the rip in 80 billion years. However, you state your disbelief that the Big Rip is a valid theory, being that you believe w = -1. When, then, do you think the universe will end?
Our own Annalee Newitz has long speculated about the end of the world. I, myself, have even dabbled, although my thinking tends to be on the scale of billions of years, rather than hundreds or thousands. But these thoughts tend to be human-centric, and we may or may not be the final word on consciousness in our universe. So how long does life itself have, human or otherwise?
As recently as a few decades ago, there seemed to be the very real possibility that the universe would end by collapsing in on itself, at which point all supercivilizations would likely be destroyed. If so, we've only got a few tens of billions of years left. The big question was whether there was enough matter to "close" the universe, and ultimately end and reverse the expansion.
As near as we can tell, our universe will go on expanding forever. That may seem like good news, but it turns out that even with an infinite number of hours remaining, we can't get an infinite amount done.
To understand why, I need to say a few things about what the universe really is made of.
Where we are and where we're going
The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Adam Riess and Brian Schmidt for the discovery that the expansion of the universe is accelerating. Cosmologists attribute this acceleration to an as-yet not entirely understood field known as "Dark Energy."
And while we don't know much about it, we have a pretty good idea of one or two particulars. We know, for instance, that it seems to currently account for about 68% of the energy budget of the universe.
We also know something about the pressure of Dark Energy – a detail that might seem kind of irrelevant on first blush, though it's actually kind of a big deal. One of the great predictions of General Relativity – Einstein's Theory of Gravity – is that all forms of energy, including pressure, momentum density, and the like, contribute to the gravitational field of the universe. For substances that permeate the entire universe, like Dark Energy, the pressure matters quite a lot.
The pressure of a substance like dark energy is described using a number called w (also known, if you like words, as "the equation of state"). For a pure Cosmological Constant (essentially the simplest form of Dark Energy, and mathematically identical to the form of Einstein's "greatest blunder"), w = -1, which causes the accelerating universe. In the previous article that Tony alluded to, I noted that if w is less than -1 (even by a little bit), the universe will not only continue to accelerate, but the acceleration itself will accelerate, to the point where all atoms will be ripped apart.
I've said (and maintain) that when the dust settles we'll most likely find w = -1, an opinion shared by most cosmologists. But it's at least worth mentioning that the Pan-STARRS survey puts the estimate at around -1.186, albeit with systematic error bars that don't rule out -1. In other words, take the result with a grain of salt.
For today's doomsday scenario, I'm going to simply assume that we live in an ordinary Cosmological Constant universe with no other shenanigans going on like a cyclic universe. If we believe we know anything about gravity, then the universe has no end. It will last literally forever.
But that doesn't mean that there's an infinite amount of future history.
The timeline of Future History
I'll get to the challenges of a supercivilization in due course, but let me begin with some of the problems with sticking around here (and in our puny meat bodies) for too long, by walking you through some of the more obvious milestones in our eschatology.
t+1 billion years - The earth is burnt to a crisp. As I've noted in previous columns, the sun is getting hotter and hotter. In the 4 1/2 billion years since it started out, the sun has increased in luminosity by about 40%. The timescales involved are much longer than those of man-made climate change, so human activity dominates on the century timescale. But on the timescale of billions of years, the sun wins. Eventually, we're going to need to get robot bodies or get out of town.
t+4 billion years - The Milky Way collides with the Andromeda galaxy (or vice-versa, depending on how you look at it). This doesn't necessarily spell disaster, but it is one of the first in a long line of coalescent events, which will culminate in us living in a lonely island universe that will play out over the next hundred billion years or so.
t+5 1/2 billion years - The sun will become a red giant, utterly enveloping the husk of what we once called our planet.
t+2 trillion years - The accelerating universe isolates us utterly. One of the side effects of living in a Dark Energy dominated universe is that structures – galaxies, clusters and superclusters of galaxies – that aren't bound to us by gravity are getting further and further from us at an accelerating rate. Extended far enough into the future, those galaxies are entirely disconnected from us. We couldn't get there even if we had all of the light-speed propulsion in the universe, and even the light emitted from their earliest moment will have lost so much energy in transit that anything outside of our local supercluster will be completely invisible to us. The maximum communication and travel distance, not surprisingly, is known as the "event horizon" and in case you missed it, this is almost exactly the same as what happens when you fall into a black hole.
2 trillion years is functionally the beginning of the end. Though the universe might well be infinite today, by around 2 trillion years, it's pretty clear that we're going to have to do the remainder of our living with the rather paltry energy sources that can be found within a few tens of millions of light-years. This doesn't seem like such a big limitation to us now since we're fairly happy coasting on the energy supply from our solar system, but we may want to stretch our legs some day.
Our local isolation may be the biggest limitation to living eternally in a functional sense. In the late 1970s Freeman Dyson argued that in an old-fashioned open universe – one without a cosmological constant – history could literally go on forever.
t+100 trillion years - Star formation ceases, after which all of the stars, one by one, will burn out and become either black holes (for the most massive stars) or white dwarfs and neutron stars (for the rest). After this, without a new source of heat, the temperature will quickly drop toward the background temperature of the universe, which has long since fallen to essentially absolute zero.
We tend to think of the universe as cold already, and it is: around 3 degrees above absolute zero. This is all that remains of the radiation from the Big Bang. But every time the universe doubles in size, that temperature halves. After another 10 billion years, we'll be down to 1.5 Kelvin. After 20, we're down to 0.75 K. 100 trillion years in the future, the universal temperature will have halved roughly a thousand times. It's not terribly instructive to write out what a minuscule temperature that would be.
But I'll do it anyway...
It's 0.– then 434 zeros – 5 Kelvin.
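For anyone who wants to check the arithmetic, here's a sketch of the halving computation. The count of 1,444 halvings below is an assumption chosen so the result has the 434 leading zeros quoted above; it is not a figure from the article itself.

```python
# Sketch of the repeated-halving arithmetic: start from the ~3 K microwave
# background and halve the temperature n times. n = 1444 is an assumed
# count that reproduces a number with 434 leading zeros, matching the
# magnitude quoted in the text.
from decimal import Decimal, getcontext

getcontext().prec = 500  # enough digits to write out the tiny result

def cooled_temperature(t0_kelvin, halvings):
    """Temperature after the given number of halvings."""
    return Decimal(t0_kelvin) / (Decimal(2) ** halvings)

T = cooled_temperature(3, 1444)
fraction = format(T, 'f').split('.')[1]
leading_zeros = len(fraction) - len(fraction.lstrip('0'))
print(leading_zeros)  # 434 zeros between the decimal point and the first digit
```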
100 Quintillion (10^20) years - Everything has either become a black hole or been sucked into one. As astronomical bodies lose energy, their orbits tend to decay and they spiral inwards. Even now, massive black holes lurk at the centers of massive galaxies, including ours. But eventually, pretty much everything will get crushed into a singularity.
10^100 years - All of the black holes in the universe will evaporate, leaving us with nothing more than a warm (and quickly cooling) bath of photons. This is the ultimate consequence of the 2nd Law of Thermodynamics: Entropy – disorder, as we usually describe it – will get greater and greater until it maxes out. The universe simply can't be more disordered than a uniform swamp of low-energy photons.
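The 10^100-year figure can be roughly checked against the standard Hawking evaporation time, t = 5120πG²M³/(ħc⁴) — a textbook formula the article doesn't state. The 10^11-solar-mass black hole below is an assumed example standing in for the largest holes the universe will produce.

```python
# Rough check of the ~10^100-year evaporation timescale using the standard
# Hawking formula. The 1e11-solar-mass black hole is an assumed example.
import math

G = 6.674e-11     # gravitational constant (m^3 kg^-1 s^-2)
hbar = 1.055e-34  # reduced Planck constant (J s)
c = 2.998e8       # speed of light (m/s)
M_sun = 1.989e30  # solar mass (kg)
year = 3.156e7    # seconds per year

def evaporation_time_years(mass_kg):
    """Hawking evaporation time of a black hole, in years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)
    return t_seconds / year

t = evaporation_time_years(1e11 * M_sun)
print(math.log10(t))  # roughly 100, i.e. ~10^100 years
```

Since the time scales as the cube of the mass, a solar-mass black hole evaporates in "only" ~10^67 years; it is the supermassive ones that set the 10^100-year endpoint.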
Beyond the End?
There may be other milestones along the way. For instance, presumably at some point all of the protons in the universe will decay, but so far we have no real idea how long that will take – just that experimentally protons seem to last at least 10^36 years. But really, that's just a guess.
But do we need protons? If science fiction has taught us anything, it's that somehow the universe is rife with intelligent creatures made of pure energy. But even they can't last forever.
As I mentioned above, the future history of the universe will technically last infinitely long in terms of years, but that doesn't mean that we (or our pure energy progeny) will be able to get an infinite amount done in that time. Since everything gets colder and colder over time, what happens is that there is less and less energy to power a computer or a brain. This means that thoughts (or processes) would take longer and longer as time went on, eventually to the point where a single final thought would essentially freeze midway through. This, too, is a consequence of the second law of thermodynamics. Even what little energy there is out there can't be used indefinitely without creating heat along the way.
Shortly after it was discovered that the universe was accelerating, Lawrence Krauss and Glenn Starkman took a stab at the finite resources facing an isolated island universe, including details like how much energy it takes to power a brain and how to power an alarm clock that would wake you up between the eons separating subsequent thoughts. It's a good read, but the upshot is that they put the limits of a supercivilization at only around 10^50 years – much less than the evaporation lifetimes of our black holes – before life runs out of steam.
Fortunately, that's a LOT of zeroes.
Dave Goldberg is a Physics Professor at Drexel University, your friendly neighborhood "Ask a Physicist" columnist, and, most recently, author of The Universe in the Rearview Mirror: How Hidden Symmetries Shape Reality. You can also follow him on facebook.
A New Cosmological Theory to Replace the Big Bang -- Z. Merali
Did Hyper-Black Hole Spawn the Universe?
Big Bang was mirage from collapsing higher-dimensional star, theorists propose.
Zeeya Merali, Nature News, 09/13/13
It could be time to bid the Big Bang bye-bye. Cosmologists have speculated that the Universe formed from the debris ejected when a four-dimensional star collapsed into a black hole — a scenario that would help to explain why the cosmos seems to be so uniform in all directions.
The standard Big Bang model tells us that the Universe exploded out of an infinitely dense point, or singularity. But nobody knows what would have triggered this outburst: the known laws of physics cannot tell us what happened at that moment.
“For all physicists know, dragons could have come flying out of the singularity,” says Niayesh Afshordi, an astrophysicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.
It is also difficult to explain how a violent Big Bang would have left behind a Universe that has an almost completely uniform temperature, because there does not seem to have been enough time since the birth of the cosmos for it to have reached temperature equilibrium.
To most cosmologists, the most plausible explanation for that uniformity is that, soon after the beginning of time, some unknown form of energy made the young Universe inflate at a rate that was faster than the speed of light. That way, a small patch with roughly uniform temperature would have stretched into the vast cosmos we see today. But Afshordi notes that “the Big Bang was so chaotic, it’s not clear there would have been even a small homogenous patch for inflation to start working on”.
On the brane
In a paper posted last week on the arXiv preprint server1, Afshordi and his colleagues turn their attention to a proposal2 made in 2000 by a team including Gia Dvali, a physicist now at the Ludwig Maximilians University in Munich, Germany. In that model, our three-dimensional (3D) Universe is a membrane, or brane, that floats through a ‘bulk universe’ that has four spatial dimensions.
Afshordi's team realized that if the bulk universe contained its own four-dimensional (4D) stars, some of them could collapse, forming 4D black holes in the same way that massive stars in our Universe do: they explode as supernovae, violently ejecting their outer layers, while their inner layers collapse into a black hole.
In our Universe, a black hole is bounded by a spherical surface called an event horizon. Whereas in ordinary three-dimensional space it takes a two-dimensional object (a surface) to create a boundary inside a black hole, in the bulk universe the event horizon of a 4D black hole would be a 3D object — a shape called a hypersphere. When Afshordi’s team modelled the death of a 4D star, they found that the ejected material would form a 3D brane surrounding that 3D event horizon, and slowly expand.
The authors postulate that the 3D Universe we live in might be just such a brane — and that we detect the brane’s growth as cosmic expansion. “Astronomers measured that expansion and extrapolated back that the Universe must have begun with a Big Bang — but that is just a mirage,” says Afshordi.
The model also naturally explains our Universe’s uniformity. Because the 4D bulk universe could have existed for an infinitely long time in the past, there would have been ample opportunity for different parts of the 4D bulk to reach an equilibrium, which our 3D Universe would have inherited.
The picture has some problems, however. Earlier this year, the European Space Agency's Planck space observatory released data that mapped the slight temperature fluctuations in the cosmic microwave background — the relic radiation that carries imprints of the Universe’s early moments. The observed patterns matched predictions made by the standard Big Bang model and inflation, but the black-hole model deviates from Planck's observations by about 4%. Hoping to resolve the discrepancy, Afshordi says that his team is now refining its model.
Despite the mismatch, Dvali praises the ingenious way in which the team threw out the Big Bang model. “The singularity is the most fundamental problem in cosmology and they have rewritten history so that we never encountered it,” he says. Whereas the Planck results “prove that inflation is correct”, they leave open the question of how inflation happened, Dvali adds. The study could help to show how inflation is triggered by the motion of the Universe through a higher-dimensional reality, he says.
Nature, DOI: 10.1038/nature.2013.13743
1. Pourhasan, R., Afshordi, N. & Mann, R. B. Preprint available at http://arxiv.org/abs/1309.1487 (2013).
2. Dvali, G., Gabadadze, G. & Porrati, M. Phys. Lett. B 485, 208–214 (2000).
The event horizon of a black hole — the point of no return for anything that falls in — is a spherical surface. In a higher-dimensional universe, a black hole could have a three-dimensional event horizon, which could spawn a whole new universe as it forms. (See the original web page for the illustration.)
"Space-Time" and the Random Structure of the Universe - C. Frederick
A Universe Made of Tiny, Random Chunks
Physics: A new idea holds that the space-time that makes up our universe is inherently uncertain.
Carl Frederick; illustration by Daniel Hertzberg (see the original web page for the related image)
One of science’s most crucial yet underappreciated achievements is the description of the physical universe using mathematics -- in particular, using continuous, smooth mathematical functions, like how a sine wave describes both light and sound. This is sometimes known as Newton’s zeroth law of motion in recognition of the fact that his famed three laws embody such functions.
In the early 20th century, Albert Einstein gave a profound jolt to the Newtonian universe, showing that space was both curved by mass and inherently linked to time. He called the new concept space-time. While this idea was shocking, its equations were smooth and continuous, like Newton’s.
But some recent findings from a small number of researchers suggest that randomness is actually inherent in space-time itself, and that Newton’s zeroth law also breaks down, on small scales.
Let’s explore what this means.
First, what is space-time? You probably recall from plane geometry that if you take two points on a plane and draw x and y axes through the first of those points (making it the origin), then the distance between the points is the square root of x² + y², where x and y are the coordinates of the second point. In three dimensions, the analogous distance is the square root of x² + y² + z². And these distances are constant; their values don't change if you draw your axes in some other way.
What about in four dimensions, where the fourth dimension is time? A point in a 4-dimensional coordinate system is called an event: a location specified by x, y, and z, at a particular time t. What then is the “distance” between two events? One might think, by analogy, it would be the square root of x² + y² + z² + t². But it isn’t. If you draw the coordinates differently, that “distance” changes, so it can’t really be considered a distance. Einstein found that the constant distance was the square root of x² + y² + z² − c²t², where c is the speed of light. If you change the way you draw your coordinate axes, the values of x, y, z, and t will likely change, but the square root of x² + y² + z² − c²t² won’t. For Einstein, the x, y, z, and t dimensions were really elements of a single concept, which he called space-time.
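Both invariances are easy to check numerically. This is a sketch with arbitrary example coordinates, in units where c = 1: a rotation of the axes leaves the Euclidean distance unchanged, and a Lorentz boost leaves x² + y² + z² − (ct)² unchanged.

```python
# Numerical check of the two invariances: Euclidean distance under a
# rotation, and the spacetime interval under a Lorentz boost (c = 1).
# The event coordinates and angles are arbitrary examples.
import math

def euclidean_sq(x, y, z):
    return x*x + y*y + z*z

def interval_sq(x, y, z, t, c=1.0):
    return x*x + y*y + z*z - (c*t)**2

# Rotate the x-y axes by an angle: the distance is unchanged.
x, y, z = 3.0, 4.0, 0.0
theta = 0.7
xr = x*math.cos(theta) + y*math.sin(theta)
yr = -x*math.sin(theta) + y*math.cos(theta)
print(euclidean_sq(x, y, z), euclidean_sq(xr, yr, z))  # both 25 (up to rounding)

# Boost along x with velocity v: the interval is unchanged.
t = 2.0
v = 0.6
gamma = 1.0 / math.sqrt(1 - v*v)
xb = gamma * (x - v*t)
tb = gamma * (t - v*x)
print(interval_sq(x, y, z, t), interval_sq(xb, y, z, tb))  # both 21 (up to rounding)
```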
Einstein deduced, by a brilliant and highly complex chain of logic, that the explanation of gravity was the geometry of space-time itself -- its curvature. And that curvature was the result of the presence of mass. According to Einstein, if there were no mass at all in the universe, space-time would be “flat,” meaning without curvature.
To imagine the curvature of space, think of a flat bug on the surface of a sphere. How would the bug know he was not on an infinite plane? If the bug walked in one direction for a while, he’d eventually come back to where he started. Or if, on the surface, the bug drew an x axis and y axis at right angles, he’d find that the distance from the origin to an arbitrary point would not be the square root of x² + y². This clever bug might well deduce he was in a curved space.
So curvature influences the distance between two points, and mass determines the curvature.
That is essentially how Einstein thought about space-time. But his theories of relativity were just one of the two revolutionary triumphs of 20th-century physics; the other was quantum mechanics. It is natural, then, to ask: How does quantum mechanics affect the geometry of space-time? This is one of the biggest questions in physics today. And stochastic space-time seems as if it might well be part of the answer.
Quantum mechanics has at its core the Heisenberg uncertainty principle, which says (among other things) that every physical system must have some residual energy, even when its temperature is absolute zero. This residual energy is called the zero-point energy, and even an “empty” vacuum in space-time has it. In the vacuum, particles and antiparticles continually pop into existence, then collide together and annihilate each other. The sudden appearance and disappearance of particles causes the vacuum zero-point energy to fluctuate in time. Because energy is equivalent to mass (E = mc2), and mass produces space-time curvature, vacuum energy fluctuations produce space-time curvature fluctuations. These in turn cause a fluctuation in the distance between points in space-time, which means that, at small scales, space-time is noisy and random, or “stochastic.” Distances and times become ill-defined.
If we look at the quantum fluctuations in a not-too-small region, the fluctuations within the region tend to average out. But if we look instead at an infinitesimally small region -- a point -- we find infinite energy. We might wonder:
How small is small enough to capture the physics we are interested in, without being so small that energies become enormous -- and what is an appropriate unit of measurement to use for that distance?
To answer that question, we follow the line of thinking of Max Planck, arguably the father of quantum mechanics, who wondered what a “natural unit” of distance might be -- something not based on an arbitrary standard like meters or feet. He proposed a natural unit expressed using universal constants:
the speed of light in a vacuum (c); the constant of gravitation, expressing the strength of the gravitational field (G); and what we now call Planck’s constant (h), expressing the relationship between a particle’s energy and its frequency.
Planck found he could construct a distance, now known as the Planck length, LP, with the formula LP = (hG / 2πc³)^(1/2).
The Planck length turns out to be a very short distance: about 10⁻³⁵ meters. It is a hundred million trillion times smaller than the diameter of a proton -- too small to measure and, arguably, too small to ever be measured.
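Planck's formula is simple to evaluate. The constants below are standard values rounded to four digits:

```python
# Evaluate the Planck length, L_P = sqrt(h*G / (2*pi*c^3)), from the
# standard constants (rounded values).
import math

h = 6.626e-34  # Planck's constant (J s)
G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8    # speed of light (m/s)

planck_length = math.sqrt(h * G / (2 * math.pi * c**3))
print(planck_length)  # about 1.6e-35 meters
```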
But the Planck length is significant. String theory has done away with points altogether and suggests that the Planck length is the shortest possible length. The newer quantum loop gravity theory suggests the same thing. The problem of infinite energies in very small volumes is neatly avoided because very small volumes are prohibited.
There is another important aspect of the Planck length. Relativity predicts that distances measured by an observer in a fast-moving reference frame shrink -- the so-called Lorentz contraction. But the Planck length is special: because it is built entirely from the universal constants c, G, and h, it must have the same value in all reference frames, and so it cannot be subject to any Lorentz contraction. This implies that relativity theory does not apply at this size scale, and some new physics -- perhaps stochastic space-time -- is needed there. A length that cannot be shortened by the Lorentz contraction is a natural candidate for a fundamental quantum, or unit, of length. As a result, volumes with dimensions smaller than the Planck length arguably don’t exist. The Planck length, then, is a highly likely candidate for the size of a space-time “grain,” the smallest possible piece of space-time.
So now, finally, we can characterize our “stochastic space-time.”
First, it is granular, at about the scale of the Planck length.
Second, the distances between these grains are not well-defined. Quantum mechanics says that the more massive an object is, the less pronounced its quantum properties will be. Therefore, we expect that as the mass in a region of space-time increases, the region will become less stochastic. (This is analogous to the case of relativity, where the more mass there is in a region, the more curvature the region exhibits.) We theorize that if there were no mass in the universe, space-time would not be flat, as in Einstein’s relativity, but completely stochastic: effectively undefined. Without mass, why would we need space?
Third, in stochastic space-time, unlike in string theory and quantum loop gravity theory, these grains are able to drift with respect to each other because of the randomness inherent in that size scale. Imagine the grains as a box of marbles. Stochasticity is like gently shaking the box so the marbles can move around. It is hoped that the drifting volume elements (marbles) might explain why relativity theory doesn’t seem to apply at the Planck length. That is because relativity is a theory requiring Newton’s zeroth law, which demands smooth and continuous mathematical functions -- but near the Planck length, the smooth functions are thought to break down.
Isaac Newton would be surprised. He supposed that space and time were a featureless void, a mere framework onto which were imposed the equations of his three laws of motion. This is, after all, what each of us sees around us in everyday life. Instead, stochastic space-time theory posits a grainy, uncertain space-time beyond the reach of smooth, continuous functions.
The hope is that the equations of quantum mechanics will be derivable from the properties of space-time itself -- not a roof thrown randomly on top of a building, but rather a beam built into the very foundation.
After receiving his doctorate in theoretical physics, Carl Frederick first worked as a researcher at NASA and, after that, at Cornell. He now works at a high-tech start-up and is a professional science-fiction writer.