Chances are good that you’ve thought about the concept of gravity once or twice. If you’ve ever taken a high school physics class, you might have heard that gravity is an invisible force that is responsible for keeping you planted on the earth. At the beginning of the 20th century, the young Albert Einstein was also interested in gravity. The accepted theory of gravity at that time had first been put forth by Isaac Newton in the 17th century, and it had seen great success in physics for centuries.
But Einstein was bothered by one of the key foundations of Newton’s gravity: that space and time are both independent, absolute entities. In his Principia Mathematica, Newton stated that
“Absolute, true, and mathematical time, of it self and from its own nature, flows equably without relation to anything external… absolute space, in its own nature, without relation to anything external, remains always similar and immovable”.
In the early 20th century, physicists were just beginning to understand electricity and magnetism, and while carefully scrutinizing these developments Einstein came up with a new idea: that space and time are not distinct, absolute quantities as Newton said, but rather that they are intertwined in a very special way.
Putting the dimensions of space and time together, into what we now call spacetime, turned out to be necessary to avoid paradoxical outcomes in electricity and magnetism. But the concept of spacetime also leads to some very strange outcomes. A new theory of gravity, called General Relativity, is one of these outcomes.
In General Relativity — Einstein’s theory of gravity — gravity is the curvature of spacetime itself. Physicists often say that spacetime is the “fabric of the cosmos”, but it’s not exactly made up of “stuff”, so how can it be curved?
This is difficult to conceptualize, but one can use an analogy to understand a little better. If you were to place a baseball on a spandex sheet of fabric, the ball would distort the sheet by bending it. A bowling ball would also bend the sheet, even more than the baseball. If you took a marble and added it to the sheet, the marble would follow the spandex surface, curving around the bowling ball (an orbit).
This is the essence of Einstein’s gravity: massive objects bend spacetime, and in turn, spacetime tells matter how to move. Now we’re ready to talk about gravitational waves.
Imagine for a moment that you rotate the bowling ball about its vertical axis on the spandex sheet. The bowling ball has a smooth surface and is round, so there isn’t any effect on the sheet. However, if we took a pair of bowling balls and rotated them around each other, ripples would begin to spread outward on our spandex sheet. Similarly, if we took a bowling ball that wasn’t quite round (maybe with a lump of cement stuck to a side) and tried to rotate the ball, the lump would “catch” on the spandex and create ripples.
These ripples, produced either by two objects orbiting each other (we call this a binary system), or a lumpy object rotating about its axis, are gravitational waves. They propagate radially outward from the objects that produce them and travel at the speed of light. As you can imagine, this means that gravitational waves are being produced all the time, all over the universe, from all kinds of systems: the moon and Earth in their orbit; a pair of ice-skaters spinning while holding hands; a football wobbling from a poor throw; a distant rotating planet with mountains.
When the gravitational waves produced by any of these examples hit matter like you and me, they stretch and squeeze it. All the stretching and squeezing happens in the direction perpendicular to the gravitational wave’s direction of travel (in more technical physics lingo, we would say that gravitational waves are transverse).
The amount of stretching and squeezing is extraordinarily small because gravity is actually fairly weak [just think: you can defy the entire gravitational pull of the earth simply by jumping, or by using a refrigerator magnet to pick up a paper clip!]. To produce “big” gravitational waves, we have to look for waves produced by something called compact objects. To explain what a compact object is, imagine that you have a large round bread roll. The roll is made up of ordinary atoms and has mass; we could weigh it on a kitchen scale and measure its diameter. Now imagine squashing the roll with your hands until it forms a dense lump of bread. If we place the squashed bread roll on our scale, it will have the same weight as it did before: it contains the same number and type of atoms it had before we squashed it. But the roll now occupies a much smaller region of space, and the bread is denser than it was before. It is now a compact object.
Compact objects bend spacetime in a much more extreme way than other objects, and as a result, the gravitational waves they produce stretch and squeeze matter enough that we can actually detect it. Don’t get too excited about the amount of stretching, though. The most compact astrophysical objects in the universe (that we know of) are black holes and neutron stars, which are dead remnants of very massive stars. If a “big” gravitational wave produced by a pair of orbiting black holes or neutron stars were to hit you head-on, your height would only change by one thousandth the diameter of a proton!
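To get a feel for how small this effect is, here is a minimal back-of-the-envelope sketch of the strain (the fractional change in length) implied by the figure above. The proton diameter and the person’s height used below are assumed round values, not numbers taken from the text.

```python
# Back-of-envelope: what strain (fractional length change) does the
# "one thousandth of a proton diameter" figure imply for a person?
# The proton diameter and human height are assumed round values.

PROTON_DIAMETER_M = 1.7e-15   # approximate diameter of a proton, in meters
HUMAN_HEIGHT_M = 1.7          # assumed height of a person, in meters

delta_l = 1e-3 * PROTON_DIAMETER_M   # change in height from the passing wave
strain = delta_l / HUMAN_HEIGHT_M    # dimensionless strain h = dL / L

print(f"height change:  {delta_l:.1e} m")
print(f"implied strain: {strain:.0e}")
```

The resulting strain, around one part in 10^18, is why the detectors described next have to be so extraordinarily sensitive.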
Detecting gravitational waves is thus a very tricky business; the detectors we use must be capable of measuring changes in length that are smaller than the atoms the detectors are made of! We will discuss the methods that we use to construct such detectors and conduct searches for gravitational waves in a future blog post.
But before you go, it’s important to understand why any of this matters. One reason we’re interested in detecting gravitational waves is to test Einstein’s gravity. Although we have significant experimental evidence supporting General Relativity, there are viable competing theories that differ from Einstein’s only in the way gravitational waves behave. So far, our detections of gravitational waves have continued to support Einstein’s theory, but it is important to keep testing it.
The second reason to search for gravitational waves is to learn from them. Every telescope ever built has relied on light of some form (x-ray, gamma ray, infrared, visible, radio, etc.) to observe the universe. Gravitational wave detectors are completely different; they use gravity instead of light to observe the universe. This allows us to study systems, such as black hole binaries, that can’t be directly studied using light. We now have the potential to unlock mysteries about black holes, neutron stars, stellar evolution, the Big Bang and much more.
And that’s not all. Any time our species has found a new way to observe the universe around us, we find unexpected things, things we didn’t even know that we didn’t know. There is no reason to suspect that this will be any different. We’re standing on the cutting edge of a new observational experience for mankind, and there are all kinds of beautiful, bizarre and unexpected things to be discovered!
Stay tuned to hear more about gravitational waves from the IGC. In the meantime, if you’ve got questions, I love to talk science! Feel free to email me.
—A blogpost about a recent paper by Beatrice Bonga (author of this post), Brajesh Gupt and Nelson Yokomizo—
Have you ever wondered what the shape of our universe is? It turns out that you only need three categories to classify all the possibilities for the shape of our universe: closed, flat or open. The closed category contains all shapes that look like a 3-dimensional sphere or any deformation of it. To visualize this better, let me give you some examples in two dimensions: the surface of a potato and the earth are both deformations of a 2-dimensional sphere. The flat category is like a 3-dimensional plane, with a sheet of paper (whether it is crumpled or not) being an easy-to-visualize example in two dimensions. The open category contains all shapes that look saddle shaped (or any deformation of it, of course).
Is there a way to tell which category our universe belongs to? Observations from cosmology are so far all consistent with a flat universe, which also happens to be the easiest to visualize and do calculations with. This is typically the reason why most data is analyzed using the assumption that our universe is flat. However, data is becoming increasingly precise. So is there a chance that our universe is curved after all? We would be like the people of ancient Greece, who were able to determine that the surface of the Earth is curved even though it looked flat from their perspective.
This question has been studied by numerous physicists. One of the most remarkable datasets available in cosmology is the Cosmic Microwave Background (CMB). The CMB is radiation emitted when our universe was ~380,000 years old, and we are able to observe this radiation now with incredible precision. You could think of it as the baby picture of our universe, because our universe is now close to 14 billion years old. To be precise, if you compare the age of our universe with a 100-year-old person, the CMB is a picture of a one-day-old baby. By analyzing this baby picture carefully, we don’t just learn things about the universe when it was 380,000 years old but also about the times before. During the very earliest of those moments, the universe underwent a phase of inflation (for more information about inflation, see Anne-Sylvie’s blog post). This phase is important to understand our approach to the question: is it possible that our universe is not flat, but closed?
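The lifetime analogy above is easy to check with a couple of lines of arithmetic. This is just a sketch using the round ages quoted in the text (~380,000 years and ~14 billion years):

```python
# Checking the "baby picture" analogy: scale the universe's timeline
# down to a 100-year human lifetime and see where the CMB falls.
# Ages are the round values quoted in the text.

AGE_UNIVERSE_YR = 13.8e9   # approximate present age of the universe, in years
AGE_CMB_YR = 380_000       # age of the universe when the CMB was emitted
LIFETIME_YR = 100          # the analogy's human lifespan

fraction = AGE_CMB_YR / AGE_UNIVERSE_YR
equivalent_days = fraction * LIFETIME_YR * 365.25

print(f"CMB emitted at {fraction:.1e} of the universe's current age")
print(f"on a 100-year lifetime, that is a ~{equivalent_days:.1f}-day-old baby")
```

The fraction works out to about one day out of a hundred years, just as the analogy says.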
So how does one usually study the shape of our universe? Typically, when studying the CMB, one calculates how the data should look at the end of inflation in one’s favorite inflationary model, and then applies Einstein’s and Boltzmann’s equations to evolve this data to today. The evolved data is then compared with the baby picture we observe: the better the match between the evolved data and the actual observations of the CMB, the better the calculated state of the data at the end of inflation was. So far, scientists have looked at the effect of a closed universe on the evolution from the end of inflation to today, but they have not calculated how a closed model changes the data at the end of inflation itself. This is what we did. We then evolved it with the known evolution equations and compared it to what we observe today.
What did we find? The calculated data at the end of inflation looks different; however, the differences are small and the data remain consistent with a flat model. The differences between the flat and the closed model appear at large scales, where the closed model does moderately better than the flat model, but at these scales the observational error margins are also largest. Thus, the difference is not statistically significant.
If you want to know more, you can find the pre-print of our article here. You can also always shoot me an email if you have more questions.
My research interests revolve around inflationary cosmology.
Owing to several observations that started with Edwin Hubble, we know that our universe is expanding. On very large scales, the distance separating two objects grows, as the fabric of spacetime expands more and more. This means that, if we look back in the past, the universe was much smaller, hotter and denser than it is today.
The model that most accurately describes the history and evolution of the Universe today is called the Λ-CDM (or concordance) model. According to the theory, about thirteen billion years ago, all the matter and energy of the universe formed an insanely dense, hot and homogeneous soup. Well, actually, the soup was not completely homogeneous; some inhomogeneities, albeit extremely minute, were present. And the existence of these inhomogeneities in our primordial soup had dramatic consequences. Indeed, as they were denser, they could attract more matter, which would make these regions even denser. Therefore, as billions of years passed, the overdense regions saw their density increase, while the underdense regions became less and less filled with matter. This led to the growth of the large scale structures that we observe today, such as clusters of galaxies.
A relic of those very homogeneous times is the faint radio signal, called Cosmic Microwave Background (CMB), that we receive from all the directions in the sky. Here is a picture taken by the Planck satellite:
In this picture, we see a snapshot of the universe when it was approximately 380 thousand years old. The red and blue spots show the tiny differences in temperature (or in density) of the universe. At this time, the fluctuations in temperature are only one part in a hundred thousand!
Thus, from the very small inhomogeneities present in the early universe were born today’s galaxies and stars and nebulae and all the rest. But where did those inhomogeneities come from? This question can be answered by the paradigm of inflation, which describes a phase of exponentially accelerated expansion of our spacetime at the beginning of the Universe. While we don’t yet have strong observational evidence for inflation, it solves many of the problems of the Λ-CDM model of cosmology, and therefore many physicists are working on it.
Inflation didn’t last long, but it was dramatic: in about 10^-32 seconds, the universe expanded by a factor of more than 10^26! During that time, the small quantum fluctuations in the density of the pre-inflationary universe were stretched to large, classical scales. And that’s how the primordial inhomogeneities were born!
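Cosmologists usually quote this expansion as a number of “e-folds” N, with the scale factor growing by a factor e^N. The relation N = ln(expansion factor) is standard, but the input numbers below are the approximate round values from the text:

```python
import math

# Convert the ~1e26 expansion factor quoted for inflation into e-folds,
# where the scale factor grows by e^N. Inputs are round, approximate values.

EXPANSION_FACTOR = 1e26   # growth of the scale factor during inflation
DURATION_S = 1e-32        # approximate duration of inflation, in seconds

n_efolds = math.log(EXPANSION_FACTOR)   # N = ln(a_end / a_start)

print(f"expansion by 1e26 corresponds to ~{n_efolds:.0f} e-folds")
print(f"achieved in roughly {DURATION_S:.0e} seconds")
```

Sixty or so e-folds is the ballpark figure usually quoted as the minimum needed for inflation to solve the problems it was invented for.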
So, we have a mechanism explaining the existence of the small inhomogeneities of the early universe. But there exists a large variety of ways to implement that mechanism. How do we distinguish among all the models that cosmologists have come up with? By studying the statistics of the inhomogeneities.
In particular, we can look at correlation functions. These functions describe the correlation between two – or more – points in the sky that are separated by a specific angle. And what we see is that the statistics of the fluctuations are very well described by a Gaussian distribution. But small deviations from Gaussianity, which we call non-Gaussianities, could tell us a lot about the history of the universe, and they would help tremendously in discriminating among the different inflationary models. Therefore, cosmologists are really excited about the prospect of observing non-Gaussianities in the near future!
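As a toy illustration of what “Gaussian statistics” means here, one can draw mock temperature fluctuations from a Gaussian distribution and check that their skewness, one simple measure of non-Gaussianity, comes out consistent with zero. Everything below is illustrative; none of it is real CMB data.

```python
import math
import random

# Draw mock Gaussian temperature fluctuations (dT/T ~ 1e-5, as in the CMB)
# and compute the sample skewness, a simple non-Gaussianity diagnostic.
# For a truly Gaussian field the skewness is zero up to sampling noise.

random.seed(42)
fluctuations = [random.gauss(0.0, 1e-5) for _ in range(100_000)]

n = len(fluctuations)
mean = sum(fluctuations) / n
var = sum((x - mean) ** 2 for x in fluctuations) / n
skew = sum((x - mean) ** 3 for x in fluctuations) / (n * var ** 1.5)

print(f"sample skewness: {skew:+.3f}  (0 for a perfectly Gaussian field)")
```

Real analyses work with correlation functions of the observed sky rather than a single histogram, but the idea is the same: quantify how far the fluctuation statistics depart from a pure Gaussian.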
Over the history of mankind, the understanding of our Universe has evolved and matured, thanks to remarkable advancements both on theoretical and experimental fronts in the fields of quantum mechanics and general relativity (GR).
Quantum mechanics describes physics at small scales, such as the scale of sub-atomic particles, where gravity is weak and remains practically inert. On the other hand, the large scale structure of the universe is dictated by gravity, which is governed by GR, while quantum mechanics plays no role. Both theories have proved robust in their own territory under various theoretical and experimental tests. Unfortunately, it turns out that, as they stand, GR and quantum mechanics do not play well with each other when brought under the same umbrella. This leaves us clueless in situations where the system is small enough for quantum physics to be important and, at the same time, gravity is so strong that it can no longer be neglected.
The very early stage of our own Universe is an example of such a situation, where neither GR nor quantum mechanics alone can be trusted. Although the large scale structure of the universe is very well explained by GR, the theory fails to provide a consistent picture of the early stages of the universe, due to the presence of cosmological singularities such as the big bang. Evolving Einstein’s equations backwards in time from the conditions observed in a large macroscopic universe today, we see that the universe keeps contracting and the space-time curvature keeps increasing, until the universe reaches an extremely high curvature regime where the classical GR description is not reliable. In fact, if one naively continues evolving Einstein’s equations in this regime, one encounters the big bang singularity.
To gain a reliable understanding of the physics in such cases one needs an amalgamation of ideas from both GR and quantum mechanics: a quantum theory of gravity.
Loop quantum gravity (LQG) is one of the leading approaches to quantum gravity, and it gives a consistent picture of the discrete quantum structure of space-time geometry (as opposed to the continuum description given by GR). The quantum space-time geometry provided by LQG opens up new avenues to explore the physics of the early universe and cosmological singularities under the paradigm of loop quantum cosmology (LQC). One of the key features of the discrete quantum geometry of LQG is that when the space-time curvature is sub-Planckian, the equations of LQC are extremely well approximated by those of classical GR. The differences become important when the space-time curvature grows large enough for quantum discreteness to kick in. This leads to the resolution of the big-bang singularity via a quantum bounce, which serves as a smooth quantum geometric bridge between the current expanding branch of our universe and a contracting phase that should have existed in the far past (Fig.2).
In the paradigm of LQC, the history of the Universe is different from that in standard GR. As shown in Fig.2, there exists a quantum geometric pre-inflationary phase. The origin of the quantum perturbations that result in the formation of the cosmic microwave background (CMB), and of the large scale structure observed today, can now be traced all the way back to the quantum gravity regime. Due to the modified pre-inflationary dynamics of LQC, these quantum fluctuations experience a different background evolution than in the standard paradigm, which can in principle leave observational imprints on the temperature and polarization power spectra observed in the CMB. Understanding the evolution of the quantum fluctuations and extracting loop quantum geometric imprints from recent observational data are among the main directions of research pursued by scientists at the IGC.
In forthcoming articles, I will describe different aspects of LQC and its connection with observations, in particular, with the CMB anomalies observed by the recent Planck and WMAP missions.
Cosmic rays, the high-energy subatomic particles that constantly bombard the Earth’s atmosphere, were discovered by Victor Hess in 1912 in a series of very daring, high-altitude balloon flights. With the measurements they made, Hess and his team showed that the ionization level of the air increases with altitude, a confirmation that extra-terrestrial high-energy particles were constantly impacting atmospheric molecules.
Fifty years later, in the Volcano Ranch experiment led by John Linsley, the first ultra-high energy cosmic ray (UHECR), with energy exceeding 10^20 eV, roughly 10 million times higher than the energy of protons accelerated at the Large Hadron Collider, was discovered. This discovery led to the development of the scientific field dedicated to the search for the origin of UHECRs, the most energetic particles ever produced after the Big Bang.
When a 10^20 eV particle collides with an air molecule in the upper atmosphere, a shower of lower energy particles develops in the atmosphere. By the time the shower reaches ground level, millions of these lower energy, secondary particles have been produced, and the footprint of the shower can cover more than 3 km^2, roughly one third of the surface area of State College, PA. By studying the air showers, scientists can measure the properties of the original cosmic ray particles.
The world’s largest UHECR detector, the Pierre Auger Observatory (hereafter Auger), is located in the Pampa Amarilla, in the Mendoza region of Argentina. The experiment covers an area of 3,000 km^2, more than 30 times the surface area of Paris. Auger consists primarily of two types of extensive air-shower detectors: water Cherenkov surface stations and fluorescence telescopes. The water Cherenkov stations are large plastic tanks, each containing 10 tonnes of purified water, that register the bluish Cherenkov light produced when shower particles travelling faster than the speed of light in water pass through the tank. There are 1,660 such stations, spaced 1.5 km apart, forming the Auger surface array, and they operate with a 100% duty cycle. At the four edges of the array, overlooking the surface detectors, are 27 fluorescence telescopes. These operate only on dark, moonless nights, detecting the fluorescence light produced by the de-excitation of nitrogen molecules as the shower propagates through the atmosphere. The light produced by a single UHECR shower is roughly that of a single 60-watt light bulb travelling at the speed of light, so this is a challenging measurement. The combination of the fluorescence technique and the surface detectors allows us to see UHECR showers in “hybrid” mode, providing the most accurate measurement of the properties of the air shower.
The origin of UHECRs is a mystery. In what kinds of sources do particles get accelerated to 5×10^19 eV? It is this question that scientists working in Auger have been addressing. Although we don’t yet have a definitive answer, there are certain minimal requirements that an astrophysical source must satisfy in order to be a plausible UHECR accelerator. A simple requirement can be stated as follows: the Larmor radius of the particle in the magnetic field of the source must be smaller than the radius of the acceleration site, in order for the particle to be effectively confined in the source. The same requirement is what limits the energy to which protons can be accelerated in the Large Hadron Collider: the strength of the magnets and the radius of the tunnel. This simple argument, known as the Hillas criterion, rules out many known classes of powerful astrophysical sources as possible UHECR acceleration sites: supernova explosions, regular galaxies (including our own Milky Way), white dwarf stars, etc.; the list is long. It leaves only a few known source classes as possible UHECR accelerators, all of which are extragalactic: gamma-ray bursts, active galactic nuclei, neutron stars, and rare, powerful shocks in the intergalactic medium.
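The Hillas criterion can be turned into a rough numerical estimate: requiring the Larmor radius to fit inside the source gives a maximum energy of roughly E_max ~ Z e B R c, or about 0.3 × Z × B[tesla] × R[meters] GeV. The sketch below applies this to an LHC-like ring and to a neutron-star-like object; the field strengths and radii are assumed round values, not official parameters of either system.

```python
# Hillas-criterion estimate: the maximum energy a magnetized region of
# field B and size R can confine is roughly E_max ~ Z e B R c, which in
# convenient units is E_max[GeV] ~ 0.3 * Z * B[tesla] * R[meters].

def hillas_emax_ev(charge_z, b_tesla, r_meters):
    """Rough maximum energy (in eV) confinable by a field B over a size R."""
    return 0.3 * charge_z * b_tesla * r_meters * 1e9  # GeV -> eV

# LHC-like ring: ~8 T dipole field, ~4.2 km bending radius (assumed values)
lhc = hillas_emax_ev(charge_z=1, b_tesla=8.0, r_meters=4200.0)

# Neutron-star-like object: ~1e8 T surface field, ~10 km radius (assumed)
ns = hillas_emax_ev(charge_z=1, b_tesla=1e8, r_meters=1e4)

print(f"LHC-like limit:         {lhc:.1e} eV (~10 TeV scale)")
print(f"neutron-star-like limit: {ns:.1e} eV (UHECR scale)")
```

The first number lands at the TeV scale, consistent with the LHC being limited by its magnets and tunnel radius, while the second reaches the 10^20 eV regime, which is why compact, strongly magnetized objects survive the Hillas cut.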
Two features of the intergalactic propagation of UHECRs make us hopeful that we can discover their origin by finding associations between known powerful astrophysical accelerators and UHECR arrival directions. Firstly, cosmic rays with energy exceeding 5×10^19 eV have a short propagation horizon. They are so energetic that when they collide with a photon from the Cosmic Microwave Background, the cosmic photons left over from the Big Bang, they produce a pi meson (or pion). This interaction results in a significant energy loss (generally between 14% and 50%) for the cosmic ray. The average distance that a 6×10^19 eV stripped hydrogen nucleus (i.e. a proton) can travel while retaining 0.23 of its energy is 100 megaparsecs (Mpc): a small distance in extragalactic terms. An analogous (and often smaller) horizon exists for heavier elements such as helium, carbon, oxygen, iron and silicon: other frequently observed cosmic-ray species. In summary, independent of the chemical composition, the sources of UHECRs must be powerful astrophysical accelerators within ~100 Mpc.
Secondly, UHECRs are expected to experience only small deflections, after they escape their sources, in the weak magnetic fields that permeate the intergalactic medium. A 5×10^19 eV proton is expected to experience a deflection, θ, of 3 degrees or less over 100 Mpc of propagation. Therefore, the arrival directions of protons are expected to correlate with the positions of their sources to within a few degrees. Deflections scale linearly with charge Z, as θ_Z(E) = Z × θ_proton(E), hence heavier nuclei are expected to arrive with more significantly deviated directions.
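The scaling θ_Z(E) = Z × θ_proton(E) is simple enough to tabulate. A minimal sketch, taking the ~3 degree proton deflection quoted above as given:

```python
# Deflection angles for different nuclei at fixed energy, using the
# linear-in-charge scaling theta_Z(E) = Z * theta_proton(E).
# The 3-degree proton deflection is the figure quoted in the text.

THETA_PROTON_DEG = 3.0  # deflection of a 5e19 eV proton over ~100 Mpc

def deflection_deg(charge_z, theta_proton=THETA_PROTON_DEG):
    """Deflection angle (degrees) for a nucleus of charge Z at the same energy."""
    return charge_z * theta_proton

for name, z in [("proton", 1), ("carbon", 6), ("iron", 26)]:
    print(f"{name:7s} (Z={z:2d}): ~{deflection_deg(z):5.1f} degrees")
```

An iron nucleus at the same energy could arrive tens of degrees away from its source, which is why the (unknown) chemical composition matters so much for source searches.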
Searches for associations so far have not resulted in statistically significant correlations between known extragalactic sources and observed UHECR arrival directions (see e.g. Oikonomou et al. 13, Oikonomou et al. 15, Abreu et al. 14). The absence of a clear correlation is slightly disappointing, as well as puzzling: we still haven’t solved this mystery! This absence of associations could mean that UHECRs are heavy nuclei (e.g. carbon nuclei), thus suffering large deflections and not pointing back to their sources, or that magnetic fields in the direction of (at least some of) the sources of UHECRs are stronger than we previously inferred from independent astrophysical measurements. Other possibilities include an origin of UHECRs in some unknown population of astrophysical sources, or something more exotic, like the decay of supermassive primordial particles left over from the Big Bang. Despite the lack of a clear answer so far, the origin of UHECRs remains a very active research topic within the Auger Collaboration and beyond. As more data are collected with existing and future experiments, we will get closer to solving this mystery.
Two of the major questions in the field of high-energy astrophysics are how and where cosmic rays are accelerated. However, tracing cosmic rays back to their origins is a bit tricky: since they are charged particles, they are bent by the Galactic magnetic fields on their way to Earth. Luckily, there are also gamma rays associated with these cosmic-ray acceleration processes. Gamma rays are neutral, so they point back to their sources, and are therefore excellent messengers for learning more about cosmic-ray processes. Their sources, which include supernova explosions, gamma-ray bursts, and active galactic nuclei, are some of the most intense, highest-energy phenomena known in the Universe. The High Altitude Water Cherenkov (HAWC) Observatory is a fairly new experiment dedicated to studying these gamma rays. This observatory looks very different from the astronomical observatories you may be familiar with: it consists solely of giant (~5-meter-tall) tanks of water, which detect gamma rays using a unique method known as the water Cherenkov technique.
When a cosmic ray or gamma ray hits the Earth’s atmosphere, it interacts with the air molecules and starts a cascade of electromagnetic particles via the processes of bremsstrahlung (which emits a photon) and pair creation (which creates an electron and a positron). Each successive step of the chain reaction has exponentially more particles, with each individual particle having less energy than those in the previous step, until the energy of the individual particles falls below some critical energy and the shower begins to die out.
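This doubling picture is essentially the simple Heitler toy model of an electromagnetic cascade: each step doubles the particle count and halves the energy per particle until the critical energy is reached. The sketch below uses an assumed 100 TeV primary and an ~85 MeV critical energy for electrons in air; both numbers are illustrative.

```python
# Heitler-style toy cascade: each generation doubles the particle count
# and halves the energy per particle; the shower stops growing once the
# per-particle energy would drop below the critical energy.
# Primary and critical energies below are assumed, illustrative values.

E_PRIMARY_EV = 1e14    # a 100 TeV gamma ray
E_CRITICAL_EV = 8.5e7  # ~85 MeV, critical energy for electrons in air

n_steps = 0
energy_per_particle = E_PRIMARY_EV
while energy_per_particle / 2 >= E_CRITICAL_EV:
    n_steps += 1
    energy_per_particle /= 2

n_particles = 2 ** n_steps
print(f"shower maximum after ~{n_steps} doublings: ~{n_particles:.2e} particles")
```

Even this crude model lands around a million particles at shower maximum for a 100 TeV primary, which is why a ground array like HAWC has something to detect at all.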
HAWC is built at an altitude of 4,100 m on the saddle point between Pico de Orizaba and Sierra Negra in Mexico, where the number of air shower particles is much greater than at sea level. It consists of an array of 300 tanks of water, each of which is 7.3 meters in diameter and 5 meters high, with four photomultiplier tubes at the bottom. When the charged particles from the air shower reach the array, they are traveling faster than the phase velocity of light in water. This leads to the emission of a faint blue light known as Cherenkov radiation, which is detected by the photomultiplier tubes. By looking at the pattern of hits in all the tanks during an event, we can determine where in the sky the event came from as well as its approximate energy. For example, an event with an energy of >10 TeV is expected to hit every tank in the array, while a smaller event would hit only a fraction of the tanks. Gamma/hadron separation techniques are employed to separate the gamma rays from the extremely large background of cosmic rays (here is a fun game to see if you can distinguish gamma rays from cosmic rays by eye!).
HAWC officially finished construction and was inaugurated last spring, but opportunistic data were taken with the partially completed array during the construction phase. A few papers with early results have already been published, with many more to come. HAWC operates 24 hours a day, making it a perfect experiment to survey the entire overhead sky in gamma rays. In addition to searching for new TeV gamma-ray sources, it is capable of monitoring existing sources for flares, to get a sense of the time variability of these possible cosmic ray accelerators, as well as searching for transients such as gamma-ray bursts.
There are also exciting implications for multi-messenger astrophysics. In addition to notifying other observatories of flaring sources as mentioned above, HAWC can extend the spectra of sources to higher energies than satellite experiments are capable of. This is especially interesting because the energy range HAWC operates in is where we expect to see differences between the spectra of gamma rays originating from electron (leptonic) accelerators and those originating from hadronic accelerators. HAWC also shares information with non-gamma-ray experiments. An example is the IceCube Neutrino Observatory: both experiments study similar energies and can see the same part of the sky, and both neutrinos and gamma rays are expected in hadronic cascades.
The future is bright for HAWC. Even though the experiment is still in its infancy and the ~100 collaboration members are still busy sifting through early data, an upgrade consisting of a sparse array of smaller “outrigger” tanks was recently funded and will begin construction soon. This will increase the effective area of the detector and in turn, its sensitivity to the higher energy (>10 TeV) air showers.
The following video was produced to feature the Institute for Gravitation & the Cosmos on APS TV, an initiative of the American Physical Society.
“The centers have a lot of synergies among them and as a result the institute is much greater than its parts.”
-Abhay Ashtekar, Director of IGC