The Penrose interpretation

…is a prediction by Sir Roger Penrose about the mass scale at which standard quantum mechanics breaks down. The idea is inspired by quantum gravity, because it uses both of the physical constants \hbar and G.
Penrose’s idea is a variant of objective collapse theory. In these theories the wavefunction is a physical wave, which undergoes wave function collapse as a random process, with observers playing no special role. Penrose suggests that the threshold for wave function collapse is reached when a superposition involves at least a Planck mass worth of matter. He then hypothesizes that some fundamental gravitational event occurs, causing the wavefunction to choose one branch of reality over another. Despite the difficulty of specifying this rigorously, he has mathematically described the basis states involved using the Schrödinger–Newton equations.
Accepting that wavefunctions are physically real, Penrose believes that things can exist in more than one place at one time. In his view, a macroscopic system, like a human being, cannot exist in more than one position because it has a significant gravitational field. A microscopic system, like an electron, has an insignificant gravitational field, and can exist in more than one location almost indefinitely.

In Einstein’s theory, any object that has mass causes a warp in the structure of space and time around it. This warping produces the effect we experience as gravity. Penrose points out that tiny objects, such as dust specks, atoms and electrons, produce space-time warps as well. Ignoring these warps is where most physicists go awry. If a dust speck is in two locations at the same time, each one should create its own distortion in space-time, yielding two superposed gravitational fields. According to Penrose’s theory, it takes energy to sustain these dual fields. The stability of a system depends on the amount of energy involved: the higher the energy required to sustain a system, the less stable it is. Over time, an unstable system tends to settle back to its simplest, lowest-energy state: in this case, one object in one location producing one gravitational field. If Penrose is right, gravity yanks objects back into a single location, without any need to invoke observers or parallel universes.[1]

Penrose speculates that the transition between the quantum and the macroscopic begins at the scale of dust particles, whose mass is on the order of the Planck mass. A dust particle could exist in more than one location for as long as one second, much longer than anything more massive. He has proposed an experiment to test this idea, called FELIX (Free-orbit Experiment with Laser Interferometry X-rays), in which an X-ray laser in space is directed toward a tiny mirror and split by a beam splitter from thousands of miles away; the photons are then directed toward other mirrors and reflected back. A photon that takes one path will strike the tiny mirror and set it moving, and push it back as it returns, so that, on Penrose’s account, the mirror exists in two locations at one time. If gravity acts on the mirror as he predicts, the mirror will be unable to exist in two locations at once, because gravity holds it in place.[2]
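The order of magnitude behind these claims can be sketched numerically. The sketch below assumes the commonly quoted form of Penrose’s estimate, a collapse time of roughly \hbar/E_G, with E_G taken as G m^2/d for two superposed mass distributions separated by a distance d; the masses and separations used are illustrative choices, not figures from the text.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newtonian constant of gravitation, m^3 kg^-1 s^-2

def collapse_time(mass_kg, separation_m):
    """Order-of-magnitude Penrose collapse time, tau ~ hbar / E_G, with the
    gravitational self-energy of the superposition taken as E_G ~ G m^2 / d
    for two mass distributions separated by d (an illustrative simplification)."""
    e_gravity = G * mass_kg**2 / separation_m
    return HBAR / e_gravity

# An electron superposed over 1 nm: E_G is tiny, so the lifetime is enormous.
tau_electron = collapse_time(9.109e-31, 1e-9)

# A microgram-scale dust speck superposed over 1 micron collapses far sooner.
tau_dust = collapse_time(1e-9, 1e-6)
```

The quadratic dependence on mass is what makes the predicted lifetime drop so steeply between microscopic and macroscopic objects.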

However, because this experiment would be difficult to set up, a table-top version has been proposed instead.[3]

References

  1. ^ Folger, Tim. “If an Electron Can Be in 2 Places at Once, Why Can’t You?” Discover, Vol. 25, No. 6 (June 2005), p. 33.
  2. ^ Penrose, R. The Road to Reality, pp. 856–860.
  3. ^ Folger, Tim. “If an Electron Can Be in 2 Places at Once, Why Can’t You?” Discover, Vol. 25, No. 6 (June 2005), pp. 34–35.


Quantum superposition


…refers to the quantum mechanical property of a particle that it can occupy all of its possible quantum states simultaneously. Because of this property, a complete description of a particle must include every possible state and the probability of the particle being in each of them. Since the Schrödinger equation is linear, a solution that takes into account all possible states is a linear combination of the solutions for the individual states. This mathematical property of linear equations is known as the superposition principle.

The Superposition principle of quantum mechanics

The principle of superposition states that if the world can be in any configuration, any possible arrangement of particles or fields, and if the world could also be in another configuration, then the world can also be in a state which is a superposition of the two, where the amount of each configuration that is in the superposition is specified by a complex number.

Examples

For an equation describing a physical phenomenon, the superposition principle states that any linear combination of solutions is again a solution. When this holds, the equation is linear and is said to obey the superposition principle. Thus, if the functions f1, f2, and f3 each solve a linear equation for ψ, then ψ = c1f1 + c2f2 + c3f3 is also a solution, where each c is a coefficient. For example, the electric field due to a distribution of charged particles is the vector sum of the fields of the individual particles.
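This linearity is easy to verify numerically; a minimal sketch using a discretized second-derivative operator (the grid spacing and test functions are arbitrary choices):

```python
import math

# Verify the superposition principle numerically for a linear operator:
# a discretized 1-D Laplacian acting on sampled functions.
def laplacian(f, dx):
    """Second central difference of a sampled function (interior points only)."""
    return [(f[i+1] - 2*f[i] + f[i-1]) / dx**2 for i in range(1, len(f)-1)]

dx = 0.01
xs = [i * dx for i in range(200)]
f1 = [math.sin(x) for x in xs]
f2 = [math.cos(x) for x in xs]
c1, c2 = 2.0, -3.0
combo = [c1*a + c2*b for a, b in zip(f1, f2)]

# L(c1 f1 + c2 f2) should equal c1 L(f1) + c2 L(f2) up to rounding error.
lhs = laplacian(combo, dx)
rhs = [c1*a + c2*b for a, b in zip(laplacian(f1, dx), laplacian(f2, dx))]
max_err = max(abs(a - b) for a, b in zip(lhs, rhs))
```

The discrepancy is pure floating-point rounding; for a nonlinear operator (say, one that squares f) the same test would fail.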

Similarly, probability theory states that the probability of an event can be described by a linear combination of the probabilities of certain specific other events (see Mathematical treatment). For example, the probability of flipping two coins (coin A and coin B) and having at least one turn face up can be expressed as the sum of the probabilities for three specific events: A heads with B tails, A heads with B heads, and A tails with B heads. In this case the probability can be expressed as:

P(heads ≥ 1) = P(AnotB) + P(AandB) + P(BnotA)

or even:

P(heads ≥ 1) = 1 − P(notAnotB)

Probability theory, as with quantum theory, would also require that the sum of probabilities for all possible events, not just those satisfying the previous condition, be normalized to one. Thus:

P(AnotB) + P(AandB) + P(BnotA) + P(notAnotB) = 1
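The coin arithmetic above can be checked by enumerating the four outcomes directly (assuming fair coins, so each outcome has probability 1/4):

```python
from itertools import product

# Enumerate the four equally likely outcomes of flipping coins A and B.
outcomes = list(product("HT", repeat=2))
p = 1 / len(outcomes)   # 1/4 per outcome for fair coins

# P(heads >= 1) as the sum over the three qualifying events...
p_at_least_one = sum(p for a, b in outcomes if "H" in (a, b))

# ...and as 1 minus the probability of the single excluded event.
p_complement = 1 - sum(p for a, b in outcomes if (a, b) == ("T", "T"))

# The probabilities of all possible outcomes are normalized to one.
total = len(outcomes) * p
```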

Probability theory also states that a probability distribution along a continuum (where the chance of finding something is a function of position along a continuous set of coordinates) or among discrete events (the example above) can be described using a probability density function or a vector of components, respectively, with the probability magnitude given by a square of the density function.

In quantum mechanics an additional layer of analysis is introduced, as the probability density function is replaced by a wave function ψ. The wave function is either a complex function of a finite set of real variables or a complex vector with a finite or infinite number of components. Because the coefficients in the linear combination are now complex, the probability comes from multiplying the wave function by its complex conjugate, \psi \psi^* = |\psi|^2. In cases where the functions are not complex, the probability of an event that depends on any member of a subset of the complete set of possible events is the simple sum of the probabilities of the events in that subset. For example, if an observer rings a bell whenever one or more coins land heads up in the example above, the probability of the bell ringing is the sum of the probabilities of each event in which at least one coin lands heads up. This is a simple sum because the probabilities being added are always positive. With a wave function, by contrast, the underlying amplitudes need not be positive, and combining them can produce counterintuitive results.

For example, if a photon in a plus spin state has an amplitude 0.1 to be absorbed and take an atom to the second energy level, and a photon in a minus spin state has an amplitude −0.1 to do the same thing, then a photon with equal amplitudes to be plus or minus has zero amplitude to take the atom to the second energy level, and the atom will not be excited. If the photon’s spin is measured before it reaches the atom, then whatever the result, plus or minus, the photon will have a nonzero amplitude, plus or minus 0.1, to excite the atom.

Assuming normalization, the probability density in quantum mechanics is equal to the square of the absolute value of the amplitude: the further the amplitude is from zero, the larger the probability. Where the probability distribution is represented as a continuous function, the probability is the integral of the density function over the relevant values. Where the wave function is represented as a complex vector, the probability is extracted from the absolute value of an inner product of the coefficient vector and its complex conjugate. In the atom example above, the probability that the atom will be excited is 0. Probability only enters the picture when an observer gets involved: if you look to see what state the atom is in, the different amplitudes become probabilities for seeing different things. So if you check whether the atom is excited immediately after the photon with amplitude 0 reaches it, there is no chance of seeing the atom excited.
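A short sketch of the photon example, treating the amplitudes ±0.1 from the text as given:

```python
import math

# Amplitudes (not probabilities) for the photon-absorption example: a
# plus-spin photon has amplitude +0.1 to excite the atom, a minus-spin
# photon has amplitude -0.1.
amp_plus, amp_minus = 0.1, -0.1

# A normalized equal superposition of the two spin states carries the
# average of the two amplitudes, which cancels exactly to zero.
amp_superposed = (amp_plus + amp_minus) / math.sqrt(2)
prob_superposed = abs(amp_superposed) ** 2   # the atom is never excited

# If the spin is measured first, either outcome leaves amplitude +/-0.1,
# so the excitation probability is |0.1|^2 in both cases.
prob_measured = abs(amp_plus) ** 2
```

The cancellation of the two amplitudes, impossible for classical probabilities, is the interference the paragraph describes.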

Another example: If a particle can be in position A and position B, it can also be in a state where it is an amount “3i/5” in position A and an amount “4/5” in position B. To write this, physicists usually say:

|\psi\rangle = {3\over 5} i |A\rangle + {4\over 5} |B\rangle.

In this description, only the relative sizes of the different components, and their angles to each other on the complex plane, matter. This is usually stated by declaring that two states which are a multiple of one another are the same as far as the description of the situation is concerned.

|\psi \rangle \approx \alpha |\psi \rangle
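For the example state above, the normalization and the resulting probabilities can be checked directly; the phase factor α below is an arbitrary unit-modulus choice:

```python
# The example state |psi> = (3i/5)|A> + (4/5)|B> as complex amplitudes.
amp_A = 3j / 5
amp_B = 4 / 5

norm = abs(amp_A) ** 2 + abs(amp_B) ** 2   # 9/25 + 16/25 = 1
p_A = abs(amp_A) ** 2                      # probability of finding the particle at A
p_B = abs(amp_B) ** 2                      # probability of finding it at B

# Multiplying the whole state by an overall phase alpha (|alpha| = 1)
# leaves every probability unchanged, illustrating |psi> ~ alpha |psi>.
alpha = complex(0.6, 0.8)
p_A_rescaled = abs(alpha * amp_A) ** 2
```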

The fundamental dynamical law of quantum mechanics is that the evolution is linear, meaning that if the state A turns into A′ and B turns into B′ after 10 seconds, then after 10 seconds the superposition ψ turns into a superposition of A′ and B′ with the same coefficients as A and B. A particle can have any position, so that there are different states which have any value of the position x. These are written:

|x\rangle

The principle of superposition guarantees that there are states which are arbitrary superpositions of all the positions with complex coefficients:

\sum_x \psi(x) |x\rangle

This sum is defined only if the index x is discrete. If the index ranges over the real numbers \mathbb{R}, the sum is not defined and is replaced by an integral. The quantity ψ(x) is called the wavefunction of the particle.
If a particle can have some discrete orientations of the spin, say the spin can be aligned with the z axis |+\rangle or against it |-\rangle, then the particle can have any state of the form:

C_1 |+\rangle + C_2 |-\rangle

If the particle has both position and spin, the state is a superposition of all possibilities for both:

\sum_x \psi_+(x)|x,+\rangle + \psi_-(x)|x,-\rangle \,

The configuration space of a quantum mechanical system cannot be worked out without some physical knowledge. The input is usually the allowed different classical configurations, but without the duplication of including both position and momentum.
A pair of particles can be in any combination of pairs of positions. A state where one particle is at position x and the other is at position y is written |x,y\rangle. The most general state is a superposition of the possibilities:

\sum_{xy} A(x,y) |x,y\rangle \,

The description of the two particles is much larger than the description of one particle — it is a function in twice the number of dimensions. This is also true in probability, when the statistics of two random things are correlated. If two particles are uncorrelated, the probability distribution for their joint position P(x,y) is a product of the probability of finding one at one position and the other at the other position:

P(x,y) = P_x (x) P_y(y) \,

In quantum mechanics, two particles can be in special states where the amplitudes of their position are uncorrelated. For quantum amplitudes, the word entanglement replaces the word correlation, but the analogy is exact. A disentangled wavefunction has the form:

A(x,y) = \psi_x(x)\psi_y(y) \,

while an entangled wavefunction does not have this form. Like correlation in probability, there are many more entangled states than disentangled ones. For instance, when two particles which start out with an equal amplitude to be anywhere in a box have a strong attraction and a way to dissipate energy, they can easily come together to make a bound state. The bound state still has an equal probability to be anywhere, so that each particle is equally likely to be everywhere, but the two particles will become entangled so that wherever one particle is, the other is too.
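Whether a two-particle amplitude A(x, y) factorizes can be tested numerically. The sketch below uses a toy system with two sites per particle, so A is a 2×2 matrix and a product (disentangled) state corresponds to a rank-1 matrix; the particular states chosen are illustrative:

```python
import math

def matrix_rank_2x2(A):
    """Rank of a 2x2 complex matrix: 0 if it vanishes, 1 if its
    determinant vanishes, 2 otherwise. A rank-1 amplitude matrix
    factorizes as A(x, y) = psi_x(x) * psi_y(y)."""
    if all(abs(a) < 1e-12 for row in A for a in row):
        return 0
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return 1 if abs(det) < 1e-12 else 2

s = 1 / math.sqrt(2)

# (|0> + |1>)(|0> + |1>) / 2: factorizes into two one-particle states.
product_state = [[0.5, 0.5], [0.5, 0.5]]

# (|00> + |11>) / sqrt(2): wherever one particle is, the other is too.
entangled_state = [[s, 0.0], [0.0, s]]
```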

Analogy with probability

In probability theory there is a similar principle. If a system has a probabilistic description, this description gives the probability of any configuration, and given any two different configurations, there is a state which is partly this and partly that, with positive real number coefficients, the probabilities, which say how much of each there is.

For example, if we have a probability distribution for where a particle is, it is described by the “state”

\sum_x \rho(x) |x\rangle

where ρ(x) is the probability density function, a positive number that measures the probability that the particle will be found at a given location.

The evolution equation is also linear in probability, for fundamental reasons. If the particle has some probability for going from position x to y, and from z to y, the probability of going to y starting from a state which is half-x and half-z is a half-and-half mixture of the probability of going to y from each of the options. This is the principle of linear superposition in probability.

Quantum mechanics is different, because the numbers can be positive or negative. While the complex nature of the numbers is just a doubling, if you consider the real and imaginary parts separately, the sign of the coefficients is important. In probability, two different possible outcomes always add together, so that if there are more options to get to a point z, the probability always goes up. In quantum mechanics, different possibilities can cancel.

In probability theory with a finite number of states, the probabilities can always be multiplied by a positive number to make their sum equal to one. For example, if there is a three state probability system:

x |1\rangle + y |2\rangle + z |3\rangle \,

where the probabilities x,y,z are positive numbers. Rescaling x,y,z so that

x+y+z=1 \,

the geometry of the state space is revealed to be a triangle. In general it is a simplex. There are special points in a triangle or simplex corresponding to the corners, and these points are those where one of the probabilities is equal to 1 and the others are zero. These are the unique locations where the position is known with certainty.

In a quantum mechanical system with three states, the quantum mechanical wavefunction is a superposition of states again, but this time twice as many quantities with no restriction on the sign:

A|1\rangle + B|2\rangle + C|3\rangle = (A_r + iA_i) |1\rangle + (B_r + i B_i) |2\rangle + (C_r + iC_i) |3\rangle \,

Rescaling the variables so that the sum of the squares is 1, the geometry of the space is revealed to be a high-dimensional sphere:

A_r^2 + A_i^2 + B_r^2 + B_i^2 + C_r^2 + C_i^2 = 1 \,.

A sphere has a large amount of symmetry: it can be viewed in different coordinate systems, or bases. So, unlike a probability theory, a quantum theory has a large number of different bases in which it can be equally well described. The geometry of the phase space can be viewed as a hint that the quantity in quantum mechanics which corresponds to the probability is the absolute square of the coefficient of the superposition.
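The sphere geometry can be illustrated numerically: rescale an arbitrary three-state wavefunction to unit length, then check that a rotation between two basis states (a change of basis) leaves the sum of absolute squares equal to one. The starting amplitudes and rotation angle are arbitrary:

```python
import math

# An arbitrary (unnormalized) three-state wavefunction.
amps = [1 + 2j, 3 - 1j, 0.5j]

# Rescale so the six real components lie on a unit 5-sphere.
norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
amps = [a / norm for a in amps]
radius_sq = sum(abs(a) ** 2 for a in amps)   # 1 after rescaling

# Rotate states |1> and |2> into each other: a 2x2 unitary block,
# i.e. one of the many equally good bases for the same theory.
theta = 0.7
rotated = [
    math.cos(theta) * amps[0] + math.sin(theta) * amps[1],
    -math.sin(theta) * amps[0] + math.cos(theta) * amps[1],
    amps[2],
]
radius_sq_rotated = sum(abs(a) ** 2 for a in rotated)
```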

Hamiltonian evolution

The numbers that describe the amplitudes for different possibilities define the kinematics, the space of different states. The dynamics describes how these numbers change with time. For a particle that can be in any one of infinitely many discrete positions, a particle on a lattice, the superposition principle tells you how to make a state:

\sum_n \psi_n |n\rangle \,

So that the infinite list of amplitudes \scriptstyle (\ldots, \psi_{-2}, \psi_{-1}, \psi_0, \psi_1, \psi_2, \ldots) completely describes the quantum state of the particle. This list is called the state vector; formally it is an element of a Hilbert space, an infinite-dimensional complex vector space. It is usual to represent the state so that the sum of the absolute squares of the amplitudes adds up to one:

\sum \psi_n^*\psi_n = 1

For a particle described by probability theory random-walking on a line, the analogous object is the list of probabilities (\ldots, P_{-2}, P_{-1}, P_0, P_1, P_2, \ldots), which gives the probability of finding the particle at each position. The quantities that describe how the probabilities change in time are the transition probabilities \scriptstyle K_{x\rightarrow y}(t), which give the probability that a particle starting at x ends up at y after time t. The total probability of ending up at y is given by the sum over all the possibilities:

P_y(t_0+t) = \sum_x P_x(t_0) K_{x\rightarrow y}(t) \,

The condition of conservation of probability states that starting at any x, the total probability to end up somewhere must add up to 1:

\sum_y K_{x\rightarrow y} = 1 \,

So that the total probability will be preserved, K is what is called a stochastic matrix.
When no time passes, nothing changes: for zero elapsed time, \scriptstyle K_{x\rightarrow y}(0) = \delta_{xy}, i.e. the K matrix is zero except from a state to itself. In the case that the elapsed time is short, it is better to talk about the rate of change of the probability instead of the absolute change in the probability:

  P_y(t+dt) = P_y(t) + dt \sum_x P_x R_{x\rightarrow y} \,

where \scriptstyle R_{x\rightarrow y} is the time derivative of the K matrix:

R_{x\rightarrow y} = { K_{x\rightarrow y}(dt) - \delta_{xy} \over dt} \,.

The equation for the probabilities is a differential equation which is sometimes called the master equation:

{dP_y \over dt} = \sum_x P_x R_{x\rightarrow y} \,

The R matrix is the probability per unit time for the particle to make a transition from x to y. The condition that the K matrix elements add up to one becomes the condition that the R matrix elements add up to zero:

\sum_y R_{x\rightarrow y} = 0 \,

One simple case to study is when the R matrix has an equal probability to go one unit to the left or to the right, describing a particle which has a constant rate of random walking. In this case \scriptstyle R_{x\rightarrow y} is zero unless y is x+1, x, or x−1. When y is x+1 or x−1, the R matrix has value c, and in order for the sum of the R matrix coefficients to equal zero, the value of R_{x\rightarrow x} must be −2c. The probabilities then obey the discretized diffusion equation:

{dP_x \over dt } = c(P_{x+1} - 2P_{x} + P_{x-1}) \,

which, when c is scaled appropriately and the P distribution is smooth enough to think of the system in a continuum limit, becomes:

{\partial P(x,t) \over \partial t} = c {\partial^2 P \over \partial x^2 } \,

which is the diffusion equation.
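The discretized diffusion equation above can be integrated directly. A minimal sketch with a forward-Euler step on a ring of sites (the lattice size, rate c, and time step are arbitrary, chosen small enough for stability):

```python
# Evolve the discretized diffusion (master) equation
#   dP_x/dt = c (P_{x+1} - 2 P_x + P_{x-1})
# on a ring of N sites with a forward-Euler step.
N, c, dt = 20, 1.0, 0.01

P = [0.0] * N
P[N // 2] = 1.0            # particle initially known to be at one site

for _ in range(500):
    P = [P[x] + dt * c * (P[(x + 1) % N] - 2 * P[x] + P[(x - 1) % N])
         for x in range(N)]

total = sum(P)             # conserved, because the R-matrix rows sum to zero
peak = max(P)              # the distribution has spread well below the initial 1.0
```

Probability conservation here is exactly the statement \sum_y R_{x\rightarrow y} = 0 from the text.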
Quantum amplitudes obey equations of exactly the same mathematical form, except that the quantities involved are complex numbers. The analog of the finite-time K matrix is called the U matrix:

\psi_n(t) = \sum_m U_{nm}(t) \psi_m \,

Since the sum of the absolute squares of the amplitudes must be constant, U must be unitary:

\sum_n U^*_{nm} U_{np} = \delta_{mp} \,

or, in matrix notation,

U^\dagger U = I \,

The rate of change of U is called the Hamiltonian H, up to a traditional factor of i:

H_{mn} = i{d \over dt} U_{mn}

The Hamiltonian gives the rate at which the particle has an amplitude to go from m to n. The reason it is multiplied by i is that the condition that U is unitary translates to the condition:

(I + i H^\dagger dt )(I - i H dt ) = I \,
H^\dagger - H = 0 \,

which says that H is Hermitian. The eigenvalues of the Hermitian matrix H are real quantities which have a physical interpretation as energy levels. If the factor i were absent, the H matrix would be antihermitian and would have purely imaginary eigenvalues, which is not the traditional way quantum mechanics represents observable quantities like the energy.
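That a Hermitian H generates a unitary U = e^{−iHt} can be checked numerically. The sketch below exponentiates an arbitrary 2×2 Hermitian matrix by a truncated power series and verifies U†U = I; the matrix entries and time are illustrative:

```python
def mat_mul(A, B):
    """Product of two 2x2 complex matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_minus_iHt(H, t, terms=40):
    """U = exp(-i H t) via the power series sum_n (-iHt)^n / n!."""
    M = [[-1j * t * H[i][j] for j in range(2)] for i in range(2)]
    U = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]       # running sum, starts at I
    term = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]    # current term M^n / n!
    for n in range(1, terms):
        term = mat_mul(term, M)
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        U = [[U[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return U

# A Hermitian matrix: H[j][i] == conj(H[i][j]).
H = [[1.0 + 0j, 0.5 - 0.2j], [0.5 + 0.2j, -0.3 + 0j]]
U = expm_minus_iHt(H, 2.0)

Udag = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
UdagU = mat_mul(Udag, U)   # should be the identity: probabilities conserved
```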
For a particle which has equal amplitude to move left and right, the Hermitian matrix H is zero except for nearest neighbors, where it has the value c. If the coefficient is everywhere constant, the condition that H is Hermitian demands that the amplitude to move to the left is the complex conjugate of the amplitude to move to the right. The equation of motion for ψ is the time differential equation:

i{d \psi_n \over dt} = c^* \psi_{n+1} + c \psi_{n-1}

In the case that left and right are symmetric, c is real. By redefining the phase of the wavefunction in time, \psi\rightarrow \psi e^{i2ct}, the amplitudes for being at different locations are only rescaled by a phase, so that the physical situation is unchanged. But this phase rotation introduces a diagonal term:

i{d \psi_n \over dt} = c \psi_{n+1} - 2c\psi_n + c\psi_{n-1}

which is the right choice of phase to take the continuum limit. When c is very large and ψ is slowly varying, so that the lattice can be thought of as a line, this becomes the free Schrödinger equation:

i{ \partial \psi \over \partial t } = - {\partial^2 \psi \over \partial x^2}

If there is an additional term in the H matrix which is an extra phase rotation which varies from point to point, the continuum limit is the Schrödinger equation with a potential energy:

i{ \partial \psi \over \partial t} = - {\partial^2 \psi \over \partial x^2} + V(x) \psi

These equations describe the motion of a single particle in non-relativistic quantum mechanics.

Quantum mechanics in imaginary time

The analogy between quantum mechanics and probability is very strong, so that there are many mathematical links between them. In a statistical system in discrete time, t = 1, 2, 3, …, described by a transition matrix for one time step \scriptstyle K_{m\rightarrow n}, the probability to go between two points after a finite number of time steps can be represented as a sum over all paths of the probability of taking each path:

K_{x\rightarrow y}(T) = \sum_{x(t)} \prod_t K_{x(t)x(t+1)}  \,

where the sum extends over all paths x(t) with the property that x(0) = x and x(T) = y. The analogous expression in quantum mechanics is the path integral.

A generic transition matrix in probability has a stationary distribution, which is the eventual probability to be found at any point no matter what the starting point. If there is a nonzero probability for any two paths to reach the same point at the same time, this stationary distribution does not depend on the initial conditions. In probability theory, a stochastic matrix obeys detailed balance when the stationary distribution ρn has the property:

\rho_n K_{n\rightarrow m} = \rho_m K_{m\rightarrow n} \,

Detailed balance says that the total probability of going from m to n in the stationary distribution, which is the probability of starting at m ρm times the probability of hopping from m to n, is equal to the probability of going from n to m, so that the total back-and-forth flow of probability in equilibrium is zero along any hop. The condition is automatically satisfied when n=m, so it has the same form when written as a condition for the transition-probability R matrix.

\rho_n R_{n\rightarrow m} = \rho_m R_{m\rightarrow n} \,

When the R matrix obeys detailed balance, the scale of the probabilities can be redefined using the stationary distribution, so that the rescaled quantities no longer sum to 1:

p'_n = {p_n \over \sqrt{\rho_n}} \,

In the new coordinates, the R matrix is rescaled as follows:

\sqrt{\rho_n} R_{n\rightarrow m} {1\over \sqrt{\rho_m}} = H_{nm}  \,

and H is symmetric

H_{nm} = H_{mn} \,

This matrix H defines a quantum mechanical system:

i{d \over dt} \psi_n = \sum H_{nm} \psi_m \,

whose Hamiltonian has the same eigenvalues as those of the R matrix of the statistical system. The eigenvectors are the same too, except expressed in the rescaled basis. The stationary distribution of the statistical system is the ground state of the Hamiltonian and it has energy exactly zero, while all the other energies are positive. If H is exponentiated to find the U matrix:

U(t) = e^{-iHt} \,

and t is allowed to take on complex values, the K′ matrix is found by taking the time to be imaginary:

K'(t) = e^{-Ht} \,

For quantum systems which are invariant under time reversal the Hamiltonian can be made real and symmetric, so that the action of time-reversal on the wave-function is just complex conjugation. If such a Hamiltonian has a unique lowest energy state with a positive real wave-function, as it often does for physical reasons, it is connected to a stochastic system in imaginary time. This relationship between stochastic systems and quantum systems sheds much light on supersymmetry.
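The detailed-balance construction above can be sketched numerically: build a rate matrix R from symmetric equilibrium fluxes over an arbitrary stationary distribution ρ, then verify that the rescaled matrix H is symmetric and that ρ is stationary. The particular numbers are illustrative:

```python
import math

# An arbitrary stationary distribution over 3 states.
rho = [0.5, 0.3, 0.2]

# Symmetric equilibrium probability fluxes F[n][m] = F[m][n]; setting
# R[n][m] = F[n][m] / rho[n] then guarantees detailed balance:
# rho[n] * R[n][m] == rho[m] * R[m][n].
F = [[0.0, 0.12, 0.06],
     [0.12, 0.0, 0.04],
     [0.06, 0.04, 0.0]]
R = [[F[n][m] / rho[n] for m in range(3)] for n in range(3)]
for n in range(3):
    R[n][n] = -sum(R[n][m] for m in range(3) if m != n)   # rows sum to zero

# The rescaling H[n][m] = sqrt(rho_n) R[n][m] / sqrt(rho_m) is symmetric.
H = [[math.sqrt(rho[n]) * R[n][m] / math.sqrt(rho[m]) for m in range(3)]
     for n in range(3)]
asym = max(abs(H[n][m] - H[m][n]) for n in range(3) for m in range(3))

# Stationarity: rho does not change under the master equation.
drift = max(abs(sum(rho[n] * R[n][m] for n in range(3))) for m in range(3))
```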

Formal interpretation

Applying the superposition principle to a quantum mechanical particle, the configurations of the particle are all positions, so superpositions make a complex wave in space. The coefficients of the linear superposition form a wave which describes the particle as well as is possible, and whose amplitudes interfere according to Huygens’ principle.

For any physical quantity in quantum mechanics, there is a list of all the states where that quantity has some definite value. These states are necessarily perpendicular to each other, using the Euclidean notion of perpendicularity which comes from sums-of-squares length, with the further condition that they not be complex multiples of each other. This list of perpendicular states has an associated value, which is the value of the physical quantity. The superposition principle guarantees that any state can be written as a combination of states of this form with complex coefficients.
Write each state with value q of the physical quantity as a vector \psi^q_n in some basis: a list of numbers, one at each value of n, for the vector which has value q for the physical quantity. Now form the outer product of each such vector with itself, and add these outer products, weighted by the values q, to make the matrix

A_{nm} = \sum_q q \psi^{*q}_n \psi^q_m

where the sum extends over all possible values of q. This matrix is necessarily Hermitian because it is formed from the orthogonal states, and it has eigenvalues q. The matrix A is called the observable associated to the physical quantity. Its eigenvalues and eigenvectors determine the physical quantity and the states which have definite values for that quantity.

Every physical quantity has a Hermitian linear operator associated to it, and the states where the value of this physical quantity is definite are the eigenstates of this linear operator. The linear combination of two or more eigenstates results in a quantum superposition of two or more values of the quantity. If the quantity is measured, the value of the physical quantity will be random, with a probability equal to the absolute square of the coefficient of the superposition in the linear combination. Immediately after the measurement, the state will be given by the eigenvector corresponding to the measured eigenvalue.
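The construction of an observable from its values and eigenstates can be sketched for a two-state quantity; the values ±1 and the eigenvectors below are illustrative choices:

```python
import math

# Build the observable A = sum_q q |q><q| for a quantity with values
# q = +1 and q = -1 and orthonormal eigenstates along (|0>+|1>)/sqrt(2)
# and (|0>-|1>)/sqrt(2).
s = 1 / math.sqrt(2)
states = {+1: [s, s], -1: [s, -s]}   # eigenvectors psi^q (real here)

A = [[sum(q * states[q][n] * states[q][m] for q in states) for m in range(2)]
     for n in range((2))]
# For these vectors A works out to [[0, 1], [1, 0]].

# Measuring A in the state |0> gives +1 or -1 with probability equal to
# the absolute square of the overlap with each eigenstate.
psi = [1.0, 0.0]
probs = {q: abs(sum(states[q][n] * psi[n] for n in range(2))) ** 2
         for q in states}
```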

It is natural to ask why “real” (macroscopic, Newtonian) objects and events do not seem to display quantum mechanical features such as superposition. In 1935, Erwin Schrödinger devised a well-known thought experiment, now known as Schrödinger’s cat, which highlighted the dissonance between quantum mechanics and Newtonian physics, in which only one configuration occurs (a configuration in Newtonian physics specifying both the position and momentum of each particle).
In fact, quantum superposition results in many directly observable effects, such as interference peaks from an electron wave in a double-slit experiment. The superpositions, however, persist at all scales, absent a mechanism for removing them. This mechanism can be philosophical as in the Copenhagen interpretation, or physical.

Recent research indicates that chlorophyll within plants appears to exploit the feature of quantum superposition to achieve greater efficiency in transporting energy, allowing pigment proteins to be spaced further apart than would otherwise be possible.[1][2]

If the operators corresponding to two observables do not commute, they have no simultaneous eigenstates and they obey an uncertainty principle. A state where one observable has a definite value corresponds to a superposition of many states for the other observable.

References

  1. ^ Collini, Elisabetta; Wong, Cathy Y.; Wilk, Krystyna E.; Curmi, Paul M. G.; Brumer, Paul; Scholes, Gregory D. (4 February 2010). “Coherently wired light-harvesting in photosynthetic marine algae at ambient temperature”. Nature 463: 644–647. http://www.nature.com/nature/journal/v463/n7281/full/nature08811.html.
  2. ^ Moyer, Michael (September 2009). “Quantum Entanglement, Photosynthesis and Better Solar Cells”. Scientific American. http://www.scientificamerican.com/article.cfm?id=quantum-entanglement-and-photo. Retrieved 12 May 2010.

Time Dilation

…is a phenomenon (or two phenomena, as described below) predicted by the theory of relativity. It can be illustrated by supposing that two observers are in motion relative to each other, or differently situated with regard to nearby gravitational masses, and that each carries a clock of identical construction and function. The point of view of each observer will generally be that the other observer’s clock is in error (has changed its rate).

Both causes (distance to gravitational mass and relative speed) can operate together.

Overview

Time dilation can arise from:

  1. the relative velocity of motion between two observers, or
  2. the difference in their distance from a gravitational mass.

Relative velocity time dilation

When two observers are in relative uniform motion and far away from any gravitational mass, the point of view of each will be that the other’s (moving) clock is ticking at a slower rate than the local clock. The faster the relative velocity, the greater the magnitude of time dilation. This case is sometimes called special relativistic time dilation. It is often interpreted as time “slowing down” for the other (moving) clock. But that is only true from the physical point of view of the local observer, and of others at relative rest (i.e. in the local observer’s frame of reference). The point of view of the other observer will be that again the local clock (this time the other clock) is correct and it is the distant moving one that is slow. From a local perspective, time registered by clocks that are at rest with respect to the local frame of reference (and far from any gravitational mass) always appears to pass at the same rate.[1]
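The size of the effect is governed by the Lorentz factor γ = 1/√(1 − v²/c²); a short sketch (the speeds chosen are illustrative):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def gamma(v):
    """Lorentz factor: a clock moving at speed v is measured to tick
    slower by a factor of 1/gamma."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def moving_clock_time(t, v):
    """Proper time elapsed on a clock moving at v while t seconds
    pass in the local frame."""
    return t / gamma(v)

# At everyday speeds gamma is barely above 1; at relativistic
# speeds the effect is large.
gamma_jet = gamma(250.0)          # an airliner
gamma_muon = gamma(0.994 * C)     # a fast cosmic-ray muon
```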

Gravitational time dilation

There is another case of time dilation, where both observers are differently situated in their distance from a significant gravitational mass, such as (for terrestrial observers) the Earth or the Sun. One may suppose for simplicity that the observers are at relative rest (which is not the case for two observers both rotating with the Earth, an extra factor described below). In the simplified case, the general theory of relativity describes how, for both observers, the clock that is closer to the gravitational mass, i.e. deeper in its “gravity well”, appears to go slower than the clock that is more distant from the mass (or higher in altitude away from the center of the gravitational mass). That does not mean that the two observers fully agree: each still takes the local clock to be correct. The observer more distant from the mass (higher in altitude) measures the other clock (closer to the mass, lower in altitude) to be slower than the local correct rate, and the observer situated closer to the mass (lower in altitude) measures the other clock (farther from the mass, higher in altitude) to be faster than the local correct rate. They agree at least that the clock nearer the mass is slower in rate, and on the ratio of the difference.
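The agreed ratio can be computed from the Schwarzschild factor √(1 − 2GM/(rc²)) for static clocks; a sketch comparing a ground clock with one at a hypothetical GPS-like altitude (the altitude is an illustrative choice, and orbital motion is ignored):

```python
import math

G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s
M_EARTH = 5.972e24     # mass of the Earth, kg
R_EARTH = 6.371e6      # mean radius of the Earth, m

def clock_rate(r, mass=M_EARTH):
    """Ticking rate of a static clock at radius r relative to one far
    from the mass, from the Schwarzschild factor sqrt(1 - 2GM/(r c^2))."""
    return math.sqrt(1.0 - 2.0 * G * mass / (r * C ** 2))

rate_ground = clock_rate(R_EARTH)
rate_high = clock_rate(R_EARTH + 20.2e6)   # ~20,200 km up, a GPS-like altitude

# The deeper clock ticks slower, and both observers agree on the ratio.
ratio = rate_ground / rate_high
```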

Time dilation: special vs. general theories of relativity

In Albert Einstein’s theories of relativity, time dilation in these two circumstances can be summarized:

  • In special relativity (or, hypothetically far from all gravitational mass), clocks that are moving with respect to an inertial system of observation are measured to be running slower. This effect is described precisely by the Lorentz transformation.

Thus, in special relativity, the time dilation effect is reciprocal: as observed from the point of view of either of two clocks which are in motion with respect to each other, it will be the other clock that is time dilated. (This presumes that the relative motion of both parties is uniform; that is, they do not accelerate with respect to one another during the course of the observations.)

In contrast, gravitational time dilation (as treated in general relativity) is not reciprocal: an observer at the top of a tower will observe that clocks at ground level tick slower, and observers on the ground will agree about that, i.e. about the direction and the ratio of the difference. There is not full agreement: each observer still makes the local clock out to be correct, but the direction and ratio of gravitational time dilation are agreed by all observers, independent of their altitude.

Simple inference of time dilation due to relative velocity

Observer at rest sees time 2L/c.

Observer moving parallel relative to setup, sees longer path, time > 2L/c, same speed c.

Time dilation can be inferred from the observed fact of the constancy of the speed of light in all reference frames.[2][3][4][5]

This constancy of the speed of light means, counter to intuition, that speeds of material objects and light are not additive. It is not possible to make the speed of light appear faster by approaching at speed towards the material source that is emitting light. It is not possible to make the speed of light appear slower by receding from the source at speed. From one point of view, it is the implications of this unexpected constancy that force the abandonment of constancies expected elsewhere, such as the constancy of measured time intervals.

Consider a simple clock consisting of two mirrors A and B, between which a light pulse is bouncing. The separation of the mirrors is L and the clock ticks once each time it hits a given mirror.
In the frame where the clock is at rest (diagram at right), the light pulse traces out a path of length 2L and the period of the clock is 2L divided by the speed of light:

\Delta t = \frac{2 L}{c}.

From the frame of reference of a moving observer traveling at the speed v (diagram at lower right), the light pulse traces out a longer, angled path. The second postulate of special relativity states that the speed of light is constant in all frames, which implies a lengthening of the period of this clock from the moving observer’s perspective. That is to say, in a frame moving relative to the clock, the clock appears to be running more slowly. Straightforward application of the Pythagorean theorem leads to the well-known prediction of special relativity:

The total time for the light pulse to trace its path is given by

\Delta t' = \frac{2 D}{c}.

The length of the half path can be calculated as a function of known quantities as

D = \sqrt{\left (\frac{1}{2}v \Delta t'\right )^2+L^2}.

Substituting D from this equation into the previous and solving for Δt′ gives:

\Delta t' = \frac{2L/c}{\sqrt{1-v^2/c^2}}

and thus, with the definition of Δt:

\Delta t' = \frac{\Delta t}{\sqrt{1-v^2/c^2}}

which expresses the fact that for the moving observer the period of the clock is longer than in the frame of the clock itself.
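The light-clock derivation above can be sketched numerically; the mirror separation and speed below are illustrative values, not from the text:

```python
import math

def light_clock_period(L, v, c=2.998e8):
    """Return (proper period Δt, dilated period Δt') for a light clock with
    mirror separation L (meters), as seen from a frame moving at speed v
    relative to the clock."""
    dt = 2.0 * L / c                                  # period in the clock's rest frame
    dt_prime = dt / math.sqrt(1.0 - (v / c) ** 2)     # period in the moving frame
    return dt, dt_prime

dt, dt_prime = light_clock_period(L=1.0, v=0.6 * 2.998e8)
print(dt_prime / dt)   # 1/sqrt(1 - 0.36) = 1.25
```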

Time dilation due to relative velocity symmetric between observers

Common sense would dictate that if time passage has slowed for a moving object, the moving object would observe the external world to be correspondingly “sped up”. Counterintuitively, special relativity predicts the opposite.

A similar oddity occurs in everyday life. If Sam sees Abigail at a distance she appears small to him and at the same time Sam appears small to Abigail. Being very familiar with the effects of perspective, we see no mystery or a hint of a paradox in this situation.[6]

One is accustomed to the notion of relativity with respect to distance: the distance from Los Angeles to New York is by convention the same as the distance from New York to Los Angeles. On the other hand, when speeds are considered, one thinks of an object as “actually” moving, overlooking that its motion is always relative to something else — to the stars, the ground or to oneself. If one object is moving with respect to another, the latter is moving with respect to the former and with equal relative speed.

In the special theory of relativity, a moving clock is found to be ticking slowly with respect to the observer’s clock. If Sam and Abigail are on different trains in near-lightspeed relative motion, Sam measures (by all methods of measurement) clocks on Abigail’s train to be running slowly and similarly, Abigail measures clocks on Sam’s train to be running slowly.

Note that in all such attempts to establish “synchronization” within the reference system, the question of whether something happening at one location is in fact happening simultaneously with something happening elsewhere, is of key importance. Calculations are ultimately based on determining which events are simultaneous. Furthermore, establishing simultaneity of events separated in space necessarily requires transmission of information between locations, which by itself is an indication that the speed of light will enter the determination of simultaneity.

It is a natural and legitimate question to ask how, in detail, special relativity can be self-consistent if clock A is time-dilated with respect to clock B and clock B is also time-dilated with respect to clock A. It is by challenging the assumptions built into the common notion of simultaneity that logical consistency can be restored. Simultaneity is a relationship between an observer in a particular frame of reference and a set of events. By analogy, left and right are accepted to vary with the position of the observer, because they apply to a relationship. In a similar vein, Plato explained that up and down describe a relationship to the earth and one would not fall off at the antipodes.

Within the framework of the theory and its terminology there is a relativity of simultaneity that affects how the specified events are aligned with respect to each other by observers in relative motion. Because the pairs of putatively simultaneous moments are identified differently by different observers (as illustrated in the twin paradox article), each can treat the other clock as being the slow one without relativity being self-contradictory. This can be explained in many ways, some of which follow.

Temporal coordinate systems and clock synchronization

In relativity, temporal coordinate systems are set up using a procedure for synchronizing clocks, discussed by Poincaré (1900) in relation to Lorentz’s local time (see relativity of simultaneity). It is now usually called the Einstein synchronization procedure, since it appeared in his 1905 paper.

An observer with a clock sends a light signal out at time t1 according to his clock. At a distant event, that light signal is reflected back to, and arrives back at, the observer at time t2 according to his clock. Since the light travels the same path at the same rate going both out and back for the observer in this scenario, the observer assigns the event of the reflection the coordinate time tE = (t1 + t2) / 2. In this way, a single observer’s clock can be used to define temporal coordinates which are good anywhere in the universe.
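A minimal sketch of the synchronization rule (the function name and the times are illustrative):

```python
def einstein_sync(t1, t2):
    """Coordinate time assigned to the reflection event, given the emission
    time t1 and the return time t2 on the observer's own clock."""
    return 0.5 * (t1 + t2)

# Signal sent at t1 = 10 s, echo received at t2 = 14 s: the reflection
# event is assigned coordinate time 12 s, and the one-way distance to the
# reflector is c * (t2 - t1) / 2.
print(einstein_sync(10.0, 14.0))   # 12.0
```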

Symmetric time dilation occurs with respect to temporal coordinate systems set up in this manner. It is an effect where another clock is being viewed as running slowly by an observer. Observers do not consider their own clock time to be time-dilated, but may find that it is observed to be time-dilated in another coordinate system.

Overview of formulae

Time dilation due to relative velocity

Lorentz factor as a function of speed (in natural units where c=1). Notice that for small speeds (less than 0.1), γ is approximately 1

The formula for determining time dilation in special relativity is:

 \Delta t' = \gamma \, \Delta t = \frac{\Delta t}{\sqrt{1-v^2/c^2}} \,

where:

Δt is the time interval between two co-local events (i.e. happening at the same place) for an observer in some inertial frame (e.g. ticks on his clock) – this is known as the proper time;
Δt′ is the time interval between those same events, as measured by another observer inertially moving with velocity v with respect to the former observer;
v is the relative velocity between the observer and the moving clock;
c is the speed of light; and

 \gamma = \frac{1}{\sqrt{1-v^2/c^2}} \,

is the Lorentz factor. Thus the duration of the clock cycle of a moving clock is found to be increased: it is measured to be “running slow”. The range of such variances in ordinary life, where v ≪ c, even considering space travel, is not great enough to produce easily detectable time dilation effects, and such vanishingly small effects can be safely ignored. It is only when an object approaches speeds on the order of 30,000 km/s (1/10 the speed of light) that time dilation becomes important.
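As a rough numerical illustration of these claims (the function name and sample speeds are our own):

```python
import math

C = 2.998e8  # speed of light, m/s

def lorentz_gamma(v):
    """Lorentz factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# For everyday speeds gamma is indistinguishable from 1 ...
print(lorentz_gamma(300.0))       # ~1 + 5e-13 (roughly airliner speed)
# ... and only becomes appreciable around 0.1 c and beyond
print(lorentz_gamma(0.1 * C))     # ~1.005
print(lorentz_gamma(0.866 * C))   # ~2.0
```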

Time dilation by the Lorentz factor was predicted by Joseph Larmor (1897), at least for electrons orbiting a nucleus. Thus “… individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio \sqrt{1 - v^2/c^2}” (Larmor 1897). Time dilation of magnitude corresponding to this (Lorentz) factor has been experimentally confirmed, as described below.

Time dilation due to gravitation and motion together

Astronomical time scales and the GPS system represent significant practical applications, presenting problems that call for consideration of the combined effects of mass and motion in producing time dilation.
Relativistic time dilation effects, for the solar system and the Earth, have been evaluated from the starting point of an approximation to the Schwarzschild solution to the Einstein field equations. A timelike interval dtE in this metric can be approximated, when expressed in rectangular coordinates and when truncated of higher powers in 1/c2, in the form:[7][8]

 dt_E^2 = \left( 1-\frac{2GM_i}{r_i c^2} \right) dt_c^2 - \frac{dx^2+dy^2+dz^2}{c^2}, \,
(1)

where:

dtE (expressed as a time-like interval) is a small increment forming part of an interval in the proper time tE (an interval that could be recorded on an atomic clock);
dtc is a small increment in the timelike coordinate tc (“coordinate time“) of the clock’s position in the chosen reference frame;
dx, dy and dz are small increments in three orthogonal space-like coordinates x, y, z of the clock’s position in the chosen reference frame; and
GMi/ri represents a sum, to be designated U, of gravitational potentials due to the masses in the neighborhood, based on their distances ri from the clock. This sum of the GMi/ri is evaluated approximately, as a sum of Newtonian gravitational potentials (plus any tidal potentials considered), and is represented below as U (using the positive astronomical sign convention for gravitational potentials). The scope of the approximation may be extended to a case where U further includes effects of external masses other than the Mi, in the form of tidal gravitational potentials that prevail (due to the external masses) in a suitably small region of space around a point of the reference frame located somewhere in a gravity well due to those external masses, where the size of ‘suitably small’ remains to be investigated.[9]

From this, after putting the velocity of the clock (in the coordinates of the chosen reference frame) as

v^2=\frac{dx^2+dy^2+dz^2}{dt_c^2}, \,
(2)

(then taking the square root and truncating after binomial expansion, neglecting terms beyond the first power in 1/c2), a relation between the rate of the proper time and the rate of the coordinate time can be obtained as the differential equation[10]

\frac{dt_E}{dt_c}= 1-\frac{U}{c^2}-\frac{v^2}{2c^2}. \,
(3)

Equation (3) represents combined time dilations due to mass and motion, approximated to the first order in powers of 1/c2. The approximation can be applied to a number of the weak-field situations found around the Earth and in the solar-system. It can be thought of as relating the rate of proper time tE that can be measured by a clock, with the rate of a coordinate time tc.

In particular, for explanatory purposes, the time-dilation equation (3) provides a way of conceiving coordinate time, by showing that the rate of the clock would be exactly equal to the rate of the coordinate time if this “coordinate clock” could be situated

(a) hypothetically outside all relevant ‘gravity wells‘, e.g. remote from all gravitational masses Mi, (so that U=0), and also
(b) at rest in relation to the chosen system of coordinates (so that v=0).

Equation (3) has been developed and integrated for the case where the reference frame is the solar system barycentric (‘ssb’) reference frame, to show the (time-varying) time dilation between the ssb coordinate time and local time at the Earth’s surface: the main effects found included a mean time dilation of about 0.49 second per year (slower at the Earth’s surface than for the ssb coordinate time), plus periodic modulation terms of which the largest has an annual period and an amplitude of about 1.66 millisecond.[11][12]

Equation (3) has also been developed and integrated for the case of clocks at or near the Earth’s surface. For clocks fixed to the rotating Earth’s surface at mean sea level, regarded as a surface of the geoid, the sum (U + v2/2) is a very nearly constant geopotential, and decreases with increasing height above sea level approximately as the product of the change in height and the gradient of the geopotential. This has been evaluated as a fractional increase in clock rate of about 1.1×10−13 per kilometer of height above sea level due to a decrease in combined rate of time dilation with increasing altitude. The value of dtE/dtc at height is to be compared with the corresponding value at mean sea level.[13] (Both values are slightly below 1, the value at height being a little larger (closer to 1) than the value at sea level.)
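The quoted figure of about 1.1×10−13 per kilometer can be roughly checked as the geopotential gradient g·h/c², taking g as the standard surface gravity (a back-of-the-envelope sketch, not the geoid evaluation of the cited work):

```python
# Fractional clock-rate increase per kilometre of height near the Earth's
# surface, approximated by g*h/c^2.
g = 9.81        # standard surface gravity, m/s^2
h = 1000.0      # height difference, m
c = 2.998e8     # speed of light, m/s

fractional_rate = g * h / c**2
print(fractional_rate)   # ~1.09e-13, close to the quoted 1.1e-13 per km
```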

A fuller development of equation (3) for the near-Earth situation has been used to evaluate the combined time dilations relative to the Earth’s surface experienced along the trajectories of satellites of the GPS global positioning system. The resulting values (in this case they are relativistic increases in the rate of the satellite-borne clocks, by about 38 microseconds per day) form the basis for adjustments essential for the functioning of the system.[14]
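The quoted figure of about 38 microseconds per day can be roughly reproduced from equation (3), using nominal GPS orbit values and ignoring the Earth's rotation and the geoid. This is a sketch under those simplifications, not the full model used in practice:

```python
import math

GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8       # speed of light, m/s
R_EARTH = 6.371e6      # mean Earth radius, m
R_GPS = 2.656e7        # nominal GPS orbit radius (semi-major axis), m
DAY = 86400.0          # seconds per day

v_gps = math.sqrt(GM / R_GPS)   # circular orbital speed, ~3.9 km/s

# Equation (3): dtE/dtc = 1 - U/c^2 - v^2/(2 c^2)
rate_surface = 1.0 - GM / (R_EARTH * c**2)                    # ground clock, v taken as 0
rate_gps = 1.0 - GM / (R_GPS * c**2) - v_gps**2 / (2 * c**2)  # satellite clock

net_gain_per_day = (rate_gps - rate_surface) * DAY
print(net_gain_per_day * 1e6)   # microseconds per day, ~ +38
```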

This gravitational time dilation relationship has been used in the synchronization or correlation of atomic clocks used to implement and maintain the atomic time scale TAI, where the different clocks are located at different heights above sea level, and since 1977 have had their frequencies steered to compensate for the differences of rate with height.[15]

In pulsar timing, the advance or retardation of the pulsar phase due to gravitational and motional time dilation is called the “Einstein Delay”.

Experimental confirmation

Time dilation has been tested a number of times. The routine work carried on in particle accelerators since the 1950s, such as those at CERN, is a continuously running test of the time dilation of special relativity. The specific experiments include:

Velocity time dilation tests

  • Ives and Stilwell (1938, 1941), “An experimental study of the rate of a moving clock”, in two parts. The stated purpose of these experiments was to verify the time dilation effect, predicted by Larmor–Lorentz ether theory, due to motion through the ether, using Einstein’s suggestion that the Doppler effect in canal rays would provide a suitable experiment. These experiments measured the Doppler shift of the radiation emitted from canal rays, when viewed from directly in front and from directly behind. The high and low frequencies detected were not the classical values predicted,
f_\mathrm{detected} = \frac{f_\mathrm{moving}}{1 - v/c} and f_\mathrm{detected} = \frac{f_\mathrm{moving}}{1 + v/c},
i.e. for sources with invariant frequencies (f_\mathrm{moving} = f_\mathrm{rest}), the classical predictions are f_\mathrm{rest}/(1 - v/c) and f_\mathrm{rest}/(1 + v/c). The high and low frequencies of the radiation from the moving sources were instead measured as

f_\mathrm{detected} = f_\mathrm{rest}\sqrt{\left(1 + v/c\right)/\left(1 - v/c\right) } and f_\mathrm{rest}\sqrt{\left(1 - v/c\right)/\left(1 + v/c\right)}
as deduced by Einstein (1905) from the Lorentz transformation, when the source is running slow by the Lorentz factor.
  • Rossi and Hall (1941) compared the population of cosmic-ray-produced muons at the top of a mountain to that observed at sea level. Although the travel time for the muons from the top of the mountain to the base is several muon half-lives, the muon sample at the base was only moderately reduced. This is explained by the time dilation attributed to their high speed relative to the experimenters. That is to say, the muons were decaying about 10 times slower than if they were at rest with respect to the experimenters.
  • Hasselkamp, Mondry, and Scharmann[16] (1979) measured the Doppler shift from a source moving at right angles to the line of sight (the transverse Doppler shift). The most general relationship between frequencies of the radiation from the moving sources is given by:
f_\mathrm{detected} = f_\mathrm{rest} \frac{\sqrt{1 - v^2/c^2}}{1 - \frac{v}{c} \cos\phi}
as deduced by Einstein (1905)[1]. For \phi = 90^\circ (\cos\phi = 0\,) this reduces to f_\mathrm{detected} = f_\mathrm{rest}/\gamma. Thus there is no first-order (classical) Doppler shift, and the lower frequency of the moving source can be attributed to the time dilation effect alone.
  • In 2010 time dilation was observed at speeds of less than 10 meters per second using optical atomic clocks connected by 75 meters of optical fiber.[17]
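As a numerical illustration of the Ives–Stilwell comparison above: the mean of the blue- and red-shifted frequencies differs at second order between the classical and relativistic predictions, which is what the experiment could resolve. The β value here is illustrative, not the experiment's:

```python
import math

beta = 0.005       # v/c, of the order achievable with canal rays
f_rest = 1.0       # normalized rest frequency

# Mean of the high and low detected frequencies, classical vs. relativistic
f_classical = (f_rest / (1 - beta) + f_rest / (1 + beta)) / 2
f_relativistic = (f_rest * math.sqrt((1 + beta) / (1 - beta))
                  + f_rest * math.sqrt((1 - beta) / (1 + beta))) / 2

gamma = 1.0 / math.sqrt(1 - beta**2)
print(abs(f_relativistic - f_rest * gamma) < 1e-12)   # True: relativistic mean is exactly gamma*f_rest
print(f_classical - f_relativistic)                   # ~beta**2/2, the second-order difference
```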

Gravitational time dilation tests

  • Pound and Rebka in 1959 measured the very slight gravitational red shift in the frequency of light emitted at a lower height, where Earth’s gravitational field is relatively more intense. The results were within 10% of the predictions of general relativity. Pound and Snider later (in 1964) obtained an even closer result, to within 1%. This effect is as predicted by gravitational time dilation.
  • In 2010 gravitational time dilation was measured at the Earth’s surface with a height difference of only one meter, using optical atomic clocks.[17]

Velocity and gravitational time dilation combined-effect tests

  • Hafele and Keating, in 1971, flew caesium atomic clocks east and west around the Earth in commercial airliners, to compare the elapsed time against that of a clock that remained at the US Naval Observatory. Two opposite effects came into play. The clocks were expected to age more quickly (show a larger elapsed time) than the reference clock, since they were in a higher (weaker) gravitational potential for most of the trip (c.f. Pound, Rebka). But also, contrastingly, the moving clocks were expected to age more slowly because of the speed of their travel. The gravitational effect was the larger, and the clocks suffered a net gain in elapsed time. To within experimental error, the net gain was consistent with the difference between the predicted gravitational gain and the predicted velocity time loss. In 2005, the National Physical Laboratory in the United Kingdom reported their limited replication of this experiment.[18] The NPL experiment differed from the original in that the caesium clocks were sent on a shorter trip (London–Washington D.C. return), but the clocks were more accurate. The reported results are within 4% of the predictions of relativity.
  • The Global Positioning System can be considered a continuously operating experiment in both special and general relativity. The in-orbit clocks are corrected for both special and general relativistic time dilation effects as described above, so that (as observed from the Earth’s surface) they run at the same rate as clocks on the surface of the Earth. In addition, but not directly time dilation related, general relativistic correction terms are built into the model of motion that the satellites broadcast to receivers — uncorrected, these effects would result in an approximately 7-metre (23 ft) oscillation in the pseudo-ranges measured by a receiver over a cycle of 12 hours.

Muon lifetime

A comparison of muon lifetimes at different speeds is possible. In the laboratory, slow muons are produced, and in the atmosphere very fast moving muons are introduced by cosmic rays. Taking the muon lifetime at rest as the laboratory value of 2.22 μs, the lifetime of a cosmic ray produced muon traveling at 98% of the speed of light is about five times longer, in agreement with observations.[19] In this experiment the “clock” is the time taken by processes leading to muon decay, and these processes take place in the moving muon at its own “clock rate”, which is much slower than the laboratory clock.
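The "about five times longer" figure follows directly from the Lorentz factor at 98% of the speed of light (a sketch using the rounded laboratory lifetime from the text):

```python
import math

C = 2.998e8   # speed of light, m/s

def dilated_lifetime(tau_rest, v):
    """Mean lifetime of a particle moving at speed v, given its rest lifetime."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * tau_rest

tau_lab = 2.2e-6                                 # muon lifetime at rest, s
tau_fast = dilated_lifetime(tau_lab, 0.98 * C)   # cosmic-ray muon at 0.98 c
print(tau_fast / tau_lab)                        # ~5.0, "about five times longer"
```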

Time dilation and space flight

Time dilation would make it possible for passengers in a fast-moving vehicle to travel further into the future while aging very little, in that their great speed slows down the rate of passage of on-board time. That is, the ship’s clock (and according to relativity, any human travelling with it) shows less elapsed time than the clocks of observers on Earth. For sufficiently high speeds the effect is dramatic. For example, one year of travel might correspond to ten years at home. Indeed, a constant 1 g acceleration would permit humans to travel as far as light has been able to travel since the big bang (some 13.7 billion light years) in one human lifetime. The space travellers could return to Earth billions of years in the future. A scenario based on this idea was presented in the novel Planet of the Apes by Pierre Boulle.
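The within-one-lifetime claim can be checked with the standard hyperbolic-motion relation for constant proper acceleration, d = (c²/g)(cosh(gτ/c) − 1), inverted for the proper time τ. This sketch assumes acceleration only (no deceleration phase) and rounded constants:

```python
import math

c = 2.998e8        # speed of light, m/s
g = 9.81           # proper acceleration, m/s^2
YEAR = 3.156e7     # seconds per year
LY = c * YEAR      # one light year, m

d = 13.7e9 * LY    # ~distance light has travelled since the big bang

# Invert d = (c^2/g) * (cosh(g*tau/c) - 1) for the on-board proper time tau
tau = (c / g) * math.acosh(1.0 + g * d / c**2)
print(tau / YEAR)  # ~23 years of proper time, within a human lifetime
```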

A more likely use of this effect would be to enable humans to travel to nearby stars without spending their entire lives aboard the ship. However, any such application of time dilation during interstellar travel would require the use of some new, advanced method of propulsion. Project Orion has been the only major attempt toward this idea.

Current space flight technology has fundamental theoretical limits based on the practical problem that an increasing amount of energy is required for propulsion as a craft approaches the speed of light. The likelihood of collision with small space debris and other particulate material is another practical limitation. At the velocities presently attained, however, time dilation is not a factor in space travel. Travel to regions of space-time where gravitational time dilation is taking place, such as within the gravitational field of a black hole but outside the event horizon (perhaps on a hyperbolic trajectory exiting the field), could also yield results consistent with present theory.

Time dilation at constant acceleration

In special relativity, time dilation is most simply described in circumstances where relative velocity is unchanging. Nevertheless, the Lorentz equations allow one to calculate proper time and movement in space for the simple case of a spaceship whose acceleration, relative to some referent object in uniform (i.e. constant velocity) motion, equals g throughout the period of measurement.

Let t be the time in an inertial frame subsequently called the rest frame. Let x be a spatial coordinate, and let the direction of the constant acceleration as well as the spaceship’s velocity (relative to the rest frame) be parallel to the x-axis. Assuming that the spaceship’s position at time t = 0 is x = 0 and its velocity is v0, and defining the following abbreviation

\gamma_0 := \frac{1}{\sqrt{1-v_0^2/c^2}},

the following formulas hold:[20]
Position:

x(t) = \frac {c^2}{g} \left( \sqrt{1 + \frac{\left(gt + v_0\gamma_0\right)^2}{c^2}} -\gamma_0 \right).

Velocity:

v(t) =\frac{gt + v_0\gamma_0}{\sqrt{1 + \frac{ \left(gt + v_0\gamma_0\right)^2}{c^2}}}.

Proper time:

\tau(t) = \tau_0 + \int_0^t \sqrt{ 1 - \left( \frac{v(t')}{c} \right)^2 } dt'

In the case where v(0) = v0 = 0 and τ(0) = τ0 = 0 the integral can be expressed as a logarithmic function or, equivalently, as an inverse hyperbolic function:

\tau(t) = \frac{c}{g} \ln \left(  \frac{gt}{c} + \sqrt{ 1 + \left( \frac{gt}{c} \right)^2 } \right) = \frac{c}{g} \operatorname {arsinh} \left( \frac{gt}{c} \right) .
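The closed form can be checked against a direct numerical integration of the proper-time integral above, for v0 = 0 and illustrative values of g and t:

```python
import math

c = 2.998e8    # speed of light, m/s
g = 9.81       # constant acceleration, m/s^2
T = 3.0e7      # coordinate-time span, s (about one year)

def v(t):
    """Velocity at coordinate time t for v0 = 0 (from the formula above)."""
    return g * t / math.sqrt(1.0 + (g * t / c) ** 2)

# Midpoint-rule integration of the proper-time integrand sqrt(1 - (v/c)^2)
n = 100000
dt = T / n
tau_numeric = sum(math.sqrt(1.0 - (v((i + 0.5) * dt) / c) ** 2) * dt
                  for i in range(n))

tau_closed = (c / g) * math.asinh(g * T / c)
print(abs(tau_numeric - tau_closed) / tau_closed < 1e-6)   # True
```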

Spacetime geometry of velocity time dilation

Time dilation in transverse motion

The green dots and red dots in the animation represent spaceships. The ships of the green fleet have no velocity relative to each other, so for the clocks onboard the individual ships the same amount of time elapses relative to each other, and they can set up a procedure to maintain a synchronized standard fleet time. The ships of the “red fleet” are moving with a velocity of 0.866 of the speed of light with respect to the green fleet.
The blue dots represent pulses of light. One cycle of light-pulses between two green ships takes two seconds of “green time”, one second for each leg.

As seen from the perspective of the reds, the transit time of the light pulses they exchange among each other is one second of “red time” for each leg. As seen from the perspective of the greens, the red ships’ cycle of exchanging light pulses travels a diagonal path that is two light-seconds long. (As seen from the green perspective the reds travel 1.73 (\sqrt{3}) light-seconds of distance for every two seconds of green time.)
One of the red ships emits a light pulse towards the greens every second of red time. These pulses are received by ships of the green fleet with two-second intervals as measured in green time. Not shown in the animation is that all aspects of physics are proportionally involved. The light pulses that are emitted by the reds at a particular frequency as measured in red time are received at a lower frequency as measured by the detectors of the green fleet that measure against green time, and vice versa.
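The factor of two between "red time" and "green time" is just the Lorentz factor at the stated fleet speed:

```python
import math

beta = 0.866   # fleet speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
print(gamma)   # ~2.0: one second of red time per two seconds of green time
```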

The animation cycles between the green perspective and the red perspective, to emphasize the symmetry. As there is no such thing as absolute motion in relativity (as is also the case for Newtonian mechanics), both the green and the red fleet are entitled to consider themselves motionless in their own frame of reference.
Again, it is vital to understand that the results of these interactions and calculations reflect the real state of the ships as it emerges from their situation of relative motion. It is not a mere quirk of the method of measurement or communication.

References

  1. ^ For sources on special relativistic time dilation, see Albert Einstein’s own popular exposition, published in English translation (1920) as “Relativity: The Special and General Theory”, especially at “8: On the Idea of Time in Physics”, and in following sections 9–12. See also the articles Special relativity, Lorentz transformation and Relativity of simultaneity.
  2. ^ Cassidy, David C.; Holton, Gerald James; Rutherford, Floyd James (2002), Understanding Physics, Springer-Verlag New York, Inc, ISBN 0-387-98756-8, http://books.google.com/?id=rpQo7f9F1xUC&pg=PA422 , Chapter 9 §9.6, p. 422
  3. ^ Cutner, Mark Leslie (2003), Astronomy, A Physical Perspective, Cambridge University Press, ISBN 0-521-82196-7, http://books.google.com/?id=2QVmiMW0O0MC&pg=PA128 , Chapter 7 §7.2, p. 128
  4. ^ Lerner, Lawrence S. (1996), Physics for Scientists and Engineers, Volume 2, Jones and Bertlett Publishers, Inc, ISBN 0-7637-0460-1, http://books.google.com/?id=B8K_ym9rS6UC&pg=PA1051 , Chapter 38 §38.4, p. 1051,1052
  5. ^ Ellis, George F. R.; Williams, Ruth M. (2000), Flat and Curved Space-times, Second Edition, Oxford University Press Inc, New York, ISBN 0-19-850657-0, http://books.google.com/?id=Hos31wty5WIC&pg=PA28 , Chapter 3 §1.3, p. 28-29
  6. ^ Adams, Steve (1997), Relativity: an introduction to space-time physics, CRC Press, p. 54, ISBN 0-748-40621-2, http://books.google.com/?id=1RV0AysEN4oC , Section 2.5, page 54
  7. ^ See T D Moyer (1981a), “Transformation from proper time on Earth to coordinate time in solar system barycentric space-time frame of reference”, Celestial Mechanics 23 (1981) pages 33-56, equations 2 and 3 at pages 35-6 combined here and divided throughout by c2.
  8. ^ A version of the same relationship can also be seen in Neil Ashby (2002), “Relativity and the Global Positioning System”, Physics Today (May 2002), at equation (2).
  9. ^ Such tidal effects can also be seen included in some of the relations shown in Neil Ashby (2002), cited above.
  10. ^ (This is equation (6) at page 36 of T D Moyer (1981a), cited above.)
  11. ^ G M Clemence & V Szebehely, “Annual variation of an atomic clock”, Astronomical Journal, Vol.72 (1967), p.1324-6.
  12. ^ T D Moyer (1981b), “Transformation from proper time on Earth to coordinate time in solar system barycentric space-time frame of reference” (Part 2), Celestial Mechanics 23 (1981) pages 57-68.
  13. ^ J B Thomas (1975), “Reformulation of the relativistic conversion between coordinate time and atomic time”, Astronomical Journal, vol.80, May 1975, p.405-411.
  14. ^ See Neil Ashby (2002), cited above; also in article Global Positioning System the section Special and general relativity and further sources cited there.
  15. ^ B Guinot (2000), “History of the Bureau International de l’Heure”, ASP Conference Proceedings vol.208 (2000), pp.175-184, at p.182.
  16. ^ Hasselkamp, D.; Mondry, E.; Scharmann, A. (1979). “Direct observation of the transversal Doppler-shift”. Zeitschrift für Physik A 289, 151–155. http://www.springerlink.com/content/kt5505r2p2r22411/. Retrieved 2009-10-18.
  17. ^ a b Chou, C. W.; Hume, D. B.; Rosenband, T.; Wineland, D. J. (2010). “Optical Clocks and Relativity”. Science 329: 1630. doi:10.1126/science.1192720.  edit
  18. ^ http://www.npl.co.uk/upload/pdf/metromnia_issue18.pdf
  19. ^ JV Stewart (2001), Intermediate electromagnetic theory, Singapore: World Scientific, p. 705, ISBN 9810244703, http://www.google.com/search?ie=UTF-8&hl=nl&rlz=1T4GZAZ_nlBE306BE306&q=relativity%20%22meson%20lifetime%22%202.22&tbo=u&tbs=bks:1&source=og&sa=N&tab=gp 
  20. ^ Iorio, Lorenzo (27-Jun-2004). “An analytical treatment of the Clock Paradox in the framework of the Special and General Theories of Relativity”. http://arxiv.org/abs/physics/0405038.  (Equations (3), (4), (6), (9) on pages 5-6)
  • Callender, Craig & Edney, Ralph (2001), Introducing Time, Icon, ISBN 1-84046-592-1 
  • Einstein, A. (1905) “Zur Elektrodynamik bewegter Körper”, Annalen der Physik, 17, 891. English translation: On the electrodynamics of moving bodies
  • Einstein, A. (1907) “Über eine Möglichkeit einer Prüfung des Relativitätsprinzips”, Annalen der Physik.
  • Hasselkamp, D., Mondry, E. and Scharmann, A. (1979) “Direct Observation of the Transversal Doppler-Shift”, Z. Physik A 289, 151–155
  • Ives, H. E. and Stilwell, G. R. (1938), “An experimental study of the rate of a moving clock”, J. Opt. Soc. Am, 28, 215–226
  • Ives, H. E. and Stilwell, G. R. (1941), “An experimental study of the rate of a moving clock. II”, J. Opt. Soc. Am, 31, 369–374
  • Joos, G. (1959) Lehrbuch der Theoretischen Physik, 11. Auflage, Leipzig; Zweites Buch, Sechstes Kapitel, § 4: Bewegte Bezugssysteme in der Akustik. Der Doppler-Effekt.
  • Larmor, J. (1897) “On a dynamical theory of the electric and luminiferous medium”, Phil. Trans. Roy. Soc. 190, 205–300 (third and last in a series of papers with the same name).
  • Poincaré, H. (1900) “La theorie de Lorentz et la Principe de Reaction”, Archives Neerlandaies, V, 253–78.
  • Reinhardt et al. Test of relativistic time dilation with fast optical atomic clocks at different velocities (Nature 2007)
  • Rossi, B and Hall, D. B. Phys. Rev., 59, 223 (1941).
  • NIST Two way time transfer for satellites
  • Voigt, W. “Ueber das Doppler’sche princip” Nachrichten von der Königlicher Gesellschaft der Wissenschaften zu Göttingen, 2, 41–51.

External links

Posted in Grace, Outside Time, Relativistic Muons, Time Dilation, Time Variable Measure | Tagged , , , , , | Leave a comment

Muon

The Moon‘s cosmic ray shadow, as seen in secondary muons generated by cosmic rays in the atmosphere, and detected 700 meters below ground, at the Soudan II detector.

Composition: Elementary particle
Particle statistics: Fermionic
Group: Lepton
Generation: Second
Interaction: Gravity, Electromagnetic,
Weak
Symbol(s): μ
Antiparticle: Antimuon (μ+)
Theorized:
Discovered: Carl D. Anderson (1936)
Mass: 105.65836668(38) MeV/c2
Mean lifetime: 2.197034(21)×10^−6 s[1]
Electric charge: −1 e
Color charge: None
Spin: 1⁄2

The muon (from the Greek letter mu (μ) used to represent it) is an elementary particle similar to the electron, with a negative electric charge and a spin of ½. Together with the electron, the tau, and the three neutrinos, it is classified as a lepton. It is an unstable subatomic particle with the second longest mean lifetime (2.2 µs), exceeded only by that of the free neutron (~15 minutes). Like all elementary particles, the muon has a corresponding antiparticle of opposite charge but equal mass and spin: the antimuon (also called a positive muon). Muons are denoted by μ− and antimuons by μ+. Muons were previously called mu mesons, but are not classified as mesons by modern particle physicists (see History).

Muons have a mass of 105.7 MeV/c2, which is about 200 times the mass of an electron. Since the muon’s interactions are very similar to those of the electron, a muon can be thought of as a much heavier version of the electron. Due to their greater mass, muons are not as sharply accelerated when they encounter electromagnetic fields, and do not emit as much bremsstrahlung radiation. Thus muons of a given energy penetrate matter far more deeply than electrons, since the deceleration of electrons and muons is primarily due to energy loss by this mechanism. So-called “secondary muons”, generated by cosmic rays hitting the atmosphere, can penetrate to the Earth’s surface and into deep mines.

As with the case of the other charged leptons, the muon has an associated muon neutrino. Muon neutrinos are denoted by νμ.


History

Muons were discovered by Carl D. Anderson and Seth Neddermeyer at Caltech in 1936, while studying cosmic radiation. Anderson had noticed particles that curved differently from electrons and other known particles when passed through a magnetic field. They were negatively charged but, for a given velocity, curved less sharply than electrons and more sharply than protons. It was assumed that the magnitude of their negative electric charge was equal to that of the electron, so to account for the difference in curvature it was supposed that their mass was greater than an electron's but smaller than a proton's. Anderson therefore initially called the new particle a mesotron, adopting the prefix meso- from the Greek word for "mid-". Shortly thereafter, additional particles of intermediate mass were discovered, and the more general term meson was adopted to refer to any such particle. To differentiate between different types of mesons, the mesotron was renamed the mu meson in 1947 (the Greek letter μ (mu) corresponds to m).
It was soon found that the mu meson significantly differed from other mesons: for example, its decay products included a neutrino and an antineutrino, rather than just one or the other, as was observed with other mesons. Other mesons were eventually understood to be hadrons—that is, particles made of quarks—and thus subject to the residual strong force. In the quark model, a meson is composed of exactly two quarks (a quark and antiquark) unlike baryons, which are composed of three quarks. Mu mesons, however, were found to be fundamental particles (leptons) like electrons, with no quark structure. Thus, mu mesons were not mesons at all (in the new sense and use of the term meson), and so the term mu meson was abandoned, and replaced with the modern term muon.

Another particle (the pion, with which the muon was initially confused) had been predicted by theorist Hideki Yukawa:[2]

“It seems natural to modify the theory of Heisenberg and Fermi in the following way. The transition of a heavy particle from neutron state to proton state is not always accompanied by the emission of light particles. The transition is sometimes taken up by another heavy particle.”

The existence of the muon was confirmed in 1937 by J. C. Street and E. C. Stevenson’s cloud chamber experiment.[3] The discovery of the muon seemed so incongruous and surprising at the time that Nobel laureate I. I. Rabi famously quipped, “Who ordered that?”

In a 1941 experiment on Mount Washington in New Hampshire, muons were used to observe the time dilation predicted by special relativity for the first time.[4]

Muon sources

Since the production of muons requires an available center-of-momentum frame energy of 105.7 MeV, neither ordinary radioactive decay events nor nuclear fusion events (such as those occurring in nuclear reactors and nuclear weapons) are energetic enough to produce them. Nuclear fission does produce single-nuclear-event energies in this range, but fission still yields no muons, because the production of a single muon would violate the conservation of lepton quantum numbers (see under “Muon decay” below).

On Earth, most naturally occurring muons are created by cosmic rays, which consist mostly of protons, many arriving from deep space at very high energy.[5]

About 10,000 muons reach every square meter of the Earth's surface per minute; these charged particles form as by-products of cosmic rays colliding with molecules in the upper atmosphere. Travelling at relativistic speeds, muons can penetrate tens of meters into rock and other matter before attenuating as a result of absorption or deflection by other atoms.

When a cosmic ray proton impacts atomic nuclei in the upper atmosphere, pions are created. These decay within a relatively short distance (meters) into muons (the pion's preferred decay product) and neutrinos. The muons from these high-energy cosmic rays generally continue in about the same direction as the original proton, at very high velocity. Without relativistic effects, the muon lifetime would allow a half-survival distance of only about 660 meters, far short of the trip to the surface. Seen from the Earth frame, the time dilation effect of special relativity lengthens the muon's half-life, allowing cosmic ray secondary muons to survive the flight to the Earth's surface. Seen from the muon's inertial frame, its lifetime is unaffected, but the length contraction effect of special relativity makes the distance through the atmosphere and Earth appear far shorter than in the Earth rest frame. Both are equally valid ways of explaining the fast muon's unusual survival over these distances.
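
The survival arithmetic above is easy to check numerically. The sketch below assumes an illustrative 3 GeV muon produced 15 km up (these figures are this example's own, not values from the text) and ignores energy loss in the atmosphere:

```python
import math

C = 2.998e8      # speed of light, m/s
TAU = 2.197e-6   # muon mean lifetime at rest, s (see "Muon decay" below)
M_MU = 0.1057    # muon mass, GeV/c^2

def survival_fraction(energy_gev, path_m):
    """Fraction of muons of the given total energy that survive a
    straight path of path_m meters, computed in the Earth frame."""
    gamma = energy_gev / M_MU                  # Lorentz factor, E = gamma m c^2
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    t_earth = path_m / (beta * C)              # flight time seen from Earth
    return math.exp(-t_earth / (gamma * TAU))  # dilated lifetime gamma * tau

# Without time dilation, exp(-15 km / 660 m) leaves essentially no muons:
naive = math.exp(-15000 / (C * TAU))
# With it, a 3 GeV muon (gamma ~ 28) has a sizeable chance of arriving:
relativistic = survival_fraction(3.0, 15000)
print(naive, relativistic)
```

The same answer follows in the muon frame by instead contracting the 15 km path by the factor gamma, as the paragraph above notes.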

Since muons, like neutrinos, penetrate ordinary matter unusually well, they are also detectable deep underground (700 meters down at the Soudan II detector) and underwater, where they form a major part of the natural background ionizing radiation. As noted above, this secondary muon radiation, like the cosmic rays that produce it, is directional.

The same nuclear reaction described above (i.e. hadron-hadron impacts to produce pion beams, which then quickly decay to muon beams over short distances) is used by particle physicists to produce muon beams, such as the beam used for the muon g − 2 experiment.[6]

Muon decay

The most common decay of the muon

Muons are unstable elementary particles, heavier than electrons and neutrinos but lighter than all other matter particles. They decay via the weak interaction. Because lepton numbers must be conserved, one of the product neutrinos of muon decay must be a muon-type neutrino and the other an electron-type antineutrino (antimuon decay produces the corresponding antiparticles, as detailed below). Because charge must be conserved, one of the products of muon decay is always an electron of the same charge as the muon (a positron if it is a positive muon). Thus all muons decay to at least an electron and two neutrinos. Sometimes, besides these necessary products, additional particles with zero net charge and spin (e.g. a pair of photons, or an electron–positron pair) are produced.

The dominant muon decay mode (sometimes called the Michel decay after Louis Michel) is the simplest possible: the muon decays to an electron, an electron-antineutrino, and a muon-neutrino. Antimuons, in mirror fashion, most often decay to the corresponding antiparticles: a positron, an electron-neutrino, and a muon-antineutrino. In formulaic terms, these two decays are:

\mu^-\to e^- + \bar\nu_e + \nu_\mu,~~~\mu^+\to e^+ + \nu_e + \bar\nu_\mu.
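
The conservation bookkeeping described above can be made mechanical. A minimal sketch (the particle names and the quantum-number table are this example's own conventions):

```python
# Each particle carries (electric charge, electron number L_e, muon number L_mu).
QN = {
    "mu-":      (-1,  0, +1),
    "mu+":      (+1,  0, -1),
    "e-":       (-1, +1,  0),
    "e+":       (+1, -1,  0),
    "nu_e":     ( 0, +1,  0),
    "nubar_e":  ( 0, -1,  0),
    "nu_mu":    ( 0,  0, +1),
    "nubar_mu": ( 0,  0, -1),
    "gamma":    ( 0,  0,  0),
}

def conserved(initial, final):
    """True if charge, L_e and L_mu all balance between the two sides."""
    totals = lambda side: [sum(QN[p][i] for p in side) for i in range(3)]
    return totals(initial) == totals(final)

print(conserved(["mu-"], ["e-", "nubar_e", "nu_mu"]))  # True: the Michel decay
print(conserved(["mu-"], ["e-", "gamma"]))             # False: violates lepton flavour
```

The second check is exactly the forbidden neutrino-less mode discussed further below.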

The mean lifetime of the (positive) muon is 2.197019 ± 0.000021 μs.[7] The equality of the muon and antimuon lifetimes has been established to better than one part in 10^4.

The tree-level muon decay width is

\Gamma=\frac{G_F^2 m_\mu^5}{192\pi^3}I\left(\frac{m_e^2}{m_\mu^2}\right),

where I(x) = 1 − 8x + 8x^3 − x^4 − 12x^2 ln x, and G_F is the Fermi coupling constant.
The decay distributions of the electron in muon decays have been parameterised using the so-called Michel parameters. The values of these four parameters are predicted unambiguously in the Standard Model of particle physics, thus muon decays represent a good test of the space-time structure of the weak interaction. No deviation from the Standard Model predictions has yet been found.
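
Plugging standard values into the tree-level width above reproduces the measured lifetime to within about half a percent (the remainder comes from radiative corrections not included at tree level). A quick numerical check, with the constants supplied here rather than taken from the text:

```python
import math

G_F  = 1.1663787e-5   # Fermi coupling constant, GeV^-2
M_MU = 0.1056584      # muon mass, GeV/c^2
M_E  = 0.000511       # electron mass, GeV/c^2
HBAR = 6.582119e-25   # reduced Planck constant, GeV*s

def I(x):
    # phase-space factor I(x) = 1 - 8x + 8x^3 - x^4 - 12 x^2 ln x
    return 1 - 8*x + 8*x**3 - x**4 - 12 * x**2 * math.log(x)

x = (M_E / M_MU) ** 2
width = G_F**2 * M_MU**5 / (192 * math.pi**3) * I(x)  # decay width, GeV
tau = HBAR / width                                    # lifetime, s
print(tau)   # close to the measured 2.197e-6 s
```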

Certain neutrino-less decay modes are kinematically allowed but forbidden in the Standard Model. Examples forbidden by lepton flavour conservation are

\mu^-\to e^- + \gamma and \mu^-\to e^- + e^+ + e^-.

Observation of such decay modes would constitute clear evidence for physics beyond the Standard Model (BSM). Current experimental upper limits for the branching fractions of such decay modes are in the range 10^−11 to 10^−12.

Muonic atoms

The muon was the first elementary particle discovered that does not appear in ordinary atoms. Negative muons can, however, form muonic atoms (also called mu-mesic atoms), by replacing an electron in ordinary atoms. Muonic hydrogen atoms are much smaller than typical hydrogen atoms because the much larger mass of the muon gives it a much smaller ground-state wavefunction than is observed for the electron. In multi-electron atoms, when only one of the electrons is replaced by a muon, the size of the atom continues to be determined by the other electrons, and the atomic size is nearly unchanged. However, in such cases the orbital of the muon continues to be smaller and far closer to the nucleus than the atomic orbitals of the electrons.

A positive muon, when stopped in ordinary matter, can also bind an electron and form an exotic atom known as muonium (Mu), in which the muon acts as the nucleus. The positive muon, in this context, can be considered a pseudo-isotope of hydrogen with one ninth of the mass of the proton. Because the reduced mass of muonium, and hence its Bohr radius, is very close to that of hydrogen, this short-lived "atom" behaves chemically, to a first approximation, like hydrogen, deuterium and tritium.
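
The size statements in the last two paragraphs follow from the Bohr-model scaling: the orbital radius is inversely proportional to the reduced mass of the orbiting particle. A small sketch with textbook constants:

```python
M_E, M_MU, M_P = 0.5110, 105.66, 938.27   # particle masses, MeV/c^2
A0 = 5.292e-11                            # Bohr radius for an infinitely heavy nucleus, m

def bohr_radius(m_orbiting, m_nucleus):
    mu = m_orbiting * m_nucleus / (m_orbiting + m_nucleus)  # reduced mass
    return A0 * M_E / mu    # radius scales inversely with the reduced mass

r_hydrogen = bohr_radius(M_E, M_P)    # electron orbiting a proton
r_muonic   = bohr_radius(M_MU, M_P)   # muon orbiting a proton
r_muonium  = bohr_radius(M_E, M_MU)   # electron orbiting an antimuon
print(r_hydrogen / r_muonic)   # ~186: muonic hydrogen is far smaller
print(r_muonium / r_hydrogen)  # ~1.004: muonium is hydrogen-like in size
```

The first ratio shows why the muon sits far inside the electron cloud of a muonic atom; the second shows why muonium chemistry mimics hydrogen's.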

Use in measurement of the proton charge radius

The recent culmination of a twelve-year experiment investigating the proton's charge radius involved the use of muonic hydrogen, a form of hydrogen composed of a muon orbiting a proton.[8] The Lamb shift in muonic hydrogen was measured by driving the muon from its 2s state up to an excited 2p state using a laser. The frequency of the photon required to induce this transition was found to be about 50 terahertz, which, according to present theories of quantum electrodynamics, yields a value of 0.84184 ± 0.00067 femtometres for the charge radius of the proton.[9]
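
For scale, the quoted 50 THz transition corresponds to a mid-infrared photon; a two-line check with standard constants:

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

nu = 50e12             # transition frequency quoted above, Hz
print(H * nu / EV)     # photon energy, ~0.21 eV
print(C / nu * 1e6)    # wavelength, ~6 micrometers (mid-infrared)
```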

Anomalous magnetic dipole moment

The anomalous magnetic dipole moment is the difference between the experimentally observed value of the magnetic dipole moment and the theoretical value predicted by the Dirac equation. Its measurement and prediction are very important in the precision tests of QED (quantum electrodynamics). The E821 experiment at Brookhaven National Laboratory (BNL) studied the precession of muons and antimuons in a constant external magnetic field as they circulated in a confining storage ring. E821 reported the following average value (from the July 2007 review by the Particle Data Group):

a = \frac{g-2}{2} = 0.00116592080(54)(33)

where the first uncertainty is statistical and the second systematic.
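
The parenthesized digits are uncertainties in the last decimal places; since the statistical and systematic errors are independent, they combine in quadrature. A short sketch of that bookkeeping:

```python
import math

a_mu = 0.00116592080   # E821 average value of (g-2)/2
stat = 0.00000000054   # statistical uncertainty, the "(54)"
syst = 0.00000000033   # systematic uncertainty, the "(33)"

total = math.sqrt(stat**2 + syst**2)   # independent errors add in quadrature
print(total)                           # ~6.3e-10 combined uncertainty
```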

The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon’s larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon’s anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon’s anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED (Phys.Lett. B649, 173 (2007)).


References

  1. ^ K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010), URL: http://pdg.lbl.gov
  2. ^ Hideki Yukawa, “On the Interaction of Elementary Particles I”, Proceedings of the Physico-Mathematical Society of Japan (3) 17, 48, pp. 139–148 (1935). (Read 17 November 1934)
  3. ^ “New Evidence for the Existence of a Particle Intermediate Between the Proton and Electron”, Phys. Rev. 52, 1003 (1937).
  4. ^ David H. Frisch and James A. Smith, “Measurement of the Relativistic Time Dilation Using Muons”, American Journal of Physics, 31, 342, 1963, cited by Michael Fowler, “Special Relativity: What Time is it?
  5. ^ S. Carroll (2004). Spacetime and Geometry: An Introduction to General Relativity. Addison Wesly. p. 204
  6. ^ Brookhaven National Laboratory (30 July 2002). “Physicists Announce Latest Muon g-2 Measurement”. Press release. http://www.bnl.gov/bnlweb/pubaf/pr/2002/bnlpr073002.htm. Retrieved 2009-11-14. 
  7. ^ [1]
  8. ^ TRIUMF Muonic Hydrogen collaboration. “A brief description of Muonic Hydrogen research”. Retrieved 2010-11-7
  9. ^ Pohl, Randolf et al., “The Size of the Proton”, Nature 466, 213–216 (8 July 2010)


***
Comment on Backreaction made to Steven

Hi Steven

Would we not be correct to say that unification with the small would be most apropos indeed with the large?

Pushing through that veil.

My interest with the QGP is well documented, as it presented itself “with an interesting location” with which to look at during the collision process.

Natural microscopic black hole creations? Are such conditions possible in the natural way of things? Although quickly dissipative, they leave their mark as Cerenkov effects.

As one looks toward the cosmos this reductionist process is how one might look at the cosmos at large, as to some of its “motivations displayed” in the cosmos?

What conditions allow such reductionism at play to consider the end result of geometrical propensity as a message across the vast distance of space, so as to “count these effects” here on earth?

Let’s say cosmos particle collisions and LHC are hand in hand “as to decay of the original particles in space” as they leave their imprint noticeably in the measures of SNO or Icecube, but help us discern further effects of that decay chain as to the constitutions of LHC energy progressions of particles in examination?

Emulating the conditions in LHC progression, adaptability seen then from such progressions, working to produce future understandings. Muon detections through the earth?

So “modeled experiments” in which “distillation of thought” are helped to be reduced too, in kind, lead to matter forming ideas with which to progress? Measure. Self evident.

You see the view has to be on two levels, maybe as a poet using words to describe, or as an artist, trying to explain the natural world. The natural consequence, of understanding of our humanity and its continuations expressed as abstract thought of our interactions with the world at large, unseen, and miscomprehended?

Do you think Superstringy has anything to do with what I just told you here?:)

Best,

    Hi Steven,

    Maybe the following will help, and then I will lead up to a modern version for consideration, so you understand the relation.

    Keep Gran Sasso in your mind as you look at what I am giving you.

    The underground laboratory, which opened in 1989, with its low background radiation is used for experiments in particle and nuclear physics, including the study of neutrinos, high-energy cosmic rays, dark matter, nuclear decay, as well as geology and biology. — wiki

    Neutrinos, get set, go!

    This summer, CERN gave the starting signal for the long-distance neutrino race to Italy. The CNGS facility (CERN Neutrinos to Gran Sasso), embedded in the laboratory’s accelerator complex, produced its first neutrino beam. For the first time, billions of neutrinos were sent through the Earth’s crust to the Gran Sasso laboratory, 732 kilometres away in Italy, a journey at almost the speed of light which they completed in less than 2.5 milliseconds. The OPERA experiment at the Gran Sasso laboratory was then commissioned, recording the first neutrino tracks.

    Because I am a layman, does not reduce the understanding that I can have, that a scientist may have.

    Now for the esoteric :)

    Secrets of the Pyramids In a boon for archaeology, particle physicists plan to probe ancient structures for tombs and other hidden chambers. The key to the technology is the muon, a cousin of the electron that rains harmlessly from the sky.

    What kind of result would they get from using the muon. What will it tell them?:)

    Best,

    Posted in AMS, Cosmic Rays, Gran Sasso, Muons, Relativistic Muons, Time Dilation | Tagged , , , , , | 3 Comments

    Conformal Cyclic Cosmology….

    Penrose’s Conformal Cyclic Cosmology, from one of his Pittsburgh lecture slides in June, 2009. Photo by Bryan W. Roberts

    Also see: BEFORE THE BIG BANG: AN OUTRAGEOUS NEW PERSPECTIVE AND ITS IMPLICATIONS FOR PARTICLE PHYSICS

    ……. (CCC) is a cosmological model in the framework of general relativity, advanced by the theoretical physicist Sir Roger Penrose.[1][2] In CCC, the universe undergoes a repeated cycle of death and rebirth, with the future timelike infinity of each previous universe being identified with the Big Bang singularity of the next.[3] Penrose outlines this theory in his book Cycles of Time: An Extraordinary New View of the Universe.


    Basic Construction

    Penrose’s basic construction[4] is to paste together a countable sequence of open FLRW spacetimes, each representing a big bang followed by an infinite future expansion. Penrose noticed that the past conformal boundary of one copy of FLRW spacetime can be “attached” to the future conformal boundary of another, after an appropriate conformal rescaling. In particular, each individual FLRW metric g_ab is multiplied by the square of a conformal factor Ω that approaches zero at timelike infinity, effectively “squashing down” the future conformal boundary to a conformally regular hypersurface (which is spacelike if there is a positive cosmological constant, as we currently believe). The result is a new solution to Einstein’s equations, which Penrose takes to represent the entire Universe, and which is composed of a sequence of sectors that Penrose calls “aeons.”
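
    In symbols, the pasting step can be sketched as follows (Ω is the conformal factor; hatted quantities belong to the rescaled metric):

```latex
\hat{g}_{ab} = \Omega^{2}\, g_{ab}, \qquad \Omega \to 0 \ \text{at future timelike infinity}
```

    Null directions are unchanged by such a rescaling, since \hat{g}_{ab} k^{a} k^{b} = \Omega^{2} g_{ab} k^{a} k^{b} = 0 whenever g_{ab} k^{a} k^{b} = 0; this is the sense in which light cone structure is preserved across the join.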

    Physical Implications

    The significant feature of this construction for particle physics is that, since bosons obey the laws of conformally invariant quantum theory, they will behave in the same way in the rescaled aeons as in their original FLRW counterparts. (Classically, this corresponds to the fact that light cone structure is preserved under conformal rescalings.) For such particles, the boundary between aeons is not a boundary at all, but just a spacelike surface that can be passed across like any other. Fermions, on the other hand, remain confined to a given aeon. This provides a convenient solution to the black hole information paradox; according to Penrose, fermions must be irreversibly converted into radiation during black hole evaporation, to preserve the smoothness of the boundary between aeons.

    The curvature properties of Penrose’s cosmology are also highly desirable. First, the boundary between aeons satisfies the Weyl curvature hypothesis, thus providing a certain kind of low-entropy past as required by statistical mechanics and by observation. Second, Penrose has calculated that a certain amount of gravitational radiation should be preserved across the boundary between aeons. Penrose suggests this extra gravitational radiation may be enough to explain the observed cosmic acceleration without appeal to a dark energy matter field.

    Empirical Tests

    In 2010, Penrose and V. G. Gurzadyan published a preprint of a paper claiming that observations of the cosmic microwave background made by the Wilkinson Microwave Anisotropy Probe and the BOOMERanG experiment showed concentric anomalies which were consistent with the CCC hypothesis, with a low probability of the null hypothesis that the observations in question were caused by chance.[5] However, the statistical significance of the claimed detection has since been questioned. Three groups have independently attempted to reproduce these results, but found that the detection of the concentric anomalies was not statistically significant.[6][7][8]


    References

    1. ^ Palmer, Jason (2010-11-27). “Cosmos may show echoes of events before Big Bang”. BBC News. http://www.bbc.co.uk/news/science-environment-11837869. Retrieved 2010-11-27. 
    2. ^ Penrose, Roger (June 2006). “Before the big bang: An outrageous new perspective and its implications for particle physics”. Edinburgh, Scotland: Proceedings of EPAC 2006. p. 2759-2767. http://accelconf.web.cern.ch/accelconf/e06/PAPERS/THESPA01.PDF. Retrieved 2010-11-27. 
    3. ^ Cartlidge, Edwin (2010-11-19). “Penrose claims to have glimpsed universe before Big Bang”. physicsworld.com. http://physicsworld.com/cws/article/news/44388. Retrieved 2010-11-27. 
    4. ^ Roger Penrose (2006). “Before the Big Bang: An Outrageous New Perspective and its Implications for Particle Physics”. Proceedings of the EPAC 2006, Edinburgh, Scotland: 2759-2762. http://accelconf.web.cern.ch/accelconf/e06/PAPERS/THESPA01.PDF. 
    5. ^ Gurzadyan VG; Penrose R (2010-11-16). “Concentric circles in WMAP data may provide evidence of violent pre-Big-Bang activity”. arXiv:1011.3706 [astro-ph.CO]. 
    6. ^ Wehus IK; Eriksen HK (2010-12-07). “A search for concentric circles in the 7-year WMAP temperature sky maps”. arXiv:1012.1268 [astro-ph.CO]. 
    7. ^ Moss A; Scott D; Zibin JP (2010-12-07). “No evidence for anomalously low variance circles on the sky”. arXiv:1012.1305 [astro-ph.CO]. 
    8. ^ Hajian A (2010-12-08). “Are There Echoes From The Pre-Big Bang Universe? A Search for Low Variance Circles in the CMB Sky”. arXiv:1012.1656 [astro-ph.CO].

    See Also: Penrose’s CCC cosmology is either inflation or gibberish

    Posted in Cosmology, Quanglement, Sir Roger Penrose | Tagged , , | Leave a comment

    Big Bounce


    The Big Bounce is a theorized scientific model related to the formation of the known Universe. It derives from the cyclic model or oscillatory universe interpretation of the Big Bang where the first cosmological event was the result of the collapse of a previous universe.[1]


    Expansion and contraction

    According to some oscillatory universe theorists, the Big Bang was simply the beginning of a period of expansion that followed a period of contraction. In this view, one could talk of a Big Crunch followed by a Big Bang, or more simply, a Big Bounce. This suggests that we might be living in the first of all universes, but are equally likely to be living in the 2 billionth universe (or any of an infinite other sequential universes).
    The main idea behind the quantum theory of a Big Bounce is that, as density approaches infinity, the behavior of the quantum foam changes. All the so-called fundamental physical constants, including the speed of light in a vacuum, were not so constant during the Big Crunch, especially in the interval stretching 10^−43 seconds before and after the point of inflection. (One unit of Planck time is about 10^−43 seconds.)
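
    The Planck time quoted here is the natural timescale built from ħ, G and c; a one-line check with CODATA values:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
C    = 2.99792458e8      # speed of light, m/s

t_planck = math.sqrt(HBAR * G / C**5)
print(t_planck)   # ~5.4e-44 s, the 10^-43 s scale quoted above
```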

    If the fundamental physical constants were determined in a quantum-mechanical manner during the Big Crunch, then their apparently inexplicable values in this universe would not be so surprising, it being understood here that a universe is that which exists between a Big Bang and its Big Crunch.

    Recent developments in the theory

    Martin Bojowald, an assistant professor of physics at Pennsylvania State University, published a study in July 2007 detailing work somewhat related to loop quantum gravity that claimed to mathematically solve the time before the Big Bang, which would give new weight to the oscillatory universe and Big Bounce theories.[2]

    One of the main problems with the Big Bang theory is that at the moment of the Big Bang, there is a singularity of zero volume and infinite energy. This is normally interpreted as the end of physics as we know it, in this case of the theory of general relativity. This is why one expects quantum effects to become important and avoid the singularity.

    However, research in loop quantum cosmology purported to show that a previously existing universe collapsed, not to the point of singularity, but to a point before that where the quantum effects of gravity become so strongly repulsive that the universe rebounds back out, forming a new branch. Throughout this collapse and bounce, the evolution is unitary.

    Bojowald also claims that some properties of the universe that collapsed to form ours can also be determined. Some properties of the prior universe are not determinable however due to some kind of uncertainty principle.

    This work is still in its early stages and very speculative. Some extensions by further scientists have been published in Physical Review Letters.[3]

    Peter Lynds has recently put forward a new cosmology model in which time is cyclic. In his theory our Universe will eventually stop expanding and then contract. Before becoming a singularity, as one would expect from Hawking’s black hole theory, the Universe would bounce. Lynds claims that a singularity would violate the second law of thermodynamics and this stops the Universe from being bounded by singularities. The Big Crunch would be avoided with a new Big Bang. Lynds suggests the exact history of the Universe would be repeated in each cycle. Some critics argue that while the Universe may be cyclic, the histories would all be variants.


    References

    1. ^ “Penn State Researchers Look Beyond The Birth Of The Universe”. Science Daily. May 17, 2006. http://www.sciencedaily.com/releases/2006/05/060515232747.htm.  Referring to Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parmpreet (2006). “Quantum Nature of the Big Bang”. Physical Review Letters 96 (14): 141301. doi:10.1103/PhysRevLett.96.141301. PMID 16712061. http://link.aps.org/abstract/PRL/v96/e141301. 
    2. ^ Bojowald, Martin (2007). “What happened before the Big Bang?”. Nature Physics 3 (8): 523–525. doi:10.1038/nphys654. 
    3. ^ Ashtekar, Abhay; Corichi, Alejandro; Singh, Parampreet (2008). “Robustness of key features of loop quantum cosmology”. Physical Review D 77: 024046. doi:10.1103/PhysRevD.77.024046. 

    Further reading

    • Magueijo, João (2003). Faster than the Speed of Light: the Story of a Scientific Speculation. Cambridge, MA: Perseus Publishing. ISBN 0738205257. 
    • Bojowald, Martin. “Follow the Bouncing Universe”. Scientific American (October 2008): 44–51. 


    Posted in Cosmology | Tagged | Leave a comment

    Cyclic model


    A cyclic model is any of several cosmological models in which the universe follows infinite, self-sustaining cycles. For example, the oscillating universe theory briefly considered by Albert Einstein in 1930 theorized a universe following an eternal series of oscillations, each beginning with a big bang and ending with a big crunch; in the interim, the universe would expand for a period of time before the gravitational attraction of matter causes it to collapse back in and undergo a bounce.


    Overview

    In the 1930s, theoretical physicists, most notably Albert Einstein, considered the possibility of a cyclic model for the universe as an (everlasting) alternative to the model of an expanding universe. However, work by Richard C. Tolman in 1934 showed that these early attempts failed because of the entropy problem: in statistical mechanics, entropy only increases, because of the second law of thermodynamics.[1] This implies that successive cycles grow longer and larger. Extrapolating back in time, cycles before the present one become shorter and smaller, culminating again in a Big Bang and thus not replacing it. This puzzling situation remained for many decades until the early 21st century, when the recently discovered dark energy component provided new hope for a consistent cyclic cosmology.[2]

    One new cyclic model is a brane cosmology model of the creation of the universe, derived from the earlier ekpyrotic model. It was proposed in 2001 by Paul Steinhardt of Princeton University and Neil Turok of Cambridge University. The theory describes a universe exploding into existence not just once, but repeatedly over time.[3][4] The theory could potentially explain why a mysterious repulsive form of energy, known as the “cosmological constant”, which is accelerating the expansion of the universe, is several orders of magnitude smaller than predicted by the standard Big Bang model.

    A different cyclic model relying on the notion of phantom energy was proposed in 2007 by Lauris Baum and Paul Frampton of the University of North Carolina at Chapel Hill.[5]

    The Steinhardt–Turok model

    In this cyclic model, two parallel orbifold planes or M-branes collide periodically in a higher dimensional space.[6] The visible four-dimensional universe lies on one of these branes. The collisions correspond to a reversal from contraction to expansion, or a big crunch followed immediately by a big bang. The matter and radiation we see today were generated during the most recent collision in a pattern dictated by quantum fluctuations created before the branes. Eventually, the universe reached the state we observe today, before beginning to contract again many billions of years in the future. Dark energy corresponds to a force between the branes, and serves the crucial role of solving the monopole, horizon, and flatness problems. Moreover the cycles can continue indefinitely into the past and the future, and the solution is an attractor, so it can provide a complete history of the universe.

    As Richard C. Tolman showed, the earlier cyclic model failed because the universe would undergo inevitable thermodynamic heat death.[1] The newer cyclic model evades this by having a net expansion each cycle, preventing entropy from building up. However, there are major problems with the model. Foremost among them is that colliding branes are not understood by string theorists, and nobody knows whether the scale-invariant spectrum will be destroyed by the big crunch. Moreover, as with cosmic inflation, while the general character of the forces (in the ekpyrotic scenario, a force between branes) required to create the vacuum fluctuations is known, there is no candidate from particle physics.[7]

    The Baum–Frampton model

    This more recent cyclic model of 2007 makes a different technical assumption concerning the equation of state of the dark energy, which relates pressure and density through a parameter w.[5][8] It assumes w < -1 (a condition called phantom energy) throughout a cycle, including at present. (By contrast, Steinhardt–Turok assume w is never less than -1.) In the Baum–Frampton model, a septillionth (or less) of a second before the would-be Big Rip, a turnaround occurs and only one causal patch is retained as our universe. The generic patch contains no quarks, leptons or force carriers, only dark energy, and its entropy thereby vanishes. The adiabatic contraction of this much smaller universe takes place with constant vanishing entropy and with no matter, including no black holes, which disintegrated before turnaround.

    The idea that the universe "comes back empty" is a central new idea of this cyclic model, and avoids many difficulties confronting matter in a contracting phase, such as excessive structure formation, the proliferation and expansion of black holes, and phase transitions such as those of QCD and electroweak symmetry restoration. Any of these would tend strongly to produce an unwanted premature bounce, simply to avoid violating the second law of thermodynamics. The surprising w < -1 condition may be logically inevitable in a truly infinitely cyclic cosmology because of the entropy problem. Nevertheless, many technical backup calculations are necessary to confirm the consistency of the approach. Although the model borrows ideas from string theory, it is not necessarily committed to strings or to higher dimensions, yet such speculative devices may provide the most expeditious methods to investigate its internal consistency. The value of w in the Baum–Frampton model can be made arbitrarily close to, but must be less than, -1.
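The role of w can be seen directly from the FRW fluid equation, under which a component with constant equation of state w dilutes as a^(-3(1+w)). A minimal sketch (parameter values chosen purely for illustration):

```python
# Dark energy density as a function of scale factor a for constant w:
# rho(a) = rho_0 * a**(-3 * (1 + w))   (from the FRW fluid equation)

def dark_energy_density(a, w, rho_0=1.0):
    """Energy density relative to today (a = 1) for a constant equation of state w."""
    return rho_0 * a ** (-3.0 * (1.0 + w))

# Cosmological constant (w = -1): density stays constant as the universe expands.
print(dark_energy_density(10.0, -1.0))   # 1.0

# Phantom energy (w < -1): density GROWS with expansion, driving toward a Big Rip.
print(dark_energy_density(10.0, -1.1))   # ~2.0 (i.e. 10**0.3)
```

Any w below -1, however slightly, makes the density an increasing function of a, which is why the Baum–Frampton value can sit arbitrarily close to -1 as long as it stays below it.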

    Notes

    1. ^ a b R.C. Tolman (1987) [1934]. Relativity, Thermodynamics, and Cosmology. New York: Dover. LCCN 34032023. ISBN 0486653838. 
    2. ^ P.H. Frampton (2006). "On Cyclic Universes". arXiv:astro-ph/0612243 [astro-ph]. 
    3. ^ P.J. Steinhardt, N. Turok (2001). "Cosmic Evolution in a Cyclic Universe". arXiv:hep-th/0111098 [hep-th]. 
    4. ^ P.J. Steinhardt, N. Turok (2001). "A Cyclic Model of the Universe". arXiv:hep-th/0111030 [hep-th]. 
    5. ^ a b L. Baum, P.H. Frampton (2007). "Entropy of Contracting Universe in Cyclic Cosmology". arXiv:hep-th/0703162 [hep-th]. 
    6. ^ P.J. Steinhardt, N. Turok (2004). "The Cyclic Model Simplified". arXiv:astro-ph/0404480 [astro-ph]. 
    7. ^ P. Woit (2006). Not Even Wrong. London: Random House. ISBN 9780099488644. 
    8. ^ L. Baum and P.H. Frampton (2007). “Turnaround in Cyclic Cosmology”. Physical Review Letters 98 (7): 071301. doi:10.1103/PhysRevLett.98.071301. arXiv:hep-th/0610213. PMID 17359014. 

    Further reading

    • P.J. Steinhardt, N. Turok (2007). Endless Universe. New York: Doubleday. ISBN 9780385509640. 
    • R.C. Tolman (1987) [1934]. Relativity, Thermodynamics, and Cosmology. New York: Dover. LCCN 34032023. ISBN 0486653838. 
    • L. Baum and P.H. Frampton (2007). “Turnaround in Cyclic Cosmology”. Physical Review Letters 98 (7): 071301. doi:10.1103/PhysRevLett.98.071301. arXiv:hep-th/0610213. PMID 17359014. 
    • R. H. Dicke, P. J. E. Peebles, P. G. Roll and D. T. Wilkinson, “Cosmic Black-Body Radiation,” Astrophysical Journal 142 (1965), 414. This paper discussed the oscillatory universe as one of the main cosmological possibilities of the time.
    • S. W. Hawking and G. F. R. Ellis, The large-scale structure of space-time (Cambridge, 1973).


    Physical Cosmology


    Physical cosmology, as a branch of astronomy, is the study of the largest-scale structures and dynamics of the universe and is concerned with fundamental questions about its formation and evolution.[1] For most of human history, it was a branch of metaphysics and religion. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed us to understand those laws.

    Physical cosmology, as it is now understood, began with the twentieth-century development of Albert Einstein's general theory of relativity and better astronomical observations of extremely distant objects. These advances made it possible to speculate about the origin of the universe, and allowed scientists to establish the Big Bang theory as the leading cosmological model. Some researchers still advocate a handful of alternative cosmologies; however, cosmologists generally agree that the Big Bang theory best explains observations.

    Cosmology draws heavily on the work of many disparate areas of research in physics. Areas relevant to cosmology include particle physics experiments and theory, including string theory, astrophysics, general relativity, and plasma physics. Thus, cosmology unites the physics of the largest structures in the universe with the physics of the smallest structures in the universe.

    History of physical cosmology

    Modern cosmology developed along tandem observational and theoretical tracks. In 1915, Albert Einstein developed his theory of general relativity. At the time, physicists believed in a perfectly static universe without beginning or end. Einstein added a cosmological constant to his theory to try to force it to allow for a static universe with matter in it. The so-called Einstein universe is, however, unstable. It is bound to eventually start expanding or contracting. The cosmological solutions of general relativity were found by Alexander Friedmann, whose equations describe the Friedmann-Lemaître-Robertson-Walker universe, which may expand or contract.

    In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse square law. Due to the difficulty of using these methods, Slipher and Wirtz did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications.

    In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann-Lemaître-Robertson-Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds directly proportional to their distance. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, because the different types of Cepheid variables were not yet known.
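The two distance tools and Hubble's law can be made concrete with a short sketch. The H0 value below is an assumed modern one (Hubble's own estimate was roughly ten times larger), and the flux figure is invented for illustration:

```python
import math

H0 = 70.0  # Hubble constant in km/s/Mpc (assumed modern value, for illustration)

def recession_velocity(d_mpc):
    """Hubble's law: v = H0 * d, in km/s for d in megaparsecs."""
    return H0 * d_mpc

def luminosity_distance(luminosity_w, flux_w_m2):
    """Inverse-square law: F = L / (4 pi d^2)  =>  d = sqrt(L / (4 pi F)), in meters."""
    return math.sqrt(luminosity_w / (4.0 * math.pi * flux_w_m2))

print(recession_velocity(100.0))  # 7000.0 km/s for a galaxy 100 Mpc away

# A star with the Sun's luminosity (~3.8e26 W) observed at an assumed flux:
print(luminosity_distance(3.8e26, 1e-9))
```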
    Given the cosmological principle, Hubble’s law suggested that the universe was expanding. There were two primary explanations put forth for the expansion of the universe. One was Lemaître’s Big Bang theory, advocated and developed by George Gamow. The other possibility was Fred Hoyle’s steady state model in which new matter would be created as the galaxies moved away from each other. In this model, the universe is roughly the same at any point in time.

    For a number of years the support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Stephen Hawking and Roger Penrose in the 1960s.

    History of the Universe

    The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the ΛCDM model.

    Equations of motion

    The equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion the radiation and matter in the universe are cooled down and become diluted. At first, the expansion is slowed down by gravitation due to the radiation and matter content of the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this has already happened, billions of years ago.
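The handover from deceleration to acceleration follows directly from the Friedmann equations for a flat universe of matter plus a cosmological constant: the acceleration changes sign where the matter density falls to twice the constant's density. The density fractions below are assumed, WMAP-era round numbers:

```python
# Flat LCDM sketch: H(a)^2 = H0^2 * (Om / a**3 + OL).
# The acceleration equation gives a_ddot ∝ -(rho_m - 2*rho_L), so expansion
# starts accelerating once Om / a**3 = 2 * OL, i.e. at
# a_acc = (Om / (2 * OL)) ** (1/3).

Om, OL = 0.27, 0.73  # assumed matter and cosmological-constant fractions

a_acc = (Om / (2.0 * OL)) ** (1.0 / 3.0)
print(a_acc)  # ~0.57: acceleration began when the universe was ~57% of its
              # present linear size, i.e. billions of years ago.
```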

    Particle physics in cosmology

    Particle physics is important to the behavior of the early universe, since the early universe was so hot that the average energy density was very high. Because of this, scattering processes and decay of unstable particles are important in cosmology.

    As a rule of thumb, a scattering or decay process is cosmologically important in a certain cosmological epoch if the time scale describing that process is smaller than or comparable to the time scale of the expansion of the universe, which is 1 / H, with H being the Hubble constant at that time. This is roughly equal to the age of the universe at that time.
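For example, with an assumed present-day H0 of 70 km/s/Mpc, the expansion time scale 1/H works out to roughly 14 billion years, comparable to the age of the universe as the text states:

```python
MPC_IN_KM = 3.0857e19  # kilometers per megaparsec

H0 = 70.0 / MPC_IN_KM               # Hubble constant in 1/s (assumed 70 km/s/Mpc)
hubble_time_s = 1.0 / H0            # the expansion time scale 1/H, in seconds
hubble_time_gyr = hubble_time_s / 3.156e16  # ~3.156e16 seconds per Gyr

print(hubble_time_gyr)  # ~14 Gyr, of the same order as the age of the universe
```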

    Timeline of the Big Bang

    Observations suggest that the universe began around 13.7 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses. Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.
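For a flat universe containing only matter and a cosmological constant, the age has a closed-form expression; with assumed round-number parameters it reproduces the quoted figure of roughly 13.7 billion years:

```python
import math

# Analytic age of a flat LCDM universe (matter + cosmological constant only):
#   t0 = 2 / (3 * H0 * sqrt(OL)) * asinh(sqrt(OL / Om))
# Parameters below are assumed, roughly WMAP-era values.
H0 = 70.0 / 3.0857e19   # 70 km/s/Mpc converted to 1/s
Om, OL = 0.27, 0.73

t0_s = 2.0 / (3.0 * H0 * math.sqrt(OL)) * math.asinh(math.sqrt(OL / Om))
t0_gyr = t0_s / 3.156e16

print(t0_gyr)  # ~13.9 Gyr, consistent with the quoted ~13.7 billion years
```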

    Areas of study

    Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.

    The very early universe

    While the early, hot universe appears to be well explained by the Big Bang from roughly 10−33 seconds onwards, there are several problems. One is that there is no compelling reason, using current particle physics, to expect the universe to be flat, homogeneous and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple; however, it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation with quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.
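The exponential dilution of monopoles can be illustrated with a quick estimate. The 60 e-folds used below is a commonly assumed minimum for inflation, not a measured value:

```python
import math

# During inflation the scale factor grows by e**N over N "e-folds", so the
# number density of any pre-existing relic, such as magnetic monopoles, falls
# by the volume factor e**(3*N).
N = 60  # assumed (commonly quoted minimum) number of e-folds

dilution = math.exp(3 * N)
print(f"{dilution:.1e}")  # ~1.5e78: relic monopoles become unobservably rare
```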

    Another major problem in cosmology is what caused the universe to contain more particles than antiparticles. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. This problem is called the baryon asymmetry, and the theory to describe the resolution is called baryogenesis. The theory of baryogenesis was worked out by Andrei Sakharov in 1967, and requires a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. Particle accelerators, however, measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists are trying to find additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.

    Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.

    Big Bang nucleosynthesis

    Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4 and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and to test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino.
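A standard back-of-envelope check of the helium-4 abundance, assuming (as textbook treatments do) that essentially every available neutron ends up bound into helium-4:

```python
# If all neutrons are locked into helium-4, then for a frozen-out
# neutron-to-proton ratio n/p the helium mass fraction is
#   Y = 2 * (n/p) / (1 + n/p).
def helium_mass_fraction(n_over_p):
    return 2.0 * n_over_p / (1.0 + n_over_p)

# n/p ~ 1/7 at the time of nucleosynthesis (after freeze-out and neutron decay)
print(helium_mass_fraction(1.0 / 7.0))  # 0.25: the observed ~25% helium by mass
```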

    Cosmic microwave background

    The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10⁵. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The recent measurements made by WMAP, for example, have placed limits on the neutrino masses.
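The peak wavelength of that 2.7 K blackbody follows directly from Wien's displacement law, which puts the spectrum squarely in the microwave band:

```python
# Wien's displacement law: lambda_max = b / T, with b = 2.898e-3 m*K.
b = 2.898e-3   # Wien displacement constant, m*K
T_cmb = 2.725  # CMB temperature, kelvins

lambda_max_mm = (b / T_cmb) * 1000.0
print(lambda_max_mm)  # ~1.06 mm: the CMB spectrum peaks at millimeter wavelengths
```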

    Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel’dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.

    Formation and evolution of large-scale structure

    Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.

    Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.

    Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:

    • The Lyman alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas.
    • The 21 centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology
    • Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter.

    These will help cosmologists settle the question of when and how structure formed in the universe.

    Dark matter

    Evidence from Big Bang nucleosynthesis, the cosmic microwave background and structure formation suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology.

    Dark energy

    If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.

    Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have used this as evidence for the anthropic principle, which suggests that the cosmological constant is so small because life (and thus physicists, to make observations) cannot exist in a universe with a large cosmological constant, but many people find this an unsatisfying explanation. Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy’s equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.

    A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse.

    Other areas of inquiry

    Cosmologists also study:

    References

    1. ^ For an overview, see George FR Ellis (2006). “Issues in the Philosophy of Cosmology”. In Jeremy Butterfield & John Earman. Philosophy of Physics (Handbook of the Philosophy of Science) 3 volume set. North Holland. pp. 1183ff. ISBN 0444515607. http://arxiv.org/abs/astro-ph/0602280v2. 

    Further reading

    Textbooks

    • Cheng, Ta-Pei (2005). Relativity, Gravitation and Cosmology: a Basic Introduction. Oxford and New York: Oxford University Press. ISBN 0-19-852957-0.  Introductory cosmology and general relativity without the full tensor apparatus, deferred until the last part of the book.
    • Dodelson, Scott (2003). Modern Cosmology. Academic Press. ISBN 0-12-219141-2.  An introductory text, released slightly before the WMAP results.
    • Grøn, Øyvind; Hervik, Sigbjørn (2007). Einstein’s General Theory of Relativity with Modern Applications in Cosmology. New York: Springer. ISBN 978-0-387-69199-2. 
    • Harrison, Edward (2000). Cosmology: the science of the universe. Cambridge University Press. ISBN 0-521-66148-X.  For undergraduates; mathematically gentle with a strong historical focus.
    • Kutner, Marc (2003). Astronomy: A Physical Perspective. Cambridge University Press. ISBN 0-521-52927-1.  An introductory astronomy text.
    • Kolb, Edward; Michael Turner (1988). The Early Universe. Addison-Wesley. ISBN 0-201-11604-9.  The classic reference for researchers.
    • Liddle, Andrew (2003). An Introduction to Modern Cosmology. John Wiley. ISBN 0-470-84835-9.  Cosmology without general relativity.
    • Liddle, Andrew; David Lyth (2000). Cosmological Inflation and Large-Scale Structure. Cambridge. ISBN 0-521-57598-2.  An introduction to cosmology with a thorough discussion of inflation.
    • Mukhanov, Viatcheslav (2005). Physical Foundations of Cosmology. Cambridge University Press. ISBN 0-521-56398-4. 
    • Padmanabhan, T. (1993). Structure formation in the universe. Cambridge University Press. ISBN 0-521-42486-0.  Discusses the formation of large-scale structures in detail.
    • Peacock, John (1998). Cosmological Physics. Cambridge University Press. ISBN 0-521-42270-1.  An introduction including more on general relativity and quantum field theory than most.
    • Peebles, P. J. E. (1993). Principles of Physical Cosmology. Princeton University Press. ISBN 0-691-01933-9.  Strong historical focus.
    • Peebles, P. J. E. (1980). The Large-Scale Structure of the Universe. Princeton University Press. ISBN 0-691-08240-5.  The classic work on large scale structure and correlation functions.
    • Rees, Martin (2002). New Perspectives in Astrophysical Cosmology. Cambridge University Press. ISBN 0-521-64544-1. 
    • Weinberg, Steven (1971). Gravitation and Cosmology. John Wiley. ISBN 0-471-92567-5.  A standard reference for the mathematical formalism.
    • Weinberg, Steven (2008). Cosmology. Oxford University Press. ISBN 0198526822. 
    • Benjamin Gal-Or, “Cosmology, Physics and Philosophy”, Springer Verlag, 1981, 1983, 1987, ISBN 0-387-90581-2, ISBN 0387965262.


    Thinking Outside the Box, People Like Veneziano, Turok and Penrose

    Credit: V.G.Gurzadyan and R.Penrose

    Dark circles indicate regions in space where the cosmic microwave background has temperature variations that are lower than average. The features hint that the universe was born long before the Big Bang 13.7 billion years ago and had undergone myriad cycles of birth and death before that time. See: Cosmic rebirth

    ***

    Concentric circles in WMAP data may provide evidence of violent pre-Big-Bang activity

    Abstract: Conformal cyclic cosmology (CCC) posits the existence of an aeon preceding our Big Bang ‘B’, whose conformal infinity ‘I’ is identified, conformally, with ‘B’, now regarded as a spacelike 3-surface. Black-hole encounters, within bound galactic clusters in that previous aeon, would have the observable effect, in our CMB sky, of families of concentric circles over which the temperature variance is anomalously low, the centre of each such family representing the point of ‘I’ at which the cluster converges. These centres appear as fairly randomly distributed fixed points in our CMB sky. The analysis of the Wilkinson Microwave Anisotropy Probe’s (WMAP) cosmic microwave background 7-year maps does indeed reveal such concentric circles, of up to 6σ significance. This is confirmed when the same analysis is applied to BOOMERanG98 data, eliminating the possibility of an instrumental cause for the effects. These observational predictions of CCC would not be easily explained within standard inflationary cosmology.

    Update: Penrose’s Cyclic Cosmology, by Sean Carroll

    In response to…

    More on the low variance circles in CMB sky

    Abstract: Two groups [3,4] have confirmed the results of our paper concerning the actual existence of low variance circles in the cosmic microwave background (CMB) sky. They also point out that the effect does not contradict the LCDM model – a matter which is not in dispute. We point out two discrepancies between their treatment and ours, however, one technical, the other having to do with the very understanding of what constitutes a Gaussian random signal. Both groups simulate maps using the CMB power spectrum for LCDM, while we simulate a pure Gaussian sky plus the WMAP’s noise, which points out the contradiction with a common statement [3] that “CMB signal is random noise of Gaussian nature”. For as it was shown in [5], the random component is a minor one in the CMB signal, namely, about 0.2. Accordingly, the circles we saw are a real structure of the CMB sky and they are not of a random Gaussian nature. Although the structures studied certainly cannot contradict the power spectrum, which is well fitted by LCDM model, we particularly emphasize that the low variance circles occur in concentric families, and this key fact cannot be explained as a purely random effect. It is, however, a clear prediction of conformal cyclic cosmology.


    Holometer

    Holometer Revised

    This plot shows the sensitivity of various experiments to fluctuations in space and time. The horizontal axis is the log of apparatus size (or duration times the speed of light), in meters; the vertical axis is the log of the rms fluctuation amplitude in the same units. The lower left corner represents the Planck length or time. In these units, the size of the observable universe is about 26. Various physical systems and experiments are plotted. The “holographic noise” line represents the rms transverse holographic fluctuation amplitude on a given scale. The most sensitive experiments are Michelson interferometers.
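The rough size of the predicted holographic noise can be estimated with the scaling δx ~ √(l_p · L), where L is the apparatus size; the unit prefactor here is an assumption, since published treatments differ by order-one factors:

```python
import math

L_P = 1.6e-35  # Planck length, meters

def holographic_noise_rms(apparatus_m):
    """Order-of-magnitude transverse holographic jitter, delta_x ~ sqrt(l_p * L).
    The prefactor of 1 is an assumption; exact treatments differ by O(1) factors."""
    return math.sqrt(L_P * apparatus_m)

# For a ~40 m interferometer arm the predicted jitter is of order 1e-17 m,
# tiny but within reach of modern laser interferometry.
print(holographic_noise_rms(40.0))
```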

    The Fermilab Holometer in Illinois is currently under construction and will be the world’s most sensitive laser interferometer when complete, surpassing the sensitivity of the GEO600 and LIGO systems, and theoretically able to detect holographic fluctuations in spacetime.[1][2][3]

    The Holometer may be capable of meeting or exceeding the sensitivity required to detect the smallest units in the universe called Planck units.[1] Fermilab states, “Everyone is familiar these days with the blurry and pixelated images, or noisy sound transmission, associated with poor internet bandwidth. The Holometer seeks to detect the equivalent blurriness or noise in reality itself, associated with the ultimate frequency limit imposed by nature.”[2]

    Craig Hogan, a particle astrophysicist at Fermilab, states about the experiment, “What we’re looking for is when the lasers lose step with each other. We’re trying to detect the smallest unit in the universe. This is really great fun, a sort of old-fashioned physics experiment where you don’t know what the result will be.”

    Experimental physicist Hartmut Grote of the Max Planck Institute in Germany, states that although he is skeptical that the apparatus will successfully detect the holographic fluctuations, if the experiment is successful “it would be a very strong impact to one of the most open questions in fundamental physics. It would be the first proof that space-time, the fabric of the universe, is quantized.”[1]

    References

    1. ^ a b c Mosher, David (2010-10-28). “World’s Most Precise Clocks Could Reveal Universe Is a Hologram”. Wired. http://www.wired.com/wiredscience/2010/10/holometer-universe-resolution/. 
    2. ^ a b “The Fermilab Holometer”. Fermi National Accelerator Laboratory. http://holometer.fnal.gov/. Retrieved 2010-11-01. 
    3. ^ Dillow, Clay (2010-10-21). “Fermilab is Building a ‘Holometer’ to Determine Once and For All Whether Reality Is Just an Illusion”. Popular Science. http://www.popsci.com/science/article/2010-10/fermilab-building-holometer-determine-if-universe-just-hologram.

    ***
    Fermilab Holometer

    About a hundred years ago, the German physicist Max Planck introduced the idea of a fundamental, natural length or time, derived from fundamental constants. We now call these the Planck length, l_p = √(hG/2πc³) ≈ 1.6 × 10⁻³⁵ meters. Light travels one Planck length in the Planck time, t_p = √(hG/2πc⁵) ≈ 5.4 × 10⁻⁴⁴ seconds. 
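These values can be checked numerically from the formulas above, using standard rounded values of the constants:

```python
import math

# Recomputing the Planck length and time from the constants in the text.
h = 6.626e-34   # Planck constant, J*s
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

l_p = math.sqrt(h * G / (2.0 * math.pi * c**3))  # Planck length
t_p = math.sqrt(h * G / (2.0 * math.pi * c**5))  # Planck time

print(l_p)  # ~1.6e-35 m
print(t_p)  # ~5.4e-44 s
```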

    The physics of space and time is expected to change radically on such small scales. For example, a particle confined to a Planck volume automatically collapses to a black hole. 

    See: Fermilab Holometer

    ***

    A Conceptual Drawing of the ‘Holometer’ via Symmetry

    “The shaking of spacetime occurs at a million times per second, a thousand times what your ear can hear,” said Fermilab experimental physicist Aaron Chou, whose lab is developing prototypes for the holometer. “Matter doesn’t like to shake at that speed. You could listen to gravitational frequencies with headphones.”

    The whole trick, Chou says, is to prove that the vibrations don’t come from the instrument. Using technology similar to that in noise-cancelling headphones, sensors outside the instrument detect vibrations and shake the mirror at the same frequency to cancel them. Any remaining shakiness at high frequency, the researchers propose, will be evidence of blurriness in spacetime.

    “With the holometer’s long arms, we’re magnifying spacetime’s uncertainty,” Chou said.

    See: Hogan’s holometer: Testing the hypothesis of a holographic universe

    ***

    Conclusion:
