The Compact Muon Solenoid

Coordinates: 46°18′34″N 6°4′37″E (46.30944°N, 6.07694°E)

Large Hadron Collider (LHC)

LHC experiments:
  • ATLAS – A Toroidal LHC Apparatus
  • CMS – Compact Muon Solenoid
  • LHCb – LHC-beauty
  • ALICE – A Large Ion Collider Experiment
  • TOTEM – Total Cross Section, Elastic Scattering and Diffraction Dissociation
  • LHCf – LHC-forward
  • MoEDAL – Monopole and Exotics Detector At the LHC

LHC preaccelerators:
  • Linac 2 and Linac 3 – linear accelerators for protons and lead ions, respectively
  • PSB – Proton Synchrotron Booster
  • PS – Proton Synchrotron
  • SPS – Super Proton Synchrotron

View of the CMS endcap through the barrel sections. The ladder to the lower right gives an impression of scale.

The Compact Muon Solenoid (CMS) experiment is one of two large general-purpose particle physics detectors built on the proton–proton Large Hadron Collider (LHC) at CERN in Switzerland and France. Approximately 3,600 people from 183 scientific institutes, representing 38 countries, form the CMS collaboration, which built and now operates the detector.[1] It is located in an underground cavern at Cessy in France, just across the border from Geneva.

Background

Recent collider experiments, such as the now-dismantled Large Electron–Positron Collider at CERN and the (as of 2010) still-running Tevatron at Fermilab, have provided remarkable insights into, and precision tests of, the Standard Model of particle physics. However, a number of questions remain unanswered.

A principal concern is the lack of any direct evidence for the Higgs boson, the particle resulting from the Higgs mechanism, which provides an explanation for the masses of elementary particles. Other questions include uncertainties in the mathematical behaviour of the Standard Model at high energies, the lack of any particle physics explanation for dark matter, and the reasons for the imbalance of matter and antimatter observed in the Universe.

The Large Hadron Collider and the associated experiments are designed to address a number of these questions.

Physics goals

The main goals of the experiment are to explore physics at the TeV scale, to search for the Higgs boson, to look for evidence of physics beyond the Standard Model (such as supersymmetry or extra dimensions), and to study aspects of heavy-ion collisions.

The ATLAS experiment, on the other side of the LHC ring, is designed with similar goals in mind, and the two experiments are designed to complement each other, both to extend reach and to provide corroboration of findings.

Detector summary

CMS is designed as a general-purpose detector, capable of studying many aspects of proton collisions at 14 TeV, the centre-of-mass energy of the LHC particle accelerator. It contains subsystems designed to measure the energy and momentum of photons, electrons, muons, and other products of the collisions. The innermost layer is a silicon-based tracker. Surrounding it is a scintillating-crystal electromagnetic calorimeter, which is itself surrounded by a sampling calorimeter for hadrons. The tracker and the calorimetry are compact enough to fit inside the CMS solenoid, which generates a powerful magnetic field of 3.8 T. Outside the magnet are the large muon detectors, which sit inside the return yoke of the magnet.

The layout of CMS. In the middle, beneath the so-called barrel, a man is shown for scale. (HCAL = hadron calorimeter, ECAL = electromagnetic calorimeter)

CMS by layers

A slice of the CMS detector.

For full technical details about the CMS detector, please see the Technical Design Report.

The interaction point

This is the point in the centre of the detector at which proton–proton collisions occur between the two counter-rotating beams of the LHC. At each end of the detector, magnets focus the beams into the interaction point. At collision each beam has a radius of 17 μm and the crossing angle between the beams is 285 μrad.

At full design luminosity each of the two LHC beams will contain 2,808 bunches of 1.15×10¹¹ protons. The interval between crossings is 25 ns, although the number of collisions per second is only 31.6 million, due to gaps in the beam as injector magnets are activated and deactivated.
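
As a quick sanity check of these numbers, here is a minimal sketch; the 11,245 Hz ring revolution frequency is an assumed standard LHC figure, not stated above:

```python
# Rough cross-check of the quoted crossing rate (bunch count from the text;
# the revolution frequency of the 27 km ring is an assumed LHC figure).
bunches = 2808            # bunches per beam at design luminosity
revolution_hz = 11245     # revolutions of the LHC ring per second (assumption)
crossings_per_s = bunches * revolution_hz
print(f"{crossings_per_s / 1e6:.1f} million crossings per second")  # ~31.6 million
```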

At full luminosity each bunch crossing will produce an average of 20 proton–proton interactions. The collisions occur at a centre-of-mass energy of 14 TeV. It is worth noting that the actual interactions occur between the quarks and gluons that make up the protons rather than the protons as a whole, so the actual energy involved in each collision will be lower, as determined by the parton distribution functions.

The first run, in September 2008, was expected to operate at a lower collision energy of 10 TeV, but this was prevented by the 19 September 2008 shutdown. When at this target level, the LHC will have a significantly reduced luminosity, due both to fewer proton bunches in each beam and to fewer protons per bunch. The reduced bunch frequency does, however, allow the crossing angle to be reduced to zero, as the bunches are spaced far enough apart to prevent secondary collisions in the experimental beampipe.

Layer 1 – The tracker

The silicon strip tracker of CMS.

Immediately around the interaction point the inner tracker serves to identify the tracks of individual particles and match them to the vertices from which they originated. The curvature of charged particle tracks in the magnetic field allows their charge and momentum to be measured.
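
As an illustrative sketch of how momentum follows from curvature, one can use the standard relation p_T [GeV/c] ≈ 0.3·B[T]·r[m] for a unit-charge particle (a textbook formula assumed here, not taken from the text above):

```python
# Minimal sketch: transverse momentum from track curvature in a solenoid field,
# using p_T [GeV/c] = 0.3 * B [T] * r [m] for a particle of unit charge.
def pt_gev(b_tesla: float, radius_m: float) -> float:
    """Transverse momentum (GeV/c) of a unit-charge track with bending radius r."""
    return 0.3 * b_tesla * radius_m

# A track bending with r = 1 m in the 3.8 T CMS field carries ~1.14 GeV/c:
print(pt_gev(3.8, 1.0))
```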

The CMS silicon tracker consists of 13 layers in the central region and 14 layers in the endcaps. The innermost three layers (up to 11 cm radius) consist of 100×150 μm pixels, 66 million in total.
The next four layers (up to 55 cm radius) consist of 10 cm × 180 μm silicon strips, followed by the remaining six layers of 25 cm × 180 μm strips, out to a radius of 1.1 m. There are 9.6 million strip channels in total.
During full-luminosity collisions the occupancy of the pixel layers per event is expected to be 0.1%, and 1–2% in the strip layers. The expected SLHC upgrade will increase the number of interactions to the point where over-occupancy may significantly reduce track-finding effectiveness.

This part of the detector is the world’s largest silicon detector. It has 205 m2 of silicon sensors (approximately the area of a tennis court) comprising 76 million channels.[2]

Layer 2 – The Electromagnetic Calorimeter

The Electromagnetic Calorimeter (ECAL) is designed to measure with high accuracy the energies of electrons and photons.

The ECAL is constructed from crystals of lead tungstate, PbWO4. This is an extremely dense but optically clear material, ideal for stopping high-energy particles. It has a radiation length of X₀ = 0.89 cm and a rapid light output, with 80% of the light emitted within one crossing time (25 ns). This is balanced, however, by a relatively low light yield of 30 photons per MeV of incident energy.

The crystals used have a front size of 22 mm × 22 mm and a depth of 230 mm. They are set in a matrix of carbon fibre to keep them optically isolated, and backed by silicon avalanche photodiodes for readout. The barrel region consists of 61,200 crystals, with a further 7,324 in each of the endcaps.

At the endcaps the ECAL inner surface is covered by the preshower subdetector, consisting of two layers of lead interleaved with two layers of silicon strip detectors. Its purpose is to aid in pion-photon discrimination.

Layer 3 – The Hadronic Calorimeter

Half of the Hadron Calorimeter

The purpose of the Hadronic Calorimeter (HCAL) is both to measure the energy of individual hadrons produced in each event, and to be as nearly hermetic around the interaction region as possible, allowing events with missing energy to be identified.

The HCAL consists of layers of dense material (brass or steel) interleaved with tiles of plastic scintillator, read out via wavelength-shifting fibres by hybrid photodiodes. This combination was determined to allow the maximum amount of absorbing material inside the magnet coil.

The high pseudorapidity region (3.0 < | η | < 5.0) is instrumented by the Hadronic Forward detector. Located 11 m either side of the interaction point, this uses a slightly different technology of steel absorbers and quartz fibres for readout, designed to allow better separation of particles in the congested forward region.
The brass used in the endcaps of the HCAL used to be Russian artillery shells.[3]

Layer 4 – The magnet

Like most particle physics detectors, CMS has a large solenoid magnet. This allows the charge and momentum of particles to be determined from the curved tracks that they follow in the magnetic field. The magnet is 13 m long and 6 m in diameter, and its refrigerated superconducting niobium–titanium coils were originally intended to produce a 4 T magnetic field. It was recently announced that the magnet will run at 3.8 T instead of the full design strength in order to maximize longevity.[4]

The inductance of the magnet is 14 H and the nominal current for 4 T is 19,500 A, giving a total stored energy of 2.66 GJ, equivalent to about half a tonne of TNT. There are dump circuits to safely dissipate this energy should the magnet quench. The circuit resistance (essentially just the cables from the power converter to the cryostat) is 0.1 mΩ, which leads to a circuit time constant of nearly 39 hours. This is the longest time constant of any circuit at CERN. The operating current for 3.8 T is 18,160 A, giving a stored energy of 2.3 GJ.
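
These figures can be cross-checked from the standard relations E = ½LI² and τ = L/R; a minimal sketch using only the numbers quoted above:

```python
# Cross-check of the magnet numbers: stored energy E = L*I^2/2, time constant tau = L/R.
L_henry = 14.0       # magnet inductance (H)
I_design = 19_500.0  # nominal current for 4 T (A)
R_ohm = 0.1e-3       # circuit resistance (0.1 milliohm)

energy_gj = 0.5 * L_henry * I_design**2 / 1e9
tau_hours = (L_henry / R_ohm) / 3600
print(f"stored energy ~ {energy_gj:.2f} GJ, time constant ~ {tau_hours:.0f} h")
# -> ~2.66 GJ and ~39 hours, matching the text
```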

Layer 5 – The muon detectors and return yoke

To identify muons and measure their momenta, CMS uses three types of detector: drift tubes (DT), cathode strip chambers (CSC) and resistive plate chambers (RPC). The DTs are used for precise trajectory measurements in the central barrel region, while the CSCs are used in the end caps. The RPCs provide a fast signal when a muon passes through the muon detector, and are installed in both the barrel and the end caps.

Collecting and collating the data

Pattern recognition

Testing the data read-out electronics for the tracker.

New particles discovered in CMS will be typically unstable and rapidly transform into a cascade of lighter, more stable and better understood particles. Particles travelling through CMS leave behind characteristic patterns, or ‘signatures’, in the different layers, allowing them to be identified. The presence (or not) of any new particles can then be inferred.

Trigger system

To have a good chance of producing a rare particle, such as a Higgs boson, a very large number of collisions is required. Most collision events in the detector are “soft” and do not produce interesting effects. The amount of raw data from each crossing is approximately 1 MB, which at the 40 MHz crossing rate would result in 40 TB of data a second, an amount that the experiment cannot hope to store or even process properly. The trigger system reduces the rate of interesting events down to a manageable 100 per second.
To accomplish this, a series of “trigger” stages is employed. All the data from each crossing are held in buffers within the detector while a small amount of key information is used to perform a fast, approximate calculation to identify features of interest such as high-energy jets, muons or missing energy. This “Level 1” calculation is completed in around 1 µs, and the event rate is reduced by a factor of about a thousand, down to 50 kHz. All these calculations are done on fast, custom hardware using reprogrammable FPGAs.

If an event is passed by the Level 1 trigger, all the data still buffered in the detector are sent over fibre-optic links to the “High Level” trigger, which is software (mainly written in C++) running on ordinary computer servers. The lower event rate in the High Level trigger allows time for much more detailed analysis of the event than is possible in the Level 1 trigger. The High Level trigger reduces the event rate by a further factor of about a thousand, down to around 100 events per second. These are then stored on tape for future analysis.
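
As a rough cross-check of these rates, a minimal sketch of the data-rate budget through the trigger chain, using only the figures quoted in the text:

```python
# Back-of-envelope data-rate budget through the trigger chain
# (1 MB per crossing and the quoted event rates at each stage).
event_size_mb = 1.0
rates_hz = {"raw crossings": 40e6, "after Level 1": 50e3, "after High Level": 100}
for stage, rate in rates_hz.items():
    print(f"{stage:>18}: {rate:>10.0f} Hz -> {rate * event_size_mb / 1e6:.4f} TB/s")
# raw crossings give 40 TB/s; after the High Level trigger only ~100 MB/s reaches tape
```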

Data analysis

Data that has passed the triggering stages and been stored on tape is duplicated using the Grid to additional sites around the world for easier access and redundancy. Physicists are then able to use the Grid to access and run their analyses on the data.
Some possible analyses might be:

  • Looking at events with large amounts of apparently missing energy, which implies the presence of particles that have passed through the detector without leaving a signature, such as neutrinos.
  • Looking at the kinematics of pairs of particles produced by the decay of a parent, such as the Z boson decaying to a pair of electrons or the Higgs boson decaying to a pair of tau leptons or photons, to determine the properties and mass of the parent (see the sketch after this list).
  • Looking at jets of particles to study the way the quarks in the collided protons have interacted.
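
As a hedged illustration of the second analysis above, the following toy sketch reconstructs a parent mass from two decay products; the four-momenta are invented for illustration, not real data:

```python
import math

# Invariant mass of a parent from two decay products' four-momenta (GeV units):
# m = sqrt((E1 + E2)^2 - |p1 + p2|^2), with masses of the daughters neglected.
def invariant_mass(p1, p2):
    """Each argument is a four-vector (E, px, py, pz)."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(e**2 - (px**2 + py**2 + pz**2), 0.0))

# Two back-to-back 45.6 GeV electrons reconstruct to ~91.2 GeV, the Z mass:
print(invariant_mass((45.6, 0, 0, 45.6), (45.6, 0, 0, -45.6)))
```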

Milestones

1998 Construction of surface buildings for CMS begins.
2000 LEP shut down, construction of cavern begins.
2004 Cavern completed.
10 September 2008 First beam in CMS.
23 November 2009 First collisions in CMS.
30 March 2010 First 7 TeV collisions in CMS.

References

  1. ^ [1]
  2. ^ CMS installs the world’s largest silicon detector, CERN Courier, Feb 15, 2008
  3. ^ CMS HCAL history – CERN
  4. ^ Precise mapping of the magnetic field in the CMS barrel yoke using cosmic rays. http://iopscience.iop.org/1748-0221/5/03/T03021/pdf/1748-0221_5_03_T03021.pdf

Posted in AMS, Atlas, CMS, Muons, Triggering

Antarctic Muon And Neutrino Detector Array

Diagram of IceCube. IceCube will occupy a volume of one cubic kilometer. Shown here is one of the 80 strings of optical modules (number and size not to scale). IceTop, located at the surface, comprises an array of sensors to detect air showers; it will be used to calibrate IceCube and to conduct research on high-energy cosmic rays. Author: Steve Yunck; credit: NSF

The Antarctic Muon And Neutrino Detector Array (AMANDA) is a neutrino telescope located beneath the Amundsen–Scott South Pole Station. In 2005, after nine years of operation, AMANDA officially became part of its successor project, IceCube.

AMANDA consists of optical modules, each containing one photomultiplier tube, sunk in the Antarctic ice cap at depths of about 1,500 to 1,900 meters. In its latest development stage, known as AMANDA-II, AMANDA is made up of an array of 677 optical modules mounted on 19 separate strings that are spread out in a rough circle with a diameter of 200 meters. Each string has several dozen modules, and was put in place by “drilling” a hole in the ice with a hot-water hose, sinking the cable with its attached optical modules, and then letting the ice freeze around it.

AMANDA detects very high energy neutrinos (above 50 GeV) which pass through the Earth from the northern hemisphere and then react just as they are leaving upward through the Antarctic ice. A neutrino collides with the nuclei of oxygen or hydrogen atoms in the surrounding water ice, producing a muon and a hadronic shower. The optical modules detect the Cherenkov radiation from these particles, and by analysing the timing of the photon hits the direction of the original neutrino can be determined with an angular resolution of approximately two degrees.
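
For a sense of the geometry involved, a minimal sketch of the Cherenkov emission angle, cos θ = 1/(nβ); the refractive index n ≈ 1.31 for ice is an assumed typical value, not stated in the text:

```python
import math

# Cherenkov emission angle for a relativistic muon (beta ~ 1) in ice.
n_ice, beta = 1.31, 1.0   # n for ice is an assumed typical value
theta = math.degrees(math.acos(1.0 / (n_ice * beta)))
print(f"Cherenkov angle in ice: ~{theta:.0f} degrees")  # ~40 degrees
```

The fixed emission angle is what lets the timing of photon hits across the strings be converted into a direction for the muon, and hence for the parent neutrino.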

AMANDA’s goal was neutrino astronomy: identifying and characterizing extra-solar sources of neutrinos. Compared to underground detectors like Super-Kamiokande in Japan, AMANDA was capable of looking at higher-energy neutrinos because it is not limited in volume to a man-made tank; however, it had much less accuracy because of the less controlled conditions and wider spacing of its photomultipliers. Super-Kamiokande can look in much greater detail at neutrinos from the Sun and those generated in the Earth’s atmosphere; at higher energies, however, the spectrum should become dominated by neutrinos from sources outside the solar system. Such a new view into the cosmos could give important clues in the search for dark matter and other astrophysical phenomena.

After two years of integrated operation as part of IceCube,[1] the AMANDA counting house (in the Martin A. Pomerantz Observatory) was decommissioned in July and August 2009.

References

  1. ^ http://icecube.wisc.edu/science/publications/pdd/pdd12.php

***

When a neutrino collides with a water molecule deep in Antarctica’s ice, the particle it produces radiates a blue light called Cherenkov radiation, which IceCube will detect (Steve Yunck/NSF)

See: Dual Nature From Microstate Blackhole Creation?

Posted in IceCube, Martin Rees, Muons, SuperKamiokande

Muons reveal the interior of volcanoes

The location of the muon detector on the slopes of the Vesuvius volcano.

Like X-ray scans of the human body, muon radiography allows researchers to obtain an image of the internal structures of the upper levels of volcanoes. Although such an image cannot help to predict when an eruption might occur, it can, if combined with other observations, help to foresee how an eruption could develop, and it serves as a powerful tool for the study of geological structures.

Muons come from the interaction of cosmic rays with the Earth’s atmosphere. They are able to traverse layers of rock as thick as one kilometre or more. During their trip, they are partially absorbed by the material they go through, very much like X-rays are partially absorbed by bones or other internal structures in our body. At the end of the chain, instead of the classic X-ray plate, is the so-called ‘muon telescope’, a special detector placed on the slopes of the volcano.
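
A rough sketch of the absorption argument, assuming the textbook rule of thumb that a minimum-ionizing muon loses about 2 MeV per g/cm² of material traversed (radiative losses at high energy are neglected, and the rock density is an assumed value):

```python
# Why muons can cross ~1 km of rock, and why the transmitted flux maps density.
rho = 2.65            # g/cm^3, assumed density of standard rock
thickness_cm = 1e5    # 1 km of rock
opacity = rho * thickness_cm       # integrated density along the path, g/cm^2
e_min_gev = 2e-3 * opacity         # ~2 MeV lost per g/cm^2 -> minimum energy in GeV
print(f"opacity ~ {opacity:.0f} g/cm^2 -> E_min ~ {e_min_gev:.0f} GeV")
# A muon needs of order 500 GeV to traverse 1 km of rock; counting how many
# survive along each line of sight therefore measures the integrated density.
```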

See: Muons reveal the interior of volcanoes

***

MU-RAY project 
MUon RAdiographY

A. Kircher (1601–1680): “The interior of Vesuvius” (1638)
Read more about Athanasius Kircher on Wikipedia.

Cosmic-ray muon radiography is a technique capable of imaging variations of density inside hundreds of meters of rock. With resolutions of up to tens of meters in optimal detection conditions, muon radiography can produce images of the top region of a volcanic edifice with a resolution significantly better than that typically achieved with conventional gravity methods. In this way it can provide information on anomalies in the density distribution, such as those expected from dense lava conduits, low-density magma supply paths, or the compression of the overlying soil with depth.
The MU-RAY project is aimed at studying the internal structure of the Stromboli and Vesuvius volcanoes using this technique.

Posted in Gran Sasso, Muons, Volcanoes

When Muons Collide

By Leah Hesla

Illustration: Sandbox Studio

When Fermilab physicist Steve Geer agreed to perform a calculation as part of a muon collider task force 10 years ago, he imagined he would show that the collider’s technical challenges were too difficult to be solved and move on to other matters. But as he delved further into the problem, he realized that the obstacles he had envisioned could in principle be overcome.

“I started as a skeptic,” he says. “But the more I studied it, I realized it might be a solvable problem.”

A muon collider—a machine that currently exists only in computer simulation—is a relative newcomer to the world of particle accelerators. At the moment, the reception from the particle physics community to this first-of-its-kind particle smasher is “polite,” says Fermilab physicist Alan Bross.

Politeness will suffice for now: research and development on the machine are gearing up thanks to funding from the US Department of Energy. In August, a DOE review panel supported the launch of the Muon Accelerator Program, or MAP, an international initiative led by Fermilab. Scientists hope the program will receive about $15 million per year over seven years to examine the collider’s feasibility and cost effectiveness.

***


The figure shows arrival directions of cosmic rays with energies above 4×10¹⁹ eV. Red squares and green circles represent cosmic rays with energies of >10²⁰ eV and (4–10)×10¹⁹ eV, respectively.

We observed muon components in the detected air showers and studied their characteristics. Generally speaking, more muons in a shower cascade favors heavier primary hadrons, and the measurement of muons is one of the methods used to infer the chemical composition of energetic cosmic rays. Our recent measurement indicates no systematic change in the mass composition from a predominantly heavy to a light composition above 3×10¹⁷ eV, as claimed by the Fly’s Eye group.

***

Also see:

Muons 

Posted in Muons

The Penrose interpretation

The Penrose interpretation is a prediction of Sir Roger Penrose about the mass scale at which standard quantum mechanics will fail. The idea is inspired by quantum gravity, because it uses both the physical constants ℏ and G.
Penrose’s idea is a variant of objective collapse theory. In these theories the wavefunction is a physical wave, which undergoes wave function collapse as a random process, with observers playing no special role. Penrose suggests that the threshold for wave function collapse is when superpositions involve at least a Planck mass worth of matter. He then hypothesizes that some fundamental gravitational event occurs, causing the wavefunction to choose one branch of reality over another. Despite the difficulties in specifying this in a rigorous way, he mathematically described the basis states involved in the Schrödinger–Newton equations.
Accepting that wavefunctions are physically real, Penrose believes that things can exist in more than one place at one time. In his view, a macroscopic system, like a human being, cannot exist in more than one position because it has a significant gravitational field. A microscopic system, like an electron, has an insignificant gravitational field, and can exist in more than one location almost indefinitely.

In Einstein‘s theory, any object that has mass causes a warp in the structure of space and time around it. This warping produces the effect we experience as gravity. Penrose points out that tiny objects, such as dust specks, atoms and electrons, produce space-time warps as well. Ignoring these warps is where most physicists go awry. If a dust speck is in two locations at the same time, each one should create its own distortions in space-time, yielding two superposed gravitational fields. According to Penrose’s theory, it takes energy to sustain these dual fields. The stability of a system depends on the amount of energy involved: the higher the energy required to sustain a system, the less stable it is. Over time, an unstable system tends to settle back to its simplest, lowest-energy state: in this case, one object in one location producing one gravitational field. If Penrose is right, gravity yanks objects back into a single location, without any need to invoke observers or parallel universes.[1]

Penrose speculates that the transition between the macroscopic and the quantum begins at the scale of dust particles (whose mass is of the order of the Planck mass). A dust particle could exist in more than one location for as long as one second, much longer than a larger object could remain in superposition. He has proposed an experiment to test this idea, called FELIX (Free-orbit Experiment with Laser Interferometry X-rays), in which an X-ray laser in space is directed toward a tiny mirror and split by a beam splitter thousands of miles away; the photons are directed toward other mirrors and reflected back. A photon on one path strikes the tiny mirror en route to another mirror and moves it back as it returns, so that, on Penrose’s approach, the tiny mirror should exist in two locations at one time. If gravity affects the mirror, it will be unable to exist in two locations at once, because gravity holds it in place.[2]
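
For concreteness, the Planck mass Penrose invokes can be computed directly from ℏ, c and G; a minimal numerical sketch showing it really is at the dust-speck scale:

```python
import math

# Planck mass m_P = sqrt(hbar * c / G), in SI units.
hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m / s
G = 6.67430e-11         # m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)
print(f"Planck mass ~ {m_planck * 1e6:.1f} micrograms")  # ~21.8 ug, dust-speck scale
```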

However, because this experiment would be difficult to set up, a table-top version has been proposed instead.[3]

References

  1. ^ Folger, Tim. “If an Electron Can Be in 2 Places at Once, Why Can’t You?” Discover, Vol. 25 No. 6 (June 2005), p. 33.
  2. ^ Penrose, R. The Road to Reality, pp. 856–860.
  3. ^ Folger, Tim. “If an Electron Can Be in 2 Places at Once, Why Can’t You?” Discover, Vol. 25 No. 6 (June 2005), pp. 34–35.


Quantum superposition

Quantum superposition refers to the quantum mechanical property of a particle whereby it can occupy all of its possible quantum states simultaneously. Due to this property, to completely describe a particle one must include a description of every possible state and the probability of the particle being in each state. Since the Schrödinger equation is linear, a solution that takes into account all possible states will be a linear combination of the solutions for the individual states. This mathematical property of linear equations is known as the superposition principle.

The Superposition principle of quantum mechanics

The principle of superposition states that if the world can be in any configuration, any possible arrangement of particles or fields, and if the world could also be in another configuration, then the world can also be in a state which is a superposition of the two, where the amount of each configuration that is in the superposition is specified by a complex number.

Examples

For an equation describing a physical phenomenon, the superposition principle states that a linear combination of solutions to the equation is also a solution. When this is true the equation is said to be linear and to obey the superposition principle. Thus, if the functions f1, f2, and f3 each solve the linear equation, then ψ = c1f1 + c2f2 + c3f3 is also a solution, where each c is a coefficient. For example, the electric field due to a distribution of charged particles is the vector sum of the contributions of the individual particles.

Similarly, probability theory states that the probability of an event can be described by a linear combination of the probabilities of certain other specific events. For example, the probability of flipping two coins (coin A and coin B) and having at least one land heads up can be expressed as the sum of the probabilities for three specific events: A heads with B tails, A heads with B heads, and A tails with B heads. In this case the probability could be expressed as:

P(heads ≥ 1) = P(A and not B) + P(A and B) + P(B and not A)

or even:

P(heads ≥ 1) = 1 − P(not A and not B)

Probability theory, as with quantum theory, would also require that the sum of probabilities for all possible events, not just those satisfying the previous condition, be normalized to one. Thus:

P(A and not B) + P(A and B) + P(B and not A) + P(not A and not B) = 1

Probability theory also states that the probability distribution along a continuum (i.e., the chance of finding something as a function of position along a continuous set of coordinates) or among discrete events (the example above) can be described using a probability density function or a unit vector, respectively, with the probability magnitude being given by the square of the density function.
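
The two-coin example above can be checked by brute-force enumeration; a minimal sketch:

```python
from itertools import product

# Enumerate the four equally likely (A, B) outcomes and check both expressions.
outcomes = list(product("HT", repeat=2))   # each outcome has probability 1/4
p_at_least_one = sum(1 for o in outcomes if "H" in o) / len(outcomes)
p_complement = 1 - sum(1 for o in outcomes if o == ("T", "T")) / len(outcomes)
print(p_at_least_one, p_complement)  # both 0.75; all four outcomes sum to 1
```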

In quantum mechanics an additional layer of analysis is introduced, as the probability density function is now more specifically a wave function ψ. The wave function is either a complex function of a finite set of real variables or a complex vector formed of a finite or infinite number of components. As the coefficients in the linear combination that describes our probability density are now complex, the probability must come from the absolute value of the product of the wave function with its complex conjugate, ψψ* = |ψ|². In cases where the functions are not complex, the probability of an event that depends on any member of a subset of the complete set of possible events is the simple sum of the event probabilities in that subset. For example, if an observer rings a bell whenever one or more coins land heads up in the example above, then the probability of the observer ringing the bell is the sum of the probabilities of each event in which at least one coin lands heads up. This is a simple sum because the square of the probability function describing this system is always positive. With the wave function, by contrast, amplitudes are added before squaring, so different contributions can cancel and produce counterintuitive results.

For example, if a photon in a plus spin state has a 0.1 amplitude to be absorbed and take an atom to the second energy level, and if the photon in a minus spin state has a −0.1 amplitude to do the same thing, a photon which has an equal amplitude to be plus or minus would have zero amplitude to take the atom to the second excited state and the atom will not be excited. If the photon’s spin is measured before it reaches the atom, whatever the answer, plus or minus, it will have a nonzero amplitude to excite the atom, plus or minus 0.1.

Assuming normalization, the probability density in quantum mechanics is equal to the square of the absolute value of the amplitude. The further the amplitude is from zero, the bigger the probability. Where probability distribution is represented as a continuous function the probability is the integral of the density function over the relevant values. Where the wave equation is represented as a complex vector, then probability will be extracted from the absolute value of an inner-product of the coefficient matrix and its complex conjugate. In the atom example above, the probability that the atom will be excited is 0. But the only time probability enters the picture is when an observer gets involved. If you look to see which way the atom is, the different amplitudes become probabilities for seeing different things. So if you check to see whether the atom is excited immediately after the photon with 0 amplitude reaches it, there is no chance of seeing the atom excited.

Another example: If a particle can be in position A and position B, it can also be in a state where it is an amount “3i/5” in position A and an amount “4/5” in position B. To write this, physicists usually say:

|\psi\rangle = {3\over 5} i |A\rangle + {4\over 5} |B\rangle.

In the description, only the relative size of the different components matter, and their angle to each other on the complex plane. This is usually stated by declaring that two states which are a multiple of one another are the same as far as the description of the situation is concerned.

|\psi \rangle \approx \alpha |\psi \rangle
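
A quick numerical check of the example state above: it is properly normalized, and multiplying it by an overall phase leaves every outcome probability unchanged (a minimal sketch):

```python
# Amplitudes of the state (3i/5)|A> + (4/5)|B>.
a, b = 3j / 5, 4 / 5
print(abs(a)**2 + abs(b)**2)                 # 1.0 -> properly normalized

alpha = complex(0, 1)                        # an overall phase, |alpha| = 1
print(abs(alpha * a)**2, abs(alpha * b)**2)  # 0.36, 0.64 -> probabilities unchanged
```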

The fundamental dynamical law of quantum mechanics is that the evolution is linear, meaning that if the state A turns into A′ and B turns into B′ after 10 seconds, then after 10 seconds the superposition ψ turns into a superposition of A′ and B′ with the same coefficients as A and B. A particle can have any position, so that there are different states which have any value of the position x. These are written:

|x\rangle

The principle of superposition guarantees that there are states which are arbitrary superpositions of all the positions with complex coefficients:

\sum_x \psi(x) |x\rangle

This sum is defined only if the index x is discrete. If the index instead ranges over the real numbers, the sum is not defined and is replaced by an integral. The quantity ψ(x) is called the wavefunction of the particle.
If a particle can have some discrete orientations of the spin, say the spin can be aligned with the z axis |+\rangle or against it |-\rangle, then the particle can have any state of the form:

C_1 |+\rangle + C_2 |-\rangle

If the particle has both position and spin, the state is a superposition of all possibilities for both:

\sum_x \psi_+(x)|x,+\rangle + \psi_-(x)|x,-\rangle \,

The configuration space of a quantum mechanical system cannot be worked out without some physical knowledge. The input is usually the allowed different classical configurations, but without the duplication of including both position and momentum.
A pair of particles can be in any combination of pairs of positions. A state where one particle is at position x and the other is at position y is written |x,y\rangle. The most general state is a superposition of the possibilities:

\sum_{xy} A(x,y) |x,y\rangle \,

The description of the two particles is much larger than the description of one particle — it is a function in twice the number of dimensions. This is also true in probability, when the statistics of two random things are correlated. If two particles are uncorrelated, the probability distribution for their joint position P(x,y) is a product of the probability of finding one at one position and the other at the other position:

P(x,y) = P_x (x) P_y(y) \,

In quantum mechanics, two particles can be in special states where the amplitudes of their position are uncorrelated. For quantum amplitudes, the word entanglement replaces the word correlation, but the analogy is exact. A disentangled wavefunction has the form:

A(x,y) = \psi_x(x)\psi_y(y) \,

while an entangled wavefunction does not have this form. Like correlation in probability, there are many more entangled states than disentangled ones. For instance, when two particles which start out with an equal amplitude to be anywhere in a box have a strong attraction and a way to dissipate energy, they can easily come together to make a bound state. The bound state still has an equal probability to be anywhere, so that each particle is equally likely to be everywhere, but the two particles will become entangled so that wherever one particle is, the other is too.
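
Whether a two-particle amplitude A(x, y) factorizes can be tested numerically: on a finite grid, a disentangled amplitude matrix has Schmidt (singular-value) rank 1, while an entangled one has rank greater than 1. A minimal sketch with invented Gaussian amplitudes:

```python
import numpy as np

x = np.linspace(-1, 1, 50)
# Disentangled: A(x, y) = psi(x) * psi(y), an outer product.
product_state = np.outer(np.exp(-x**2), np.exp(-x**2))
# Entangled "bound state": amplitude depends only on the separation x - y.
bound_state = np.exp(-(x[:, None] - x[None, :])**2)

for name, A in [("product", product_state), ("bound", bound_state)]:
    s = np.linalg.svd(A, compute_uv=False)
    print(name, "Schmidt rank:", int(np.sum(s > 1e-10 * s[0])))
# product -> rank 1 (disentangled); bound -> rank > 1 (positions correlated)
```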

Analogy with probability

In probability theory there is a similar principle. If a system has a probabilistic description, this description gives the probability of any configuration, and given any two different configurations, there is a state which is partly this and partly that, with positive real number coefficients, the probabilities, which say how much of each there is.

For example, if we have a probability distribution for where a particle is, it is described by the “state”

\sum_x \rho(x) |x\rangle

Where ρ is the probability density function, a positive number that measures the probability that the particle will be found at a certain location.

The evolution equation is also linear in probability, for fundamental reasons. If the particle has some probability for going from position x to y, and from z to y, the probability of going to y starting from a state which is half-x and half-z is a half-and-half mixture of the probability of going to y from each of the options. This is the principle of linear superposition in probability.

Quantum mechanics is different, because the numbers can be positive or negative. While the complex nature of the numbers is just a doubling, if you consider the real and imaginary parts separately, the sign of the coefficients is important. In probability, two different possible outcomes always add together, so that if there are more options to get to a point z, the probability always goes up. In quantum mechanics, different possibilities can cancel.

In probability theory with a finite number of states, the probabilities can always be multiplied by a positive number to make their sum equal to one. For example, if there is a three state probability system:

x |1\rangle + y |2\rangle + z |3\rangle \,

where the probabilities x,y,z are positive numbers. Rescaling x,y,z so that

x+y+z=1 \,

The geometry of the state space is revealed to be a triangle. In general it is a simplex. There are special points in a triangle or simplex corresponding to the corners, and these points are those where one of the probabilities is equal to 1 and the others are zero. These are the unique locations where the position is known with certainty.

In a quantum mechanical system with three states, the quantum mechanical wavefunction is a superposition of states again, but this time twice as many quantities with no restriction on the sign:

A|1\rangle + B|2\rangle + C|3\rangle = (A_r + iA_i) |1\rangle + (B_r + i B_i) |2\rangle + (C_r + iC_i) |3\rangle \,

rescaling the variables so that the sum of the squares is 1, the geometry of the space is revealed to be a high dimensional sphere

A_r^2 + A_i^2 + B_r^2 + B_i^2 + C_r^2 + C_i^2 = 1 \,.

A sphere has a large amount of symmetry: it can be viewed in different coordinate systems or bases. So, unlike a probability theory, a quantum theory has a large number of different bases in which it can be equally well described. The geometry of the phase space can be viewed as a hint that the quantity in quantum mechanics which corresponds to the probability is the absolute square of the coefficient of the superposition.

Hamiltonian evolution

The numbers that describe the amplitudes for different possibilities define the kinematics, the space of different states. The dynamics describes how these numbers change with time. For a particle that can be in any one of infinitely many discrete positions, a particle on a lattice, the superposition principle tells you how to make a state:

\sum_n \psi_n |n\rangle \,

So that the infinite list of amplitudes (…, ψ₋₂, ψ₋₁, ψ₀, ψ₁, ψ₂, …) completely describes the quantum state of the particle. This list is called the state vector, and formally it is an element of a Hilbert space, an infinite-dimensional complex vector space. It is usual to represent the state so that the sum of the absolute squares of the amplitudes adds up to one:

\sum \psi_n^*\psi_n = 1

For a particle described by probability theory, randomly walking on a line, the analogous object is the list of probabilities (…, P₋₂, P₋₁, P₀, P₁, P₂, …), which give the probability of finding the particle at any position. The quantities that describe how the probabilities change in time are the transition probabilities K_{x→y}(t), which give the probability that, starting at x, the particle ends up at y after time t. The total probability of ending up at y is given by the sum over all the possibilities:

P_y(t_0+t) = \sum_x P_x(t_0) K_{x\rightarrow y}(t) \,

The condition of conservation of probability states that starting at any x, the total probability to end up somewhere must add up to 1:

\sum_y K_{x\rightarrow y} = 1 \,

So that the total probability will be preserved, K is what is called a stochastic matrix.
When no time passes, nothing changes: for zero elapsed time, K_{x→y}(0) = δ_{xy}; the K matrix is zero except from a state to itself. So in the case that the time is short, it is better to talk about the rate of change of the probability instead of the absolute change in the probability.

  P_y(t+dt) = P_y(t) + dt \sum_x P_x R_{x\rightarrow y} \,

where R_{x→y} is the time derivative of the K matrix:

R_{x\rightarrow y} = { K_{x\rightarrow y}(dt) - \delta_{xy} \over dt} \,.

The equation for the probabilities is a differential equation which is sometimes called the master equation:

{dP_y \over dt} = \sum_x P_x R_{x\rightarrow y} \,

The R matrix is the probability per unit time for the particle to make a transition from x to y. The condition that the K matrix elements add up to one becomes the condition that the R matrix elements add up to zero:

\sum_y R_{x\rightarrow y} = 0 \,

One simple case to study is when the R matrix has an equal probability to go one unit to the left or to the right, describing a particle which has a constant rate of random walking. In this case R_{x→y} is zero unless y is x+1, x, or x−1. When y is x+1 or x−1, the R matrix has value c, and in order for the sum of the R-matrix coefficients to equal zero, the value of R_{x→x} must be −2c. The probabilities then obey the discretized diffusion equation:

{dP_x \over dt } = c(P_{x+1} - 2P_{x} + P_{x-1}) \,

which, when c is scaled appropriately and the P distribution is smooth enough to think of the system in a continuum limit, becomes:

{\partial P(x,t) \over \partial t} = c {\partial^2 P \over \partial x^2 } \,

Which is the diffusion equation.
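
The discretized master equation above is easy to simulate directly; a minimal sketch (toy parameters) showing that total probability is conserved while the distribution spreads:

```python
import numpy as np

# dP_x/dt = c * (P_{x+1} - 2 P_x + P_{x-1}): a random walk on a line,
# integrated with simple Euler steps on a periodic lattice.
c, dt, steps = 1.0, 0.01, 500
P = np.zeros(101); P[50] = 1.0   # all probability starts at the middle site
for _ in range(steps):
    P += dt * c * (np.roll(P, -1) - 2 * P + np.roll(P, 1))

print(P.sum())            # stays 1: the R-matrix columns sum to zero
print(P[45:56].round(3))  # probability spreads diffusively from the center
```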
In quantum mechanics the amplitudes evolve in time according to rules which are mathematically exactly the same, except that the numbers are complex. The analog of the finite-time K matrix is called the U matrix:

\psi_n(t) = \sum_m U_{nm}(t) \psi_m \,

Since the sum of the absolute squares of the amplitudes must be constant, U must be unitary:

\sum_n U^*_{nm} U_{np} = \delta_{mp} \,

or, in matrix notation,

U^\dagger U = I \,

The rate of change of U is called the Hamiltonian H, up to a traditional factor of i:

H_{mn} = i{d \over dt} U_{mn}

The Hamiltonian gives the rate at which the particle has an amplitude to go from m to n. The reason it is multiplied by i is that the condition that U is unitary translates to the condition:

(I + i H^\dagger dt )(I - i H dt ) = I \,
H^\dagger - H = 0 \,

which says that H is Hermitian. The eigenvalues of the Hermitian matrix H are real quantities which have a physical interpretation as energy levels. If the factor i were absent, the H matrix would be antihermitian and would have purely imaginary eigenvalues, which is not the traditional way quantum mechanics represents observable quantities like the energy.
For a particle which has equal amplitude to move left and right, the Hermitian matrix H is zero except for nearest neighbors, where it has the value c. If the coefficient is everywhere constant, the condition that H is Hermitian demands that the amplitude to move to the left is the complex conjugate of the amplitude to move to the right. The equation of motion for ψ is the time differential equation:

i{d \psi_n \over dt} = c^* \psi_{n+1} + c \psi_{n-1}

In the case that left and right are symmetric, c is real. By redefining the phase of the wavefunction in time, ψ → ψe^{2ict}, the amplitudes for being at different locations are only rescaled, so that the physical situation is unchanged. But this phase rotation introduces a linear term:

i{d \psi_n \over dt} = c \psi_{n+1} - 2c\psi_n + c\psi_{n-1}

which is the right choice of phase to take the continuum limit. When c is very large and ψ is slowly varying, so that the lattice can be thought of as a line, this becomes the free Schrödinger equation:

i{ \partial \psi \over \partial t } = - {\partial^2 \psi \over \partial x^2}
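
The lattice version of this evolution can be checked numerically: with a Hermitian nearest-neighbour H, the evolution U = e^{−iHt} is unitary and the total probability is conserved. A minimal sketch with toy parameters:

```python
import numpy as np
from scipy.linalg import expm

# Lattice Hamiltonian for i d(psi_n)/dt = c*psi_{n+1} - 2c*psi_n + c*psi_{n-1}:
# hopping c on the off-diagonals, -2c on the diagonal. H is Hermitian.
N, c, t = 101, 1.0, 2.0
H = np.diag([-2 * c] * N) + np.diag([c] * (N - 1), 1) + np.diag([c] * (N - 1), -1)

psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0                 # particle starts on one site
psi_t = expm(-1j * H * t) @ psi0   # exact unitary evolution U = exp(-iHt)

print(np.sum(np.abs(psi_t)**2))   # 1.0 -> the norm (total probability) is conserved
```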

If there is an additional term in the H matrix which is an extra phase rotation which varies from point to point, the continuum limit is the Schrödinger equation with a potential energy:

i{ \partial \psi \over \partial t} = - {\partial^2 \psi \over \partial x^2} + V(x) \psi

These equations describe the motion of a single particle in non-relativistic quantum mechanics.

Quantum mechanics in imaginary time

The analogy between quantum mechanics and probability is very strong, so that there are many mathematical links between them. In a statistical system in discrete time, t = 1, 2, 3, …, described by a transition matrix for one time step K_{m→n}, the probability to go between two points after a finite number of time steps can be represented as a sum over all paths of the probability of taking each path:

K_{x\rightarrow y}(T) = \sum_{x(t)} \prod_t K_{x(t)x(t+1)}  \,

where the sum extends over all paths x(t) with the property that x(0) = x and x(T) = y. The analogous expression in quantum mechanics is the path integral.

A generic transition matrix in probability has a stationary distribution, which is the eventual probability of being found at any point no matter what the starting point. If there is a nonzero probability for any two paths to reach the same point at the same time, this stationary distribution does not depend on the initial conditions. In probability theory, the stochastic matrix obeys detailed balance when the stationary distribution ρ_n has the property:

\rho_n K_{n\rightarrow m} = \rho_m K_{m\rightarrow n} \,

Detailed balance says that the total probability of going from m to n in the stationary distribution, which is the probability of starting at m (ρ_m) times the probability of hopping from m to n, is equal to the probability of going from n to m, so that the total back-and-forth flow of probability in equilibrium is zero along any hop. The condition is automatically satisfied when n = m, so it has the same form when written as a condition for the transition-probability R matrix:

\rho_n R_{n\rightarrow m} = \rho_m R_{m\rightarrow n} \,

When the R matrix obeys detailed balance, the scale of the probabilities can be redefined using the stationary distribution so that they no longer sum to 1:

p'_n = \sqrt{\rho_n}\;p_n \,

In the new coordinates, the R matrix is rescaled as follows:

\sqrt{\rho_n} R_{n\rightarrow m} {1\over \sqrt{\rho_m}} = H_{nm}  \,

and H is symmetric

H_{nm} = H_{mn} \,

This matrix H defines a quantum mechanical system:

i{d \over dt} \psi_n = \sum H_{nm} \psi_m \,

whose Hamiltonian has the same eigenvalues as those of the R matrix of the statistical system. The eigenvectors are the same too, except expressed in the rescaled basis. The stationary distribution of the statistical system is the ground state of the Hamiltonian and it has energy exactly zero, while all the other energies are positive. If H is exponentiated to find the U matrix:

U(t) = e^{-iHt} \,

and t is allowed to take on complex values, the K’ matrix is found by taking time imaginary.

K'(t) = e^{-Ht} \,
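
A minimal numerical sketch of this Wick rotation, using an invented two-state Hamiltonian: real time gives a unitary U, while imaginary time damps everything except the ground state:

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, -0.5],
              [-0.5, 1.0]])               # toy real symmetric Hamiltonian

U = expm(-1j * H * 1.0)                   # real time: unitary evolution
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True

K = expm(-H * 10.0)                       # imaginary time: exp(-Ht)
v = K @ np.array([1.0, 0.0])              # start in an arbitrary state
print(v / np.linalg.norm(v))              # ~(0.707, 0.707): the ground state of H
```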

For quantum systems which are invariant under time reversal the Hamiltonian can be made real and symmetric, so that the action of time-reversal on the wave-function is just complex conjugation. If such a Hamiltonian has a unique lowest energy state with a positive real wave-function, as it often does for physical reasons, it is connected to a stochastic system in imaginary time. This relationship between stochastic systems and quantum systems sheds much light on supersymmetry.

Formal interpretation

Applying the superposition principle to a quantum mechanical particle, the configurations of the particle are all positions, so the superpositions make a complex wave in space. The coefficients of the linear superposition are a wave which describes the particle as best as is possible, and whose amplitude interferes according to the Huygens principle.

For any physical quantity in quantum mechanics, there is a list of all the states where the quantity has some value. These states are necessarily perpendicular to each other using the Euclidean notion of perpendicularity which comes from sums-of-squares length, except that they also must not be i multiples of each other. This list of perpendicular states has an associated value which is the value of the physical quantity. The superposition principle guarantees that any state can be written as a combination of states of this form with complex coefficients.
Write each state with the value q of the physical quantity as a vector in some basis, ψ^q_n: a list of numbers, one at each value of n, for the vector which has value q for the physical quantity. Now form the outer product of the vectors by multiplying all the vector components, and add them with coefficients to make the matrix:

A_{nm} = \sum_q q \psi^{*q}_n \psi^q_m

where the sum extends over all possible values of q. This matrix is necessarily Hermitian because it is formed from the orthogonal states, and it has eigenvalues q. The matrix A is called the observable associated to the physical quantity. It has the property that the eigenvalues and eigenvectors determine the physical quantity and the states which have definite values for this quantity.
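
A minimal sketch of this construction for a two-state system (the states and values are invented for illustration): the resulting matrix is Hermitian with the prescribed eigenvalues:

```python
import numpy as np

# A_nm = sum_q q * conj(psi^q_n) * psi^q_m, built from an orthonormal set.
states = {+1.0: np.array([1, 1]) / np.sqrt(2),    # state with value q = +1
          -1.0: np.array([1, -1]) / np.sqrt(2)}   # state with value q = -1

A = sum(q * np.outer(np.conj(v), v) for q, v in states.items())
print(A)                       # [[0, 1], [1, 0]]: the Pauli-x matrix here
print(np.linalg.eigvalsh(A))   # eigenvalues -1, +1, as the construction promises
```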

Every physical quantity has a Hermitian linear operator associated to it, and the states where the value of this physical quantity is definite are the eigenstates of this linear operator. The linear combination of two or more eigenstates results in quantum superposition of two or more values of the quantity. If the quantity is measured, the value of the physical quantity will be random, with a probability equal to the square of the coefficient of the superposition in the linear combination. Immediately after the measurement, the state will be given by the eigenvector corresponding to the measured eigenvalue.

It is natural to ask why “real” (macroscopic, Newtonian) objects and events do not seem to display quantum mechanical features such as superposition. In 1935, Erwin Schrödinger devised a well-known thought experiment, now known as Schrödinger’s cat, which highlighted the dissonance between quantum mechanics and Newtonian physics, where only one configuration occurs, although a configuration for a particle in Newtonian physics specifies both position and momentum.
In fact, quantum superposition results in many directly observable effects, such as interference peaks from an electron wave in a double-slit experiment. The superpositions, however, persist at all scales, absent a mechanism for removing them. This mechanism can be philosophical as in the Copenhagen interpretation, or physical.

Recent research indicates that chlorophyll within plants appears to exploit the feature of quantum superposition to achieve greater efficiency in transporting energy, allowing pigment proteins to be spaced further apart than would otherwise be possible.[1][2]

If the operators corresponding to two observables do not commute, they have no simultaneous eigenstates and they obey an uncertainty principle. A state where one observable has a definite value corresponds to a superposition of many states for the other observable.

References

  1. ^ Collini, Elisabetta; Cathy Y. Wong, Krystyna E. Wilk, Paul M. G. Curmi, Paul Brumer & Gregory D. Scholes (4 February 2010). “Coherently wired light-harvesting in photosynthetic marine algae at ambient temperature”. Nature 463 (7281): 644–647. http://www.nature.com/nature/journal/v463/n7281/full/nature08811.html
  2. ^ Moyer, Michael (September 2009). “Quantum Entanglement, Photosynthesis and Better Solar Cells”. Scientific American. http://www.scientificamerican.com/article.cfm?id=quantum-entanglement-and-photo. Retrieved 12 May 2010.

Time Dilation

Time dilation is a phenomenon (or two phenomena, as mentioned below) described by the theory of relativity. It can be illustrated by supposing that two observers are in motion relative to each other, or differently situated with regard to nearby gravitational masses. They each carry a clock of identical construction and function. The point of view of each observer will then generally be that the other observer’s clock is in error (has changed its rate).

Both causes (distance to gravitational mass and relative speed) can operate together.

Overview

Time dilation can arise from:

  1. the relative velocity of motion between two observers, or
  2. the difference in their distance from a gravitational mass.

Relative velocity time dilation

When two observers are in relative uniform motion and far away from any gravitational mass, the point of view of each will be that the other’s (moving) clock is ticking at a slower rate than the local clock. The faster the relative velocity, the greater the magnitude of time dilation. This case is sometimes called special relativistic time dilation. It is often interpreted as time “slowing down” for the other (moving) clock. But that is only true from the physical point of view of the local observer, and of others at relative rest (i.e. in the local observer’s frame of reference). The point of view of the other observer will be that again the local clock (this time the other clock) is correct and it is the distant moving one that is slow. From a local perspective, time registered by clocks that are at rest with respect to the local frame of reference (and far from any gravitational mass) always appears to pass at the same rate.[1]

Gravitational time dilation

There is another case of time dilation, where both observers are differently situated in their distance from a significant gravitational mass, such as (for terrestrial observers) the Earth or the Sun. One may suppose for simplicity that the observers are at relative rest (which is not the case for two observers both rotating with the Earth, an extra factor described below). In this simplified case, the general theory of relativity describes how, for both observers, the clock that is closer to the gravitational mass, i.e. deeper in its “gravity well”, appears to go slower than the clock that is more distant from the mass (or higher in altitude away from the center of the gravitational mass). That does not mean that the two observers fully agree: each still takes the local clock to be correct; the observer more distant from the mass (higher in altitude) measures the other clock (closer to the mass, lower in altitude) to be slower than the local correct rate, and the observer situated closer to the mass (lower in altitude) measures the other clock (farther from the mass, higher in altitude) to be faster than the local correct rate. They agree at least that the clock nearer the mass is slower in rate, and on the ratio of the difference.
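
For a clock at rest at radius r from a non-rotating spherical mass M, the standard general-relativistic relation between its proper time t₀ and the coordinate time t_f of a far-away observer is (a textbook result, added here for concreteness; it is not derived in the text):

t_0 = t_f \sqrt{1 - \frac{2GM}{rc^2}}

The clock deeper in the gravity well (smaller r) accumulates less time, in agreement with the description above.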

Time dilation: special vs. general theories of relativity

In Albert Einstein‘s theories of relativity, time dilation in these two circumstances can be summarized:

  • In special relativity (or, hypothetically far from all gravitational mass), clocks that are moving with respect to an inertial system of observation are measured to be running slower. This effect is described precisely by the Lorentz transformation.

Thus, in special relativity, the time dilation effect is reciprocal: as observed from the point of view of either of two clocks which are in motion with respect to each other, it will be the other clock that is time dilated. (This presumes that the relative motion of both parties is uniform; that is, they do not accelerate with respect to one another during the course of the observations.)

In contrast, gravitational time dilation (as treated in general relativity) is not reciprocal: an observer at the top of a tower will observe that clocks at ground level tick more slowly, and observers on the ground will agree about that, i.e. about the direction and the ratio of the difference. There is not full agreement: all the observers make their own local clocks out to be correct, but the direction and ratio of gravitational time dilation are agreed by all observers, independent of their altitude.

Simple inference of time dilation due to relative velocity

Observer at rest sees time 2L/c.

Observer moving parallel relative to setup, sees longer path, time > 2L/c, same speed c.

Time dilation can be inferred from the observed fact of the constancy of the speed of light in all reference frames.[2][3][4][5]

This constancy of the speed of light means, counter to intuition, that speeds of material objects and light are not additive. It is not possible to make the speed of light appear faster by approaching at speed towards the material source that is emitting light. It is not possible to make the speed of light appear slower by receding from the source at speed. From one point of view, it is the implications of this unexpected constancy that take away from constancies expected elsewhere.

Consider a simple clock consisting of two mirrors A and B, between which a light pulse is bouncing. The separation of the mirrors is L and the clock ticks once each time it hits a given mirror.
In the frame where the clock is at rest (diagram at right), the light pulse traces out a path of length 2L and the period of the clock is 2L divided by the speed of light:

\Delta t = \frac{2 L}{c}.

From the frame of reference of a moving observer traveling at the speed v (diagram at lower right), the light pulse traces out a longer, angled path. The second postulate of special relativity states that the speed of light is constant in all frames, which implies a lengthening of the period of this clock from the moving observer’s perspective. That is to say, in a frame moving relative to the clock, the clock appears to be running more slowly. Straightforward application of the Pythagorean theorem leads to the well-known prediction of special relativity:

The total time for the light pulse to trace its path is given by

\Delta t' = \frac{2 D}{c}.

The length of the half path can be calculated as a function of known quantities as

D = \sqrt{\left (\frac{1}{2}v \Delta t'\right )^2+L^2}.

Substituting D from this equation into the previous one and solving for Δt′ gives:

\Delta t' = \frac{2L/c}{\sqrt{1-v^2/c^2}}

and thus, with the definition of Δt:

\Delta t' = \frac{\Delta t}{\sqrt{1-v^2/c^2}}

which expresses the fact that for the moving observer the period of the clock is longer than in the frame of the clock itself.
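
As a numerical check, the light-clock result can be evaluated directly. The following sketch (plain Python; the one-meter mirror separation and the speed 0.866c are arbitrary illustrative choices) computes both periods and verifies the moving-frame period against the angled-path geometry used above:

    import math

    c = 299_792_458.0   # speed of light (m/s)
    L = 1.0             # mirror separation (m), arbitrary choice
    v = 0.866 * c       # speed of the clock relative to the observer

    dt = 2 * L / c      # period in the clock's rest frame

    # Period in the moving frame: Delta t' = Delta t / sqrt(1 - v^2/c^2)
    dt_moving = dt / math.sqrt(1 - v**2 / c**2)

    # Geometric cross-check: the half path D = sqrt((v dt'/2)^2 + L^2)
    # must be covered at speed c in half the moving-frame period.
    D = math.sqrt((v * dt_moving / 2) ** 2 + L**2)
    assert abs(2 * D / c - dt_moving) < 1e-18

    print(f"rest-frame period:   {dt:.3e} s")
    print(f"moving-frame period: {dt_moving:.3e} s (longer by {dt_moving/dt:.3f})")

At 0.866c the factor comes out as 2: the moving observer measures the clock's period to be twice its rest-frame value.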

Time dilation due to relative velocity symmetric between observers

Common sense would dictate that if time passage has slowed for a moving object, the moving object would observe the external world to be correspondingly “sped up”. Counterintuitively, special relativity predicts the opposite.

A similar oddity occurs in everyday life. If Sam sees Abigail at a distance, she appears small to him, and at the same time Sam appears small to Abigail. Being very familiar with the effects of perspective, we see no mystery or hint of paradox in this situation.[6]

One is accustomed to the notion of relativity with respect to distance: the distance from Los Angeles to New York is by convention the same as the distance from New York to Los Angeles. On the other hand, when speeds are considered, one thinks of an object as “actually” moving, overlooking that its motion is always relative to something else — to the stars, the ground or to oneself. If one object is moving with respect to another, the latter is moving with respect to the former and with equal relative speed.

In the special theory of relativity, a moving clock is found to be ticking slowly with respect to the observer’s clock. If Sam and Abigail are on different trains in near-lightspeed relative motion, Sam measures (by all methods of measurement) clocks on Abigail’s train to be running slowly and similarly, Abigail measures clocks on Sam’s train to be running slowly.

Note that in all such attempts to establish “synchronization” within the reference system, the question of whether something happening at one location is in fact happening simultaneously with something happening elsewhere, is of key importance. Calculations are ultimately based on determining which events are simultaneous. Furthermore, establishing simultaneity of events separated in space necessarily requires transmission of information between locations, which by itself is an indication that the speed of light will enter the determination of simultaneity.

It is a natural and legitimate question to ask how, in detail, special relativity can be self-consistent if clock A is time-dilated with respect to clock B and clock B is also time-dilated with respect to clock A. It is by challenging the assumptions built into the common notion of simultaneity that logical consistency can be restored. Simultaneity is a relationship between an observer in a particular frame of reference and a set of events. By analogy, left and right are accepted to vary with the position of the observer, because they apply to a relationship. In a similar vein, Plato explained that up and down describe a relationship to the earth and one would not fall off at the antipodes.

Within the framework of the theory and its terminology there is a relativity of simultaneity that affects how the specified events are aligned with respect to each other by observers in relative motion. Because the pairs of putatively simultaneous moments are identified differently by different observers (as illustrated in the twin paradox article), each can treat the other clock as being the slow one without relativity being self-contradictory. This can be explained in many ways, some of which follow.

Temporal coordinate systems and clock synchronization

In relativity, temporal coordinate systems are set up using a procedure for synchronizing clocks, discussed by Poincaré (1900) in relation to Lorentz’s local time (see relativity of simultaneity). The procedure is now usually called Einstein synchronization, since it appeared in Einstein’s 1905 paper.

An observer with a clock sends a light signal out at time t1 according to his clock. At a distant event, that light signal is reflected back to the observer, arriving at time t2 according to his clock. Since the light travels the same path at the same rate going out and coming back, the coordinate time the observer assigns to the distant reflection event is tE = (t1 + t2) / 2. In this way, a single observer’s clock can be used to define temporal coordinates which are good anywhere in the universe.
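
A minimal sketch of this bookkeeping follows (plain Python; the clock readings are made-up values). It also computes the distance that the same radar-style procedure assigns to the event, x = c(t2 − t1)/2, the standard companion to the time assignment:

    C = 299_792_458.0  # speed of light (m/s)

    def einstein_coordinates(t1, t2):
        """Coordinate time and distance of a reflection event, given the
        emission time t1 and echo-reception time t2 on a single clock."""
        t_event = (t1 + t2) / 2.0      # Einstein synchronization
        x_event = C * (t2 - t1) / 2.0  # light spends (t2 - t1)/2 on each leg
        return t_event, x_event

    # Example: signal sent at t1 = 0 s, echo received at t2 = 2 s
    t_e, x_e = einstein_coordinates(0.0, 2.0)
    print(t_e, x_e)  # 1.0 s, ~3.0e8 m: the event is one light-second away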

Symmetric time dilation occurs with respect to temporal coordinate systems set up in this manner. It is an effect where another clock is being viewed as running slowly by an observer. Observers do not consider their own clock time to be time-dilated, but may find that it is observed to be time-dilated in another coordinate system.

Overview of formulae

Time dilation due to relative velocity

Lorentz factor as a function of speed (in natural units where c=1). Notice that for small speeds (less than 0.1), γ is approximately 1

The formula for determining time dilation in special relativity is:

 \Delta t' = \gamma \, \Delta t = \frac{\Delta t}{\sqrt{1-v^2/c^2}} \,

where:

Δt is the time interval between two co-local events (i.e. happening at the same place) for an observer in some inertial frame (e.g. ticks on his clock), known as the proper time;
Δt′ is the time interval between those same events, as measured by another observer, inertially moving with velocity v with respect to the former observer;
v is the relative velocity between the observer and the moving clock;
c is the speed of light; and

 \gamma = \frac{1}{\sqrt{1-v^2/c^2}} \,

is the Lorentz factor. Thus the duration of the clock cycle of a moving clock is found to be increased: it is measured to be “running slow”. In ordinary life, where v ≪ c, even considering space travel, such variances are not great enough to produce easily detectable time dilation effects, and these vanishingly small effects can be safely ignored. It is only when an object approaches speeds on the order of 30,000 km/s (1/10 the speed of light) that time dilation becomes important.
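
To see how sharply the effect switches on, the sketch below tabulates γ for a few speeds (plain Python; the chosen speeds are merely illustrative). Everyday and even orbital velocities give γ indistinguishable from 1, while γ grows rapidly as v approaches c:

    import math

    c = 299_792_458.0  # speed of light (m/s)

    def lorentz_gamma(v):
        """Lorentz factor for relative speed v (m/s)."""
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    for label, v in [("airliner, ~250 m/s", 250.0),
                     ("low Earth orbit, ~7.8 km/s", 7.8e3),
                     ("0.1 c", 0.1 * c),
                     ("0.866 c", 0.866 * c),
                     ("0.99 c", 0.99 * c)]:
        print(f"{label:28s} gamma = {lorentz_gamma(v):.12f}")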

Time dilation by the Lorentz factor was predicted by Joseph Larmor (1897), at least for electrons orbiting a nucleus. Thus “… individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio \sqrt{1 - v^2/c^2}” (Larmor 1897). Time dilation of magnitude corresponding to this (Lorentz) factor has been experimentally confirmed, as described below.

Time dilation due to gravitation and motion together

Astronomical time scales and the GPS system represent significant practical applications, presenting problems that call for consideration of the combined effects of mass and motion in producing time dilation.
Relativistic time dilation effects, for the solar system and the Earth, have been evaluated from the starting point of an approximation to the Schwarzschild solution to the Einstein field equations. A timelike interval dtE in this metric can be approximated, when expressed in rectangular coordinates and when truncated of higher powers in 1/c2, in the form:[7][8]

 dt_E^2 = \left( 1-\frac{2GM_i}{r_i c^2} \right) dt_c^2 - \frac{dx^2+dy^2+dz^2}{c^2}, \,
(1)

where:

dtE (expressed as a time-like interval) is a small increment forming part of an interval in the proper time tE (an interval that could be recorded on an atomic clock);
dtc is a small increment in the timelike coordinate tc (“coordinate time“) of the clock’s position in the chosen reference frame;
dx, dy and dz are small increments in three orthogonal space-like coordinates x, y, z of the clock’s position in the chosen reference frame; and
GMi/ri represents a sum, to be designated U, of gravitational potentials due to the masses in the neighborhood, based on their distances ri from the clock. This sum of the GMi/ri is evaluated approximately, as a sum of Newtonian gravitational potentials (plus any tidal potentials considered), and is represented below as U (using the positive astronomical sign convention for gravitational potentials). The scope of the approximation may be extended to a case where U further includes effects of external masses other than the Mi, in the form of tidal gravitational potentials that prevail (due to the external masses) in a suitably small region of space around a point of the reference frame located somewhere in a gravity well due to those external masses, where the size of ‘suitably small’ remains to be investigated.[9]

From this, after putting the velocity of the clock (in the coordinates of the chosen reference frame) as

v^2=\frac{dx^2+dy^2+dz^2}{dt_c^2}, \,
(2)

(then taking the square root and truncating after binomial expansion, neglecting terms beyond the first power in 1/c2), a relation between the rate of the proper time and the rate of the coordinate time can be obtained as the differential equation[10]

\frac{dt_E}{dt_c}= 1-\frac{U}{c^2}-\frac{v^2}{2c^2}. \,
(3)

Equation (3) represents combined time dilations due to mass and motion, approximated to the first order in powers of 1/c2. The approximation can be applied to a number of the weak-field situations found around the Earth and elsewhere in the solar system. It can be thought of as relating the rate of proper time tE that can be measured by a clock, with the rate of a coordinate time tc.

In particular, for explanatory purposes, the time-dilation equation (3) provides a way of conceiving coordinate time, by showing that the rate of the clock would be exactly equal to the rate of the coordinate time if this “coordinate clock” could be situated

(a) hypothetically outside all relevant ‘gravity wells’, e.g. remote from all gravitational masses Mi (so that U=0), and also
(b) at rest in relation to the chosen system of coordinates (so that v=0).

Equation (3) has been developed and integrated for the case where the reference frame is the solar system barycentric (‘ssb’) reference frame, to show the (time-varying) time dilation between the ssb coordinate time and local time at the Earth’s surface: the main effects found included a mean time dilation of about 0.49 second per year (slower at the Earth’s surface than for the ssb coordinate time), plus periodic modulation terms of which the largest has an annual period and an amplitude of about 1.66 milliseconds.[11][12]

Equation (3) has also been developed and integrated for the case of clocks at or near the Earth’s surface. For clocks fixed to the rotating Earth’s surface at mean sea level, regarded as a surface of the geoid, the sum (U + v2/2) is a very nearly constant geopotential, and it decreases with increasing height above sea level approximately as the product of the change in height and the gradient of the geopotential. This has been evaluated as a fractional increase in clock rate of about 1.1×10−13 per kilometer of height above sea level, reflecting the decrease in combined time dilation with increasing altitude. The value of dtE/dtc at height is to be compared with the corresponding value at mean sea level.[13] (Both values are slightly below 1, the value at height being a little larger, i.e. closer to 1, than the value at sea level.)
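
This per-kilometer figure is essentially the surface gravity divided by the square of the speed of light, which can be checked in a few lines (plain Python; the value of g is the nominal 9.81 m/s2, an assumption made for the check):

    c = 299_792_458.0  # speed of light (m/s)
    g = 9.81           # nominal surface gravity (m/s^2), assumed value

    # Fractional clock-rate change per meter of height is approximately g/c^2;
    # multiply by 1000 to express it per kilometer.
    rate_per_km = g / c**2 * 1000.0
    print(f"{rate_per_km:.2e} per km")  # ~1.1e-13, matching the quoted value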

A fuller development of equation (3) for the near-Earth situation has been used to evaluate the combined time dilations relative to the Earth’s surface experienced along the trajectories of satellites of the GPS global positioning system. The resulting values (in this case they are relativistic increases in the rate of the satellite-borne clocks, by about 38 microseconds per day) form the basis for adjustments essential for the functioning of the system.[14]
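
The quoted GPS figure can be reproduced from equation (3) by differencing the rate 1 − U/c2 − v2/(2c2) between a satellite clock and a clock on the geoid. The sketch below (plain Python) uses nominal values for Earth’s gravitational parameter, the geoid radius, the GPS orbital radius and the equatorial ground speed; all four numbers are assumptions chosen for illustration, not values taken from the text:

    import math

    c = 299_792_458.0    # speed of light (m/s)
    GM = 3.986004418e14  # Earth's gravitational parameter (m^3/s^2), nominal
    R_geoid = 6.371e6    # mean Earth radius (m), nominal
    r_gps = 2.6560e7     # GPS orbital radius (m), nominal
    v_ground = 465.0     # equatorial rotation speed (m/s), nominal

    def rate(U, v):
        """dtE/dtc from equation (3): 1 - U/c^2 - v^2/(2 c^2)."""
        return 1.0 - U / c**2 - v**2 / (2.0 * c**2)

    v_gps = math.sqrt(GM / r_gps)  # circular-orbit speed
    ground = rate(GM / R_geoid, v_ground)
    satellite = rate(GM / r_gps, v_gps)

    # The satellite clock runs fast relative to the ground by this fraction:
    excess = satellite - ground
    print(f"{excess * 86400 * 1e6:.1f} microseconds per day")  # ~38.6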

This gravitational time dilation relationship has been used in the synchronization or correlation of atomic clocks used to implement and maintain the atomic time scale TAI, where the different clocks are located at different heights above sea level, and since 1977 have had their frequencies steered to compensate for the differences of rate with height.[15]

In pulsar timing, the advance or retardation of the pulsar phase due to gravitational and motional time dilation is called the “Einstein Delay”.

Experimental confirmation

Time dilation has been tested a number of times. The routine work carried on in particle accelerators since the 1950s, such as those at CERN, is a continuously running test of the time dilation of special relativity. The specific experiments include:

Velocity time dilation tests

  • Ives and Stilwell (1938, 1941), “An experimental study of the rate of a moving clock”, in two parts. The stated purpose of these experiments was to verify the time dilation effect predicted by Larmor–Lorentz ether theory, due to motion through the ether, using Einstein’s suggestion that the Doppler effect in canal rays would provide a suitable experiment. These experiments measured the Doppler shift of the radiation emitted from canal rays, when viewed from directly in front and from directly behind. The high and low frequencies detected were not the classical values predicted,
f_\mathrm{detected} = \frac{f_\mathrm{moving}}{1 - v/c} \quad\text{and}\quad f_\mathrm{detected} = \frac{f_\mathrm{moving}}{1+v/c},
i.e. for sources whose frequencies are classically invariant, f_\mathrm{moving} = f_\mathrm{rest}, giving classical expectations f_\mathrm{rest}/(1 - v/c) and f_\mathrm{rest}/(1+v/c). The high and low frequencies of the radiation from the moving sources were instead measured as

f_\mathrm{detected} = f_\mathrm{rest}\sqrt{\left(1 + v/c\right)/\left(1 - v/c\right) } and f_\mathrm{rest}\sqrt{\left(1 - v/c\right)/\left(1 + v/c\right)}
as deduced by Einstein (1905) from the Lorentz transformation, when the source is running slow by the Lorentz factor.
  • Rossi and Hall (1941) compared the population of cosmic-ray-produced muons at the top of a mountain to that observed at sea level. Although the travel time for the muons from the top of the mountain to the base is several muon half-lives, the muon sample at the base was only moderately reduced. This is explained by the time dilation attributed to their high speed relative to the experimenters. That is to say, the muons were decaying about 10 times slower than if they were at rest with respect to the experimenters.
  • Hasselkamp, Mondry, and Scharmann[16] (1979) measured the Doppler shift from a source moving at right angles to the line of sight (the transverse Doppler shift). The most general relationship between frequencies of the radiation from the moving sources is given by:
f_\mathrm{detected} = \frac{f_\mathrm{rest}}{\gamma\left(1 - \frac{v}{c} \cos\phi\right)}
as deduced by Einstein (1905)[1], where φ is the angle between the source’s velocity and the line of sight in the observer’s frame. For \phi = 90^\circ (\cos\phi = 0\,) this reduces to f_detected = f_rest/γ. Thus there is no first-order (classical) Doppler shift at right angles, and the lower frequency of the moving source can be attributed to the time dilation effect alone.
  • In 2010 time dilation was observed at speeds of less than 10 meters per second using optical atomic clocks connected by 75 meters of optical fiber.[17]

Gravitational time dilation tests

  • Pound and Rebka in 1959 measured the very slight gravitational red shift in the frequency of light emitted at a lower height, where Earth’s gravitational field is relatively more intense. The results were within 10% of the predictions of general relativity. In 1964, Pound and Snider obtained a result in agreement to within 1%. This effect is as predicted by gravitational time dilation.
  • In 2010 gravitational time dilation was measured at the Earth’s surface with a height difference of only one meter, using optical atomic clocks.[17]

Velocity and gravitational time dilation combined-effect tests

  • Hafele and Keating, in 1971, flew caesium atomic clocks east and west around the Earth in commercial airliners, to compare the elapsed time against that of a clock that remained at the US Naval Observatory. Two opposite effects came into play. The clocks were expected to age more quickly (show a larger elapsed time) than the reference clock, since they were in a higher (weaker) gravitational potential for most of the trip (cf. Pound and Rebka). But, contrastingly, the moving clocks were also expected to age more slowly because of the speed of their travel; this velocity effect depends on the direction of flight, since in a non-rotating Earth-centered frame the eastbound clocks move faster, and the westbound clocks slower, than the clock on the ground. To within experimental error, the observed gains and losses were consistent with the differences between the predicted gravitational gains and the predicted velocity effects for each direction of travel. In 2005, the National Physical Laboratory in the United Kingdom reported their limited replication of this experiment.[18] The NPL experiment differed from the original in that the caesium clocks were sent on a shorter trip (London–Washington D.C. return), but the clocks were more accurate. The reported results are within 4% of the predictions of relativity.
  • The Global Positioning System can be considered a continuously operating experiment in both special and general relativity. The in-orbit clocks are corrected for both special and general relativistic time dilation effects as described above, so that (as observed from the Earth’s surface) they run at the same rate as clocks on the surface of the Earth. In addition, but not directly time dilation related, general relativistic correction terms are built into the model of motion that the satellites broadcast to receivers — uncorrected, these effects would result in an approximately 7-metre (23 ft) oscillation in the pseudo-ranges measured by a receiver over a cycle of 12 hours.

Muon lifetime

A comparison of muon lifetimes at different speeds is possible. In the laboratory, slow muons are produced, and in the atmosphere very fast moving muons are introduced by cosmic rays. Taking the muon lifetime at rest as the laboratory value of 2.22 μs, the lifetime of a cosmic ray produced muon traveling at 98% of the speed of light is about five times longer, in agreement with observations.[19] In this experiment the “clock” is the time taken by processes leading to muon decay, and these processes take place in the moving muon at its own “clock rate”, which is much slower than the laboratory clock.
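
The factor quoted here is just the Lorentz factor at 98% of the speed of light, as a short check shows (plain Python, reusing the 2.22 μs rest lifetime quoted above):

    import math

    beta = 0.98         # muon speed as a fraction of c
    tau_rest = 2.22e-6  # rest lifetime (s), the laboratory value quoted above

    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    tau_lab = gamma * tau_rest  # lifetime measured in the laboratory frame

    print(f"gamma = {gamma:.2f}")  # ~5.03, the "about five times" of the text
    print(f"lab-frame lifetime = {tau_lab * 1e6:.2f} microseconds")  # ~11.2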

Time dilation and space flight

Time dilation would make it possible for passengers in a fast-moving vehicle to travel further into the future while aging very little, in that their great speed slows down the rate of passage of on-board time. That is, the ship’s clock (and according to relativity, any human travelling with it) shows less elapsed time than the clocks of observers on Earth. For sufficiently high speeds the effect is dramatic. For example, one year of travel might correspond to ten years at home. Indeed, a constant 1 g acceleration would permit humans to travel as far as light has been able to travel since the big bang (some 13.7 billion light years) in one human lifetime. The space travellers could return to Earth billions of years in the future. A scenario based on this idea was presented in the novel Planet of the Apes by Pierre Boulle.

A more likely use of this effect would be to enable humans to travel to nearby stars without spending their entire lives aboard the ship. However, any such application of time dilation during interstellar travel would require the use of some new, advanced method of propulsion. Project Orion has been the only major attempt toward this idea.

Current space flight technology has fundamental theoretical limits based on the practical problem that an increasing amount of energy is required for propulsion as a craft approaches the speed of light. The likelihood of collision with small space debris and other particulate material is another practical limitation. At the velocities presently attained, however, time dilation is not a factor in space travel. Travel to regions of space-time where gravitational time dilation is taking place, such as within the gravitational field of a black hole but outside the event horizon (perhaps on a hyperbolic trajectory exiting the field), could also yield results consistent with present theory.

Time dilation at constant acceleration

In special relativity, time dilation is most simply described in circumstances where relative velocity is unchanging. Nevertheless, the Lorentz equations allow one to calculate proper time and movement in space for the simple case of a spaceship whose acceleration, relative to some referent object in uniform (i.e. constant velocity) motion, equals g throughout the period of measurement.

Let t be the time in an inertial frame subsequently called the rest frame. Let x be a spatial coordinate, and let the direction of the constant acceleration as well as the spaceship’s velocity (relative to the rest frame) be parallel to the x-axis. Assuming the spaceship’s position at time t = 0 is x = 0 and its velocity is v0, and defining the following abbreviation

\gamma_0 := \frac{1}{\sqrt{1-v_0^2/c^2}},

the following formulas hold:[20]
Position:

x(t) = \frac {c^2}{g} \left( \sqrt{1 + \frac{\left(gt + v_0\gamma_0\right)^2}{c^2}} -\gamma_0 \right).

Velocity:

v(t) =\frac{gt + v_0\gamma_0}{\sqrt{1 + \frac{ \left(gt + v_0\gamma_0\right)^2}{c^2}}}.

Proper time:

\tau(t) = \tau_0 + \int_0^t \sqrt{ 1 - \left( \frac{v(t')}{c} \right)^2 } dt'

In the case where v(0) = v0 = 0 and τ(0) = τ0 = 0 the integral can be expressed as a logarithmic function or, equivalently, as an inverse hyperbolic function:

\tau(t) = \frac{c}{g} \ln \left(  \frac{gt}{c} + \sqrt{ 1 + \left( \frac{gt}{c} \right)^2 } \right) = \frac{c}{g} \operatorname {arsinh} \left( \frac{gt}{c} \right) .
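
For a feel for the numbers, this closed-form expression can be evaluated for a ship accelerating from rest at a constant 1 g (plain Python; the ten-year coordinate-time span is an arbitrary illustrative choice):

    import math

    c = 299_792_458.0        # speed of light (m/s)
    g = 9.81                 # proper acceleration (m/s^2), nominal 1 g
    YEAR = 365.25 * 86400.0  # seconds per Julian year

    def proper_time(t):
        """Ship proper time after coordinate time t, starting from rest:
        tau = (c/g) * arsinh(g t / c)."""
        return (c / g) * math.asinh(g * t / c)

    t = 10.0 * YEAR
    print(f"{proper_time(t) / YEAR:.2f} ship-years elapse in 10 Earth-years")

About 2.93 ship-years pass for every 10 years in the rest frame under these assumptions.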

Spacetime geometry of velocity time dilation

Time dilation in transverse motion

The green dots and red dots in the animation represent spaceships. The ships of the green fleet have no velocity relative to each other, so the clocks onboard the individual ships run at the same rate relative to each other, and the fleet can set up a procedure to maintain a synchronized standard fleet time. The ships of the “red fleet” are moving with a velocity of 0.866 of the speed of light with respect to the green fleet.
The blue dots represent pulses of light. One cycle of light-pulses between two green ships takes two seconds of “green time”, one second for each leg.

As seen from the perspective of the reds, the transit time of the light pulses they exchange among each other is one second of “red time” for each leg. As seen from the perspective of the greens, the red ships’ cycle of exchanging light pulses travels a diagonal path that is two light-seconds long. (As seen from the green perspective the reds travel 1.73 (\sqrt{3}) light-seconds of distance for every two seconds of green time.)
One of the red ships emits a light pulse towards the greens every second of red time. These pulses are received by ships of the green fleet with two-second intervals as measured in green time. Not shown in the animation is that all aspects of physics are proportionally involved. The light pulses that are emitted by the reds at a particular frequency as measured in red time are received at a lower frequency as measured by the detectors of the green fleet that measure against green time, and vice versa.

The animation cycles between the green perspective and the red perspective, to emphasize the symmetry. As there is no such thing as absolute motion in relativity (as is also the case for Newtonian mechanics), both the green and the red fleet are entitled to consider themselves motionless in their own frame of reference.
Again, it is vital to understand that the results of these interactions and calculations reflect the real state of the ships as it emerges from their situation of relative motion. It is not a mere quirk of the method of measurement or communication.


References

  1. ^ For sources on special relativistic time dilation, see Albert Einstein’s own popular exposition, published in English translation (1920) as “Relativity: The Special and General Theory”, especially at “8: On the Idea of Time in Physics”, and in following sections 9–12. See also the articles Special relativity, Lorentz transformation and Relativity of simultaneity.
  2. ^ Cassidy, David C.; Holton, Gerald James; Rutherford, Floyd James (2002), Understanding Physics, Springer-Verlag New York, Inc, ISBN 0-387-98756-8, http://books.google.com/?id=rpQo7f9F1xUC&pg=PA422 , Chapter 9 §9.6, p. 422
  3. ^ Cutner, Mark Leslie (2003), Astronomy, A Physical Perspective, Cambridge University Press, ISBN 0-521-82196-7, http://books.google.com/?id=2QVmiMW0O0MC&pg=PA128 , Chapter 7 §7.2, p. 128
  4. ^ Lerner, Lawrence S. (1996), Physics for Scientists and Engineers, Volume 2, Jones and Bartlett Publishers, Inc, ISBN 0-7637-0460-1, http://books.google.com/?id=B8K_ym9rS6UC&pg=PA1051 , Chapter 38 §38.4, p. 1051,1052
  5. ^ Ellis, George F. R.; Williams, Ruth M. (2000), Flat and Curved Space-times, Second Edition, Oxford University Press Inc, New York, ISBN 0-19-850657-0, http://books.google.com/?id=Hos31wty5WIC&pg=PA28 , Chapter 3 §1.3, p. 28-29
  6. ^ Adams, Steve (1997), Relativity: an introduction to space-time physics, CRC Press, p. 54, ISBN 0-748-40621-2, http://books.google.com/?id=1RV0AysEN4oC , Section 2.5, page 54
  7. ^ See T D Moyer (1981a), “Transformation from proper time on Earth to coordinate time in solar system barycentric space-time frame of reference”, Celestial Mechanics 23 (1981) pages 33-56, equations 2 and 3 at pages 35-6 combined here and divided throughout by c2.
  8. ^ A version of the same relationship can also be seen in Neil Ashby (2002), “Relativity and the Global Positioning System”, Physics Today (May 2002), at equation (2).
  9. ^ Such tidal effects can also be seen included in some of the relations shown in Neil Ashby (2002), cited above.
  10. ^ (This is equation (6) at page 36 of T D Moyer (1981a), cited above.)
  11. ^ G M Clemence & V Szebehely, “Annual variation of an atomic clock”, Astronomical Journal, Vol.72 (1967), p.1324-6.
  12. ^ T D Moyer (1981b), “Transformation from proper time on Earth to coordinate time in solar system barycentric space-time frame of reference” (Part 2), Celestial Mechanics 23 (1981) pages 57-68.
  13. ^ J B Thomas (1975), “Reformulation of the relativistic conversion between coordinate time and atomic time”, Astronomical Journal, vol.80, May 1975, p.405-411.
  14. ^ See Neil Ashby (2002), cited above; also in article Global Positioning System the section Special and general relativity and further sources cited there.
  15. ^ B Guinot (2000), “History of the Bureau International de l’Heure”, ASP Conference Proceedings vol.208 (2000), pp.175-184, at p.182.
  16. ^ Hasselkamp, D.; Mondry, E.; Scharmann, A. (1979). “Direct Observation of the Transversal Doppler-Shift”. Z. Physik A 289, 151–155. http://www.springerlink.com/content/kt5505r2p2r22411/. Retrieved 2009-10-18. 
  17. ^ a b Chou, C. W.; Hume, D. B.; Rosenband, T.; Wineland, D. J. (2010). “Optical Clocks and Relativity”. Science 329: 1630. doi:10.1126/science.1192720.  edit
  18. ^ http://www.npl.co.uk/upload/pdf/metromnia_issue18.pdf
  19. ^ JV Stewart (2001), Intermediate electromagnetic theory, Singapore: World Scientific, p. 705, ISBN 9810244703, http://www.google.com/search?ie=UTF-8&hl=nl&rlz=1T4GZAZ_nlBE306BE306&q=relativity%20%22meson%20lifetime%22%202.22&tbo=u&tbs=bks:1&source=og&sa=N&tab=gp 
  20. ^ Iorio, Lorenzo (27-Jun-2004). “An analytical treatment of the Clock Paradox in the framework of the Special and General Theories of Relativity”. http://arxiv.org/abs/physics/0405038.  (Equations (3), (4), (6), (9) on pages 5-6)
  • Callender, Craig & Edney, Ralph (2001), Introducing Time, Icon, ISBN 1-84046-592-1 
  • Einstein, A. (1905) “Zur Elektrodynamik bewegter Körper”, Annalen der Physik, 17, 891. English translation: On the electrodynamics of moving bodies
  • Einstein, A. (1907) “Über eine Möglichkeit einer Prüfung des Relativitätsprinzips”, Annalen der Physik.
  • Hasselkamp, D., Mondry, E. and Scharmann, A. (1979) “Direct Observation of the Transversal Doppler-Shift”, Z. Physik A 289, 151–155
  • Ives, H. E. and Stilwell, G. R. (1938), “An experimental study of the rate of a moving clock”, J. Opt. Soc. Am, 28, 215–226
  • Ives, H. E. and Stilwell, G. R. (1941), “An experimental study of the rate of a moving clock. II”, J. Opt. Soc. Am, 31, 369–374
  • Joos, G. (1959) Lehrbuch der Theoretischen Physik, 11. Auflage, Leipzig; Zweites Buch, Sechstes Kapitel, § 4: Bewegte Bezugssysteme in der Akustik. Der Doppler-Effekt.
  • Larmor, J. (1897) “On a dynamical theory of the electric and luminiferous medium”, Phil. Trans. Roy. Soc. 190, 205–300 (third and last in a series of papers with the same name).
  • Poincaré, H. (1900) “La theorie de Lorentz et la Principe de Reaction”, Archives Neerlandaies, V, 253–78.
  • Reinhardt, S. et al. (2007), “Test of relativistic time dilation with fast optical atomic clocks at different velocities”, Nature Physics 3, 861–864
  • Rossi, B and Hall, D. B. Phys. Rev., 59, 223 (1941).
  • NIST Two way time transfer for satellites
  • Voigt, W. (1887) “Ueber das Doppler’sche princip”, Nachrichten von der Königlicher Gesellschaft der Wissenschaften zu Göttingen, 2, 41–51.


Posted in Grace, Outside Time, Relativistic Muons, Time Dilation, Time Variable Measure

Muon

The Moon‘s cosmic ray shadow, as seen in secondary muons generated by cosmic rays in the atmosphere, and detected 700 meters below ground, at the Soudan II detector.

Composition: Elementary particle
Particle statistics: Fermionic
Group: Lepton
Generation: Second
Interaction: Gravity, Electromagnetic,
Weak
Symbol(s): μ
Antiparticle: Antimuon (μ+)
Theorized:
Discovered: Carl D. Anderson (1936)
Mass: 105.65836668(38) MeV/c2
Mean lifetime: 2.197034(21)×10−6 s[1]
Electric charge: −1 e
Color charge: None
Spin: ½

The muon (from the Greek letter mu (μ) used to represent it) is an elementary particle similar to the electron, with a negative electric charge and a spin of ½. Together with the electron, the tau, and the three neutrinos, it is classified as a lepton. It is an unstable subatomic particle with the second longest mean lifetime (2.2 µs), exceeded only by that of the free neutron (~15 minutes). Like all elementary particles, the muon has a corresponding antiparticle of opposite charge but equal mass and spin: the antimuon (also called a positive muon). Muons are denoted by μ and antimuons by μ+. Muons were previously called mu mesons, but are not classified as mesons by modern particle physicists (see History).

Muons have a mass of 105.7 MeV/c2, which is about 200 times the mass of an electron. Since the muon’s interactions are very similar to those of the electron, a muon can be thought of as a much heavier version of the electron. Due to their greater mass, muons are not as sharply accelerated when they encounter electromagnetic fields, and do not emit as much bremsstrahlung radiation. Thus muons of a given energy penetrate matter far more deeply than electrons, since the deceleration of electrons and muons is primarily due to energy loss by this mechanism. So-called “secondary muons”, generated by cosmic rays hitting the atmosphere, can penetrate to the Earth’s surface and into deep mines.

As with the case of the other charged leptons, the muon has an associated muon neutrino. Muon neutrinos are denoted by νμ.


History

Muons were discovered by Carl D. Anderson and Seth Neddermeyer at Caltech in 1936, while studying cosmic radiation. Anderson had noticed particles that curved differently from electrons and other known particles when passed through a magnetic field. They were negatively charged but curved less sharply than electrons, but more sharply than protons, for particles of the same velocity. It was assumed that the magnitude of their negative electric charge was equal to that of the electron, and so to account for the difference in curvature, it was supposed that their mass was greater than an electron but smaller than a proton. Thus Anderson initially called the new particle a mesotron, adopting the prefix meso- from the Greek word for “mid-“. Shortly thereafter, additional particles of intermediate mass were discovered, and the more general term meson was adopted to refer to any such particle. To differentiate between different types of mesons, the mesotron was in 1947 renamed the mu meson (the Greek letter μ (mu) corresponds to m).
It was soon found that the mu meson significantly differed from other mesons: for example, its decay products included a neutrino and an antineutrino, rather than just one or the other, as was observed with other mesons. Other mesons were eventually understood to be hadrons—that is, particles made of quarks—and thus subject to the residual strong force. In the quark model, a meson is composed of exactly two quarks (a quark and antiquark) unlike baryons, which are composed of three quarks. Mu mesons, however, were found to be fundamental particles (leptons) like electrons, with no quark structure. Thus, mu mesons were not mesons at all (in the new sense and use of the term meson), and so the term mu meson was abandoned, and replaced with the modern term muon.

Another particle (the pion, with which the muon was initially confused) had been predicted by theorist Hideki Yukawa:[2]

“It seems natural to modify the theory of Heisenberg and Fermi in the following way. The transition of a heavy particle from neutron state to proton state is not always accompanied by the emission of light particles. The transition is sometimes taken up by another heavy particle.”

The existence of the muon was confirmed in 1937 by J. C. Street and E. C. Stevenson’s cloud chamber experiment.[3] The discovery of the muon seemed so incongruous and surprising at the time that Nobel laureate I. I. Rabi famously quipped, “Who ordered that?”

In a 1941 experiment on Mount Washington in New Hampshire, muons were used to observe the time dilation predicted by special relativity for the first time.[4]

Muon sources

Since the production of muons requires an available center-of-momentum frame energy of 105.7 MeV, neither ordinary radioactive decay events nor nuclear fission and fusion events (such as those occurring in nuclear reactors and nuclear weapons) are energetic enough to produce muons. Only nuclear fission produces single-nuclear-event energies in this range, but such events still do not produce muons, since the production of a single muon would violate the conservation of quantum numbers (see under “muon decay” below).

On Earth, most naturally occurring muons are created by cosmic rays, which consist mostly of protons, many arriving from deep space at very high energy.[5]

About 10,000 muons reach every square meter of the earth’s surface a minute; these charged particles form as by-products of cosmic rays colliding with molecules in the upper atmosphere. Travelling at relativistic speeds, muons can penetrate tens of meters into rocks and other matter before attenuating as a result of absorption or deflection by other atoms.

When a cosmic ray proton impacts atomic nuclei of air atoms in the upper atmosphere, pions are created. These decay within a relatively short distance (meters) into muons (the pion’s preferred decay product), and neutrinos. The muons from these high energy cosmic rays generally continue in about the same direction as the original proton, at a very high velocity. Although their lifetime without relativistic effects would allow a half-survival distance of only about 0.66 km (660 meters) at most (as seen from Earth) the time dilation effect of special relativity (from the viewpoint of the Earth) allows cosmic ray secondary muons to survive the flight to the Earth’s surface, since in the Earth frame, the muons have a longer half-life due to their velocity. From the viewpoint (inertial frame) of the muon, on the other hand, it is the length contraction effect of special relativity which allows this penetration, since in the muon frame, its lifetime is unaffected, but the distance through the atmosphere and earth appears far shorter than these distances in the Earth rest-frame. Both are equally valid ways of explaining the fast muon’s unusual survival over distances.
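
The survival argument can be made quantitative with exponential decay. The sketch below (plain Python) uses an illustrative production altitude of 15 km and a speed of 0.995c; both values are assumptions made for the example, not figures from the text:

    import math

    c = 299_792_458.0  # speed of light (m/s)
    tau = 2.197e-6     # muon mean lifetime at rest (s)
    h = 15_000.0       # production altitude (m), assumed for illustration
    beta = 0.995       # muon speed as a fraction of c, assumed

    t_earth = h / (beta * c)  # Earth-frame flight time to the surface
    gamma = 1.0 / math.sqrt(1.0 - beta**2)

    naive = math.exp(-t_earth / tau)              # survival without dilation
    dilated = math.exp(-t_earth / (gamma * tau))  # survival with dilation

    print(f"without time dilation: {naive:.1e}")    # essentially none survive
    print(f"with time dilation:    {dilated:.2f}")  # an appreciable fraction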

Since muons are unusually penetrative of ordinary matter, like neutrinos, they are also detectable deep underground (700 meters at the Soudan II detector) and underwater, where they form a major part of the natural background ionizing radiation. Like cosmic rays, as noted, this secondary muon radiation is also directional.

The same nuclear reaction described above (i.e. hadron-hadron impacts to produce pion beams, which then quickly decay to muon beams over short distances) is used by particle physicists to produce muon beams, such as the beam used for the muon g − 2 experiment.[6]

Muon decay

The most common decay of the muon

Muons are unstable elementary particles and are heavier than electrons and neutrinos but lighter than all other matter particles. They decay via the weak interaction. Because lepton numbers must be conserved, one of the product neutrinos of muon decay must be a muon-type neutrino and the other an electron-type antineutrino (antimuon decay produces the corresponding antiparticles, as detailed below). Because charge must be conserved, one of the products of muon decay is always an electron of the same charge as the muon (a positron if it is a positive muon). Thus all muons decay to at least an electron and two neutrinos. Sometimes, besides these necessary products, additional particles with zero net charge and spin (e.g. a pair of photons, or an electron-positron pair) are produced.

The dominant muon decay mode (sometimes called the Michel decay after Louis Michel) is the simplest possible: the muon decays to an electron, an electron-antineutrino, and a muon-neutrino. Antimuons, in mirror fashion, most often decay to the corresponding antiparticles: a positron, an electron-neutrino, and a muon-antineutrino. In formulaic terms, these two decays are:

\mu^-\to e^- + \bar\nu_e + \nu_\mu,~~~\mu^+\to e^+ + \nu_e + \bar\nu_\mu.

The mean lifetime of the (positive) muon is 2.197019 ± 0.000021 μs.[7] The equality of the muon and antimuon lifetimes has been established to better than one part in 10^4.

The tree-level muon decay width is

\Gamma=\frac{G_F^2 m_\mu^5}{192\pi^3}I\left(\frac{m_e^2}{m_\mu^2}\right),

where I(x) = 1 - 8x + 8x^3 - x^4 - 12x^2 \ln x, and G_F is the Fermi coupling constant.
The decay distributions of the electron in muon decays have been parameterised using the so-called Michel parameters. The values of these four parameters are predicted unambiguously in the Standard Model of particle physics, thus muon decays represent a good test of the space-time structure of the weak interaction. No deviation from the Standard Model predictions has yet been found.
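
The tree-level width can be turned into a lifetime via τ = ħ/Γ and compared with the measured 2.197 μs; the small remaining difference is accounted for by radiative corrections. A sketch follows (plain Python; the constants are standard tabulated values in GeV-based units):

    import math

    hbar = 6.582119e-25  # reduced Planck constant (GeV s)
    G_F = 1.1663787e-5   # Fermi coupling constant (GeV^-2)
    m_mu = 0.1056584     # muon mass (GeV)
    m_e = 0.000510999    # electron mass (GeV)

    def I(x):
        """Phase-space factor 1 - 8x + 8x^3 - x^4 - 12x^2 ln x."""
        return 1.0 - 8.0*x + 8.0*x**3 - x**4 - 12.0*x**2 * math.log(x)

    x = (m_e / m_mu) ** 2
    Gamma = G_F**2 * m_mu**5 / (192.0 * math.pi**3) * I(x)  # width (GeV)

    print(f"tree-level lifetime: {hbar / Gamma * 1e6:.3f} microseconds")  # ~2.19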

Certain neutrino-less decay modes are kinematically allowed but forbidden in the Standard Model. Examples forbidden by lepton flavour conservation are

\mu^-\to e^- + \gamma and \mu^-\to e^- + e^+ + e^-.

Observation of such decay modes would constitute clear evidence for physics beyond the Standard Model (BSM). Current experimental upper limits for the branching fractions of such decay modes are in the range 10−11 to 10−12.

Muonic atoms

The muon was the first elementary particle discovered that does not appear in ordinary atoms. Negative muons can, however, form muonic atoms (also called mu-mesic atoms), by replacing an electron in ordinary atoms. Muonic hydrogen atoms are much smaller than typical hydrogen atoms because the much larger mass of the muon gives it a much smaller ground-state wavefunction than is observed for the electron. In multi-electron atoms, when only one of the electrons is replaced by a muon, the size of the atom continues to be determined by the other electrons, and the atomic size is nearly unchanged. However, in such cases the orbital of the muon continues to be smaller and far closer to the nucleus than the atomic orbitals of the electrons.
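
The size contrast follows from the Bohr-model scaling: the ground-state radius is inversely proportional to the reduced mass of the orbiting system. A worked comparison (plain Python, with standard particle masses; an illustration rather than a calculation from the text):

    A0 = 5.29177e-11  # Bohr radius of ordinary hydrogen (m)
    M_E = 0.510999    # electron mass (MeV/c^2)
    M_MU = 105.6584   # muon mass (MeV/c^2)
    M_P = 938.2720    # proton mass (MeV/c^2)

    def reduced_mass(m1, m2):
        return m1 * m2 / (m1 + m2)

    # The Bohr radius scales as 1 / (reduced mass of the two-body system).
    shrink = reduced_mass(M_MU, M_P) / reduced_mass(M_E, M_P)
    print(f"muonic hydrogen is ~{shrink:.0f}x smaller: {A0 / shrink:.2e} m")

Under these inputs the muonic-hydrogen ground state comes out roughly 186 times smaller than ordinary hydrogen's.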

A positive muon, when stopped in ordinary matter, can also bind an electron and form an exotic atom known as a muonium (Mu) atom, in which the muon acts as the nucleus. The positive muon, in this context, can be considered a pseudo-isotope of hydrogen with one ninth of the mass of the proton. Because the muon is so much heavier than the electron, the reduced mass of muonium, and hence its Bohr radius, is very close to that of hydrogen, and this short-lived “atom” behaves chemically — to a first approximation — like hydrogen, deuterium and tritium.

Use in measurement of the proton charge radius

The recent culmination of a twelve-year experiment investigating the proton’s charge radius involved the use of muonic hydrogen, a form of hydrogen composed of a muon orbiting a proton.[8] The Lamb shift in muonic hydrogen was measured by driving the muon from its 2s state up to an excited 2p state using a laser. The frequency of the photon required to induce this transition was found to be 50 terahertz which, according to present theories of quantum electrodynamics, yields a value of 0.84184 ± 0.00067 femtometres for the charge radius of the proton.[9]

Anomalous magnetic dipole moment

The anomalous magnetic dipole moment is the difference between the experimentally observed value of the magnetic dipole moment and the theoretical value predicted by the Dirac equation. Its measurement and prediction are very important in the precision tests of QED (quantum electrodynamics). The E821 experiment at Brookhaven National Laboratory (BNL) studied the precession of muons and antimuons in a constant external magnetic field as they circulated in a confining storage ring. The E821 experiment reported the following average value (from the July 2007 review by the Particle Data Group):

a = \frac{g-2}{2} = 0.00116592080(54)(33)

where the first errors are statistical and the second systematic.

The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon’s larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon’s anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon’s anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED (Phys.Lett. B649, 173 (2007)).


References

  1. ^ K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010), URL: http://pdg.lbl.gov
  2. ^ Yukawa, Hideki, “On the Interaction of Elementary Particles I”, Proceedings of the Physico-Mathematical Society of Japan (3) 17, 48, pp. 139–148 (1935). (Read 17 November 1934)
  3. ^ “New Evidence for the Existence of a Particle Intermediate Between the Proton and Electron”, Phys. Rev. 52, 1003 (1937).
  4. ^ David H. Frisch and James A. Smith, “Measurement of the Relativistic Time Dilation Using Muons”, American Journal of Physics, 31, 342, 1963, cited by Michael Fowler, “Special Relativity: What Time is it?
  5. ^ S. Carroll (2004). Spacetime and Geometry: An Introduction to General Relativity. Addison Wesly. p. 204
  6. ^ Brookhaven National Laboratory (30 July 2002). “Physicists Announce Latest Muon g-2 Measurement”. Press release. http://www.bnl.gov/bnlweb/pubaf/pr/2002/bnlpr073002.htm. Retrieved 2009-11-14. 
  7. ^ [1]
  8. ^ TRIUMF Muonic Hydrogen collaboration. “A brief description of Muonic Hydrogen research”. Retrieved 2010-11-7
  9. ^ Pohl, Randolf et al. “The Size of the Proton”. Nature 466, 213–216 (8 July 2010)


***
Comment on Backreaction made to Steven

Hi Steven

Would we not be correct to say that unification with the small would be most apropos indeed with the large?

Pushing through that veil.

My interest with the QGP is well documented, as it presented itself “with an interesting location” with which to look at during the collision process.

Natural microscopic black hole creations? Are such conditions possible in the natural way of things? Although quickly dissipative, they leave their mark as Cherenkov effects.

As one looks toward the cosmos, this reductionist process is how one might look at the cosmos at large, as to some of its “motivations displayed” in the cosmos?

What conditions allow such reductionism at play to consider the end result of geometrical propensity as a message across the vast distance of space, so as to “count these effects” here on earth?

Let’s say cosmos particle collisions and LHC are hand in hand “as to decay of the original particles in space” as they leave their imprint noticeably in the measures of SNO or Icecube, but help us discern further effects of that decay chain as to the constitutions of LHC energy progressions of particles in examination?

Emulating the conditions in LHC progression, adaptability seen then from such progressions, working to produce future understandings. Muon detections through the earth?

So “modeled experiments” in which “distillation of thought” are helped to be reduced too, in kind, lead to matter forming ideas with which to progress? Measure. Self evident.

You see, the view has to be on two levels, maybe as a poet using words to describe, or as an artist, trying to explain the natural world. The natural consequence, of understanding of our humanity and its continuations expressed as abstract thought of our interactions with the world at large, unseen, and miscomprehended?

Do you think Superstringy has anything to do with what I just told you here?:)

Best,

    Hi Steven,

    Maybe the following will help, and then I will lead up to a modern version for consideration, so you understand the relation.

    Keep Gran Sasso in your mind as you look at what I am giving you.

    The underground laboratory, which opened in 1989, with its low background radiation is used for experiments in particle and nuclear physics, including the study of neutrinos, high-energy cosmic rays, dark matter, nuclear decay, as well as geology and biology. (wiki)

    Neutrinos, get set, go!

    This summer, CERN gave the starting signal for the long-distance neutrino race to Italy. The CNGS facility (CERN Neutrinos to Gran Sasso), embedded in the laboratory’s accelerator complex, produced its first neutrino beam. For the first time, billions of neutrinos were sent through the Earth’s crust to the Gran Sasso laboratory, 732 kilometres away in Italy, a journey at almost the speed of light which they completed in less than 2.5 milliseconds. The OPERA experiment at the Gran Sasso laboratory was then commissioned, recording the first neutrino tracks.

    Because I am a layman, does not reduce the understanding that I can have, that a scientist may have.

    Now for the esoteric :)

    Secrets of the Pyramids: In a boon for archaeology, particle physicists plan to probe ancient structures for tombs and other hidden chambers. The key to the technology is the muon, a cousin of the electron that rains harmlessly from the sky.

    What kind of result would they get from using the muon. What will it tell them?:)

    Best,

    Posted in AMS, Cosmic Rays, Gran Sasso, Muons, Relativistic Muons, Time Dilation

    Conformal Cyclic Cosmology….

    Penrose’s Conformal Cyclic Cosmology, from one of his Pittsburgh lecture slides in June, 2009. Photo by Bryan W. Roberts

    Also see: BEFORE THE BIG BANG: AN OUTRAGEOUS NEW PERSPECTIVE AND ITS IMPLICATIONS FOR PARTICLE PHYSICS

    ……. (CCC) is a cosmological model in the framework of general relativity, advanced by the theoretical physicist Sir Roger Penrose.[1][2] In CCC, the universe undergoes a repeated cycle of death and rebirth, with the future timelike infinity of each previous universe being identified with the Big Bang singularity of the next.[3] Penrose outlines this theory in his book Cycles of Time: An Extraordinary New View of the Universe.


    Basic Construction

    Penrose’s basic construction[4] is to paste together a countable sequence of open FLRW spacetimes, each representing a big bang followed by an infinite future expansion. Penrose noticed that the past conformal boundary of one copy of FLRW spacetime can be “attached” to the future conformal boundary of another, after an appropriate conformal rescaling. In particular, each individual FLRW metric gab is multiplied by the square of a conformal factor Ω that approaches zero at timelike infinity, effectively “squashing down” the future conformal boundary to a conformally regular hypersurface (which is spacelike if there is a positive cosmological constant, as we currently believe). The result is a new solution to Einstein’s equations, which Penrose takes to represent the entire Universe, and which is composed of a sequence of sectors that Penrose calls “aeons.”

    Physical Implications

    The significant feature of this construction for particle physics is that, since bosons obey the laws of conformally invariant quantum theory, they will behave in the same way in the rescaled aeons as in their original FLRW counterparts. (Classically, this corresponds to the fact that light-cone structure is preserved under conformal rescalings.) For such particles, the boundary between aeons is not a boundary at all, but just a spacelike surface that can be passed across like any other. Fermions, on the other hand, remain confined to a given aeon. This provides a convenient solution to the black hole information paradox; according to Penrose, fermions must be irreversibly converted into radiation during black hole evaporation, to preserve the smoothness of the boundary between aeons.

    The curvature properties of Penrose’s cosmology are also highly desirable. First, the boundary between aeons satisfies the Weyl curvature hypothesis, thus providing a certain kind of low-entropy past as required by statistical mechanics and by observation. Second, Penrose has calculated that a certain amount of gravitational radiation should be preserved across the boundary between aeons. Penrose suggests this extra gravitational radiation may be enough to explain the observed cosmic acceleration without appeal to a dark energy matter field.

    Empirical Tests

    In 2010, Penrose and V. G. Gurzadyan published a preprint of a paper claiming that observations of the cosmic microwave background made by the Wilkinson Microwave Anisotropy Probe and the BOOMERanG experiment showed concentric anomalies which were consistent with the CCC hypothesis, with a low probability of the null hypothesis that the observations in question were caused by chance.[5] However, the statistical significance of the claimed detection has since been questioned. Three groups have independently attempted to reproduce these results, but found that the detection of the concentric anomalies was not statistically significant.[6][7][8]


    References

    1. ^ Palmer, Jason (2010-11-27). “Cosmos may show echoes of events before Big Bang”. BBC News. http://www.bbc.co.uk/news/science-environment-11837869. Retrieved 2010-11-27. 
    2. ^ Penrose, Roger (June 2006). “Before the big bang: An outrageous new perspective and its implications for particle physics”. Edinburgh, Scotland: Proceedings of EPAC 2006. p. 2759-2767. http://accelconf.web.cern.ch/accelconf/e06/PAPERS/THESPA01.PDF. Retrieved 2010-11-27. 
    3. ^ Cartlidge, Edwin (2010-11-19). “Penrose claims to have glimpsed universe before Big Bang”. physicsworld.com. http://physicsworld.com/cws/article/news/44388. Retrieved 2010-11-27. 
    4. ^ Roger Penrose (2006). “Before the Big Bang: An Outrageous New Perspective and its Implications for Particle Physics”. Proceedings of the EPAC 2006, Edinburgh, Scotland: 2759-2762. http://accelconf.web.cern.ch/accelconf/e06/PAPERS/THESPA01.PDF. 
    5. ^ Gurzadyan VG; Penrose R (2010-11-16). “Concentric circles in WMAP data may provide evidence of violent pre-Big-Bang activity”. arΧiv:1011.3706 [astro-ph.CO]. 
    6. ^ Wehus IK; Eriksen HK (2010-12-07). “A search for concentric circles in the 7-year WMAP temperature sky maps”. arΧiv:1012.1268 [astro-ph.CO]. 
    7. ^ Moss A; Scott D; Zibin JP (2010-12-07). “No evidence for anomalously low variance circles on the sky”. arΧiv:1012.1305 [astro-ph.CO]. 
    8. ^ Hajian A (2010-12-8). “Are There Echoes From The Pre-Big Bang Universe? A Search for Low Variance Circles in the CMB Sky”. arΧiv:1012.1656 [astro-ph.CO].

    See Also: Penrose’s CCC cosmology is either inflation or gibberish

    Posted in Cosmology, Quanglement, Sir Roger Penrose

    Big Bounce

    Physical cosmology
    Universe · Big Bang
    Age of the universe
    Timeline of the Big Bang
    Ultimate fate of the universe

    The Big Bounce is a theorized scientific model related to the formation of the known Universe. It derives from the cyclic model or oscillatory universe interpretation of the Big Bang where the first cosmological event was the result of the collapse of a previous universe.[1]


    Expansion and contraction

    According to some oscillatory universe theorists, the Big Bang was simply the beginning of a period of expansion that followed a period of contraction. In this view, one could talk of a Big Crunch followed by a Big Bang, or more simply, a Big Bounce. This suggests that we might be living in the first of all universes, but are equally likely to be living in the 2 billionth universe (or any of an infinite sequence of other universes).
    The main idea behind the quantum theory of a Big Bounce is that, as density approaches infinity, the behavior of the quantum foam changes. All the so-called fundamental physical constants, including the speed of light in a vacuum, were not so constant during the Big Crunch, especially in the interval stretching 10−43 seconds before and after the point of inflection. (One unit of Planck time is about 10−43 seconds.)

    If the fundamental physical constants were determined in a quantum-mechanical manner during the Big Crunch, then their apparently inexplicable values in this universe would not be so surprising, it being understood here that a universe is that which exists between a Big Bang and its Big Crunch.

    Recent developments in the theory

    Martin Bojowald, an assistant professor of physics at Pennsylvania State University, published a study in July 2007 detailing work somewhat related to loop quantum gravity that claimed to mathematically solve the time before the Big Bang, which would give new weight to the oscillatory universe and Big Bounce theories.[2]

    One of the main problems with the Big Bang theory is that at the moment of the Big Bang there is a singularity of zero volume and infinite energy. This is normally interpreted as the end of physics as we know it; in this case, of the theory of general relativity. This is why one expects quantum effects to become important and avoid the singularity.

    However, research in loop quantum cosmology purported to show that a previously existing universe collapsed, not to the point of singularity, but to a point before that where the quantum effects of gravity become so strongly repulsive that the universe rebounds back out, forming a new branch. Throughout this collapse and bounce, the evolution is unitary.

    Bojowald also claims that some properties of the universe that collapsed to form ours can also be determined. Some properties of the prior universe are not determinable however due to some kind of uncertainty principle.

    This work is still in its early stages and very speculative. Some extensions by further scientists have been published in Physical Review Letters.[3]

    Peter Lynds has recently put forward a new cosmology model in which time is cyclic. In his theory our Universe will eventually stop expanding and then contract. Before becoming a singularity, as one would expect from Hawking’s black hole theory, the Universe would bounce. Lynds claims that a singularity would violate the second law of thermodynamics and this stops the Universe from being bounded by singularities. The Big Crunch would be avoided with a new Big Bang. Lynds suggests the exact history of the Universe would be repeated in each cycle. Some critics argue that while the Universe may be cyclic, the histories would all be variants.


    References

    1. ^ “Penn State Researchers Look Beyond The Birth Of The Universe”. Science Daily. May 17, 2006. http://www.sciencedaily.com/releases/2006/05/060515232747.htm.  Referring to Ashtekar, Abhay; Pawlowski, Tomasz; Singh, Parampreet (2006). “Quantum Nature of the Big Bang”. Physical Review Letters 96 (14): 141301. doi:10.1103/PhysRevLett.96.141301. PMID 16712061. http://link.aps.org/abstract/PRL/v96/e141301. 
    2. ^ Bojowald, Martin (2007). “What happened before the Big Bang?”. Nature Physics 3 (8): 523–525. doi:10.1038/nphys654. 
    3. ^ Ashtekar, Abhay; Corichi, Alejandro; Singh, Parampreet (2008). “Robustness of key features of loop quantum cosmology”. Physical Review D 77: 024046. doi:10.1103/PhysRevD.77.024046. 

    Further reading

    • Magueijo, João (2003). Faster than the Speed of Light: the Story of a Scientific Speculation. Cambridge, MA: Perseus Publishing. ISBN 0738205257. 
    • Bojowald, Martin. “Follow the Bouncing Universe”. Scientific American (October 2008): 44–51. 


    Posted in Cosmology