Decay times of excited atomic states

The time spent in the excited state is a random variable which can have any positive value, but whose expectation value depends on the energy gaps to lower-energy unoccupied states (with the closest one giving the dominant contribution). Since the drop in an isolated atom can only happen via a transfer of energy to the electromagnetic field, the actual formula results from a quantum field theory calculation involving the strength of the EM coupling constant (and the lifetime would be infinite if that coupling constant were zero). But I suspect that the result turns out to be consistent with the Heisenberg uncertainty relation $#\Delta E \Delta t \gtrsim \frac{h}{4\pi}#$, and that the bound is similar for all cases of a top-level excited electron in a neutral atom, so that we can say the expected lifetime is inversely proportional to the energy drop to the nearest available level. (And since the levels tend to get more closely spaced the higher we go, this is also consistent with more highly excited states of the same atom decaying more quickly – albeit usually not directly to the initial ground state.)
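As an illustrative sketch (not the full QED calculation), the uncertainty relation gives only a lower bound on the lifetime, $#\Delta t \gtrsim \frac{\hbar}{2\Delta E}#$. The 2 eV figure below is an assumed value for a typical optical transition; real lifetimes (typically nanoseconds) are far above this bound, so the relation is satisfied but not saturated:

```python
# Lower bound on an excited-state lifetime from Delta_E * Delta_t >= hbar/2.
# Illustrative numbers only; real lifetimes come from a QED calculation.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV = 1.602176634e-19     # one electron-volt in joules

def lifetime_lower_bound(delta_E_eV):
    """Minimum lifetime (seconds) allowed by the uncertainty relation
    for an energy drop of delta_E_eV electron-volts."""
    return HBAR / (2 * delta_E_eV * EV)

tau_min = lifetime_lower_bound(2.0)     # ~2 eV optical transition (assumed)
print(f"lower bound: {tau_min:.2e} s")  # about 1.6e-16 s
```

Note that a larger energy drop gives a smaller bound, matching the inverse-proportionality suggested above.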

For more info just do a Google search for something like ‘atomic excited state lifetimes’.

Source: Alan Cooper’s answer to For how much time does the excited electron stay in orbit when jumped from a lower energy level to a higher one? – Quora

What if the “luminiferous aether” really exists? 

The existence of a “luminiferous aether” would have no observable consequences unless there was some kind of force or field that was not governed by laws that are locally Lorentz covariant. But so far nothing of that sort has ever been detected. So, as Pierre-Simon Laplace famously responded to Napoleon Bonaparte after Bonaparte expressed surprise that God was not mentioned in Laplace’s manuscript, we “have no need of that hypothesis”.

Source: Alan Cooper’s answer to Ah but what if there is a luminiferous aether? – Quora

Hilbert Distance and Similarity of States

A Quoran asks “Is distance in Hilbert space a measure of similarity between quantum states?”

To which I answer:

Yes, but only to a limited extent. Quantum states are not represented by general vectors but just by unit vectors (or equivalently by “rays” – each of which consists of all complex scalar multiples of some unit vector). As such, the distance between them can never be greater than $#\sqrt{2}#$ (which happens when they are mutually orthogonal – ie perpendicular to one another).

Any two states which can be experimentally distinguished with certainty are eigenstates with distinct eigenvalues of some observable (namely the one that responds with a 1 or 0 depending on which of the two states is present), and so their unit vectors are mutually orthogonal and therefore maximally separated. On the other hand, if the distance (minimized over phase factors) is zero, then the vectors can differ only by a phase factor and so represent the same state.

For intermediate distances between $#0#$ and $#\sqrt{2}#$, the angle is between $#0#$ and $#\pi/2#$, so each state’s unit vector projects partially onto the other, and the squared absolute value of their inner product (which gives the probability of one being identified as the other) is between $#0#$ and $#1#$.
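The connection between the two quantities is explicit: for unit vectors, $#\|\psi-\phi\|^2 = 2 - 2\,\mathrm{Re}\langle\psi|\phi\rangle#$. A minimal sketch with two-component (qubit) state vectors, which are assumed here purely for illustration:

```python
import math

def inner(u, v):
    """Hermitian inner product <u|v> (conjugate on the first argument)."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def distance(u, v):
    """Euclidean (Hilbert-space) distance between state vectors."""
    return math.sqrt(sum(abs(a - b) ** 2 for a, b in zip(u, v)))

up = [1 + 0j, 0j]                      # "spin up"
down = [0j, 1 + 0j]                    # orthogonal state
tilted = [math.cos(math.pi / 8) + 0j,  # partway between the two
          math.sin(math.pi / 8) + 0j]

print(distance(up, down))        # sqrt(2) for orthogonal (perfectly distinguishable) states
p = abs(inner(up, tilted)) ** 2  # probability of identifying one state as the other
print(p)                         # cos^2(pi/8), roughly 0.85
```

The identity above means distance and overlap carry the same information, but as noted it is the inner product that answers the physical question directly.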

But distance is not a very direct way of getting at the quantity of interest which, as noted above, is captured much more directly by just looking at the inner product.

Source: Alan Cooper’s answer to Is distance in Hilbert space a measure of similarity between quantum states? – Quora

Why Least Action?

This Quora Answer by John Fernee gives some useful intuition, saying:

The principle of least action can be derived simply by asking that each point in a path has enough information to derive the next infinitesimal step.

That is essentially determinism: If you know the complete state of a system at any point in time, you can derive the state of the system at any other point.

The principle of determinism is all that you need to define a unique path. The mathematics of path integrals can then be used to derive the principle of least action using the calculus of variations.

The principle of determinism is actually modified in quantum mechanics. The deterministic aspect of quantum mechanics relates to the evolution of the wavefunction, whereas measurement outcomes are probabilistic. This led to Feynman generalising the principle of least action, which resulted in the path integral formalism of quantum mechanics.

As to why the universe satisfies the principle of least action, you can only state that if it didn’t, we’d be unable to predict anything with certainty. There would be multiple paths to some observed state and we’d be at a complete loss to discover which path was taken. That might sound like the Feynman path integral approach, but it isn’t. The fact that wavefunction evolution is deterministic means that there must be interference between the different quantum paths that results in the deterministic evolution.

A subsequent comment clarifies the meaning of “determinism” as basically referring to being governed by a second order DE in terms of the position coordinates and adds motivation for the fact that L=T-V:

The concept of energy is closely related to the principle of least action, but does not form part of the definition. The definition is far simpler: That a particle will follow a unique path and that path can be determined from position and velocity coordinates.

This is a mathematical process for determining a unique path and it just happens to match with the physical requirements for particles to have deterministic trajectories.

As for energy, the Lagrangian that corresponds to Newtonian physics is simply given by the difference in the kinetic and potential energies; L=T-V. The minimisation is with respect to this Lagrangian and is given by the condition that dL/ds=0, where ds represents the change in path. Given the form of the Lagrangian, we can write, dT/ds=dV/ds. These quantities can be identified as the action and reaction forces of Newton’s third law. So what we’ve really got is a statement that the path is defined as the trajectory where the action and reaction forces are balanced. The simplest example is in circular motion where the centripetal force is equal and opposite to the centrifugal force. Even though the centrifugal force is a fictitious force, it must be equal and opposite to the centripetal force for there to be circular motion.
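The stationary-action claim in the quoted answer can be checked numerically. Below is a minimal sketch (the unit mass, g = 9.8, and perturbation amplitude are assumed for illustration) comparing the discretized action of a true free-fall path with a slightly perturbed path sharing the same endpoints; the true path has the smaller action:

```python
import math

M, G, T, N = 1.0, 9.8, 1.0, 1000   # mass, gravity, flight time, grid size (assumed)
dt = T / N
ts = [i * dt for i in range(N + 1)]

def action(path):
    """Discretized action S = sum (0.5*m*v^2 - m*g*y) dt for L = T - V."""
    S = 0.0
    for i in range(N):
        v = (path[i + 1] - path[i]) / dt
        y_mid = 0.5 * (path[i] + path[i + 1])
        S += (0.5 * M * v * v - M * G * y_mid) * dt
    return S

# True path: projectile launched so that y(0) = y(T) = 0.
true_path = [0.5 * G * T * t - 0.5 * G * t * t for t in ts]
# Perturbed path with the same endpoints.
wobbly = [y + 0.1 * math.sin(math.pi * t / T) for y, t in zip(true_path, ts)]

print(action(true_path) < action(wobbly))  # True: the real path minimises the action
```

Because the potential term is linear in y, the perturbation's first-order contribution cancels and only the positive kinetic correction survives, which is why any such wobble raises the action.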

But I have some quibbles with the references to quantum mechanics which I think are better covered (though still not perfectly) in Andrew Winkler’s answer.

Source: Why does the principle of least action hold in our universe? – Quora

Source of Uncertainty Principle – Observation or Theory?

A Quoran asks: “Is the Uncertainty Principle in QM primarily a consequence of the mathematics involved (ie. derived from it) or of empirical evidence (which the mathematics has then been constructed to describe)?”

I answered “Both”.

The original idea of the uncertainty principle was motivated by two apparent empirical facts: that in order to determine the position of an object very precisely it seems necessary to illuminate it with light of a very short wavelength, and that the empirically observed spectrum of black body radiation suggests that short wavelengths interact with matter not continuously but in discrete ‘quanta’ which transfer more and more momentum as the wavelengths get shorter. In the mathematical theories that were developed to describe these phenomena (though not so much motivated by the uncertainty principle itself), the uncertainty principle (in fact a more general form of it) can be derived from the fact that measurements of position and momentum correspond to non-commuting operators on the Hilbert space that is used to represent states of the system.
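The more general form referred to here is the Robertson relation $#\Delta A\,\Delta B \geq \frac{1}{2}|\langle[A,B]\rangle|#$ for any pair of observables. A minimal sketch checking it for the non-commuting Pauli spin operators (a convenient stand-in for position and momentum, since those need an infinite-dimensional space), in plain Python complex arithmetic:

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inner(u, v):
    """Hermitian inner product <u|v> (conjugate on the first argument)."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def stdev(M, psi):
    """Standard deviation (uncertainty) of observable M in state psi."""
    mean = inner(psi, matvec(M, psi))
    mean_sq = inner(psi, matvec(M, matvec(M, psi)))
    return math.sqrt((mean_sq - mean * mean.conjugate()).real)

sx = [[0, 1], [1, 0]]       # Pauli spin operators: [sx, sy] = 2i*sz != 0
sy = [[0, -1j], [1j, 0]]

psi = [1 + 0j, 0j]          # spin up along z
lhs = stdev(sx, psi) * stdev(sy, psi)
comm = (inner(psi, matvec(sx, matvec(sy, psi)))
        - inner(psi, matvec(sy, matvec(sx, psi))))  # <[sx, sy]>
rhs = 0.5 * abs(comm)
print(lhs >= rhs - 1e-12)   # True; for this state the bound is saturated (both sides 1)
```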

Source: Alan Cooper’s answer to Is the Uncertainty Principle in QM primarily a consequence of the mathematics involved (ie. derived from it) or of empirical evidence (which the mathematics has then been constructed to describe)? – Quora

Schrödinger’s Cat

Basically because we have no way of putting a system as complex as a cat into a pure quantum state.

Schrödinger’s cat thought experiment was intended to challenge the idea that a system can have undetermined values of some observables in a way that does not just correspond to a lack of knowledge on our part. If the killing of the cat were the result of a purely classical source of uncertainty, such as a coin toss, we could just say that the cat is either truly alive or truly dead and we just don’t know which until we look. But quantum mechanics includes uncertainties of a kind that cannot be interpreted as just the result of incomplete information.

An example on a scale much smaller than that of a cat: if a single electron has a known value of its vertical spin component (up or down), and we subsequently divert it with a device that sends it in different directions depending on its horizontal spin component, then there is a 50% chance of seeing it go in either of the two directions; and it appears (from an analysis of the observed probabilities in various other directions) that there was no way of predicting the horizontal component (say from some other initial observations) before we actually measured it.

Schrödinger’s point was that there’s something weird about this business of not having a property until we measure it, and he used the cat as an extreme example. But in practice, if we are good enough detectives, there will always be evidence in the box telling us exactly when the cat died; and so, although there may have been some time before we knew the outcome, we never see anything that looks significantly different from what might have happened if the killing of the cat had been triggered by some classical probabilistic event (such as, say, the first double six in a series of dice throws).

In order to experience the weirdness of having a cat that is neither alive nor dead (sometimes referred to as being both at once), we would need to keep the system of cat and triggering nucleus (and everything else that they can interact with) in what is called a ‘pure’ quantum state, which requires having complete information about the quantum states of all its constituent elementary particles.

This is obviously impossible for a cat, but experiments have been done in which larger systems such as complex molecules are put into superposition states where something like the shape of the molecule (which perhaps seems more substantial to us than the spin of a single electron) does not have a value until we observe it. These will undoubtedly get larger and more impressively weird-seeming as technology improves, but I am pretty sure that they will never reach the scale of an actual cat.
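The electron example can be sketched with two-component spinors: expanding a state of definite vertical spin in the horizontal-spin basis gives equal probabilities for the two outcomes (the basis conventions below are the standard ones, assumed here for illustration):

```python
import math

def inner(u, v):
    """Hermitian inner product <u|v> (conjugate on the first argument)."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

up_z = [1 + 0j, 0j]                                        # definite vertical spin ("up")
plus_x = [1 / math.sqrt(2) + 0j, 1 / math.sqrt(2) + 0j]    # horizontal spin "right"
minus_x = [1 / math.sqrt(2) + 0j, -1 / math.sqrt(2) + 0j]  # horizontal spin "left"

p_right = abs(inner(plus_x, up_z)) ** 2   # Born-rule probability for each direction
p_left = abs(inner(minus_x, up_z)) ** 2
print(p_right, p_left)   # both ~0.5: the horizontal component is genuinely undetermined
```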

Source: Alan Cooper’s answer to Why has Schrödinger’s Cat, the experiment, not actually been performed? – Quora

Is Decoherence Reversible?

It depends on what you mean by the word “decoherence”.

The conventional use of “decoherence” to describe part of what happens in a measurement process refers to the interaction of a pure state of an experimental system (which may be a superposition of eigenstates of some observable) with a statistical mixed state of a complex environment (which includes some kind of measurement apparatus for that observable) such that, after the interaction, the relative state of the system is a statistical mixture of eigenstates of the observable, each of which is linked to some indicator state of the environmental apparatus. This is typically NOT reversible, for thermodynamic reasons (basically because the final state is not known in sufficient detail to determine the actions needed to reverse the process).
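The loss of coherence can be sketched in miniature: entangling a qubit with even a one-qubit “environment” removes the off-diagonal (coherence) terms from the qubit’s reduced density matrix. The two-qubit states below are assumed purely for illustration:

```python
import math

def reduced_density(psi):
    """Reduced 2x2 density matrix of the first qubit of a two-qubit state.
    Amplitude ordering: psi[2*a + e] for system bit a and environment bit e."""
    return [[sum(psi[2 * a + e] * psi[2 * b + e].conjugate() for e in range(2))
             for b in range(2)] for a in range(2)]

s = 1 / math.sqrt(2)
entangled = [s, 0, 0, s]    # (|00> + |11>)/sqrt(2): system entangled with environment
unentangled = [s, 0, s, 0]  # (|0> + |1>)/sqrt(2) x |0>: environment untouched

rho_ent = reduced_density(entangled)
rho_sep = reduced_density(unentangled)
print(abs(rho_ent[0][1]))   # 0: coherence of the system destroyed by entanglement
print(abs(rho_sep[0][1]))   # ~0.5: coherence intact
```

Note that in this toy model the full two-qubit state is still pure, which is exactly the reversible case discussed in the next paragraph; irreversibility only enters when the environment is large and its state unknown.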

BUT, as another answer notes, if the word “decoherence” is being used to describe interaction of the system with an ancillary system that is in a pure state, then after the interaction the combined system is still in a pure state, and the unitary evolution of pure states is reversible.

Source: Alan Cooper’s answer to Is it possible to reverse quantum entanglement decoherence? – Quora

Watching a Fall Into a Black Hole

If we on Earth are now observing an object that we see as near a black hole, and in free fall on a path that intersects the event horizon of that black hole, then what we are seeing is an extremely red-shifted version of the object, which is therefore both very dim and ageing very slowly. So the apparent (to us) progress of everything in the object’s frame (including its rate of fall) is very slow. As time (for us) progresses, we will see the object’s clocks and apparent rate of fall get progressively slower as it also dims towards invisibility.
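As a rough sketch of the effect, for a hovering emitter in the Schwarzschild geometry the gravitational redshift factor $#\sqrt{1 - r_s/r}#$ goes to zero as the emitter approaches the horizon at $#r = r_s#$ (the infalling case adds a Doppler factor on top of this, making the dimming even faster). The radii below are assumed for illustration:

```python
import math

def redshift_factor(r, r_s=1.0):
    """Ratio of received to emitted frequency for a static emitter at radius r
    outside a Schwarzschild horizon at r_s, seen by a distant observer."""
    return math.sqrt(1.0 - r_s / r)

for r in (10.0, 2.0, 1.1, 1.01, 1.001):
    print(f"r = {r:>6}: frequency ratio {redshift_factor(r):.4f}")
# the ratio falls toward 0 as r -> r_s: clocks appear ever slower, light ever dimmer
```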

Source: Alan Cooper’s answer to An event horizon is a boundary beyond which events cannot affect an observer. What would we be witnessing if a spacecraft was to cross the event horizon of a black hole? The spacecraft disappearing, or eternally approaching the black hole? – Quora

Bergson vs Einstein

After reading this article twice, and yet again the paragraph where the author purports to show that “it’s wrong to think that Bergson’s idea of duration can be assimilated into the idea of psychological time”,

I am still unable to find any explanation of the difference between our internally experienced psychological time (which, by the way, cannot necessarily always be “aligned with external clock time”) and “the first-person experience of (Bergson’s unmeasurable) duration” (which they appear to identify as the “lived time” in terms of which “An hour in the dentist’s chair is very different from an hour over a glass of wine with friends”).

On the other hand, Steven Savitt’s “solution” does not address the subjective nature of duration and appears to just identify it with the non-subjective proper time associated with a possible observer’s world line – which seems to be just giving up on the idea of any special “philosophical” time, since proper time has always been the only kind of time that is ever discussed in relativistic physics.

Source: Who really won when Bergson and Einstein debated time? | Aeon Essays