Yet Another Quora Question:

Is there experimental evidence of RoS (relativity of simultaneity) available? According to Karl Popper, something that cannot be falsified is metaphysics. Can the ALT (absolute Lorentz transformation) and absolute simultaneity be used instead of RoS?

Yes. There is lots of evidence that IF observers define simultaneity of remote events by comparing the arrival times of light signals from those events and adjusting for the light travel times, then they will NOT agree on which pairs of events are or are not simultaneous.
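This frame-dependence of simultaneity follows directly from the Lorentz transformation. Here is a minimal numeric sketch (units with c = 1 and illustrative values of my own choosing):

```python
import math

def t_prime(t, x, v, c=1.0):
    """Time coordinate of event (t, x) in a frame moving at speed v
    along +x (standard Lorentz transformation)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2)

# Two events that are simultaneous (t = 0) but spatially separated
# in the "lab" frame:
event_a = (0.0, 0.0)   # (t, x)
event_b = (0.0, 1.0)

v = 0.6  # relative speed of the second observer, as a fraction of c
ta = t_prime(*event_a, v)
tb = t_prime(*event_b, v)
print(ta == tb)  # False: the moving observer does not find them simultaneous
```

Any two spatially separated events sharing a time coordinate in one frame acquire different time coordinates in any frame moving along the line joining them.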

What cannot be falsified (and so, according to Popper, is not worthy of consideration as scientific) is the claim that one particular set of “stationary” observers are “correct” and the rest are “wrong” about the “actual” simultaneity (with the observations of the “wrong” ones being explained by effects of their “motion” on their clocks and measuring rods).

The “metaphysical” question of whether or not there is such a particular set of “stationary” frames is generally resolved by reference to “Ockham’s Razor” which advises us to prefer an explanation which involves fewer arbitrary choices. To the extent that the identification of a particular preferred set of “stationary” frames is arbitrary, we are therefore advised to treat them all as equally valid and make no choice of what constitutes “absolute” simultaneity.

Of course, any particular feature of the universe (such as, for example, the cosmic microwave background radiation (CMB)) can, if we like, be taken as defining an “absolute” rest frame (and so an “absolute” sense of simultaneity). And if I were moving at a noticeably high speed relative to the CMB, then it might indeed be presumptuous to declare my own synchronization as more fundamental, but it would not in fact lead to any difference in anyone’s prediction of any testable event.

Is Special relativity based on a fundamental flawed claim of inertial reference frames without acceleration which do not exist anywhere in space?

NO. Special Relativity is not based on any “claim” at all.

It is based on the observation that Maxwell’s Equations (and the consequent value of the speed of light) appear to hold with the same constants in every freely falling reference frame, and is only expected to be valid in situations satisfying the simplifying condition that gravitation has negligible effects on the quantities of interest.

That simplifying condition of course limits the domain in which SR applies; but the existence of locally inertial frames is readily apparent, and for the purpose of measurements over a sufficiently small range of space and time SR has no difficulty dealing with accelerated frames as well.

How and why did physicists come to accept that measuring a property of something changes it permanently (as stated in quantum mechanics)?

What is this “it” that you think QM says is changed? And why do you say “permanently”?

And what is a “property” of something anyhow?

I think what most physicists consider to be changed when they become aware of how a system has influenced some measurement apparatus is just their relationship to that system rather than the system itself. In particular, if they have a good theoretical model for how the system evolves, then they may be able to predict the results of future similar “measurement” processes (so long as the system does not interact with anything else in the meantime). The repeatability (or at least predictability) of such outcomes is analogous to that of measurement of some property of a classical system, but quantum systems don’t seem to have fixed “properties” in the same sense, so we generally refer to the results of measurements as “observables” rather than “properties”.

And there may be “complementary” observables about which they necessarily have less information after that first measurement (and for which a subsequent measurement of those complementary observables leaves us back in the dark about what to expect for future measurements of the original one).

But the question of how we came to accept this lack of fixed classical “properties” is a good one.

It’s a long and interesting story that I can’t possibly do justice to here. But it starts with some observations by Werner Heisenberg in the 1920s about how looking at a small object, so as to determine its position very precisely, involves shining very short wavelength light on it; and according to Einstein’s interpretation of Planck’s idea of electromagnetic energy being restricted to discrete jumps (which get bigger as the wavelength gets shorter), interaction with shorter wavelength light causes a larger change in momentum that is not completely predictable because of the size of the lens needed to reliably collect a photon. It soon became clear that this “observation effect” was independent of any particular observation process and was actually a fundamental principle inherent in all the attempts that had been made so far (eg wave mechanics, matrix mechanics, Hilbert space etc (which are all really the same)) to come up with theories of matter that match our observations (of atomic spectra, what happens when streams of small particles are sent through pairs of slits in a barrier, and so on).

So by the 1930s, most physicists already accepted that the classical idea of fixed “properties” would have to be abandoned. But some (such as Einstein, Bohm, and others) considered this premature and suggested looking for “hidden” fixed properties that could restore certainty if only they were known, and would explain the uncertainties of quantum mechanics as just the result of a lack of knowledge of their actual values.

Various attempts have been made to predict the results of quantum mechanics as averages over unknown values of such hidden variables. But none of them provide any way of actually measuring the hidden variables, and so they are more like a theoretical tool (just like the wave function itself) rather than anything “real”.

Also, it turned out that all of the ways people discovered for actually getting the same results as quantum mechanics from a hidden variables theory (such as the “pilot wave” theory of de Broglie and Bohm) involved instantaneous interaction between widely separated parts of the system, and so if the hidden variables were “real” this would allow things like faster than light communication (and if the speed of light is independent of that of the observer then this would lead also to things like time travel and the grandfather paradox).

For many years people wondered if perhaps it was just due to not being clever enough and whether perhaps someone might eventually discover a “local” hidden variables theory which didn’t have the ftl and time travel problem. But over time that has come to seem less and less likely.

In 1964 John Bell noted that, even for a very simple system, in any “local” hidden variables theory there would have to be certain relationships between the probabilities which are violated in quantum mechanics. Since then, various experiments have been done with ever more precision to test the quantum predictions (leading to a Nobel prize in 2022 for Alain Aspect, John Clauser, and Anton Zeilinger). So it has become more and more certain that no hidden variables theory can avoid the ftl effect and the paradoxes that would arise if the hidden variables could ever be observed.
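The relationships Bell identified can be illustrated with the standard CHSH combination of singlet-state correlations. The angle choices below are the conventional ones; any local hidden variables theory must keep $|S| \le 2$, while the quantum prediction reaches $2\sqrt{2}$:

```python
import math

def E(a, b):
    """Quantum correlation for spin measurements along angles a and b
    on a singlet pair: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH angle choices (radians)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ≈ 2.828 (= 2*sqrt(2)), exceeding the local bound of 2
```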

Perhaps even more important is the fact that, according to a theorem of Gleason from 1957, and another (proved by Bell in 1966 and extended by Kochen & Specker in 1967), in order to match the quantum predictions the hidden variables would have to be “contextual” – which roughly means that they would depend not just on the system but also the observer – and most of us think this really does put an end to physics being described by any fixed “properties” of the system itself.


More on acceleration in the Twin “Paradox”

There is a possibly interesting comment thread following my answer to How exactly do Minkowski diagrams prove that acceleration is not needed in resolving the Twin Paradox in Special relativity?

In it Peter Webb passionately (and sadly sometimes rudely) defends the position that “acceleration has nothing directly to do with the Twin paradox”. This is a position shared by a not insignificant minority of apparently competent physicists (on Quora, Brent Meeker comes to mind as a prominent example). Although I continue to feel that the issue is vastly overblown, I find the “it’s not acceleration” view misguided, and at least some of its proponents are sufficiently intransigent and aggressive that a further rebuttal seems warranted.

So, with reference to Peter’s final summary of his position, here goes:

The “paradox”/interest in the TP is that it demonstrates time dilation. The different ages merely demonstrates this.

To be frank, I have no idea what he is talking about here. The usual time dilation of special relativity, which applies in the case of two inertial observers having a constant relative velocity with respect to one another, exists only in the understanding of each observer regarding the clock of the other. There are many ways of demonstrating this effect (most famously with muons from cosmic ray interactions with the upper atmosphere, but also in many other ways), but it is not something that actually happens to either observer in any objective sense – at least not while they both continue in an inertial state of motion. (We may think it’s obvious that the muon is the one with a dilated lifetime, but from the muon’s frame of reference the Earth looks like a very flat pancake and the distance it has to travel can be covered well within its lifetime.)

The Twin “Paradox”, however, is something different in that it does lead to an objective fact of the matter regarding which twin or clock aged more between two specific events in space-time.

Maybe 80% of people with some interest in science think that time dilation is a consequence of acceleration. It isn’t, directly. It is a result of changing reference frames and in the Twin’s paradox that involves acceleration. But time dilation still occurs in the absence of acceleration (eg 3 brothers), and it is easy to show that identical acceleration profiles produce very different amounts of time dilation.

I am not aware of the statistics regarding what “people with some interest in science” think, but I doubt that any significant percentage of them think that the symmetric relative time dilation of unaccelerated motion is a “consequence of acceleration”.

But on the other hand, if they think that the objective observable time difference in the Twins “Paradox” is a consequence of acceleration, then I’m right there with them! (I have no position on how “directly” consequential the acceleration is, but I am happy to see an acknowledgement that indeed having one twin change frames does involve acceleration.)

But now we come to the 3 brothers.

It is true that they provide yet another way of confirming the relative time dilation effect. Indeed, the incoming “brother” brings back to Earth a record of what the outbound one’s clock said when they met (though of course this is not necessary, and the information could just as easily have been passed by a radio signal). But he (or the radio message) could also inform Earth of the time that the outgoing traveller inferred was on the Earth clock at his idea of when the crossing of paths happened. This would be consistent with the usual unaccelerated time dilation, which is perfectly symmetrical in the following sense: the Earth observer thinks that at the event on the traveller’s path which is concurrent with any time $t_{E}$ on his Earth clock (measured from the outbound traveller’s departure event), the outbound clock reads $t_{O}=\frac{t_{E}}{\gamma}$; and the outbound observer thinks that at the event on Earth which is concurrent with any time $t_{O}$ on his travelling clock (measured again from the shared departure event), the Earth clock reads $t_{E}=\frac{t_{O}}{\gamma}$.

So, at the particular event where the travelers meet, if the Earth brother thinks this occurs at time $t_{M,E}$ then the outbound traveler’s clock reads $t_{M,O}=\frac{t_{M,E}}{\gamma}$ and he thinks that the concurrent time on the Earth clock is just $\frac{t_{M,O}}{\gamma}=\frac{t_{M,E}}{\gamma^{2}}$.

The inference of the inbound traveler on the other hand, after synchronizing his clock with the outbound, is that during the rest of his trip to Earth, the Earth clock only advanced by the same $\frac{t_{M,E}}{\gamma^{2}}$.

But when they synchronize clocks the two travellers can also compare notes on what they think is showing on the Earth clock at that time. And their disagreement on that will show them immediately that everything works out and there is no paradox.

So in the 3 brothers case there is clearly no paradox, as everyone is aware that each traveler considers only a small part of the Earth’s experience as concurrent with their own travel time and there is no reason to expect that those two intervals should add up to the whole time on Earth. So no two observers are ever forced to agree that only one of them is right about the time dilation of the other.
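The bookkeeping above is easy to check numerically. A small sketch (with c = 1 and illustrative values of my own for v and the meeting time $t_{M,E}$):

```python
import math

def gamma(v, c=1.0):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Illustrative values: meeting event at Earth time t_ME, relative speed v
v = 0.8
t_ME = 10.0
g = gamma(v)

# Earth time each traveller judges concurrent with his own leg:
per_leg = t_ME / g**2          # = t_ME * (1 - v**2), with c = 1
both_legs = 2 * per_leg

total_earth_time = 2 * t_ME    # full round-trip time on Earth's clock
missing = total_earth_time - both_legs
print(per_leg, missing)        # the two legs do NOT cover the whole Earth interval
```

With these numbers each leg accounts for only 3.6 of Earth's 20 units, leaving 12.8 units of Earth time concurrent with neither leg.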

Of course there is not a paradox in the single traveler case either, but it is a more powerful (wrong) intuition that the two intervals judged by the traveller to be concurrent with the legs of his trip should cover the entire Earth time interval (even though we now know from the 3 brothers analysis that they do not).

Note: I am not disputing the validity of the 3 brothers scenario as the most appropriate explanation of why there is no paradox. But I will continue to insist that it is not significantly counterintuitive in its own right, that the solution it provides to the single traveler version identifies the “missing” Earth time with the traveller’s “frame jump”, and that this “frame jump” (no, not just in his head, but between two different frames that he is actually at rest with respect to) is nothing more than an integrated acceleration.

This leads to the question of what we can learn from the sudden jump in the traveler’s inferred Earth time at the turnaround. We learn that when a traveller changes frames then, as his simultaneity space changes, the times he considers concurrent on clocks that are remote from him in his direction of velocity change are advanced (and those behind him are retarded).

Of course the sudden turnaround is physically impossible and in any real situation there would be a period of finite acceleration. But it is a simple exercise in calculus to approximate a period of finite acceleration with a number of discrete jumps and conclude that while the traveler is turning around his inferred current time on Earth advances more rapidly than when he is not accelerating (at a rate proportional to both his acceleration and his distance from Earth).

Now none of this made any use of General Relativity. The acceleration effect we have discovered is purely from Special Relativity and I do agree that it’s a monstrous abuse to address the Twin “Paradox” by invoking acceleration=>GR=>”it’s like being in a gravity well which we all know (from the movies!) causes time dilation”.

But the real beauty of all this is that it DOES go the other way!

In SR, acceleration=>time dilation (relative to clocks that are remote in the direction of acceleration), and so, using only the equivalence principle, we learn (without any of the harder GR analysis) that Matt Damon really does outlive his grandchild!

Given Peter’s earlier answer about the Ehrenfest “Paradox” I am totally surprised and disappointed that he doesn’t seem to appreciate this.

What is Schrodinger’s equation? Is it deterministic or not? If it is, how can we prove that? And what conditions must be satisfied for it to be non-deterministic?

Schrodinger’s equation was originally just the partial differential equation satisfied by the position-space wave function of a particle (or more general system) in non-relativistic quantum mechanics. The same name is sometimes also used for the equation $\frac{d}{dt}\Psi(t)=iH\Psi(t)$ satisfied by the state vector in any NRQM system, regardless of whether or not a position-space representation is being used (or is even available).

It is deterministic (in the sense of determining $\Psi(t)$ uniquely for all $t$ if given an initial condition $\Psi(0))$, so long as the Hamiltonian $H$ is self-adjoint (symmetry is NOT enough!).

The proof of this involves more analysis than I could fit into a Quora answer, but in the general case it follows from the fact that for any self-adjoint operator $H$ on a Hilbert space, the equation $\frac{d}{dt}\Psi(t)=iH\Psi(t)$ is uniquely satisfied by $\Psi(t)=e^{iHt}\Psi(0)$ where the complex exponential of $H$ is defined in terms of its spectral resolution; and for the PDE special cases it might be done by various theorems involving Green’s functions or Fourier analysis and convergence properties of improper integrals.
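As a toy illustration of the spectral-resolution construction (an arbitrarily chosen 2×2 Hermitian $H$, using NumPy; the sign convention matches the equation as written above):

```python
import numpy as np

# A small self-adjoint (Hermitian) Hamiltonian, chosen arbitrarily:
H = np.array([[1.0, 0.5j], [-0.5j, 2.0]])

def evolve(psi0, t):
    """Psi(t) = exp(iHt) Psi(0), built from the spectral resolution of H."""
    evals, V = np.linalg.eigh(H)          # H = V diag(evals) V†
    return V @ (np.exp(1j * evals * t) * (V.conj().T @ psi0))

psi0 = np.array([1.0, 0.0], dtype=complex)
psi = evolve(psi0, t=3.0)

# Determinism: the same initial state always yields the same Psi(t), and
# self-adjointness makes the evolution unitary (norm preserved):
print(np.linalg.norm(psi))   # 1.0 (up to floating-point error)
```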

It may be non-deterministic if $H$ has not been specified on a large enough domain to be essentially self-adjoint (as sometimes happens if boundary conditions are omitted from the specification of a problem in which the particle is confined somehow – either by an infinite potential or by living in a single cell of a crystal lattice for example). But such cases are normally just due to inadequate specification of the problem rather than to a real physical indeterminacy.

So I would say that in a properly defined quantum theory model the Schrodinger equation is indeed almost always deterministic.

[N.B. It wasn’t part of the actual question, but I should perhaps add that the reason this does not make quantum mechanics deterministic is because even complete knowledge of the quantum state of a system is not sufficient to predict the outcomes of all possible experimental measurements. For any state which happens to produce a predictable value for one observable there will be other observables for which the outcome is uncertain.]

What is the definition of an eigenstate of a hermitian operator?

An eigenstate of a quantum observable is a state resulting from a measurement of that observable that has produced a precise value; and according to quantum theory it is represented by a normalized eigenvector of the corresponding self-adjoint operator (whose eigenvalue is equal to the observed measurement value).

An eigenvector of an operator $A$ is a vector $\Psi$ for which $A\Psi=\lambda\Psi$ for some number $\lambda$ (which is called the corresponding eigenvalue).
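A concrete check of the definition, using the Pauli matrix $\sigma_x$ as an example Hermitian operator (NumPy’s `eigh` returns normalized eigenvectors as columns):

```python
import numpy as np

# The Pauli matrix sigma_x as an example Hermitian operator:
A = np.array([[0.0, 1.0], [1.0, 0.0]])

evals, evecs = np.linalg.eigh(A)   # eigenvalues and eigenvectors of A

for lam, psi in zip(evals, evecs.T):
    # A psi = lambda psi, and psi is normalized (a valid eigenstate)
    assert np.allclose(A @ psi, lam * psi)
    assert np.isclose(np.linalg.norm(psi), 1.0)
print(evals)   # eigenvalues -1 and +1
```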