What we interpret as the locations of elementary particles can certainly be compared with the predictions of regular mechanics. And they will often be quite close, so the classical predictions are actually useful. But the pattern of (usually small) variations from those predictions, while not “purely” probabilistic, does seem to have a component which cannot be explained in terms of some more precise classical properties that we just have not been able to properly measure. So our idea of a regular particle may just be something that does not really exist, and what we interpret as its position may indeed be something that has an essentially probabilistic component.
Tag: probability
Are Bayesian and Frequentist probability really inherently completely different, or is there a false dichotomy between them?
I would say that they are inherently “somewhat” different even though they are both used for similar purposes and obey the same mathematical axioms (of a countably additive non-negative measure on some space of “outcomes” in which the measure of the set of all outcomes is 1).
Of course there are many probability measures in the mathematical sense which have nothing to do with any actual probabilities in the real world, so having the same mathematical structure does NOT preclude their being “completely different”.
But probabilities in the real world are supposed to have something to do with our expectations regarding what will happen when we perform an experiment which could in principle be repeated an unlimited number of times. And since any concept of probability is intended as a measure of how “likely” we expect physical events to be, if both kinds are useful they can’t really be all that different in what they lead us to expect.
The difference, as I see it, is as follows:
The frequentist probability of an “Event” (which is just the jargon for a subset of the space of all possible outcomes) is assumed to be the proportion of “all” experimental trials in which the outcome is consistent with the event. Despite its rather fuzzy definition (in terms of whatever we mean by “all” trials), this is assumed to be an actual property of the experiment, albeit one that can never be determined by experimental tests (because any finite set of trials might fail to be representative). Frequentist statisticians often try to choose between different possible sets of assumptions by computing, under each set, the probability of what they have actually observed and favouring the set that makes it most probable, but they generally do so with the mindset that there is one set of assumptions that is really true.
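To make the contrast concrete, here is a rough Python sketch of the frequentist picture. The coin, the counts, and the candidate biases are all invented for illustration; the point is only that the observed proportion stands in for the probability, and that rival assumptions are ranked by how probable they make what was actually seen.

```python
# A rough sketch, assuming a made-up coin-tossing experiment (58 heads in
# 100 tosses) and a handful of candidate biases.
from math import comb

heads, tosses = 58, 100            # hypothetical observed data

# The empirical proportion stands in for the frequentist probability of heads.
print("observed proportion:", heads / tosses)

# Each candidate assumption about the coin is scored by the probability it
# assigns to the data actually seen (a binomial likelihood).
for bias in (0.4, 0.5, 0.58, 0.7):
    likelihood = comb(tosses, heads) * bias**heads * (1 - bias)**(tosses - heads)
    print(f"P(data | bias={bias}) = {likelihood:.4f}")
# The frequentist favours the highest-scoring assumption, with the mindset
# that exactly one bias is the coin's true property.
```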
Bayesian probability, on the other hand, is something that is always acknowledged to depend on the observer (through both initial choices and subsequent acquired knowledge). Given those choices and experience, the Bayesian probabilities are perfectly well known, because the Bayesian approach provides an explicit rule for modifying one’s assumptions in the light of experiments. But because it depends on the observer’s assumptions and experience, the Bayesian probability is not a well-defined property of the experiment and outcome alone.
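And here is the corresponding sketch of that explicit updating rule, for the same invented coin data. The Beta prior is just one observer's arbitrary choice, picked here only because it makes the update a one-line calculation, not because it is in any sense the “right” prior.

```python
# A minimal sketch, assuming the same hypothetical coin data and a conjugate
# Beta prior (chosen only to keep the update arithmetic trivial).
prior_a, prior_b = 1.0, 1.0        # Beta(1, 1): one observer's arbitrary starting point

heads, tails = 58, 42              # hypothetical observed data

# Bayes' rule for the Beta-Binomial pair: add the counts to the prior parameters.
post_a, post_b = prior_a + heads, prior_b + tails

# The posterior mean is this observer's current probability of heads; another
# observer with a different prior would report a different number.
print("posterior mean:", post_a / (post_a + post_b))
```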
The difference is subtle though, because one cannot do any frequentist analysis without making an assumption (usually about what constitutes a set of “equally likely” outcomes). But though one can make and change the assumptions, the Frequentist attitude remains that they are assumptions about something that exists and are either true or false, while the Bayesian does not necessarily ever claim that there is one set that is really “true”.
However there are theorems which tell us (roughly speaking) that if some frequentist assumption is in fact correct, then no matter how bad the initial assumptions of the Bayesian are, after a lot of experiments it is very probable (in the frequentist sense) that the Bayesian probabilities become close to the “correct” ones.
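A rough numerical illustration of that convergence (with the coin's “true” bias set, purely for the sake of the example, to 0.3):

```python
# A rough numerical illustration, assuming a coin whose frequentist bias
# really is 0.3 (an invented value).  Two observers with very different
# Beta priors end up with nearly the same posterior mean after many tosses.
import random

random.seed(0)
true_bias = 0.3
flips = [random.random() < true_bias for _ in range(10_000)]
heads = sum(flips)
tails = len(flips) - heads

for name, (a, b) in {"optimist": (50.0, 1.0), "pessimist": (1.0, 50.0)}.items():
    post_mean = (a + heads) / (a + b + heads + tails)
    print(f"{name}: prior mean {a / (a + b):.2f} -> posterior mean {post_mean:.3f}")
# Both posterior means land close to 0.3, the assumed "correct" value.
```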
So in the end both approaches end up giving the same advice about what one “should” expect (though neither gives anything I can understand as a non-circular definition of what that means!) and whether the difference in attitude is a “false dichotomy” is something I think we each have to decide for ourselves.
From Quora: Isn’t a ‘probability wave’ simply a statistical function and not a real wave? Does it no more ‘collapse’ than me turning over a card and saying that the probability ‘wave’ of a particular deal has collapsed?
Well, as other answers have noted, the wave function of quantum mechanics is not a probability wave as its values are complex and it is only the squared amplitude that gives a probability density. But the process of “collapse” involved in a quantum measurement does involve something like your playing card analogy.
There are actually two stages in the measurement and observation process. One is the interaction with an incompletely known measurement apparatus which reduces or eliminates the prospects for future interference and basically turns the previously pure state of the isolated system (considered as a subsystem of the larger world) into a statistical mixture. And the second is the collapse of that statistical mixture by observation – with the result that, from the observer’s point of view, of the many possible alternatives only one is actually true.
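As a toy illustration of those two stages for a single qubit (with the state and the crude model of decoherence invented purely for the example):

```python
# A toy sketch, assuming a single qubit in an invented superposition and an
# extremely crude model of decoherence (simply erasing off-diagonal terms).
import numpy as np

# A pure state: its amplitudes are complex, so it is not itself a probability wave.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())       # density matrix of the pure state

# Stage one: interaction with an incompletely known apparatus suppresses
# interference, idealized here as dropping the off-diagonal terms and
# leaving a statistical mixture.
rho_mixed = np.diag(np.diag(rho_pure))

# Stage two: observation picks one alternative, with the Born-rule
# probabilities given by the squared amplitudes on the diagonal.
probs = np.real(np.diag(rho_mixed))
outcome = np.random.choice([0, 1], p=probs)
print("probabilities:", probs, "observed outcome:", outcome)
```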
And if this makes it seem to you that the “state” of the system actually depends on the observer then you are on the right track. (But it is nothing special about the “consciousness” of the observer that is relevant here. Almost any localized system could play the same role relative to the rest of the universe.)
Any configuration history of any physical system can be considered as “seeing” the rest of the universe in a “relative state” which “collapses” when the configuration history in question passes a point beyond which the configuration includes information about that particular measurement value.