In this model, there are no "split particle" paradoxes, because there are no entities that behave like the macroscopic bodies our intuitions were built on.
Imagine a Fortran program, with some neat index-based DO loops, and some per-element computations on a bunch of big arrays. When you look at its compiled form, you notice that the neat loops are now something weird, produced by automatic vectorization. If you try to find out how it runs, you notice that the CPU not only has several cores that run parts of the loop in parallel, but the very instructions in one core run out of order, while still preserving the data dependency invariants.
"But did the computation of X(I) run before or after the computation of X(I+1)?!", you ask in desperation. You cannot tell. It depends. The result is correct though, your program has no bugs and computes what it should. It's counter-intuitive, but the underlying hardware reality is counter-intuitive. It's not illogical or paradoxical though.
There still is the 'split particle paradox' because QFT does not solve the measurement problem.
The 'some kind of interaction of graph nodes', by which I am guessing you mean Feynman diagrams, is not of a fundamental nature. It is an approximation known as 'perturbation theory'.
We're leaving my area of understanding, but I believe Haag's theorem shows that the naïve approach, where the interacting and free theories share a Hilbert space, completely fails -- even stronger than that, _no_ Hilbert space could even support an interacting QFT (in the ways required by scattering theory). This is a pretty strong argument against the existence of particles except as asymptotic approximations.
Since we don't have consensus on a well-defined, non-perturbative gauge theory, mathematically speaking it's difficult to make any firm statements about what states "exist" in absolute. (I'm certain that people working on the various flavours of non-perturbative (but still heuristic) QFT -- like lattice QFT -- would have more insights about the internal structure of non-asymptotic interactions.)
"Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054
There exist single photon emitters and single photon detectors.
It follows that there are single photons, given that there are single-photon emitters:
Single-photon source: https://en.wikipedia.org/wiki/Single-photon_source
QFT is not yet reconciled with (n-body) [quantum] gravity, which it has 100% error in predicting; no better than random chance.
IIRC, QFT cannot explain why superfluid helium climbs up the sides of a container against gravity, given the mass of each particle/wave of the superfluid and of the beaker, the Earth, the Sun, and the Moon; even though we say that gravity at any given point is the net sum of the directional vectors acting upon that point, or actually of gravitational waves with phase and amplitude.
You said "gauge theory",
"Topological gauge theory of vortices in type-III superconductors" https://news.ycombinator.com/item?id=41803662
From https://news.ycombinator.com/context?id=43081303 .. https://news.ycombinator.com/item?id=43310933 :
> Probably not gauge symmetry there, then.
I think it's also a pretty strong argument against the mathematical well-definedness of typical (interacting) QFTs in the first place.
Since in some conditions these mathematical tricks behave very similarly to small balls of dirt, we reused the word "particle" and even the names we used when we thought they were small balls of dirt.
[1] We probably never thought they were made of dirt, and in any case the magnetic moment is about twice the value predicted by the small-ball-of-dirt model.
[2] That has so many infinities that it would make a mathematician cry.
Another point is that infinities do not necessarily make mathematicians cry. Abraham Robinson was quite pleased with them. It seems a possible hypothesis that at least some QFTs are mathematically well-defined using non-standard analysis, where 'some QFTs' means at least the renormalizable and perhaps also the asymptotically free ones. I don't know enough about it to know how Haag's theorem, mentioned in another comment, impacts this.
But a sports team is not atomic, not a "final reality" entity. A sports team can pass through one gate, or through several gates, when entering a stadium. From a doctor's perspective, the team "does not exist", a doctor only operates in terms of individual players' organisms.
This works well when interactions are weak. Electrons do not couple strongly to the electromagnetic field, so it makes sense to view electrons as particles. However, quarks couple very strongly to the strong force (hence the name), so the perturbative approach breaks down, and it makes less sense to view quarks as particles.
Also, for context, my question was posed because the idea of "particle number" as well as "quantum states of particles (which are countable) represented in a Fock space" and in general the idea of particles are, like, page 2 of any QFT textbook. It doesn't approximate anything in the theory. Creation and annihilation of particles (and hence the well-defined concept of a particle) is fundamental to the construction of the theory itself, perturbative or not.
You're also manifestly wrong on "the free particle is the only system we can exactly solve".
The free particle solution is an approximation to reality, because reality includes interactions. There's a mathematical formalism to this that we'd agree on, but you might disagree about how to describe it in words.
QFT doesn’t discard local fields and replace them with only nonlocal graph nodes.
Maybe this is coming from some speculative quantum gravity ideas.
How so? QFT is Lorentz invariant. Even has such a thing as the norm flux.
Consider a world in which everything is “very quantum”, and there are no easy approximations which can generally be relied on. In such a world, our human pattern-matching behavior would be really useless, and “human intelligence” in the form we’re familiar with would have no evolutionary advantage. So the only setting in which we evolve to be confused by these phenomena is one where simple approximations do work for the scales we occupy.
Sincerely, I don’t think this argument is super good. But it’s fun to propose, and maybe slightly valid.
So yes, we can use the anthropic argument as evidence for the existence of the classical limit, but it doesn't have explanatory power for why there is a classical limit.
Technology that works in a different universe without atoms would require us to be able to experiment within that universe, if we wanted to produce technology that works there with our current innovation techniques.
And then there was Feynman, asked to explain in layman's terms how magnets work. And he said: I can't, because if I taught you enough to understand, you wouldn't be a layman. But he said it's just stuff you're familiar with, at a larger than usual scale. And he hinted that even then, go one level down and you run out of whys again.
But you know about the Anthropic Principle :)
Any standard course goes over various derivations of classical physics laws (Newtonian dynamics) from quantum mechanics.
We also had a somewhat shoddy derivation of Newton's Laws from the Schrödinger equation, but it wasn't really satisfactory either, because it doesn't really answer the question of when I can treat things classically.
What I'd really like (and haven't seen so far, but also haven't searched too hard) is the derivation of an error function that tells me how wrong I am to treat things classically, depending on some parameters (like number of particles, total mass, interaction strength, temperature, whatever is relevant).
(Another thing that drove me nuts in our QM classes was that "observations" were introduced as: a classical system couples to a quantum system. Which presupposes the existence of classical systems, without properly defining or delineating them. And here QM was supposed to be the more fundamental theory).
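For what it's worth, the "shoddy derivation" above is usually Ehrenfest's theorem, and the term it neglects gives at least a crude version of the error estimate being asked for (a sketch, not a rigorous bound):

    \frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m},
    \qquad
    \frac{d\langle p\rangle}{dt} = -\langle V'(x)\rangle
      = -V'(\langle x\rangle) - \tfrac{1}{2} V'''(\langle x\rangle)\,\sigma_x^2 - \cdots

where σ_x² = ⟨(x − ⟨x⟩)²⟩ is the wave-packet spread. Newton's law for ⟨x⟩ is exact when the correction terms vanish (potentials at most quadratic), and otherwise the leading error scales like V'''(⟨x⟩)·σ_x² relative to V'(⟨x⟩). Wigner-function methods organise the same kind of expansion systematically in powers of ħ.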
There are plenty of ways to do this and things like Wigner functions literally calculate quantum corrections to classical systems.
But generally, if you can't even measure a system before its quantum state decoheres, then its quantum status is pretty irrelevant.
I.e. the time it takes for a 1 micrometer-wide piece of dust to decohere is ~10^-31 s, while a photon needs ~3×10^-15 s just to cross its diameter. So it decoheres more than ten million billion times faster than a photon can even traverse it.
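Rough arithmetic behind those figures, taking the ~10^-31 s decoherence estimate above at face value (a sketch, not a derivation):

    # Light-crossing time of a 1 micrometer dust grain vs. the quoted
    # ~1e-31 s decoherence time.
    c = 2.998e8          # speed of light, m/s
    d = 1e-6             # grain diameter, m
    t_decohere = 1e-31   # decoherence time quoted above, s

    t_cross = d / c
    print(f"crossing time ~ {t_cross:.1e} s")     # ~3.3e-15 s
    print(f"ratio ~ {t_cross / t_decohere:.1e}")  # ~3e16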
So, given that chemistry plays a huge role in how the human (or any) brain works, it would be quite a stretch to argue that the brain works with classical physics.
We are often sloppy and sort all the chemistry in with classical physics, but that's a very human-centric approach. In reality, the Universe doesn't have different "domains" with separate rules for chemistry and physics; it evolves according to the Schrödinger equation, and we use Chemistry as an abstraction to not have to deal with nasty mathematics to predict how certain reactions will work.
It’s been almost entirely based on maths and careful measurements from machined instruments purpose built for observing phenomena.
So at this point you’d hope the limitations of our biological senses would have been long surpassed.
We can try to retrain and reuse the sense for other purposes though. I'm reminded of that film "The Zero Theorem".
Neural nets are called universal approximators for a reason. If what you guys are discussing is true, then a neural net would not be able to learn from a dataset about quantum experiments. I doubt this is the case. Also there is quantum cognition, and by that I mean the fact some researchers figured out a lot of puzzling results from experimental cognitive science seem to make more sense once analyzed from a quantum perspective.
I think you have this backwards. QM IS the law of the universe and Classical Physics is just a high mass low energy approximation of it. In any case there doesn't need to be a logical explanation at all, the laws of physics are as they are. Why is the value of the fine structure constant what it is?
We observe double-slit diffraction and model it with the wave-function. This doesn't preclude other models, and some of those models will be more intuitive than others. The model we use may only give us a slice of insight. We can model a roll of the dice with a function with 6 strong peaks and consider the state of the dice in superposition. The fact that the model is a continuous real function is an artifact of the model, a weakness not a strength. We are modeling a system whose concrete state is unknown between measurements (the dice is fundamentally "blurred"), and we keep expecting more from the model than it wants to give.
Programmers may have better models, actually. The world is a tree where the structure of a node births a certain number of discrete children at a certain probability, one to be determined "real" by some event (measurement), but it says little about "reality". The work of the scientist is to enumerate the children and their probabilities for ever more complex parent nodes. The foundations of quantum mechanics may be advanced by new experiments, but not, I think, by staring at the models hoping for inspiration.
The only way forward at this point is to start with the model and design experiments focusing on some specific element that strikes you as promising. Unless you're staring at the model you're just guessing, and it's practically impossible that you're going to guess right.
This kind of rhetoric saddens me. Someone says "design an experiment" and you jump to the least charitable conclusion. That people do this is perhaps understandable, but to do it and not get pushback leads to it happening more and more, to the detriment of civil conversation.
No, the experiment I had in mind would take place near the Schwarzschild radius of a black hole. This would require an enormous effort, and (civilizational) luck to defy the expectations set by the Drake equation/Fermi paradox. It's something to look forward to, even if not in our lifetimes!
I think the GP was thinking of more practical experiments, not science fiction.
Whenever a physics theory gets replaced it becomes even harder to make an even better theory. In technology low hanging fruit continues to get picked and the next fruit is a little higher up. Of course there are lots of fruits and sometimes you miss one and a solution turns out to be easier than expected but overall every phase of technology is a little harder and more expensive.
This actually coincides with science. Technology is finding useful configurations of science, and practically speaking there are only so many useful configurations for a given level of science. So the technology S-curve is built on the science S-curve.
An obvious example of this is the assumption of the geocentric universe. That rapidly leads to ever more mind-bogglingly complex phenomena like multitudes of epicycles, planets suddenly turning around mid-orbit, and much more. It turns out the actual physics is far simpler, but you have to get past that flawed assumption.
In more modern times relativity was similar. Once it became clear that the luminiferous aether was wrong, and that the universe was really friggin weird, all sorts of new doors opened for easy access. The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong, rather than that the next door is just unimaginably difficult to open. This is probably even more true given the vast numbers of open questions for which we have de facto answers, but yet they seem to defy every single test of their correctness.
---
All that said, I don't disagree that technology may be on an S-curve, but simply because I think the constraints on 'things' will be far greater than the constraints on knowledge. The most sophisticated naval vessel of modern times would look impressive but otherwise familiar to a seaman of hundreds or perhaps even thousands of years ago. Even things like the engines wouldn't be particularly hard to explain, because they would have known full well that a boiling pot of water can push off its top, which is basically 90% of the way to understanding how an engine works.
Even Einstein did not produce special relativity (for example) out of whole cloth. He provided a consistent conceptualization of Lorentz contraction, itself the result of observing discrepancies in the motion of Jupiter's moons. The same could be said of the photoelectric effect, the ultraviolet catastrophe, and QM.
All this to say that your statement "The rapid decline in progress in modern times would seem most likely to suggest that something we are taking as a fundamental assumption is probably wrong" is unsupported. Nothing could be more popular than questioning fundamental assumptions in science today!
It could very well be that, as Sean Carroll puts it, we really know how everything larger than the diameter of a neutron works! Moreover, we know that even if we find strangeness at tiny scales, our current theories WILL remain valid approximations, just like Newtonian mechanics are valid approximations of special and general relativity. The path to progress will not happen because a rogue genius finds something everyone missed and boldly questions long-held assumptions. Scientific revolution first requires an observation inconsistent with known models, but even the LHC hasn't given us even one of those. There is reason to think that GR, QM, and the standard model are all there is...until we do some experiments near a black hole!
That's not true, he didn't.
The geocentric model of the time was a better fit to the data than the Copernican model. What the Copernican model had was simplicity (at some cost to observational fidelity).
Making the heliocentric model approach (and breach) the accuracy obtained by the geocentric model took a lifetime of work by many people.
As a kinematic model (a description of the geometry of motions) as observed from Earth's reference frame, the geocentric model is still pretty darn accurate. There's a reason why it is so. Compositions of epicycles are a form of Fourier analysis -- they are universal approximators. They can fit any 'reasonably well behaved' function. The risk is, and it's the same risk with ML and deep neural nets, that one (i) could overfit and (ii) could generate a model with high predictive accuracy without it being a causal model that generalises.
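A toy sketch of the "epicycles are Fourier components" point, with circular coplanar orbits assumed and rounded orbital elements (an illustration, not an ephemeris):

    import numpy as np

    # With circular, coplanar orbits, Mars as seen from Earth is exactly a
    # two-term sum of rotating circles: a deferent plus one epicycle.
    t = np.linspace(0.0, 15.0, 2000)              # time, years
    earth = 1.00 * np.exp(2j * np.pi * t / 1.00)  # heliocentric Earth, AU
    mars  = 1.52 * np.exp(2j * np.pi * t / 1.88)  # heliocentric Mars, AU

    geocentric = mars - earth                     # Mars as seen from Earth
    deferent   = 1.52 * np.exp(2j * np.pi * t / 1.88)
    epicycle   = -1.00 * np.exp(2j * np.pi * t / 1.00)

    # The two-circle "geocentric" construction reproduces the apparent
    # trajectory exactly, retrograde loops and all.
    assert np.allclose(geocentric, deferent + epicycle)

Add more epicycles and you can fit eccentric or perturbed orbits too, which is exactly the fitting-without-explaining risk mentioned above.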
The heliocentric model was proposed much, much earlier than Copernicus, but the counterarguments were non-ignorable. Reality, it turned out, was very surprising and unintuitive.
I don't think this history says anything against your point -- sometimes the time is just not right for the idea -- and even classical science can be very unintuitive and weird, so much so that common sense can seem like a very strong counterargument against what eventually turn out to be better models.
I of course learned this over many books, but the mind blanks out over which one to suggest. I think biographies of Copernicus and Kepler would be good places to start.
Edit: you may find this interesting:
https://news.ycombinator.com/item?id=42347533
HN, do you know what happened to John Baez's blog that listed his multipart blog posts? They are a treasure trove that I do not want to lose. Azimuthproject too seems to have disappeared.
If one does genuinely believe in a God then the existence of science need not pose a threat to that, since there's nothing preventing one from believing that God also then created the sciences and rationality of the universe. The classical 'gotchas' like 'Can God create a stone so heavy that he could not lift it?' were trivial to answer by simply accepting that omnipotence does not extend to things which are logically impossible, like a square circle.
[1] - https://en.wikipedia.org/wiki/Science_and_the_Catholic_Churc...
So even if our fundamental assumptions are wrong and some new theory is able to explain a bunch of new stuff, chances are it won't impact the stuff we can practically do here on earth, because scientists have already been doing the most extreme experiments they can, and so far progress is still stalled on fundamental physics.
Heliocentrism was most fundamentally driven by somebody with extremely poor interpersonal skills (which, much more than the theory itself, is why he spent his final days under house arrest), moving forward on his own somewhat obsessive bias.
Similarly with relativity. I have no idea what you mean by a 'consistent conceptualization' of Lorentz contraction, but length contraction was a completely ad hoc explanation for the Michelson-Morley experiment. Its correctness was/is more incidental than anything else. Einstein did not cite Lorentz (or anybody for that matter), and I do not think that was unfair or egotistical of him.
--
I'm also unsure of what you're referencing with Sean Carroll, but I'd offer a quote from Michelson of the Michelson-Morley experiment saying essentially the same, "The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.... Our future discoveries must be looked for in the sixth place of decimals."
So convinced was Michelson that the 'failure' of his experiment was just a measurement issue that he made that comment in 1894, nearly a decade after his experiment and shortly before physics and our understanding of the universe exploded in revolutionary fashion thanks to a low-ranking patent clerk.
In "On the Electrodynamics of Moving Bodies"[1] Einstein checks his derivation against Lorentz contraction. It's on page 20 of the referenced English translation. Lorentz' model was ad hoc, E derived it with only 2 postulates (equivalence principle; c invariance). Lorentz was indeed cited, and the cite is useful to connect E's theory to real-world observation. This is true whether or not you want to get pedantic about the meaning of "cite" vs "reference".
1 - https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf Originally "Zur Elektrodynamik bewegter Koerper"
We actually know we have:
Bell’s inequality tells us that the universe is non-local or non-real. We originally preferred to retain locality (ie, Copenhagen interpretation) but were later forced to accept non-locality. But now we have a pedagogy and machinery built on this (incorrect) assumption — which people don’t personally benefit from re-writing.
Science appears trapped in something all too familiar to SDEs:
A technical design choice turned out to be wrong, but a re-write is too costly and risky for your career, so everyone just piles on more tech debt — or modern epicycles.
And perhaps that’s not a bad thing, in and of itself. Eg, geons were initially discarded because the math doesn’t work out — but with the huge asterisk that they might still be topologically stabilized. But the math there is hard and so it makes sense to continue piling onto the current model until enough advances in modeling (eg, 4D anyons) allow for exploring that idea again.
Similar to putting off moving tech stacks until someone else demonstrates it solves their problems.
But at least topological geons would explain one question: why does space look like geometry but particles look like algebra?
Because topological surgery looks like both!
- - - -
> clear that the luminiferous aether was wrong
Another interpretation is that the aether exists, but we’re also made of aether stuff — so we squish when we move, rather than rigidly moving through it (as per the theory tested by Michelson-Morley). That squishing cancels out the expected measurement in MM. LIGO (a scaled MM experiment) then works because waves in the aether squish and stretch us in a detectable way.
Modern theories are effectively this: everything is fields, which we believe to be low-energy parts of some unified field.
The S-curve is really about fundamental limits. Let's say ASI helps us make multiple big leaps ahead, I mean mind-blowing stuff. That still doesn't change that there must be a limit somewhere. The idea that science and tech are infinite is pure science fiction.
Now go look up how precise a prediction the same model makes for the muon g-factor.
Reality can be interpreted as non-local. There has been no conclusive proof it isn't.
c isn't a limit on the kind of non-locality that is required, because you can have a mechanism that appears to operate instantaneously - like wavefunction collapse in a huge region of space - but still doesn't allow useful FTL comms.
Bell's Theorem has no problem with this. Some of the Bohmian takes on non-locality have been experimentally disproven, but not all of them.
The Copenhagen POV is that particles do not necessarily exist between observations. Only probabilities exist between observations.
So there has to be some accounting mechanism somewhere which manages the probabilities and makes sure that particle-events are encouraged to happen in certain places/times and discouraged in others, according to what we call the wavefunction.
This mechanism is effectively metaphysical at the moment. It has real consequences and was originally derived by analogy from classical field theory, with a few twists. But it is clearly not the same kind of "object" as either a classical field or particle.
Non-locality means things synchronise instantly across the universe, can go back in time in some reference frames, and yet reality _just so happens_ to censor these secret unobservable wave function components, trading quantum for classical probability so that it is impossible for us to observe the difference between a collapsed and uncollapsed state. Is this really tenable?
Strip back the metaphysical baggage and consider the basic purpose of science. We want a theoretical machine that is supplied a description about what is happening now and gives you a description of what will happen in the future. The "state" of a system is just that description. A good _scientific_ theory's description of state is minimal: it has no redundancy, and it has no extraneous unobservables.
My lightly held conclusion is that if it really were a full and more straightforward solution, it would dominate the conversation more than it does now. This opinion was formed reading some primary sources but mostly reviews and comparisons of QM theories. Unlike with other methodologies, I have never worked through a full QM example problem in pilot-wave theory.
> the idea that unknown quantities are determining the outcomes in quantum mechanics has been disproven in the event of the speed of light being a true limit on communication speed.
and I provided an immediate counterexample. Yes, Bell's Theorem and its exact assumptions are not entirely straightforward but let's please stop propagating those falsehoods that die-hard proponents of the Copenhagen interpretation commonly propagate.
To quote section 10.2: "The [experimental] system represents a classical realization of wave–particle duality as envisaged by de Broglie, wherein a real object has both wave and particle components."
We've already got all those fields interacting in the real world, so I don't find it very far fetched that quantum mechanics emerges from their fully classically described interactions, probably expressed in some really gnarly 4D math.
[1] https://thales.mit.edu/bush/wp-content/uploads/2021/04/BushO...
To come up with new experiments that might shed light it certainly helps to spend time exploring the models to come up with new predictions that they might make. Sure, one can also come up with new experiments based only on existing observations, but it's most interesting when we can make predictions, as testing those advances some theories and crushes others.
Or at least some clear statement of how it comes about that our reality is not like that.
The squared magnitude of the wave function is a probability distribution. The wavefunction is a continuous (complex-valued) function of position because position is modeled as a continuous real variable. The idea of the wavefunction as a function of position is generally supported by the fact that it can be used to predict the measurement results of diffraction experiments like the double-slit experiment, but also practically the whole field of X-ray diffraction.
There is not just one experimental result that is explained by wavefunctions. There are widely used measurement techniques whose outcomes are calculated according to the quantum properties of matter — like X-ray diffraction and Raman scattering — which are widely considered to be extremely reliable. There is a good reason to explain the model of reality expressed by the equations as clearly as possible, because we want people to be able to use the equations.
Plenty of people (though certainly not all) expect quantum mechanics to be eventually modified to have a consistent theory of gravity. But physicists have experience with this. Special relativity and non-relativistic quantum mechanics were both more complex than Newtonian (classical) mechanics, and quantum field theory is more complicated than either. General relativity is substantially more involved than special relativity. It is likely that further extensions will continue to get more complicated.
The model of reality taught by Newtonian (classical) mechanics is also still widely discussed and used in introductory physics courses and many areas of physics (such as fluid dynamics) and engineering. This model also discusses position on the real line. Even though classical mechanics had to be modified, the use of Cartesian coordinates and real numbers turned out to be durable.
Usually the finitists will formally "rescue" countability by suggesting that the world could exist on the computable numbers, which are countable and invariant under computable rotations. But the computable numbers are a very unsatisfying model of reality, and have a lot of the same "weirdness" as the real numbers. Therefore they suggest that some other model must exist without giving a lot of specifics. Why this should be somehow helpful and not injurious to the pedagogy of physics is not clear.
This is how you get the tortured reasoning that views measurement and observation as somehow different. Even Einstein struggled.
If you place a detector on one of the two slits in the prior experiment (so that you measure which slit each individual photon goes through), the interference pattern disappears.
If you leave the detector in place, but don't record the data that was measured, the interference pattern is back.
This is not remotely true. It looks like you read an explanation of the quantum eraser experiment that was either flawed or very badly written, and you're now giving a mangled account of it.
A lot of people pose it as a question of pure information: do you record the data or not?
But what does that mean? The “detector” isn’t physically linked to anything else? Or we fully physically record the data and we look at it in one case vs deliberately not looking in the other? Or what if we construct a scenario where it is “recorded” but encrypted with keys we don’t have?
People are very quick to ascribe highly unintuitive, nearly mystical capabilities with respect to “information” to the experiment but exactly where in the setup they define “information” to begin to exist is unclear, although it should be plain to anyone who actually understands the math and experimental setup.
An interesting experiment to consider is the delayed-choice quantum eraser experiment, in which a special detector detects which path a particle went through, and then the full results of the detector are carefully fully stomped over so that the particles of the detector (and everything else) are in the same exact state no matter which path had been detected. The configurations are able to interfere once this erasure step happens and not if the erasure step isn't done.
Another fun consequence of this all is that we can basically check what configurations count as the same to reality by seeing if you still get interference patterns in the results. You can have a setup where two particles 1 and 2 of the same kind have a chance to end up in locations A and B respectively or in locations B and A, and then run it a bunch of times and see if you get the interference patterns in the results you'd expect if the configurations were able to interfere. Successful experiments like this have been done with many kinds of particles including photons, subatomic particles, and atoms of a given element and isotope, implying that the individual particles of these kinds have no unique internal structure or tracked identity and are basically fungible.
An important thing to realize is that interference is a thing that happens between whole configurations of affected particles, not just between alternate versions of a single particle going through the slit.
https://en.wikipedia.org/wiki/Double-slit_experiment#Variati...
There is a pattern to the wavefunction, where the amplitude at (x+delta, y, z, t+delta) is closely related to the amplitude at (x, y, z, t). (Specifically, it's that amplitude rotated by delta times the mass of the particle). Or, unless you're being wilfully obtuse, the wave packet moves from x to x+delta in time t to t+delta, rotating as it goes as quantum mechanical waves do.
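A hedged reading of that parenthetical: for a plane wave

    \psi(x,t) \propto e^{\,i(px - Et)/\hbar}

and in the particle's rest frame E = mc², so over a proper-time interval delta the amplitude is rotated by a phase mc²·delta/ħ, i.e. "delta times the mass" in natural units where ħ = c = 1.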
You can, if you really want, insist in Zeno's paradox fashion that nothing ever goes anywhere, that things just exist at given places and times, and in a certain sense that's true. But there's nothing QM-specific about that, and it's misleading to complicate a discussion of QM by claiming so. If we allow that things can move through space, and waves can move through space, then the wave moves through the two slits in the normal sense of all those concepts.
I wish people would stop going out of their way to make QM sound confusing/weird/"spooky". Most of it is just normal wave behaviour for which you can observe exactly the same thing with everyday classical waves.
I think this is a very important distinction actually. A wave amplitude represents an actual displacement in some medium, and waves interfere constructively/destructively because they both move the medium in the same/opposite direction at the same time at the same location. So when a water wave gets pushed through two slits, it breaks into two separate water waves, one coming from each slit, and those two waves push the water up and down at the same time at different locations.
But wavefunctions are very much not like that. A wavefunction amplitude does not represent a displacement in any kind of medium. They represent a measure of the probability that the system being described is in a particular state at a moment in time. That state need not even be a position, it might be a charge, or a spin, or a speed, or any combination of these. Basically quantum systems oscillate between their possible states, they don't oscillate in space-time like matter affected by a wave does.
This also makes it very hard to conceptualize what it means for these wavefunctions to interfere. So the simple picture of "wave A and wave B are pushing the water up at the same time in the same location, so the water rises higher when both waves are there" is much harder to apply to probability oscillations than a direct comparison makes it out to be.
An additional problem when comparing wavefunctions to waves in a medium is that there is no source of a wavefunction. Any system you're analyzing has a single wavefunction, that assigns probability amplitudes to every possible configuration of that system. You can decompose the system's wavefunction as a sum of multiple wavefunctions corresponding to certain measurables, but this is an arbitrary choice: any such decomposition is exactly as valid. In matter waves, if I drop two stones in water at different locations, the water surface's movements can be described as a single wave, but there is a natural decomposition into two interfering waves each caused by one of the stones. There is no similar natural decomposition that quantum mechanics would suggest for a similar quantum mechanical system.
What physically observable distinction are you drawing? The points of water on the far sides of the slits will have a certain height at each point at each time, forming the interference pattern you'd expect. You can decompose that function into a sum of two separate waves, if you want, but you don't have to. And exactly the same thing is true of the quantum mechanical wavefunction for a particle passing through a pair of slits.
> A wavefunction amplitude does not represent a displacement in any kind of medium. They represent a measure of the probability that the system being described is in a particular state at a moment in time. That state need not even be a position, it might be a charge, or a spin, or a speed, or any combination of these. Basically quantum systems oscillate between their possible states, they don't oscillate in space-time like matter affected by a wave does.
I don't think that's a real distinction. Water height is a different dimension from (x,y) position and it behaves very differently; that the wave is moving across the surface and that the surface is moving up and down are orthogonal facts, the reason the former is movement isn't that the latter is movement. A classical electromagnetic wave moves even though it isn't in a medium that's moving (and so does e.g. a fir wave).
> You can decompose the system's wavefunction as a sum of multiple wavefunctions corresponding to certain measurables, but this is an arbitrary choice: any such decomposition is exactly as valid. In matter waves, if I drop two stones in water at different locations, the water surface's movements can be described as a single wave, but there is a natural decomposition into two interfering waves each caused by one of the stones. There is no similar natural decomposition that quantum mechanics would suggest for a similar quantum mechanical system.
Again I don't think this is a real distinction. You have exactly that natural decomposition in the QM system - it's not the only valid decomposition, but it is a valid one and it has some properties that make it nice to work with. And similarly for dropping stones in the water, infinitely many other decompositions are possible and equally valid (e.g. decomposing as two copies of a wave where you dropped two half-sized stones into the water).
Even for EM waves, the classical theory explains them somewhat mechanistically, as an interaction between electrical and magnetic forces that originate from the charged sources and self-propagate.
There is no similar picture you can draw for the quantum mechanical wavefunction. It's the base reality of the system, and it turns out in fact that Newton's laws can be derived as an approximation of the wavefunction. But there isn't any kind of "reason" for which the wavefunction does what it does, like there is for the water waves. And so all ways of separating the wavefunction into different components are just as "natural" as any other.
I don't see that that follows? Both wave pictures are a mathematical approximation to the "real" system of particles moving. Both are ways of calculating the same result. Neither is objectively more valid than the other. You can say that picturing it as "the wave from the first rock overlaid on the wave from the second rock" is an interpretation that makes more physical sense or is nicer to think about, but that's just as true in the QM case.
Yes, exactly.
> You can say that picturing it as "the wave from the first rock overlaid on the wave from the second rock" is an interpretation that makes more physical sense or is nicer to think about, but that's just as true in the QM case.
No, because in QM, the wavefunction is the real, final picture. Any split is arbitrary; the two rocks didn't "cause" the waves, the wavefunction of all the molecules that make up the rocks and the water simply has higher amplitudes for states in which the water molecules look like the wavey surface, and lower amplitudes for states in which the rocks touch the water but the water doesn't move at all, etc. That's all the physics can say.
But at the wave picture level that's always true. You can break down a water wave into a bunch of particles moving according to forces, sure. But that doesn't actually help you answer the question of whether a wave that's gone through a pair of slits is now one wave or two waves, because when you're looking at the particles and forces you can't see the waves any more (except as patterns in the particles and forces - but then you're back in the wave picture).
Where I'm struggling is that classical waves will always spread out spherically, and their energy must do so too. The issue here being that if a photon is a minimal quantum of energy, but is just a classical wave, what prevents it from spreading out and having sub-photon energy? Or if indeed it does, how does that sub-photon quantity get measured? -- if these experiments claim to be emitting a time-series of single photons, classical wave interference won't occur (again, being separated in time).
Right, so that part is new and "spooky" - quantum phenomena are quantised (hence the name). The photon does spread out as a wave, there is in a sense half a photon heading towards the top half of the screen and half a photon heading towards the bottom half of the screen - but then when it hits the screen what we see is a single whole photon that hits either the top half or the bottom half (or, perhaps, half of an us sees a photon hit the top half and half of an us sees a photon hit the bottom half). This is the "wave-particle duality" and while it does fall out of the equations, it's definitely unfamiliar compared to classical physics.
If you want to fully understand, all I can suggest is "work your way through a QM textbook" - every popular explanation I've seen has messed it up one way or another. But it sounds like you're understanding correctly, and thinking for yourself as well - you've hit upon the actual essence of it, the kernel that really is hard.
Again, IANAPhysicist, so I don't know what to think of that video, but the channel seems legit, and the explanation is beautiful in its simplicity.
Instead of particles, I like to view the interactions like the forming of lightning in a thunderstorm. The energy field builds up, and at some point of contact the energy is released in a single lightning strike.
What I still wonder is, if the interaction really depletes the energy-field instantly in a single point, or if there is more going on (on different timescales - maybe with speeds not related to the speed of light).
Edit: Thank you all for the responses, it has been very educational. It appears I was misunderstanding the most important aspect of the double slit experiment. A photon is a wave function when unobserved; it literally goes through both slits and creates an interference pattern like how waves in water would. However, when observed at the slit, or at the detector screen, the wave function collapses and only one photon (billiard-ball-like particle) will be detected.
"two photons come out" part makes no sense though. On a target side, there's always single hit after single photon/electron, but distribution of theses hits as if said electron got through both slits and interfered with itself
P.S. The funny thing is, this works on any small thingy, tested up to 2000 atoms big, as if it's a property of the universe itself.
They did it by splitting a beam of particles into pairs of entangled particles and then setting up a way to measure the polarization of one of them after the point in time where it even hits the final screen. If you measure the polarization then, after the other stream of particles from the beam had already had time to make the pattern, the pattern will be two clusters. If you don't, it goes back to an interference pattern.
That one really cemented the notion in my head that this is just how the Universe is and not some local weirdness with particles and measurements.
https://www.youtube.com/watch?v=RQv5CVELG3U
This is the video if you're interested. Again, I'm no physicist and don't know if explanations are legit or statistically correct. But that little || trick that all other popsci videos play on you, that's a true concern.
You can check it out here.
Summary https://www.stonybrook.edu/laser/_amarch/eraser/index.html
Paper https://www.stonybrook.edu/laser/_amarch/eraser/Walborn.pdf
My fascination with these experiments has never been due to neat clusters of impacts, although popsci depictions have clearly tainted my memory.
https://iopscience.iop.org/article/10.1088/2058-7058/12/11/4
> The largest entities for which the double-slit experiment has been performed were molecules that each comprised 2000 atoms (whose total mass was 25,000 atomic mass units).[19]
The electron/proton entering a slit is affected by gravity!
Presumably a gravity based detector would have similar issues as these particles are affected by gravity (as can be seen around black holes)
https://www.nature.com/articles/s41567-019-0663-9
> Here, we report interference of a molecular library of functionalized oligoporphyrins with masses beyond 25,000 Da and consisting of up to 2,000 atoms, by far the heaviest objects shown to exhibit matter-wave interference to date.
It would be awkward to say that the 2000 atom molecule comes out of both sides... but it does, until you look.
The double slit experiment is not a duplication cheat of reality... it's weirder than that.
I thought the takeaway wasn't that the particle comes out both sides; the implication is that the behavior of a single particle is the same as the behavior of multiple particles - that is to say, it appears to be an interference pattern, even when there should be no other particles to interfere with the single one.
This is fundamental to 100 years of quantum mechanics and underlies most of physics including all semiconductors, materials science, chemistry, lasers, etc. The double slit experiment is just a very good illustration of the principle boiled down to its essentials, which is why it's everywhere in pop-sci. It makes for more accessible story than describing how a hydrogen atom works.
There is no photon multiplication happening on the double slit.
It is as if every photon that went through the slit is somehow aware of all the other photons that did so too, so each photon can choose the (random) position where it hits the wall behind the slit such that, together, they look as if a WAVE went through the slit.
That is (one reason) why they call it "Quantum Weirdness". God is playing dice with us.
There's no photon multiplication, and no "all other photons" changing their path.
There is some inter-photon interaction because they are bosons. But it's not significant enough to impact the multi-slit experiment. And the experiment works exactly the same way if you send only one photon at a time.
Why isn't it just that there's a probability density function that describes the aggregate outcomes of a large number of samples from a random process? Why is "memory" involved?
Only one photon comes out, but it can interfere with itself if it had the possibility of going through either slit.
That nuance aside, the Quantum Eraser Experiment is a real physical experiment that covers what I think you're asking about. If you send photons through double slits in a setup where you can tell which slit the photon went through, you don't get an interference pattern. If you can't tell, you do get the interference pattern.
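A minimal numerical sketch of that last statement, with idealised point slits and made-up numbers for the slit separation and wavelength: the no-which-path case adds amplitudes and shows fringes, the which-path case adds probabilities and shows none.

    import numpy as np

    lam, d = 500e-9, 5e-6                  # wavelength and slit separation, m (illustrative)
    theta = np.linspace(-0.2, 0.2, 1001)   # angle on the screen, rad
    phase = 2 * np.pi * d * np.sin(theta) / lam   # relative phase of the two paths

    psi1 = np.exp(0j * phase)              # amplitude via slit 1 (reference)
    psi2 = np.exp(1j * phase)              # amplitude via slit 2

    no_which_path = np.abs(psi1 + psi2)**2             # 2 + 2*cos(phase): fringes
    which_path    = np.abs(psi1)**2 + np.abs(psi2)**2  # 2 everywhere: no fringes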
You still do not understand what is happening, please READ the article; it shows that the wave function doesn't go through anything and it certainly doesn't create the interference pattern.
https://en.m.wikipedia.org/wiki/Perturbed_angular_correlatio...
I haven’t used it for my research, but it’s an incredible local probe of electric and magnetic fields in materials. There’s no other technique that I’m aware of that smuggles information about the chemical structure of a single coordination sphere into such clean, distinct emissions. The brief excited state of the isotope after the first emission event and before the second is sensitive to practically everything. It all shows up in the deconvoluted spectra.
Shame nearly all the isotopes that work for this are not ones that are super interesting for modern quantum materials. Perhaps that will change out of necessity.
In fact the photon may not actually exist, and I have questions as to what "single photon experiments" are actually measuring. Let me explain.
The EM field is not quantized, or at least not quantized at the level of a photon; what we call a photon is the interaction of the EM field with matter, or more precisely with the electron shells of matter. It is the sound of the wave breaking on the shore, not the wave.
Now none of this actually matters, as the only method we have of interacting with the EM field is through matter (electrons, really), so we can only measure it in photon-sized increments.
But, to "solve" the wave /particle conundrum, I like to think of it as fields all the way down. A "particle" is then a localized and quantisized interaction of said field with another field.
If you think of particles as small billiard balls flying through space on some ballistic trajectory, you'll soon run into all kinds of trouble and the mental model breaks down.
I don't agree with this. You can absolutely consider a classical (non-quantized) EM field interacting with quantized matter. This semi-classical model can describe the photoelectric effect, but it cannot describe other experimental observations such as sub-poissonian photo-detections / photon anti-bunching.
However, note that we can only perturb the EM field in photon-sized energy increments, and we can only pick up disturbances of the EM field in photon-sized bunches as well. Not sure what this implies for how EM field energy is accumulated on electrons in order for us to detect it.
Or in insisting on referring to the electron as a particle.
“We begin by throwing an ultra-microscopic object — perhaps a photon, or an electron, or a neutrino”
In typical probability, we deal with an ensemble of fixed states, or at least phenomena that can be simulated as such.
In quantum physics, the wavefunction is fundamental. The question "what was the exact path?" is meaningless. In particular, if we take the approach of Feynman path integrals, we find that particles take many paths - including circular paths through each slit - before arriving somewhere else where they interact (i.e., become entangled) with, say, an electron in the screen.
Sure, we may consider different experiments (e.g., quantum erasers, see https://lab.quantumflytrap.com/lab/quantum-eraser), but analogies with deterministic particles are whimsical - sometimes they work, sometimes not.
It is not correct— at least not unless you subscribe to the Copenhagen interpretation. Yet, while this interpretation is a simple heuristic for interaction with big systems (e.g., a photon hits a CCD array), none of the quantum physicists I know treat it seriously (for that matter, I have a PhD in quantum optics theory).
I mean, at some certain level, everything is "just a mathematical representation" - in the spirit of "all models are wrong but some are useful". But the wavefunction is more fundamental than measurement. The latter can be thought of as a particle entangling with a system so large that, for statistical reasons, the process becomes irreversible - because of chaos, not fundamental rules.
For reading material, I recommend work on decoherence by W. H. Zurek, e.g. https://arxiv.org/pdf/quant-ph/0105127. Some other references (here a shameless self ad) in https://www.spiedigitallibrary.org/journals/optical-engineer... - mostly in the introduction and, speaking about interpretations, section 3.7.
EDIT: or actually even simpler toy model of measurement, look at the Schrodinger cat in this one: https://arxiv.org/abs/2312.07840
The exact phase of a wavefunction does not matter - but it is an important phenomenon, giving rise to gauge invariance. The Born rule can be derived. In short, since we use unitary operators, length is preserved. For a derivation, see https://journals.aps.org/pra/abstract/10.1103/PhysRevA.71.05....
Also, to be nitpicky, we also never measure probabilities. Something (macroscopic) happened or not. It gives rise to quite fundamental and philosophical questions, including "what is (classical) probability" (I don't know an answer that fully satisfies me), many-worlds interpretations (maybe all possible things just are), and in general questions about indeterminism and free will.
I also don't agree with your comparison of what I said to the nuclear reactions happening inside a star. The problem with the wavefunction without the Born rule is not that it's difficult to observe, it's that it's literally meaningless: knowing the value of the wavefunction for some state of a system doesn't tell you anything at all unless you apply the Born rule to this value.
And as for probabilities, certain kinds of probabilities at least have a very clear and simple definition (though they are rather narrow cases): if you repeat an experiment in exactly the same conditions N times, and an outcome O happens in p of those repetitions and doesn't happen in the other (N-p), then we define P(O), the probability of outcome O, as the value p/N. For systems where this applies, it is very much a measurable quantity (with some noise, of course, related to the fidelity with which you can reproduce the same experiment).
I do agree that this well-defined, measurable, concept of probability is rarely what we mean by "the probability of O", since (a) it's often hard or impossible to repeat (or even perform) the experiment, and (b) we often care about what will happen the next M times we repeat this experiment, and the measure P(O) I defined above does not tell us anything about future events.
You say you need the Born rule to understand what's going on; for this you don't need it as a fundamental phenomenon, you only need to eventually observe the Born statistics, which is sufficient to provide that understanding.
Actually, I'll tell you how it was checked: they ran lots of experiments, and confirmed that the probability to find the particle in one state or the other is precisely equal to the squared norm of the amplitude of the respective state. Also known as the Born rule.
Now, you can dress this in other language. Some versions of MWI say that the universe splits into many literal worlds after any quantum event, and the number of worlds in which it has a certain outcome is proportional to the squared norm of the amplitude of that outcome; based on this, they then derive the Born rule as P(stateA) = num_worlds(stateA) / num_total_worlds = |<stateA|psi>|^2. Of course, this is still the Born rule, and it is still not derivable from the wavefunction, still an additional postulate - just with extra steps.
And I don't know what you mean when you say that the Born rule is not statistics: it is exactly statistics (or at least probabilities, if you make a distinction). Sure it's possible to get a million tails in a row, that is always possible in statistics - by definition, any event with probability higher than 0 is possible.
Amplitudes as quantitative properties are sufficient for calculation of statistics. Ironically classical theory of probabilities works the same way: first it assigns arbitrary weight to outcomes, then divides them by the weight of ensemble (usually >1 contrary to QM) to get statistical coefficients. The weights can be scaled by any constant factor, and the calculation still works.
There was an experiment that measured and built a picture of electron orbitals in a water molecule.
What the experiment did NOT do is directly detect the wavefunction of the electron, because that is, again, not a physically meaningful quantity.
In the dual slit experiment this is visible as you can't get the interference effects by summing the probabilities for "particle through slit 1" and "particle through slit 2" but rather you need to sum the amplitudes of the processes.
Working physicists (for 100 years now) just do this, there is no practical need to interpret it further, but it would be cool if someone could figure out some prediction/experiment mismatch that does indeed require tweaking this!
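The algebra behind "sum the amplitudes, not the probabilities" is just the cross term:

    P = |\psi_1 + \psi_2|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2)

Summing the probabilities keeps only the first two terms; the last one is the interference.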
Do we really have to choose between wave and particle? What does the "particle" model bring to the table that a localized (wavelength-sized) wave/vibration could not?
What they differ about is the interpretation of that model. The equations are the same, but differ in what the variables refer to in the real world. It's really a matter of solving the equation for X vs Y, saying which one is independent and which is dependent.
The problem is that none of the variables correspond directly to anything we have any experience with. The best we can hope for is to isolate part of it and say "this much is like this thing we understand, but there's an additional thing that we'll treat as a correction".
We can try to take the whole thing seriously, and just call it "a quantum thingy" which is not like anything else. This is sometimes called "shut up and calculate", but even that makes assumptions about what things are feasible to calculate and which are hard. That skews your understanding even if you're trying to let it speak for itself.
There is one set of observations, and many many models to describe them: Schrödinger equation formulation, matrix mechanics (Heisenberg, Born, and Jordan), path integral formulation (Feynman), phase space formulation, density matrix formulation, QFT or second quantization, variational formulation, pilot wave theory aka de Broglie-Bohm theory, Hamilton-Jacobi formulation, PT-symmetric quantum mechanics, Dirac equation formulation (well, not really independent, just for spin 1/2 particles).
They all give the same results, and are therefore mathematically equivalent, but different models tend to be associated with different interpretations:
Schrödinger Equation : Copenhagen, Bohmian Mechanics, Many-Worlds
Matrix Mechanics : Copenhagen
Path Integral : Many-Worlds, Stochastic
Density Matrix: Ensemble, Decoherence-based
Second Quantization : Many-Worlds
Pilot Wave Theory : Bohmian Mechanics
Consistent Histories : Decoherence-based
Relational QM : Relational Interpretation
Stochastic Models : Stochastic Interpretations, GRW (Ghirardi–Rimini–Weber) Collapse
Luckily, sometimes the exact solution can be very accurately approximated with a wave equation.
Luckily, sometimes the exact solution can be very accurately approximated with a particle equation.
(Sometimes, the exact solution can be approximated by saying that the lowest energy state is an eigenvector of the Schrödinger equation. Is that a wave? It's not localized, but it's not very wavy either.)
But neither is the exact solution; they are just approximations that together cover 99% of the experiments.
It's difficult to explain, because to explain the details you need something like two years of algebra and calculus and then another two years of physics, and by then you have a degree in physics.
It's possible to solve the difficult equation only in very simple cases like electron-electron collisions, if you allow some cheating and a tiny error. For more complicated systems like electron-muon there are some problems. And for still more complicated systems, you get more technical problems and more approximations.
However, photo-detections with sub-Poissonian statistics cannot be explained by this semi-classical model; they can be explained with a properly quantized EM field (i.e. with photons).
For reference, see Mandel and Wolf's Quantum Optics textbook.
My understanding is that theoretically energy transfer is a function of wavelength.
However, this is not true for EM interactions. If you shine infrared light on a solar panel, you'll see zero current from it, even with an extremely powerful source of light (at some point the material might heat up enough to show some thermo-electric effect, but that's a different thing). However, if you take even a very low-intensity ultraviolet source, you'll see a measurable current right away. This is the unexpected behavior that quantized interactions have, which can't be reproduced with non-quantized waves like sound waves.
OTOH, the energy of a photon is such an abstract concept (not like the kinetic energy of a ball) that I'm not sure it really helps explain it.
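Still, the threshold behaviour is easy to put rough numbers on; a back-of-the-envelope sketch, assuming a silicon-like band gap of about 1.1 eV and illustrative wavelengths:

```python
# Compare per-photon energy to a band gap: below the gap, no amount of
# intensity produces carriers; above it, even a dim source does.
h_c_eV_nm = 1239.84                 # h*c in eV*nm

def photon_energy_eV(wavelength_nm):
    return h_c_eV_nm / wavelength_nm

band_gap_eV = 1.1                   # assumed, roughly silicon
for label, wl in [("infrared, 1500 nm", 1500), ("ultraviolet, 350 nm", 350)]:
    E = photon_energy_eV(wl)
    print(f"{label}: {E:.2f} eV per photon ->",
          "can excite a carrier" if E > band_gap_eV else "cannot, no matter the intensity")
```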
But in order to track state changes from free agents, when you get close to that geometry the engine converts it to discrete units.
This duality of continuous foundation becoming discrete units around the point of observation/interaction is not the result of dueling models, but a unified system.
I sometimes wonder whether we'd struggle with interpreting QM the same way if there weren't a kind of paradigm blindness, with the interpretations all predating the advances in how we model information systems.
Classic labelling issue.
A lot of the article is about this. Start with the section "The Wave Function of Two Particles and a Single Door". The wave packet view can't explain why you don't for example see a "particle" (that is, a dot on a detector) show up simultaneously having gone through two different doors. You have to think about it in terms of a wave in the space of possible joint particle positions.
The problem in these discussions is how to build an intuition about the underlying physical model.
I fail to have an intuition of how a quantized unit of wave can propagate through both slits.
I know that the equations say that the probability of finding the particle at a given location is given by the amplitude squared of the wave function (Born rule).
The image that a "quantized unit of wave propagates through two slits simultaneously" doesn't help me build any further intuition.
Do the two parts going through the two different paths carry half the unit? Clearly that's not the case otherwise they wouldn't be quanta anymore. So does it mean that the entire wavefront is "one unit" no matter how spread out? But in that case, "one unit" of what?
If you have two slits, with a detector to determine which slit the photon went thru, then it'll behave as if it only went thru one of the two slits, at random, and what builds up on the screen is the two (slit A + slit B) overlaid diffraction patterns.
Finally, if you have two slits with NO detector, then what builds up on the screen is the interference pattern, as if the photon had gone thru both slits simultaneously and the two resulting banded diffraction patterns interfered with each other. So what SEEMS to be happening in this case is that the quantum state of the system post-slit is that of the photon simultaneously having gone thru both slits, each slit having diverted it per diffraction, and then these diffraction patterns (probabilities) interfering. Wave collapse can only be happening after this interference (if it were before, there would only be one diffraction pattern and no interference), presumably when the quantum state interacts with the screen.
So, yeah, it seems that the "photon" does "go" through both slits, but this is a quantum representation, not a classical one.
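If it helps, the two cases can be mimicked numerically with idealized point slits in the far field (all parameters below are illustrative, not a real apparatus):

```python
# With which-path information the screen pattern is the sum of two single-slit
# intensities; without it, the amplitudes add first and fringes appear.
import numpy as np

wavelength = 650e-9                       # m
d = 50e-6                                 # m, slit separation
L = 1.0                                   # m, slit-to-screen distance
y = np.linspace(-0.05, 0.05, 2001)        # screen positions, m

phi1 = 2 * np.pi / wavelength * np.sqrt(L ** 2 + (y - d / 2) ** 2)
phi2 = 2 * np.pi / wavelength * np.sqrt(L ** 2 + (y + d / 2) ** 2)
a1, a2 = np.exp(1j * phi1), np.exp(1j * phi2)

with_detector = np.abs(a1) ** 2 + np.abs(a2) ** 2   # constant here (point slits), no fringes
no_detector = np.abs(a1 + a2) ** 2                  # fringes from interference
print(no_detector.max(), no_detector.min())         # ~4 and ~0: clearly visible fringes
```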
But underneath it is all quantum mechanics.
Interpreting this in the many-particle case is more difficult, but the basic idea is that due to single-particle uncertainty, you can't have a definite number of particles indexed by momentum and a definite number of particles indexed by position at the same time. If I had 100 particles that were definitely at x=0, in terms of momentum they'd be spread out over the range of possibilities unpredictably.
The Heisenberg uncertainty principle is not about particles. It’s about statistics and our knowledge about something.
That is, the future direction and momentum of an interaction between two particles can't depend very strongly on both the position where the interaction happened, and on the momentum the particles had before the interaction. If the interaction is a direct collision, so the position is heavily constrained, then the momentum the particles had before the collision will not really matter a lot for what happens after they collide.
If you were to "put yourself in the shoes of" one of the particles, you could say that, because it "knows" where the other particle is at the time of the collision with high precision, it can't "know" the momentum the other particle had with any precision, so its future movement can't depend strongly on that. But this stretches the definition of "knowledge" far beyond the normal understanding of the word.
My point is that it’s not something special about quantum mechanics or particles or even positions and momentum.
It’s inherent in Fourier transform, conjugate variables and covariance matrices.
It happens outside QM, and even outside physics. It’s not a physical attribute, it’s statistical.
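A quick numerical check of the Fourier-transform point, with no physics in it at all - just a Gaussian and its transform on an arbitrary grid:

```python
# The narrower a Gaussian is in x, the broader its transform is in k; the
# product of the spreads stays fixed (~1/2 for Gaussians).
import numpy as np

N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def spread(values, weights):
    """Standard deviation of `values` under the (unnormalized) weights."""
    w = weights / weights.sum()
    mean = np.sum(values * w)
    return np.sqrt(np.sum((values - mean) ** 2 * w))

for sigma in (0.5, 1.0, 4.0):
    f = np.exp(-x ** 2 / (2 * sigma ** 2))        # Gaussian of width sigma in x
    F = np.fft.fft(f)
    dx_spread = spread(x, np.abs(f) ** 2)
    dk_spread = spread(k, np.abs(F) ** 2)
    print(f"sigma={sigma}: dx*dk = {dx_spread * dk_spread:.3f}")   # ~0.5 every time
```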
This is not true at all. In classical mechanics, particles have fully definite properties. In the theory, if two particles collide, the position and momentum they'll have after the collision depend on their exact position and exact momentum before the collision, with no bound on precision.
Of course, classical mechanics admits that we can't measure things to any level of precision, there is some practical bound below which noise in the measurement will drown out the signal. But the interaction itself has no such bound, it happens with infinite precision. If the speed of one of our particles were higher by just 10^-100 m/s, its trajectory might be completely different.
This is not possible in QM. In QM, if the particles collide (they meet at an exact point in space-time), then their trajectories afterwards wouldn't change even if one of their speeds were 10 times higher: if they have definite position, their speed is extremely fuzzy, and it can't significantly affect their trajectories after the event.
And QM turns out to be right about this, when you measure things precisely enough.
As for your comment about them not having defined properties: this is also just one interpretation. You argue for a violation of realism. That's fine, but unnecessary.
Violation of locality or realism is only needed in the context of Bell inequalities, and this assumes there is no superdeterminism and you are “free” to choose your experiment, which is of course a rather strange argument to have to begin with.
This is because a baseball is interacting with other matter on the way to the slit. A photon on the other hand might not interact with any matter and it stays as a wave and you can see an interference pattern on the other side.
This seems to be the entire argument:
> But the wave function is a wave in the space of possibilities, and not in physical space.
Which is fair enough as an initial claim, but it doesn't really get motivated further, or at least not before I got bored reading and started skimming.
This reduces to a kind of "shut up and calculate" attitude, so it seems like a poor starting point from which to write an interpretation text.
However, while a classical three-dimensional wave equation describes how matter oscillates in three-dimensional physical space, a quantum wavefunction doesn't do that. Quantum particles don't oscillate in physical space like that. A three-dimensional wavefunction might describe three particles' positions along a one-dimensional line, and its oscillations are oscillations of probability, not position. The particles don't move, say, up and down. Their probability to be here or there on that 1-d line waxes and wanes.
This is what the article is trying to explain: the basic mathematics of quantum mechanics, the definition of the wavefunction. The value of a wavefunction for the positions of three particles is not a position in space at a moment in time. It is a (complex) probability amplitude for the position of every particle at that moment.
This only seems confusing when looking at wavefunctions that describe positions. But wavefunctions often have many more observables, such as spin or polarization. A wavefunction for two electrons moving around on a plane will not be a two-dimensional wave. It will be a wave in a six-dimensional space, whose axes may be "particle 1 has spin up/down, particle 2 has spin up/down, particle 1 position along the x axis, particle 2 position along the x axis, particle 1 position along the y axis, particle 2 position along the y axis".
If I were to express some sort of wave-function-in-spacetime theory, I'd invoke lots of classical fields filling space and have those wiggle.
In any case, the whole bit about the proper two-particle wave function living in a higher-dimensional space is somewhat spoiled by the fact that you can factorise it into normal 3-space pieces (so long as you don't have your particles interacting), so it doesn't seem such an alien space to me.
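For the curious, here is a tiny sketch of that factorisation point on a discretised 1D grid (the Gaussians and grid are arbitrary choices): a product state is a rank-1 array over the two positions, an entangled one isn't:

```python
# A two-particle wave function on a 1D grid is a 2D array psi[x1, x2].
# A non-interacting (product) state factorises into two 1D pieces; an
# entangled superposition does not.
import numpy as np

x = np.linspace(-5, 5, 200)
g = lambda x0: np.exp(-(x - x0) ** 2)      # a Gaussian bump centred at x0

product = np.outer(g(-2.0), g(+2.0))                                  # f(x1) * h(x2)
entangled = np.outer(g(-2.0), g(+2.0)) + np.outer(g(+2.0), g(-2.0))   # superposition

# A product state has (numerical) rank 1; the entangled one doesn't.
print(np.linalg.matrix_rank(product, tol=1e-8))    # 1
print(np.linalg.matrix_rank(entangled, tol=1e-8))  # 2
```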
Lots of people think that this is the same picture that the wavefunction gives, but this is wrong. In the QM picture, the emitter emits one photon, which is a quantum of energy described by a four-dimensional wavefunction which assigns some probability of a detection event at the slits, at the screen, etc. In this picture, there is no physical EM wave; any interaction with the light will happen at a single localized point in space. Of course, if you add more particles, especially those carrying charges, the picture changes, and you'll see probabilities that roughly correspond to a picture of an oscillating EM field. But the wavefunction, which is the "bedrock" physical theory, is separate from those waves in the EM field, which are just an approximate picture of the probabilities dictated by the wavefunction.
Can't really get any other sense out of your reply, but I'm not entirely sure.
Of course this is not the only valid view...just one that makes sense to me. Thinking about these sorts of questions is a very fun endeavour.
> The wave function’s pattern can travel across regions of possibility space that are associated with the slits.
Which to me conflicts with his emphatic “no” at the beginning of the article because this implies you can define some mapping between the physical and probability space. And of course you can because if you couldn’t the theory would not be physically predictive.
And as for saying that the wave moves through both slits, that also doesn't make sense, by the very definition of the wave function - it's a wave in probability space, not in space, so it just doesn't move through space.
I don't think that's a valid argument. Imagine a regular water wave, i.e. a wavefunction h = h(x, y, t) describing the height of the water at position (x, y) at time t. You could say "this is a wave in height space, not in space, so it just doesn't move through space" and in a certain sense that's true. But obviously there is something that does "move" through "space" to the extent that anything can ever be said to do so.
for point 2 it seems you can define a mapping from the physical space to probability space. Saying that the wave doesn’t “move through” space might be technically correct but also seems like semantics on the definition of the phrase “move through” ?
In the original QM model, light is not a wave in the classical electrical theory sense. Light is made up entirely of photons, which are particles just like electrons or billiard balls, and they are described by a wavefunction. That wavefunction gives them various probabilities of being in various states at a certain time, and those probabilities can increase or decrease when more particles come into the mix. The states can represent position, momentum, charge, spin, energy levels, etc.
> Figure 4: The wrong wave function! Even though it appears as though this wave function shows two particles, one trailing the other, similar to Fig. 3, it instead shows a single particle with definite speed but a superposition of two different locations (i.e. here OR there.)
I understand that if we treat the act of adding two particles' wave functions as creating a new wave function for one particle, then we have this problem, essentially by definition. But it got me thinking - would it not make sense to treat the result as an expected value, such that we could then measure how many particles are likely to be to the right of the door at each point in time?
This is different from a classical probability. Suppose we simply don't know whether the baseball was fired from HERE or from THERE. In a classical situation, we can carry forward our understanding of the situation in time by simply calculating what the classical particles would do independently. In quantum mechanics the mechanics are of the wave function itself, not of the things we measure. We cannot get the right answer by imagining first that we measure the particle in one location and calculate forward and then by imagining we measure the particle in another and calculating forward and then adding the results. It isn't how the theory works. We must time evolve the wave function to predict the statistical behavior of measurement in the future.
One can only measure by interacting, there is no other way.
The split-operator method for the numerical solution of the time-dependent Schrödinger equation is used to simulate the propagation of a Gaussian wave packet in arbitrarily adjustable potentials.
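For anyone who wants to play with this, a minimal sketch of the split-operator scheme in 1D (natural units hbar = m = 1; the potential, grid and packet parameters below are just illustrative choices):

```python
# Split-operator (split-step Fourier) method for the 1D time-dependent
# Schrödinger equation: half a step in the potential, a full kinetic step
# in momentum space, then the other half step in the potential.
import numpy as np

N, L = 1024, 200.0                       # grid points, box size
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # momentum grid matching the FFT layout

# Initial Gaussian wave packet moving to the right (illustrative parameters)
x0, sigma, k0 = -40.0, 5.0, 1.0
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)    # normalize

V = np.where(np.abs(x) < 1.0, 0.5, 0.0)          # a small square barrier (arbitrary)

dt, steps = 0.05, 2000
half_V = np.exp(-0.5j * V * dt)                  # half-step in the potential
full_T = np.exp(-0.5j * k ** 2 * dt)             # full kinetic step in k-space

for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(full_T * np.fft.fft(psi))
    psi = half_V * psi

prob = np.abs(psi) ** 2                          # Born-rule probability density
print("norm:", np.sum(prob) * dx)                # should stay ~1 (unitary evolution)
```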
https://blog.rongarret.info/2018/05/a-quantum-mechanics-puzz...
The wave went through the slits, not the "wave function". There is no "quantum" because there is nothing to measure so there is no quantum physics.
The fact that we are quantifying things is the problem. When we look at everything as a whole which is affected by waves, we will find the solution.
No. Group movement of particles is one medium in which waves can occur, but the concept is more fundamental and general. The waves described in the article are not in particles.
So the question in the title doesn't make much sense.
IMO so much writing about quantum mechanics gets harder to follow by trying to jump classical -> quantum and certain -> probabilistic at the same time. If one makes the latter switch first, it cuts out the noise of the easier-to-understand step before getting to the second.
Is it really similar to the "slits" we see in daily life or something different going on here?
You could make a double-slit experiment by shining a laser on a single strand of hair; this effectively creates two wide slits, one on either side.
Or draw over some glass with a black sharpie and scratch two openings in it with a needle.
Since the effect is clearer when the slit spacing is close to the wavelength, you can also buy pre-made slides if you don't feel like making one yourself.
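A rough estimate of the fringe spacing you'd see at home, assuming a red laser pointer and a hair-width slit separation (both numbers are assumptions):

```python
# Fringe spacing on the wall for a home-made double slit: delta_y = lambda * L / d.
wavelength = 650e-9   # m, typical red laser pointer
d = 80e-6             # m, slit separation on the order of a hair's width
L = 2.0               # m, distance from the slits to the wall

fringe_spacing = wavelength * L / d
print(f"fringe spacing ~ {fringe_spacing * 1e3:.1f} mm")   # ~16 mm, easily visible
```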
Trying to collapse a quantum to either a particle or a wave loses some of the behaviour of the thing you're talking about, and that is where some of the confusion comes from: taking one viewpoint to its wrong conclusion.
Particles are just standing waves, so to speak. They are not just an amorphous clay-like lump of matter. They are made of smaller things and those things are churning around. That in-place churning becomes a wave when the particles move at speeds that approach a significant fraction of c.
The take is that possibility space does not have the same constraints as physical space. I can't recall that ever being presented as a real problem, but at least it's an opener for a discussion.
The question was and still is what influences individual particles to form an interference pattern over time. One interpretation is that something changes, or interferes with, the probability wave function of the particle's 'virtual' trajectory. The empirical evidence for this statistical anomaly is the pattern on the screen. Well of course, the double slit causes the interference pattern, but it's still not clear why. Best working guesstimate so far is that it behaves like a bullet in physical space and like a wave in possibility space at the same time. This guesstimate was enough to become the foundation of modern quantum physics, and so much more. But is it the whole story? Do we want to know?
I think the double slit experiment is just flawed, and it's a miracle that someone was able to derive a working model from its results.
Because I would like to understand the infinite-slits case from the wave-probability-space picture described in the article. Although the article's author says that's coming next week or so.
Choice of how to measure -> History
It is, rather:
Choice of how to measure + physical system -> Observations -> Interpretation of observations -> History
The choice of what and how to measure will influence the history you conclude, but that is true of actual "Caesar and Napoleon" history too, and in that case it's definitely not that past events are being changed, instead it is your knowledge of them. A really interesting principle is that any philosophical question that can be phrased without referring to ideas that only exist in quantum mechanics can usually be answered without referring to them.
We are looking at a bird's body and calling it a particle, but it has wings we don't see, which affect the direction the particle flies.
Three words: pilot wave theory.
To quote Cockshott, the Copenhagen Interpretation is an idealist recapitulation of Russian Machism/Bishop Berkeley. The statement "nothing /is/ until it is observed" is not necessarily a Weird Quantum formulation but just a solipsistic attitude applicable to all scientific observation in general.
In another sense Bohmian mechanics just kicks the can down the road - we may decide to associate the specific thing we observe with a particle situated on the pilot wave, but in fact, as far as the theory goes, the particle can live at any point in the pilot wave it wishes and nothing about the dynamics of the pilot wave changes at all. Thus we simply place the non-determinism in the past rather than in the present.
Furthermore, Bohmian mechanics seems to break Newton's First Law, since the pilot particle, as hinted above, is influenced by the pilot wave but not vice versa. The appeal of Bohmian mechanics is obvious, but superficial. It does not dispense with the can of worms, just opens it from the other side, in my opinion.
It is also rather nice to think of the particles as just being points in space with nothing else associated with them; an electron is just an electron because the portion of the wave function that is relevant and guiding it is the electron portion; see a paper from 2004 entitled "Are all particles identical?" [1] (I am a coauthor on that). If one thinks about it, we only know about particles through their motion so having things like mass and charge linked to the object guiding the particle seems perfectly reasonable. Points are not only not labelled by numbers (particle 1, 2, etc) but also not labelled by mass and charge.
The nondeterminism of not knowing the initial conditions is fine; the point was to have a theory with well-defined objects that give some plausible story and connection to our experiences, such as stuff existing and being somewhere. The fact that non-relativistic Bohmian mechanics happens to be deterministic is just happenstance for many of its supporters. In some QFT versions, the dynamics of creation is not deterministic and there is no reason for that to be a problem. But it is well-specified without having to invoke some special magic action called "observation".
As for QFT, the biggest problem for Bohmian mechanics is the need to have an actually well-defined evolution of the wave function. The idea of particles being created and annihilated is not particularly hard. And, in fact, recent work has shown that if one takes that seriously and respects probability leaking from n-particle space to n+1 and n-1, then at least some of the divergence problems go away. See [2].
[1] https://arxiv.org/abs/quant-ph/0405039
[2] https://arxiv.org/abs/1809.10235
At that point it's very obviously a violation of Occam's razor though. It's like positing that the content of my field of vision is an objectively real thing, that the reason the universe looks like a video projection is that there really is a video projection going on, even though that video projection has no physical effect.
> If one thinks about it, we only know about particles through their motion so having things like mass and charge linked to the object guiding the particle seems perfectly reasonable.
Indeed. But if one thinks a little more, what's the point of positing a particle at all, if all of the physics is in the pilot wave?
The physics, therefore, is not all in the pilot wave. If you take as the point of a particle theory that there should be particles with positions changing in time, then that is what is being given in Bohmian mechanics.
Also, ask yourself, if the wave function is on configuration space, what constitutes a configuration? In Bohmian mechanics, it is clear, but if the wave function is all there is, then why are we talking about configuration space at all? It is just this abstract vector in Hilbert space evolving and many different representations can happen. Why do we not perceive reality in terms of these other representations?
If it helps, you can think of the wave function a bit like a dynamical law. In [1], the authors suggest thinking of log(psi) analogously to the Hamiltonian H on phase space in classical mechanics. There is no back-action on H, and most of it is irrelevant to the evolution of a particular particle system in that framework, and yet everyone recognizes it as just a convenient way of describing the dynamics.
The difference is that psi evolves, but even that may only be true from a subsystem point of view. It is theoretically possible to have a stateless universal wave function which, when particular particle positions of the environment are plugged in, nonetheless gives evolving subsystem wave functions.
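For concreteness, a minimal sketch of the guiding equation in 1D (hbar = m = 1), using an illustrative spreading Gaussian packet; the particle's velocity is Im(psi'/psi) evaluated at its position, and every parameter below is an arbitrary choice:

```python
# Bohmian guidance equation in 1D: dX/dt = Im( dpsi/dx / psi ) at x = X.
import numpy as np

def psi(x, t, sigma=1.0, k0=1.0):
    """Spreading free Gaussian packet (hbar = m = 1). The x-independent
    normalization prefactor is omitted since it cancels in psi'/psi."""
    s = sigma ** 2 + 0.5j * t
    return np.exp(-(x - k0 * t) ** 2 / (4 * s) + 1j * (k0 * x - 0.5 * k0 ** 2 * t))

def velocity(x, t, eps=1e-5):
    """Guidance equation: v = Im(psi'/psi), derivative taken numerically."""
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return np.imag(dpsi / psi(x, t))

# Integrate one Bohmian trajectory with a simple Euler step
x, dt = 0.3, 0.01            # initial position (arbitrary) and time step
for step in range(1000):
    x += velocity(x, step * dt) * dt
print("final position:", x)   # drifts with the packet and spreads outward with it
```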
Occam's razor is difficult to apply here without prejudice. If you want to minimize the number of equations, then sure, "the wave function is everything" works, but it comes at the cost of there being what could be considered an infinite number of "you"s and everything else, all slightly different, and whole other realized versions of the universe with no connection to us. If you want collapse somewhere, then you have to posit that mechanism.
On the other hand, by adding in particles and the guiding equation, one gets a singular "you" and everything that we experience is, more or less, definite and singular. So the "existing" stuff is dramatically reduced.
Which one of these is truly simpler is a matter of taste, I would say. I think in terms of communicating with people, the Bohmian version of "there is this universal wave and the positions of stuff are guided by it" is pretty simple. The law itself is so trivially a part of the Schrödinger equation that it could easily have been derived before the Schrödinger equation itself. Contrast this with the other versions, which are "reality collapses to a definite state when we look at it" or "there are infinitely many different universes". None of those seem as simple.
We know that particles don't have identity though - exchange of identical particles is a symmetry and physics would be very different if it wasn't. I won't claim it's compelling, but to me that suggests that a particle is more like a pattern or a field excitation than a thing with its own concrete existence.
> Why do we not perceive reality in terms of these other representations?
What would be different if we did? I mean obviously at a macroscopic level particles moving through space is a model that gives a good approximation and is easy to think in, but that doesn't mean they're any more physically real than e.g. temperature.
The "you" is then a rough set of particles whose trajectories roughly coincide with your macroscopic trajectory. Their identity is just given by where they are.
As for representations, I feel like I can easily understand how to get momentum or temperature from particles and their time evolution (trajectories), but I do not see how to get, say, the positions of particles just from knowing what their momenta were and their time evolution.
https://pubmed.ncbi.nlm.nih.gov/26989784/
I tend to think (as some others do) that it's also a much better way to reason about quantum computation. Should the factorization of a large semiprime by Shor's algorithm be attributed to the semi-mystical power of The Observer collapsing the wave function (and who is that, by the way - the sensor, or the person reading the sensor?), or are we instead exploiting realism to do the work?
Stop with the “wave particle duality”.
Stop with the “until it’s measured”.
Explain the experimental setup in great detail.
What do you mean by “a particle is emitted?”. What do you mean by “a particle is measured?”.
Even within the bounds of self described “double slit experiment”s there are numerous variations on how it is designed, constructed, and conducted.
Stop explaining the abstract notion of the experiment through a lens of your preconceived interpretation.
Show me data.
Show me numerical analysis.
https://iopscience.iop.org/article/10.1088/1367-2630/15/3/03...