Oh no you didn’t (disagree with Brian Greene)
The other day on Twitter, I disagreed with this short video of Brian Greene’s (@bgreene) explaining the Uncertainty Principle. Though he didn’t reply to my comment, @livingona3brane, another physicist, disagreed with me right back. Unfortunately, Twitter is a difficult medium for discussing complex concepts, so I could only hint at what I meant before letting the discussion drop.
Although I’m a computer scientist and software engineer, not a physicist, I have audited several college physics courses from respected professors at multiple universities. Part of my work involves physics simulation, and I generally find the subject fascinating, so I regularly seek out courseware and lectures.
What I’ve found lacking in nearly every discussion I’ve heard or read about the Uncertainty Principle is a critical but subtle distinction that really should be made clear. I’ve only ever encountered it in the context of signal processing, which is something I’m quite familiar with from my own field.
The missing critical distinction is this: Uncertainty is a problem of quantization, not a problem of quantum mechanics. In signal processing it’s referred to as the Gabor Limit or Heisenberg-Gabor Limit: a signal’s duration and bandwidth can’t both be made arbitrarily small, since the product of their RMS widths is bounded below by 1/(4π). This subtle distinction is why I believe uncertainty applies to Classical Mechanics as well as Quantum Mechanics, though it may only be noticeable at ridiculous extremes of arbitrary precision.
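To make that concrete, here’s a minimal NumPy sketch (the function name and all the parameter values are mine, purely for illustration) that measures the duration-bandwidth product of a Gaussian pulse. No matter how far you shrink the pulse in time, the product sits on the 1/(4π) floor; a Gaussian is the signal that achieves the limit exactly.

```python
import numpy as np

def duration_bandwidth_product(sigma_t, fs=10_000.0, duration=4.0):
    """RMS duration times RMS bandwidth for a Gaussian pulse.

    The Gabor limit says this product can never drop below 1/(4*pi);
    a Gaussian envelope achieves it exactly.
    """
    t = np.arange(-duration / 2, duration / 2, 1.0 / fs)
    x = np.exp(-t**2 / (2 * sigma_t**2))  # Gaussian pulse centered at t=0

    # RMS width in time, weighted by the signal's energy |x(t)|^2
    p_t = np.abs(x) ** 2
    p_t /= p_t.sum()
    width_t = np.sqrt(np.sum(p_t * t**2))

    # RMS width in frequency, weighted by spectral energy |X(f)|^2
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x), d=1.0 / fs)
    p_f = np.abs(X) ** 2
    p_f /= p_f.sum()
    width_f = np.sqrt(np.sum(p_f * f**2))

    return width_t * width_f

print(f"Gabor limit: {1 / (4 * np.pi):.5f}")
for s in (0.05, 0.01, 0.002):
    print(f"sigma_t = {s:5}: product = {duration_bandwidth_product(s):.5f}")
```

Note there’s nothing quantum in that code: it’s pure Fourier analysis of a sampled signal.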
Consider the way you measure the momentum and position of a baseball. You have two options, though each can be achieved multiple ways. The first is sampling the position (with a camera, Doppler, etc.) and differentiating to get the momentum. The second is measuring the energy imparted on impact (piezo, solenoid, etc.), which is again sampled over time and integrated to derive the position at a single point in time. In both cases the Gabor Limit, and hence Heisenberg Uncertainty, should apply. It’s the same Fourier-transform relationship between conjugate properties that links the time and frequency domains and every other conjugate pair subject to uncertainty. On top of that, you have the practical problem of precision being tied to sample rate, which has physical limitations.
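Here’s a toy version of the first option (every number here is hypothetical, and velocity stands in for momentum since the mass is fixed): track a constant-velocity ball with noisy position samples and differentiate. Widening the fit window averages down the velocity error, but the estimate then belongs to a longer stretch of time, so you know less about where the ball was when it had that momentum.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: a ball at constant velocity, tracked by a
# 240 fps camera with ~1 mm of per-frame position noise.
fs = 240.0                                        # frames per second
v_true = 40.0                                     # m/s
t = np.arange(0.0, 1.0, 1.0 / fs)
x = v_true * t + rng.normal(0.0, 1e-3, t.size)    # noisy positions (m)

# Differentiate by taking the least-squares slope over a window.
# More frames -> less velocity noise, but the estimate is now
# smeared across a wider span of time (and of position).
for n in (2, 8, 64):
    v_est = np.polyfit(t[:n], x[:n], 1)[0]        # slope = velocity
    print(f"window = {n / fs * 1e3:6.1f} ms   "
          f"v_est = {v_est:7.3f} m/s   "
          f"|error| = {abs(v_est - v_true):.3f} m/s")
```

The 2-frame case is a pure finite difference; the 64-frame case is effectively a low-pass filter. That’s exactly the time-versus-frequency trade the Gabor Limit describes.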
Now it’s possible I’m missing some esoteric method of directly measuring both simultaneously and thus avoiding the Fourier transform. If that’s the case, I’d love to hear about it, and why it doesn’t apply at quantum scales. There’s also the oft-conflated observer effect, which is less of a nuisance at macro scales; I believe it’s been shown to be a genuinely separate issue that accounts for only a small fraction of the overall uncertainty. It’s just minuscule compared to reasonable relative precision at macro scales. It’s still there at extremely high frequency/energy/precision with Doppler, and in nonlinear conversion losses with piezoelectric and solenoid sensors.
The only other way I can see to avoid this is if you are dealing solely with Platonic ideals, where the functions you are solving for are much simpler and better behaved and you have a hypothetical infinite sampling rate. That, I think, is a failure of philosophy, not of classical mechanics.
These same issues should apply to relativistic mechanics, since it is an extension of classical mechanics for extreme circumstances. You still have to deal with quantization and with deriving conjugate properties.