Post by RichD
Consider the Nyquist criterion for sampling a continuous
waveform - 2x bandwidth - then the Rayleigh resolution
principle - peaks must be separated by at least 1 wavelength.
Don't these look analogous?
Especially as λ = 1/f
Ruminating a bit more... Nyquist sampling can be
viewed as a mandate to sample each period at least
twice. And Rayleigh mandates that the image be
'sampled' twice, in the sense of a peak and trough.
It strikes me they may be equivalent, in some deeper
sense. Has anyone ever tried to derive such a result,
mathematically?
I can't be the first to ever conjecture this -
--
Rich
Time in seconds and frequency in hertz are conjugate variables in the
temporal Fourier transform, i.e. they appear multiplied together in the
kernel exp(i 2 pi f t).
In the same way, in Fourier optics distance and angle are conjugate
variables, because the spatial Fourier kernel is exp(i 2 pi (x/lambda)
u), where u is the normalized spatial frequency. You can write u in
terms of the in-plane component of k or in terms of angle:
u = k_perp / k = sin theta,
where theta is the angle that the k vector of the given plane wave
component makes with the normal to the plane where you're computing the
transform.
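To see the spatial version concretely, here's a minimal numpy sketch
(the wavelength, tilt angle, and grid are arbitrary illustrative
choices): the spatial spectrum of a tilted plane wave peaks at
f_x = sin(theta)/lambda, i.e. at normalized frequency u = sin(theta).

import numpy as np

lam = 0.5e-6              # wavelength, m (arbitrary choice)
theta = np.deg2rad(10.0)  # tilt of the plane wave from the plane's normal
k = 2 * np.pi / lam

L = 200e-6                # width of the sampled strip, m
N = 4096
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

field = np.exp(1j * k * np.sin(theta) * x)     # one plane-wave component

spectrum = np.fft.fftshift(np.fft.fft(field))
fx = np.fft.fftshift(np.fft.fftfreq(N, d=dx))  # spatial frequency, cycles/m

fx_peak = fx[np.argmax(np.abs(spectrum))]
print("measured u =", fx_peak * lam)           # approximately sin(theta)
print("sin(theta) =", np.sin(theta))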
The sampling theorem is perhaps easiest to understand in terms of the
sampling function sha(t), which is an infinite train of unit-strength
delta-functions spaced at 1-second intervals. It has the special
property of being its own transform, i.e. sha(t) has transform sha(f).
If you take some signal g(t) and multiply it by the sha function, only
the values at t = ..., -2, -1, 0, 1, 2, ... seconds survive.
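A quick discrete stand-in (all the numbers below are arbitrary) shows
the self-transform property in miniature: a comb of unit impulses every
M samples has a DFT that is again a comb, spaced N/M bins apart.

import numpy as np

N, M = 64, 8
comb = np.zeros(N)
comb[::M] = 1.0            # unit impulses at 0, M, 2M, ... : a finite sha

spectrum = np.fft.fft(comb)
print(np.round(np.abs(spectrum), 3))
# Nonzero (value N/M = 8) only at bins 0, 8, 16, ... -- a comb in frequency too.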
When you multiply g(t) by h(t), the resulting function has a Fourier
transform G(f) * H(f), where G and H are the transforms of g and h
respectively, and * denotes convolution. This is the convolution
theorem of Fourier transforms. (If you don't know exactly what a
convolution is and what it does, you'll probably have to go find out
before the rest of this explanation will make any sense.)
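If you'd rather check the convolution theorem numerically than take it
on faith, here's a short sketch with random test signals (the signal
length and the explicit circular-convolution helper are just for
illustration; the 1/N factor comes from this DFT convention):

import numpy as np

rng = np.random.default_rng(0)
N = 128
g = rng.standard_normal(N)
h = rng.standard_normal(N)

G, H = np.fft.fft(g), np.fft.fft(h)

def circular_convolve(a, b):
    # (a * b)[k] = sum over m of a[m] b[(k - m) mod n], matching the DFT's periodicity
    n = len(a)
    idx = np.arange(n)
    return np.array([np.sum(a * b[(k - idx) % n]) for k in range(n)])

lhs = np.fft.fft(g * h)                  # transform of the pointwise product
rhs = circular_convolve(G, H) / N        # convolution of the transforms
print(np.allclose(lhs, rhs))             # True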
Convolving a function G(f) with sha(f) produces the sum of infinitely
many copies of G, each copy offset by ..., -2, -1, 0, 1, 2, ... hertz. The
result is a mess in general because of all the overlapping
contributions, which are said to be _aliased_ because they keep popping
up at the wrong frequency. So why is this useful?
If you specialize to functions G that have a bandwidth of less than a
hertz, the contributions don't overlap, so the true spectrum G can be
recovered by lowpass filtering the sampled data. This is the sampling
theorem.
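Aliasing itself takes only a couple of lines of arithmetic to
demonstrate (the tone frequencies below are arbitrary): sampled at
1 Hz, a 1.2 Hz cosine produces exactly the same samples as a 0.2 Hz
cosine.

import numpy as np

t = np.arange(0.0, 20.0, 1.0)            # sample instants, 1-second spacing
low = np.cos(2 * np.pi * 0.2 * t)        # 0.2 Hz: inside the recoverable band
high = np.cos(2 * np.pi * 1.2 * t)       # 1.2 Hz: outside it
print(np.allclose(low, high))            # True: 1.2 Hz masquerades as 0.2 Hz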
There's one other fine point: our G is the spectrum of a
real-valued function (our measured data), so it has Hermitian
symmetry, i.e. G(-f) = G'(f), where the prime denotes complex
conjugation. This means that we have to use up half of our available
bandwidth to accommodate the negative frequencies, so the bandwidth of G
has to be less than 0.5 Hz for a 1-Hz sampling rate. That's the special
case of the theorem that we normally quote. With two-phase (I/Q)
sampling, you can use the full 1 Hz.
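Here's a small numerical check of that symmetry (the record length is
arbitrary); for the DFT, bin N-k plays the role of -f:

import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal(256)             # real-valued "measured data"
G = np.fft.fft(g)

k = np.arange(1, len(g))
print(np.allclose(G[-k], np.conj(G[k]))) # True: G(-f) = conj(G(f))
# A complex (I/Q) record has no such symmetry, which is why it gets to use
# the full sample-rate bandwidth instead of half of it.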
The Rayleigh criterion also uses both real and Fourier space, but it's a
heuristic based on visual or photographic detection, and not anything
very fundamental. The idea is that if you have two point sources (such
as stars in a telescope) close together, you can tell that there are two
and not just one if they're separated by at least the radius of the
Airy disc (the central peak of the point spread function), so that the
peak of one sits on the first dark ring of the other. The two are
then said to be _resolved_.
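For a concrete picture, here's a short scipy sketch (units and grid are
arbitrary, and x is the usual normalized radial coordinate of the Airy
pattern): two equal sources separated by the Rayleigh distance give a
combined profile that dips to roughly 73% of the peak between them,
which is what the eye picks up as 'two'.

import numpy as np
from scipy.special import j1

def airy(x):
    # Airy intensity [2 J1(x)/x]^2, with the x -> 0 limit (which is 1) put in by hand
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2 * j1(x[nz]) / x[nz]) ** 2
    return out

x0 = 3.8317                              # first zero of the Airy pattern
x = np.linspace(-10, 10, 2001)           # normalized radial coordinate
profile = airy(x - x0 / 2) + airy(x + x0 / 2)   # two sources, Rayleigh-separated

print("peak     :", profile.max())                  # ~1.0
print("midpoint :", profile[np.argmin(np.abs(x))])  # ~0.73: the visible dip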
If you know your point spread function accurately enough, then with a
modern CCD or sCMOS detector with lots of pixels there's no clear lower
limit to the two-point resolution--you can take an image of an
unresolved double star, construct a set of parameterized fit functions
consisting of two copies of the PSF added together (with different
amplitudes and positions), and then fit it to the measured peak shape.
There's no clear lower limit to how well you can do that. (I proposed
this in the first edition of my book back in 2000, and was surprised to
find out that nobody had done it yet, at least not in astronomy.
Somebody tried it, it worked, and they credited me. Fun.)
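Here's a sketch of that kind of two-point fit, with a Gaussian standing
in for the real PSF and scipy's curve_fit doing the work (the numbers,
the noise level, and the Gaussian PSF are assumptions for illustration,
not anything from an actual instrument): the two "stars" sit well
inside one PSF width, so the image shows a single blended peak, yet the
fit still pulls out both positions and amplitudes.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

x = np.linspace(-5, 5, 200)              # detector coordinate (arbitrary units)
sigma = 1.0                              # PSF width, assumed known

def psf(x, x0):
    return np.exp(-0.5 * ((x - x0) / sigma) ** 2)

def two_star_model(x, a1, x1, a2, x2):
    # two copies of the PSF with independent amplitudes and positions
    return a1 * psf(x, x1) + a2 * psf(x, x2)

true = (1.0, -0.4, 0.7, 0.4)             # separation 0.8 sigma: one blended peak
data = two_star_model(x, *true) + 0.005 * rng.standard_normal(x.size)

popt, pcov = curve_fit(two_star_model, x, data, p0=(1.0, -1.0, 1.0, 1.0))
print("fitted:", np.round(popt, 3))
print("true  :", true)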
On the other hand, you can make a bit more of a connection between
Rayleigh and Nyquist & Shannon than that. Usually when we use a
telescope or a microscope, we just want to look and see what's there,
without having some a priori model in mind. In that case, resolution
does degrade roughly in line with Rayleigh, though there's no aliasing
or spectral leakage as there is with poorly-designed sampling systems.
There is an analogue of aliasing in optics, namely grating orders.
Cheers
Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
https://hobbs-eo.com