Discussion:
Rayleigh vs. Nyquist
RichD
2017-12-12 23:36:43 UTC
Consider the Nyquist criterion for sampling a continuous
waveform - 2x bandwidth - then the Rayleigh resolution
principle - peaks must separate by at least 1 wavelength.

Don't these look much analogous?
Especially as λ = 1/f

Ruminating a bit more... Nyquist sampling can be
viewed as a mandate to sample each period, at least
twice. And, Rayleigh mandates that the image be
'sampled' twice, in the sense of a peak and trough.

It strikes me they may be equivalent, in some deeper
sense. Has anyone ever tried to derive such a result,
mathematically?

I can't be the first to ever conjecture this -

--
Rich
g***@gmail.com
2017-12-13 13:37:19 UTC
Post by RichD
Consider the Nyquist criterion for sampling a continuous
waveform - 2x bandwidth - then the Rayleigh resolution
principle - peaks must separate by at least 1 wavelength.
Well, lambda/D (with D the diameter of the lens). But sure, it's kind
of a spatial transform rather than a temporal one.

George h.
Post by RichD
Don't these look much analogous?
Especially as λ = 1/f
Ruminating a bit more... Nyquist sampling can be
viewed as a mandate to sample each period, at least
twice. And, Rayleigh mandates that the image be
'sampled' twice, in the sense of a peak and trough.
It strikes me they may be equivalent, in some deeper
sense. Has anyone ever tried to derive such a result,
mathematically?
I can't be the first to ever conjecture this -
--
Rich
Phil Hobbs
2017-12-13 16:16:00 UTC
Post by RichD
Consider the Nyquist criterion for sampling a continuous
waveform - 2x bandwidth - then the Rayleigh resolution
principle - peaks must separate by at least 1 wavelength.
Don't these look much analogous?
Especially as λ = 1/f
Ruminating a bit more... Nyquist sampling can be
viewed as a mandate to sample each period, at least
twice. And, Rayleigh mandates that the image be
'sampled' twice, in the sense of a peak and trough.
It strikes me they may be equivalent, in some deeper
sense. Has anyone ever tried to derive such a result,
mathematically?
I can't be the first to ever conjecture this -
--
Rich
Time in seconds and frequency in hertz are conjugate variables in the
temporal Fourier transform, i.e. they appear multiplied together in the
kernel exp(i 2 pi f t).

In the same way, in Fourier optics distance and angle are conjugate
variables, because the spatial Fourier kernel is exp(i 2 pi (x/lambda)
u), where u is the normalized spatial frequency. You can write u in
terms of the in-plane component of k or in terms of angle:

u = k_perp / k = sin theta,

where theta is the angle that the k vector of the given plane wave
component makes with the normal to the plane where you're computing the
transform.

The sampling theorem is perhaps easiest to understand in terms of the
sampling function sha(t), which is an infinite train of unit-strength
delta-functions spaced at 1-second intervals. It has the special
property of being its own transform, i.e. sha(t) has transform sha(f).

If you take some signal g(t) and multiply it by the sha function, only
the values at t = ..., -2, -1, 0, 1, 2, ... seconds survive.
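
Here's a quick numerical illustration of that self-transform property
(a NumPy sketch of the discrete analogue; a finite comb only
approximates the ideal sha function):

import numpy as np

N, p = 64, 8                        # 64-point comb, one delta every 8 samples
comb = np.zeros(N)
comb[::p] = 1.0
C = np.fft.fft(comb)
# The transform is again a comb: nonzero bins every N/p = 8 bins,
# each of strength N/p -- the discrete counterpart of sha <-> sha.
print(np.nonzero(np.abs(C) > 1e-9)[0])   # [ 0  8 16 24 32 40 48 56]
print(np.allclose(C[::N // p], N / p))   # True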

When you multiply g(t) by h(t), the resulting function has a Fourier
transform G(f) * H(f), where G and H are the transforms of g and h
respectively, and * denotes convolution. This is the convolution
theorem of Fourier transforms. (If you don't know exactly what a
convolution is and what it does, you'll probably have to go find out
before the rest of this explanation will make any sense.)
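
For anyone who wants to check it numerically, here's a NumPy sketch of
the discrete (circular) form of the theorem, where the DFT of a
pointwise product is the circular convolution of the DFTs divided by N:

import numpy as np

rng = np.random.default_rng(0)
N = 64
g, h = rng.standard_normal(N), rng.standard_normal(N)
G, H = np.fft.fft(g), np.fft.fft(h)

# Circular convolution of the two spectra, computed directly:
# conv[k] = sum_m G[m] H[(k - m) mod N]
conv = np.array([np.sum(G * np.roll(H[::-1], k + 1)) for k in range(N)])

# DFT of the pointwise product equals that convolution, scaled by 1/N
print(np.allclose(np.fft.fft(g * h), conv / N))   # True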

Convolving a function G(f) with sha(f) produces the sum of infinitely
many copies of G, each copy offset by ..., -2, -1, 0, 1, 2, ... hertz.  The
result is a mess in general because of all the overlapping
contributions, which are said to be _aliased_ because they keep popping
up at the wrong frequency. So why is this useful?

If you specialize to functions G that have a bandwidth of less than a
hertz, the contributions don't overlap, so the true spectrum G can be
recovered by lowpass filtering the sampled data. This is the sampling
theorem.

There's one other fine point: our G is the spectrum of a
real-valued function (our measured data), so G has Hermitian
symmetry, i.e. G(-f) = G'(f), where the prime denotes complex
conjugation. This means that we have to use up half of our available
bandwidth to accommodate the negative frequencies, so the bandwidth of G
has to be less than 0.5 Hz for a 1-Hz sampling rate. That's the special
case of the theorem that we normally quote. With two-phase (I/Q)
sampling, you can use the full 1 Hz.
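
The Hermitian symmetry is easy to verify numerically; a minimal NumPy
sketch:

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(32)        # real-valued "measured data"
X = np.fft.fft(x)
# G(-f) = G'(f): negative-frequency bins are conjugates of positive ones
print(np.allclose(X[1:][::-1], np.conj(X[1:])))   # True
# which is why np.fft.rfft keeps only the N/2 + 1 independent bins
print(np.allclose(np.fft.rfft(x), X[:17]))        # True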

The Rayleigh criterion also uses both real and Fourier space, but it's a
heuristic based on visual or photographic detection, and not anything
very fundamental. The idea is that if you have two point sources (such
as stars in a telescope) close together, you can tell that there are two
and not just one if they're separated by at least the diameter of the
Airy disc (the central peak of the point spread function). The two are
then said to be _resolved_.
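
For concreteness, here's a short NumPy/SciPy sketch (illustrative, not
from any textbook) of two equal point sources at exactly the Rayleigh
separation; the valley between them only drops to about 74% of the peak
height:

import numpy as np
from scipy.special import j1

def airy(x):
    # Airy intensity pattern (2 J1(x)/x)^2; the x -> 0 limit is 1
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2 * j1(x[nz]) / x[nz]) ** 2
    return out

x0 = 3.8317                          # first zero of J1: the dark-ring radius
x = np.linspace(-4, 8, 2401)
two_stars = airy(x) + airy(x - x0)   # incoherent sum of the two patterns
dip = two_stars[np.argmin(np.abs(x - x0 / 2))]
print(dip / two_stars.max())         # ~0.735: a visible but modest valley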

If you know your point spread function accurately enough, then with a
modern CCD or sCMOS detector with lots of pixels there's no clear lower
limit to the two-point resolution--you can take an image of an
unresolved double star, construct a set of parameterized fit functions
consisting of two copies of the PSF added together (with different
amplitudes and positions), and then fit it to the measured peak shape.
(I proposed
this in the first edition of my book back in 2000, and was surprised to
find out that nobody had done it yet, at least not in astronomy.
Somebody tried it, it worked, and they credited me. Fun.)
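
Here's a toy version of that fitting idea, with a unit-width Gaussian
standing in for the true PSF (the function names and numbers are all
illustrative):

import numpy as np
from scipy.optimize import curve_fit

def psf(x, center, amp):
    return amp * np.exp(-0.5 * (x - center) ** 2)   # unit-width stand-in PSF

def two_psf(x, c1, c2, a1, a2):
    return psf(x, c1, a1) + psf(x, c2, a2)

x = np.linspace(-5, 5, 400)
rng = np.random.default_rng(2)
# Separation 0.8 is well under the PSF width: one merged blob by eye
data = two_psf(x, -0.4, 0.4, 1.0, 0.8) + 0.01 * rng.standard_normal(x.size)

popt, _ = curve_fit(two_psf, x, data, p0=(-1.0, 1.0, 1.0, 1.0))
print(np.sort(popt[:2]))   # recovers centers near -0.4 and +0.4 at this SNR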

On the other hand, you can make a bit more of a connection between
Rayleigh and Nyquist & Shannon than that. Usually when we use a
telescope or a microscope, we just want to look and see what's there,
without having some a priori model in mind. In that case, resolution
does degrade roughly in line with Rayleigh, though there's no aliasing
or spectral leakage as there is with poorly-designed sampling systems.

There is an analogue of aliasing in optics, namely grating orders.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
https://hobbs-eo.com
RichD
2017-12-18 18:24:55 UTC
Post by Phil Hobbs
Post by RichD
Consider the Nyquist criterion for sampling a continuous
waveform - 2x bandwidth - then the Rayleigh resolution
principle - peaks must separate by at least 1 wavelength.
Nyquist sampling can be
viewed as a mandate to sample each period, at least
twice. And, Rayleigh mandates that the image be
'sampled' twice, in the sense of a peak and trough.
It strikes me they may be equivalent, in some deeper
sense. Has anyone ever tried to derive such a result,
mathematically?
The Rayleigh criterion also uses both real and Fourier space, but it's a
heuristic based on visual or photographic detection, and not anything
very fundamental. The idea is that if you have two point sources
close together, you can tell that there are two
and not just one if they're separated by at least the diameter of the
Airy disc (the central peak of the point spread function). The two are
then said to be _resolved_.
The confusing bit is, the Rayleigh criterion is usually
presented as a hard limit, something mathematically precise,
not as a heuristic.
Post by Phil Hobbs
Usually when we use a
telescope or a microscope, we just want to look and see what's there,
without having some a priori model in mind. In that case, resolution
does degrade roughly in line with Rayleigh, though there's no aliasing
or spectral leakage as there is with poorly-designed sampling systems.
In other words, diffraction limited, as the spacing decreases?
Post by Phil Hobbs
There is an analogue of aliasing in optics, namely grating orders.
That would be, if the grating spacing is larger than λ?


--
Rich
Phil Hobbs
2017-12-18 18:37:24 UTC
Post by RichD
Post by Phil Hobbs
Post by RichD
Consider the Nyquist criterion for sampling a continuous
waveform - 2x bandwidth - then the Rayleigh resolution
principle - peaks must separate by at least 1 wavelength.
Nyquist sampling can be
viewed as a mandate to sample each period, at least
twice. And, Rayleigh mandates that the image be
'sampled' twice, in the sense of a peak and trough.
It strikes me they may be equivalent, in some deeper
sense. Has anyone ever tried to derive such a result,
mathematically?
The Rayleigh criterion also uses both real and Fourier space, but it's a
heuristic based on visual or photographic detection, and not anything
very fundamental. The idea is that if you have two point sources
close together, you can tell that there are two
and not just one if they're separated by at least the diameter of the
Airy disc (the central peak of the point spread function). The two are
then said to be _resolved_.
The confusing bit is, the Rayleigh criterion is usually
presented as a hard limit, something mathematically precise,
not as a heuristic.
Yes, well, that's completely up a pole, of course. For instance,
there's also the Sparrow criterion, which is more suited for photometric
measurements. Picture two equal-brightness stars, generating two
overlapping Airy patterns in the image, moving closer together.
Initially the spots are well separated, but at some point they coalesce
into a single peak. To define his limit, Rayleigh picked the separation
where the peak of one lands right in the dark ring of the other.

Sparrow picked the point where the two coalesce, i.e. where the valley
between the two peaks disappears. The Sparrow and Rayleigh limits are
of course different, but they're equally arbitrary.
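
The Sparrow point is easy to find numerically. Here's a sketch
(assuming incoherent addition of two equal Airy patterns) that bisects
on the curvature at the midpoint; it lands at about 78% of the Rayleigh
spacing:

import numpy as np
from scipy.special import j1

def two_airy(x, sep):
    # incoherent sum of two Airy intensities whose centers are sep apart
    def intensity(v):
        v = np.where(np.abs(v) < 1e-9, 1e-9, v)   # dodge 0/0; the limit is 1
        return (2 * j1(v) / v) ** 2
    return intensity(x - sep / 2) + intensity(x + sep / 2)

def mid_curvature(sep, h=1e-3):
    # finite-difference second derivative midway between the two peaks
    return (two_airy(h, sep) - 2 * two_airy(0.0, sep) + two_airy(-h, sep)) / h**2

lo, hi = 2.0, 3.8317   # bracket between "merged" and the Rayleigh spacing
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mid_curvature(mid) < 0:   # still one merged maximum: move apart
        lo = mid
    else:                        # a central dip exists: move closer
        hi = mid
print(mid / 3.8317)   # ~0.78: the Sparrow limit relative to Rayleigh
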
Post by RichD
Post by Phil Hobbs
Usually when we use a
telescope or a microscope, we just want to look and see what's there,
without having some a priori model in mind. In that case, resolution
does degrade roughly in line with Rayleigh, though there's no aliasing
or spectral leakage as there is with poorly-designed sampling systems.
In other words, diffraction limited, as the spacing decreases?
If you don't know in advance what's down there, you can't do any of the
parameter extraction tricks.
Post by RichD
Post by Phil Hobbs
There is an analogue of aliasing in optics, namely grating orders.
That would be, if the grating spacing is larger than λ?
Yes. Gratings whose pitch is less than half a wavelength don't diffract
at all, because the first order is evanescent even at grazing incidence.
Between lambda/2 and lambda, you get exactly one grating order, and it
goes up from there.
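
That bookkeeping follows from the grating equation
sin(theta_m) = sin(theta_i) + m lambda/d, with an order propagating
only while |sin(theta_m)| <= 1. A small sketch (the function name is
mine):

import numpy as np

def propagating_orders(pitch_over_lambda, theta_i_deg=0.0):
    # grating equation: sin(theta_m) = sin(theta_i) + m * (lambda / d);
    # order m propagates only if |sin(theta_m)| <= 1 (else evanescent)
    s = np.sin(np.radians(theta_i_deg))
    lam_over_d = 1.0 / pitch_over_lambda
    m_min = int(np.ceil((-1.0 - s) / lam_over_d))
    m_max = int(np.floor((1.0 - s) / lam_over_d))
    return list(range(m_min, m_max + 1))

print(propagating_orders(0.4, 89.0))   # [0]: pitch < lambda/2, no diffraction
print(propagating_orders(0.8, 89.0))   # [-1, 0]: exactly one diffracted order
print(propagating_orders(3.0, 0.0))    # [-3 ... 3]: and up from there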

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
https://hobbs-eo.com
Behzat Sahin
2017-12-13 19:36:33 UTC
Post by RichD
Consider the Nyquist criterion for sampling a continuous
waveform - 2x bandwidth - then the Rayleigh resolution
principle - peaks must separate by at least 1 wavelength.
Don't these look much analogous?
Especially as λ = 1/f
Ruminating a bit more... Nyquist sampling can be
viewed as a mandate to sample each period, at least
twice. And, Rayleigh mandates that the image be
'sampled' twice, in the sense of a peak and trough.
It strikes me they may be equivalent, in some deeper
sense. Has anyone ever tried to derive such a result,
mathematically?
I can't be the first to ever conjecture this -
--
Rich
Both the Rayleigh limit and the Nyquist rate are so 20th century. They
are basically hard limits for differentiating between two close
entities (in time or space). It has been shown that you can operate
below these limits: for good enough SNR, optical systems can achieve
resolution below the Rayleigh limit, and for sparse or compressible
(usually lossy) data you can use undersampling or compressive
sampling. It is in the eye of the beholder: if you can understand what
you see or hear, that is enough. Regards, Asaf
http://www.laserfocusworld.com/articles/print/volume-52/issue-12/world-news/imaging-theory-breaking-rayleigh-s-limit-imaging-resolution-not-defined-by-the-criterion.html
statweb.stanford.edu/~markad/publications/ddek-chapter1-2011.pdf
Phil Hobbs
2017-12-14 14:58:00 UTC
Post by Behzat Sahin
Post by RichD
Consider the Nyquist criterion for sampling a continuous waveform -
2x bandwidth - then the Rayleigh resolution principle - peaks must
separate by at least 1 wavelength.
Don't these look much analogous? Especially as λ = 1/f
Ruminating a bit more... Nyquist sampling can be viewed as a
mandate to sample each period, at least twice. And, Rayleigh
mandates that the image be 'sampled' twice, in the sense of a peak
and trough.
It strikes me they may be equivalent, in some deeper sense. Has
anyone ever tried to derive such a result, mathematically?
I can't be the first to ever conjecture this -
-- Rich
Both the Rayleigh limit and the Nyquist rate are so 20th century. They
are basically hard limits for differentiating between two close
entities (in time or space). It has been shown that you can operate
below these limits: for good enough SNR, optical systems can achieve
resolution below the Rayleigh limit, and for sparse or compressible
(usually lossy) data you can use undersampling or compressive
sampling. It is in the eye of the beholder: if you can understand what
you see or hear, that is enough. Regards, Asaf
http://www.laserfocusworld.com/articles/print/volume-52/issue-12/world-news/imaging-theory-breaking-rayleigh-s-limit-imaging-resolution-not-defined-by-the-criterion.html
statweb.stanford.edu/~markad/publications/ddek-chapter1-2011.pdf
Spoken like a true software guy. ;)

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
Behzat Sahin
2017-12-15 08:12:49 UTC
Dr Phil, Sir, please watch your language, there are children around!
Software guy? I am a 15-year RF-photonics engineer, and I have been
building prototype mm and sub-mm imaging systems for the last 7 years.
Last year my team and I built a W-band reflection and transmission
Michelson interferometric imaging system for material inspection
(plaster, foam, composite laminate, etc.), with 1.5 mm
super-resolution. That is half a lambda, btw; in mm land we love our
huge waves. Regards and respects and happy holidays, Asaf.
Post by Phil Hobbs
Post by Behzat Sahin
Post by RichD
Consider the Nyquist criterion for sampling a continuous waveform -
2x bandwidth - then the Rayleigh resolution principle - peaks must
separate by at least 1 wavelength.
Don't these look much analogous? Especially as λ = 1/f
Ruminating a bit more... Nyquist sampling can be viewed as a
mandate to sample each period, at least twice. And, Rayleigh
mandates that the image be 'sampled' twice, in the sense of a peak
and trough.
It strikes me they may be equivalent, in some deeper sense. Has
anyone ever tried to derive such a result, mathematically?
I can't be the first to ever conjecture this -
-- Rich
Both the Rayleigh limit and the Nyquist rate are so 20th century. They
are basically hard limits for differentiating between two close
entities (in time or space). It has been shown that you can operate
below these limits: for good enough SNR, optical systems can achieve
resolution below the Rayleigh limit, and for sparse or compressible
(usually lossy) data you can use undersampling or compressive
sampling. It is in the eye of the beholder: if you can understand what
you see or hear, that is enough. Regards, Asaf
http://www.laserfocusworld.com/articles/print/volume-52/issue-12/world-news/imaging-theory-breaking-rayleigh-s-limit-imaging-resolution-not-defined-by-the-criterion.html
statweb.stanford.edu/~markad/publications/ddek-chapter1-2011.pdf
Spoken like a true software guy. ;)
Cheers
Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
whit3rd
2017-12-16 11:45:20 UTC
Post by Behzat Sahin
Both the Rayleigh limit and the Nyquist rate are so 20th century. They are basically hard limits for differentiating between two close entities (in time or space).
The Rayleigh limit is only correct for very low signal/noise ratios;
Shannon's theorem supersedes it (so resolving doublets an order of
magnitude closer than the Rayleigh limit is quite possible). Nyquist is
only a discrete-transform theorem; the uncertainty principles of
quantum mechanics have broader coverage of resolution limitations.

FFT and commutator mathematics are, indeed, 'so 20th century'.
Phil Hobbs
2017-12-16 19:31:23 UTC
Post by whit3rd
Post by Behzat Sahin
Both the Rayleigh limit and the Nyquist rate are so 20th century. They are basically hard limits for differentiating between two close entities (in time or space).
The Rayleigh limit is only correct for very low signal/noise ratios;
Shannon's theorem supersedes it (so resolving doublets an order of
magnitude closer than the Rayleigh limit is quite possible). Nyquist is
only a discrete-transform theorem; the uncertainty principles of
quantum mechanics have broader coverage of resolution limitations.
FFT and commutator mathematics are, indeed, 'so 20th century'.
Well, 19th century. ;)

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
RichD
2017-12-18 17:46:06 UTC
Post by whit3rd
The Rayleigh limit is only correct for very low signal/noise ratios
What is the assumed S/N such that the Rayleigh limit holds?
Post by whit3rd
FFT and commutator mathematics are, indeed, 'so 20th century'.
commutator mathematics?

--
Rich
Phil Hobbs
2017-12-18 18:23:45 UTC
Post by RichD
Post by whit3rd
The Rayleigh limit is only correct for very low signal/noise ratios
What is the assumed S/N such that the Rayleigh limit holds?
Post by whit3rd
FFT and commutator mathematics are, indeed, 'so 20th century'.
commutator mathematics?
--
Rich
Noncommuting operators, as used in quantum mechanics, I suppose.

Chronological snobbery is pretty common these days, of course, but one
might have imagined that mathematical proofs were reasonably immune. ;)

Compressive sensing is an interesting technique that relies on
characteristics of visual scenes, in much the same way as JPEG lossy
compression. JPEG is based on block Fourier transforms, but CS uses a
version of the Walsh-Hadamard transform instead, which multiplies the
image by a set of pseudorandom pixel masks. A few years ago I did some
work for a CS startup called InView, which was sort of spun off the Rice
CS group--they were using a TI micromirror chip to form the Walsh
patterns to enable them to make cameras for the short-wave infrared
(1-2.7 microns) using single-element detectors.
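
Here's a toy version of that mask-based measurement scheme
(illustrative only: a real compressive-sensing camera keeps just a
random subset of the masks and reconstructs by sparse optimization, not
by the full inverse transform used here):

import numpy as np
from scipy.linalg import hadamard

n = 16                                # a 4x4 scene, flattened to length 16
H = hadamard(n)                       # +/-1 Walsh-Hadamard masks, one per row
scene = np.zeros(n)
scene[5], scene[10] = 1.0, 0.5        # a sparse scene: two bright pixels

measurements = H @ scene              # one detector reading per mask pattern
recovered = (H.T @ measurements) / n  # H is orthogonal up to a factor of n
print(np.allclose(recovered, scene))  # True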

The real resolution limit for ordinary imaging is the numerical aperture
of the microscope. The plane-wave spectrum of the received light cuts
off at u = +-NA, i.e. a spatial frequency of +-2 pi NA/lambda radians
per unit length.
You can improve this by imaging in a higher-index medium, as in oil
immersion lenses for biology, water-immersion lenses for lithography, or
"solid immersion" lenses (actually contact lenses) for semiconductor
inspection.
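
As a back-of-envelope check (numbers illustrative), the finest
resolvable grating period goes like lambda/(2 NA), which is exactly
what immersion buys you:

# Smallest resolvable period ~ lambda / (2 * NA): spectrum support is +-NA
lam = 550e-9                                      # green light, illustrative
for name, na in [("dry objective, NA 0.95", 0.95),
                 ("oil immersion,  NA 1.40", 1.40)]:
    print(name, "->", lam / (2 * na) * 1e9, "nm period")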

Superresolution can be achieved in several ways. You can do what I did
for my thesis: use a phase-sensitive confocal laser microscope, which
gets you twice the spatial frequency bandwidth, and then use digital
filtering to make the equivalent of a non-confocal scope working at
half the wavelength. Works great.

There are various sample-modulation methods, where you change the
characteristics of the sample in clever ways as a function of time. If
you have *a priori* information about the sample, you can put that into
the analysis in various ways, such as the PSF-fitting approach I talked
about upthread. And if you have a scanned-probe system, you can use
NSOM or photon-assisted tunnelling or other such schemes where the
spatial resolution comes from the probe diameter and not the focused beam.

There are less-reputable approaches such as numerical analytic
continuation of the Fourier transform out to higher spatial frequency,
but there's little theoretical basis for doing that and it tends to be
horribly ill-conditioned.

Cheers

Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
https://hobbs-eo.com