
What happens at sampling rates lower or higher than the Nyquist rate?

I understand that for a signal to be properly sampled, it has to be sampled at the Nyquist rate. What I do not understand is what happens at sampling rates lower than the Nyquist rate, and at rates higher than it, for that matter.


Short practical answer

for a signal to be properly sampled, it has to be done [at] the Nyquist sampling rate

Up front: sampling at the Nyquist frequency is the bare minimum rate needed to reproduce the frequency of the input signal. Depending on your requirements, I would advise going at least 2 times that, if not higher, to reproduce the amplitude and shape of the signal as well. In other words, higher sampling rates yield better reconstructed signals.

Background
The problem is that in this digital age, analogue signals are sampled digitally, i.e., at fixed sampling rates. The Nyquist criterion states that the sampling rate should be at least twice the highest frequency in the target signal.

Suppose the target signal is a simple sinusoid with a frequency f (Fig. 1). And let us start with the worst-case scenario, namely a digital sampling rate (SR) equaling f. In this scenario, we end up with a straight line in the digitally recorded signal (upper panel).

If we slightly increase the SR to 4/3 f, the result is not much better (lower panel).

In fact, it is not until we double the SR to 2 times f that we obtain a saw-tooth with a frequency equaling that of the input signal. The shape is, however, unlike the input signal, but at least we have the target input frequency right. Even so, this does not mean we can faithfully reproduce the signal: for one thing, the amplitude of the reconstructed signal will depend on the phase shift between the SR and the signal. From here on, you can imagine that going to 4 f will substantially improve the reconstructed signal in terms of shape and amplitude (see, e.g., this web page of Cardiff University).


Fig. 1. Discrete sampling of an analogue target signal. source: National Instruments
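The scenarios in Fig. 1 can be reproduced numerically. Below is a minimal Python sketch (the function name and the phase offset are my own choices, not from the figure; a phase offset is used so that sampling at SR = f does not happen to land exactly on zero crossings):

```python
import math

def sample_sine(f, sample_rate, n_samples, phase=0.0):
    """Return n_samples of sin(2*pi*f*t + phase) taken at the given sample rate."""
    return [math.sin(2 * math.pi * f * n / sample_rate + phase)
            for n in range(n_samples)]

f = 10.0  # target signal frequency, Hz

# SR = f: every sample hits the same point of the cycle -> a flat line.
flat = sample_sine(f, f, 8, phase=math.pi / 2)

# SR = 2f: samples alternate between +1 and -1, a "saw-tooth" at frequency f.
alternating = sample_sine(f, 2 * f, 8, phase=math.pi / 2)

# SR = 40f: the samples begin to trace the true sinusoidal shape.
dense = sample_sine(f, 40 * f, 80)
```

At SR = f every sample lands at the same phase, giving the flat line of the upper panel; at SR = 2f the samples alternate, reproducing the input frequency but not the input shape.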


Here is the 10 kHz signal (the maximum frequency of the signal is 10 kHz):

Now, if we sample the signal at 5 kHz and 10 kHz, the signal will look as shown below (the brown points):

It is clear that we can't get any useful information from the sampled signal. To get the frequency information of the signal, Nyquist tells us that we must sample at a rate of at least twice the maximum signal frequency. Here are the signals sampled at the Nyquist rate and at rates above it:

It can be seen that by sampling at the Nyquist rate, we can recover the frequency information of the signal. However, to faithfully reconstruct the signal, we have to increase the sampling rate even more.

For more details please visit https://www.gaussianwaves.com/tag/sampling-theorem/



Powerlab* recording system has an A/D board as well as amplifiers and communicates with the computer through its USB port.

Input Voltage (mV)          Binary (Base 2) Value    Decimal (Base 10) Value
-10 000 to -9 999.695       0000000000000000         0
...                         ...                      ...
                            0111111111111110         32766
0.0 to +0.305               1000000000000000         32768
                            1000000000000001         32769
...                         ...                      ...
+9 999.695 to +10 000       1111111111111111         65535

As mentioned previously, one important step when carrying out A/D conversion is to keep the input signal within the input voltage range of the ADC. In our case, should the input signal exceed +10 000 mV, a 16-bit binary number with an equivalent decimal value of 65535 would still be returned to the computer. The computer would thus interpret the voltage as +10 000 mV, which would be in error. This error is called saturation of the ADC. At the same time, the input signal should span as much of the ADC input voltage range as possible without saturating the ADC, since this increases the signal-to-noise ratio. Thus, if the voltage range of the input signal is much smaller than ±10 000 mV, the signal should be amplified before being fed to the input of the ADC.
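The table above can be sketched as a small conversion routine. This is a hedged illustration in Python: it assumes an ideal, perfectly linear 16-bit converter with a ±10 000 mV range, which real ADCs only approximate.

```python
def adc_code(voltage_mv, full_scale_mv=10_000.0, bits=16):
    """Map an input voltage (mV) to an unsigned ADC code, saturating at the rails."""
    levels = 2 ** bits                     # 65 536 steps for a 16-bit converter
    step = 2 * full_scale_mv / levels      # ~0.305 mV per step
    code = int((voltage_mv + full_scale_mv) / step)
    return max(0, min(levels - 1, code))   # clamp at the rails: this is saturation

adc_code(-10_000.0)   # 0
adc_code(0.0)         # 32768
adc_code(15_000.0)    # 65535 -- over-range input saturates at full scale
```

Anything beyond the rails clamps to code 0 or 65535; that clamping is exactly the saturation error described above.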

Any ADC has a maximum sampling rate. In some circumstances, this maximum sampling rate is not high enough to satisfy the Nyquist conditions mentioned above. In that case, one can pass the analogue signal through a low-pass filter before sending it on to the ADC. This filter acts to remove some of the high-frequency content of the signal that would otherwise alias down in frequency, producing spurious low-frequency content along the lines illustrated above. Note that this anti-alias filtering could remove high frequency information of physiological importance to the phenomenon under investigation. If it is important to retain these higher frequencies, one has no choice but to use a better data acquisition system that has a higher sampling rate.
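To make the idea concrete, here is a toy low-pass filter in Python. A real anti-alias filter is a much steeper analogue design; this first-order digital filter (my own simplification, not part of any acquisition system) just shows high frequencies being attenuated while low frequencies pass:

```python
def one_pole_lowpass(samples, alpha=0.1):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Smaller alpha -> lower cutoff. Real anti-alias filters are far steeper."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

dc_tone = [1.0] * 200          # a 0 Hz (DC) signal: passes almost unchanged
nyq_tone = [1.0, -1.0] * 100   # fastest possible alternation: heavily attenuated

smoothed_dc = one_pole_lowpass(dc_tone)
smoothed_nyq = one_pole_lowpass(nyq_tone)
```

The DC signal comes through at nearly full amplitude, while the rapidly alternating one is reduced to a small ripple, which is the behaviour an anti-alias filter needs near the Nyquist limit.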

A biological signal can be broken down into fundamental frequencies, with each frequency having its own intensity. Display of the intensities at all frequencies is a power spectrum. Usually we are interested in signals of a particular frequency range or bandwidth. The bandwidth is determined by filters, which are devices that alter the frequency composition of the signal.

Ideal frequency-selective filter: a filter that exactly passes signals in one set of frequencies and completely rejects the rest.
There are three types of filter:

  • Low-frequency filter, or in older terminology high-pass: removes low frequencies.
  • High-frequency filter, or in older terminology low-pass: removes high frequencies.
  • Notch filter: removes a single frequency, usually 60 Hz.

Any unwanted signal that modifies the desired signal is noise. It can have multiple sources.

  • Thermal noise: the random motion of atoms generates this noise. It is present everywhere and has a nearly constant power spectral density (PSD).
  • Interference: the imposition of an unwanted signal from an external source on the signal of interest.
  • Sampling (quantization) noise: an artifact of the acquisition process that occurs when you digitize a continuous signal with an A/D converter that has a finite number of steps. Interestingly, you can dither your signal (add white noise, i.e., noise whose power does not vary with frequency) to reduce the overall effect of this noise.
  • Narrowband/broadband: two general categories of noise. Narrowband noise confines itself to a relatively small portion of the overall signal bandwidth as defined by Nyquist, while broadband noise occupies a significant portion of it. For example, 60 Hz hum is narrowband because it is typically limited to a component at 60 Hz; thermal noise is broadband because its PSD is constant, meaning its energy is distributed over nearly the entire spectrum.

Signal-to-Noise Ratio (SNR): a measurement of the variance of the signal relative to the variance of the noise. The higher the SNR, the better you can distinguish your signal from the noise.
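As a sketch of that definition in Python (the variance-based formulation follows the sentence above; the sequences below are toy data):

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from the variances of two sequences."""
    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)
    return 10 * math.log10(variance(signal) / variance(noise))

signal = [1.0, -1.0] * 50    # variance 1
noise = [0.1, -0.1] * 50     # variance 0.01
snr_db(signal, noise)        # about 20 dB: signal power 100x the noise power
```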


Section 2.3: Sampling Theory

So now we know that we need to sample a continuous waveform to represent it digitally. We also know that the faster we sample it, the better. But this is still a little vague. How often do we need to sample a waveform in order to achieve a good representation of it?

The answer to this question is given by the Nyquist sampling theorem, which states that to well represent a signal, the sampling rate (or sampling frequency—not to be confused with the frequency content of the sound) needs to be at least twice the highest frequency contained in the signal.

For example, look back at our time-frequency picture in Figure 2.3 from Section 2.1. It looks like it only contains frequencies up to 8,000 Hz. If this were the case, we would need to sample the sound at a rate of 16,000 Hz (16 kHz) in order to accurately reproduce the sound. That is, we would need to take sound bites (bytes?!) 16,000 times a second.

In the next chapter, when we talk about representing sounds in the frequency domain (as a combination of various amplitude levels of frequency components, which change over time) rather than in the time domain (as a numerical list of sample values of amplitudes), we’ll learn a lot more about the ramifications of the Nyquist theorem for digital sound. But for our current purposes, just remember that since the human ear only responds to sounds up to about 20,000 Hz, we need to sample sounds at least 40,000 times a second, or at a rate of 40,000 Hz, to represent these sounds for human consumption. You may be wondering why we even need to represent sonic frequencies that high (when the piano, for instance, only goes up to the high 4,000 Hz range). The answer is timbral, particularly spectral. Remember that we saw in Section 1.4 that those higher frequencies fill out the descriptive sonic information.


Xtra bit 2.1
Free sample:
a tonsorial tale

Just to review: we measure frequency in cycles per second (cps) or Hertz (Hz). The frequency range of human hearing is usually given as 20 Hz to 20,000 Hz, meaning that we can hear sounds in that range. Knowing that, if we decide that the highest frequency we're interested in is 20 kHz, then according to the Nyquist theorem, we need a sampling rate of at least twice that frequency, or 40 kHz.

Figure 2.7 Undersampling: What happens if we sample too slowly for the frequencies we're trying to represent?

We take samples (black dots) of a sine wave (in blue) at a certain interval (the sample rate). If the sine wave is changing too quickly (its frequency is too high), then we can't grab enough information to reconstruct the waveform from our samples. The result is that the high-frequency waveform masquerades as a lower-frequency waveform (how sneaky!), or that the higher frequency is aliased to a lower frequency.


Applet 2.2
Oscillators

This applet demonstrates band-limited and non-band-limited waveforms.

Band-limited waveforms are those in which the synthesis method itself does not allow harmonics, or frequencies, higher than the Nyquist limit (half the sampling rate). It’s kind of like putting a governor on your car that doesn’t allow you to go past the speed limit. This technique can be useful in a lot of applications where one has absolutely no interest in the wonderful joy of listening to things like aliasing, foldover, and unwanted distortion.
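One common way to build such a "governor" is additive synthesis that simply stops adding harmonics at the Nyquist limit. A minimal Python sketch (my own illustration, not the applet's actual code):

```python
import math

def bandlimited_saw(freq, sample_rate, n_samples):
    """Additive-synthesis sawtooth that sums harmonics only up to the Nyquist
    limit, so nothing in the waveform can alias."""
    nyquist = sample_rate / 2
    out = []
    for n in range(n_samples):
        t = n / sample_rate
        k, acc = 1, 0.0
        while k * freq < nyquist:          # the "governor": stop below Nyquist
            acc += math.sin(2 * math.pi * k * freq * t) / k
            k += 1
        out.append((2 / math.pi) * acc)
    return out

wave = bandlimited_saw(1000, 8000, 16)  # only harmonics at 1, 2, 3 kHz fit below 4 kHz
```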



Soundfile 2.1
Undersampling


Soundfile 2.1 demonstrates undersampling of the same sound source as Soundfile 2.2. In this example, the file was sampled at 1,024 samples per second. Note that the sound is "muddy" at that rate—a 1,024 Hz sampling rate does not allow us any frequencies above about 500 Hz, which is sort of like sticking a large canvas bag over your head, and putting your fingers in your ears, while listening.


Soundfile 2.2
Standard sampling at 44,100 samples per second

Soundfile 2.2 was sampled at the standard 44,100 samples per second. This allows frequencies as high as around 22 kHz, which is well above our ear’s high-frequency range. In other words, it’s "good enough."

Figure 2.8 Picture of an undersampled waveform. This sound was sampled 512 times per second. This was way too slow.

Figure 2.9 This is the same sound file as above, but now sampled 44,100 (44.1 kHz) times per second. Much better.







Spatial resolution is a term that refers to the number of pixels utilized in construction of a digital image. Images having higher spatial resolution are composed with a greater number of pixels than those of lower spatial resolution. This interactive tutorial explores variations in digital image spatial resolution, and how these values affect the final appearance of the image.

The tutorial initializes with a randomly selected specimen imaged in the microscope appearing in the left-hand window entitled Specimen Image. Each specimen name includes, in parentheses, an abbreviation designating the contrast mechanism employed in obtaining the image. The following nomenclature is used: (FL) fluorescence; (BF) brightfield; (DF) darkfield; (PC) phase contrast; (DIC) differential interference contrast (Nomarski); (HMC) Hoffman modulation contrast; and (POL) polarized light. Visitors will note that specimens captured using the various techniques available in optical microscopy behave differently during image processing in the tutorial.

Adjacent to the Specimen Image window is a Spatial Resolution window that displays the captured image at varying resolutions, which are selectable with the Pixel Dimensions slider. To operate the tutorial, select an image from the Choose A Specimen pull-down menu, and vary the pixel dimensions (and spatial resolution) with the Pixel Dimensions slider. The number of pixels utilized in the horizontal and vertical axes of the image is presented directly beneath the slider, as is the total number of pixels employed in the entire image composition. Available in the tutorial are both full color and grayscale images that can be selected using the Color Images or Grayscale Images radio buttons located beneath the Specimen Image window.

The spatial resolution of a digital image is related to the spatial density of the image and the optical resolution of the microscope used to capture the image. The number of pixels contained in a digital image and the distance between each pixel (known as the sampling interval) are a function of the accuracy of the digitizing device. The optical resolution is a measure of the microscope's ability to resolve the details present in the original specimen, and is related to the quality of the optics, sensor, and electronics in addition to the spatial density (the number of pixels in the digital image). In situations where the optical resolution of the microscope is superior to the spatial density, the spatial resolution of the resulting digital image is limited only by the spatial density.

All details contained in a digital image, ranging from very coarse to extremely fine, are composed of brightness transitions that cycle between various levels of light and dark. The cycle rate between brightness transitions is known as the spatial frequency of the image, with higher rates corresponding to higher spatial frequencies. Varying levels of brightness in specimens observed through the microscope are common, with the background usually consisting of a uniform intensity and the specimen exhibiting a spectrum of brightness levels. In areas where the intensity is relatively constant (such as the background), the spatial frequency varies only slightly across the viewfield. Alternatively, many specimen details often exhibit extremes of light and dark with a wide gamut of intensities in between.

The numerical value of each pixel in the digital image represents the intensity of the optical image averaged over the sampling interval. Thus, background intensity will consist of a relatively uniform mixture of pixels, while the specimen will often contain pixels with values ranging from very dark to very light. The ability of a digital camera system to accurately capture all of these details is dependent upon the sampling interval. Features seen in the microscope that are smaller than the digital sampling interval (that is, features with a high spatial frequency) will not be represented accurately in the digital image. The Nyquist criterion requires a sampling frequency equal to twice the highest specimen spatial frequency to accurately preserve the spatial resolution in the resulting digital image. An equivalent measure is Shannon's sampling theorem, which states that the digitizing device must utilize a sampling interval that is no greater than one-half the size of the smallest resolvable feature of the optical image. Therefore, to capture the smallest degree of detail present in a specimen, the sampling frequency must be sufficient so that two samples are collected for each feature, guaranteeing that both light and dark portions of the spatial period are gathered by the imaging device.

If sampling of the specimen occurs at an interval beneath that required by either the Nyquist criterion or Shannon theorem, details with high spatial frequency will not be accurately represented in the final digital image. In the optical microscope, the Abbe limit of resolution for optical images is 0.22 micrometers, meaning that a digitizer must be capable of sampling at intervals that correspond in the specimen space to 0.11 micrometers or less. A digitizer that samples the specimen at 512 points per horizontal scan line would produce a maximum horizontal field of view of about 56 micrometers (512 x 0.11 micrometers). If too few pixels are utilized in sample acquisition, then all of the spatial details comprising the specimen will not be present in the final image. Conversely, if too many pixels are gathered by the imaging device (often as a result of excessive optical magnification), no additional spatial information is afforded, and the image is said to have been oversampled . The extra pixels do not theoretically contribute to the spatial resolution, but can often help improve the accuracy of feature measurements taken from a digital image. To ensure adequate sampling for high-resolution imaging, an interval of 2.5 to 3 samples for the smallest resolvable feature is suggested.
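The field-of-view arithmetic in this paragraph can be checked in a few lines (Python, using the numbers quoted above):

```python
abbe_limit_um = 0.22                        # smallest resolvable feature, micrometers
sampling_interval_um = abbe_limit_um / 2    # Nyquist/Shannon: at most half the feature size
pixels_per_line = 512                       # samples per horizontal scan line

field_of_view_um = pixels_per_line * sampling_interval_um   # about 56 micrometers
```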

A majority of digital cameras coupled to modern microscopes have a fixed minimum sampling interval, which cannot be adjusted to match the specimen's spatial frequency. It is important to choose a camera and digitizer combination that can meet the minimum spatial resolution requirements of the microscope magnification and specimen features. If the sampling interval exceeds that necessary for a particular specimen, the resulting digital image will contain more data than is needed, but no spatial information will be lost.

When operating the tutorial, as the Pixel Dimensions slider is moved to the right, the spatial frequency of the digital image is linearly reduced. The spatial frequencies utilized range from 175 x 175 pixels (30,625 total pixels) down to 7 x 7 pixels (49 total pixels) to provide a wide latitude of possible resolutions within the frequency domain. As the slider is moved to the right (reducing the number of pixels in the digital image), specimen details are sampled at increasingly lower spatial frequencies and image detail is lost. At the lower spatial frequencies, pixel blocking occurs (often referred to as pixelation ) and masks most of the image features.

Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.

Matthew J. Parry-Hill , John C. Long , Thomas J. Fellers , and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.




In previous installments of the AudioFile, we've talked about basic PCM audio, which encodes audio into a series of numbers that a computer can play or manipulate. We've also discussed the process of turning that PCM audio data into an MP3 file, which exploits the perceptual limitations of human hearing. What we haven't explored is the process of converting analog audio signals into digital information in the first place.

Thanks to cheap and powerful computing, opportunities for non-professionals to get involved in digital audio are widespread. The medium has become democratized, to some extent—what used to require an entire studio can now be done on the family PC, with a surprising level of quality. Yet, while the technology has become more accessible, the analog-digital converter remains a beginner's stumbling block, for both financial and intellectual reasons. Sadly, we can't buy you new recording equipment. But in the next couple of pages, we can help to explain the workings of analog-to-digital (A/D) and digital-to-analog (D/A) converters, and hopefully cut through some of the confusion.

As with the other AudioFile guides, this is not an exhaustive reference but instead more of an introduction to certain technical topics for newcomers to digital audio. Certain aspects of A/D and D/A have been glossed over or, in some cases, intentionally omitted. Indeed, it is my hope that when you finish reading, you will still have questions to be answered, but you will be somewhat better prepared to locate the answers.

Preliminaries

Before we dive into the details of A/D conversion, let's start with a quick review of digital audio. In PCM, which is the standard for uncompressed audio files, a continuous analog signal is turned into a series of binary numbers by taking samples many, many thousands of times per second. These two factors—bit depth and sample rate—dictate the quality of the audio file. The bit depth determines the dynamic range of the file, and each bit doubles the available resolution. On the other hand, the highest frequency that can be reproduced by a file is equal to one-half of its sample rate, according to the Nyquist theory. A CD sampled at 44.1 kHz, for example, can contain frequencies up to around 22 kHz.
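As a quick sanity check on those CD numbers (Python; the standard Red Book parameters of 44,100 samples per second, 16 bits, stereo):

```python
sample_rate = 44_100         # samples per second, per channel
bit_depth = 16               # bits per sample
channels = 2                 # stereo

nyquist_limit_hz = sample_rate / 2                             # "around 22 kHz"
bytes_per_second = sample_rate * (bit_depth // 8) * channels   # raw PCM data rate
```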

As listeners, that rule of thumb is primarily how we interact with the Nyquist-Shannon sampling theorem. But when Claude Shannon elaborated on Harry Nyquist's work in 1949, the theorem's actual statement was slightly different. He wrote:

If a function f(t) contains no frequencies higher than W cps [cycles per second], it is completely determined by giving its ordinates at a series of points spaced 1/2 W seconds apart.

Note the words "completely determined." You can recreate a signal with perfect accuracy, Shannon says, as long as you make sure that it doesn't contain any frequencies above the Nyquist limit. As we'll see, the process of filtering out that high-frequency content is one of the important challenges of A/D converter design. But more on that in a bit.


In the above image, the red wave is above the Nyquist limit, and is seen as the blue wave
when sampled at positions indicated by the black dots

You may wonder what happens in the event that a digital recording system does not remove frequencies above the Nyquist limit before sampling. The answer is that the converter "sees" those parts of the waveform too rarely to accurately capture them. They loop around the Nyquist limit and are recorded as signals at a lower frequency instead: a 32 kHz signal recorded at 44 kHz appears to the system as if it were 12 kHz. Because the sampling rate is simply an arbitrary number chosen for audio quality, the new frequencies bear no harmonic resemblance to the original signal, and the result is discordant and distorted. As an interesting aside, however, this "aliasing" property of signals above the Nyquist limit can be used to sample signals with high frequencies but narrow bandwidth using relatively low sample rates, by making the assumption that any captured signals are aliases of higher frequencies, and playing them back accordingly.
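The folding described here is easy to express in code. A sketch in Python (a general statement of the aliasing rule, not any particular converter's behaviour):

```python
def alias_frequency(f_hz, sample_rate_hz):
    """Frequency at which an input tone actually appears after sampling,
    folding around the Nyquist limit (half the sample rate)."""
    nyquist = sample_rate_hz / 2
    f_hz = f_hz % sample_rate_hz        # fold into one sampling period
    return f_hz if f_hz <= nyquist else sample_rate_hz - f_hz

alias_frequency(32_000, 44_000)   # 12000.0, as in the example above
alias_frequency(5_000, 44_000)    # 5000.0 -- below Nyquist, unchanged
```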


Sample Rate, Bit Depth & Buffer Size Explained

Sample Rate is to audio what frame rate (frames per second) is to video.

Sample Rate values are typically written in kHz (kiloHertz).

Sample Rates come in 'bands' and common examples include:

  • Single-band - 44.1kHz & 48kHz
  • Dual-band - 88.2kHz & 96kHz
  • Quad-band - 176.4kHz & 192 kHz

For example, when recording at a sample rate of 48kHz, 48,000 (forty-eight thousand) samples are captured each second by your audio recording device.

As you increase the sample rate, you capture more samples of the incoming audio signal each second.

The maximum frequency that can be captured correctly by a recording device 1 is limited by the sample rate the device is set to.

There is quite a simple rule 2 to this:

Sample rate ÷ 2 = maximum frequency that can be correctly captured

This means that, when using a sample rate of 48kHz, we can capture audio frequencies up to 24kHz.
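The rule is literally one line of code (illustrative Python):

```python
def max_capture_frequency(sample_rate_hz):
    """Nyquist rule of thumb: the highest frequency a sample rate can represent."""
    return sample_rate_hz / 2

max_capture_frequency(48_000)    # 24000.0 -> 24 kHz
max_capture_frequency(44_100)    # 22050.0
```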

The range of human hearing is from around 20Hz to 20kHz (though we lose the ability to hear the higher frequencies as we get older) so sample rates of 44.1 & 48kHz are more than capable of capturing the full range of the human audible spectrum.

As such, the vast majority of digital music available by typical distribution methods (streaming on Spotify/Apple Music, CDs) is at a 44.1kHz sample rate; audio for film tends to be at 48kHz 3 .

What's the point of higher Sample Rate options?

Since sample rates of 44.1/48kHz allow us to capture frequencies spanning the full range of human hearing, you may wonder what the purpose of higher sample rate options is.

There is debate in the audio community about the value (or lack of) of using higher sample rates for situations that don't fall into the above categories (I.e., for general recording purposes). We won't get into that here.

Bit Depth

Bit Depth is the number of bits captured in each sample.

As bit depth changes, so does the dynamic range: the difference between the quietest and loudest signal levels that can be recorded. As you increase bit depth, you expand the range of levels your recording software can capture. The range of human hearing, for comparison, does not typically exceed about 120dB.

Common Bit Depths: 16, 24, 32-bit float
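For the integer bit depths, the theoretical dynamic range of an ideal converter works out to roughly 6 dB per bit (a standard rule of thumb; 32-bit float behaves differently and is not covered by this formula):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal integer converter:
    20*log10(2**bits), roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

dynamic_range_db(16)    # about 96 dB
dynamic_range_db(24)    # about 144 dB
```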

Buffer Size

Buffer Size is the amount of time allowed for your computer to process the incoming audio from your sound card or audio interface.

This matters when you experience latency, which is a delay in processing audio in real time. You can reduce your buffer size to reduce latency, but this places a higher burden on your computer and can cause glitchy audio or drop-outs.

This can often be fixed by increasing your buffer size in the audio preferences of your DAW or driver control panel.

When introducing more audio tracks to your session, you may need a larger buffer size to accurately record the signal with no distortion and limited latency. Increasing the buffer size will allow more time for the audio to be captured without distortion.

It is important to find the right buffer size for your session as this can vary depending on the number of tracks, plug-ins, audio files etc. We do not recommend a specific setting because it will depend on your specific project. But as a general rule:

When Recording:

  • Set the buffer size as low as you can to reduce latency. If you start hearing clicks and pops or your DAW gives you an error message, either raise the buffer size or reduce the number of effects plug-ins/audio tracks in your project

When Mixing:

  • As latency is not really a factor when mixing, you can afford to put the buffer size at its highest setting. This will reduce the chances of any clicks and pops being heard when you add effects plug-ins.

When listening to general Music/Audio outside of a recording project:

  • Latency is not a factor when just listening to music outside of a DAW (Youtube/Spotify/Media Players) so the buffer size can be set to its highest setting

For more information about latency, please see the below article.

1 This assumes that neither the analogue circuitry nor the analogue to digital converter, in the input stage have any filtering to cut out or attenuate higher frequencies.

2 This rule is known as the Nyquist Theorem.

3 Audio for film tends to be recorded at either 48kHz or a higher multiple of 48kHz for better synchronisation against film frame rates.


Real-Time Versus Equivalent-Time Sampling

As the popularity of Digital Storage Oscilloscopes has grown, a need has arisen for understanding their operating modes and performance characteristics. This technical brief describes two of the fundamental modes of waveform acquisition utilized in Tektronix products. Knowledge of the benefits and trade-offs of Real-Time and Equivalent-Time sampling will make it easier to choose and use a Tektronix digital storage oscilloscope.

Appended to this tech brief is an explanation of the Sin(x)/x interpolation method that Tektronix DSOs use to produce high resolution timing and amplitude measurements and extremely accurate displays.

The performance of the DSO continues to evolve toward an analog-type performance level. Higher sample rates allow mid-range digital scopes to acquire single shot waveforms with a level of timing accuracy rivaling the capabilities of premium DSOs. The standard sampling rate for these reasonably priced scopes has grown exponentially from a 50 or 100 MS/s rate to 2 GS/s.

Analog and Real-Time Bandwidths

To create a waveform accurately, the DSO must gather a sufficient number of samples after the initial trigger. In theory a digital scope needs at least 2 samples per period (one full cycle of a regular waveform) to faithfully reproduce a sine wave; otherwise the acquired waveform will be a distorted representation of the input signal. In practice, using Tek's Sin(x)/x interpolation in the TDS Series scopes, a DSO needs at least 2.5 samples per period.

This requirement usually limits the signal frequency a digital scope can acquire in real-time. Because of this limitation in real-time acquisition, most DSOs specify two bandwidths - Analog and Real-Time. The Analog Bandwidth, defined by the circuits composing the input path of the scope, represents the highest frequency signal a DSO can accept without adding distortion. The second bandwidth, called the Real-Time Bandwidth, defines the maximum frequency the DSO can acquire by sampling the entire input waveform in one pass, using a single trigger, and still gather enough samples to reconstruct the waveform accurately. The following equation describes the real-time bandwidth:

    Real-Time Bandwidth = Maximum Sample Rate / 2.5 samples per period

For some DSOs, the real-time bandwidth theoretically exceeds the analog bandwidth. But since the input path distorts any signal above its frequency limit, the real-time bandwidth can only be equal to or less than the analog bandwidth. Even though a DSO may sample at a higher bandwidth than its analog bandwidth, the analog bandwidth establishes the highest frequency the scope can accurately capture.
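The two bandwidth limits combine straightforwardly: divide the sample rate by the 2.5 samples per period quoted earlier, then cap the result at the analog bandwidth. A small sketch, using figures quoted elsewhere in this brief:

```python
def real_time_bandwidth_hz(sample_rate_hz, samples_per_period=2.5):
    """Theoretical real-time bandwidth: sample rate divided by the
    2.5 samples per period that Sin(x)/x interpolation requires."""
    return sample_rate_hz / samples_per_period

def usable_bandwidth_hz(sample_rate_hz, analog_bandwidth_hz):
    # The input path distorts anything above the analog bandwidth, so the
    # usable real-time bandwidth can never exceed it.
    return min(real_time_bandwidth_hz(sample_rate_hz), analog_bandwidth_hz)

# 100 MS/s sampler, 400 MHz analog bandwidth -> real-time limit of 40 MHz
print(usable_bandwidth_hz(100e6, 400e6) / 1e6, "MHz")
# 2 GS/s sampler, 500 MHz analog bandwidth -> capped at the 500 MHz analog limit
print(usable_bandwidth_hz(2e9, 500e6) / 1e6, "MHz")
```

The two printed cases correspond to the 100 MS/s and TDS 600-class examples discussed below.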

Equivalent-Time Sampling

When a DSO uses equivalent-time sampling, it can acquire any signal up to the analog bandwidth of the scope regardless of the sample rate. In this mode, the scope gathers the necessary number of samples across several triggers. The input signal must be repetitive to generate the multiple triggers needed for equivalent-time sampling. In equivalent-time, a slower, lower-cost digitizer provides the same accuracy on repetitive waveforms as a higher cost DSO with a faster sampler. For example, the TDS 460 offers a 350 MHz bandwidth with only a 100 MS/s sampling rate on each of its four channels.

The TDS 400 and TDS 500 Series scopes use a common method called random equivalent-time sampling. Although these scopes acquire samples sequentially after each trigger, each acquisition starts at a different time with respect to the trigger. Figure 1 depicts how random equivalent-time sampling works.
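The mechanism can be simulated in a few lines. The sketch below (a simplified model, not Tektronix's actual trigger hardware) samples a repetitive 350 MHz sine with a 100 MS/s digitizer, starting each acquisition at a random offset relative to the trigger, then sorts all samples onto one equivalent-time axis:

```python
import numpy as np

rng = np.random.default_rng(0)

def repetitive_signal(t):
    # Repetitive input: a 350 MHz sine, far above what a 100 MS/s
    # sampler could capture in a single real-time pass.
    return np.sin(2 * np.pi * 350e6 * t)

def random_et_acquire(n_triggers=50, fs=100e6, samples_per_trigger=4, window=20e-9):
    """Collect samples across many triggers; each acquisition starts at a
    random offset after the trigger, so the sorted record is far denser
    than the real-time sample interval 1/fs."""
    ts = []
    for _ in range(n_triggers):
        offset = rng.uniform(0, 1 / fs)            # random trigger-to-sample delay
        t = offset + np.arange(samples_per_trigger) / fs
        ts.extend(t[t < window])                   # keep samples inside the display window
    ts = np.sort(np.asarray(ts))
    return ts, repetitive_signal(ts)

t_eq, v_eq = random_et_acquire()
print(len(t_eq), "samples on one equivalent-time axis from 50 triggers")
```

Because the offsets are random, the composite record fills in between the coarse 10 ns real-time sample spacing, which is exactly why the signal must repeat identically on every trigger.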

Figure 1. Random equivalent-time sampling digitally reconstructs a waveform using several trigger events.

Because equivalent-time sampling requires a repetitive signal, it has certain restrictions. A DSO in equivalent-time cannot create a meaningful display from a single-shot event. Also, the signal must repeat identically each time or the displayed waveform will be distorted. Figure 2 illustrates what happens to a display when a repetitive signal changes over time. The scope creates sharp vertical lines, or hashing, indicating the differences in the signal across multiple acquisitions. A viewer could easily misinterpret these lines and conclude that they represent high-frequency noise riding on the signal.

Figure 2. When a signal that changes over time is acquired in equivalent-time, the display has sharp vertical lines indicating the modulation in the signal. This particular type of distortion can easily look like noise to the user.

Some scopes perform equivalent-time sampling exclusively and can accept only repetitive signals. In exchange for that limitation, they can offer dramatically higher accuracy or bandwidth, or a significantly lower cost, than a comparable real-time digitizing scope.

Real-Time Sampling

When a DSO operates in real-time or single-shot mode, it attempts to gather all the samples for a waveform with one trigger event (Figure 3). Because this mode uses only one trigger from the input signal, real-time sampling treats both repetitive and single-shot waveforms as one-time events.

Figure 3. Real-time sampling captures a complete waveform with a single trigger event.

By using DSOs with higher sample rates one can acquire higher-bandwidth signals in real-time. For example, an engineer wants to acquire and store a single-shot 50 MHz signal. Using a scope with a 400 MHz analog bandwidth and a 1 GS/s sample rate, creating a real-time bandwidth of 400 MHz, he can easily acquire the signal in real-time.

However, if the engineer chooses a digital scope with an analog bandwidth of 400 MHz and a 100 MS/s sampler, he cannot accurately acquire the 50 MHz signal in real time. Although this scope, like the first one, has an analog bandwidth of 400 MHz, its maximum sample rate of 100 MS/s limits the real-time bandwidth to only 40 MHz.

TDS 600: Real-Time Scopes

The TDS 620 and TDS 640 digital scopes have a 500 MHz bandwidth and 2 GS/s sample rate. Their theoretical real-time bandwidth is 2 GS/s divided by 2.5 = 800 MHz. Since the TDS 600 scopes cannot pass signals higher than 500 MHz without distorting them, their real-time bandwidth equals their analog bandwidth. Because the two bandwidths are the same, these scopes can easily acquire signals in real-time up to the analog bandwidth of the scope. Digital scopes only require equivalent-time sampling when the real-time bandwidth is lower than the analog bandwidth. Since the TDS 600 scopes can acquire signals up to the bandwidth of the scope with one trigger event, they offer only real-time sampling.

To demonstrate the TDS 600 Series DSO's powerful acquisition capability, Figures 4a-c graphically depict the differences between real-time and equivalent-time sampling. A calibrated pulse generator created a 1 ns rise time pulse as a single-shot event and as a repetitive waveform. For reference, Figure 4a shows a display of this pulse captured by a Tektronix 2400 analog scope. In Figure 4b, the TDS 540 acquires a repetitive version of the same pulse with equivalent-time sampling. Multiple acquisitions were required to capture the signal. In Figure 4c, the TDS 640 displays the same pulse with real-time sampling. Thanks to its 2 GS/s sample rate, the TDS 640's waveform exhibits the same rise time, amplitude, and visual characteristics as the analog display in Figure 4a. Although the TDS 540 and TDS 640 both have 500 MHz bandwidths, the high-speed, real-time sampling of the TDS 640 clearly delivers a more analog-like representation of the input signal.

Figure 4. These three screen captures demonstrate the differences between real-time and equivalent-time sampling.

Figure 4a. For reference, the pulse captured by a Tektronix 2465BDV analog scope

Figure 4b. Using equivalent-time sampling, the TDS 540 digital scope acquires a repetitive version of the pulse.

Figure 4c. With real-time sampling, the TDS 640 displays the pulse after one trigger event. Note how close this waveform's appearance is to the analog display of the signal in Figure 4a.

Reconstruction Techniques for Waveform Display

Whether a digital scope acquires a waveform in real-time or equivalent-time, interpolation displays the acquired signal more clearly. When a scope interpolates, it draws lines between the samples on the display, creating a continuous waveform instead of a string of individual points. Figures 5a and 5b show the difference interpolation makes in creating a more realistic display.

Figure 5. Interpolation helps create more meaningful waveform displays.

Figure 5a. When a DSO displays only sample points, the user can have trouble determining the actual waveform shape.

Figure 5b. Interpolation connects sample points and creates a more intelligible display.

Most DSOs offer two types of interpolation: linear and sine. Linear interpolation draws lines between the samples using a straight-line fit. This method works well with pulses and digital signals but may produce distortions on sine waves. Sine interpolation connects the samples using a curve fit. Ideal for sinusoidal signals, this approach can produce apparent overshoot or undershoot when displaying pulses.

Tektronix DSOs offer a modified sine interpolation method that eliminates the inaccuracies when displaying pulses. The Sin(x)/x method uses an adaptive prefilter to locate and compensate for fast signal transitions. Although this method requires more calculations than linear interpolation, the TDS Series scopes with their custom digital signal processor update their screens quickly in both real-time and equivalent-time modes. Figures 6a and 6b demonstrate linear and sine interpolation.

Figure 6. Some DSOs have two types of interpolation.

Figure 6a. Linear interpolation uses a straight-line fit to draw lines between samples.

Figure 6b. Sin(x)/x interpolation is a modified sine interpolation method that connects samples using a curve fit.

Conclusion

In real-time, a scope's digitizer samples the entire input waveform in one pass, with a single trigger. The term "real-time" arises because acquisition and display occur in the same time frame. Real-time digital scopes are ideal for single-shot applications. Real-time sampling generally results in fewer complicating defects, such as aliasing or distortion, which can occur with equivalent-time sampling.

Random equivalent-time sampling takes advantage of the nature of a repetitive signal by using samples from several trigger events to digitally reconstruct the waveform. Since sampling occurs on both sides of the trigger point, pretrigger capability is very flexible. Because repetitive signals are being sampled, the bandwidth of an equivalent-time scope can far exceed its sample rate.

Appendix

How Sin(x)/x Interpolation Works

The TDS Series scopes expand waveforms by using a digital signal processing technique that reduces the sample rate requirement for sine waves to about 2.5 samples per cycle. This method of interpolation produces higher resolution timing and amplitude measurements than linear interpolation, as well as more accurate displays. The following discussion explains the technique, which is essentially a linear filtering process.

When the TDS oscilloscope acquires a continuous-time input signal x(t) at a sampling rate with period T, it saves the acquisition as a sequence of equally spaced samples x[n] = x(nT).


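The linear filtering the appendix describes amounts to convolving those samples with a sinc kernel. Written out in the standard textbook form (with samples x[n] = x(nT) taken at interval T; this expression is supplied here, not taken from the original brief), the reconstructed signal is:

```latex
x(t) = \sum_{n} x[n]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```

Each sample contributes one shifted Sin(x)/x kernel, and the sum passes exactly through every sample point while filling in a band-limited curve between them.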
Higher Rates

Of course, times have moved on since the development of the CD-quality specification in the mid-'80s, and we're now looking at possible rates of 96kHz or even 192kHz for digital recording, which result in ever higher upper frequency responses of over 40kHz and over 90kHz respectively. This fits with some audiophile schools of thought, which maintain that whilst they are not audible in the conventional manner, sounds over 20kHz can nevertheless be 'felt' or perceived in some way. However, we do have to look at this within the context and perspective of current audio recording and reproduction technology.

Most mics can't record frequencies much above 20kHz (maybe 22kHz at best) and most amps and speakers cannot reproduce frequencies much above these figures either. Some pro monitor speakers can extend up to 40 or 50kHz, as can specialised amps, but your average amp/speaker isn't going to get close! Also, there are very few A-D and D-A converters that can accurately handle these rates either unless you start looking at specialised (and expensive) outboard converters. And then there's the fact that many conventional musical sounds don't have frequencies that come anywhere near even the 20kHz upper frequency limit of the CD standard. Knowing this, it starts to seem a bit silly when you see processing power and extra storage space being eaten up to record at these ultrasonic sample rates — particularly for kick drums and basses. Even sounds such as piano, strings, guitar, and most drums and percussion can benefit little from being recorded at these rates — especially when you factor in the technical limitations of analogue recording and playback mechanisms, such as those found in even modern mics and monitors.
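The storage cost mentioned above scales linearly with sample rate. A rough sketch, assuming uncompressed 24-bit stereo PCM (the bit depth and channel count are illustrative assumptions):

```python
def pcm_mb_per_min(sample_rate_hz, bit_depth=24, channels=2):
    """Uncompressed PCM storage cost in megabytes per minute of audio."""
    return sample_rate_hz * (bit_depth / 8) * channels * 60 / 1e6

for sr in (44_100, 96_000, 192_000):
    print(f"{sr / 1000:g} kHz: {pcm_mb_per_min(sr):6.1f} MB/min")
```

Doubling the sample rate doubles the data rate, so a 192kHz session consumes more than four times the space of a 44.1kHz one before any audible benefit is established.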

Nevertheless, there is no doubt that there are many in the modern recording community who swear that they have (say) Fender Precision bass samples recorded at 96kHz that sound superior. For the record, my opinion is that once said Fender bass (or whatever) is buried in a track with other instruments, any hypothetical benefits the ultra-high sampling rates might bring are largely going to be lost — especially when the track is mixed down for use on a 44.1kHz CD and played through the average hi-fi, or worse, made into an MP3 and listened to through iPod earphones on a tube train!

However, I don't wish to start sounding too much like an instalment of Grumpy Old Men, so let's summarise. In practice, 44.1kHz is more than enough to adequately sample most instruments for most musical (and non-musical) applications for playback on most systems. If you want to sample at higher frequencies, that is, of course, your decision if you have the equipment to do so, although it will of course make your sampler work twice as hard (or more) to achieve whatever sonic improvements it does. Certainly the polyphony will be restricted at the higher rates in hardware samplers, and in software samplers, the host CPU will have to work harder, which will either result in the same restrictions or possibly in worse, more intrusive problems, such as dropouts, clicks or outright crashes.