How does sampling rate affect frequency analysis?

Thus, while the performances of both participants are indistinguishable using central tendency measures, the two different temporal arrangements of the same responses are distinct in the frequency domain. The power spectrum thus provides information that effectively complements information from t-tests, ANOVAs, and the like. Spectral analysis can not only detect simple periodicities, as in the example above, but can also quantify more complex and realistic patterns of variation in psychological data series.

In only a few simple steps, this characteristic pattern of response variability can be observed through spectral analysis. First, a Fourier transform translates the data series into the sum of sines and cosines that best fits the data series. This is schematically represented in Figure 2B. Next, the frequency and power (squared amplitude) of each of the fitted waveforms are plotted against each other in a power spectrum (see Figure 2C).
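As a rough sketch of these two steps (the synthetic response series below is illustrative, not data from the study), NumPy's FFT routines produce a power spectrum in a few lines:

```python
# Minimal sketch of the two steps above, using NumPy's real-valued FFT.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
# Hypothetical response series: a periodicity (1 cycle per 16 trials) plus noise.
x = np.sin(2 * np.pi * np.arange(n) / 16) + rng.normal(0, 0.5, n)

# Step 1: the Fourier transform decomposes the series into sines and cosines.
coeffs = np.fft.rfft(x - x.mean())

# Step 2: plot power (squared amplitude) against frequency -> the power spectrum.
power = np.abs(coeffs) ** 2
freqs = np.fft.rfftfreq(n)               # frequencies in cycles per data point
print(freqs[np.argmax(power)])           # ~0.0625, i.e., the 1-in-16 periodicity
```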

Figure 2. B schematically represents a number of sine waves which are fitted to the data series through a Fourier transform. A data series of random background noise (also called white noise; see Figure 3A), however, does not yield a relationship between frequency f and power S(f) in the signal (see Figure 3B). Figure 3. A shows an example of white (random) noise. The power spectrum of the white noise series is shown in B.

C shows an example of Brownian noise. The power spectrum of the Brownian noise series is shown in D. Brownian noise is also called a random walk, because it can be produced by adding a random increment to each sample to obtain the next. In contrast to white noise, which can be produced by choosing each sample randomly and independently, Brownian noise yields persistence, or memory, in the data series. Examples in psychological research include simple and choice reaction times (Kello et al.). Therefore, the slope of a power spectrum is an informative measure in psychological research.
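To make the contrast concrete, here is a small sketch (illustrative, not the original analysis) that generates both noise types and estimates their spectral slopes; white noise yields a slope near 0, Brownian noise a slope near -2:

```python
# Sketch: white noise vs. Brownian noise (its cumulative sum) in the frequency domain.
import numpy as np

rng = np.random.default_rng(1)
white = rng.normal(size=2**14)
brown = np.cumsum(white)           # random walk: each sample adds a random increment

def spectral_slope(x):
    """Slope of the power spectrum in log-log coordinates."""
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size)
    f, p = freqs[1:], power[1:]    # drop the zero-frequency (DC) component
    return np.polyfit(np.log10(f), np.log10(p), 1)[0]

print(spectral_slope(white))  # close to 0: power is flat across frequencies
print(spectral_slope(brown))  # close to -2: power falls off as 1/f^2
```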

These studies confirm the importance of time series methods like spectral analysis in psychological research. Interestingly, however, all of the examples above are based on the analysis of trial series or interval series.

In a trial series, each sampled data value represents a measure of a discrete response or response interval, as in the example of the simple reaction task mentioned earlier. Many variables in psychological research, however, are continuous in nature, rather than discrete. Continuous processes are represented as a time series through periodic sampling. Here, we investigate whether differences in sample rate constitute an artifact which obscures comparisons across studies and experimental conditions.

The paper is organized as follows. First, a number of details pertaining to analytical choices for spectral analysis are discussed. Psychologists are in general well aware of the characteristics of a desired sampling regime. That is, any signal that has been periodically sampled can only be perfectly reconstructed if the sampling rate corresponds to a frequency that is at least twice the highest frequency in the original signal (this is known as the Shannon-Nyquist sampling theorem; Shannon). When sampling more sparsely, a phenomenon called aliasing is likely to occur.

Aliasing means that fluctuations outside of the measured frequency range are misinterpreted as different frequencies that fall within the measured range, yielding distorted results (see Holden). Therefore, sample rate is an important input parameter when applying spectral analysis to periodically sampled data series.

The estimated frequencies should not be faster than half the sample rate. For example, when a given time series is sampled at 100 Hz, the frequencies estimated in spectral analysis (the x-axis in the power spectrum) should fall in the range of 0-50 Hz to avoid aliasing.
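The sketch below (with illustrative numbers, not data from the study) shows the aliasing hazard: a 60 Hz sine sampled at 100 Hz shows up at 40 Hz, well inside the 0-50 Hz range:

```python
# Sketch: a 60 Hz sine sampled at 100 Hz aliases to 40 Hz (within 0-50 Hz).
import numpy as np

fs = 100.0                          # sample rate (Hz); Nyquist frequency is 50 Hz
t = np.arange(0, 10, 1 / fs)        # 10 s of samples
x = np.sin(2 * np.pi * 60.0 * t)    # 60 Hz signal: above Nyquist, so it will alias

power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
print(freqs[np.argmax(power)])      # ~40.0, not 60.0: the alias at |60 - 100| Hz
```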

The next input parameter for spectral analysis is the number of frequencies to be estimated within the non-aliased frequency range. This parameter will determine the number of data points in the power spectrum. A spectral analysis with maximum frequency resolution will estimate half as many frequencies as there are data points, because the highest resolvable frequency oscillates back and forth every other data point.
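A quick check of this relationship, assuming NumPy's FFT conventions:

```python
# N periodically sampled points yield N/2 estimable frequencies
# (plus the zero-frequency mean term).
import numpy as np

N = 2**10
freqs = np.fft.rfftfreq(N)   # N/2 + 1 values, including 0 and the Nyquist bin
print(len(freqs) - 1)        # 512 == N / 2
print(freqs[-1])             # 0.5 cycles per sample: back and forth every other point
```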

After the log transformation, however, the frequencies are no longer equidistant, and exponentially more frequencies are observed in the high-frequency range than in the low-frequency range of the power spectrum. Moreover, the right-hand side of a power spectrum often presents a flattening or whitening of the slope (Holden; Holden et al.). Therefore, excluding the highest frequencies from the log-log regression is generally recommended (Beran; Eke et al.).
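A sketch of such a fit is shown below; the choice to keep the lowest 25% of frequencies is purely illustrative, not a prescription from Beran or Eke et al.:

```python
# Sketch: fit the spectral slope in log-log coordinates, excluding the highest
# frequencies where the spectrum tends to flatten ("whiten").
import numpy as np

def fit_spectrum(x, keep=0.25):
    """Fit a regression line over the lowest `keep` proportion of frequencies."""
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size)
    f, p = freqs[1:], power[1:]               # drop the DC component
    n = int(keep * f.size)                    # e.g., lowest 25% of frequencies
    slope, intercept = np.polyfit(np.log10(f[:n]), np.log10(p[:n]), 1)
    return slope

rng = np.random.default_rng(2)
print(fit_spectrum(np.cumsum(rng.normal(size=2**14))))  # near -2 for Brownian noise
```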

The aim of this study is to achieve a more solid appreciation of the effects of periodic sampling on the outcomes of spectral analysis. Such sampling effects are especially problematic when comparing studies that employ different sampling regimes to measure similar performances.

These observations (Carlini et al.; Eke et al.) constitute the core measurement problem raised in this paper: the outcomes of spectral analysis hinge on sample rate. This artifact is visually presented in Figures 4A and B, which show the relative roughness of two different time series (Goldberger et al.). Relative roughness can be conceived as an index of the suitability of the monofractal framework (cf. Marmelat et al.).

Figures 4A and B reveal that the relative roughness of a time series is reduced when it is sampled more densely. Specifically, Figures 4A and B suggest that faster sampling comes with lower amplitude at the higher frequencies (making the series smoother, thus reducing local variance), which may result in overall steeper slopes in the power spectrum compared with processes that are sampled more sparsely.
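The sketch below illustrates the point with one common formulation of relative roughness, RR = 2(1 - r1), where r1 is the lag-1 autocorrelation (the exact formulation in Marmelat et al. may differ in detail); a sine sampled densely is smoother, by this index, than the same sine sampled sparsely:

```python
# Relative roughness, computed here as 2 * (1 - r1) with r1 the lag-1
# autocorrelation; an assumed formulation for illustration (cf. Marmelat et al.).
import numpy as np

def relative_roughness(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)   # lag-1 autocorrelation
    return 2.0 * (1.0 - r1)

t_dense = np.linspace(0, 10, 10_000)            # 10 cycles, densely sampled
t_sparse = t_dense[::100]                       # the same 10 cycles, 100x sparser
print(relative_roughness(np.sin(2 * np.pi * t_dense)))   # near 0: very smooth
print(relative_roughness(np.sin(2 * np.pi * t_sparse)))  # larger: rougher series
```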

Figure 4. A shows the relative roughness of a respiration time series at various sampling rates. B shows the change in relative roughness of an EEG time series at various sampling rates. This line of reasoning is straightforward so far, but it can nonetheless make a world of difference concerning the utility of spectral analysis when confronted with periodically sampled, continuous processes. That is, the highest-frequency range in the spectrum has lower amplitude when higher sample rates are employed, and this artifact likely protrudes gradually into lower frequencies as sample rate increases further.

In other words, the challenge is to focus on the range of frequencies that is not contaminated by the artifact. To answer the question, we evaluated a Galvanic Skin Response (GSR) time series that was sampled at either 200 Hz (yielding a time series of 2^16 data points), 100 Hz (2^15 data points), 50 Hz (2^14 data points), or 25 Hz (2^13 data points). For each sample rate of the same time series, the frequencies in the power spectrum range between 0 Hz and half the sample rate to avoid aliasing.

Then, following Eke et al., spectral slopes were fitted over a fixed percentage of the lowest estimated frequencies (Figure 5). Note that most of the estimated frequencies fall in the high-frequency range of the spectrum. Here, we introduce an alternative solution to the problem that outcomes of spectral analysis can hinge on sample rate.

This solution takes advantage of, rather than being contaminated by, inherent differences in sample rate. Figure 6. Spectral slopes are fitted over the lowest 50 of 2^15 (A), 2^14 (B), 2^13 (C), and 2^12 (D) estimated frequencies. Fitting over a fixed number of frequencies is notably different from fitting over a fixed percentage of frequencies. Specifically, the fitted low-frequency range remains equal across different sample rates.

Moreover, as sample rate increases, the range of discarded high frequencies increases as well (hence the horizontal line in Figures 6A-D). As a result, the range of discarded frequencies converges much more closely with the range of spurious frequencies.

With percentage-based fitting, relatively higher frequencies (hence, more biased frequencies) are incorporated in the fit as sample rate increases. For instance, in Figures 5A-D, the fitted frequencies range between 0 and 25 Hz, 0 and 12.5 Hz, and so on, scaling with the sample rate. Fitting over a fixed number of low frequencies (50 frequencies in this example), in contrast, implies a fit over a stable low-frequency range, regardless of sample rate.
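The difference between the two strategies can be sketched as follows (an illustration with assumed Brownian input, not the study's GSR data); slopes() returns the percentage-based and fixed-count fits side by side:

```python
# Contrast fitting over the lowest 50% of frequencies with fitting over the
# lowest 50 frequencies; both cutoffs follow the examples in the text.
import numpy as np

def slopes(x, keep_fraction=0.5, keep_count=50):
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size)             # cycles per sample; use d=1/fs for Hz
    f, p = freqs[1:], power[1:]                 # drop the zero-frequency component
    fit = lambda n: np.polyfit(np.log10(f[:n]), np.log10(p[:n]), 1)[0]
    return fit(int(keep_fraction * f.size)), fit(keep_count)

rng = np.random.default_rng(4)
fast = np.cumsum(rng.normal(size=2**14))        # a densely sampled Brownian series
slow = fast[::2]                                # the same process at half the rate

print(slopes(fast))   # (percentage fit, fixed-count fit) at the higher rate
print(slopes(slow))   # fixed-count fits are expected to agree better across rates
```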

We expect, based on previous observations (see above), that this dependence on sample rate will surface in the spectral estimates. That is, we compare empirical or simulated data signals with their downsampled copies. In essence, downsampling is simply a post-hoc reduction in sampling rate by an integer factor. It is to be expected that this post-hoc reduction in sample rate will effectively alter the spectral estimates for sampled data signals.
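In code, downsampling by an integer factor is just a strided selection (a minimal sketch):

```python
# Post-hoc reduction of sample rate by an integer factor k: keep every k-th
# sample (naive decimation, i.e., without an anti-alias filter).
import numpy as np

def downsample(x, k=2):
    return np.asarray(x)[::k]

print(downsample(np.arange(16), 2))   # [ 0  2  4  6  8 10 12 14]
```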

This expectation is in line with Taqqu et al. In contrast, when the slope is fitted over the lowest 50 frequencies only, and is thus fitted over a stable low-frequency range with a stable cut-off frequency, it would be natural to expect the bias to be absent.

The empirical data series have been collected in a precision aiming study. In the study, 15 participants were invited to draw lines back and forth between two visual targets with a stylus, as fast and as accurately as possible. Participants received no instruction concerning pen pressure or pen tilt strategies. The targets were presented on a printed sheet of paper, one at the left side of the paper and one at the right side. The target width was 0.

One block of trials was completed with the dominant hand. When the last trial was reached, a tone signaled the end of the experiment. Pen pressure (in grams) and pen tilt (absolute deviation from the center of the stylus, in cm coordinates) were recorded using a digitizer tablet connected to a regular PC.

The tablet samples at a fixed temporal rate. In addition, a GSR signal was recorded from the fingertips of the non-moving hand. After data collection, each time series was prepared to fit the needs of the spectral analysis (cf. Holden). Because the Fourier transform fits stationary sines and cosines to the data series, simple drifts or long-term trends may distort the results.

Linear and quadratic detrending ensures that the analyzed data series is in line with the idealized mathematics of spectral analysis. Thus, linear and quadratic trends were removed from all data series. Then, the original time series were normalized, and truncated by removing data points from the beginning of the series until 2^16 data points were left.
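A sketch of this preprocessing pipeline (the function name and the polynomial-based detrending route are assumptions for illustration; the study cites Holden for the exact procedure):

```python
# Remove linear and quadratic trends, normalize, then truncate from the start
# to a power-of-two length, as described above.
import numpy as np

def preprocess(x, target_len=2**16):
    x = np.asarray(x, dtype=float)
    t = np.arange(x.size)
    trend = np.polyval(np.polyfit(t, x, 2), t)  # quadratic fit subsumes the linear trend
    x = x - trend                               # linear and quadratic detrending
    x = (x - x.mean()) / x.std()                # normalize to zero mean, unit variance
    return x[x.size - target_len:]              # drop points from the beginning
```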

None of the empirical data series contained fewer than 2^16 data values. Next, the original data series (2^16 data points) were downsampled by removing every second data point, so that the new data series length was 2^15.

Sampling rate, or sampling frequency, defines the number of samples per second (or per other unit) taken from a continuous signal to make a discrete or digital signal.

For time-domain signals, like the waveforms for sound and other audio-visual content types, frequencies are measured in hertz (Hz), or cycles per second. The Nyquist-Shannon sampling theorem (Nyquist principle) states that perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal being sampled.

For example, if an audio signal has an upper limit of 20,000 Hz (the approximate upper limit of human hearing), a sampling frequency greater than 40,000 Hz (40 kHz) will avoid aliasing and allow theoretically perfect reconstruction. Many authorities in the preservation of sound recordings, like the International Association of Sound and Audiovisual Archives (IASA), recommend sampling rates that can encode audio beyond the range of human hearing.

Table 1: Alias vs. actual frequency. And what does an alias frequency look like? That's the insidious thing: it looks just like real data. If we were to acquire data in the manner described in Table 1, with f_in above half the sample rate, we'd see the gray alias waveform shown in Figure 1 instead of the black waveform that was actually connected to our data acquisition system.

Aside from the lower frequency, can you tell the difference between the real signal and the ghost? To further complicate things, most of us don't run around acquiring pure sine waves. The typical waveform is a complex assemblage of many frequencies, and a recorded waveform that's aliased might look perfectly reasonable but lead you to exactly the wrong conclusions.

Figure 1 - A higher-frequency waveform (black) produces an aliased, lower-frequency waveform (gray) when under-sampled. Circling back around to where this application note began, we can satisfy the Nyquist sample rate criterion of two times the maximum signal frequency of interest only if we ensure that no other frequency components higher than this limit exist in the signal. Unless we have a high degree of confidence in the frequency content of the signal source, the only way to achieve this condition is to apply the input signal to a low-pass anti-aliasing filter before digitizing it.
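A sketch with SciPy (the rates and filter order are illustrative): low-pass the signal below the new Nyquist frequency, then downsample; scipy.signal.decimate bundles the same filter-then-downsample step:

```python
# Anti-alias filtering before a sample-rate reduction.
import numpy as np
from scipy.signal import butter, filtfilt, decimate

fs, factor = 1000.0, 4                    # illustrative: 1000 Hz down to 250 Hz
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

# Manual route: Butterworth low-pass with the corner below the new Nyquist (125 Hz).
b, a = butter(8, (fs / factor / 2) * 0.8, fs=fs)
x_safe = filtfilt(b, a, x)[::factor]      # the 300 Hz component is removed, not aliased

x_easy = decimate(x, factor)              # convenience route: filter + downsample
```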

An in-depth discussion of anti-alias filters is beyond the scope of this application note. Figure 2 is a graphical representation of the ideal anti-alias filter: a flat passband with a perfectly sharp cutoff at the Nyquist frequency. Note that the ideal perpendicular shape of the transition band is not possible in actual filter design, producing instead a roll-off with some negative slope. This reality forces a compromise in the form of either a lower corner frequency or a higher sample rate.

For example, the human ear can respond to frequencies up to 20 kHz. If an anti-alias filter that adhered to the ideal were possible, music could be digitized using a sample rate of 40 kHz. However, the standard rate of 44.1 kHz is higher than that, precisely to accommodate a realizable filter's roll-off. Figure 2 - Graphical representation of an ideal anti-alias filter. There is a cross-section of pundits in this field who insist that data acquired without an anti-alias filter are useless.

These same people would probably insist that you wear your seatbelt just to pull your car into your garage, because "seatbelts save lives." The truth is that not every measurement needs an anti-alias filter. Anyone who disagrees with this statement should ask himself or herself whether a filter is needed to measure battery voltage - pure 0 Hz. If not, then we've at least cracked the door to compromise, and we can open it further to include the measurement of other DC or near-DC signals: temperature, humidity, DC current, flow, pressure, load, torque, spectrograms, GSR, smooth and skeletal muscle baths, etc.

We're starting to cover a lot of measurement territory without the need for a filter.



