Cross-correlation functions

Properties of autocorrelation functions

Autocorrelation functions play an important role in the representation of random processes and in the analysis of systems operating with random input signals. Therefore, we present some properties of autocorrelation functions of stationary processes.

1. R_x(0) = M(X²(t)) = D_x(t).

2. R_x(τ) = R_x(−τ). The autocorrelation function is an even function. This symmetry of the graph is extremely useful in calculations: the function need only be computed for positive τ, and its values for negative τ follow from the symmetry property.

3. |R_x(τ)| ≤ R_x(0). The autocorrelation function attains its largest value at τ = 0.

Example. For the random process X(t) = A cos ωt, where A is a random variable with characteristics M(A) = 0, D(A) = σ², find M(X), D(X) and R_x(t₁, t₂).

Solution. Let's find the mathematical expectation and variance of the random process:

M(X) = M(A cos ωt) = cos ωt · M(A) = 0,

D(X) = M((A cos ωt − 0)²) = M(A²) cos² ωt = σ² cos² ωt.

Now let's find the autocorrelation function

R_x(t₁, t₂) = M(A cos ωt₁ · A cos ωt₂) = M(A²) cos ωt₁ cos ωt₂ = σ² cos ωt₁ cos ωt₂.
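
As a quick numerical check of this example, the sketch below estimates M(X), D(X) and R_x(t₁, t₂) by Monte Carlo simulation. The values ω = 2.0, σ = 1.5, t₁ = 0.3, t₂ = 1.1 and the use of NumPy are illustrative assumptions, not part of the original example.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch)
omega, sigma = 2.0, 1.5
t1, t2 = 0.3, 1.1

rng = np.random.default_rng(0)
A = rng.normal(0.0, sigma, size=200_000)      # M(A) = 0, D(A) = sigma^2

X1 = A * np.cos(omega * t1)                   # samples of X(t1)
X2 = A * np.cos(omega * t2)                   # samples of X(t2)

print(X1.mean())                              # ~ 0 = M(X)
print(X1.var(), sigma**2 * np.cos(omega*t1)**2)        # D(X) = sigma^2 cos^2(wt)
print((X1 * X2).mean(),
      sigma**2 * np.cos(omega*t1) * np.cos(omega*t2))  # R_x(t1, t2)
```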

The input X(t) and output Y(t) random signals of the system can be considered as a two-dimensional vector random process. Let us introduce the numerical characteristics of this process.

The mathematical expectation and variance of a vector random process are defined as the mathematical expectations and variances of its components:

m(t) = (m_x(t), m_y(t)), D(t) = (D_x(t), D_y(t)).

We introduce the correlation function of the vector process as a second-order matrix:

R(t₁, t₂) = | R_x(t₁, t₂)   R_xy(t₁, t₂) |
            | R_yx(t₁, t₂)  R_y(t₁, t₂)  |

where R_xy(t₁, t₂) is the cross-correlation function of the random processes X(t) and Y(t), defined as follows:

R_xy(t₁, t₂) = M[(X(t₁) − m_x(t₁))(Y(t₂) − m_y(t₂))].

From the definition of the cross-correlation function it follows that

R_xy(t₁, t₂) = R_yx(t₂, t₁).

The normalized cross-correlation function of two random processes X(t), Y(t) is the function

r_xy(t₁, t₂) = R_xy(t₁, t₂) / (σ_x(t₁) σ_y(t₂)).

Definition. If the cross-correlation function of random processes X(t) and Y(t) is identically zero,

R_xy(t₁, t₂) = 0,

then the random processes are called uncorrelated.

For the sum of random processes X(t) and Y(t), the autocorrelation function is equal to

R_{x+y}(t₁, t₂) = R_x(t₁, t₂) + R_xy(t₁, t₂) + R_yx(t₁, t₂) + R_y(t₁, t₂).

For uncorrelated random processes X(t) and Y(t), the autocorrelation function of the sum of random processes is equal to the sum of the autocorrelation functions

R_{x+y}(t₁, t₂) = R_x(t₁, t₂) + R_y(t₁, t₂),



and therefore the variance of the sum of random processes is equal to the sum of the variances:

D_{x+y}(t) = D_x(t) + D_y(t).

If X(t) = X₁(t) + … + X_n(t), where X₁(t), …, X_n(t) are pairwise uncorrelated random processes, then

R_x(t₁, t₂) = R_{x₁}(t₁, t₂) + … + R_{x_n}(t₁, t₂).

When performing various transformations with random processes, it is often convenient to write them in complex form.

A complex random process is a random process of the form

Z(t) = X(t) + i Y(t),

where X(t) , Y(t) are real random processes.

The mathematical expectation, correlation function and variance of a complex random process are defined as follows:

M(Z) = M(X) + i M(Y),

R_z(t₁, t₂) = M(Ż(t₁) Ż*(t₂)), D_z(t) = R_z(t, t),

where the sign * denotes complex conjugation and Ż = Z − M(Z) is the centered process;

Example. Let Z(t) = A e^{i(ωt+φ)} be a random process, where ω is a constant, and A and φ are independent random variables with M(A) = m_A, D(A) = σ², and φ uniformly distributed on the interval [0, 2π]. Determine the mathematical expectation, correlation function and variance of the complex random process Z(t).

Solution. Let's find the mathematical expectation:

M(Z) = M(A e^{i(ωt+φ)}) = M(A) e^{iωt} M(e^{iφ}).

Using the uniform distribution of the random variable φ on the interval [0, 2π], we have

M(e^{iφ}) = (1/2π) ∫₀^{2π} e^{iφ} dφ = 0, hence M(Z) = 0.

The autocorrelation function of the random process Z(t) is equal to

R_z(t₁, t₂) = M(Z(t₁) Z*(t₂)) = M(A²) e^{iω(t₁−t₂)} = (σ² + m_A²) e^{iω(t₁−t₂)}.

From here we have

D_z(t) = R_z(t, t) = σ² + m_A².

From the results obtained it follows that the random process Z(t) is stationary in the broad sense.

The set of continuous functions of a real variable {U_n(t)} = {U₀(t), U₁(t), …} is called orthogonal on the interval [t₀, t₀ + T] if

∫_{t₀}^{t₀+T} U_m(t) U_n(t) dt = 0 for m ≠ n, and ∫_{t₀}^{t₀+T} U_n²(t) dt = c.

When c = 1, the set {U_n(t)} is called orthonormal.

A signal is represented through its expansion coefficients as

s(t) = Σ_n c_n U_n(t).

Due to the orthogonality conditions we will have

c_n = (1/c) ∫_{t₀}^{t₀+T} s(t) U_n(t) dt.

1. Cross-correlation function. Autocorrelation function.

Correlation is a mathematical operation, similar to convolution, that produces a third signal from two signals. Two kinds are distinguished: autocorrelation (the autocorrelation function) and cross-correlation (the cross-correlation function). Example:

[Cross correlation function]

[Autocorrelation function]

Correlation is a technique for detecting previously known signals against a background of noise, also called optimal filtering. Although correlation is very similar to convolution, they are calculated differently, and their areas of application are also different (c(t) = a(t)*b(t) is the convolution of two functions, d(t) = a(t)*b(−t) is the cross-correlation).

Correlation is the same as convolution, except that one of the signals is reversed in time. The autocorrelation function characterizes the degree of connection between a signal and a time-shifted copy of itself. The cross-correlation function characterizes the degree of connection between two different signals.
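
A minimal sketch of this relationship in NumPy (the signals a and b are arbitrary illustrative arrays): correlating a with b gives the same result as convolving a with the time-reversed b.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 0.5, 0.25])

corr = np.correlate(a, b, mode='full')        # d(t): a cross-correlated with b
conv = np.convolve(a, b[::-1], mode='full')   # a(t) * b(-t): convolution with reversed b
print(np.allclose(corr, conv))                # True: identical results
```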


Topic 6. SIGNAL CORRELATION

Extreme fear and extreme ardor of courage alike upset the stomach and cause diarrhea.

Michel Montaigne. French lawyer-thinker, 16th century.

What a thing! Two functions can each have one hundred percent correlation with a third and yet be orthogonal to each other. Well, the Almighty had His jokes during the creation of the World.

Anatoly Pyshmintsev. Novosibirsk geophysicist of the Ural school, 20th century.

1. Autocorrelation functions of signals. The concept of the autocorrelation function (ACF). ACF of time-limited signals. ACF of periodic signals. Autocovariance functions. ACF of discrete signals. ACF of noisy signals. ACF of code signals.

2. Cross-correlation functions of signals (CCF). The cross-correlation function and its properties. Cross-correlation of noisy signals. CCF of discrete signals. Estimation of periodic signals in noise. Function of cross-correlation coefficients.

3. Spectral densities of correlation functions. Spectral density of the ACF. Signal correlation interval. Spectral density of the CCF. Calculation of correlation functions using the FFT.

Introduction

Correlation, and its special case for centered signals - covariance, is a signal analysis method. We present one of the options for using the method. Let us assume that there is a signal s(t), which may (or may not) contain some sequence x(t) of finite length T, the temporal position of which interests us. To search for this sequence in a time window of length T sliding along the signal s(t), the scalar products of the signals s(t) and x(t) are calculated. Thus, we “apply” the desired signal x(t) to the signal s(t), sliding along its argument, and by the value of the scalar product we estimate the degree of similarity of the signals at the points of comparison.
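
The sliding scalar product described above can be sketched in a few lines of NumPy. The sequence x, the signal length, the embedding position and the noise level are all assumptions made for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # known sequence of finite length T
s = rng.normal(0.0, 1.0, 300)               # observed signal s(t)
s[120:120 + x.size] += 3.0 * x              # hide a scaled copy of x at position 120

# scalar products of x with s in a window sliding along the signal
scores = np.correlate(s, x, mode='valid')
print(scores.argmax())                      # ~ 120: the position of best similarity
```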


Correlation analysis makes it possible to establish, in signals (or in series of digital signal data), the presence of a connection between changes in signal values along an independent variable: whether large values of one signal (relative to its mean) are associated with large values of another signal (positive correlation), or, conversely, small values of one signal are associated with large values of the other (negative correlation), or the data of the two signals are not related at all (zero correlation).

In the functional space of signals, this degree of connection can be expressed in normalized units of the correlation coefficient, i.e., by the cosine of the angle between the signal vectors; accordingly, it takes values from 1 (complete coincidence of the signals) to −1 (complete opposition) and does not depend on the value (scale) of the units of measurement.

In the autocorrelation version, a similar technique is used to compute the scalar product of the signal s(t) with its own copy sliding along the argument. Autocorrelation allows one to estimate the average statistical dependence of current signal samples on their previous and subsequent values (the so-called correlation radius of signal values), as well as to identify the presence of periodically repeating elements in the signal.

Correlation methods are of particular importance in the analysis of random processes to identify non-random components and evaluate the non-random parameters of these processes.

Note that there is some confusion regarding the terms "correlation" and "covariance". In the mathematical literature, the term "covariance" is applied to centered functions, and "correlation" to arbitrary ones. In the technical literature, and especially in the literature on signals and their processing, the exact opposite terminology is often used. This is not of fundamental importance, but when consulting the literature it is worth paying attention to the accepted usage of these terms.

6.1. Autocorrelation functions of signals.

The concept of the autocorrelation function of signals. The autocorrelation function (ACF) of a signal s(t) with finite energy is a quantitative integral characteristic of the signal shape. It reveals the nature and parameters of the mutual temporal dependence of samples within the signal, which always exists for periodic signals, as well as the interval and degree of dependence of sample values at the current moment on the signal's prior history. The ACF is defined by the integral of the product of two copies of the signal s(t), shifted relative to each other by a time τ:

B_s(τ) = ∫_{−∞}^{∞} s(t) s(t+τ) dt = ⟨s(t), s(t+τ)⟩ = ||s(t)|| · ||s(t+τ)|| cos φ(τ). (6.1.1)

As follows from this expression, the ACF is the scalar product of the signal and its copy, considered as a function of the shift τ. Accordingly, the ACF has the physical dimension of energy, and at τ = 0 the value of the ACF equals the signal energy and is the maximum possible (the cosine of the angle of the signal with itself is equal to 1):

B_s(0) = ∫ s²(t) dt = E_s.

The ACF is an even function, which is easy to verify by the change of variable t → t − τ in expression (6.1.1):

B_s(τ) = ∫ s(t−τ) s(t) dt = B_s(−τ).

The maximum of the ACF, equal to the signal energy at τ = 0, is always positive, and the modulus of the ACF does not exceed the signal energy at any shift. The latter follows directly from the properties of the scalar product (and from the Cauchy-Bunyakovsky inequality):


⟨s(t), s(t+τ)⟩ = ||s(t)|| · ||s(t+τ)|| · cos φ(τ),

cos φ(τ) = 1 at τ = 0: ⟨s(t), s(t)⟩ = ||s(t)|| · ||s(t)|| = E_s,

cos φ(τ) < 1 at τ ≠ 0: ⟨s(t), s(t+τ)⟩ = ||s(t)|| · ||s(t+τ)|| · cos φ(τ) < E_s.

As an example, Fig. 6.1.1 shows two signals (a rectangular pulse and a radio pulse of the same duration T) together with the shapes of their ACFs. The amplitude of the radio pulse oscillations is set equal to the amplitude of the rectangular pulse, so the signal energies are also equal, which is confirmed by the equal values of the central maxima of the ACFs. For finite pulse durations the ACF durations are also finite, equal to twice the pulse duration (when a copy of a finite pulse is shifted by an interval of its full duration, either to the left or to the right, the product of the pulse with its copy becomes zero). The oscillation frequency of the ACF of a radio pulse equals the oscillation frequency of its filling (the side minima and maxima of the ACF occur at successive shifts of the radio-pulse copy by half the period of its filling oscillations).

Given the evenness, a graphical representation of the ACF is usually given only for positive values of τ. In practice, signals are usually specified on an interval of positive argument values from 0 to T. The +τ sign in expression (6.1.1) means that as τ increases, the copy s(t+τ) shifts to the left along the t axis and goes beyond 0. For digital signals this would require extending the data into the region of negative argument values. Since in calculations the interval of τ values is usually much smaller than the interval on which the signal is specified, it is more practical to shift the copy of the signal to the left along the argument axis, i.e., to use s(t−τ) instead of s(t+τ) in expression (6.1.1):

B_s(τ) = ∫ s(t) s(t−τ) dt. (6.1.1')

For finite signals, as the shift τ increases, the temporal overlap of the signal with its copy decreases, and with it the cosine of the interaction angle and the scalar product as a whole tend to zero:

B_s(τ) → 0 as |τ| → ∞.

The ACF calculated for the centered signal s(t) − m_s is the autocovariance function of the signal:

C_s(τ) = ∫ [s(t) − m_s][s(t+τ) − m_s] dt, (6.1.2)

where m_s is the mean value of the signal. Covariance functions are related to correlation functions by a fairly simple relationship:

C_s(τ) = B_s(τ) − m_s².

ACF of time-limited signals. In practice, signals specified over a certain interval are usually studied and analyzed. To compare the ACFs of signals specified over different time intervals, a modification of the ACF normalized by the interval length finds practical application. For example, for a signal specified on the interval [0, T]:

B_s(τ) = (1/T) ∫₀^T s(t) s(t+τ) dt. (6.1.3)

The ACF can also be calculated for weakly damped signals with infinite energy, as the average value of the scalar product of the signal and its copy as the signal interval tends to infinity:

B_s(τ) = lim_{T→∞} (1/T) ∫₀^T s(t) s(t+τ) dt. (6.1.4)

The ACF according to these expressions has the physical dimension of power and equals the average mutual power of the signal and its copy as a function of the copy's shift.

ACF of periodic signals. The energy of periodic signals is infinite, therefore the ACF of a periodic signal is calculated over one period T, averaging the scalar product of the signal and its shifted copy within the period:

B_s(τ) = (1/T) ∫₀^T s(t) s(t−τ) dt. (6.1.5)

A mathematically more rigorous expression:

B_s(τ) = lim_{T→∞} (1/T) ∫₀^T s(t) s(t−τ) dt.

At τ = 0, the value of the period-normalized ACF equals the average power of the signal within the period. The ACF of a periodic signal is itself a periodic function with the same period T. Thus, for the signal s(t) = A cos(ω₀t + φ₀) with T = 2π/ω₀ we have:

B_s(τ) = (1/T) ∫₀^T A cos(ω₀t + φ₀) A cos(ω₀(t−τ) + φ₀) dt = (A²/2) cos(ω₀τ). (6.1.6)

The result does not depend on the initial phase of the harmonic signal, which is typical for any periodic signal and is one of the properties of the ACF. Autocorrelation functions can be used to check for periodic properties in arbitrary signals. An example of the autocorrelation function of a periodic signal is shown in Fig. 6.1.2.
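
The phase independence in (6.1.6) is easy to verify numerically. Below is a small sketch (amplitude, frequency and the tested phases are illustrative assumptions) that averages s(t)s(t−τ) over one period for several initial phases and compares the result with (A²/2)cos(ω₀τ):

```python
import numpy as np

A, w0 = 2.0, 2 * np.pi * 5.0                # amplitude and frequency (assumed)
T = 2 * np.pi / w0                          # period
t = np.linspace(0.0, T, 1000, endpoint=False)

def acf_periodic(phi, tau):
    s  = A * np.cos(w0 * t + phi)
    s2 = A * np.cos(w0 * (t - tau) + phi)   # shifted copy
    return (s * s2).mean()                  # (1/T) integral over one period

tau = 0.04
for phi in (0.0, 1.0, 2.5):                 # different initial phases
    print(acf_periodic(phi, tau), (A**2 / 2) * np.cos(w0 * tau))  # all equal
```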

Autocovariance functions are calculated similarly, from the centered values of the signal. A remarkable feature of these functions is their simple relationship with the variance σ_s² of the signal (the square of the standard deviation of the signal values from the mean). As is known, the variance equals the average power of the centered signal, whence:

|C_s(τ)| ≤ σ_s², C_s(0) = σ_s² ≡ ||s(t) − m_s||². (6.1.7)

Autocovariance values normalized by the variance form the function of autocorrelation coefficients:

ρ_s(τ) = C_s(τ)/C_s(0) = C_s(τ)/σ_s² ≡ cos φ(τ). (6.1.8)

This function is sometimes called the "true" autocorrelation function. Due to the normalization, its values do not depend on the units (scale) of representation of the signal values s(t) and characterize the degree of linear relationship between signal values as a function of the shift τ between samples. The values ρ_s(τ) ≡ cos φ(τ) can vary from 1 (complete direct correlation of samples) to −1 (inverse correlation).

Fig. 6.1.3 shows an example of signals s(k) and s1(k) = s(k) + noise, with the corresponding autocorrelation coefficient functions ρ_s and ρ_s1. As can be seen in the graphs, the coefficient functions confidently revealed the presence of periodic oscillations in the signals. The noise in the signal s1(k) reduced the amplitude of the periodic oscillations without changing the period. This is confirmed by the graph of the curve C_s/σ_s1², i.e., the autocovariance of the signal s(k) normalized (for comparison) by the variance of the signal s1(k), where it can be clearly seen that the noise pulses, with complete statistical independence of their samples, increased the value C_s1(0) relative to C_s(0) and somewhat "blurred" the function of the autocovariance coefficients. This is because ρ(τ) of noise signals tends to 1 at τ → 0 and fluctuates around zero at τ ≠ 0, with fluctuation amplitudes that are statistically independent and depend on the number of signal samples (they tend to zero as the number of samples increases).

ACF of discrete signals. With a constant data sampling interval Δt, the ACF is calculated at shifts nΔt and is usually written as a discrete function of the shift number n:

B_s(nΔt) = Δt Σ_k s_k s_{k−n}. (6.1.9)

Discrete signals are usually specified as numerical arrays of a certain length with sample numbering k = 0, 1, …, K−1, at Δt = 1, and the discrete ACF in energy units is computed one-sided, taking the array length into account. If the entire signal array is used and the number of ACF samples equals the number of array samples, the calculation is performed by the formula:

B_s(n) = [K/(K−n)] Σ_{k=n}^{K−1} s_k s_{k−n}. (6.1.10)

The multiplier K/(K−n) in this formula is a correction factor for the gradual decrease in the number of multiplied and summed values as the shift n increases. Without this correction, a trend of summed mean values appears in the ACF of uncentered signals. When measuring in units of signal power, the multiplier K/(K−n) is replaced by 1/(K−n).

Formula (6.1.10) is used quite rarely, mainly for deterministic signals with a small number of samples. For random and noisy signals, the decrease of the denominator (K−n) and of the number of multiplied samples with increasing shift leads to growing statistical fluctuations in the ACF estimate. Greater reliability under these conditions is provided by calculating the ACF in units of signal power by the formula:

B_s(n) = (1/K) Σ_{k=0}^{K−1} s_k s_{k−n}, with s_{k−n} = 0 for k−n < 0, (6.1.11)

i.e., with normalization by the constant factor 1/K and with extension of the signal by zero values (to the left when using shifts k−n, or to the right when using shifts k+n). This estimate is biased but has a smaller variance than the estimate by formula (6.1.10). The difference between the normalizations of (6.1.10) and (6.1.11) can be clearly seen in Fig. 6.1.4.
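
A sketch contrasting the two normalizations (the test signal, a sine plus noise, is an assumption of this illustration): the estimate (6.1.10) in power units, with division by (K−n), fluctuates strongly at large shifts, while the biased estimate (6.1.11) with division by K decays smoothly.

```python
import numpy as np

def acf_unbiased(s):            # per (6.1.10), power units: divide by (K - n)
    K = s.size
    return np.array([(s[n:] * s[:K - n]).sum() / (K - n) for n in range(K)])

def acf_biased(s):              # per (6.1.11): divide by K, zero extension
    K = s.size
    return np.array([(s[n:] * s[:K - n]).sum() / K for n in range(K)])

rng = np.random.default_rng(2)
s = np.sin(0.3 * np.arange(200)) + 0.5 * rng.normal(size=200)
print(acf_unbiased(s)[190])     # large statistical fluctuation (only 10 products)
print(acf_biased(s)[190])       # close to zero, smooth decay
```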

Formula (6.1.11) can be regarded as averaging the sum of products, i.e., as an estimate of the mathematical expectation:

B_s(n) = M(s_k s_{k−n}) ≈ (1/K) Σ_k s_k s_{k−n}. (6.1.12)

In practice, the discrete ACF has the same properties as the continuous ACF: it is even, and its value at n = 0 equals the energy or power of the discrete signal, depending on the normalization.

ACF of noisy signals. A noisy signal is written as the sum v(k) = s(k) + q(k). In general, the noise need not have zero mean, and the power-normalized autocorrelation function of a digital signal containing K samples is written as follows:

B_v(n) = (1/K) ⟨s(k)+q(k), s(k−n)+q(k−n)⟩ =

= (1/K) [⟨s(k), s(k−n)⟩ + ⟨s(k), q(k−n)⟩ + ⟨q(k), s(k−n)⟩ + ⟨q(k), q(k−n)⟩] =

= B_s(n) + M(s_k q_{k−n}) + M(q_k s_{k−n}) + M(q_k q_{k−n}). (6.1.13)

With statistical independence of the useful signal s(k) and the noise q(k), taking into account the factorization of the mathematical expectation

M(s_k q_{k−n}) = M(s_k) M(q_{k−n}) = m_s m_q,

the following formula can be used:

B_v(n) = B_s(n) + 2 m_s m_q + B_q(n). (6.1.13')

An example of a noisy signal and its ACF, in comparison with a noise-free signal, is shown in Fig. 6.1.5.

From formulas (6.1.13) it follows that the ACF of a noisy signal consists of the ACF of its useful component with a superimposed noise contribution that decays, as n grows, to the value 2 m_s m_q + m_q². For large values of K, when the sample means of the noise tend to zero, B_v(n) ≈ B_s(n). This makes it possible not only to detect periodic signals from the ACF when they are almost completely hidden in noise (noise power much greater than signal power), but also to determine with high accuracy their period and within-period shape, and, for single-frequency harmonic signals, their amplitude, using expression (6.1.6).
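
The following sketch illustrates this effect; the period, amplitudes and sample count are assumptions chosen so that the noise power is about eight times the signal power:

```python
import numpy as np

rng = np.random.default_rng(3)
K, period = 20_000, 50
k = np.arange(K)
s = 0.5 * np.sin(2 * np.pi * k / period)    # weak periodic component
v = s + rng.normal(0.0, 1.0, K)             # noise power >> signal power

lags = np.arange(76)
Bv = np.array([(v[n:] * v[:K - n]).mean() for n in lags])
# Bv has a spike at n = 0 (noise variance) plus a cosine of period 50;
# the first maximum after the spike recovers the hidden period:
print(lags[25:][Bv[25:].argmax()])          # ~ 50
```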

Table 6.1. Barker codes and their ACFs.

Barker signal                                    ACF of the signal
1, 1, 1, -1, -1, 1, -1                           7, 0, -1, 0, -1, 0, -1
1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1            11, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1
1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1        13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1

Code signals are a type of discrete signal. On a certain codeword interval M·Δt they can take only two amplitude values: 0 and 1, or 1 and −1. When detecting codes at a significant noise level, the shape of the codeword's ACF is of particular importance. From this point of view, the best codes are those whose ACF side-lobe values are minimal over the entire length of the codeword interval while the central peak is maximal. Such codes include the Barker codes shown in Table 6.1. As can be seen from the table, the amplitude of the central peak equals M, while the amplitude of the side lobes at n ≠ 0 does not exceed 1.
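
A short check of the table (NumPy as the illustration language): the ACF of the 13-element Barker code has a central peak of 13 and side lobes not exceeding 1.

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode='full')
print(acf[barker13.size - 1:])   # [13  0  1  0  1  0  1  0  1  0  1  0  1]
```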

6.2. Cross-correlation functions of signals.

The cross-correlation function (CCF) of two different signals describes both the degree of similarity of their shapes and their mutual position along the coordinate (independent variable). Generalizing formula (6.1.1) of the autocorrelation function to two different signals s(t) and u(t), we obtain the following scalar product of the signals:

B_su(τ) = ∫_{−∞}^{∞} s(t) u(t+τ) dt. (6.2.1)

Cross-correlation of signals characterizes a certain correlation of the phenomena and physical processes reflected by these signals and can serve as a measure of the "stability" of this relationship when the signals are processed separately in different devices. For signals with finite energy the CCF is also finite, and

|B_su(τ)| ≤ ||s(t)|| · ||u(t)||,

which follows from the Cauchy-Bunyakovsky inequality and the independence of the signal norms from the coordinate shift.

With the change of variable t → t − τ in formula (6.2.1) we get:

B_su(τ) = ∫ s(t−τ) u(t) dt = ∫ u(t) s(t−τ) dt = B_us(−τ).

It follows that the evenness condition does not hold for the CCF, B_su(τ) ≠ B_su(−τ), and the CCF values are not required to have a maximum at τ = 0.

This can be clearly seen in Fig. 6.2.1, where two identical signals are given with centers at points 0.5 and 1.5. Calculation by formula (6.2.1) with gradually increasing values of τ means successive shifts of the signal s2(t) to the left along the time axis (for each value of s1(t), the values s2(t+τ) are taken for the integrand product). At τ = 0 the signals are orthogonal and B12(τ) = 0. The maximum of B12(τ) is observed when s2(t) is shifted to the left by τ = 1, at which the signals s1(t) and s2(t+τ) coincide completely.

The same CCF values by formulas (6.2.1) and (6.2.1') are observed at the same relative position of the signals: when the signal u(t) is shifted by an interval τ relative to s(t) to the right along the time axis, or the signal s(t) relative to u(t) to the left, i.e., B_su(τ) = B_us(−τ).

Fig. 6.2.2 shows examples of the CCF for a rectangular signal s(t) and two identical triangular signals u(t) and v(t). All signals have the same duration T, while the signal v(t) is shifted forward by the interval T/2.

The signals s(t) and u(t) are identical in temporal position, and the "overlap" area of the signals is maximal at τ = 0, which is fixed by the function B_su. At the same time, B_su is sharply asymmetric, since for the asymmetric shape of u(t) and the symmetric shape of s(t) (relative to the signal centers), the "overlap" area changes differently depending on the direction of the shift (the sign of τ as its magnitude grows from zero). When the initial position of u(t) is shifted to the left along the time axis (ahead of the signal s(t), as with the signal v(t)), the shape of the CCF remains unchanged and shifts to the right by the same shift value (function B_sv in Fig. 6.2.2). If we swap the functions in (6.2.1), the new function B_vs is the function B_sv mirrored about τ = 0.

Taking these features into account, the total CCF is calculated, as a rule, separately for positive and negative delays:

B_su(τ) = ∫ s(t) u(t+τ) dt. B_us(τ) = ∫ u(t) s(t+τ) dt. (6.2.1')
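
A sketch of the shift-detection property discussed above, using two identical rectangular pulses, one of which is delayed (the grid, the pulse durations and the delay of 1.0 are assumed values): the CCF peaks at the lag equal to the mutual shift, not at τ = 0.

```python
import numpy as np

t = np.linspace(0.0, 4.0, 800)
dt = t[1] - t[0]
s = np.where((t >= 0.0) & (t < 1.0), 1.0, 0.0)   # pulse on [0, 1)
u = np.where((t >= 1.0) & (t < 2.0), 1.0, 0.0)   # same pulse delayed by 1.0

ccf = np.correlate(u, s, mode='full') * dt       # B_su over all lags
lags = (np.arange(ccf.size) - (s.size - 1)) * dt
print(lags[ccf.argmax()])                        # ~ 1.0: the delay of u
```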

Cross-correlation of noisy signals. For two noisy signals u(t) = s1(t) + q1(t) and v(t) = s2(t) + q2(t), using the technique of deriving formula (6.1.13) with the copy of s(t) replaced by the signal s2(t), it is easy to derive the cross-correlation formula in the following form:

B_uv(τ) = B_s1s2(τ) + B_s1q2(τ) + B_q1s2(τ) + B_q1q2(τ). (6.2.2)

The last three terms on the right side of (6.2.2) decay to zero as τ increases. For large signal intervals the expression can be written in the following form:

B_uv(τ) = B_s1s2(τ) + m_s1 m_q2 + m_q1 m_s2 + m_q1 m_q2. (6.2.3)

With zero mean values of the noise and statistical independence of the noise from the signals:

B_uv(τ) → B_s1s2(τ).

CCF of discrete signals. All properties of the CCF of analog signals also hold for the CCF of discrete signals, as do the features of discrete signals outlined above for the discrete ACF (formulas 6.1.9-6.1.12). In particular, at Δt = const = 1 for signals x(k) and y(k) with K samples:

B_xy(n) = Σ_k x_k y_{k−n}. (6.2.4)

When normalized in power units:

B_xy(n) = (1/K) Σ_k x_k y_{k−n} ≈ M(x_k y_{k−n}). (6.2.5)

Estimation of periodic signals in noise. A noisy signal can be estimated by cross-correlation with a "reference" signal by trial and error, adjusting the template so as to maximize the cross-correlation function.

For a signal u(k) = s(k) + q(k), with statistical independence of the noise and m_q → 0, the cross-correlation (6.2.2) with a signal template p(k) (for which q2(k) = 0) takes the form:

B_up(k) = B_sp(k) + B_qp(k) = B_sp(k) + m_q m_p.

Since m_q → 0 as N increases, B_up(k) → B_sp(k). Obviously, B_up(k) has a maximum when p(k) = s(k). By varying the shape of the template p(k) and maximizing the function B_up(k), one can obtain an estimate of s(k) in the form of the optimal shape of p(k).

The function of cross-correlation coefficients is a quantitative indicator of the degree of similarity of signals s(t) and u(t). Like the function of autocorrelation coefficients, it is calculated from the centered values of the functions (to compute the cross-covariance it suffices to center only one of them) and is normalized by the product of the standard deviations of the functions s(t) and u(t):

ρ_su(τ) = C_su(τ)/(σ_s σ_u). (6.2.6)

The values of the correlation coefficients over shifts τ can vary from −1 (complete inverse correlation) to 1 (complete similarity, one hundred percent correlation). At shifts τ where ρ_su(τ) is zero, the signals are independent of each other (uncorrelated). The cross-correlation coefficient makes it possible to establish the presence of a connection between signals regardless of the physical nature of the signals and their magnitude.
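
A minimal sketch of (6.2.6) for discrete signals (the delay of 3 samples and the noise level are assumed): the function of cross-correlation coefficients peaks at the true delay with a value close to 1.

```python
import numpy as np

def rho_xy(x, y, n):
    """Cross-correlation coefficient at shift n >= 0 (centered, normalized)."""
    x0, y0 = x - x.mean(), y - y.mean()
    c = (x0[:x0.size - n] * y0[n:]).mean()       # cross-covariance C(n)
    return c / (x0.std() * y0.std())

rng = np.random.default_rng(4)
x = rng.normal(size=1000)
y = np.roll(x, 3) + 0.1 * rng.normal(size=1000)  # y is x delayed by 3, plus noise
print(max(range(20), key=lambda n: rho_xy(x, y, n)))   # 3
```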

When calculating the CCF of noisy discrete signals of limited length by formula (6.2.4), there is a probability of obtaining values |ρ_su(n)| > 1.

For periodic signals, the concept of CCF is usually not applied, except for signals with the same period, for example, input and output signals when studying the characteristics of systems.

6.3. Spectral densities of correlation functions.

The spectral density of the ACF can be determined from the following simple considerations.

In accordance with expression (6.1.1), the ACF is the scalar product of the signal and its copy shifted by the interval τ, for −∞ < τ < ∞:

B_s(τ) = ⟨s(t), s(t−τ)⟩.

The scalar product can be expressed through the spectral densities of the signal and its copy, whose product is the mutual power spectral density:

⟨s(t), s(t−τ)⟩ = (1/2π) ∫ S(ω) S_τ*(ω) dω.

A shift of the signal along the abscissa axis by the interval τ is represented in the spectral domain by multiplying the signal spectrum by exp(−jωτ), and the conjugate spectrum by the factor exp(jωτ):

S_τ*(ω) = S*(ω) exp(jωτ).

Taking this into account we get:

B_s(τ) = (1/2π) ∫ S(ω) S*(ω) exp(jωτ) dω =

= (1/2π) ∫ |S(ω)|² exp(jωτ) dω. (6.3.1)

But the last expression is the inverse Fourier transform of the energy spectrum (spectral energy density) of the signal. Consequently, the energy spectrum of the signal and its autocorrelation function are related by the Fourier transform:

B_s(τ) ⇔ |S(ω)|² = W_s(ω). (6.3.2)

Thus, the spectral density of the ACF is nothing other than the power spectral density of the signal, which in turn can be determined by the direct Fourier transform of the ACF:

|S(ω)|² = ∫ B_s(τ) exp(−jωτ) dτ. (6.3.3)

The latter expression imposes certain restrictions on the form of the ACF and on the way its duration may be limited.
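
Relation (6.3.2) can be checked numerically: the inverse FFT of |S(ω)|² reproduces the directly computed ACF, provided the signal is zero-padded so that the cyclic correlation does not wrap around (a random test signal is assumed in this sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
s = rng.normal(size=256)
K = s.size

acf_direct = np.correlate(s, s, mode='full')      # all 2K-1 lags, time domain

S = np.fft.fft(s, 2 * K)                          # zero-padded spectrum
acf_fft = np.fft.ifft(np.abs(S)**2).real          # inverse FFT of |S(w)|^2
# negative lags come out in the upper half of the IFFT output; reorder them
acf_fft = np.concatenate([acf_fft[-(K - 1):], acf_fft[:K]])

print(np.allclose(acf_direct, acf_fft))           # True
```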

Fig. 6.3.1. Spectrum of an ACF that cannot exist.

The energy spectrum of a signal is always positive: signal power cannot be negative. Consequently, the ACF cannot have the shape of a rectangular pulse, since the Fourier transform of a rectangular pulse is a sign-alternating sinc function. There should be no discontinuities of the first kind (jumps) in the ACF, since, given the evenness of the ACF, any symmetric jump along the ±τ coordinate splits the ACF into the sum of a continuous function and a rectangular pulse of duration 2τ, with the corresponding appearance of negative values in the energy spectrum. An example of the latter is shown in Fig. 6.3.1 (the graphs of the functions are shown, as is customary for even functions, only by their right-hand side).

ACFs of sufficiently extended signals are usually limited in size (correlation intervals of the data from −T/2 to T/2 are studied). However, truncating the ACF means multiplying it by a rectangular selection pulse of duration T, which in the frequency domain corresponds to convolving the true power spectrum with the sign-alternating function sinc(ωT/2). On the one hand, this causes a certain smoothing of the power spectrum, which is often useful, for example, when studying signals at a significant noise level. On the other hand, the magnitude of energy peaks can be significantly underestimated if the signal contains harmonic components, and negative power values can appear at the edges of peaks and jumps. An example of the manifestation of these factors is shown in Fig. 6.3.2.

Fig. 6.3.2. Calculation of the energy spectrum of a signal using ACFs of different lengths.

As is known, signal power spectra have no phase characteristic, and signals cannot be reconstructed from them. Consequently, the ACF of a signal, being a time-domain representation of the power spectrum, likewise carries no information about the phase characteristics of the signal, and reconstruction of a signal from its ACF is impossible. Signals of the same shape shifted in time have the same ACF. Moreover, signals of different shapes may have similar ACFs if their power spectra are similar.

Let us rewrite equation (6.3.1) in the following form:

∫ s(t) s(t−τ) dt = (1/2π) ∫ S(ω) S*(ω) exp(jωτ) dω,

and substitute the value τ = 0 into this expression. The resulting equality is well known and is called Parseval's equality:

∫ s²(t) dt = (1/2π) ∫ |S(ω)|² dω.

It allows the signal energy to be calculated in either the time or the frequency domain of the signal description.

The signal correlation interval is a numerical parameter estimating the width of the ACF and the extent over which signal values are significantly correlated in the argument.

If we assume that the signal s(t) has an approximately uniform energy spectrum with the value W₀ up to an upper cutoff frequency ω_B (a centered rectangular spectral pulse, such as signal 1 in Fig. 6.3.3 with f_B = 50 Hz in one-sided representation), then the ACF of the signal is given by:

B_s(τ) = (W₀/π) ∫₀^{ω_B} cos(ωτ) dω = (W₀ ω_B/π) sin(ω_B τ)/(ω_B τ).

The signal correlation interval τ_k is taken to be the width of the central peak of the ACF from the maximum to the first zero crossing. For a rectangular spectrum with upper cutoff frequency ω_B, the first zero crossing corresponds to sinc(ω_B τ) = 0 at ω_B τ = π, whence:

τ_k = π/ω_B = 1/(2f_B). (6.3.4)
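
A numerical check of (6.3.4), assuming f_B = 50 Hz as for signal 1 in Fig. 6.3.3: the first zero crossing of the sinc-shaped ACF lands at 1/(2f_B) = 0.01 s.

```python
import numpy as np

fB = 50.0                                   # upper cutoff frequency, Hz (assumed)
wB = 2 * np.pi * fB
tau = np.linspace(1e-6, 0.05, 50_000)
acf = np.sin(wB * tau) / (wB * tau)         # sinc ACF of a flat spectrum
tau_k = tau[np.argmax(acf < 0)]             # first zero crossing of the ACF
print(tau_k, 1 / (2 * fB))                  # both ~ 0.01 s
```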

The higher the upper cutoff frequency of the signal spectrum, the smaller the correlation interval. For signals with a smooth roll-off at the upper cutoff frequency, the role of the parameter ω_B is played by the average width of the spectrum (signal 2 in Fig. 6.3.3).

The power spectral density of statistical noise in a single measurement is a random function W_q(ω) with mean value W_q(ω) → σ_q², where σ_q² is the noise variance. In the limit, with a uniform spectral distribution of the noise from 0 to ∞, the noise ACF tends to B_q(τ) → σ_q² at τ → 0 and B_q(τ) → 0 at τ ≠ 0, i.e., statistical noise is uncorrelated (τ_k → 0).

Practical calculations of the ACF of finite signals are usually limited to shifts τ in the interval [0, (3…5)τ_k], in which, as a rule, the main information on the autocorrelation of the signal is concentrated.

The spectral density of the CCF can be obtained from the same considerations as for the ACF, or directly from formula (6.3.1) by replacing the spectral density of the signal S(ω) with the spectral density of the second signal U(ω):

B_su(τ) = (1/2π) ∫ S*(ω) U(ω) exp(jωτ) dω. (6.3.5)

Or, swapping the order of the signals:

B_us(τ) = (1/2π) ∫ U*(ω) S(ω) exp(jωτ) dω. (6.3.5')

The product S*(ω)U(ω) is the mutual energy spectrum W_su(ω) of the signals s(t) and u(t); correspondingly, U*(ω)S(ω) = W_us(ω). Therefore, like the ACF, the cross-correlation function and the mutual power spectral density of the signals are related by Fourier transforms:

B_su(τ) ⇔ W_su(ω) ≡ W*_us(ω), (6.3.6)

B_us(τ) ⇔ W_us(ω) ≡ W*_su(ω). (6.3.6')

In the general case, except for the spectra of even functions, it follows from the non-evenness of the CCF that the mutual energy spectra are complex functions:

U(ω) = A_u(ω) + j B_u(ω), V(ω) = A_v(ω) + j B_v(ω),

W_uv = A_u A_v + B_u B_v + j(B_u A_v − A_u B_v) = Re W_uv(ω) + j Im W_uv(ω).

Fig. 6.3.4 clearly shows the features of the formation of the CCF using the example of two signals of the same shape shifted relative to each other.

Fig. 6.3.4. Formation of the CCF.

The shape of the signals and their relative position are shown in view A. The modulus and argument of the spectrum of the signal s(t) are shown in view B. The spectrum modulus of u(t) is identical to the modulus of S(ω). The same view shows the modulus of the mutual power spectrum S(ω)U*(ω). As is known, when complex spectra are multiplied, their moduli are multiplied and their phase angles are added, while for the conjugate spectrum U*(ω) the phase angle changes sign. If the first signal in the CCF formula (6.2.1) is s(t), and u(t−τ) is ahead of s(t) on the time axis, then the phase angles of S(ω) grow toward negative values with increasing frequency (without the periodic wrap-around of values by 2π), while the phase angles of U*(ω) are smaller in absolute value and grow (because of the conjugation) toward positive values. The result of multiplying the spectra (as can be seen in Fig. 6.3.4, view C) is the subtraction of the angles of U*(ω) from the phase angles of S(ω); the phase angles of S(ω)U*(ω) remain in the region of negative values, which ensures a shift of the entire CCF (and of its peak values) to the right of zero along the τ axis by a certain amount (for identical signals, by the amount of their mutual shift along the time axis). When the initial position of u(t) is shifted toward s(t), the phase angles of S(ω)U*(ω) decrease, in the limit reaching zero when the signals are fully aligned, while the function B_su(τ) shifts toward zero values of τ, in the limit degenerating into the ACF (for identical signals s(t) and u(t)).

As is known, if the spectra of two deterministic signals do not overlap, their mutual energy is zero and the signals are orthogonal. The connection between energy spectra and correlation functions shows another side of signal interaction: if the spectra of the signals do not overlap and their mutual energy spectrum is zero at all frequencies, then for any time shift τ their CCF is also zero, i.e., such signals are uncorrelated. This holds for deterministic and random signals and processes alike.

Calculation of correlation functions using the FFT is, especially for long numerical series, tens to hundreds of times faster than successive shifts in the time domain at large correlation intervals. The essence of the method follows from formula (6.3.2) for the ACF and (6.3.6) for the CCF. Since the ACF can be regarded as a special case of the CCF of a signal with itself, we consider the procedure for the CCF of signals x(k) and y(k) with K samples. It includes:

1. Calculation of the FFT spectra of the signals: x(k) → X(k), y(k) → Y(k). If the numbers of samples differ, the shorter array is padded with zeros to the size of the longer one.

2. Calculation of the mutual power spectrum: W_xy(k) = X*(k) Y(k).

3. Inverse FFT: W_xy(k) → B_xy(k).

Let us note some features of the method.

The inverse FFT, as is known, computes the cyclic convolution of the functions x(k) ⊛ y(k). If the number of samples of the functions is K, the number of complex samples of their spectra is also K, as is the number of samples of the product W_xy(k). Accordingly, the number of samples of B_xy(k) after the inverse FFT is also K, cyclically repeated with period K. Meanwhile, in the linear convolution of full signal arrays by formula (6.2.5), the size of one half of the CCF alone is K points, and the full two-sided size is 2K points. Consequently, in the inverse FFT, given the cyclic nature of the convolution, the side periods of the CCF will overlap the main period, as in the ordinary cyclic convolution of two functions.

Fig. 6.3.5 shows an example of two signals and the CCF values computed by linear convolution (B1xy) and by cyclic convolution via the FFT (B2xy). To eliminate the overlap of the side periods, the signals must be padded with zeros, in the limit up to doubling the number of samples; the FFT result then (graph B3xy in Fig. 6.3.5) exactly repeats the result of linear convolution (up to the normalization for the increased number of samples).

In practice, the number of padding zeros depends on the nature of the correlation function. The minimum number of zeros is usually taken equal to the significant informational part of the functions, i.e., about (3…5) correlation intervals.
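
The full procedure, including the zero padding discussed above, can be sketched directly in NumPy (random test signals are assumed; note that the negative lags of the result appear in the upper half of the inverse-FFT output and must be reordered for comparison with the direct linear calculation):

```python
import numpy as np

rng = np.random.default_rng(6)
x, y = rng.normal(size=1000), rng.normal(size=1000)
K = x.size

X = np.fft.fft(x, 2 * K)                   # 1. spectra, zero-padded to 2K samples
Y = np.fft.fft(y, 2 * K)
Wxy = np.conj(X) * Y                       # 2. mutual power spectrum X*(k) Y(k)
b = np.fft.ifft(Wxy).real                  # 3. inverse FFT -> Bxy, cyclic

b_full = np.concatenate([b[-(K - 1):], b[:K]])        # reorder the lags
print(np.allclose(b_full, np.correlate(y, x, mode='full')))   # True: no wrap-around
```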



The cross-correlation function is a standard method for assessing the degree of correlation of two sequences. It is often used to search a long sequence for a shorter, previously known one. Consider two series f and g. The cross-correlation is defined by the formula:

(f \star g)_i \ \stackrel{\mathrm{def}}{=}\ \sum_j f^*_j\, g_{i+j},

where i is the shift between the sequences relative to each other, and the asterisk superscript denotes complex conjugation. In general, for continuous functions f(t) and g(t) the cross-correlation is defined as

(f \star g)(t)\ \stackrel{\mathrm{def}}{=}\ \int_{-\infty}^{\infty} f^*(\tau)\, g(t+\tau)\, d\tau.

If X and Y are two independent random variables with probability density functions f and g respectively, then the cross-correlation f \star g corresponds to the probability distribution of the expression −X + Y. In contrast, the convolution f * g corresponds to the probability distribution of the sum X + Y.

Properties

Cross-correlation and convolution are related:

f(t) \star g(t) = f^*(-t) * g(t),

so if the functions f and g are even, then

f \star g = f * g.

Also: (f \star g) \star (f \star g) = (f \star f) \star (g \star g).
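
A numerical sketch of the first property (two short illustrative real sequences): cross-correlation equals convolution with the conjugated, time-reversed first function.

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5, 0.25])

cross = np.correlate(g, f, mode='full')                # (f ⋆ g)_i = sum_j f*_j g_{i+j}
conv  = np.convolve(np.conj(f)[::-1], g, mode='full')  # f*(-t) * g(t)
print(np.allclose(cross, conv))                        # True
```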



In this chapter, the concepts introduced in Chapters 5 and 6 (Part 1) are applied to the case of a pair of time series and random processes. The first such generalization, given in Sect. 8.1, is the cross-correlation function of a two-dimensional stationary random process. This function characterizes the correlation of the two processes at different lags. The second generalization is a two-dimensional linear process formed by linear operations on two white-noise sources. Important special cases of such a process are the two-dimensional autoregressive process and the two-dimensional moving-average process.

In Sect. 8.2 we discuss the estimation of the cross-correlation function. We show that if the two series are not first filtered so as to convert them into white noise, falsely inflated cross-correlation values may arise in estimation. In Sect. 8.3 a third generalization is introduced: the cross spectrum of a stationary two-dimensional process. The cross spectrum contains two different types of information characterizing the relationship between two processes. Information of the first type is contained in the coherence spectrum, which is an effective measure of the correlation of the two processes at each frequency. Information of the second type is given by the phase spectrum, which characterizes the phase difference of the two processes at each frequency. Sect. 8.4 illustrates both types of information with simple examples.

8.1. CROSS CORRELATION FUNCTION

8.1.1. Introduction

In this chapter we deal with describing a pair of time series, or a bivariate time series. The methods used generalize those of Chapters 5 and 6, and therefore all the general statements about time series set out in Sect. 5.1 apply here as well. In Sect. 5.1, under the heading "Multivariate time series", it was briefly mentioned that the individual time series forming a multivariate series may stand in unequal relations to one another. Consider, for example, the system shown in Fig. 8.1, which has two inputs and two outputs.

Fig. 8.1. A physical system with two inputs and two outputs.

Two situations can be distinguished. In the first case, the two series are on an equal footing with respect to each other, like the two inputs in Fig. 8.1.

Fig. 8.2. In-phase and phase-shifted currents at the turbogenerator output.

In this case there may be two correlated control variables whose interaction we want to study. An example of a pair of time series in this category is shown in Fig. 8.2, where records of the in-phase and phase-shifted input currents of a turbogenerator are given.

In the second case, two time series are causally related, for example the input in Fig. 8.1 and the output that depends on it. In such a situation it is usually necessary to estimate the properties of the system in a form convenient for predicting the output from the input. An example of a pair of time series of this type is shown in Fig. 8.3, which gives the gas inlet rate and the carbon dioxide concentration at the outlet of a gas furnace.

Fig. 8.3. Signals at the input and output of a gas furnace.

It can be seen that the output lags behind the input due to the fact that it takes some time to deliver the gas to the reactor.
