Signal averaging


Signal averaging is a signal processing technique applied in the time domain, intended to increase the strength of a signal relative to noise that is obscuring it. Averaging a set of replicate measurements increases the signal-to-noise ratio (SNR), ideally in proportion to the square root of the number of measurements.

Deriving the SNR for averaged signals

Assume that:

  • Signal $s(t)$ is uncorrelated with the noise, and the noise $z(t)$ is uncorrelated with itself across time: $E[z(t)\,z(t-\tau)] = 0$ for all $\tau \neq 0$, and $E[z(t)\,s(t-\tau)] = 0$ for all $t, \tau$.
  • Signal power $P_{\mathrm{signal}} = E[s^2]$ is constant across the replicate measurements.
  • Noise is random, with zero mean and constant variance across the replicate measurements: $E[z] = 0 = \mu$ and $0 < E[(z-\mu)^2] = E[z^2] = P_{\mathrm{noise}} = \sigma^2$.
  • The signal-to-noise ratio is (canonically) defined as $\mathrm{SNR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}} = \frac{E[s^2]}{\sigma^2}$.
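As a quick numerical check of these definitions, the following sketch (an illustration assumed here, not part of the original derivation: a unit-amplitude sinusoid in Gaussian noise of standard deviation 0.3) estimates the SNR from samples:

% Hypothetical example: estimate SNR = E[s^2]/sigma^2 from sampled data
T = 1e5;                          % number of samples
s = sin(linspace(0, 20*pi, T))';  % signal; E[s^2] is approx. 0.5
sigma = 0.3;                      % noise standard deviation (assumed)
z = sigma*randn(T, 1);            % zero-mean noise with variance sigma^2
snr_est = mean(s.^2)/var(z)       % approx. 0.5/0.09, i.e. about 5.6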

Noise power for sampled signals

Assuming we sample the noise, we get a per-sample variance of

$\mathrm{Var}(z) = E[z^2] = \sigma^2$.

Averaging $n$ realizations of the noise leads to the following variance:

$\mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n} z_i\right) = \frac{1}{n^2}\,\mathrm{Var}\left(\sum_{i=1}^{n} z_i\right) = \frac{1}{n^2}\sum_{i=1}^{n}\mathrm{Var}(z_i)$,

where the second equality holds because the noise samples are mutually uncorrelated.

Since the noise variance is a constant $\sigma^2$:

$\mathrm{Var}(N_{\text{avg}}) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n} z_i\right) = \frac{1}{n^2}\,n\sigma^2 = \frac{\sigma^2}{n}$,

demonstrating that averaging $n$ realizations of the same, uncorrelated noise reduces the noise power by a factor of $n$, and the noise level by a factor of $\sqrt{n}$.
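This $1/n$ reduction is easy to verify numerically; a minimal sketch, assuming $n = 64$ replicates of unit-variance Gaussian noise:

% Averaging n uncorrelated noise traces reduces the variance by 1/n
n = 64;               % number of replicates (assumed)
T = 1e5;              % samples per trace
Z = randn(T, n);      % n independent traces of unit-variance noise
z_avg = mean(Z, 2);   % average across the replicates
var(z_avg)            % approx. 1/n = 0.0156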

Signal power for sampled signals

Considering $n$ vectors $V_i,\ i \in \{1,\ldots,n\}$, of signal samples of length $T$:

$V_i = \left[s_{i,1},\ldots,s_{i,T}\right] \in \mathbb{K}^T, \quad s_{i,k} \in \mathbb{K}$,

the power $P_i$ of such a vector is simply

$P_i = \sum_{k=1}^{T} s_{i,k}^2 = \left\|V_i\right\|^2$.

Again, averaging the $n$ vectors $V_i,\ i = 1,\ldots,n$, componentwise yields the averaged vector

$V_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} V_i, \qquad \left(V_{\text{avg}}\right)_k = \frac{1}{n}\sum_{i=1}^{n} s_{i,k}$.

In the case where $V_m \equiv V_l\ \forall\, m, l \in \{1,\ldots,n\}$, the averaged vector equals each $V_i$, so its power reaches the maximum

$P_{\text{avg, identical signals}} = \left\|V_{\text{avg}}\right\|^2 = P_i$.

In this case, the ratio of signal to noise also reaches its maximum,

$\mathrm{SNR}_{\text{avg, identical signals}} = \frac{P_{\text{avg, identical signals}}}{\mathrm{Var}(N_{\text{avg}})} = \frac{P_i}{\sigma^2/n} = n \cdot \mathrm{SNR}$.

This is the oversampling case: the signal observations are strongly correlated across replicates, while the noise samples remain uncorrelated.
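The $n$-fold SNR gain can be checked with a short simulation; a sketch assuming an identical sinusoidal signal in each of $n = 100$ noisy replicates:

% Hypothetical check of the n-fold SNR gain for identical signals
n = 100; T = 1000;
x = sin(linspace(0, 4*pi, T))';          % identical signal in every trial
trials = repmat(x, 1, n) + randn(T, n);  % signal plus unit-variance noise
snr_single = mean(x.^2)/1;               % per-trial SNR (sigma^2 = 1)
avg = mean(trials, 2);                   % averaged trace
snr_avg = mean(x.^2)/var(avg - x);       % residual noise variance is approx. 1/n
snr_avg/snr_single                       % approx. n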

Time-locked signals

Averaging is applied to enhance a time-locked signal component in noisy measurements; time-locking implies that the signal is observation-periodic, so the identical-signal maximum case above applies.

Averaging odd and even trials

A specific way of obtaining replicates is to accumulate the odd- and even-numbered trials in separate buffers. This has the advantage of allowing the results from interleaved trials to be compared. The mean of the odd and even averages gives the completed averaged result, while half the difference between the odd and even averages constitutes an estimate of the noise.

Algorithmic implementation

The following is a MATLAB simulation of the averaging process:

N = 1000;                      % signal length
even = zeros(N,1);             % even-trial buffer
odd = zeros(N,1);              % odd-trial buffer
actual_noise = zeros(N,1);     % keep track of the accumulated noise
x = sin(linspace(0,4*pi,N))';  % tracked signal
M = 256;                       % number of replicates
for ii = 1:M
    n = randn(N,1);            % random noise, zero mean, unit variance
    actual_noise = actual_noise + n;

    if mod(ii,2)               % odd-numbered trial
        odd = odd + n + x;
    else                       % even-numbered trial
        even = even + n + x;
    end
end

even_avg = even/(M/2);         % even buffer average
odd_avg = odd/(M/2);           % odd buffer average
act_avg = actual_noise/M;      % actual averaged noise

db(rms(act_avg))               % actual noise level, in dB
db(rms((even_avg-odd_avg)/2))  % noise estimate from the odd/even difference, in dB
plot((odd_avg+even_avg)/2);    % completed averaged result
hold on;
plot((even_avg-odd_avg)/2)     % noise estimate

The averaging process above, and signal averaging in general, yields an estimate of the signal: compared with the raw trace, the averaged noise component is reduced with every trial added to the average. When averaging real signals, the underlying component may not be as clear, so averages are often repeated and two or three replicates compared for consistent components; it is unlikely that two or more consistent results would be produced by chance alone.

Correlated noise

Signal averaging relies heavily on the assumption that the noise component of a signal is random, has zero mean, and is uncorrelated with the signal. However, there are instances in which the noise is not uncorrelated. A common example of correlated noise is quantization noise, i.e. the error introduced when converting an analog signal to a digital one.
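To illustrate why averaging fails in this case, consider a sketch (with an assumed uniform quantizer, not from any particular converter): when the same signal is presented on every trial, the quantization error is identical in each replicate, so it survives averaging unchanged:

% Quantization noise repeats exactly across replicates of the same signal,
% so averaging does not reduce it (assumed 1/8-step uniform quantizer)
T = 1000; n = 256;
x = sin(linspace(0,4*pi,T))';       % "analog" signal
q = @(v) round(v*8)/8;              % uniform quantizer, step 1/8
err_single = q(x) - x;              % quantization error of a single trial
avg = mean(repmat(q(x), 1, n), 2);  % average of n identical quantized trials
rms(avg - x)/rms(err_single)        % ratio is exactly 1: no improvement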
