DSP – Anti-Causal Systems

An anti-causal system is a slightly modified version of a non-causal system: it depends only on the future values of the input. It has no dependency on either the present or the past values.

Examples

Find out whether the following systems are anti-causal.

a) $y(t) = x(t)+x(t+1)$

The system has two sub-functions. The sub-function x(t+1) depends on a future value of the input, but the sub-function x(t) depends on the present value. As the system depends on the present value in addition to the future value, this system is not anti-causal.

b) $y(t) = x(t+3)$

If we analyze the above system, we can see that it depends only on future values of the input, i.e. if we put t = 0, it reduces to x(3), which is a future value. This system is a perfect example of an anti-causal system.
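The classification above can be checked numerically. The sketch below (not from the original text; the discrete-time analogues, signal length, and sample indices are arbitrary choices) perturbs a single input sample and observes whether a given output sample changes:

```python
import numpy as np

def depends_on(system, n, m, size=16):
    """Return True if output sample y[n] depends on input sample x[m]."""
    x = np.zeros(size)
    y_ref = system(x)[n]
    x[m] = 1.0                          # perturb one input sample
    return bool(system(x)[n] != y_ref)

# y[n] = x[n+3]: output depends only on a future sample -> anti-causal
shift3 = lambda x: np.concatenate([x[3:], np.zeros(3)])
print(depends_on(shift3, n=5, m=8))     # future sample  -> True
print(depends_on(shift3, n=5, m=5))     # present sample -> False

# y[n] = x[n] + x[n+1]: present AND future -> non-causal, not anti-causal
mixed = lambda x: x + np.concatenate([x[1:], [0.0]])
print(depends_on(mixed, n=5, m=5))      # present sample -> True
```

For `shift3`, y[5] = x[8], so only the future sample m = 8 matters; for `mixed`, the present sample also matters, which is exactly why example (a) fails to be anti-causal.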
DSP – DFT Sectional Convolution

Suppose an input sequence x(n) of long duration is to be processed with a system having a finite-duration impulse response, by convolving the two sequences. Since linear filtering performed via the DFT operates on fixed-size data blocks, the input sequence is divided into fixed-size data blocks before processing. The successive blocks are then processed one at a time and the results are combined to produce the net result.

As the convolution is performed by dividing the long input sequence into fixed-size sections, it is called sectioned convolution. The long input sequence is segmented into fixed-size blocks prior to FIR filter processing.

Two methods are used to evaluate the discrete convolution −

Overlap-save method

Overlap-add method

Overlap-Save Method

Overlap-save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal x(n) and a finite impulse response (FIR) filter h(n) of length M. Given below are the steps of the overlap-save method −

Let the length of each input data block be N = L+M-1. Therefore, the DFT and IDFT length is N. Each data block carries the last M-1 data points of the previous block followed by L new data points, forming a data sequence of length N = L+M-1.

First, an N-point DFT is computed for each data block.

The impulse response of the FIR filter is increased in length by appending L-1 zeros, and its N-point DFT is calculated and stored.

The two N-point DFTs H(k) and Xm(k) are multiplied: Y′m(k) = H(k).Xm(k), where k = 0,1,2,…,N-1

Then, IDFT[Y′m(k)] = y′m(n) = [y′m(0), y′m(1), y′m(2), …, y′m(M-1), y′m(M), …, y′m(N-1)] (here, N-1 = L+M-2)

The first M-1 points of each block are corrupted by aliasing (the N-point circular convolution wraps around) and hence are discarded.
The last L points are exactly the same as the result of linear convolution, so y′m(n) = ym(n) for n = M-1, M, …, N-1.

To avoid losing data at block boundaries, the last M-1 elements of each input data record are saved and carried forward to become the first M-1 elements of the subsequent record. In the result of each IDFT, the first M-1 points are discarded to nullify aliasing, and the remaining L points constitute the desired result, identical to that of a linear convolution.

Overlap-Add Method

Given below are the steps to find the discrete convolution using the overlap-add method −

Let the input data block size be L. Therefore, the size of the DFT and IDFT is N = L+M-1.

Each data block is appended with M-1 zeros at the end, and an N-point DFT is computed.

The two N-point DFTs are multiplied: Ym(k) = H(k).Xm(k), where k = 0,1,2,…,N-1

IDFT[Ym(k)] produces blocks of length N that are not affected by aliasing, because the size of the DFT is N = L+M-1 and the sequences were increased to N points by appending M-1 zeros to each block.

The last M-1 points of each block must be overlapped and added to the first M-1 points of the succeeding block. (Reason: each data block terminates with M-1 zeros.) Hence, this method is known as the overlap-add method. Thus, we get −

y(n) = {y1(0), y1(1), y1(2), …, y1(L-1), y1(L)+y2(0), y1(L+1)+y2(1), …, y1(N-1)+y2(M-1), y2(M), …}
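The two sectioned-convolution methods can be sketched with numpy's FFT and checked against direct linear convolution. This is a minimal illustration, not a production filter; the block length L and the test signals are arbitrary choices:

```python
import numpy as np

def overlap_save(x, h, L=8):
    M = len(h)
    N = L + M - 1
    H = np.fft.fft(h, N)                          # h zero-padded to N points
    x_pad = np.concatenate([np.zeros(M - 1), x])  # M-1 zeros start the first block
    out = []
    for s in range(0, len(x) + M - 1, L):
        blk = x_pad[s:s + N]
        blk = np.concatenate([blk, np.zeros(N - len(blk))])
        y = np.fft.ifft(H * np.fft.fft(blk)).real
        out.append(y[M - 1:])                     # discard the M-1 aliased points
    return np.concatenate(out)[:len(x) + M - 1]

def overlap_add(x, h, L=8):
    M = len(h)
    N = L + M - 1
    H = np.fft.fft(h, N)
    y = np.zeros(len(x) + M - 1)
    for s in range(0, len(x), L):
        blk = x[s:s + L]                          # fft(blk, N) appends the zeros
        yb = np.fft.ifft(H * np.fft.fft(blk, N)).real
        y[s:s + len(blk) + M - 1] += yb[:len(blk) + M - 1]  # overlap and add tails
    return y

x = np.arange(20, dtype=float)
h = np.array([1.0, 2.0, 3.0])
print(np.allclose(overlap_save(x, h), np.convolve(x, h)))  # True
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True
```

Both functions return the full linear convolution of length len(x)+M-1, matching `np.convolve`, which is the point of sectioning: block-wise DFT multiplication reproduces the linear filtering result.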
DSP – Time-Invariant Systems

For a time-invariant system, a delay applied to the input must be reflected as an identical delay in the output: delaying the input and then applying the system gives the same result as applying the system first and then delaying the output.

Examples

a) $y(T) = x(2T)$

If the input is first passed through the system and then through a time delay of t (as shown in the upper part of the figure), the output will become $x(2T-2t)$. Now, if the same input is passed through the time delay first and then through the system (as shown in the lower part of the figure), the output will become $x(2T-t)$. The two outputs differ; hence, the system is not a time-invariant system.

b) $y(T) = \sin [x(T)]$

If the signal is first passed through the system and then through the time delay process, the output will be $\sin x(T-t)$. Similarly, if the signal is passed through the time delay first and then through the system, the output will be $\sin x(T-t)$. We can clearly see that both outputs are the same. Hence, the system is time invariant.
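The two orderings described above (system-then-delay versus delay-then-system) can be compared numerically. This sketch uses discrete-time analogues of the two examples; the test signal and the delay k are arbitrary choices:

```python
import numpy as np

def delay(x, k):
    """Delay a sequence by k samples, padding the front with zeros."""
    return np.concatenate([np.zeros(k), x[:-k]])

x = np.sin(0.3 * np.arange(32))
k = 2

# a) y[n] = x[2n] (time scaling): the two orders disagree -> time-variant
scale = lambda s: s[::2]
print(np.allclose(scale(delay(x, k)), delay(scale(x), k)))   # False

# b) y[n] = sin(x[n]) (memoryless nonlinearity): orders agree -> invariant
nonlin = lambda s: np.sin(s)
print(np.allclose(nonlin(delay(x, k)), delay(nonlin(x), k)))  # True
```

For the time-scaling system, delaying first gives x[2n−k] while scaling first gives x[2n−2k], mirroring the $x(2T-t)$ versus $x(2T-2t)$ mismatch in the text.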
DSP – Non-Causal Systems

A non-causal system is just the opposite of a causal system. If a system depends upon the future values of the input at any instant of time, then it is said to be a non-causal system.

Examples

Let us take some examples and try to understand this in a better way.

a) $y(t) = x(t+1)$

We have already discussed this system in the context of causal systems. For any input, the output is a future value of the input. For instance, if we put t = 2, it reduces to x(3), which is a future value. Therefore, the system is non-causal.

b) $y(t) = x(t)+x(t+2)$

In this case, x(t) is a purely present-value-dependent function, while x(t+2) is future dependent, because for t = 3 it gives the value of x(5). Therefore, the system is non-causal.

c) $y(t) = x(t-1)+x(t)$

This system depends upon the present and past values of the given input. Whatever values we substitute, it will never show any future dependency. Clearly, it is not a non-causal system; rather, it is a causal system.
Digital Signal Processing – Dynamic Systems

If a system depends upon the past or future values of the signal at any instant of time, then it is known as a dynamic system. Unlike static systems, these are not memoryless: they must store past or future values and therefore require some memory. Let us understand this better through some examples.

Examples

Find out whether the following systems are dynamic.

a) $y(t) = x(t+1)$

In this case, if we put t = 1 in the equation, it is converted to x(2), which is a future-dependent value: we give the input at t = 1, but the output requires the value of x(2). As it depends on a future value of the signal, it is clearly a dynamic system.

b) $y(t) = Real[x(t)]$

$$= \frac{[x(t)+x(t)^*]}{2}$$

In this case, whatever value we put in, the output is the real part of the signal at that same time. It has no dependency on future or past values. Therefore, it is not a dynamic system; rather, it is a static system.

c) $y(t) = Even[x(t)]$

$$= \frac{[x(t)+x(-t)]}{2}$$

Here, if we substitute t = 1, one term gives x(1) and the other gives x(-1), which is a past value. Similarly, if we put t = -1, one term gives x(-1) and the other gives x(1), which is a future value. Therefore, this is clearly a case of a dynamic system.

d) $y(t) = \cos [x(t)]$

In this case, the system depends only on the present value of the input: the cosine is applied to x(t) alone, and its output lies within the range -1 to +1. Therefore, whatever value we put in, the result follows from that value alone, and it is a static system.

From the above examples, we can draw the following conclusions −

All time-shifted signals are dynamic signals.

In the case of time scaling too, the signals are dynamic.

Signals involving integration are dynamic signals.
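Case (c) can be made concrete: computing Even[x] at time t needs both x(t) and x(-t), so the operation requires memory. In this sketch (signal and time grid are arbitrary choices), reversing a sequence defined on a symmetric grid plays the role of x(-t):

```python
import numpy as np

t = np.arange(-4, 5)            # symmetric time grid
x = t**3 + t**2                 # arbitrary test signal: odd part + even part

# Even[x](t) = [x(t) + x(-t)] / 2; x[::-1] realises x(-t) on this grid
even = (x + x[::-1]) / 2
print(np.allclose(even, t**2))  # True: the even component t^2 is recovered

# By contrast, cos(x(t)) needs only the present sample -> static
y = np.cos(x)
```

The even-part computation at any t touches a sample at -t (a past or future instant), which is exactly why the text classifies it as dynamic.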
DSP – DFT Circular Convolution

Let us take two finite-duration sequences x1(n) and x2(n), each of integer length N. Their DFTs are X1(K) and X2(K) respectively, as shown below −

$$X_1(K) = \sum_{n = 0}^{N-1}x_1(n)e^{-\frac{j2\pi kn}{N}}\quad k = 0,1,2...N-1$$

$$X_2(K) = \sum_{n = 0}^{N-1}x_2(n)e^{-\frac{j2\pi kn}{N}}\quad k = 0,1,2...N-1$$

Now, we will try to find the DFT of another sequence x3(n), which is given as X3(K)

$X_3(K) = X_1(K)\times X_2(K)$

By taking the IDFT of the above, we get

$x_3(n) = \frac{1}{N}\displaystyle\sum\limits_{k = 0}^{N-1}X_3(K)e^{\frac{j2\pi kn}{N}}$

After solving the above equation, finally, we get

$x_3(n) = \displaystyle\sum\limits_{m = 0}^{N-1}x_1(m)x_2[((n-m))_N]\quad n = 0,1,2...N-1$

Comparison of linear and circular convolution −

Shifting − linear shifting for linear convolution; circular shifting for circular convolution.

Samples in the convolution result − $N_1+N_2-1$ for linear convolution; $Max(N_1,N_2)$ for circular convolution.

Finding the response of a filter − possible with linear convolution; possible with circular convolution only with zero padding.

Methods of Circular Convolution

Generally, two methods are adopted to perform circular convolution −

Concentric circle method

Matrix multiplication method

Concentric Circle Method

Let $x_1(n)$ and $x_2(n)$ be two given sequences. The steps followed for circular convolution of $x_1(n)$ and $x_2(n)$ are −

Take two concentric circles. Plot the N samples of $x_1(n)$ on the circumference of the outer circle (maintaining equal distance between successive points) in the anti-clockwise direction.

To plot $x_2(n)$, plot its N samples in the clockwise direction on the inner circle, with the starting sample placed at the same point as the 0th sample of $x_1(n)$.

Multiply corresponding samples on the two circles and add them to get one output sample.

Rotate the inner circle anti-clockwise by one sample at a time and repeat to obtain the remaining output samples.

Matrix Multiplication Method

The matrix method represents the two given sequences $x_1(n)$ and $x_2(n)$ in matrix form. One of the given sequences is repeated via a circular shift of one sample at a time to form an N × N matrix.
The other sequence is represented as a column matrix. The multiplication of the two matrices gives the result of the circular convolution.
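All three routes to circular convolution described above can be compared on a small example. The sequences below are arbitrary length-4 choices; the direct sum is the concentric-circle method written as a formula:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 1.0, 0.0, 0.0])
N = len(x1)

# Direct sum: x3(n) = sum_m x1(m) * x2((n-m) mod N)
direct = np.array([sum(x1[m] * x2[(n - m) % N] for m in range(N))
                   for n in range(N)])

# Matrix method: circularly shift x2 one sample at a time to build an
# N x N matrix, then multiply by x1 as a column vector.
C = np.column_stack([np.roll(x2, m) for m in range(N)])
matrix = C @ x1

# DFT method: multiply the two DFTs and take the IDFT.
dft = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real

print(direct)                                                 # [5. 3. 5. 7.]
print(np.allclose(direct, matrix), np.allclose(direct, dft))  # True True
```

Note the result has Max(N1, N2) = 4 samples, whereas the linear convolution of these sequences would have N1+N2−1 = 7, matching the comparison table above.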
DSP – Z-Transform Solved Examples

Example 1

Find the response of the system $s(n+2)-3s(n+1)+2s(n) = \delta (n)$, when all the initial conditions are zero.

Solution − Taking the Z-transform on both sides of the above equation, we get

$$S(z)Z^2-3S(z)Z^1+2S(z) = 1$$

$\Rightarrow S(z)\lbrace Z^2-3Z+2\rbrace = 1$

$\Rightarrow S(z) = \frac{1}{\lbrace z^2-3z+2\rbrace}=\frac{1}{(z-2)(z-1)} = \frac{\alpha _1}{z-2}+\frac{\alpha _2}{z-1}$

$\Rightarrow S(z) = \frac{1}{z-2}-\frac{1}{z-1}$

Taking the inverse Z-transform of the above equation, we get

$s(n) = Z^{-1}[\frac{1}{Z-2}]-Z^{-1}[\frac{1}{Z-1}]$

$= (2^{n-1}-1)u(n-1)$

Example 2

Find the system function H(z) and the unit sample response h(n) of the system whose difference equation is described as

$y(n) = \frac{1}{2}y(n-1)+2x(n)$

where y(n) and x(n) are the output and input of the system, respectively.

Solution − Taking the Z-transform of the above difference equation, we get

$Y(Z) = \frac{1}{2}Z^{-1}Y(Z)+2X(Z)$

$\Rightarrow Y(Z)[1-\frac{1}{2}Z^{-1}] = 2X(Z)$

$\Rightarrow H(Z) = \frac{Y(Z)}{X(Z)} = \frac{2}{1-\frac{1}{2}Z^{-1}}$

This system has a pole at $Z = \frac{1}{2}$ and a zero at $Z = 0$.

Hence, taking the inverse Z-transform of the above, we get

$h(n) = 2(\frac{1}{2})^nU(n)$

Example 3

Determine Y(z), n ≥ 0, in the following case −

$y(n)+\frac{1}{2}y(n-1)-\frac{1}{4}y(n-2) = 0\quad given\quad y(-1) = y(-2) = 1$

Solution − Applying the Z-transform to the above equation, we get

$Y(Z)+\frac{1}{2}[Z^{-1}Y(Z)+y(-1)]-\frac{1}{4}[Z^{-2}Y(Z)+Z^{-1}y(-1)+y(-2)] = 0$

$\Rightarrow Y(Z)+\frac{1}{2Z}Y(Z)+\frac{1}{2}-\frac{1}{4Z^2}Y(Z)-\frac{1}{4Z}-\frac{1}{4} = 0$

$\Rightarrow Y(Z)[1+\frac{1}{2Z}-\frac{1}{4Z^2}] = \frac{1}{4Z}-\frac{1}{4}$

$\Rightarrow Y(Z)[\frac{4Z^2+2Z-1}{4Z^2}] = \frac{1-Z}{4Z}$

$\Rightarrow Y(Z) = \frac{Z(1-Z)}{4Z^2+2Z-1}$
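The closed form found in Example 2 can be verified numerically by driving the difference equation with a unit sample and comparing against $h(n) = 2(\frac{1}{2})^n u(n)$. The number of samples simulated is an arbitrary choice:

```python
import numpy as np

n_max = 10
x = np.zeros(n_max)
x[0] = 1.0                            # unit sample delta(n)

# Simulate y(n) = (1/2) y(n-1) + 2 x(n) with zero initial conditions
y = np.zeros(n_max)
prev = 0.0
for n in range(n_max):
    y[n] = 0.5 * prev + 2.0 * x[n]
    prev = y[n]

h = 2.0 * 0.5 ** np.arange(n_max)     # closed form from the Z-transform
print(np.allclose(y, h))              # True
```

The first samples are y(0) = 2 and y(1) = 1, exactly the values of 2(1/2)^n at n = 0 and n = 1.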
Digital Signal Processing – Signals-Definition

Definition

Anything that carries information can be called a signal. It can also be defined as a physical quantity that varies with time, temperature, pressure, or any other independent variable, such as a speech signal or a video signal. The process of operation in which the characteristics of a signal (amplitude, shape, phase, frequency, etc.) undergo a change is known as signal processing.

Note − Any unwanted signal interfering with the main signal is termed noise. So, noise is also a signal, but an unwanted one.

According to their representation and processing, signals can be classified into various categories, details of which are discussed below.

Continuous-Time Signals

Continuous-time signals are defined along a continuum of time and are thus represented by a continuous independent variable. Continuous-time signals are often referred to as analog signals. This type of signal shows continuity both in amplitude and in time, and has a value at every instant of time. Sine and cosine functions are the best examples of continuous-time signals.

The signal shown above is an example of a continuous-time signal, because we can obtain the value of the signal at each instant of time.

Discrete-Time Signals

Signals that are defined only at discrete instants of time are known as discrete signals. The independent variable takes only distinct values, so such signals are represented as sequences of numbers. Speech and video signals can be represented in both continuous- and discrete-time formats; under certain circumstances, the two representations carry the same information.

Amplitude can also show discrete characteristics. A perfect example of this is a digital signal, whose amplitude and time are both discrete.

The figure above depicts a discrete signal’s discrete amplitude characteristic over a period of time. Mathematically, these types of signals can be formulated as

$$x = \left \{ x\left [ n \right ] \right \},\quad -\infty < n < \infty$$

where n is an integer.
It is a sequence of numbers x, where the nth number in the sequence is represented as x[n].
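A discrete-time sequence x[n] is commonly obtained by sampling a continuous-time signal at a fixed interval T. A tiny sketch (the tone frequency, sampling interval, and number of samples are arbitrary choices for illustration):

```python
import numpy as np

f, T = 5.0, 0.01                     # 5 Hz tone, sampled every 10 ms
n = np.arange(8)                     # integer sample index
x = np.cos(2 * np.pi * f * n * T)    # x[n] = x_a(nT): one number per index
print(x.round(3))
```

The continuous cosine has a value at every instant; the sequence x keeps only the values at the instants t = nT, which is what makes it a discrete-time signal.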
DSP – Operations on Signals Shifting

Shifting means movement of the signal, either in the time domain (along the X-axis) or in the amplitude domain (along the Y-axis). Accordingly, we can classify shifting into two categories, named time shifting and amplitude shifting; these are discussed below.

Time Shifting

Time shifting means shifting a signal in the time domain. Mathematically, it can be written as

$$x(t) \rightarrow y(t) = x(t+k)$$

The value k may be positive or negative. According to the sign of k, we have two types of shifting, named right shifting and left shifting.

Case 1 (k > 0)

When k is greater than zero, the shifting of the signal takes place towards the left in the time domain. Therefore, this type of shifting is known as left shifting of the signal.

Example

Case 2 (k < 0)

When k is less than zero, the shifting of the signal takes place towards the right in the time domain. Therefore, this type of shifting is known as right shifting.

Example

The figure given below shows right shifting of a signal by 2.

Amplitude Shifting

Amplitude shifting means shifting a signal in the amplitude domain (along the Y-axis). Mathematically, it can be represented as −

$$x(t) \rightarrow y(t) = x(t)+K$$

This K value may be positive or negative. Accordingly, we have two types of amplitude shifting, which are discussed below.

Case 1 (K > 0)

When K is greater than zero, the signal shifts upward. Therefore, this type of shifting is known as upward shifting.

Example

Let us consider a signal x(t), which is given as −

$$x(t) = \begin{cases}0, & t < 0\\1, & 0\leq t\leq 2\\ 0, & t > 2\end{cases}$$

Let us take K = +1, so the new signal can be written as −

$y(t) = x(t)+1$

So, y(t) can finally be written as −

$$y(t) = \begin{cases}1, & t < 0\\2, & 0\leq t\leq 2\\ 1, & t > 2\end{cases}$$

Case 2 (K < 0)

When K is less than zero, the signal shifts downward. Therefore, it is called downward shifting of the signal.
Example

Let us consider a signal x(t), which is given as −

$$x(t) = \begin{cases}0, & t < 0\\1, & 0\leq t\leq 2\\ 0, & t > 2\end{cases}$$

Let us take K = -1, so the new signal can be written as −

$y(t) = x(t)-1$

So, y(t) can finally be written as −

$$y(t) = \begin{cases}-1, & t < 0\\0, & 0\leq t\leq 2\\ -1, & t > 2\end{cases}$$
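The downward-shift example can be reproduced numerically with a sampled version of the rectangular pulse. The sampling grid below is an arbitrary choice:

```python
import numpy as np

t = np.linspace(-1, 3, 9)                      # samples every 0.5 s
x = np.where((t >= 0) & (t <= 2), 1.0, 0.0)    # x(t): 1 on [0, 2], else 0
y = x - 1                                      # K = -1 -> downward shifting
print(y)   # 0 becomes -1 outside [0, 2]; 1 becomes 0 inside it
```

Every sample moves down by the same amount, which is why amplitude shifting changes the signal's level but not its shape or timing.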
DSP – Operations on Signals Reversal

Whenever the time or the amplitude of a signal is multiplied by -1, the signal gets reversed: it produces its mirror image about the Y-axis or the X-axis, respectively. This is known as reversal of the signal. Reversal can be classified into two types, based on whether the time or the amplitude of the signal is multiplied by -1.

Time Reversal

Whenever a signal’s time is multiplied by -1, it is known as time reversal of the signal. In this case, the signal produces its mirror image about the Y-axis. Mathematically, this can be written as

$$x(t) \rightarrow y(t) = x(-t)$$

This can be best understood by the following example. In the above example, we can clearly see that the signal has been reversed about the Y-axis. So, it is also a kind of time scaling, but one where the scaling quantity is always -1.

Amplitude Reversal

Whenever the amplitude of a signal is multiplied by -1, it is known as amplitude reversal. In this case, the signal produces its mirror image about the X-axis. Mathematically, this can be written as

$$x(t) \rightarrow y(t) = -x(t)$$

Consider the following example. The amplitude reversal can be seen clearly.
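Both kinds of reversal can be demonstrated on a small sampled signal. This sketch uses a symmetric time grid (an arbitrary choice) so that reversing the array of samples realises x(-t):

```python
import numpy as np

t = np.arange(-3, 4)                  # symmetric grid: -3 ... 3
x = t + 2.0 * (t >= 0)                # arbitrary asymmetric test signal

time_rev = x[::-1]                    # x(-t): mirror image about the Y-axis
amp_rev = -x                          # -x(t): mirror image about the X-axis

# x[::-1] agrees with evaluating the defining formula at -t
print(np.allclose(time_rev, (-t) + 2.0 * (-t >= 0)))  # True
```

Time reversal re-orders the samples without changing their values; amplitude reversal keeps the sample positions and negates the values, which is exactly the Y-axis versus X-axis mirroring described above.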