Digital Signal Processing – Linear Systems

A linear system obeys the principle of superposition, which is a necessary and sufficient condition for linearity. Superposition combines two laws:

Law of additivity
Law of homogeneity

Both the law of homogeneity and the law of additivity are shown in the figures above. In addition, there are some practical checks for whether a system is linear:

The output should be zero for zero input.
There should be no non-linear operator present in the system.

Examples of non-linear operators:

(a) Trigonometric operators – sin, cos, tan, cot, sec, cosec, etc.
(b) Exponential, logarithmic, modulus, square, cube, etc.
(c) Sa(·), sinc(·), sgn(·), etc.

Neither the input x nor the output y should pass through any of these non-linear operators.

Examples

Let us find out whether the following systems are linear.

a) $y(t) = x(t)+3$

This system is not linear because it violates the first condition: if we set the input to zero, $x(t) = 0$, the output is $y(t) = 3$, which is not zero.

b) $y(t) = \sin t \; x(t)$

In this system, if we give zero input, the output is zero, so the first condition is clearly satisfied. The factor $\sin t$ is only a time-varying coefficient; no non-linear operator acts on $x(t)$ itself, so the second condition is also satisfied. Therefore, the system is linear.

c) $y(t) = \sin(x(t))$

Here the first condition is satisfied, because putting $x(t) = 0$ gives the output $\sin(0) = 0$. However, the second condition is not satisfied: a non-linear operator acts directly on $x(t)$. Hence, the system is not linear.
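The superposition test above can be sketched numerically: for a candidate system we compare the response to a weighted sum of inputs against the weighted sum of the individual responses. This is an illustrative check, not a standard library routine; the function name `is_linear` is an assumption of this sketch.

```python
import numpy as np

def is_linear(system, t, trials=5, tol=1e-9):
    """Numerically test additivity and homogeneity on random inputs."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x1, x2 = rng.normal(size=t.size), rng.normal(size=t.size)
        a, b = rng.normal(), rng.normal()
        lhs = system(a * x1 + b * x2, t)              # response to combined input
        rhs = a * system(x1, t) + b * system(x2, t)   # combination of responses
        if not np.allclose(lhs, rhs, atol=tol):
            return False
    return True

t = np.linspace(0, 1, 100)
assert not is_linear(lambda x, t: x + 3, t)        # y = x(t) + 3  -> not linear
assert is_linear(lambda x, t: np.sin(t) * x, t)    # y = sin(t) x(t) -> linear
assert not is_linear(lambda x, t: np.sin(x), t)    # y = sin(x(t)) -> not linear
```

A random-input test like this can only ever disprove linearity, but a handful of trials is usually enough to expose a non-linear operator.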
DSP – Z-Transform Existence

A system described by a system function can be stable only if all of its poles lie inside the unit circle. First, we check whether the system is causal. If the system is causal, we then test its BIBO stability, where BIBO stability refers to bounded output for every bounded input. This can be written as:

$|X(Z)| < \infty$

$\Rightarrow |\sum x(n)Z^{-n}| < \infty$

$\Rightarrow \sum |x(n)Z^{-n}| < \infty$

With $Z = re^{j\omega}$,

$\sum |x(n)(re^{j\omega})^{-n}| < \infty$

$\Rightarrow \sum |x(n)r^{-n}|\,|e^{-j\omega n}| < \infty$

$\Rightarrow \sum_{n=-\infty}^{\infty} |x(n)r^{-n}| < \infty$

The above equation is the condition for existence of the Z-transform. The condition for existence of the DTFT is

$$\sum_{n=-\infty}^{\infty} |x(n)| < \infty$$

Example 1

Let us find the Z-transform of the signal $x(n) = -(-0.5)^{-n}u(-n)+3^nu(n)$.

Solution − Since $(-0.5)^{-n} = (-2)^n$, the first term is $-(-2)^n u(-n)$, whose ROC is left-sided: $|Z| < 2$. For $3^nu(n)$ the ROC is right-sided: $|Z| > 3$. The two regions do not overlap, so the Z-transform of this signal does not exist.

Example 2

Let us find the Z-transform of the signal $x(n) = -2^nu(-n-1)+(0.5)^nu(n)$.

Solution − For $-2^nu(-n-1)$ the ROC is left-sided: $|Z| < 2$. For $(0.5)^nu(n)$ the ROC is right-sided: $|Z| > 0.5$. The common ROC is $0.5 < |Z| < 2$, so the Z-transform exists and can be written as

$X(Z) = \frac{1}{1-2Z^{-1}}+\frac{1}{1-0.5Z^{-1}}$

Example 3

Let us find the Z-transform of the signal $x(n) = 2^{r(n)}$, where r(n) is the ramp signal. The signal can be written as

$x(n) = 2^{nu(n)} = \begin{cases}1, & n<0 \quad (u(n)=0)\\2^n, & n\geq 0 \quad (u(n)=1)\end{cases}$

$= u(-n-1)+2^nu(n)$

For $u(-n-1)$ the ROC is $|Z| < 1$, and for $2^nu(n)$ the ROC is $|Z| > 2$. The regions do not overlap, so the Z-transform of this signal does not exist.

Z-Transform for Causal System

A causal system is defined by $h(n) = 0, n<0$.
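Example 2 can be checked numerically: at a point inside the common ROC $0.5 < |z| < 2$, a truncated version of the two-sided series $\sum x(n)z^{-n}$ should agree with the closed form. A minimal sketch:

```python
import numpy as np

def x(n):
    """x(n) = -2^n u(-n-1) + (0.5)^n u(n)."""
    return np.where(n <= -1, -(2.0 ** n), 0.0) + np.where(n >= 0, 0.5 ** n, 0.0)

def X_closed(z):
    """Closed form 1/(1 - 2 z^-1) + 1/(1 - 0.5 z^-1)."""
    return 1 / (1 - 2 / z) + 1 / (1 - 0.5 / z)

z = 1.2                       # a point inside the ROC 0.5 < |z| < 2
n = np.arange(-200, 201)      # truncated two-sided index range
series = np.sum(x(n) * z ** (-n.astype(float)))
assert abs(series - X_closed(z)) < 1e-6
```

Outside the ROC the truncated sum would simply grow with the number of terms instead of settling on a value, which is the numerical face of non-convergence.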
For a causal system, the ROC lies outside a circle in the Z-plane.

$H(Z) = \displaystyle\sum\limits_{n = 0}^{\infty}h(n)Z^{-n}$

Expanding the above equation,

$H(Z) = h(0)+h(1)Z^{-1}+h(2)Z^{-2}+\dots = N(Z)/D(Z)$

For causal systems, the expansion of the transfer function contains no positive powers of Z, and the order of the numerator cannot exceed the order of the denominator. This can be written as

$\lim_{z \rightarrow \infty}H(Z) = h(0) = 0 \quad \text{or finite}$

For stability of a causal system, the poles of the transfer function should lie inside the unit circle in the Z-plane.

Z-Transform for Anti-Causal System

An anti-causal system is defined by $h(n) = 0, n\geq 0$. For an anti-causal system, the ROC lies inside a circle in the Z-plane, and for stability its poles should lie outside the unit circle.
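The stability test for a causal system — all poles strictly inside the unit circle — can be sketched with `numpy.roots` applied to the denominator polynomial of $H(Z)$. The denominator coefficients below (in descending powers of z) are made-up examples:

```python
import numpy as np

def causal_stable(denominator):
    """True if all poles of H(z) = N(z)/D(z) lie strictly inside |z| = 1."""
    poles = np.roots(denominator)
    return bool(np.all(np.abs(poles) < 1.0))

# D(z) = z^2 - 0.5 z + 0.06  ->  poles at 0.2 and 0.3 (inside the unit circle)
assert causal_stable([1.0, -0.5, 0.06])

# D(z) = z - 2  ->  pole at 2 (outside the unit circle)
assert not causal_stable([1.0, -2.0])
```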
DSP – Operations on Signals Scaling

Scaling of a signal means multiplying the time axis or the amplitude of the signal by a constant.

Time Scaling

If a constant is multiplied with the time variable, the operation is known as time scaling. Mathematically,

$x(t) \rightarrow y(t) = x(\alpha t)$ or $x(\frac{t}{\alpha})$; where α ≠ 0

The amplitude (y-axis) stays the same, while the time extent (x-axis) shrinks or stretches according to the magnitude of the constant. Accordingly, time scaling is divided into the two categories discussed below.

Time Compression

Whenever α is greater than one in $x(\alpha t)$, the time extent of the signal is divided by α, while the amplitude remains the same. This is known as time compression.

Example

Let us consider a signal x(t), shown in the figure below. Taking α = 2, y(t) becomes x(2t), as illustrated in the given figure. Clearly, the amplitude on the y-axis remains the same, but the time extent on the x-axis reduces from 4 to 2. Therefore, it is a case of time compression.

Time Expansion

When time is divided by a constant α > 1, i.e. $x(t/\alpha)$, the time extent of the signal is multiplied α times, while the amplitude stays as it is. Therefore, this is called time expansion.

Example

Let us consider a square signal x(t) of magnitude 1. When we time-scale it by a constant 3, so that $x(t) \rightarrow y(t) = x(\frac{t}{3})$, the signal's duration is stretched by 3 times, as shown in the figure below.

Amplitude Scaling

Multiplying the amplitude of a signal by a constant causes amplitude scaling. Depending on the magnitude of the constant, it may be either amplification or attenuation. Let us consider a square wave signal x(t) = Π(t/4), and define another function y(t) = 2Π(t/4). In this case, the value on the y-axis is doubled, while the time axis stays as it is.
This is illustrated in the figure given below. Consider another square wave function defined as z(t) = 0.5Π(t/4). Here, the amplitude of z(t) is half that of x(t); the time axis remains the same while the amplitude axis is halved. This is illustrated by the figure given below.
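The three operations can be sketched on a sampled rectangular pulse; the helper `pulse` below is defined only for this illustration:

```python
import numpy as np

def pulse(t, width=4.0):
    """Rectangular pulse: 1 for |t| < width/2, else 0 (a sampled stand-in for the figures)."""
    return np.where(np.abs(t) < width / 2, 1.0, 0.0)

t = np.linspace(-10, 10, 2001)
x = pulse(t)                 # x(t) with time extent 4
compressed = pulse(2 * t)    # x(2t): extent shrinks to 2, amplitude unchanged
expanded = pulse(t / 3)      # x(t/3): extent stretches to 12
amplified = 2 * pulse(t)     # 2 x(t): amplitude doubles, extent unchanged

dt = t[1] - t[0]
assert abs(np.sum(compressed) * dt - 2.0) < 0.1   # duration halved
assert abs(np.sum(expanded) * dt - 12.0) < 0.1    # duration tripled
assert amplified.max() == 2.0                     # amplitude doubled
```

Summing samples times the step size approximates the pulse area, which here tracks the duration directly because the amplitude is 1.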
DSP – DFT Time Frequency Transform

We know that when $\omega = 2\pi k/N$ and $N\rightarrow \infty$, $\omega$ becomes a continuous variable and the limits of summation become $-\infty$ to $+\infty$. Therefore,

$$NC_k = X(\frac{2\pi}{N}k) = X(e^{j\omega}) = \displaystyle\sum\limits_{n = -\infty}^\infty x(n)e^{\frac{-j2\pi nk}{N}} = \displaystyle\sum\limits_{n = -\infty}^\infty x(n)e^{-j\omega n}$$

Discrete Time Fourier Transform (DTFT)

We know that

$X(e^{j\omega}) = \sum_{n = -\infty}^\infty x(n)e^{-j\omega n}$ …eq(1)

where $X(e^{j\omega})$ is continuous and periodic in ω with period 2π. Now, from the Fourier series,

$x_p(n) = \sum_{k = 0}^{N-1}NC_ke^{j2\pi nk/N}$

$x_p(n) = \frac{1}{2\pi}\sum_{k=0}^{N-1}NC_ke^{j2\pi nk/N}\times \frac{2\pi}{N}$

As $N\rightarrow\infty$, ω becomes continuous and $\frac{2\pi}{N}\rightarrow d\omega$, for the reasons cited above, so the sum becomes an integral over one period:

$x(n) = \frac{1}{2\pi}\int_{0}^{2\pi}X(e^{j\omega})e^{j\omega n}d\omega$ …eq(2)

Inverse Discrete Time Fourier Transform

Symbolically,

$x(n)\Longleftrightarrow X(e^{j\omega})$ (the Fourier transform pair)

The necessary and sufficient condition for existence of the Discrete Time Fourier Transform of a non-periodic sequence x(n) is that it be absolutely summable,
i.e. $\sum_{n = -\infty}^\infty|x(n)|<\infty$

Properties of DTFT

Linearity − $a_1x_1(n)+a_2x_2(n)\Leftrightarrow a_1X_1(e^{j\omega})+a_2X_2(e^{j\omega})$

Time shifting − $x(n-k)\Leftrightarrow e^{-j\omega k}X(e^{j\omega})$

Time reversal − $x(-n)\Leftrightarrow X(e^{-j\omega})$

Frequency shifting − $e^{j\omega_0 n}x(n)\Leftrightarrow X(e^{j(\omega -\omega_0)})$

Differentiation in frequency domain − $nx(n)\Leftrightarrow j\frac{d}{d\omega}X(e^{j\omega})$

Convolution − $x_1(n)*x_2(n)\Leftrightarrow X_1(e^{j\omega})\times X_2(e^{j\omega})$

Multiplication − $x_1(n)\times x_2(n)\Leftrightarrow X_1(e^{j\omega})*X_2(e^{j\omega})$

Correlation − $y_{x_1 x_2}(l)\Leftrightarrow X_1(e^{j\omega})\times X_2(e^{-j\omega})$

Modulation theorem − $x(n)\cos \omega_0 n \Leftrightarrow \frac{1}{2}[X(e^{j(\omega -\omega_0)})+X(e^{j(\omega +\omega_0)})]$

Symmetry − $x^*(n)\Leftrightarrow X^*(e^{-j\omega})$; $x^*(-n)\Leftrightarrow X^*(e^{j\omega})$; $Real[x(n)]\Leftrightarrow X_{even}(e^{j\omega})$; $Imag[x(n)]\Leftrightarrow X_{odd}(e^{j\omega})$; $x_{even}(n)\Leftrightarrow Real[X(e^{j\omega})]$; $x_{odd}(n)\Leftrightarrow Imag[X(e^{j\omega})]$

Parseval's theorem − $\sum_{-\infty}^\infty|x_1(n)|^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi}|X_1(e^{j\omega})|^2d\omega$

Earlier, we studied sampling in the frequency domain. With that basic knowledge, we sample $X(e^{j\omega})$ in the frequency domain, so that a convenient digital analysis can be done on the sampled data. Hence, the DFT is sampled in both the time and frequency domains. With the assumption $x(n) = x_p(n)$, the DFT is given by

$X(k) = DFT[x(n)] = X(\frac{2\pi}{N}k) = \displaystyle\sum\limits_{n = 0}^{N-1}x(n)e^{-\frac{j2\pi nk}{N}}$, k = 0,1,…,N−1 …eq(3)

And the IDFT is given by

$x(n) = IDFT[X(k)] = \frac{1}{N}\sum_{k = 0}^{N-1}X(k)e^{\frac{j2\pi nk}{N}}$, n = 0,1,…,N−1 …eq(4)

$\therefore x(n)\Leftrightarrow X(k)$

Twiddle Factor

It is denoted as $W_N$ and defined as $W_N = e^{-j2\pi/N}$. Its magnitude is always unity and its phase is $-2\pi/N$. It is a vector on the unit circle, used for computational convenience.
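Equations (3) and (4) can be checked directly against `numpy.fft`, which implements the same definitions:

```python
import numpy as np

def dft(x):
    """X(k) = sum_n x(n) e^{-j 2 pi n k / N}  -- eq. (3), O(N^2)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.sum(x * np.exp(-2j * np.pi * n * k / N), axis=1)

def idft(X):
    """x(n) = (1/N) sum_k X(k) e^{j 2 pi n k / N}  -- eq. (4)."""
    N = len(X)
    k = np.arange(N)
    n = k.reshape(-1, 1)
    return np.sum(X * np.exp(2j * np.pi * n * k / N), axis=1) / N

x = np.array([1.0, 2.0, 4.0, 3.0])
X = dft(x)
assert np.allclose(X, np.fft.fft(x))   # matches the library DFT
assert np.allclose(idft(X), x)         # IDFT recovers the sequence
```

The direct sums above cost N² multiplications, exactly the count quoted later for the matrix form; the FFT computes the same result in O(N log N).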
Mathematically, it can be shown that

$W_N^r = W_N^{r\pm N} = W_N^{r\pm 2N} = \dots$

That is, $W_N^r$ is a function of r with period N. Consider N = 8, r = 0,1,2,3,…,14,15,16,…

$W_8^0 = W_8^8 = W_8^{16} = \dots = W_8^{32} = \dots = 1 = 1\angle 0$

$W_8^1 = W_8^9 = W_8^{17} = \dots = W_8^{33} = \dots = \frac{1}{\sqrt 2} - j\frac{1}{\sqrt 2} = 1\angle -\frac{\pi}{4}$

Linear Transformation

Let us understand linear transformation. We know that

$X(k) = DFT[x(n)] = X(\frac{2\pi}{N}k) = \sum_{n = 0}^{N-1}x(n)W_N^{nk};\quad k = 0,1,…,N−1$

$x(n) = IDFT[X(k)] = \frac{1}{N}\sum_{k = 0}^{N-1}X(k)W_N^{-nk};\quad n = 0,1,…,N−1$

Note − Computation of the DFT requires N² complex multiplications and N(N−1) complex additions.

$x_N = \begin{bmatrix}x(0)\\x(1)\\.\\.\\x(N-1)\end{bmatrix}\quad N\text{-point vector of signal } x_N$

$X_N = \begin{bmatrix}X(0)\\X(1)\\.\\.\\X(N-1)\end{bmatrix}\quad N\text{-point vector of signal } X_N$

$W_N = \begin{bmatrix}1 & 1 & 1 & \dots & \dots & 1\\1 & W_N & W_N^2 & \dots & \dots & W_N^{N-1}\\1 & W_N^2 & W_N^4 & \dots & \dots & W_N^{2(N-1)}\\. & & & & & \\1 & W_N^{N-1} & W_N^{2(N-1)} & \dots & \dots & W_N^{(N-1)(N-1)}\end{bmatrix}$

The N-point DFT in matrix form is given by

$X_N = W_Nx_N$

where $W_N$ is the matrix of the linear transformation. Then

$x_N = W_N^{-1}X_N$

and the IDFT in matrix form is

$$x_N = \frac{1}{N}W_N^*X_N$$

Comparing both expressions for $x_N$: $W_N^{-1} = \frac{1}{N}W_N^*$ and $W_N\times W_N^* = N[I]_{N\times N}$.

Therefore, $W_N$ is a linear transformation matrix, an orthogonal (unitary) matrix. From the periodicity and symmetry of $W_N$, it can be concluded that

$W_N^{k+N/2} = -W_N^k$

Circular Symmetry

The N-point DFT of a finite-duration sequence x(n) of length L ≤ N is equivalent to the N-point DFT of the periodic extension of x(n), i.e. $x_p(n)$ of period N, where $x_p(n) = \sum_{l = -\infty}^\infty x(n-Nl)$. Now, if we shift this periodic sequence by k units to the right, another periodic sequence is obtained.
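The matrix form and the identity $W_N W_N^* = N I$ can be sketched as follows:

```python
import numpy as np

N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # W[k, n] = W_N^{nk}

# DFT as a matrix-vector product: X = W x
x = np.array([1.0, 2.0, 4.0, 3.0, 0.0, 1.0, 2.0, 1.0])
assert np.allclose(W @ x, np.fft.fft(x))

# W_N W_N^* = N I, so W_N^{-1} = (1/N) W_N^*
assert np.allclose(W @ np.conj(W), N * np.eye(N))

# Symmetry property: W_N^{k + N/2} = -W_N^k
WN = np.exp(-2j * np.pi / N)
assert np.allclose(WN ** (3 + N // 2), -(WN ** 3))
```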
This is known as a circular shift, and it is given by

$$x_p^\prime (n) = x_p(n-k) = \sum_{l = -\infty}^\infty x(n-k-Nl)$$

The new finite sequence can be represented as

$$x^\prime (n) = \begin{cases}x_p^\prime(n), & 0\leq n\leq N-1\\0, & \text{otherwise}\end{cases}$$

Example − Let x(n) = {1,2,4,3}, N = 4, and consider a circular shift by k = 2, i.e. a 2-unit right shift:

$x^\prime (n) = x((n-k) \bmod N) \equiv x((n-2))_4$

Taking the clockwise direction as the positive direction, we get

$x^\prime(0) = x((-2))_4 = x(2) = 4$

$x^\prime(1) = x((-1))_4 = x(3) = 3$

$x^\prime(2) = x((0))_4 = x(0) = 1$

$x^\prime(3) = x((1))_4 = x(1) = 2$

Conclusion − A circular shift of an N-point sequence is equivalent to a linear shift of its periodic extension, and vice versa.

Circularly even sequence − $x(N-n) = x(n),\quad 1\leq n\leq N-1$

i.e. $x_p(n) = x_p(-n) = x_p(N-n)$

Conjugate even − $x_p(n) = x_p^*(N-n)$

Circularly odd sequence − $x(N-n) = -x(n),\quad 1\leq n\leq N-1$

i.e. $x_p(n) = -x_p(-n) = -x_p(N-n)$

Conjugate odd − $x_p(n) = -x_p^*(N-n)$

Now, $x_p(n) = x_{pe}(n)+x_{po}(n)$, where

$x_{pe}(n) = \frac{1}{2}[x_p(n)+x_p^*(N-n)]$

$x_{po}(n) = \frac{1}{2}[x_p(n)-x_p^*(N-n)]$

For any real signal x(n),

$X(k) = X^*(N-k)$

$X_R(k) = X_R(N-k)$

$X_I(k) = -X_I(N-k)$

$\angle X(k) = -\angle X(N-k)$

Time reversal − reversing the samples about the 0th sample. This is given as

$x((-n))_N = x(N-n),\quad 0\leq n\leq N-1$

Time reversal means plotting the samples of the sequence in the clockwise, i.e. assumed negative, direction.
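The shift example can be reproduced with `numpy.roll`, which implements exactly this circular shift:

```python
import numpy as np

x = np.array([1, 2, 4, 3])          # x(n), N = 4

# Circular right shift by k = 2: x'(n) = x((n - 2))_4
x_shift = np.roll(x, 2)
assert list(x_shift) == [4, 3, 1, 2]

# Circular time reversal: x((-n))_N, i.e. index -n modulo N
x_rev = x[np.mod(-np.arange(4), 4)]
assert list(x_rev) == [1, 3, 4, 2]
```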
Some Other Important Properties

Other important DFT properties, with $x(n)\longleftrightarrow X(k)$:

Time reversal − $x((-n))_N = x(N-n)\longleftrightarrow X((-k))_N = X(N-k)$

Circular time shift − $x((n-l))_N \longleftrightarrow X(k)e^{-j2\pi lk/N}$

Circular frequency shift − $x(n)e^{j2\pi ln/N} \longleftrightarrow X((k-l))_N$

Complex conjugate properties − $x^*(n)\longleftrightarrow X^*((-k))_N = X^*(N-k)$ and $x^*((-n))_N = x^*(N-n)\longleftrightarrow X^*(k)$

Multiplication of two sequences − if $x_1(n)\longleftrightarrow X_1(k)$ and $x_2(n)\longleftrightarrow X_2(k)$, then $x_1(n)x_2(n)\longleftrightarrow \frac{1}{N}\,X_1(k)\,Ⓝ\,X_2(k)$

Circular convolution and multiplication of two DFTs −

$x_1(m)\,Ⓝ\,x_2(m) = \sum_{n = 0}^{N-1}x_1(n)x_2((m-n))_N,\quad m = 0,1,2,…,N-1$

$x_1(n)\,Ⓝ\,x_2(n)\longleftrightarrow X_1(k)X_2(k)$

Circular correlation − If $x(n)\longleftrightarrow X(k)$ and $y(n)\longleftrightarrow Y(k)$, then there exists a cross-correlation sequence denoted $\bar Y_{xy}$ such that

$\bar Y_{xy}(l) = \sum_{n = 0}^{N-1}x(n)y^*((n-l))_N \longleftrightarrow X(k)Y^*(k)$

Parseval's theorem − $\sum_{n = 0}^{N-1}x(n)y^*(n) = \frac{1}{N}\sum_{k = 0}^{N-1}X(k)Y^*(k)$
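The circular convolution property can be verified numerically: computing the defining sum directly and via the product of DFTs gives the same result.

```python
import numpy as np

def circular_convolve(x1, x2):
    """y(m) = sum_n x1(n) x2((m - n) mod N), the direct definition."""
    N = len(x1)
    return np.array([sum(x1[n] * x2[(m - n) % N] for n in range(N))
                     for m in range(N)])

x1 = np.array([2.0, 1.0, 2.0, 1.0])
x2 = np.array([1.0, 2.0, 3.0, 4.0])

direct = circular_convolve(x1, x2)
via_dft = np.real(np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)))
assert np.allclose(direct, via_dft)
```

This is also how fast convolution is implemented in practice: two FFTs, a pointwise product, and one inverse FFT.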
Digital Signal Processing – Unstable Systems

Unstable systems do not satisfy the BIBO condition. Therefore, for a bounded input, we cannot expect a bounded output from an unstable system.

Examples

a) $y(t) = t\,x(t)$

Here, a finite input does not guarantee a finite output. For example, if we put $x(t) = 2$, then $y(t) = 2t$. This is not a bounded value, because t can take arbitrarily large values. Therefore, this system is not stable; it is an unstable system.

b) $y(t) = \frac{x(t)}{\sin t}$

We discussed earlier that the sine function has a definite range from −1 to +1; but here it appears in the denominator. In the worst-case scenario, at t = 0 the sine function becomes zero, and the output tends to infinity even for a bounded input. Therefore, this type of system is not at all stable; it is an unstable system.
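A bounded-input test makes example (a) concrete: feeding the constant bounded input x(t) = 2 into y(t) = t·x(t), the output maximum grows with the length of the observation window rather than settling on a bound.

```python
import numpy as np

def unstable_system(x, t):
    """y(t) = t * x(t): output amplitude grows with t even for bounded input."""
    return t * x

t = np.linspace(0, 1000, 10001)
x = np.full_like(t, 2.0)            # bounded input, |x(t)| <= 2
y = unstable_system(x, t)

assert np.max(np.abs(x)) == 2.0     # input stays bounded
assert np.max(np.abs(y)) == 2000.0  # output grows with the window length
```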
DSP – Operations on Signals Convolution

The convolution of two signals in the time domain is equivalent to the multiplication of their representations in the frequency domain. Mathematically, the convolution of two signals is

$$y(t) = x_{1}(t)*x_{2}(t) = \int_{-\infty}^{\infty}x_{1}(p)\,x_{2}(t-p)\,dp$$

Steps for convolution

Take the signal x1(t) and substitute t = p, so that it becomes x1(p).

Take the signal x2(t) and do the same, making it x2(p).

Fold the signal, i.e. form x2(−p).

Time-shift the folded signal to obtain x2[−(p−t)] = x2(t−p).

Multiply the two signals, i.e. $x_{1}(p)\,x_{2}(t-p)$, and integrate over p.

Example

Let us convolve a step signal u(t) with itself.

$y(t) = u(t)*u(t) = \int_{-\infty}^{\infty}u(p)\,u(t-p)\,dp$

Now t can be greater than or less than zero, as shown in the figures below, which gives the following possibilities:

$y(t) = \begin{cases}0, & t<0\\\int_{0}^{t}1\,dp, & t>0\end{cases} = \begin{cases}0, & t<0\\t, & t>0\end{cases} = r(t)$

Properties of Convolution

Commutative

The order of convolution does not matter:

$$x_{1}(t)*x_{2}(t) = x_{2}(t)*x_{1}(t)$$

Associative

The grouping of a convolution involving three signals can be anything:

$$x_{1}(t)*[x_{2}(t)*x_{3}(t)] = [x_{1}(t)*x_{2}(t)]*x_{3}(t)$$

Distributive

Two signals can be added first and then convolved with a third signal; this is equivalent to convolving each with the third signal individually and adding the results:

$$x_{1}(t)*[x_{2}(t)+x_{3}(t)] = x_{1}(t)*x_{2}(t)+x_{1}(t)*x_{3}(t)$$

Area

If a signal is the result of the convolution of two signals, then its area is the product of the areas of the individual signals.
Mathematically, if $y(t) = x_{1}(t)*x_{2}(t)$, then

Area of y(t) = Area of x1(t) × Area of x2(t)

Scaling

If both signals are time-scaled by a constant a, the result is the correspondingly scaled output divided by |a|:

If $x_{1}(t)*x_{2}(t) = y(t)$, then $x_{1}(at)*x_{2}(at) = \frac{y(at)}{|a|}, a \neq 0$

Delay

Suppose a signal y(t) results from the convolution of two signals x1(t) and x2(t). If the two signals are delayed by times t1 and t2 respectively, then the resultant signal is delayed by (t1 + t2). Mathematically,

If $x_{1}(t)*x_{2}(t) = y(t)$, then $x_{1}(t-t_{1})*x_{2}(t-t_{2}) = y[t-(t_{1}+t_{2})]$

Solved Examples

Example 1 − Find the convolution of the signals u(t−1) and u(t−2).

Solution − We know that $u(t)*u(t) = r(t)$. By the delay property, delaying the inputs by 1 and 2 delays the output by 1 + 2 = 3:

$y(t) = u(t-1)*u(t-2) = r(t-3)$

Example 2 − Find the convolution of the two signals given by

$x_{1}(n) = \lbrace 3,-2,2\rbrace$

$x_{2}(n) = \begin{cases}2, & 0\leq n\leq 4\\0, & \text{elsewhere}\end{cases}$

Solution − x2(n) can be written as $x_{2}(n) = \lbrace 2,2,2,2,2\rbrace$, origin at the first sample. In the Z-domain,

$X_{1}(Z) = 3-2Z^{-1}+2Z^{-2}$

$X_{2}(Z) = 2+2Z^{-1}+2Z^{-2}+2Z^{-3}+2Z^{-4}$

The resultant signal is

$X(Z) = X_{1}(Z)X_{2}(Z) = \lbrace 3-2Z^{-1}+2Z^{-2}\rbrace \times \lbrace 2+2Z^{-1}+2Z^{-2}+2Z^{-3}+2Z^{-4}\rbrace$

$= 6+2Z^{-1}+6Z^{-2}+6Z^{-3}+6Z^{-4}+0Z^{-5}+4Z^{-6}$

Taking the inverse Z-transform of the above, we get the resultant signal

$x(n) = \lbrace 6,2,6,6,6,0,4\rbrace$, origin at the first sample.

Example 3 − Determine the convolution of the following two signals:

$x(n) = \lbrace 2,1,0,1\rbrace$

$h(n) = \lbrace 1,2,3,1\rbrace$

Solution − Taking the Z-transforms of the signals, we get

$X(Z) = 2+Z^{-1}+Z^{-3}$

$H(Z) = 1+2Z^{-1}+3Z^{-2}+Z^{-3}$

Now, the convolution of the two signals
means the multiplication of their Z-transforms, that is

$Y(Z) = X(Z) \times H(Z) = \lbrace 2+Z^{-1}+Z^{-3}\rbrace \times \lbrace 1+2Z^{-1}+3Z^{-2}+Z^{-3}\rbrace$

$= 2+5Z^{-1}+8Z^{-2}+6Z^{-3}+3Z^{-4}+3Z^{-5}+Z^{-6}$

Taking the inverse Z-transform, the resultant signal can be written as

$y(n) = \lbrace 2,5,8,6,3,3,1\rbrace$, origin at the first sample.
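Both discrete examples can be checked with `numpy.convolve`, since multiplying Z-transforms is the same as convolving the coefficient sequences:

```python
import numpy as np

# Example 2: x1(n) = {3, -2, 2}, x2(n) = {2, 2, 2, 2, 2}
assert list(np.convolve([3, -2, 2], [2, 2, 2, 2, 2])) == [6, 2, 6, 6, 6, 0, 4]

# Example 3: x(n) = {2, 1, 0, 1}, h(n) = {1, 2, 3, 1}
assert list(np.convolve([2, 1, 0, 1], [1, 2, 3, 1])) == [2, 5, 8, 6, 3, 3, 1]
```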
Digital Signal Processing – Stable Systems

A stable system satisfies the BIBO (bounded input, bounded output) condition. Here, bounded means finite in amplitude. For a stable system, the output should be bounded or finite for every finite or bounded input, at every instant of time. Some examples of bounded inputs are the sine, cosine, DC, signum and unit step functions.

Examples

a) $y(t) = x(t)+10$

Here, for a definite bounded input, we get a definite bounded output: if we put $x(t) = 2$, then $y(t) = 12$, which is bounded in nature. Therefore, the system is stable.

b) $y(t) = \sin[x(t)]$

In the given expression, we know that sine functions take values between −1 and +1. So, whatever values we substitute for x(t), the output stays within this boundary. Therefore, the system is stable.
Digital Signal Processing – Causal Systems

Previously, we saw that a system needs to be independent of future and past values to be static. For causality, the condition is almost the same, with a little modification: for a system to be causal, it should be independent of future values only. Dependence on past values does not prevent a system from being causal. Causal systems are the practically or physically realizable systems. Let us consider some examples to understand this much better.

Examples

Consider the following signals.

a) $y(t) = x(t)$

Here, the output depends only on the present value of x. For example, if we substitute t = 3, the result depends on that instant of time only. Since it has no dependence on future values, we can call it a causal system.

b) $y(t) = x(t-1)$

Here, the system depends on past values. For instance, if we substitute t = 3, the expression reduces to x(2), which is a past value relative to our input. At no instant does it depend on future values. Therefore, this system is also causal.

c) $y(t) = x(t)+x(t+1)$

In this case, the system has two parts. The part x(t), as discussed earlier, depends only on the present value, so there is no issue with it. However, x(t+1) clearly depends on future values: if we put t = 1, the expression reduces to x(2), which is a future value. Therefore, the system is not causal.
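In discrete time, the same test can be sketched by checking whether an output sample changes when only a future input sample changes. The two helper systems below are illustrative stand-ins for examples (b) and (c), not a standard API:

```python
import numpy as np

def causal_sys(x):
    """y(n) = x(n) + x(n-1): depends on present and past samples only."""
    y = x.copy()
    y[1:] += x[:-1]
    return y

def noncausal_sys(x):
    """y(n) = x(n) + x(n+1): depends on a future sample."""
    y = x.copy()
    y[:-1] += x[1:]
    return y

x = np.zeros(8)
x2 = x.copy()
x2[5] = 1.0              # change only a *future* sample relative to n = 3, 4

# Causal: y(3) is unaffected by the change at n = 5
assert causal_sys(x)[3] == causal_sys(x2)[3]

# Non-causal: y(4) already reacts to the input at n = 5
assert noncausal_sys(x)[4] != noncausal_sys(x2)[4]
```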
DSP – Operations on Signals Differentiation

Two very important operations performed on signals are differentiation and integration.

Differentiation

Differentiation of a signal x(t) means representing the slope of that signal with respect to time. Mathematically,

$$x(t)\rightarrow \frac{dx(t)}{dt}$$

In the case of an OPAMP differentiator, this methodology is very helpful. We can easily differentiate a signal graphically rather than using the formula, provided the signal is of rectangular or triangular type, which happens in most cases.

Original Signal − Differentiated Signal
Ramp − Step
Step − Impulse
Impulse − 1

The above table illustrates the condition of a signal after being differentiated. For example, a ramp signal converts into a step signal after differentiation, and a unit step signal becomes an impulse signal.

Example

Let the given signal be $x(t) = 4[r(t)-r(t-2)]$. When plotted, it looks like the one on the left side of the figure given below. Our aim is to differentiate this signal. We know that a ramp signal differentiates to a unit step signal, so the resulting signal y(t) can be written as

$y(t) = \frac{dx(t)}{dt} = \frac{d}{dt}\,4[r(t)-r(t-2)] = 4[u(t)-u(t-2)]$

This signal is plotted on the right-hand side of the above figure.
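The example can be checked numerically with `numpy.gradient`: away from the corner points at t = 0 and t = 2, the derivative of 4[r(t) − r(t−2)] equals 4[u(t) − u(t−2)].

```python
import numpy as np

t = np.linspace(-1, 4, 5001)
r = lambda t: np.maximum(t, 0.0)        # ramp r(t)
u = lambda t: (t > 0).astype(float)     # unit step u(t)

x = 4 * (r(t) - r(t - 2))               # x(t) = 4[r(t) - r(t-2)]
dx = np.gradient(x, t)                  # numerical derivative
y = 4 * (u(t) - u(t - 2))               # expected: 4[u(t) - u(t-2)]

# Compare away from the corners at t = 0 and t = 2, where the
# finite-difference estimate smears the jump across one sample
mask = (np.abs(t) > 0.01) & (np.abs(t - 2) > 0.01)
assert np.allclose(dx[mask], y[mask], atol=1e-6)
```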
Digital Signal Processing – Static Systems

Some systems have feedback and some do not. In systems without feedback, the output depends only on the present values of the input; past values of the data are not used. Such systems are known as static systems. They do not depend on future values either. Since these systems keep no record of the past, they have no memory: all static systems are memory-less systems. Let us take an example to understand this concept much better.

Example

Let us verify whether the following systems are static.

$y(t) = x(t)+x(t-1)$

$y(t) = x(2t)$

$y(t) = \sin[x(t)]$

a) $y(t) = x(t)+x(t-1)$

Here, x(t) is the present value and has no relation with past values, so that part is static. However, in the case of x(t−1), if we put t = 0, it reduces to x(−1), which is past-value dependent, so it is not static. Therefore, y(t) is not a static system.

b) $y(t) = x(2t)$

If we substitute t = 2, the result is y(2) = x(4), which is future-value dependent. So, this is also not a static system.

c) $y(t) = \sin[x(t)]$

In this expression, we are dealing with the sine function, whose range lies within −1 to +1. Whatever value we substitute for x(t), the output depends only on the present value of the input, not on any past or future values. Hence, it is a static system.

From the above examples, we can draw the following conclusions:

Any system involving a time shift is not static.

Any system involving time scaling, as in example (b), is also not static.

Integration and differentiation systems are also not static.