Z-Transforms (ZT)

Analysis of discrete time LTI systems can be done using z-transforms. It is a powerful mathematical tool that converts difference equations into algebraic equations.

The bilateral (two sided) z-transform of a discrete time signal x(n) is given as

$Z.T[x(n)] = X(Z) = \sum_{n=-\infty}^{\infty} x(n)z^{-n}$

The unilateral (one sided) z-transform of a discrete time signal x(n) is given as

$Z.T[x(n)] = X(Z) = \sum_{n=0}^{\infty} x(n)z^{-n}$

The z-transform may exist for some signals for which the Discrete Time Fourier Transform (DTFT) does not exist.

Concept of Z-Transform and Inverse Z-Transform

The z-transform of a discrete time signal x(n) is represented by X(Z), and it is defined as

$X(Z) = \sum_{n=-\infty}^{\infty} x(n)z^{-n} \quad ...(1)$

If $Z = re^{j\omega}$, then equation 1 becomes

$X(re^{j\omega}) = \sum_{n=-\infty}^{\infty} x(n)[re^{j\omega}]^{-n}$

$= \sum_{n=-\infty}^{\infty} x(n)[r^{-n}]e^{-j\omega n}$

$X(re^{j\omega}) = X(Z) = F.T[x(n)r^{-n}] \quad ...(2)$

The above equation represents the relation between the Fourier transform and the z-transform:

$X(Z)|_{z=e^{j\omega}} = F.T[x(n)]$

Inverse Z-transform

$X(re^{j\omega}) = F.T[x(n)r^{-n}]$

$x(n)r^{-n} = F.T^{-1}[X(re^{j\omega})]$

$x(n) = r^n\, F.T^{-1}[X(re^{j\omega})]$

$= r^n {1 \over 2\pi} \int X(re^{j\omega}) e^{j\omega n}\, d\omega$

$= {1 \over 2\pi} \int X(re^{j\omega})[re^{j\omega}]^n\, d\omega \quad ...(3)$

Substitute $re^{j\omega} = z$.

$dz = jre^{j\omega}\, d\omega = jz\, d\omega$

$d\omega = {1 \over j} z^{-1}\, dz$

Substituting in equation 3:

$x(n) = {1 \over 2\pi} \int X(z) z^n {1 \over j} z^{-1}\, dz = {1 \over 2\pi j} \int X(z) z^{n-1}\, dz$

$$X(Z) = \sum_{n=-\infty}^{\infty} x(n)z^{-n}$$

$$x(n) = {1 \over 2\pi j} \int X(z) z^{n-1}\, dz$$
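The defining sum can be checked numerically for a finite-length sequence. The sketch below is a minimal illustration, assuming NumPy, a truncated sequence $x(n) = a^n$ with example values a = 0.5 and N = 50, and a hypothetical helper named z_transform; it evaluates the unilateral z-transform at a point in the ROC and compares it with the geometric-series closed form.

```python
import numpy as np

# Minimal sketch (assumptions: NumPy, example values a = 0.5, N = 50):
# evaluate X(z) = sum_{n=0}^{N-1} x(n) z^{-n} for the truncated sequence
# x(n) = a^n and compare with the closed form (1 - (a/z)^N) / (1 - a/z).
a, N = 0.5, 50
x = a ** np.arange(N)                 # x(n) = a^n for 0 <= n < N

def z_transform(x, z):
    """Evaluate the unilateral z-transform of a finite sequence at the point z."""
    n = np.arange(len(x))
    return np.sum(x * z ** (-n))

z = 1.2 * np.exp(0.3j)                # any point with |z| > a works here
numeric = z_transform(x, z)
closed_form = (1 - (a / z) ** N) / (1 - a / z)
print(np.allclose(numeric, closed_form))   # True
```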
Z-Transforms Properties

The z-transform has the following properties:

Linearity Property

If $x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z)$ and $y(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} Y(Z)$

Then the linearity property states that

$a\,x(n) + b\,y(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} a\,X(Z) + b\,Y(Z)$

Time Shifting Property

If $x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z)$

Then the time shifting property states that

$x(n-m) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} z^{-m} X(Z)$

Multiplication by Exponential Sequence Property

If $x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z)$

Then the multiplication by an exponential sequence property states that

$a^n \cdot x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z/a)$

Time Reversal Property

If $x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z)$

Then the time reversal property states that

$x(-n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(1/Z)$

Differentiation in Z-Domain or Multiplication by n Property

If $x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z)$

Then the multiplication by n (differentiation in z-domain) property states that

$n^k\, x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} [-1]^k z^k {d^k X(Z) \over dZ^k}$

Convolution Property

If $x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z)$ and $y(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} Y(Z)$

Then the convolution property states that

$x(n) * y(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z) \cdot Y(Z)$

Correlation Property

If $x(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z)$ and $y(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} Y(Z)$

Then the correlation property states that

$x(n) \otimes y(n) \stackrel{\mathrm{Z.T}}{\longleftrightarrow} X(Z) \cdot Y(Z^{-1})$

Initial Value and Final Value Theorems

Initial value and final value theorems of the z-transform are defined for causal signals.

Initial Value Theorem

For a causal signal x(n), the initial value theorem states that

$x(0) = \lim_{z \to \infty} X(z)$

This is used to find the initial value of the signal without taking the inverse z-transform.

Final Value Theorem

For a causal signal x(n), the final value theorem states that

$x(\infty) = \lim_{z \to 1} [z-1] X(z)$

This is used to find the final value of the signal without taking the inverse z-transform.

Region of Convergence (ROC) of Z-Transform

The range of variation of z for which the z-transform converges is called the region of convergence of the z-transform.

Properties of ROC of Z-Transforms

The ROC of the z-transform is indicated with a circle in the z-plane.

The ROC does not contain any poles.

If x(n) is a finite duration causal sequence or right sided sequence, then the ROC is the entire z-plane except at z = 0.

If x(n) is a finite duration anti-causal sequence or left sided sequence, then the ROC is the entire z-plane except at z = ∞.

If x(n) is an infinite duration causal sequence, the ROC is the exterior of the circle with radius a, i.e. |z| > a.

If x(n) is an infinite duration anti-causal sequence, the ROC is the interior of the circle with radius a, i.e. |z| < a.

If x(n) is a finite duration two sided sequence, then the ROC is the entire z-plane except at z = 0 and z = ∞.

The concept of ROC can be explained by the following example:

Example 1: Find the z-transform and ROC of $a^n u[n] + a^{-n}u[-n-1]$

$Z.T[a^n u[n]] + Z.T[a^{-n}u[-n-1]] = {Z \over Z-a} - {Z \over Z-{1 \over a}}$

$$ROC: |z| \gt a \quad\quad ROC: |z| \lt {1 \over a}$$

The plot of the ROC has two cases, a > 1 and a < 1, since a is not known. For a > 1 the two regions do not overlap, so there is no combined ROC. For a < 1 the two regions overlap, and the combined ROC is $a \lt |z| \lt {1 \over a}$. Hence for this problem the z-transform exists only when a < 1.
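The ROC behaviour in Example 1 can be seen numerically. This is a hedged sketch, assuming NumPy and an example value a = 0.5: partial sums of the causal part $\sum_{n \ge 0} a^n z^{-n}$ approach $Z/(Z-a)$ only when the evaluation point lies inside the ROC |z| > a.

```python
import numpy as np

# Hedged sketch (assumptions: NumPy, example value a = 0.5): the causal part
# of Example 1, sum_{n>=0} a^n z^{-n}, converges to Z/(Z-a) only when the
# evaluation point lies inside its ROC |z| > a.
a = 0.5
n = np.arange(2000)                      # enough terms for the partial sum

def causal_sum(z):
    return np.sum((a / z) ** n)          # partial sum of a^n z^{-n}

z_in = 0.9 * np.exp(0.4j)                # |z| = 0.9 > a: inside the ROC
print(np.allclose(causal_sum(z_in), z_in / (z_in - a)))   # True

# For |z| < a the terms a^n z^{-n} = (a/z)^n grow without bound, so the
# series diverges; e.g. with |z| = 0.4 the 50th term already exceeds 7e4.
print(abs((a / 0.4) ** 50))              # ~ 70065
```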
Causality and Stability

The causality condition for discrete time LTI systems is as follows:

A discrete time LTI system is causal when its ROC is outside the outermost pole.

In the transfer function H[Z], the order of the numerator cannot be greater than the order of the denominator.

Stability Condition for Discrete Time LTI Systems

A discrete time LTI system is stable when

its system function H[Z] includes the unit circle |z| = 1;

all poles of the transfer function lie inside the unit circle |z| = 1.

A quick numerical pole check is sketched after the table below.

Z-Transform of Basic Signals

x(n) : X[Z]

$\delta(n)$ : $1$

$u(n)$ : ${Z \over Z-1}$

$u(-n-1)$ : $-{Z \over Z-1}$

$\delta(n-m)$ : $z^{-m}$

$a^n u[n]$ : ${Z \over Z-a}$

$a^n u[-n-1]$ : $-{Z \over Z-a}$

$n\,a^n u[n]$ : ${aZ \over (Z-a)^2}$

$n\,a^n u[-n-1]$ : $-{aZ \over (Z-a)^2}$

$a^n \cos \omega n\, u[n]$ : ${Z^2 - aZ\cos\omega \over Z^2 - 2aZ\cos\omega + a^2}$

$a^n \sin \omega n\, u[n]$ : ${aZ\sin\omega \over Z^2 - 2aZ\cos\omega + a^2}$
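The pole-location test for stability can be carried out numerically. The sketch below is a hedged illustration, assuming NumPy and a hypothetical example system H(z) = 1 / (1 - 1.2 z^-1 + 0.35 z^-2): np.roots gives the poles of the denominator, and the causal system is stable if they all lie inside the unit circle.

```python
import numpy as np

# Hedged sketch (assumptions: NumPy, a hypothetical example system
# H(z) = 1 / (1 - 1.2 z^-1 + 0.35 z^-2)): the poles are the roots of the
# denominator, and the causal system is stable if they all satisfy |z| < 1.
den = [1.0, -1.2, 0.35]               # denominator coefficients in powers of z^-1
poles = np.roots(den)                 # poles at 0.7 and 0.5
print(poles)
print(np.all(np.abs(poles) < 1))      # True -> stable
```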
Signals Sampling Techniques

There are three types of sampling techniques:

Impulse sampling.

Natural sampling.

Flat Top sampling.

Impulse Sampling

Impulse sampling can be performed by multiplying the input signal x(t) with an impulse train $\sum_{n=-\infty}^{\infty}\delta(t-nT)$ of period T. Here, the amplitude of each impulse changes with respect to the amplitude of the input signal x(t).

The output of the sampler is given by

$y(t) = x(t) \times$ impulse train

$= x(t) \times \sum_{n=-\infty}^{\infty} \delta(t-nT)$

$y(t) = y_{\delta}(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t-nT) \quad ...(1)$

To get the spectrum of the sampled signal, take the Fourier transform of equation 1 on both sides:

$Y(\omega) = {1 \over T} \sum_{n=-\infty}^{\infty} X(\omega - n\omega_s)$

This is called ideal sampling or impulse sampling. It cannot be used practically because the pulse width cannot be zero and the generation of an impulse train is not practically possible.

Natural Sampling

Natural sampling is similar to impulse sampling, except that the impulse train is replaced by a pulse train of period T, i.e. the input signal x(t) is multiplied by the pulse train $\sum_{n=-\infty}^{\infty} P(t-nT)$ as shown below.

The output of the sampler is

$y(t) = x(t) \times \text{pulse train}$

$= x(t) \times p(t)$

$= x(t) \times \sum_{n=-\infty}^{\infty} P(t-nT) \quad ...(1)$

The exponential Fourier series representation of p(t) can be given as

$p(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_s t} \quad ...(2)$

$= \sum_{n=-\infty}^{\infty} F_n e^{j2\pi nf_s t}$

Where $F_n = {1 \over T} \int_{-T/2}^{T/2} p(t) e^{-jn\omega_s t}\, dt = {1 \over T} P(n\omega_s)$

Substitute the value of $F_n$ in equation 2:

$\therefore p(t) = \sum_{n=-\infty}^{\infty} {1 \over T} P(n\omega_s) e^{jn\omega_s t} = {1 \over T} \sum_{n=-\infty}^{\infty} P(n\omega_s) e^{jn\omega_s t}$

Substitute p(t) in equation 1:

$y(t) = x(t) \times p(t)$

$= x(t) \times {1 \over T} \sum_{n=-\infty}^{\infty} P(n\omega_s)\, e^{jn\omega_s t}$

$y(t) = {1 \over T} \sum_{n=-\infty}^{\infty} P(n\omega_s)\, x(t)\, e^{jn\omega_s t}$

To get the spectrum of the sampled signal, take the Fourier transform on both sides:

$F.T[y(t)] = F.T\left[{1 \over T} \sum_{n=-\infty}^{\infty} P(n\omega_s)\, x(t)\, e^{jn\omega_s t}\right]$

$= {1 \over T} \sum_{n=-\infty}^{\infty} P(n\omega_s)\, F.T[x(t)\, e^{jn\omega_s t}]$

According to the frequency shifting property,

$F.T[x(t)\, e^{jn\omega_s t}] = X[\omega - n\omega_s]$

$\therefore Y[\omega] = {1 \over T} \sum_{n=-\infty}^{\infty} P(n\omega_s)\, X[\omega - n\omega_s]$

Flat Top Sampling

During transmission, noise is introduced at the top of the transmission pulse, which can be easily removed if the pulse has a flat top. Here, the tops of the samples are flat, i.e. they have constant amplitude. Hence, it is called flat top sampling or practical sampling. Flat top sampling makes use of a sample and hold circuit.

Theoretically, the sampled signal can be obtained by convolving the rectangular pulse p(t) with the ideally sampled signal $y_\delta(t)$ as shown in the diagram:

$y(t) = p(t) * y_\delta(t) \quad ...(1)$

To get the sampled spectrum, take the Fourier transform on both sides of equation 1:

$Y[\omega] = F.T[p(t) * y_\delta(t)]$

By the convolution property,

$Y[\omega] = P(\omega)\, Y_\delta(\omega)$

Here $P(\omega) = T\, Sa\left({\omega T \over 2}\right) = {2\sin(\omega T/2) \over \omega}$

Nyquist Rate

It is the minimum sampling rate at which a signal can be converted into samples and recovered back without distortion.

Nyquist rate $f_N = 2f_m$ Hz

Nyquist interval $= {1 \over f_N} = {1 \over 2f_m}$ seconds.
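As a quick numerical illustration of the Nyquist rate (a minimal sketch, assuming NumPy and hypothetical tone frequencies), a 7 Hz sine sampled at only 10 Hz, which is below its Nyquist rate of 14 Hz, yields exactly the same samples as a 3 Hz sine of opposite sign, i.e. it aliases:

```python
import numpy as np

# Minimal sketch (assumptions: NumPy, example frequencies 7 Hz and 3 Hz,
# sampling rate 10 Hz): sampling below the Nyquist rate causes aliasing.
fs = 10.0                             # below the Nyquist rate 2*7 = 14 Hz
t = np.arange(50) / fs                # 50 sampling instants

x_7hz = np.sin(2 * np.pi * 7 * t)     # tone above fs/2
x_3hz = np.sin(2 * np.pi * 3 * t)     # tone below fs/2 = 5 Hz

# The samples coincide up to a sign flip: the 7 Hz tone aliases to 3 Hz.
print(np.allclose(x_7hz, -x_3hz))     # True
```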
Sampling of Band Pass Signals

In the case of band pass signals, the spectrum of the band pass signal X[ω] = 0 for frequencies outside the range f1 ≤ f ≤ f2. The frequency f1 is always greater than zero. Also, there is no aliasing effect when fs > 2f2. But this has two disadvantages:

The sampling rate is large in proportion to f2. This has practical limitations.

The sampled signal spectrum has spectral gaps.

To overcome this, the band pass theorem states that the input signal x(t) can be converted into its samples and recovered back without distortion when the sampling frequency fs < 2f2. Also,

$$f_s = {1 \over T} = {2f_2 \over m}$$

where m is the largest integer not exceeding ${f_2 \over B}$ and B is the bandwidth of the signal. If f2 = KB, then

$$f_s = {1 \over T} = {2KB \over m}$$

For band pass signals of bandwidth 2fm and the minimum sampling rate fs = 2B = 4fm, the spectrum of the sampled signal is given by

$Y[\omega] = {1 \over T} \sum_{n=-\infty}^{\infty} X[\omega - 2nB]$
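The band pass sampling rule is simple arithmetic. The sketch below is a minimal illustration, assuming NumPy and hypothetical band edges of 20 kHz and 25 kHz; it computes B, m and the resulting sampling rate, which is far below the low pass requirement 2f2.

```python
import numpy as np

# Minimal sketch (assumptions: NumPy, hypothetical band edges 20-25 kHz) of
# the band pass sampling rule f_s = 2*f2/m, with m = floor(f2 / B).
f1, f2 = 20e3, 25e3              # band edges in Hz
B = f2 - f1                      # bandwidth = 5 kHz
m = int(np.floor(f2 / B))        # m = 5
fs = 2 * f2 / m                  # 10 kHz, versus the low pass rate 2*f2 = 50 kHz
print(B, m, fs)                  # 5000.0 5 10000.0
```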
Fourier Series Types

Trigonometric Fourier Series (TFS)

$\sin n\omega_0 t$ and $\sin m\omega_0 t$ are orthogonal over the interval $(t_0, t_0 + {2\pi \over \omega_0})$. So $\sin\omega_0 t, \sin 2\omega_0 t$ forms an orthogonal set. This set is not complete without {$\cos n\omega_0 t$}, because this cosine set is also orthogonal to the sine set. So to complete this set we must include both cosine and sine terms. Now the complete orthogonal set contains all cosine and sine terms, i.e. {$\sin n\omega_0 t, \cos n\omega_0 t$} where n = 0, 1, 2...

$\therefore$ Any function x(t) in the interval $(t_0, t_0 + {2\pi \over \omega_0})$ can be represented as

$$x(t) = a_0 \cos 0\omega_0 t + a_1 \cos 1\omega_0 t + a_2 \cos 2\omega_0 t + ... + a_n \cos n\omega_0 t + ...$$

$$+ b_0 \sin 0\omega_0 t + b_1 \sin 1\omega_0 t + ... + b_n \sin n\omega_0 t + ...$$

$$= a_0 + a_1 \cos 1\omega_0 t + a_2 \cos 2\omega_0 t + ... + a_n \cos n\omega_0 t + ...$$

$$+ b_1 \sin 1\omega_0 t + ... + b_n \sin n\omega_0 t + ...$$

$$\therefore x(t) = a_0 + \sum_{n=1}^{\infty} (a_n \cos n\omega_0 t + b_n \sin n\omega_0 t) \quad (t_0 < t < t_0 + T)$$

The above equation represents the trigonometric Fourier series representation of x(t).

$$\text{Where } a_0 = {\int_{t_0}^{t_0+T} x(t) \cdot 1\, dt \over \int_{t_0}^{t_0+T} 1^2\, dt} = {1 \over T} \int_{t_0}^{t_0+T} x(t)\, dt$$

$$a_n = {\int_{t_0}^{t_0+T} x(t) \cdot \cos n\omega_0 t\, dt \over \int_{t_0}^{t_0+T} \cos^2 n\omega_0 t\, dt}$$

$$b_n = {\int_{t_0}^{t_0+T} x(t) \cdot \sin n\omega_0 t\, dt \over \int_{t_0}^{t_0+T} \sin^2 n\omega_0 t\, dt}$$

$$\text{Here } \int_{t_0}^{t_0+T} \cos^2 n\omega_0 t\, dt = \int_{t_0}^{t_0+T} \sin^2 n\omega_0 t\, dt = {T \over 2}$$

$$\therefore a_n = {2 \over T} \int_{t_0}^{t_0+T} x(t) \cdot \cos n\omega_0 t\, dt$$

$$b_n = {2 \over T} \int_{t_0}^{t_0+T} x(t) \cdot \sin n\omega_0 t\, dt$$

Exponential Fourier Series (EFS)

Consider a set of complex exponential functions $\left\{e^{jn\omega_0 t}\right\}$ $(n = 0, \pm 1, \pm 2, ...)$, which is orthogonal over the interval $(t_0, t_0+T)$, where $T = {2\pi \over \omega_0}$. This is a complete set, so it is possible to represent any function f(t) as shown below:

$f(t) = F_0 + F_1 e^{j\omega_0 t} + F_2 e^{j2\omega_0 t} + ... + F_n e^{jn\omega_0 t} + ...$

$\quad\quad + F_{-1} e^{-j\omega_0 t} + F_{-2} e^{-j2\omega_0 t} + ... + F_{-n} e^{-jn\omega_0 t} + ...$

$$\therefore f(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_0 t} \quad\quad (t_0 < t < t_0 + T) \quad ...(1)$$

Equation 1 represents the exponential Fourier series representation of a signal f(t) over the interval (t0, t0+T).
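The trigonometric coefficient formulas above can be evaluated numerically. The sketch below is a minimal illustration, assuming NumPy and a hypothetical ±1 square wave of period T = 1; for this odd signal the $a_n$ come out near zero and the $b_n$ approach $4/(n\pi)$ for odd n.

```python
import numpy as np

# Minimal sketch (assumptions: NumPy, a +-1 square wave of period T = 1):
# approximate the coefficient integrals by averages over one period;
# for this signal a_n ~ 0 and b_n ~ 4/(n*pi) for odd n.
T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0.0, T, 200_000, endpoint=False)
x = np.where(t < T / 2, 1.0, -1.0)

def a_n(n):
    return 2 * np.mean(x * np.cos(n * w0 * t))   # (2/T) * integral over one period

def b_n(n):
    return 2 * np.mean(x * np.sin(n * w0 * t))

for n in (1, 2, 3):
    print(n, round(a_n(n), 4), round(b_n(n), 4), round(4 / (n * np.pi), 4))
# b_1 ~ 1.2732, b_2 ~ 0, b_3 ~ 0.4244, and every a_n ~ 0
```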
The Fourier coefficient is given as

$$F_n = {\int_{t_0}^{t_0+T} f(t)\,(e^{jn\omega_0 t})^*\, dt \over \int_{t_0}^{t_0+T} e^{jn\omega_0 t}\,(e^{jn\omega_0 t})^*\, dt}$$

$$= {\int_{t_0}^{t_0+T} f(t)\, e^{-jn\omega_0 t}\, dt \over \int_{t_0}^{t_0+T} e^{-jn\omega_0 t}\, e^{jn\omega_0 t}\, dt}$$

$$= {\int_{t_0}^{t_0+T} f(t)\, e^{-jn\omega_0 t}\, dt \over \int_{t_0}^{t_0+T} 1\, dt} = {1 \over T} \int_{t_0}^{t_0+T} f(t)\, e^{-jn\omega_0 t}\, dt$$

$$\therefore F_n = {1 \over T} \int_{t_0}^{t_0+T} f(t)\, e^{-jn\omega_0 t}\, dt$$

Relation Between Trigonometric and Exponential Fourier Series

Consider a periodic signal x(t); the TFS and EFS representations are given below respectively:

$x(t) = a_0 + \sum_{n=1}^{\infty}(a_n \cos n\omega_0 t + b_n \sin n\omega_0 t) \quad ...(1)$

$x(t) = \sum_{n=-\infty}^{\infty} F_n e^{jn\omega_0 t}$

$= F_0 + F_1 e^{j\omega_0 t} + F_2 e^{j2\omega_0 t} + ... + F_n e^{jn\omega_0 t} + ... + F_{-1} e^{-j\omega_0 t} + F_{-2} e^{-j2\omega_0 t} + ... + F_{-n} e^{-jn\omega_0 t} + ...$

$= F_0 + F_1(\cos\omega_0 t + j\sin\omega_0 t) + F_2(\cos 2\omega_0 t + j\sin 2\omega_0 t) + ... + F_n(\cos n\omega_0 t + j\sin n\omega_0 t) + ... + F_{-1}(\cos\omega_0 t - j\sin\omega_0 t) + F_{-2}(\cos 2\omega_0 t - j\sin 2\omega_0 t) + ... + F_{-n}(\cos n\omega_0 t - j\sin n\omega_0 t) + ...$

$= F_0 + (F_1 + F_{-1})\cos\omega_0 t + (F_2 + F_{-2})\cos 2\omega_0 t + ... + j(F_1 - F_{-1})\sin\omega_0 t + j(F_2 - F_{-2})\sin 2\omega_0 t + ...$

$\therefore x(t) = F_0 + \sum_{n=1}^{\infty}\left((F_n + F_{-n})\cos n\omega_0 t + j(F_n - F_{-n})\sin n\omega_0 t\right) \quad ...(2)$

Compare equations 1 and 2:

$a_0 = F_0$

$a_n = F_n + F_{-n}$

$b_n = j(F_n - F_{-n})$

Similarly,

$F_n = \frac{1}{2}(a_n - jb_n)$

$F_{-n} = \frac{1}{2}(a_n + jb_n)$
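These relations can be confirmed numerically. Below is a hedged sketch, assuming NumPy and a hypothetical test signal $x(t) = 2\cos\omega_0 t + 3\sin 2\omega_0 t$, whose trigonometric coefficients are known to be $a_1 = 2$ and $b_2 = 3$; the EFS coefficients $F_n$ are computed by numerical integration and combined as derived above.

```python
import numpy as np

# Hedged sketch (assumptions: NumPy, test signal x(t) = 2 cos(w0 t) + 3 sin(2 w0 t),
# so a_1 = 2 and b_2 = 3): check a_n = F_n + F_{-n} and b_n = j(F_n - F_{-n}).
T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0.0, T, 100_000, endpoint=False)
x = 2 * np.cos(w0 * t) + 3 * np.sin(2 * w0 * t)

def F(n):
    # F_n = (1/T) * integral of x(t) e^{-j n w0 t} over one period
    return np.mean(x * np.exp(-1j * n * w0 * t))

a1 = F(1) + F(-1)                      # expected: 2
b2 = 1j * (F(2) - F(-2))               # expected: 3
print(np.round(a1, 6), np.round(b2, 6))
```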
Fourier Series

Jean Baptiste Joseph Fourier, a French mathematician and physicist, was born in Auxerre, France. He initiated the study of Fourier series, Fourier transforms and their applications to problems of heat transfer and vibrations. The Fourier series, the Fourier transform and Fourier's Law are named in his honour.

Jean Baptiste Joseph Fourier (21 March 1768 – 16 May 1830)

Fourier series

To represent any periodic signal x(t), Fourier developed an expression called the Fourier series. It expresses the signal in terms of an infinite sum of sines and cosines or exponentials. The Fourier series uses the orthogonality condition.

Fourier Series Representation of Continuous Time Periodic Signals

A signal is said to be periodic if it satisfies the condition x(t) = x(t + T) or x(n) = x(n + N),

where T = fundamental time period and ω0 = fundamental frequency = 2π/T.

There are two basic periodic signals:

$x(t) = \cos\omega_0 t$ (sinusoidal) and $x(t) = e^{j\omega_0 t}$ (complex exponential)

These two signals are periodic with period $T = 2\pi/\omega_0$.

A set of harmonically related complex exponentials can be represented as {$\phi_k(t)$}:

$$\{\phi_k(t)\} = \{e^{jk\omega_0 t}\} = \{e^{jk({2\pi \over T})t}\} \quad \text{where } k = 0, \pm 1, \pm 2, ..., \pm n \quad ...(1)$$

All these signals are periodic with period T.

According to the orthogonal signal space, the approximation of a function x(t) with n mutually orthogonal functions is given by

$$x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t} \quad ...(2)$$

where $a_k$ = Fourier coefficient = coefficient of approximation.

This signal x(t) is also periodic with period T. Equation 2 represents the Fourier series representation of the periodic signal x(t).

The term k = 0 is constant.

The term $k = \pm 1$ having fundamental frequency $\omega_0$ is called the 1st harmonic.

The term $k = \pm 2$ having fundamental frequency $2\omega_0$ is called the 2nd harmonic, and so on.

The term $k = \pm n$ having fundamental frequency $n\omega_0$ is called the nth harmonic.

Deriving the Fourier Coefficient

We know that

$x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t} \quad ...(1)$

Multiply both sides by $e^{-jn\omega_0 t}$. Then

$$x(t) e^{-jn\omega_0 t} = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t} \cdot e^{-jn\omega_0 t}$$

Integrate both sides:

$$\int_{0}^{T} x(t) e^{-jn\omega_0 t}\, dt = \int_{0}^{T} \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t} \cdot e^{-jn\omega_0 t}\, dt$$

$$= \int_{0}^{T} \sum_{k=-\infty}^{\infty} a_k e^{j(k-n)\omega_0 t}\, dt$$

$$\int_{0}^{T} x(t) e^{-jn\omega_0 t}\, dt = \sum_{k=-\infty}^{\infty} a_k \int_{0}^{T} e^{j(k-n)\omega_0 t}\, dt \quad ...(2)$$

By Euler's formula,

$$\int_{0}^{T} e^{j(k-n)\omega_0 t}\, dt = \int_{0}^{T} \cos(k-n)\omega_0 t\, dt + j\int_{0}^{T} \sin(k-n)\omega_0 t\, dt$$

$$\int_{0}^{T} e^{j(k-n)\omega_0 t}\, dt = \left\{ \begin{array}{l l} T & \quad k = n \\ 0 & \quad k \neq n \end{array} \right.$$

Hence in equation 2, the integral is zero for all values of k except at k = n. Put k = n in equation 2:

$$\Rightarrow \int_{0}^{T} x(t) e^{-jn\omega_0 t}\, dt = a_n T$$

$$\Rightarrow a_n = {1 \over T} \int_{0}^{T} x(t)\, e^{-jn\omega_0 t}\, dt$$

Replace n by k:

$$\Rightarrow a_k = {1 \over T} \int_{0}^{T} x(t)\, e^{-jk\omega_0 t}\, dt$$

$$\therefore x(t) = \sum_{k=-\infty}^{\infty} a_k e^{jk\omega_0 t}$$

$$\text{where } a_k = {1 \over T} \int_{0}^{T} x(t)\, e^{-jk\omega_0 t}\, dt$$
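The orthogonality result used in this derivation is easy to confirm numerically. The sketch below is a minimal illustration, assuming NumPy and an example period T = 2: the integral of $e^{j(k-n)\omega_0 t}$ over one period comes out as T when k = n and essentially zero otherwise.

```python
import numpy as np

# Minimal sketch (assumptions: NumPy, example period T = 2): the integral of
# e^{j(k-n) w0 t} over one period equals T for k = n and 0 for k != n.
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

def orth_integral(k, n):
    return np.sum(np.exp(1j * (k - n) * w0 * t)) * dt

print(np.round(orth_integral(3, 3), 6))   # (2+0j): equals T when k = n
print(np.round(orth_integral(3, 1), 6))   # ~ 0 when k != n
```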
Region of Convergence (ROC)

The range of variation of σ for which the Laplace transform converges is called the region of convergence.

Properties of ROC of Laplace Transform

The ROC consists of strips parallel to the jω axis in the s-plane.

If x(t) is absolutely integrable and of finite duration, then the ROC is the entire s-plane.

If x(t) is a right sided signal, then the ROC is Re{s} > σo.

If x(t) is a left sided signal, then the ROC is Re{s} < σo.

If x(t) is a two sided signal, then the ROC is the combination of the two regions.

The ROC can be explained using the examples given below:

Example 1: Find the Laplace transform and ROC of $x(t) = e^{-at}u(t)$

$L.T[x(t)] = L.T[e^{-at}u(t)] = {1 \over s+a}$

$Re\{s\} \gt -a$

$ROC: Re\{s\} \gt -a$

Example 2: Find the Laplace transform and ROC of $x(t) = e^{at}u(-t)$

$L.T[x(t)] = L.T[e^{at}u(-t)] = -{1 \over s-a}$

$Re\{s\} \lt a$

$ROC: Re\{s\} \lt a$

Example 3: Find the Laplace transform and ROC of $x(t) = e^{-at}u(t) + e^{at}u(-t)$

$L.T[x(t)] = L.T[e^{-at}u(t) + e^{at}u(-t)] = {1 \over s+a} - {1 \over s-a}$

For ${1 \over s+a}$, $Re\{s\} \gt -a$

For $-{1 \over s-a}$, $Re\{s\} \lt a$

Referring to the above diagram, the combined region lies from -a to a. Hence,

$ROC: -a < Re\{s\} < a$

Causality and Stability

For a system to be causal, the ROC of its transfer function must be a right half-plane, to the right of the rightmost pole.

A system is said to be stable when all poles of its transfer function lie in the left half of the s-plane.

A system is said to be unstable when at least one pole of its transfer function is in the right half of the s-plane.

A system is said to be marginally stable when at least one pole of its transfer function lies on the jω axis of the s-plane.

ROC of Basic Functions

f(t) : F(s) : ROC

$u(t)$ : ${1 \over s}$ : Re{s} > 0

$t\,u(t)$ : ${1 \over s^2}$ : Re{s} > 0

$t^n\,u(t)$ : ${n! \over s^{n+1}}$ : Re{s} > 0

$e^{at}\,u(t)$ : ${1 \over s-a}$ : Re{s} > a

$e^{-at}\,u(t)$ : ${1 \over s+a}$ : Re{s} > -a

$e^{at}\,u(-t)$ : $-{1 \over s-a}$ : Re{s} < a

$e^{-at}\,u(-t)$ : $-{1 \over s+a}$ : Re{s} < -a

$t\,e^{at}\,u(t)$ : ${1 \over (s-a)^2}$ : Re{s} > a

$t^n\,e^{at}\,u(t)$ : ${n! \over (s-a)^{n+1}}$ : Re{s} > a

$t\,e^{-at}\,u(t)$ : ${1 \over (s+a)^2}$ : Re{s} > -a

$t^n\,e^{-at}\,u(t)$ : ${n! \over (s+a)^{n+1}}$ : Re{s} > -a

$t\,e^{at}\,u(-t)$ : $-{1 \over (s-a)^2}$ : Re{s} < a

$t^n\,e^{at}\,u(-t)$ : $-{n! \over (s-a)^{n+1}}$ : Re{s} < a

$t\,e^{-at}\,u(-t)$ : $-{1 \over (s+a)^2}$ : Re{s} < -a

$t^n\,e^{-at}\,u(-t)$ : $-{n! \over (s+a)^{n+1}}$ : Re{s} < -a

$e^{-at}\cos bt\, u(t)$ : ${s+a \over (s+a)^2 + b^2}$ : Re{s} > -a

$e^{-at}\sin bt\, u(t)$ : ${b \over (s+a)^2 + b^2}$ : Re{s} > -a
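Example 1 above can be confirmed numerically at a real test point inside its ROC. This is a hedged sketch assuming NumPy, SciPy and example values a = 2, s = 1.5:

```python
import numpy as np
from scipy.integrate import quad

# Hedged sketch (assumptions: NumPy/SciPy, example values a = 2, s = 1.5):
# check L{e^{-a t} u(t)} = 1/(s+a) at a real point with Re{s} > -a.
a = 2.0
s = 1.5                                      # s + a > 0, so the integral converges

integrand = lambda t: np.exp(-a * t) * np.exp(-s * t)
numeric, _ = quad(integrand, 0.0, np.inf)    # integral of e^{-at} e^{-st} dt, t >= 0
print(round(numeric, 6), round(1 / (s + a), 6))   # both ~ 0.285714
```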
Laplace Transforms Properties

The properties of the Laplace transform are:

Linearity Property

If $x(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s)$ and $y(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} Y(s)$

Then the linearity property states that

$a\,x(t) + b\,y(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} a\,X(s) + b\,Y(s)$

Time Shifting Property

If $x(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s)$

Then the time shifting property states that

$x(t - t_0) \stackrel{\mathrm{L.T}}{\longleftrightarrow} e^{-st_0} X(s)$

Frequency Shifting Property

If $x(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s)$

Then the frequency shifting property states that

$e^{s_0 t} \cdot x(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s - s_0)$

Time Reversal Property

If $x(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s)$

Then the time reversal property states that

$x(-t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(-s)$

Time Scaling Property

If $x(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s)$

Then the time scaling property states that

$x(at) \stackrel{\mathrm{L.T}}{\longleftrightarrow} {1 \over |a|} X\left({s \over a}\right)$

Differentiation and Integration Properties

If $x(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s)$

Then the differentiation property states that

${dx(t) \over dt} \stackrel{\mathrm{L.T}}{\longleftrightarrow} s\,X(s) - x(0)$

${d^n x(t) \over dt^n} \stackrel{\mathrm{L.T}}{\longleftrightarrow} (s)^n X(s)$

The integration property states that

$\int x(t)\, dt \stackrel{\mathrm{L.T}}{\longleftrightarrow} {1 \over s} X(s)$

$\iiint ... \int x(t)\, dt \stackrel{\mathrm{L.T}}{\longleftrightarrow} {1 \over s^n} X(s)$

Multiplication and Convolution Properties

If $x(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s)$ and $y(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} Y(s)$

Then the multiplication property states that

$x(t) \cdot y(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} {1 \over 2\pi j} X(s) * Y(s)$

The convolution property states that

$x(t) * y(t) \stackrel{\mathrm{L.T}}{\longleftrightarrow} X(s) \cdot Y(s)$
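As one numerical sanity check of these properties, the sketch below (a minimal illustration, assuming NumPy, SciPy, the test signal $x(t) = e^{-t}u(t)$ with $X(s) = 1/(s+1)$, and example values a = 2, s = 3) verifies the time scaling property $L\{x(at)\} = {1 \over a} X(s/a)$ by direct integration:

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch (assumptions: NumPy/SciPy, x(t) = e^{-t} u(t) with
# X(s) = 1/(s+1), example values a = 2 and s = 3): check the time scaling
# property L{x(at)}(s) = (1/a) X(s/a) for a > 0.
a = 2.0
s = 3.0
X = lambda s: 1.0 / (s + 1.0)                # known transform of e^{-t} u(t)

scaled = lambda t: np.exp(-a * t) * np.exp(-s * t)   # x(at) e^{-st}, t >= 0
lhs, _ = quad(scaled, 0.0, np.inf)           # L{x(at)} evaluated at s
rhs = (1.0 / a) * X(s / a)                   # (1/a) X(s/a)
print(round(lhs, 6), round(rhs, 6))          # both ~ 0.2
```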
Fourier Transforms Properties

Here are the properties of the Fourier transform:

Linearity Property

If $x(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega)$ and $y(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} Y(\omega)$

Then the linearity property states that

$a\,x(t) + b\,y(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} a\,X(\omega) + b\,Y(\omega)$

Time Shifting Property

If $x(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega)$

Then the time shifting property states that

$x(t - t_0) \stackrel{\mathrm{F.T}}{\longleftrightarrow} e^{-j\omega t_0} X(\omega)$

Frequency Shifting Property

If $x(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega)$

Then the frequency shifting property states that

$e^{j\omega_0 t} \cdot x(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega - \omega_0)$

Time Reversal Property

If $x(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega)$

Then the time reversal property states that

$x(-t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(-\omega)$

Time Scaling Property

If $x(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega)$

Then the time scaling property states that

$x(at) \stackrel{\mathrm{F.T}}{\longleftrightarrow} {1 \over |a|} X\left({\omega \over a}\right)$

Differentiation and Integration Properties

If $x(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega)$

Then the differentiation property states that

${dx(t) \over dt} \stackrel{\mathrm{F.T}}{\longleftrightarrow} j\omega \cdot X(\omega)$

${d^n x(t) \over dt^n} \stackrel{\mathrm{F.T}}{\longleftrightarrow} (j\omega)^n \cdot X(\omega)$

and the integration property states that

$\int x(t)\, dt \stackrel{\mathrm{F.T}}{\longleftrightarrow} {1 \over j\omega} X(\omega)$

$\iiint ... \int x(t)\, dt \stackrel{\mathrm{F.T}}{\longleftrightarrow} {1 \over (j\omega)^n} X(\omega)$

Multiplication and Convolution Properties

If $x(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega)$ and $y(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} Y(\omega)$

Then the multiplication property states that

$x(t) \cdot y(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} {1 \over 2\pi} X(\omega) * Y(\omega)$

and the convolution property states that

$x(t) * y(t) \stackrel{\mathrm{F.T}}{\longleftrightarrow} X(\omega) \cdot Y(\omega)$
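The convolution property has an exact discrete counterpart that is easy to verify. The sketch below is a minimal illustration, assuming NumPy and two short example sequences: the DFT of their zero padded linear convolution equals the product of their DFTs.

```python
import numpy as np

# Minimal sketch (assumptions: NumPy, two short example sequences): the DFT of
# a zero padded linear convolution equals the product of the individual DFTs,
# the discrete analogue of the convolution property.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0])
N = len(x) + len(y) - 1                   # length needed for linear convolution

lhs = np.fft.fft(np.convolve(x, y), N)    # transform of the convolution
rhs = np.fft.fft(x, N) * np.fft.fft(y, N) # product of the transforms
print(np.allclose(lhs, rhs))              # True
```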
Hilbert Transform

The Hilbert transform of a signal x(t) is defined as the transform in which the phase angle of all components of the signal is shifted by $\pm 90^o$.

The Hilbert transform of x(t) is represented by $\hat{x}(t)$, and it is given by

$$\hat{x}(t) = {1 \over \pi} \int_{-\infty}^{\infty} {x(k) \over t-k}\, dk$$

The inverse Hilbert transform is given by

$$x(t) = -{1 \over \pi} \int_{-\infty}^{\infty} {\hat{x}(k) \over t-k}\, dk$$

x(t) and $\hat{x}(t)$ are called a Hilbert transform pair.

Properties of the Hilbert Transform

A signal x(t) and its Hilbert transform $\hat{x}(t)$ have

the same amplitude spectrum;

the same autocorrelation function;

the same energy spectral density.

Further, x(t) and $\hat{x}(t)$ are orthogonal, and the Hilbert transform of $\hat{x}(t)$ is -x(t).

If the Fourier transform exists, then the Hilbert transform also exists for energy and power signals.
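A discrete approximation of the Hilbert transform is available through scipy.signal.hilbert, which returns the analytic signal $x(t) + j\hat{x}(t)$. The sketch below (assuming NumPy, SciPy and a hypothetical 5 Hz tone sampled at 1 kHz) checks two of the properties listed above: the transform of $\sin\omega t$ is $-\cos\omega t$, a -90 degree phase shift, and x(t) and $\hat{x}(t)$ are orthogonal.

```python
import numpy as np
from scipy.signal import hilbert

# Minimal sketch (assumptions: NumPy/SciPy, a 5 Hz tone sampled at 1 kHz over
# an integer number of cycles): scipy.signal.hilbert returns the analytic
# signal x + j*x_hat, so its imaginary part approximates the Hilbert transform.
fs, f = 1000, 5
t = np.arange(0, 1, 1 / fs)             # one second of samples
x = np.sin(2 * np.pi * f * t)

x_hat = np.imag(hilbert(x))             # discrete Hilbert transform of sin(wt)
print(np.allclose(x_hat, -np.cos(2 * np.pi * f * t), atol=1e-10))  # True: -cos(wt)
print(round(np.dot(x, x_hat), 6))       # ~ 0: x and x_hat are orthogonal
```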