
Signals and Systems – Useful Resources

The following resources contain additional information on Signals and Systems. Please use them to gain more in-depth knowledge of the subject.

Useful Links on Signals and Systems

- Wikipedia – reference article for Signals and Systems.


Systems Classification

Systems are classified into the following categories:

- Linear and Non-linear Systems
- Time Variant and Time Invariant Systems
- Linear Time Variant and Linear Time Invariant Systems
- Static and Dynamic Systems
- Causal and Non-causal Systems
- Invertible and Non-invertible Systems
- Stable and Unstable Systems

Linear and Non-linear Systems

A system is said to be linear when it satisfies the superposition and homogeneity principles. Consider two systems with inputs x1(t), x2(t) and outputs y1(t), y2(t) respectively. Then, according to the superposition and homogeneity principles,

$$T[a_1 x_1(t) + a_2 x_2(t)] = a_1 T[x_1(t)] + a_2 T[x_2(t)]$$

$$\therefore \quad T[a_1 x_1(t) + a_2 x_2(t)] = a_1 y_1(t) + a_2 y_2(t)$$

From the above expression, it is clear that the response of the overall system is equal to the weighted sum of the responses to the individual inputs.

Example: y(t) = x²(t)

Solution:

y1(t) = T[x1(t)] = x1²(t)

y2(t) = T[x2(t)] = x2²(t)

T[a1 x1(t) + a2 x2(t)] = [a1 x1(t) + a2 x2(t)]²

which is not equal to a1 y1(t) + a2 y2(t). Hence the system is non-linear.

Time Variant and Time Invariant Systems

A system is said to be time variant if its input and output characteristics vary with time. Otherwise, the system is time invariant.

The condition for a time invariant system is: y(n, t) = y(n−t)

The condition for a time variant system is: y(n, t) ≠ y(n−t)

where y(n, t) = T[x(n−t)] is the response to the shifted input and y(n−t) is the shifted output.

Example: y(n) = x(−n)

y(n, t) = T[x(n−t)] = x(−n−t)

y(n−t) = x(−(n−t)) = x(−n+t)

$\therefore$ y(n, t) ≠ y(n−t). Hence, the system is time variant.

Linear Time Variant (LTV) and Linear Time Invariant (LTI) Systems

If a system is both linear and time variant, it is called a linear time variant (LTV) system. If a system is both linear and time invariant, it is called a linear time invariant (LTI) system.

Static and Dynamic Systems

A static system is memory-less, whereas a dynamic system has memory.
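The superposition test above can be checked numerically. This is a sketch with illustrative system names: `T_linear` (y = 2x) passes the test, while `T_square` (y = x²), the example from the text, fails it.

```python
import numpy as np

def T_linear(x):
    return 2 * x      # a linear system

def T_square(x):
    return x ** 2     # the non-linear example from the text

rng = np.random.default_rng(0)
x1 = rng.standard_normal(100)
x2 = rng.standard_normal(100)
a1, a2 = 3.0, -1.5

def is_linear(T):
    lhs = T(a1 * x1 + a2 * x2)        # response to the combined input
    rhs = a1 * T(x1) + a2 * T(x2)     # weighted sum of individual responses
    return np.allclose(lhs, rhs)

print(is_linear(T_linear))   # True
print(is_linear(T_square))   # False
```

A single random pair of inputs cannot prove linearity in general, but a failure (as with `T_square`) is conclusive proof of non-linearity.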
Example 1: y(t) = 2x(t)

For the present value t = 0, the system output is y(0) = 2x(0). Here, the output depends only on the present input. Hence the system is memory-less, i.e. static.

Example 2: y(t) = 2x(t) + 3x(t−3)

For the present value t = 0, the system output is y(0) = 2x(0) + 3x(−3). Here x(−3) is a past value of the input, so the system requires memory to produce this output. Hence, the system is dynamic.

Causal and Non-causal Systems

A system is said to be causal if its output depends only on present and past inputs, and not on future inputs. For a non-causal system, the output depends on future inputs as well.

Example 1: y(t) = 2x(t) + 3x(t−3)

For the present value t = 1, the system output is y(1) = 2x(1) + 3x(−2). Here, the output depends only on present and past inputs. Hence, the system is causal.

Example 2: y(t) = 2x(t) + 3x(t−3) + 6x(t+3)

For the present value t = 1, the system output is y(1) = 2x(1) + 3x(−2) + 6x(4). Here, the output depends on a future input. Hence the system is non-causal.

Invertible and Non-invertible Systems

A system is said to be invertible if the input of the system can be recovered at the output. Consider a system H1(S) in cascade with H2(S) = 1/H1(S):

$$Y(S) = X(S)\, H_1(S)\, H_2(S) = X(S)\, H_1(S) \cdot {1 \over H_1(S)} \qquad \text{since } H_2(S) = {1 \over H_1(S)}$$

$$\therefore \quad Y(S) = X(S) \;\to\; y(t) = x(t)$$

Hence, the system is invertible. If y(t) ≠ x(t), then the system is said to be non-invertible.

Stable and Unstable Systems

A system is said to be stable only when the output is bounded for every bounded input. If, for a bounded input, the output is unbounded, the system is said to be unstable.

Note: for a bounded signal, the amplitude is finite.

Example 1: y(t) = x²(t)

Let the input be u(t) (the unit step, a bounded input). Then the output y(t) = u²(t) = u(t) is a bounded output. Hence, the system is stable.
Example 2: y(t) = $\int x(t)\, dt$

Let the input be u(t) (the unit step, a bounded input). Then the output y(t) = $\int u(t)\, dt$ is a ramp signal, which is unbounded: the amplitude of a ramp is not finite, and it goes to infinity as t $\to$ infinity. Hence, the system is unstable.
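The two stability examples can be sketched with discrete approximations: squaring a unit step stays bounded, while the running integral (approximated here with a cumulative sum) grows without bound as the observation window lengthens.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 100, dt)
u = np.ones_like(t)                # unit step for t >= 0 (bounded input)

y_stable = u ** 2                  # y(t) = x^2(t): stays bounded at 1
y_unstable = np.cumsum(u) * dt     # y(t) = integral of x(t): a ramp

print(y_stable.max())              # 1.0 regardless of window length
print(y_unstable.max())            # grows with the window (~100 here)
```

Extending `t` to a longer window leaves `y_stable.max()` unchanged but makes `y_unstable.max()` arbitrarily large, which is exactly the BIBO-instability of the integrator.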


Fourier Series Properties

These are the properties of the Fourier series. In what follows, $x(t) \longleftrightarrow f_{xn}$ denotes that $f_{xn}$ are the Fourier series coefficients of the periodic signal $x(t)$ with fundamental frequency $\omega_0$.

Linearity Property

If $x(t) \longleftrightarrow f_{xn}$ and $y(t) \longleftrightarrow f_{yn}$, then the linearity property states that

$$a\, x(t) + b\, y(t) \longleftrightarrow a\, f_{xn} + b\, f_{yn}$$

Time Shifting Property

If $x(t) \longleftrightarrow f_{xn}$, then the time shifting property states that

$$x(t - t_0) \longleftrightarrow e^{-jn\omega_0 t_0}\, f_{xn}$$

Frequency Shifting Property

If $x(t) \longleftrightarrow f_{xn}$, then the frequency shifting property states that

$$e^{jn_0\omega_0 t}\, x(t) \longleftrightarrow f_{x(n-n_0)}$$

Time Reversal Property

If $x(t) \longleftrightarrow f_{xn}$, then the time reversal property states that

$$x(-t) \longleftrightarrow f_{x(-n)}$$

Time Scaling Property

If $x(t) \longleftrightarrow f_{xn}$, then the time scaling property states that

$$x(at) \longleftrightarrow f_{xn}$$

The coefficients are unchanged, but time scaling changes the fundamental frequency from $\omega_0$ to $a\omega_0$.

Differentiation and Integration Properties

If $x(t) \longleftrightarrow f_{xn}$, then the differentiation property states that

$${dx(t) \over dt} \longleftrightarrow jn\omega_0\, f_{xn}$$

and the integration property states that

$$\int x(t)\, dt \longleftrightarrow {f_{xn} \over jn\omega_0} \quad (n \neq 0)$$

Multiplication and Convolution Properties

If $x(t) \longleftrightarrow f_{xn}$ and $y(t) \longleftrightarrow f_{yn}$, then the multiplication property states that multiplication in time corresponds to discrete convolution of the coefficient sequences:

$$x(t) \cdot y(t) \longleftrightarrow f_{xn} * f_{yn}$$

and the convolution property states that periodic convolution over one period $T$ corresponds to multiplication of the coefficients:

$$x(t) * y(t) \longleftrightarrow T\, f_{xn} \cdot f_{yn}$$

Conjugate and Conjugate Symmetry Properties

If $x(t) \longleftrightarrow f_{xn}$, then the conjugate property states that

$$x^*(t) \longleftrightarrow f^*_{x(-n)}$$

Conjugate symmetry for a real-valued time signal states that

$$f^*_{xn} = f_{x(-n)}$$

and conjugate symmetry for a purely imaginary time signal states that

$$f^*_{xn} = -f_{x(-n)}$$
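The time shifting property can be verified numerically. This sketch uses the DFT of one period as a stand-in for the Fourier series coefficients (for N samples per period, a shift of m samples corresponds to $t_0 = mT/N$, and the predicted factor $e^{-jn\omega_0 t_0}$ becomes $e^{-2\pi j n m / N}$):

```python
import numpy as np

N = 64
n = np.arange(N)
# one period of a periodic signal with two harmonics
x = np.cos(2 * np.pi * 3 * n / N) + 0.5 * np.sin(2 * np.pi * 5 * n / N)

m = 7                               # delay by m samples
x_shift = np.roll(x, m)             # x_shift[n] = x[n - m]

F = np.fft.fft(x) / N               # coefficients of x
F_shift = np.fft.fft(x_shift) / N   # coefficients of the shifted signal

k = np.arange(N)
predicted = F * np.exp(-2j * np.pi * k * m / N)   # time-shift property
print(np.allclose(F_shift, predicted))            # True
```

The same setup can be reused to check the linearity and conjugation properties by forming combinations of `x` before taking the FFT.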


Convolution and Correlation

Convolution

Convolution is a mathematical operation used to express the relation between the input and output of an LTI system. It relates the input, output, and impulse response of an LTI system as

$$y(t) = x(t) * h(t)$$

where y(t) = output of the LTI system, x(t) = input of the LTI system, and h(t) = impulse response of the LTI system.

There are two types of convolution:

- Continuous convolution
- Discrete convolution

Continuous Convolution

$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t-\tau)\, d\tau = \int_{-\infty}^{\infty} x(t-\tau)\, h(\tau)\, d\tau$$

Discrete Convolution

$$y(n) = x(n) * h(n) = \sum_{k=-\infty}^{\infty} x(k)\, h(n-k) = \sum_{k=-\infty}^{\infty} x(n-k)\, h(k)$$

By using convolution we can find the zero-state response of the system.

Deconvolution

Deconvolution is the reverse process of convolution; it is widely used in signal and image processing.

Properties of Convolution

Commutative Property

$$x_1(t) * x_2(t) = x_2(t) * x_1(t)$$

Distributive Property

$$x_1(t) * [x_2(t) + x_3(t)] = [x_1(t) * x_2(t)] + [x_1(t) * x_3(t)]$$

Associative Property

$$x_1(t) * [x_2(t) * x_3(t)] = [x_1(t) * x_2(t)] * x_3(t)$$

Shifting Property

$$x_1(t) * x_2(t) = y(t)$$
$$x_1(t) * x_2(t-t_0) = y(t-t_0)$$
$$x_1(t-t_0) * x_2(t) = y(t-t_0)$$
$$x_1(t-t_0) * x_2(t-t_1) = y(t-t_0-t_1)$$

Convolution with an Impulse

$$x(t) * \delta(t) = x(t)$$
$$x(t) * \delta(t-t_0) = x(t-t_0)$$

Convolution of Unit Steps

$$u(t) * u(t) = r(t)$$
$$u(t-T_1) * u(t-T_2) = r(t-T_1-T_2)$$
$$u(n) * u(n) = (n+1)\, u(n)$$

Scaling Property

If $x(t) * h(t) = y(t)$, then $x(at) * h(at) = {1 \over |a|}\, y(at)$.

Differentiation of Output

If $y(t) = x(t) * h(t)$, then

$${dy(t) \over dt} = {dx(t) \over dt} * h(t) = x(t) * {dh(t) \over dt}$$

Note:

- Convolution of two causal sequences is causal.
- Convolution of two anti-causal sequences is anti-causal.
- Convolution of two rectangles of unequal length results in a trapezium.
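Two of these properties are easy to check with NumPy's discrete convolution: commutativity, and the unit-step identity $u(n) * u(n) = (n+1)\, u(n)$.

```python
import numpy as np

# Commutative property: x * h == h * x
x = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, -1.0, 2.0])
print(np.allclose(np.convolve(x, h), np.convolve(h, x)))   # True

# u(n) * u(n) = (n+1) u(n): the first N samples form a discrete ramp
N = 6
u = np.ones(N)                  # samples of the unit step, u(0..N-1)
ramp = np.convolve(u, u)[:N]    # keep the first N samples of u * u
print(ramp)                     # [1. 2. 3. 4. 5. 6.]
```

Only the first N output samples are valid for the step identity, because a length-N vector can only represent a truncated unit step; the tail of `np.convolve(u, u)` reflects that truncation.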
- Convolution of two rectangles of equal length results in a triangle.
- Convolving a function with the unit step integrates it.

Example: you know that $u(t) * u(t) = r(t)$. According to the note above, $u(t) * u(t) = \int u(t)\, dt = \int 1\, dt = t = r(t)$. Here, you get the result just by integrating $u(t)$.

Limits of the Convolved Signal

If two signals are convolved, the resulting convolved signal has the following range:

sum of lower limits < t < sum of upper limits

Example: find the range of the convolution of the signals given below. Here, we have two rectangles of unequal length to convolve, which results in a trapezium. The range of the convolved signal is:

$$-1 + (-2) < t < 2 + 2$$

$$-3 < t < 4$$

Hence the result is a trapezium of duration 7.

Area of the Convolved Signal

The area under the convolved signal is given by $A_y = A_x A_h$, where Ax = area under the input signal, Ah = area under the impulse response, and Ay = area under the output signal.

Proof:

$$y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t-\tau)\, d\tau$$

Integrating both sides over t,

$$\int y(t)\, dt = \int \int_{-\infty}^{\infty} x(\tau)\, h(t-\tau)\, d\tau\, dt = \int x(\tau)\, d\tau \int_{-\infty}^{\infty} h(t-\tau)\, dt$$

We know that the area of any signal is the integral of that signal itself.

$$\therefore A_y = A_x\, A_h$$

DC Component

The DC component of any signal is given by

$$\text{DC component} = {\text{area of the signal} \over \text{duration of the signal}}$$

Example: what is the DC component of the resultant convolved signal given below?

Area of x1(t) = length × breadth = 1 × 3 = 3

Area of x2(t) = length × breadth = 1 × 4 = 4

Area of the convolved signal = area of x1(t) × area of x2(t) = 3 × 4 = 12

Duration of the convolved signal: sum of lower limits < t < sum of upper limits, i.e. −3 < t < 4, so the duration is 7.

$$\therefore \text{DC component of the convolved signal} = {12 \over 7}$$

Discrete Convolution

Let us see how to calculate discrete convolution.
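The area property has an exact discrete analog: for $y = x * h$, the sum of the output samples equals the product of the sums of the input samples. A quick numerical check:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])     # "area" = sum = 6
h = np.array([-1.0, 2.0, 2.0])    # "area" = sum = 3
y = np.convolve(x, h)

print(y.sum())              # 18.0
print(x.sum() * h.sum())    # 18.0
```

This follows from the same argument as the continuous proof, with the integrals replaced by sums; it is also a handy sanity check after computing a convolution by hand.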
i. To calculate discrete linear convolution:

Convolve the two sequences x[n] = {a, b, c} and h[n] = {e, f, g}:

Convolved output = [ea, eb+fa, ec+fb+ga, fc+gb, gc]

Note: if two sequences have m and n samples respectively, the resulting convolved sequence has m+n−1 samples.

Example: convolve the two sequences x[n] = {1, 2, 3} and h[n] = {−1, 2, 2}.

Convolved output y[n] = [−1, −2+2, −3+4+2, 6+4, 6] = [−1, 0, 3, 10, 6]

Here x[n] contains 3 samples and h[n] also has 3 samples, so the resulting sequence has 3+3−1 = 5 samples.

ii. To calculate periodic or circular convolution:

Periodic (circular) convolution is the operation that corresponds to multiplication of discrete Fourier transforms, and it underlies fast FFT-based convolution. If two sequences of lengths m and n respectively are convolved using circular convolution, the resulting sequence has max[m, n] samples.

Example: convolve the two sequences x[n] = {1, 2, 3} and h[n] = {−1, 2, 2} using circular convolution.

The normal (linear) convolution output is y[n] = [−1, −2+2, −3+4+2, 6+4, 6] = [−1, 0, 3, 10, 6].

Here x[n] contains 3 samples and h[n] also has 3 samples, so the sequence obtained by circular convolution must have max[3, 3] = 3 samples. To get the periodic convolution result, keep the first 3 samples of the normal convolution (as the period is 3) and add the remaining samples back to the beginning (wrap-around):

Circular output = [−1+10, 0+6, 3] = [9, 6, 3]
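The example above can be reproduced in NumPy: linear convolution with `np.convolve`, and circular convolution either through the FFT (multiplying DFTs performs circular convolution) or by wrapping the tail of the linear result.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
h = np.array([-1.0, 2.0, 2.0])

# Linear convolution: 3 + 3 - 1 = 5 samples
y_lin = np.convolve(x, h)
print(y_lin)                      # [-1.  0.  3. 10.  6.]

# Circular convolution via the DFT: max(3, 3) = 3 samples
y_circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
print(np.round(y_circ, 6))        # [9. 6. 3.]

# Same result by wrapping (aliasing) the tail of the linear convolution
wrapped = y_lin[:3].copy()
wrapped[:2] += y_lin[3:]
print(wrapped)                    # [9. 6. 3.]
```

The agreement between the FFT route and the wrap-around route is the usual way to sanity-check a hand-computed circular convolution.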


Signals Analysis

Analogy Between Vectors and Signals

There is a perfect analogy between vectors and signals.

Vector

A vector has magnitude and direction. The name of a vector is denoted by boldface type and its magnitude by lightface type. Example: V is a vector with magnitude V. Consider two vectors V1 and V2 as shown in the following diagram. Let the component of V1 along V2 be given by C12V2. The component of the vector V1 along V2 can be obtained by dropping a perpendicular from the end of V1 onto the vector V2, as shown in the diagram.

The vector V1 can be expressed in terms of the vector V2 as

V1 = C12V2 + Ve

where Ve is the error vector. But this is not the only way of expressing V1 in terms of V2; the alternate possibilities are

V1 = C1V2 + Ve1

V1 = C2V2 + Ve2

Of these, the perpendicular projection gives the smallest error vector. If C12 = 0, the two vectors are said to be orthogonal.

Dot Product of Two Vectors

$$V_1 \cdot V_2 = |V_1|\,|V_2| \cos\theta$$

where θ is the angle between V1 and V2, and $V_1 \cdot V_2 = V_2 \cdot V_1$.

The component of V1 along V2 = $|V_1| \cos\theta = {V_1 \cdot V_2 \over |V_2|}$

From the diagram, the component of V1 along V2 is C12|V2|:

$${V_1 \cdot V_2 \over |V_2|} = C_{12}\, |V_2|$$

$$\Rightarrow C_{12} = {V_1 \cdot V_2 \over |V_2|^2}$$

Signal

The concept of orthogonality can be applied to signals. Let us consider two signals f1(t) and f2(t). Similar to vectors, you can approximate f1(t) in terms of f2(t) as

f1(t) = C12 f2(t) + fe(t) for (t1 < t < t2)

$\Rightarrow$ fe(t) = f1(t) − C12 f2(t)

One possible way of minimizing the error is to integrate it over the interval t1 to t2:

$${1 \over t_2 - t_1} \int_{t_1}^{t_2} f_e(t)\, dt$$

$${1 \over t_2 - t_1} \int_{t_1}^{t_2} [f_1(t) - C_{12} f_2(t)]\, dt$$

However, this does not work, because positive and negative errors cancel each other. This can be corrected by taking the square of the error function, i.e. the mean square error

$$\varepsilon = {1 \over t_2 - t_1} \int_{t_1}^{t_2} [f_e(t)]^2\, dt = {1 \over t_2 - t_1} \int_{t_1}^{t_2} [f_1(t) - C_{12} f_2(t)]^2\, dt$$

where ε is the mean square value of the error signal.
To find the value of C12 that minimizes the error, set ${d\varepsilon \over dC_{12}} = 0$:

$$\Rightarrow {d \over dC_{12}} \left[ {1 \over t_2 - t_1} \int_{t_1}^{t_2} [f_1(t) - C_{12} f_2(t)]^2\, dt \right] = 0$$

$$\Rightarrow {1 \over t_2 - t_1} \int_{t_1}^{t_2} \left[ {d \over dC_{12}} f_1^2(t) - {d \over dC_{12}} 2 f_1(t)\, C_{12} f_2(t) + {d \over dC_{12}} C_{12}^2 f_2^2(t) \right] dt = 0$$

The derivatives of the terms that do not contain C12 are zero:

$$\Rightarrow -\int_{t_1}^{t_2} 2 f_1(t) f_2(t)\, dt + 2 C_{12} \int_{t_1}^{t_2} f_2^2(t)\, dt = 0$$

$$\Rightarrow C_{12} = {\int_{t_1}^{t_2} f_1(t) f_2(t)\, dt \over \int_{t_1}^{t_2} f_2^2(t)\, dt}$$

If this component is zero, the two signals are said to be orthogonal. Putting C12 = 0 gives the condition for orthogonality:

$$0 = {\int_{t_1}^{t_2} f_1(t) f_2(t)\, dt \over \int_{t_1}^{t_2} f_2^2(t)\, dt}$$

$$\int_{t_1}^{t_2} f_1(t) f_2(t)\, dt = 0$$

Orthogonal Vector Space

A complete set of orthogonal vectors is referred to as an orthogonal vector space. Consider a three-dimensional vector space as shown below. Consider a vector A at a point (X1, Y1, Z1), and three unit vectors (VX, VY, VZ) in the directions of the X, Y, Z axes respectively. Since these unit vectors are mutually orthogonal, they satisfy

$$V_X \cdot V_X = V_Y \cdot V_Y = V_Z \cdot V_Z = 1$$

$$V_X \cdot V_Y = V_Y \cdot V_Z = V_Z \cdot V_X = 0$$

You can write the above conditions as

$$V_a \cdot V_b = \left\{ \begin{array}{l l} 1 & \quad a = b \\ 0 & \quad a \neq b \end{array} \right.$$

The vector A can be represented in terms of its components and the unit vectors as

$$A = X_1 V_X + Y_1 V_Y + Z_1 V_Z \quad \ldots (1)$$

Any vector in this three-dimensional space can be represented in terms of these three unit vectors only.
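The closed-form coefficient $C_{12}$ can be computed numerically. A classic instance (a sketch; signal choices are illustrative): projecting a square wave onto $\sin t$ over one period gives $C_{12} = 4/\pi \approx 1.273$, which is exactly the first Fourier sine coefficient of the square wave.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200001)
f1 = np.where(np.sin(t) >= 0, 1.0, -1.0)   # square wave, amplitude 1
f2 = np.sin(t)                             # signal to project onto

# C12 = integral(f1*f2) / integral(f2^2); the uniform dt cancels,
# so plain dot products approximate the ratio of integrals.
C12 = np.dot(f1, f2) / np.dot(f2, f2)
print(C12)        # ~1.2732 = 4/pi
```

Replacing `f2` with `np.cos(t)` drives `C12` to approximately zero, illustrating that the square wave and $\cos t$ are orthogonal over this interval.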
If you consider an n-dimensional space, then any vector A in that space can be represented as

$$A = X_1 V_X + Y_1 V_Y + Z_1 V_Z + \ldots + G_1 V_G + \ldots + N_1 V_N \quad \ldots (2)$$

As the magnitude of the unit vectors is unity, for any vector A:

The component of A along the X axis = A·VX

The component of A along the Y axis = A·VY

The component of A along the Z axis = A·VZ

Similarly, for the n-dimensional space, the component of A along some G axis

$$= A \cdot V_G \quad \ldots (3)$$

Substitute equation 2 in equation 3:

$$A \cdot V_G = (X_1 V_X + Y_1 V_Y + Z_1 V_Z + \ldots + G_1 V_G + \ldots + N_1 V_N) \cdot V_G$$

$$= X_1 V_X \cdot V_G + Y_1 V_Y \cdot V_G + Z_1 V_Z \cdot V_G + \ldots + G_1 V_G \cdot V_G + \ldots + N_1 V_N \cdot V_G$$

$$= G_1 \quad \text{since } V_G \cdot V_G = 1$$

If $V_G \cdot V_G \neq 1$, say $V_G \cdot V_G = k$, then

$$A \cdot V_G = G_1\, V_G \cdot V_G = G_1 k$$

$$G_1 = {A \cdot V_G \over k}$$

Orthogonal Signal Space

Let us consider a set of n mutually orthogonal functions x1(t), x2(t), …, xn(t) over the interval t1 to t2. As these functions are orthogonal to each other, any two signals xj(t), xk(t) have to satisfy the orthogonality condition:

$$\int_{t_1}^{t_2} x_j(t) x_k(t)\, dt = 0 \quad \text{where } j \neq k$$

$$\text{Let } \int_{t_1}^{t_2} x_k^2(t)\, dt = k_k$$

A function f(t) can be approximated in this orthogonal signal space by adding the components along the mutually orthogonal signals:

$$f(t) = C_1 x_1(t) + C_2 x_2(t) + \ldots + C_n x_n(t) + f_e(t) = \sum_{r=1}^{n} C_r x_r(t) + f_e(t)$$

$$f_e(t) = f(t) - \sum_{r=1}^{n} C_r x_r(t)$$

The mean square error is

$$\varepsilon = {1 \over t_2 - t_1} \int_{t_1}^{t_2} [f_e(t)]^2\, dt = {1 \over t_2 - t_1} \int_{t_1}^{t_2} \left[ f(t) - \sum_{r=1}^{n} C_r x_r(t) \right]^2 dt$$

The components that minimize the mean square error can be found from

$${d\varepsilon \over dC_1} = {d\varepsilon \over dC_2} = \ldots = {d\varepsilon \over dC_k} = 0$$

Let us consider ${d\varepsilon \over dC_k} = 0$:

$${d \over dC_k} \left[ {1 \over t_2 - t_1} \int_{t_1}^{t_2} \left[ f(t) - \sum_{r=1}^{n} C_r x_r(t) \right]^2 dt \right] = 0$$


Signals Classification

Signals are classified into the following categories:

- Continuous Time and Discrete Time Signals
- Deterministic and Non-deterministic Signals
- Even and Odd Signals
- Periodic and Aperiodic Signals
- Energy and Power Signals
- Real and Imaginary Signals

Continuous Time and Discrete Time Signals

A signal is said to be continuous time when it is defined for all instants of time. A signal is said to be discrete time when it is defined only at discrete instants of time.

Deterministic and Non-deterministic Signals

A signal is said to be deterministic if there is no uncertainty about its value at any instant of time; in other words, signals that can be defined exactly by a mathematical formula are known as deterministic signals. A signal is said to be non-deterministic if there is uncertainty about its value at some instant of time. Non-deterministic signals are random in nature and hence are called random signals. Random signals cannot be described by a mathematical equation; they are modelled in probabilistic terms.

Even and Odd Signals

A signal is said to be even when it satisfies the condition x(t) = x(−t).

Example 1: t², t⁴, …, cos t, etc. Let x(t) = t². Then x(−t) = (−t)² = t² = x(t). $\therefore$ t² is an even function.

Example 2: As shown in the following diagram, the rectangle function satisfies x(t) = x(−t), so it is also an even function.

A signal is said to be odd when it satisfies the condition x(t) = −x(−t).

Example: t, t³, …, and sin t. Let x(t) = sin t. Then x(−t) = sin(−t) = −sin t = −x(t). $\therefore$ sin t is an odd function.

Any function ƒ(t) can be expressed as the sum of its even part ƒe(t) and its odd part ƒo(t):

ƒ(t) = ƒe(t) + ƒo(t)

where ƒe(t) = ½[ƒ(t) + ƒ(−t)] and ƒo(t) = ½[ƒ(t) − ƒ(−t)]

Periodic and Aperiodic Signals

A signal is said to be periodic if it satisfies the condition x(t) = x(t + T) or x(n) = x(n + N), where T = fundamental time period and 1/T = f = fundamental frequency. Such a signal repeats over every time interval T, hence it is periodic with period T.
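The even/odd decomposition above can be sketched on a sampled signal. On a time grid symmetric about zero, reversing the sample array plays the role of $f(-t)$:

```python
import numpy as np

t = np.linspace(-5, 5, 1001)          # symmetric grid, so f[::-1] is f(-t)
f = np.exp(-t) * (t >= 0)             # example: one-sided decaying exponential

fe = 0.5 * (f + f[::-1])              # even part fe(t) = (f(t) + f(-t)) / 2
fo = 0.5 * (f - f[::-1])              # odd part  fo(t) = (f(t) - f(-t)) / 2

print(np.allclose(fe + fo, f))        # True: the parts reconstruct f
print(np.allclose(fe, fe[::-1]))      # True: fe(t) = fe(-t)
print(np.allclose(fo, -fo[::-1]))     # True: fo(t) = -fo(-t)
```

Any sampled signal on a symmetric grid decomposes this way; the one-sided exponential is just a convenient example with nontrivial even and odd parts.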
Energy and Power Signals

A signal is said to be an energy signal when it has finite energy:

$$\text{Energy } E = \int_{-\infty}^{\infty} x^2(t)\, dt$$

A signal is said to be a power signal when it has finite power:

$$\text{Power } P = \lim_{T \to \infty} {1 \over 2T} \int_{-T}^{T} x^2(t)\, dt$$

Note: a signal cannot be both an energy and a power signal simultaneously. Also, a signal may be neither an energy nor a power signal.

Power of an energy signal = 0

Energy of a power signal = ∞

Real and Imaginary Signals

A signal is said to be real when it satisfies the condition x(t) = x*(t). A signal is said to be imaginary when it satisfies the condition x(t) = −x*(t).

Example: If x(t) = 3 then x*(t) = 3* = 3, so x(t) is a real signal. If x(t) = 3j then x*(t) = (3j)* = −3j = −x(t), hence x(t) is an imaginary signal.

Note: for a real signal, the imaginary part should be zero. Similarly, for an imaginary signal, the real part should be zero.
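The energy and power integrals can be approximated numerically. This sketch computes the energy of a finite rectangular pulse (an energy signal) and the average power of a sinusoid (a power signal, with $P = A^2/2$ for amplitude A):

```python
import numpy as np

# Energy of a pulse of amplitude 2 and width 1: E = 2^2 * 1 = 4
dt = 1e-4
t = np.arange(-1.0, 1.0, dt)
pulse = np.where(np.abs(t) <= 0.5, 2.0, 0.0)
energy = np.sum(pulse ** 2) * dt            # ~4.0 (finite: energy signal)

# Average power of sin(t): a long window approximates the T -> infinity limit
T = 1000.0
t2 = np.arange(-T, T, 0.01)
power = np.mean(np.sin(t2) ** 2)            # ~0.5 = A^2 / 2 with A = 1

print(energy, power)
```

Running the same energy integral on the sinusoid grows without bound as the window widens, while its average power stays near 0.5, which is the numerical face of the note that a signal cannot be both an energy and a power signal.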


Signals and Systems Overview

What is a Signal?

A signal is a time-varying physical phenomenon that is intended to convey information. Equivalently, a signal is a function of time, or more generally a function of one or more independent variables that contains some information. Examples: voice signals, video signals, signals on telephone wires, etc.

Note: Noise is also a signal, but the information conveyed by noise is unwanted, hence it is considered undesirable.

What is a System?

A system is a device or combination of devices that can operate on signals and produce a corresponding response. The input to a system is called the excitation and the output from it is called the response. For one or more inputs, the system can have one or more outputs. Example: a communication system.


Signals and Systems Tutorial

This Signals and Systems tutorial is designed to cover analysis, types, convolution, sampling, and operations performed on signals. It also describes various types of systems.

Audience

This tutorial is designed for students and all enthusiastic learners who are willing to learn signals and systems in simple and easy steps. It will give you a deep understanding of Signals and Systems concepts. After completing this tutorial, you will be at an intermediate level of expertise, from where you can take yourself to a higher level.

Prerequisites

Before proceeding with this tutorial, you should have a basic understanding of differential and integral calculus and limits, as well as adequate knowledge of mathematics.


Signals Basic Types

Here are a few basic signals:

Unit Step Function

The unit step function is denoted by u(t). It is defined as

$$u(t) = \left\{\begin{matrix} 1 & t \geqslant 0 \\ 0 & t < 0 \end{matrix}\right.$$

It is widely used as a test signal.

Unit Impulse Function

The impulse function is denoted by δ(t). It is defined by the conditions

$$\delta(t) = 0 \ \text{ for } t \neq 0, \qquad \int_{-\infty}^{\infty} \delta(t)\, dt = 1$$

It is related to the unit step by

$$u(t) = \int_{-\infty}^{t} \delta(\tau)\, d\tau, \qquad \delta(t) = {du(t) \over dt}$$

Ramp Signal

The ramp signal is denoted by r(t), and it is defined as

$$r(t) = \left\{\begin{matrix} t & t \geqslant 0 \\ 0 & t < 0 \end{matrix}\right.$$

$$\int u(t)\, dt = \int 1\, dt = t = r(t)$$

$$u(t) = {dr(t) \over dt}$$

The slope of the unit ramp is unity.

Parabolic Signal

The parabolic signal can be defined as

$$x(t) = \left\{\begin{matrix} t^2/2 & t \geqslant 0 \\ 0 & t < 0 \end{matrix}\right.$$

$$\iint u(t)\, dt = \int r(t)\, dt = \int t\, dt = {t^2 \over 2} = \text{parabolic signal}$$

$$\Rightarrow u(t) = {d^2 x(t) \over dt^2}, \qquad r(t) = {dx(t) \over dt}$$

Signum Function

The signum function is denoted as sgn(t). It is defined as

$$\text{sgn}(t) = \left\{\begin{matrix} 1 & t > 0 \\ 0 & t = 0 \\ -1 & t < 0 \end{matrix}\right.$$

$$\text{sgn}(t) = 2u(t) - 1$$

Exponential Signal

The exponential signal is of the form x(t) = $e^{\alpha t}$. The shape of the exponential is determined by $\alpha$:

Case i: if $\alpha = 0$, then x(t) = $e^0$ = 1, a constant.

Case ii: if $\alpha < 0$, then x(t) = $e^{\alpha t}$ is a decaying exponential.

Case iii: if $\alpha > 0$, then x(t) = $e^{\alpha t}$ is a rising exponential.

Rectangular Signal

Let it be denoted as x(t); it is a pulse of constant amplitude over a finite interval, as shown in the figure.

Triangular Signal

Let it be denoted as x(t); it rises and falls linearly, forming a triangle, as shown in the figure.

Sinusoidal Signal

The sinusoidal signal is of the form

$$x(t) = A \cos(\omega_0 t \pm \phi) \ \text{ or } \ A \sin(\omega_0 t \pm \phi)$$

where the period is $T_0 = {2\pi \over \omega_0}$.

Sinc Function

It is denoted as sinc(t) and it is defined as

$$\text{sinc}(t) = {\sin \pi t \over \pi t}$$

$$= 0 \ \text{ for } t = \pm 1, \pm 2, \pm 3, \ldots$$

Sampling Function

It is denoted as sa(t) and it is defined as

$$sa(t) = {\sin t \over t}$$

$$= 0 \ \text{ for } t = \pm\pi, \pm 2\pi, \pm 3\pi, \ldots$$
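Several of these basic signals can be generated on a sampled time axis. Note that NumPy's built-in `np.sinc` is exactly the normalized sinc defined above, $\sin(\pi t)/(\pi t)$, and that the identity sgn(t) = 2u(t) − 1 is checked away from t = 0 because conventions for the value at the origin differ.

```python
import numpy as np

t = np.linspace(-3, 3, 601)

u = (t >= 0).astype(float)        # unit step u(t)
r = t * u                         # unit ramp r(t)
x_par = 0.5 * t ** 2 * u          # parabolic signal t^2/2 for t >= 0
sgn = np.sign(t)                  # signum function

# sgn(t) = 2u(t) - 1 holds for t != 0 (the origin is convention-dependent)
mask = np.abs(t) > 1e-9
print(np.allclose(sgn[mask], 2 * u[mask] - 1))   # True

# np.sinc(t) = sin(pi t)/(pi t): equals 1 at t = 0, zero at the integers
print(np.sinc(0.0))               # 1.0
print(abs(np.sinc(1.0)) < 1e-12)  # True (zero up to floating-point error)
```

Plotting `u`, `r`, and `x_par` on the same axes makes the differentiation chain visible: each signal is the running integral of the previous one.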