Source Coding Theorem

The code produced by a discrete memoryless source has to be represented efficiently, which is an important problem in communications. For this purpose, code words are assigned to represent the source symbols. For example, in telegraphy we use Morse code, in which the letters are denoted by marks and spaces. The letter E, which is used most often, is denoted by “.”, whereas the letter Q, which is rarely used, is denoted by “--.-”.

Let us take a look at the block diagram, where sk is the output of the discrete memoryless source and bk is the output of the source encoder, represented by 0s and 1s. The encoded sequence is such that it can be conveniently decoded at the receiver.

Let us assume that the source has an alphabet with K different symbols and that the kth symbol sk occurs with probability pk, where k = 0, 1, …, K − 1. Let the binary code word assigned to symbol sk by the encoder have length lk, measured in bits. Hence, we define the average code-word length $\overline{L}$ of the source encoder as

$$\overline{L} = \displaystyle\sum\limits_{k=0}^{K-1} p_k l_k$$

$\overline{L}$ represents the average number of bits per source symbol. If $L_{min}$ is the minimum possible value of $\overline{L}$, then the coding efficiency can be defined as

$$\eta = \frac{L_{min}}{\overline{L}}$$

With $\overline{L} \geq L_{min}$ we will have $\eta \leq 1$. The source encoder is considered efficient when $\eta = 1$. For this, the value $L_{min}$ has to be determined.

Let us refer to the definition, “Given a discrete memoryless source of entropy $H(S)$, the average code-word length $\overline{L}$ for any source encoding is bounded as $\overline{L} \geq H(S)$.” In simpler words, the average number of code symbols per source symbol (for example, the Morse code for the word QUEUE is --.- ..- . ..- .) can never be smaller than the entropy of the source.
Hence with $L_{min} = H(S)$, the efficiency of the source encoder in terms of the entropy $H(S)$ may be written as

$$\eta = \frac{H(S)}{\overline{L}}$$

This source coding theorem is called the noiseless coding theorem, as it establishes error-free encoding. It is also called Shannon's first theorem.
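As a numeric illustration of these definitions, the following Python sketch computes $\overline{L}$, $H(S)$, and the efficiency η for a hypothetical four-symbol source; the probabilities and the prefix code words are invented for the example, not taken from any real system.

```python
import math

# Hypothetical 4-symbol source with an example prefix-free code.
probs = [0.5, 0.25, 0.125, 0.125]          # p_k for symbols s_0 .. s_3
code_words = ["0", "10", "110", "111"]     # assumed code-word assignment

# Average code-word length: L = sum of p_k * l_k
L_bar = sum(p * len(cw) for p, cw in zip(probs, code_words))

# Source entropy H(S) = sum of p_k * log2(1/p_k), the lower bound L_min
H = sum(p * math.log2(1.0 / p) for p in probs)

# Coding efficiency eta = H(S) / L_bar
eta = H / L_bar

print(L_bar, H, eta)   # for this code L_bar equals H, so eta is 1.0
```

For this particular source the code is optimal, so $\overline{L} = H(S) = 1.75$ bits per symbol and η = 1; with a less well-matched code, $\overline{L}$ would exceed the entropy and η would drop below 1.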
Digital Modulation Techniques

The conversion of digital data into analog signals is the next topic we will discuss in this chapter. These techniques are called digital modulation techniques. Digital modulation provides more information capacity, high data security, quick system availability, and great quality of communication. Hence, digital modulation techniques are in greater demand, for their capacity to convey larger amounts of data than analog modulation techniques.

There are many types of digital modulation techniques, and also combinations of them, depending upon the need. Of them all, we will discuss the prominent ones.

ASK – Amplitude Shift Keying
The amplitude of the resultant output depends upon the input data: the output is either zero or a sinusoid at the carrier frequency, depending on whether the data bit is Low or High.

FSK – Frequency Shift Keying
The frequency of the output signal will be either high or low, depending upon the input data applied.

PSK – Phase Shift Keying
The phase of the output signal gets shifted depending upon the input. These are mainly of two types, namely Binary Phase Shift Keying (BPSK) and Quadrature Phase Shift Keying (QPSK), according to the number of phase shifts. Another variant is Differential Phase Shift Keying (DPSK), which changes the phase relative to the previous value.

M-ary Encoding
M-ary encoding techniques are methods in which more than two bits are made to transmit simultaneously on a single signal. This helps in the reduction of bandwidth. The types of M-ary techniques are −

M-ary ASK
M-ary FSK
M-ary PSK

All of these are discussed in subsequent chapters.
Digital Communication

Digital communication is the process of devices communicating information digitally. This tutorial helps the reader to get a good idea of how signals are digitized and why digitization is needed. On completing this tutorial, the reader will be able to understand the conceptual details involved in digital communication.
Digital Communication – Information Theory

Information is the source of a communication system, whether it is analog or digital. Information theory is a mathematical approach to the study of the coding of information, along with its quantification, storage, and communication.

Conditions of Occurrence of Events
If we consider an event, there are three conditions of occurrence.

If the event has not occurred, there is a condition of uncertainty.
If the event has just occurred, there is a condition of surprise.
If the event occurred some time back, there is a condition of having some information.

These three conditions occur at different times. The differences between them help us gain knowledge of the probabilities of the occurrence of events.

Entropy
When we observe the possibilities of the occurrence of an event, and how surprising or uncertain it would be, we are trying to get an idea of the average content of the information from the source of the event.

Entropy can be defined as a measure of the average information content per source symbol. Claude Shannon, the “father of information theory”, provided a formula for it as −

$$H = -\sum_{i} p_i \log_{b} p_i$$

where pi is the probability of occurrence of character number i from a given stream of characters and b is the base of the logarithm used. Hence, this is also called Shannon's entropy.

The amount of uncertainty remaining about the channel input after observing the channel output is called the conditional entropy.
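The entropy formula above can be sketched in a few lines of Python; the probability distributions used below are illustrative examples, not taken from the text.

```python
import math

def entropy(probs, b=2):
    # H = -sum_i p_i * log_b(p_i); zero-probability terms contribute 0
    return -sum(p * math.log(p, b) for p in probs if p > 0)

# A fair coin carries 1 bit per symbol; a biased coin carries less,
# because its outcome is less uncertain.
print(entropy([0.5, 0.5]))        # 1.0 bit
print(entropy([0.9, 0.1]))        # about 0.469 bits
print(entropy([0.25] * 4))        # 2.0 bits for 4 equiprobable symbols
```

Note that entropy is maximized when all symbols are equiprobable, which matches the intuition that a uniformly random source is the most uncertain.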
It is denoted by $H(X \mid Y)$.

Mutual Information
Let us consider a channel whose output is Y and input is X.

Let the entropy for the prior uncertainty of the input be H(X). (This is assumed before the input is applied.)

To know the uncertainty of the input after the output is observed, let us consider the conditional entropy, given that Y = yk:

$$H\left(X \mid y_k\right) = \sum_{j = 0}^{J - 1} p\left(x_j \mid y_k\right) \log_{2}\left[\frac{1}{p(x_j \mid y_k)}\right]$$

This is a random variable taking the values $H(X \mid y = y_0), \ldots, H(X \mid y = y_{K-1})$ with probabilities $p(y_0), \ldots, p(y_{K-1})$ respectively.

The mean value of $H(X \mid y = y_k)$ over the output alphabet Y is −

$H\left(X \mid Y\right) = \displaystyle\sum\limits_{k = 0}^{K - 1} H\left(X \mid y = y_k\right) p\left(y_k\right)$

$= \displaystyle\sum\limits_{k = 0}^{K - 1} \displaystyle\sum\limits_{j = 0}^{J - 1} p\left(x_j \mid y_k\right) p\left(y_k\right) \log_{2}\left[\frac{1}{p\left(x_j \mid y_k\right)}\right]$

$= \displaystyle\sum\limits_{k = 0}^{K - 1} \displaystyle\sum\limits_{j = 0}^{J - 1} p\left(x_j, y_k\right) \log_{2}\left[\frac{1}{p\left(x_j \mid y_k\right)}\right]$

Now, considering both the uncertainty conditions (before and after observing the output), we see that the difference, i.e., $H(X) - H(X \mid Y)$, must represent the uncertainty about the channel input that is resolved by observing the channel output. This is called the mutual information of the channel.

Denoting the mutual information as $I(X;Y)$, we can write the whole thing in an equation as follows:

$$I(X;Y) = H(X) - H(X \mid Y)$$

Hence, this is the equational representation of mutual information.

Properties of Mutual Information
These are the properties of mutual information.

Mutual information of a channel is symmetric.
$$I(X;Y) = I(Y;X)$$

Mutual information is non-negative.
$$I(X;Y) \geq 0$$

Mutual information can be expressed in terms of the entropy of the channel output.
$$I(X;Y) = H(Y) - H(Y \mid X)$$

where $H(Y \mid X)$ is a conditional entropy.

Mutual information of a channel is related to the joint entropy of the channel input and the channel output.
$$I(X;Y) = H(X) + H(Y) - H(X,Y)$$

where the joint entropy $H(X,Y)$ is defined by

$$H(X,Y) = \displaystyle\sum\limits_{j=0}^{J-1} \displaystyle\sum\limits_{k=0}^{K-1} p(x_j, y_k) \log_{2}\left(\frac{1}{p\left(x_j, y_k\right)}\right)$$

Channel Capacity
We have so far discussed mutual information. The channel capacity of a discrete memoryless channel is the maximum of the average mutual information per channel use, taken over all possible input probability distributions; it gives the maximum rate at which data can be transmitted reliably. It is denoted by C and is measured in bits per channel use.

Discrete Memoryless Source
A source from which data is emitted at successive intervals, independently of previous values, is termed a discrete memoryless source. This source is discrete as it is considered not over a continuous time interval, but at discrete time instants. It is memoryless as each new symbol is fresh, without depending on the previous values.
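As an illustration of the identity I(X;Y) = H(X) + H(Y) − H(X,Y), the following Python sketch evaluates the mutual information of a binary symmetric channel with equiprobable inputs; the crossover probability of 0.1 is an assumed example value.

```python
import math

# Binary symmetric channel: input bit is flipped with probability eps.
eps = 0.1
p_joint = {(0, 0): 0.5 * (1 - eps), (0, 1): 0.5 * eps,
           (1, 0): 0.5 * eps, (1, 1): 0.5 * (1 - eps)}

# Marginal distributions of the input X and the output Y
p_x = {x: sum(v for (xi, _), v in p_joint.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(v for (_, yi), v in p_joint.items() if yi == y) for y in (0, 1)}

def H(dist):
    # Entropy of a distribution given as a dict of probabilities
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# I(X;Y) = H(X) + H(Y) - H(X,Y)
I = H(p_x) + H(p_y) - H(p_joint)
print(I)   # 1 - H_b(0.1), about 0.531 bits per channel use
```

With equiprobable inputs this value is also the capacity of the binary symmetric channel, since the uniform input distribution happens to maximize I(X;Y) for this channel.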
Digital Communication – Techniques

There are a few techniques which have paved the basic path for digital communication processes. For signals to be digitized, we have the sampling and quantizing techniques. For them to be represented compactly and to share a medium, we have LPC and digital multiplexing techniques. These techniques are discussed below.

Linear Predictive Coding
Linear Predictive Coding (LPC) is a tool which represents digital speech signals using a linear predictive model. It is mostly used in audio signal processing, speech synthesis, speech recognition, etc.

Linear prediction is based on the idea that the current sample can be approximated by a linear combination of past samples. The analysis estimates the values of a discrete-time signal as a linear function of the previous samples. The spectral envelope is represented in a compressed form, using the information of the linear predictive model.

This can be mathematically represented as −

$\hat{s}(n) = \displaystyle\sum\limits_{k = 1}^{p} \alpha_k s(n - k)$ for some order p and coefficients αk

where

$\hat{s}(n)$ is the predicted value of the current speech sample
k is the index of a past sample
p is the order of the predictor, i.e., the number of past samples used
αk is the kth predictor coefficient
s(n − k) is the kth previous speech sample

For LPC, the predictor coefficient values are determined by minimizing the sum of squared differences (over a finite interval) between the actual speech samples and the linearly predicted ones. It is a very useful method for encoding speech at a low bit rate. Along with the Fast Fourier Transform (FFT), LPC is one of the most widely used analysis methods in digital speech processing.

Multiplexing
Multiplexing is the process of combining multiple signals into one signal, over a shared medium. If the signals are analog in nature, the process is called analog multiplexing. If digital signals are multiplexed, it is called digital multiplexing.

Multiplexing was first developed in telephony, where a number of signals were combined to be sent through a single cable.
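The prediction equation can be sketched directly in Python; the signal values and predictor coefficients below are illustrative choices (a linear ramp, which a second-order predictor models exactly), not coefficients solved from real speech.

```python
# Sketch of the linear-prediction idea: estimate the current sample from
# a linear combination of the p previous samples.

def lpc_predict(samples, coeffs):
    """Predicted s(n) = sum over k = 1..p of alpha_k * s(n - k)."""
    p = len(coeffs)
    preds = []
    for n in range(p, len(samples)):
        preds.append(sum(coeffs[k] * samples[n - 1 - k] for k in range(p)))
    return preds

signal = [1.0, 2.0, 3.0, 4.0, 5.0]     # a perfectly linear ramp
alphas = [2.0, -1.0]                   # s(n) = 2*s(n-1) - 1*s(n-2)
print(lpc_predict(signal, alphas))     # [3.0, 4.0, 5.0]: zero prediction error
```

In a real LPC coder, the coefficients would be found by least-squares minimization over a frame of speech, and only the coefficients plus the (small) prediction residual would be transmitted, which is what makes the bit rate low.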
The process of multiplexing divides a communication channel into a number of logical channels, allotting each one to a different message signal or data stream to be transferred. The device that performs multiplexing is called a MUX. The reverse process, i.e., extracting a number of channels from one, which is done at the receiver, is called de-multiplexing. The device which performs de-multiplexing is called a DEMUX. The following figures represent a MUX and a DEMUX. Their primary use is in the field of communications.

Types of Multiplexers
There are mainly two types of multiplexers, namely analog and digital. They are further divided into FDM, WDM, and TDM. The following figure gives a detailed idea of this classification. Actually, there are many types of multiplexing techniques; of them all, the main types with their general classification are mentioned in the above figure.

Analog Multiplexing
Analog multiplexing techniques involve signals which are analog in nature. The analog signals are multiplexed according to their frequency (FDM) or wavelength (WDM).

Frequency Division Multiplexing (FDM)
In analog multiplexing, the most used technique is Frequency Division Multiplexing (FDM). This technique uses various frequencies to combine streams of data, for sending them on a communication medium as a single signal.

Example − A traditional television transmitter, which sends a number of channels through a single cable, uses FDM.

Wavelength Division Multiplexing (WDM)
Wavelength Division Multiplexing is an analog technique in which many data streams of different wavelengths are transmitted in the light spectrum. As the wavelength increases, the frequency of the signal decreases. A prism, which can combine different wavelengths into a single line, can be used at the output of the MUX and at the input of the DEMUX.

Example − Optical fiber communications use the WDM technique to merge different wavelengths into a single light beam for communication.
Digital Multiplexing
The term digital represents discrete bits of information. Hence, the available data is in the form of frames or packets, which are discrete.

Time Division Multiplexing (TDM)
In TDM, the time frame is divided into slots. This technique is used to transmit a signal over a single communication channel, by allotting one slot to each message. Of all the types of TDM, the main ones are synchronous and asynchronous TDM.

Synchronous TDM
In synchronous TDM, the input is connected to a frame. If there are 'n' connections, then the frame is divided into 'n' time slots, and one slot is allocated to each input line. In this technique, the sampling rate is common to all the signals, and hence the same clock input is given. The MUX allocates the same slot to each device at all times.

Asynchronous TDM
In asynchronous TDM, the sampling rate is different for each of the signals, and a common clock is not required. If the device allotted to a time slot transmits nothing and sits idle, then that slot can be allotted to another device, unlike in synchronous TDM. This type of TDM is used in Asynchronous Transfer Mode (ATM) networks.

Regenerative Repeater
For any communication system to be reliable, it should transmit and receive signals effectively, without any loss. A PCM wave, after transmission through a channel, gets distorted due to the noise introduced by the channel. The regenerated pulse, compared with the original and the received pulse, is as shown in the following figure.

For a better reproduction of the signal, a circuit called a regenerative repeater is employed in the path before the receiver. It helps restore the signal from the losses that have occurred. Following is a diagrammatic representation. It consists of an equalizer, an amplifier, a timing circuit, and a decision-making device. The working of each of these components is detailed as follows.

Equalizer
The channel produces amplitude and phase distortions in the signals.
These distortions are due to the transmission characteristics of the channel. The equalizer circuit compensates for these losses by shaping the received pulses.

Timing Circuit
To obtain a quality output, the sampling of the pulses should be done where the signal-to-noise ratio (SNR) is maximum. To achieve such sampling, a periodic pulse train has to be derived from the received pulses, which is done by the timing circuit.
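The synchronous TDM scheme described earlier, where every input line gets a fixed slot in each frame, can be sketched as a simple round-robin interleaver; the stream names and contents below are illustrative.

```python
# Sketch of synchronous TDM: each of n inputs gets one fixed slot per
# frame, so frame i carries the i-th item of every input stream.

def tdm_frames(streams):
    """Interleave equal-length input streams into frames, one slot per input."""
    return [list(frame) for frame in zip(*streams)]

a = ["a0", "a1", "a2"]
b = ["b0", "b1", "b2"]
c = ["c0", "c1", "c2"]
print(tdm_frames([a, b, c]))
# [['a0', 'b0', 'c0'], ['a1', 'b1', 'c1'], ['a2', 'b2', 'c2']]
```

An asynchronous TDM multiplexer would differ precisely here: instead of reserving one slot per input, it would skip idle inputs and reassign their slots, at the cost of having to label each slot with its source.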
Amplitude Shift Keying

Amplitude Shift Keying (ASK) is a type of amplitude modulation which represents binary data in the form of variations in the amplitude of a signal. Any modulated signal has a high-frequency carrier. When a binary signal is ASK modulated, the output is zero for a Low input, while it is the carrier wave for a High input.

The following figure represents an ASK modulated waveform along with its input. To understand the process of obtaining this ASK modulated wave, let us learn about the working of the ASK modulator.

ASK Modulator
The ASK modulator block diagram comprises the carrier signal generator, the binary sequence from the message signal, and the band-limiting filter. Following is the block diagram of the ASK modulator.

The carrier generator sends a continuous high-frequency carrier. The binary sequence from the message signal makes the unipolar input either High or Low. The High signal closes the switch, allowing the carrier wave through; hence, the output is the carrier signal for a High input. When the input is Low, the switch opens, allowing no voltage to appear; hence, the output is zero. The band-limiting (pulse-shaping) filter then shapes the pulse according to its amplitude and phase characteristics.

ASK Demodulator
There are two types of ASK demodulation techniques. They are −

Asynchronous ASK demodulation/detection
Synchronous ASK demodulation/detection

When the clock frequency at the transmitter matches the clock frequency at the receiver, the method is known as synchronous, as the frequencies get synchronized. Otherwise, it is known as asynchronous.

Asynchronous ASK Demodulator
The asynchronous ASK detector consists of a half-wave rectifier, a low-pass filter, and a comparator. Following is the block diagram for the same. The modulated ASK signal is given to the half-wave rectifier, which delivers a positive-half output.
The low-pass filter suppresses the higher frequencies and gives an envelope-detected output, from which the comparator delivers a digital output.

Synchronous ASK Demodulator
The synchronous ASK detector consists of a square-law detector, a low-pass filter, a comparator, and a voltage limiter. Following is the block diagram for the same.

The ASK modulated input signal is given to the square-law detector. A square-law detector is one whose output voltage is proportional to the square of the amplitude-modulated input voltage. The low-pass filter minimizes the higher frequencies. The comparator and the voltage limiter help to get a clean digital output.
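The whole ASK chain, a switch-style modulator followed by a rectify-average-compare detector, can be sketched in Python. The carrier frequency, samples per bit, and threshold are illustrative choices, and the detector is a crude digital stand-in for the analog rectifier, low-pass filter, and comparator described above.

```python
import math

def ask_modulate(bits, fc=4, samples_per_bit=16):
    # Bit 1 -> carrier burst, bit 0 -> zero output (on-off keying).
    wave = []
    for i, bit in enumerate(bits):
        for s in range(samples_per_bit):
            t = (i * samples_per_bit + s) / samples_per_bit  # time in bit periods
            wave.append(bit * math.cos(2 * math.pi * fc * t))
    return wave

def ask_demodulate(wave, samples_per_bit=16, threshold=0.25):
    # Rectify (abs), low-pass (average per bit interval), then compare.
    bits = []
    for i in range(0, len(wave), samples_per_bit):
        chunk = wave[i:i + samples_per_bit]
        envelope = sum(abs(v) for v in chunk) / len(chunk)
        bits.append(1 if envelope > threshold else 0)
    return bits

tx = [1, 0, 1, 1, 0]
print(ask_demodulate(ask_modulate(tx)) == tx)   # True
```

On a noiseless channel this round-trips exactly; adding channel noise would push the per-bit envelope values toward the threshold, which is why the text emphasizes that ASK is susceptible to noise.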
Pulse Code Modulation

Modulation is the process of varying one or more parameters of a carrier signal in accordance with the instantaneous values of the message signal. The message signal is the signal which is being transmitted for communication, and the carrier signal is a high-frequency signal which carries no data but is used for long-distance transmission.

There are many modulation techniques, which are classified according to the type of modulation employed. Of them all, the digital modulation technique used is Pulse Code Modulation (PCM). A signal is pulse code modulated to convert its analog information into a binary sequence, i.e., 1s and 0s. The output of PCM resembles a binary sequence. The following figure shows an example of PCM output with respect to the instantaneous values of a given sine wave.

Instead of a pulse train, PCM produces a series of numbers or digits, and hence this process is called digital. Each one of these digits, in binary code, represents the approximate amplitude of the signal sample at that instant.

In Pulse Code Modulation, the message signal is represented by a sequence of coded pulses. This is achieved by representing the signal in discrete form in both time and amplitude.

Basic Elements of PCM
The transmitter section of a pulse code modulator circuit consists of sampling, quantizing, and encoding, which are performed in the analog-to-digital converter section. The low-pass filter prior to sampling prevents aliasing of the message signal. The basic operations in the receiver section are regeneration of impaired signals, decoding, and reconstruction of the quantized pulse train. Following is the block diagram of PCM which represents the basic elements of both the transmitter and the receiver sections.
Low Pass Filter
This filter eliminates the high-frequency components present in the input analog signal, i.e., those greater than the highest frequency of the message signal, to avoid aliasing.

Sampler
This is the circuit which collects sample data at instantaneous values of the message signal, so that the original signal can be reconstructed. The sampling rate must be greater than twice the highest frequency component W of the message signal, in accordance with the sampling theorem. The sampling done here is a sample-and-hold process.

Quantizer
Quantizing is the process of mapping each sampled amplitude to the nearest of a finite set of discrete levels. The sampled output, when given to the quantizer, is confined to these levels, discarding the excess precision.

Encoder
The digitization of the analog signal is completed by the encoder. It designates each quantized level by a binary code. These three sections (sampler, quantizer, and encoder), preceded by the LPF, act as an analog-to-digital converter. Encoding minimizes the bandwidth used.

Regenerative Repeater
This section increases the signal strength. The channel output also has a regenerative repeater circuit, to compensate for the signal loss, reconstruct the signal, and increase its strength.

Decoder
The decoder circuit decodes the pulse-coded waveform to reproduce the original signal. This circuit acts as the demodulator.

Reconstruction Filter
After the digital-to-analog conversion is done by the regenerative circuit and the decoder, a low-pass filter called the reconstruction filter is employed to get back the original signal.

Hence, the pulse code modulator circuit samples the given analog signal, quantizes and codes it, and then transmits it in digital form. This whole process is repeated in a reverse pattern at the receiver to obtain the original signal.
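The transmitter chain (sample, then quantize, then encode) can be sketched as follows. The sine-wave input, the sampling rate of 8 samples per period, and the 3-bit uniform quantizer are assumptions made for illustration.

```python
import math

fs = 8            # samples per signal period (well above twice the frequency)
n_bits = 3        # bits per sample, giving L = 2**n_bits quantizer levels
L = 2 ** n_bits

# Sampler: take fs instantaneous values of one period of a sine wave
samples = [math.sin(2 * math.pi * n / fs) for n in range(fs)]

def quantize(x):
    # Uniform quantizer: map x in [-1, 1] to an integer level 0 .. L-1
    level = int((x + 1) / 2 * L)
    return min(level, L - 1)

levels = [quantize(x) for x in samples]               # quantizer output
codes = [format(lv, f"0{n_bits}b") for lv in levels]  # encoder output

print(levels)   # [4, 6, 7, 6, 4, 1, 0, 1]
print(codes)    # ['100', '110', '111', '110', '100', '001', '000', '001']
```

The printed code words are exactly the kind of binary sequence the PCM output figure depicts: each 3-bit word approximates one instantaneous amplitude of the sine wave.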
Differential Phase Shift Keying

In Differential Phase Shift Keying (DPSK), the phase of the modulated signal is shifted relative to the previous signal element. No reference signal is considered here; the signal phase follows the High or Low state of the previous element. This DPSK technique doesn't need a reference oscillator.

The following figure represents the model waveform of DPSK. It is seen from the figure that if the data bit is Low, i.e., 0, the phase of the signal is not reversed but continues as it was. If the data bit is High, i.e., 1, the phase of the signal is reversed, as with NRZI (invert on 1, a form of differential encoding).

If we observe the above waveform, we can say that the High state represents an M in the modulating signal and the Low state represents a W in the modulating signal.

DPSK Modulator
DPSK is a technique based on BPSK in which there is no reference phase signal; the transmitted signal itself is used as the reference. Following is the diagram of the DPSK modulator. The modulated signal uses two carrier phases, 180° apart.

The serial data input is given to an XNOR gate, whose output is fed back to its other input through a 1-bit delay. The output of the XNOR gate, along with the carrier signal, is given to a balanced modulator to produce the DPSK modulated signal.

DPSK Demodulator
In the DPSK demodulator, the phase of each received bit is compared with the phase of the previous bit. Following is the block diagram of the DPSK demodulator.

From the figure, it is evident that the balanced modulator is given the DPSK signal along with its 1-bit delayed version. The product is confined to lower frequencies with the help of an LPF. It is then passed to a shaper circuit, which is a comparator or Schmitt trigger circuit, to recover the original binary data as the output.
Digital Communication – M-ary Encoding

The word binary represents two; M represents a number that corresponds to the count of conditions, levels, or combinations possible for a given number of binary variables. M-ary is the type of digital modulation technique in which, instead of one bit, two or more bits are transmitted at a time. As a single signal is used for multiple-bit transmission, the channel bandwidth is reduced.

M-ary Equation
If a digital signal is given under four conditions, such as voltage levels, frequencies, phases, or amplitudes, then M = 4.

The number of bits necessary to produce a given number of conditions is expressed mathematically as

$$N = \log_{2} M$$

where N is the number of bits necessary and M is the number of conditions, levels, or combinations possible with N bits.

The above equation can be re-arranged as

$$2^N = M$$

For example, with two bits, $2^2 = 4$ conditions are possible.

Types of M-ary Techniques
In general, multi-level (M-ary) modulation techniques are used in digital communications, as digital inputs with more than two modulation levels are allowed at the transmitter's input. Hence, these techniques are bandwidth efficient.

There are many M-ary modulation techniques. Some of these techniques modulate one parameter of the carrier signal, such as amplitude, phase, or frequency.

M-ary ASK
This is called M-ary Amplitude Shift Keying (M-ASK) or M-ary Pulse Amplitude Modulation (PAM). The amplitude of the carrier signal takes on M different levels.

Representation of M-ary ASK
$$S_m(t) = A_m \cos(2 \pi f_c t), \quad A_m \in \{(2m - 1 - M)\Delta,\; m = 1, 2, \ldots, M\}, \quad 0 \leq t \leq T_s$$

Some prominent features of M-ary ASK are −

This method is also used in PAM.
Its implementation is simple.
M-ary ASK is susceptible to noise and distortion.

M-ary FSK
This is called M-ary Frequency Shift Keying (M-ary FSK). The frequency of the carrier signal takes on M different levels.
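The relation N = log2 M (and its rearrangement 2^N = M) is easy to check numerically:

```python
import math

# N = log2(M): bits carried per symbol for M signal conditions.
def bits_per_symbol(M):
    return int(math.log2(M))

print(bits_per_symbol(4))    # 2 bits per symbol (e.g. a 4-level scheme)
print(bits_per_symbol(16))   # 4 bits per symbol
print(2 ** 3)                # conversely, N = 3 bits gives 8 conditions
```

Doubling the number of bits per symbol squares the number of required conditions, which is why higher-order M-ary schemes trade bandwidth efficiency against noise margin.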
Representation of M-ary FSK

$$S_i(t) = \sqrt{\frac{2E_s}{T_s}} \cos\left(\frac{\pi}{T_s}\left(n_c + i\right)t\right), \quad 0 \leq t \leq T_s \quad and \quad i = 1, 2, 3, \ldots, M$$

where $f_c = \frac{n_c}{2T_s}$ for some fixed integer $n_c$.

Some prominent features of M-ary FSK are −

It is not as susceptible to noise as ASK.
The M transmitted signals are equal in energy and duration.
The signals are separated by $\frac{1}{2T_s}$ Hz, making them orthogonal to each other.
Since the M signals are orthogonal, there is no crowding in the signal space.
The bandwidth efficiency of M-ary FSK decreases and the power efficiency increases with increasing M.

M-ary PSK
This is called M-ary Phase Shift Keying (M-ary PSK). The phase of the carrier signal takes on M different values.

Representation of M-ary PSK

$$S_i(t) = \sqrt{\frac{2E}{T}} \cos\left(w_0 t + \phi_i\right), \quad 0 \leq t \leq T \quad and \quad i = 1, 2, \ldots, M$$

$$\phi_i = \frac{2 \pi i}{M} \quad where \quad i = 1, 2, 3, \ldots, M$$

Some prominent features of M-ary PSK are −

The envelope is constant, with more phase possibilities.
This method was used during the early days of space communication.
It gives better performance than ASK and FSK.
The phase estimation error at the receiver is minimal.
The bandwidth efficiency of M-ary PSK decreases and the power efficiency increases with increasing M.

So far, we have discussed different modulation techniques. The input to all these techniques is a binary sequence, represented as 1s and 0s. This binary or digital information has many types and forms, which are discussed further.
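The phase set φi = 2πi/M of an M-ary PSK constellation can be generated directly; the example below uses M = 4, i.e., QPSK.

```python
import math

# The M phase angles of an M-ary PSK constellation, phi_i = 2*pi*i / M
# for i = 1..M, as in the representation above.
def mpsk_phases(M):
    return [2 * math.pi * i / M for i in range(1, M + 1)]

phases = mpsk_phases(4)   # QPSK
print([round(math.degrees(p)) for p in phases])   # [90, 180, 270, 360]
```

As M grows the phases crowd together (for M = 8 they are only 45° apart), which illustrates why the power efficiency of M-ary PSK must increase with M to keep neighboring phases distinguishable in noise.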
Digital Communication – Analog to Digital

The communication that occurs in our day-to-day life is in the form of signals. These signals, such as sound signals, are generally analog in nature. When communication needs to be established over a distance, the analog signals are sent through wires, using different techniques for effective transmission.

The Necessity of Digitization
The conventional methods of communication used analog signals for long-distance communication, which suffer from many losses such as distortion and interference, as well as security breaches. In order to overcome these problems, the signals are digitized using different techniques. Digitized signals allow the communication to be clearer and more accurate, with fewer losses.

The following figure indicates the difference between analog and digital signals. Digital signals consist of 1s and 0s, which indicate High and Low values respectively.

Advantages of Digital Communication
As the signals are digitized, there are many advantages of digital communication over analog communication, such as −

The effect of distortion, noise, and interference is much less in digital signals, as they are less affected.
Digital circuits are more reliable.
Digital circuits are easier to design and cheaper than analog circuits.
The hardware implementation of digital circuits is more flexible than that of analog circuits.
The occurrence of cross-talk is very rare in digital communication.
The signal remains unaltered, as a pulse needs a large disturbance to alter its properties, which is very difficult.
Signal processing functions such as encryption and compression are employed in digital circuits to maintain the secrecy of the information.
The probability of error occurrence is reduced by employing error-detecting and error-correcting codes.
Spread spectrum techniques are used to avoid signal jamming.
Combining digital signals using Time Division Multiplexing (TDM) is easier than combining analog signals using Frequency Division Multiplexing (FDM).
The process of configuring digital signals is easier than with analog signals.
Digital signals can be stored and retrieved more conveniently than analog signals.
Many digital circuits use almost common encoding techniques, and hence similar devices can be used for a number of purposes.
The capacity of the channel is effectively utilized by digital signals.

Elements of Digital Communication
The elements which form a digital communication system are represented by the following block diagram for ease of understanding. Following are the sections of the digital communication system.

Source
The source can be an analog signal. Example: a sound signal.

Input Transducer
This is a transducer which takes a physical input and converts it to an electrical signal (example: a microphone). This block also contains an analog-to-digital converter, as a digital signal is needed for further processing. A digital signal is generally represented by a binary sequence.

Source Encoder
The source encoder compresses the data into the minimum number of bits. This process helps in effective utilization of the bandwidth. It removes redundant bits (unnecessary excess bits, i.e., zeroes).

Channel Encoder
The channel encoder does coding for error correction. During the transmission of the signal, due to noise in the channel, the signal may get altered; to guard against this, the channel encoder adds some redundant bits to the transmitted data. These are the error-correcting bits.

Digital Modulator
The signal to be transmitted is modulated here by a carrier. The digital sequence is also converted to an analog waveform, in order to make it travel through the channel or medium.

Channel
The channel or medium allows the analog signal to travel from the transmitter end to the receiver end.
Digital Demodulator
This is the first step at the receiver end. The received signal is demodulated and converted from analog back to digital. The signal is reconstructed here.

Channel Decoder
The channel decoder, after detecting the sequence, performs error correction. Distortions which might have occurred during transmission are corrected using the redundant bits added by the channel encoder. This helps in the complete recovery of the original data.

Source Decoder
The source decoder expands the compressed data, recreating the source output without loss of information; the resultant signal is once again a pure digital output.

Output Transducer
This is the last block, which converts the signal back into its original physical form, which existed at the input of the transmitter. It converts the electrical signal into a physical output (example: a loudspeaker).

Output Signal
This is the output which is produced after the whole process. Example − the sound signal received.

This unit has dealt with the introduction, the digitization of signals, the advantages, and the elements of digital communication. In the coming chapters, we will learn about the concepts of digital communication in detail.
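To make the channel encoder and channel decoder roles concrete, here is a minimal sketch using a 3x repetition code. This is the simplest possible error-correcting code, chosen only for illustration; real systems use far stronger codes, but the principle of adding redundant bits and using them to correct channel errors is the same.

```python
# Channel encoder: add redundancy by repeating each data bit 3 times.
def channel_encode(bits, n=3):
    return [b for bit in bits for b in [bit] * n]

# Channel decoder: majority vote over each group of n received bits,
# correcting any single flipped bit per group.
def channel_decode(coded, n=3):
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

data = [1, 0, 1, 1]
tx = channel_encode(data)
tx[1] ^= 1          # the channel flips one bit in transit
print(channel_decode(tx) == data)   # True: the single error is corrected
```

The price of this protection is rate: three channel bits are sent per data bit. Practical channel codes achieve the same single-error correction with far less redundancy, which is exactly the trade-off the channel encoder block manages.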