Digital Communication – Useful Resources

The following resources contain additional information on Digital Communication. Please use them to get more in-depth knowledge on this topic.

Useful Links on Digital Communication

− Wikipedia Reference for Digital Communication.
Frequency Shift Keying

Frequency Shift Keying (FSK) is the digital modulation technique in which the frequency of the carrier signal varies according to the changes in the digital signal. FSK is a scheme of frequency modulation.

The output of an FSK modulated wave is high in frequency for a binary High input and low in frequency for a binary Low input. The frequencies used for binary 1 and binary 0 are called the Mark and Space frequencies, respectively. The following image is the diagrammatic representation of an FSK modulated waveform along with its input.

To understand the process of obtaining this FSK modulated wave, let us look at the working of an FSK modulator.

FSK Modulator

The FSK modulator block diagram comprises two oscillators with a clock and the input binary sequence. Following is its block diagram.

The two oscillators, producing a higher and a lower frequency signal, are connected to a switch along with an internal clock. To avoid abrupt phase discontinuities in the output waveform during the transmission of the message, a clock is applied to both oscillators internally. The binary input sequence is applied to the transmitter so as to choose the frequency according to the binary input.

FSK Demodulator

There are different methods for demodulating an FSK wave. The main methods of FSK detection are the asynchronous detector and the synchronous detector. The synchronous detector is a coherent one, while the asynchronous detector is a non-coherent one.

Asynchronous FSK Detector

The block diagram of the asynchronous FSK detector consists of two band pass filters, two envelope detectors, and a decision circuit. Following is the diagrammatic representation.

The FSK signal is passed through the two Band Pass Filters (BPFs), tuned to the Space and Mark frequencies. The output from each of these BPFs looks like an ASK signal, which is given to an envelope detector. The signal in each envelope detector is demodulated asynchronously.
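The switching between Mark and Space oscillators can be sketched in Python. This is a minimal illustration; the Mark/Space frequencies, sampling rate, and bit duration below are arbitrary choices, not values from the text.

```python
import math

def fsk_modulate(bits, f_mark=2.0, f_space=1.0, fs=100, bit_time=1.0):
    """Pick the Mark frequency for a 1 and the Space frequency for a 0.
    This simple version restarts the oscillator phase at each bit, so it
    does not model the clocked, phase-continuous switching described above."""
    samples = []
    n = int(fs * bit_time)  # samples per bit
    for bit in bits:
        f = f_mark if bit else f_space
        samples.extend(math.sin(2 * math.pi * f * i / fs) for i in range(n))
    return samples

wave = fsk_modulate([1, 0, 1])  # 3 bits -> 300 samples
```

A real modulator keeps both oscillators running continuously (hence the common clock in the block diagram), switching only which output reaches the line.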
The decision circuit compares the two envelope detector outputs, selects the more likely one, and re-shapes the waveform into a rectangular one.

Synchronous FSK Detector

The block diagram of the synchronous FSK detector consists of two mixers with local oscillator circuits, two band pass filters, and a decision circuit. Following is the diagrammatic representation.

The FSK signal input is given to the two mixers with local oscillator circuits. These are connected to the two band pass filters. Each mixer-filter combination acts as a demodulator, and the decision circuit selects the more likely of the two detector outputs. The two signals have a minimum frequency separation.

The bandwidth of each demodulator depends on its bit rate. This synchronous demodulator is a bit more complex than the asynchronous type.
Digital Communication – Quantization

The digitization of analog signals involves rounding off the values to levels which are approximately equal to the analog values. The method of sampling chooses a few points on the analog signal, and these points are then rounded off to nearby stabilized values. Such a process is called Quantization.

Quantizing an Analog Signal

Analog-to-digital converters perform this type of function to create a series of digital values out of the given analog signal. The following figure represents an analog signal. To be converted into digital form, this signal has to undergo sampling and quantizing.

The quantizing of an analog signal is done by discretizing the signal with a number of quantization levels. Quantization is representing the sampled values of the amplitude by a finite set of levels, which means converting a continuous-amplitude sample into a discrete-amplitude value.

The following figure shows how an analog signal gets quantized. The blue line represents the analog signal while the brown one represents the quantized signal.

Both sampling and quantization result in a loss of information. The quality of a quantizer output depends upon the number of quantization levels used. The discrete amplitudes of the quantized output are called representation levels or reconstruction levels. The spacing between two adjacent representation levels is called a quantum or step-size.

The following figure shows the resultant quantized signal, which is the digital form of the given analog signal. This is also called a stair-case waveform, in accordance with its shape.

Types of Quantization

There are two types of Quantization – Uniform Quantization and Non-uniform Quantization.

The type of quantization in which the quantization levels are uniformly spaced is termed Uniform Quantization.
The type of quantization in which the quantization levels are unequal, and the relation between them is mostly logarithmic, is termed Non-uniform Quantization.

There are two types of uniform quantization: Mid-Rise type and Mid-Tread type. The following figures represent the two types of uniform quantization. Figure 1 shows the mid-rise type and figure 2 shows the mid-tread type of uniform quantization.

The Mid-Rise type is so called because the origin lies in the middle of a rising part of the stair-case like graph. The quantization levels in this type are even in number.

The Mid-Tread type is so called because the origin lies in the middle of a tread of the stair-case like graph. The quantization levels in this type are odd in number.

Both the mid-rise and mid-tread types of uniform quantizers are symmetric about the origin.

Quantization Error

For any system, during its functioning, there is always a difference between the values of its input and output. The processing of the system results in an error, which is the difference between those values.

The difference between an input value and its quantized value is called a Quantization Error. A quantizer is the function that performs quantization (rounding off the value). An analog-to-digital converter (ADC) works as a quantizer.

The following figure illustrates an example of a quantization error, indicating the difference between the original signal and the quantized signal.

Quantization Noise

Quantization noise is a type of quantization error which usually occurs when an analog audio signal is quantized to digital. For example, in music, the signals keep changing continuously and no regularity is found in the errors. Such errors create a wideband noise called Quantization Noise.

Companding in PCM

The word Companding is a combination of Compressing and Expanding, which means that it does both.
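The mid-rise and mid-tread rules can be written as one-line quantizers in Python. This is a sketch; the step size is an arbitrary parameter.

```python
import math

def quantize_mid_rise(x, step):
    # Mid-rise: levels at +/-step/2, +/-3*step/2, ... (even count, none at zero)
    return (math.floor(x / step) + 0.5) * step

def quantize_mid_tread(x, step):
    # Mid-tread: levels at 0, +/-step, +/-2*step, ... (odd count, one at zero)
    return round(x / step) * step

# Quantization error = input value minus quantized value
error = 0.2 - quantize_mid_tread(0.2, 1.0)  # the sample 0.2 quantizes to 0.0
```

Note that the mid-rise quantizer never outputs exactly zero, matching the text's observation that its origin sits on a rising edge of the staircase.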
Companding is a non-linear technique used in PCM which compresses the data at the transmitter and expands the same data at the receiver. The effects of noise and crosstalk are reduced by using this technique.

There are two types of Companding techniques. They are −

A-law Companding Technique

Uniform quantization is achieved at A = 1, where the characteristic curve is linear and no compression is done. A-law has a mid-rise at the origin; hence, it contains a non-zero value. A-law companding is used in PCM telephone systems.

µ-law Companding Technique

Uniform quantization is achieved at µ = 0, where the characteristic curve is linear and no compression is done. µ-law has a mid-tread at the origin; hence, it contains a zero value. µ-law companding is used for speech and music signals. µ-law is used in North America and Japan.
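The µ-law compressor and its inverse can be sketched as follows. The standard µ = 255 curve is assumed, with inputs normalized to [−1, 1].

```python
import math

def mu_law_compress(x, mu=255):
    # Logarithmic compression: small amplitudes get finer resolution
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y, mu=255):
    # Inverse curve applied at the receiver (expanding)
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)
```

Compressing before a uniform quantizer and expanding after decoding approximates the logarithmic (non-uniform) quantization described above.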
Digital Communication – Information Theory

Information is the source of a communication system, whether it is analog or digital. Information theory is a mathematical approach to the study of coding of information along with the quantification, storage, and communication of information.

Conditions of Occurrence of Events

If we consider an event, there are three conditions of occurrence.

If the event has not occurred, there is a condition of uncertainty.
If the event has just occurred, there is a condition of surprise.
If the event occurred some time back, there is a condition of having some information.

These three conditions occur at different times. The differences between these conditions help us gain knowledge of the probabilities of the occurrence of events.

Entropy

When we observe the possibilities of the occurrence of an event, and how surprising or uncertain it would be, we are trying to get an idea of the average content of the information from the source of the event. Entropy can be defined as a measure of the average information content per source symbol. Claude Shannon, the “father of Information Theory”, provided a formula for it as −

$$H = -\sum_{i} p_i \log_{b} p_i$$

Where $p_i$ is the probability of the occurrence of character number i from a given stream of characters, and b is the base of the logarithm used. Hence, this is also called Shannon’s Entropy.

The amount of uncertainty remaining about the channel input after observing the channel output is called Conditional Entropy.
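Shannon's formula translates directly into code; a minimal sketch in Python with base b = 2, so the result is in bits:

```python
import math

def shannon_entropy(probs, base=2):
    # H = -sum_i p_i * log_b(p_i); terms with p_i = 0 contribute nothing
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin carries 1 bit of information per symbol
h_coin = shannon_entropy([0.5, 0.5])
```

Entropy is maximized by the uniform distribution (e.g. four equally likely symbols give 2 bits) and is zero for a certain event.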
It is denoted by $H(x \mid y)$.

Mutual Information

Let us consider a channel whose output is Y and input is X.

Let the entropy for the prior uncertainty about the input be $H(x)$. (This is assumed before the input is applied.)

To know about the uncertainty of the input after the output is observed, let us consider the conditional entropy, given that $Y = y_k$ −

$$H\left ( x \mid y_k \right ) = \sum_{j = 0}^{J - 1} p\left ( x_j \mid y_k \right ) \log_{2}\left [ \frac{1}{p(x_j \mid y_k)} \right ]$$

(where J and K are the sizes of the input and output alphabets). This is a random variable taking the values $H(X \mid y = y_0), \dots, H(X \mid y = y_{K-1})$ with probabilities $p(y_0), \dots, p(y_{K-1})$ respectively.

The mean value of $H(X \mid y = y_k)$ over the output alphabet y is −

$$H\left ( X \mid Y \right ) = \sum_{k = 0}^{K - 1} H\left ( X \mid y = y_k \right ) p\left ( y_k \right )$$

$$= \sum_{k = 0}^{K - 1} \sum_{j = 0}^{J - 1} p\left ( x_j \mid y_k \right ) p\left ( y_k \right ) \log_{2}\left [ \frac{1}{p\left ( x_j \mid y_k \right )} \right ]$$

$$= \sum_{k = 0}^{K - 1} \sum_{j = 0}^{J - 1} p\left ( x_j, y_k \right ) \log_{2}\left [ \frac{1}{p\left ( x_j \mid y_k \right )} \right ]$$

Now, considering both uncertainty conditions (before and after applying the inputs), we come to know that the difference, i.e. $H(x) - H(x \mid y)$, must represent the uncertainty about the channel input that is resolved by observing the channel output. This is called the Mutual Information of the channel.

Denoting the Mutual Information as $I(x;y)$, we can write the whole thing in an equation, as follows −

$$I(x;y) = H(x) - H(x \mid y)$$

Hence, this is the equational representation of Mutual Information.

Properties of Mutual Information

These are the properties of Mutual Information.

Mutual information of a channel is symmetric.

$$I(x;y) = I(y;x)$$

Mutual information is non-negative.

$$I(x;y) \geq 0$$

Mutual information can be expressed in terms of the entropy of the channel output.
$$I(x;y) = H(y) - H(y \mid x)$$

Where $H(y \mid x)$ is a conditional entropy.

Mutual information of a channel is related to the joint entropy of the channel input and the channel output.

$$I(x;y) = H(x) + H(y) - H(x,y)$$

Where the joint entropy $H(x,y)$ is defined by

$$H(x,y) = \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k) \log_{2} \left ( \frac{1}{p\left ( x_j, y_k \right )} \right )$$

Channel Capacity

We have so far discussed mutual information. The maximum of the average mutual information per signaling interval, taken over all possible input distributions of a discrete memoryless channel, gives the rate of maximum reliable transmission of data. This can be understood as the channel capacity. It is denoted by C and is measured in bits per channel use.

Discrete Memoryless Source

A source from which data is emitted at successive intervals, independently of previous values, can be termed a discrete memoryless source. This source is discrete because it is considered not over a continuous time interval, but at discrete time intervals. This source is memoryless because each emitted value is fresh at each instant of time, without depending on the previous values.
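Mutual information can be computed numerically from a joint distribution using the equivalent form $I(x;y) = \sum p(x,y)\log_2[p(x,y)/(p(x)p(y))]$. A sketch in Python; the example joint distribution is made up for illustration:

```python
import math

def mutual_information(joint):
    """joint[j][k] = p(x_j, y_k); returns I(x;y) in bits."""
    px = [sum(row) for row in joint]        # marginal p(x_j)
    py = [sum(col) for col in zip(*joint)]  # marginal p(y_k)
    return sum(p * math.log2(p / (px[j] * py[k]))
               for j, row in enumerate(joint)
               for k, p in enumerate(row) if p > 0)

# A noiseless binary channel resolves all input uncertainty: I = H(x) = 1 bit
i_noiseless = mutual_information([[0.5, 0.0], [0.0, 0.5]])
```

When input and output are independent (a useless channel), observing the output resolves nothing and the mutual information is zero.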
Digital Communication – Techniques

There are a few techniques which have paved the basic path for digital communication processes. For signals to get digitized, we have the sampling and quantizing techniques. For them to be represented compactly and transmitted efficiently, we have LPC and digital multiplexing techniques. These techniques are discussed further here.

Linear Predictive Coding

Linear Predictive Coding (LPC) is a tool which represents digital speech signals using a linear predictive model. It is mostly used in audio signal processing, speech synthesis, speech recognition, etc.

Linear prediction is based on the idea that the current sample can be approximated by a linear combination of past samples. The analysis estimates the values of a discrete-time signal as a linear function of the previous samples. The spectral envelope is represented in a compressed form, using the information of the linear predictive model. This can be mathematically represented as −

$$s(n) = \sum_{k = 1}^{p} \alpha_k s(n - k)$$

for some value of p and coefficients $\alpha_k$, where

s(n) is the current speech sample
k is the index of a past sample
p is the order of the linear predictor
$\alpha_k$ is a predictor coefficient
s(n − k) is a previous speech sample

For LPC, the predictor coefficient values are determined by minimizing the sum of squared differences (over a finite interval) between the actual speech samples and the linearly predicted ones. This is a very useful method for encoding speech at a low bit rate. The LPC method is closely related to spectral analysis methods such as the Fast Fourier Transform (FFT).

Multiplexing

Multiplexing is the process of combining multiple signals into one signal over a shared medium. If the signals are analog in nature, the process is called analog multiplexing. If digital signals are multiplexed, it is called digital multiplexing.

Multiplexing was first developed in telephony, where a number of signals were combined to be sent through a single cable.
The process of multiplexing divides a communication channel into several logical channels, allotting each one to a different message signal or data stream to be transferred. The device that does multiplexing is called a MUX. The reverse process, i.e., extracting the individual channels from the combined signal, is done at the receiver and is called de-multiplexing. The device which does de-multiplexing is called a DEMUX.

The following figures represent MUX and DEMUX. Their primary use is in the field of communications.

Types of Multiplexers

There are mainly two types of multiplexers, namely analog and digital. They are further divided into FDM, WDM, and TDM. The following figure gives a detailed idea of this classification.

Actually, there are many types of multiplexing techniques. Of them all, the main types with their general classification are mentioned in the above figure.

Analog Multiplexing

The analog multiplexing techniques involve signals which are analog in nature. The analog signals are multiplexed according to their frequency (FDM) or wavelength (WDM).

Frequency Division Multiplexing (FDM)

In analog multiplexing, the most used technique is Frequency Division Multiplexing (FDM). This technique uses various frequency bands to combine streams of data, for sending them on a communication medium as a single signal.

Example − A traditional television transmitter, which sends a number of channels through a single cable, uses FDM.

Wavelength Division Multiplexing (WDM)

Wavelength Division Multiplexing is an analog technique in which many data streams of different wavelengths are transmitted in the light spectrum. As the wavelength increases, the frequency of the signal decreases. A prism, which can combine different wavelengths into a single line, can be used at the output of the MUX and the input of the DEMUX.

Example − Optical fiber communications use the WDM technique to merge different wavelengths into a single light beam for communication.
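The central idea of multiplexing — several streams sharing one channel — can be sketched for the time-division case in Python. This is a toy model with equal-length digital streams; real TDM framing also carries synchronization bits.

```python
def tdm_mux(streams):
    # One time slot per stream per frame: round-robin interleaving
    return [sample for frame in zip(*streams) for sample in frame]

def tdm_demux(signal, n_streams):
    # The receiver extracts every n-th slot back into its own channel
    return [signal[i::n_streams] for i in range(n_streams)]

muxed = tdm_mux([[1, 2], [3, 4], [5, 6]])  # -> [1, 3, 5, 2, 4, 6]
```

Because every stream is given a slot in every frame whether or not it has data, this corresponds to the synchronous variant of TDM.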
Digital Multiplexing

The term digital represents discrete bits of information. Hence, the available data is in the form of frames or packets, which are discrete.

Time Division Multiplexing (TDM)

In TDM, the time frame is divided into slots. This technique is used to transmit a signal over a single communication channel by allotting one slot for each message.

Of all the types of TDM, the main ones are Synchronous and Asynchronous TDM.

Synchronous TDM

In Synchronous TDM, the input is connected to a frame. If there are ‘n’ connections, then the frame is divided into ‘n’ time slots, and one slot is allocated to each input line.

In this technique, the sampling rate is common to all signals and hence the same clock input is given. The MUX allocates the same slot to each device at all times.

Asynchronous TDM

In Asynchronous TDM, the sampling rate is different for each of the signals and a common clock is not required. If the device allotted a time slot transmits nothing and sits idle, then that slot is allotted to another device, unlike in the synchronous case. This type of TDM is used in Asynchronous Transfer Mode networks.

Regenerative Repeater

For any communication system to be reliable, it should transmit and receive signals effectively, without any loss. A PCM wave, after transmission through a channel, gets distorted due to the noise introduced by the channel.

The regenerated pulse, compared with the original and received pulses, will be as shown in the following figure. For a better reproduction of the signal, a circuit called a regenerative repeater is employed in the path before the receiver. This helps in restoring the signal from the losses that occurred. Following is the diagrammatical representation.

A regenerative repeater consists of an equalizer along with an amplifier, a timing circuit, and a decision making device. The working of each of these components is detailed as follows.

Equalizer

The channel produces amplitude and phase distortions in the signals.
These distortions are due to the transmission characteristics of the channel. The equalizer circuit compensates for these losses by shaping the received pulses.

Timing Circuit

To obtain a quality output, the sampling of the pulses should be done where the signal-to-noise ratio (SNR) is maximum. To achieve this perfect sampling, a periodic pulse train has to be derived from the received pulses, which is done by the
Amplitude Shift Keying

Amplitude Shift Keying (ASK) is a type of Amplitude Modulation which represents binary data in the form of variations in the amplitude of a signal.

Any modulated signal has a high frequency carrier. When a binary signal is ASK modulated, the output is zero for a Low input, while it is the carrier output for a High input. The following figure represents an ASK modulated waveform along with its input.

To understand the process of obtaining this ASK modulated wave, let us learn about the working of the ASK modulator.

ASK Modulator

The ASK modulator block diagram comprises the carrier signal generator, the binary sequence from the message signal, and the band-limited filter. Following is the block diagram of the ASK modulator.

The carrier generator sends a continuous high-frequency carrier. The binary sequence from the message signal makes the unipolar input either High or Low. The High signal closes the switch, allowing the carrier wave through; hence, the output is the carrier signal for a High input. For a Low input, the switch opens, allowing no voltage to appear; hence, the output is low.

The band-limiting (pulse-shaping) filter shapes the pulse according to its amplitude and phase characteristics.

ASK Demodulator

There are two types of ASK demodulation techniques. They are −

Asynchronous ASK Demodulation/detection
Synchronous ASK Demodulation/detection

When the clock frequency at the transmitter matches the clock frequency at the receiver, it is known as a Synchronous method, as the frequencies get synchronized. Otherwise, it is known as Asynchronous.

Asynchronous ASK Demodulator

The asynchronous ASK detector consists of a half-wave rectifier, a low pass filter, and a comparator. Following is the block diagram for the same.

The modulated ASK signal is given to the half-wave rectifier, which delivers a positive-half output.
The low pass filter suppresses the higher frequencies and gives an envelope-detected output, from which the comparator delivers a digital output.

Synchronous ASK Demodulator

The synchronous ASK detector consists of a square law detector, a low pass filter, a comparator, and a voltage limiter. Following is the block diagram for the same.

The ASK modulated input signal is given to the square law detector. A square law detector is one whose output voltage is proportional to the square of the amplitude-modulated input voltage. The low pass filter minimizes the higher frequencies. The comparator and the voltage limiter help to get a clean digital output.
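On-off keying of a carrier can be sketched in Python. The carrier frequency, sampling rate, and bit duration are illustrative values, not taken from the text.

```python
import math

def ask_modulate(bits, fc=5.0, fs=100, bit_time=1.0):
    """Output the carrier during a High bit and zero during a Low bit."""
    n = int(fs * bit_time)  # samples per bit
    samples = []
    for bit in bits:
        for i in range(n):
            samples.append(math.sin(2 * math.pi * fc * i / fs) if bit else 0.0)
    return samples

wave = ask_modulate([1, 0])  # carrier for the 1, silence for the 0
```

The `if bit else 0.0` branch plays the role of the switch in the modulator block diagram: closed for High, open for Low.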
Digital Communication – Line Codes

A line code is the code used for transmitting the data of a digital signal over a transmission line. This process of coding is chosen so as to avoid overlap and distortion of the signal, such as inter-symbol interference.

Properties of Line Coding

Following are the properties of line coding −

As the coding is done to make more bits transmit on a single signal, the bandwidth used is much reduced.
For a given bandwidth, the power is efficiently used.
The probability of error is much reduced.
Error detection is done, and bipolar coding also has a correction capability.
The power density is much more favorable.
The timing content is adequate.
Long strings of 1s and 0s are avoided to maintain transparency.

Types of Line Coding

There are 3 types of line coding −

Unipolar
Polar
Bi-polar

Unipolar Signaling

Unipolar signaling is also called On-Off Keying or simply OOK. The presence of a pulse represents a 1 and the absence of a pulse represents a 0.

There are two variations in unipolar signaling −

Non Return to Zero (NRZ)
Return to Zero (RZ)

Unipolar Non-Return to Zero (NRZ)

In this type of unipolar signaling, a High in the data is represented by a positive pulse called a Mark, which has a duration T0 equal to the symbol bit duration. A Low in the data input has no pulse. The following figure clearly depicts this.

Advantages

The advantages of Unipolar NRZ are −

It is simple.
A lesser bandwidth is required.

Disadvantages

The disadvantages of Unipolar NRZ are −

No error correction is done.
The presence of low frequency components may cause signal droop.
No clock is present.
Loss of synchronization is likely to occur (especially for long strings of 1s and 0s).

Unipolar Return to Zero (RZ)

In this type of unipolar signaling, a High in the data, though represented by a Mark pulse, has a duration T0 less than the symbol bit duration. The pulse remains high for half of the bit duration, then immediately returns to zero, showing the absence of a pulse during the remaining half of the bit duration.
It is clearly understood with the help of the following figure.

Advantages

The advantages of Unipolar RZ are −

It is simple.
The spectral line present at the symbol rate can be used as a clock.

Disadvantages

The disadvantages of Unipolar RZ are −

No error correction.
Occupies twice the bandwidth of unipolar NRZ.
Signal droop occurs because the signal has a non-zero component at 0 Hz.

Polar Signaling

There are two methods of polar signaling. They are −

Polar NRZ
Polar RZ

Polar NRZ

In this type of polar signaling, a High in the data is represented by a positive pulse, while a Low in the data is represented by a negative pulse. The following figure depicts this well.

Advantages

The advantages of Polar NRZ are −

It is simple.
No low-frequency components are present.

Disadvantages

The disadvantages of Polar NRZ are −

No error correction.
No clock is present.
Signal droop occurs where the signal has a non-zero component at 0 Hz.

Polar RZ

In this type of polar signaling, a High in the data, though represented by a Mark pulse, has a duration T0 less than the symbol bit duration. The pulse remains high for half of the bit duration, then immediately returns to zero, showing the absence of a pulse during the remaining half of the bit duration. However, for a Low input, a negative pulse represents the data, and the zero level remains the same for the other half of the bit duration. The following figure depicts this clearly.

Advantages

The advantages of Polar RZ are −

It is simple.
No low-frequency components are present.

Disadvantages

The disadvantages of Polar RZ are −

No error correction.
No clock is present.
Occupies twice the bandwidth of Polar NRZ.
Signal droop occurs where the signal has a non-zero component at 0 Hz.

Bipolar Signaling

This is an encoding technique which has three voltage levels, namely +, – and 0. Such a signal is called a duo-binary signal. An example of this type is Alternate Mark Inversion (AMI).
For a 1, the voltage level transitions from + to – or from – to +, so that consecutive 1s are of opposite polarity. A 0 has a zero voltage level.

Even in this method, we have two types −

Bipolar NRZ
Bipolar RZ

From the models discussed so far, we have learnt the difference between NRZ and RZ. It goes the same way here too. The following figure clearly depicts this.

The above figure has both the Bipolar NRZ and RZ waveforms. The pulse duration and symbol bit duration are equal in the NRZ type, while the pulse duration is half of the symbol bit duration in the RZ type.

Advantages

Following are the advantages −

It is simple.
No low-frequency components are present.
Occupies a lower bandwidth than unipolar and polar NRZ schemes.
This technique is suitable for transmission over AC-coupled lines, as signal drooping doesn’t occur here.
A single-error detection capability is present.

Disadvantages

Following are the disadvantages −

No clock is present.
Long strings of data cause loss of synchronization.

Power Spectral Density

The function which describes how the power of a signal is distributed over the various frequencies in the frequency domain is called the Power Spectral Density (PSD). The PSD is the Fourier Transform of the auto-correlation function (the similarity between observations). It is in the form of a rectangular pulse.

PSD Derivation

According to the Einstein-Wiener-Khintchine theorem, if either the auto-correlation function or the power spectral density of a random process is known, the other can be found exactly. Hence, to derive the power spectral density, we shall use the time auto-correlation $R_x(\tau)$ of a power signal $x(t)$, as shown below.

$$R_x(\tau) = \lim_{T_p \rightarrow \infty} \frac{1}{T_p} \int_{-T_p/2}^{T_p/2} x(t) x(t + \tau) dt$$

Since $x(t)$ consists of impulses, $R_x(\tau)$ can be written as

$$R_x(\tau) = \frac{1}{T} \sum_{n = -\infty}^{\infty} R_n \delta(\tau - nT)$$

Where

$$R_n = \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{k} a_k a_{k + n}$$

Getting to
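The AMI rule, and the single-error detection it provides, can be sketched in Python. This is a minimal illustration; `ami_check` flags a bipolar violation, i.e., two successive marks of the same polarity.

```python
def ami_encode(bits):
    # Alternate Mark Inversion: 0 -> 0 level, 1 -> alternating +1 / -1 pulses
    levels, polarity = [], 1
    for bit in bits:
        if bit:
            levels.append(polarity)
            polarity = -polarity  # the next mark takes the opposite polarity
        else:
            levels.append(0)
    return levels

def ami_check(levels):
    # A valid AMI stream never has two successive marks of equal polarity
    marks = [v for v in levels if v != 0]
    return all(a == -b for a, b in zip(marks, marks[1:]))
```

A single corrupted pulse (e.g. a 0 read as a mark, or a mark with flipped polarity) breaks the alternation, which is how the receiver detects it.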
Channel Coding Theorem

The noise present in a channel creates unwanted errors between the input and the output sequences of a digital communication system. The error probability should be very low, nearly ≤ 10⁻⁶, for reliable communication.

Channel coding in a communication system introduces redundancy with a control, so as to improve the reliability of the system. Source coding reduces redundancy to improve the efficiency of the system.

Channel coding consists of two parts of action −

Mapping the incoming data sequence into a channel input sequence.
Inverse mapping the channel output sequence into an output data sequence.

The final target is that the overall effect of the channel noise should be minimized. The mapping is done by the transmitter, with the help of an encoder, whereas the inverse mapping is done by the decoder in the receiver.

Channel Coding

Let us consider a discrete memoryless source (δ) with entropy H(δ), which emits one symbol every Ts seconds. Let the channel have capacity C and be usable once every Tc seconds. Hence, the maximum capability of the channel is C/Tc, and the rate of data sent is $\frac{H(\delta)}{T_s}$.

If $\frac{H(\delta)}{T_s} \leq \frac{C}{T_c}$, the transmission is good and the data can be reproduced with a small probability of error. Here, $\frac{C}{T_c}$ is the critical rate of channel capacity.

If $\frac{H(\delta)}{T_s} = \frac{C}{T_c}$, then the system is said to be signaling at the critical rate.

Conversely, if $\frac{H(\delta)}{T_s} > \frac{C}{T_c}$, then reliable transmission is not possible.

Hence, the maximum rate of transmission of reliable, error-free messages over a discrete memoryless channel is equal to the critical rate of channel capacity. This is called the Channel Coding Theorem.
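The theorem's rate condition reduces to a single comparison; a sketch in Python (the numbers in the example are made up):

```python
def reliable_transmission_possible(h_source, t_s, capacity, t_c):
    """Channel coding theorem condition: transmission with an arbitrarily
    small probability of error is possible when H(delta)/Ts <= C/Tc."""
    return h_source / t_s <= capacity / t_c

# Source: 2 bits/symbol every 1 ms; channel: 4 bits/use every 1 ms
ok = reliable_transmission_possible(2.0, 0.001, 4.0, 0.001)   # True
bad = reliable_transmission_possible(8.0, 0.001, 4.0, 0.001)  # False
```

The theorem guarantees only that a suitable code exists below the critical rate; it does not say how to construct it.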
Pulse Code Modulation

Modulation is the process of varying one or more parameters of a carrier signal in accordance with the instantaneous values of the message signal. The message signal is the signal which is being transmitted for communication, and the carrier signal is a high frequency signal which carries no data but is used for long distance transmission.

There are many modulation techniques, which are classified according to the type of modulation employed. Of them all, the digital modulation technique used here is Pulse Code Modulation (PCM).

A signal is pulse code modulated to convert its analog information into a binary sequence, i.e., 1s and 0s. The output of PCM resembles a binary sequence. The following figure shows an example of PCM output with respect to the instantaneous values of a given sine wave.

Instead of a pulse train, PCM produces a series of numbers or digits, and hence this process is called digital. Each of these digits, though in binary code, represents the approximate amplitude of the signal sample at that instant.

In Pulse Code Modulation, the message signal is represented by a sequence of coded pulses. This is achieved by representing the signal in discrete form in both time and amplitude.

Basic Elements of PCM

The transmitter section of a Pulse Code Modulator circuit consists of sampling, quantizing and encoding, which are performed in the analog-to-digital converter section. The low pass filter prior to sampling prevents aliasing of the message signal.

The basic operations in the receiver section are regeneration of impaired signals, decoding, and reconstruction of the quantized pulse train. Following is the block diagram of PCM which represents the basic elements of both the transmitter and the receiver sections.
Low Pass Filter

This filter eliminates the high frequency components present in the input analog signal which are greater than the highest frequency of the message signal, to avoid aliasing of the message signal.

Sampler

This is the technique which helps to collect sample data at the instantaneous values of the message signal, so as to reconstruct the original signal. The sampling rate must be greater than twice the highest frequency component W of the message signal, in accordance with the sampling theorem. The sampling done here is the sample-and-hold process.

Quantizer

Quantizing is a process of reducing the excessive bits and confining the data. The sampled output, when given to the quantizer, is reduced to a finite set of levels, removing redundant bits and compressing the value.

Encoder

The digitization of the analog signal is done by the encoder. It designates each quantized level by a binary code. These three sections (Sampler, Quantizer, and Encoder) act as an analog-to-digital converter. Encoding minimizes the bandwidth used.

Regenerative Repeater

This section increases the signal strength. The output of the channel also has a regenerative repeater circuit to compensate for the signal loss, reconstruct the signal, and increase its strength.

Decoder

The decoder circuit decodes the pulse-coded waveform to reproduce the original signal. This circuit acts as the demodulator.

Reconstruction Filter

After the digital-to-analog conversion is done by the regenerative circuit and the decoder, a low-pass filter, called the reconstruction filter, is employed to get back the original signal.

Hence, the Pulse Code Modulator circuit samples the given analog signal, quantizes and codes it, and then transmits it in digital form. This whole process is repeated in a reverse pattern to obtain the original signal.
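The transmitter chain — quantize each sample, then encode its level as a binary word — can be sketched in Python. This is a toy uniform PCM with an illustrative word length; a real system also includes the anti-aliasing filter and sample-and-hold stages described above.

```python
def pcm_encode(samples, n_bits=3, vmax=1.0):
    """Map each (already sampled) value to a uniform quantization level
    and emit its n_bits-wide binary code."""
    levels = 2 ** n_bits
    step = 2 * vmax / levels
    codes = []
    for x in samples:
        x = max(-vmax, min(vmax - step / 2, x))  # clamp into range
        index = int((x + vmax) / step)           # level index 0 .. levels-1
        codes.append(format(index, "0{}b".format(n_bits)))
    return codes

codes = pcm_encode([-1.0, 0.0, 0.99])  # -> ['000', '100', '111']
```

The receiver's decoder inverts the last step, mapping each code word back to the amplitude of its quantization level before reconstruction filtering.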
Differential Phase Shift Keying

In Differential Phase Shift Keying (DPSK), the phase of the modulated signal is shifted relative to the previous signal element. No reference signal is considered here. The signal phase follows the high or low state of the previous element. This DPSK technique doesn’t need a reference oscillator.

The following figure represents the model waveform of DPSK.

It is seen from the above figure that, if the data bit is Low, i.e., 0, then the phase of the signal is not reversed, but continued as it was. If the data bit is High, i.e., 1, then the phase of the signal is reversed, as with NRZI, invert on 1 (a form of differential encoding).

If we observe the above waveform, we can say that the High state represents an M in the modulating signal and the Low state represents a W in the modulating signal.

DPSK Modulator

DPSK is a variant of BPSK in which there is no reference phase signal. Here, the transmitted signal itself can be used as a reference signal. Following is the diagram of the DPSK modulator.

DPSK encodes two distinct signals, i.e., the carrier and the modulating signal, with a 180° phase shift each. The serial data input is given to the XNOR gate, and the output is fed back to the other input through a 1-bit delay. The output of the XNOR gate, along with the carrier signal, is given to the balanced modulator to produce the DPSK modulated signal.

DPSK Demodulator

In the DPSK demodulator, the phase of the received bit is compared with the phase of the previous bit. Following is the block diagram of the DPSK demodulator.

From the above figure, it is evident that the balanced modulator is given the DPSK signal along with a 1-bit delayed input. That signal is made to confine to lower frequencies with the help of an LPF. Then it is passed to a shaper circuit, which is a comparator or a Schmitt trigger circuit, to recover the original binary data as the output.
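The XNOR-with-delay encoding and the compare-with-previous-bit decoding can be sketched at the bit level in Python. This is a sketch only: `initial` is the assumed start state of the 1-bit delay, and note that XNOR encoding toggles the output on a data 0 (the invert-on-1 convention in the waveform description corresponds to using XOR instead).

```python
def dpsk_encode(bits, initial=1):
    # XNOR of each data bit with the previous encoded bit (the feedback path)
    encoded, prev = [], initial
    for b in bits:
        prev = 1 - (b ^ prev)  # XNOR
        encoded.append(prev)
    return encoded

def dpsk_decode(encoded, initial=1):
    # Compare each received bit with the previous one to recover the data
    decoded, prev = [], initial
    for e in encoded:
        decoded.append(1 - (e ^ prev))  # XNOR with the previous bit
        prev = e
    return decoded

data = [1, 0, 1, 1, 0]
assert dpsk_decode(dpsk_encode(data)) == data  # round-trip recovers the data
```

Because the decoder only compares adjacent elements, no absolute phase reference is needed, which is the whole point of DPSK.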