Digital Communication – Pulse Shaping

After going through the different coding techniques, we have an idea of how data is prone to distortion and what measures are taken to prevent it from getting affected, so as to establish reliable communication. There is another important form of distortion that is likely to occur, called Inter-Symbol Interference (ISI).

Inter Symbol Interference

This is a form of distortion of a signal, in which one or more symbols interfere with subsequent symbols, causing noise or delivering a poor output.

Causes of ISI

The main causes of ISI are −

Multi-path propagation

Non-linear frequency response of the channel

ISI is unwanted and should be suppressed as far as possible to get a clean output. The causes of ISI should also be resolved in order to lessen its effect.

To view ISI in mathematical form, we can consider the receiver output. The receiving filter output $y(t)$ is sampled at time $t_i = iT_b$ (with $i$ taking on integer values), yielding −

$$y(t_i) = \mu \displaystyle\sum\limits_{k = -\infty}^{\infty}a_k p(iT_b - kT_b)$$

$$= \mu a_i + \mu \displaystyle\sum\limits_{\substack{k = -\infty \\ k \neq i}}^{\infty}a_k p(iT_b - kT_b)$$

In the above equation, the first term $\mu a_i$ is produced by the $i^{th}$ transmitted bit. The second term represents the residual effect of all other transmitted bits on the decoding of the $i^{th}$ bit. This residual effect is called Inter Symbol Interference.

In the absence of ISI, the output will be −

$$y(t_i) = \mu a_i$$

This equation shows that the $i^{th}$ transmitted bit is correctly reproduced. However, the presence of ISI introduces bit errors and distortion in the output.

While designing the transmitter or the receiver, it is important to minimize the effects of ISI, so as to receive the output with the least possible error rate.

Correlative Coding

So far, we have discussed that ISI is an unwanted phenomenon that degrades the signal. But if the same ISI is used in a controlled manner, it is possible to achieve a bit rate of 2W bits per second in a channel of bandwidth W Hertz. Such a scheme is called Correlative Coding or Partial Response Signaling. Since the amount of ISI is known, the receiver can be designed according to this requirement so as to counter the effect of ISI on the signal.

The basic idea of correlative coding is illustrated by the example of Duo-binary Signaling.

Duo-binary Signaling

The name duo-binary indicates doubling the transmission capability of the binary system. To understand this, consider a binary input sequence {a_k} consisting of uncorrelated binary digits, each of duration T_b seconds. The symbol 1 is represented by a +1 volt pulse and the symbol 0 by a −1 volt pulse. The duo-binary coder output c_k is given as the sum of the present binary digit a_k and the previous digit a_{k-1}, as shown in the following equation.

$$c_k = a_k + a_{k-1}$$

The above equation states that the uncorrelated binary input sequence {a_k} is changed into a sequence of correlated three-level pulses {c_k}. This correlation between the pulses may be understood as introducing ISI into the transmitted signal in an artificial, controlled manner.
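To make the duo-binary relation concrete, the following short Python sketch applies $c_k = a_k + a_{k-1}$ to a small bit sequence. The mapping of 0/1 to −1/+1 volt follows the description above; the initial previous symbol of −1 V is an assumed starting state, not something fixed by the text.

```python
# Illustrative sketch of duo-binary (correlative) coding: the present and the
# previous bipolar symbols are summed, so the uncorrelated binary stream
# becomes a correlated three-level stream. The initial state a_(-1) = -1 V is
# an assumption made only for this example.
import numpy as np

def duobinary_encode(bits):
    a = 2 * np.asarray(bits) - 1                 # symbol 0 -> -1 V, symbol 1 -> +1 V
    a_prev = np.concatenate(([-1], a[:-1]))      # previous symbol, assumed -1 V at start
    return a + a_prev                            # c_k = a_k + a_(k-1), levels {-2, 0, +2}

print(duobinary_encode([1, 0, 1, 1, 0, 0, 1]))   # [ 0  0  0  2  0 -2  0]
```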
Eye Pattern

An effective way to study the effects of ISI is the Eye Pattern. The name Eye Pattern comes from its resemblance to a human eye, for binary waves. The interior region of the eye pattern is called the eye opening. The following figure shows the image of an eye-pattern.

Jitter is the short-term variation of the instant of a digital signal from its ideal position, which may lead to data errors.

When the effect of ISI increases, traces from the upper portion to the lower portion of the eye opening increase, and the eye closes completely if the ISI is very high.

An eye pattern provides the following information about a particular system.

Actual eye patterns are used to estimate the bit error rate and the signal-to-noise ratio.

The width of the eye opening defines the time interval over which the received wave can be sampled without error from ISI.

The instant of time when the eye opening is widest is the preferred time for sampling.

The rate of closure of the eye, as the sampling time varies, determines how sensitive the system is to timing error.

The height of the eye opening, at a specified sampling time, defines the margin over noise.

Hence, the interpretation of the eye pattern is an important consideration.

Equalization

For reliable communication to be established, we need a quality output. The transmission losses of the channel, and the other factors affecting the quality of the signal, have to be treated. The most common impairment, as we have discussed, is ISI. To make the signal free from ISI, and to ensure a maximum signal-to-noise ratio, we need to implement a method called Equalization. The following figure shows an equalizer in the receiver portion of the communication system.

The noise and interference denoted in the figure are likely to occur during transmission. The regenerative repeater has an equalizer circuit, which compensates for the transmission losses by shaping the received pulses. Such an equalizer is practical to implement.

Error Probability and Figure-of-merit

The rate at which data can be communicated is called the data rate. The rate at which errors occur in the bits while transmitting data is called the Bit Error Rate (BER). The probability of bit-error occurrence is the Error Probability. An increase in the Signal-to-Noise Ratio (SNR) decreases the BER, and hence the Error Probability also decreases.

In an analog receiver, the figure of merit of the detection process can be defined as the ratio of the output SNR to the input SNR. A greater value of the figure-of-merit is an advantage.
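As a rough illustration of the BER/SNR relationship mentioned above, the short Monte Carlo sketch below transmits bipolar (±1 V) symbols through additive white Gaussian noise and compares the simulated bit error rate with the theoretical value $Q(\sqrt{2E_b/N_0})$. The bit count and the $E_b/N_0$ values are arbitrary choices made only for this example.

```python
# Hedged illustration: estimate the bit error rate of bipolar signaling in AWGN
# by simulation and compare with the theoretical Q(sqrt(2*Eb/N0)).
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n_bits = 200_000                                   # arbitrary simulation length
bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                             # 0 -> -1, 1 -> +1 (Eb = 1)

for ebn0_db in (0, 4, 8):                          # example Eb/N0 values in dB
    ebn0 = 10 ** (ebn0_db / 10)
    noise = rng.normal(0, sqrt(1 / (2 * ebn0)), n_bits)   # noise variance N0/2
    ber_sim = np.mean(((symbols + noise) > 0).astype(int) != bits)
    ber_theory = 0.5 * erfc(sqrt(ebn0))            # equals Q(sqrt(2*Eb/N0))
    print(f"Eb/N0 = {ebn0_db} dB: simulated {ber_sim:.4f}, theoretical {ber_theory:.4f}")
```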
Digital Communication – Phase Shift Keying

Phase Shift Keying (PSK) is the digital modulation technique in which the phase of the carrier signal is changed by varying the sine and cosine inputs at a particular instant. The PSK technique is widely used in wireless LANs, biometric and contactless operations, as well as in RFID and Bluetooth communications.

PSK is of two main types, depending upon the phases through which the signal is shifted. They are −

Binary Phase Shift Keying (BPSK)

This is also called 2-phase PSK or Phase Reversal Keying. In this technique, the sine wave carrier takes two phase reversals, 0° and 180°. BPSK is basically a Double Sideband Suppressed Carrier (DSBSC) modulation scheme, with the message being the digital information.

Quadrature Phase Shift Keying (QPSK)

This is the phase shift keying technique in which the sine wave carrier takes four phase values, such as 0°, 90°, 180°, and 270°. If this kind of technique is extended further, PSK can also be done with eight or sixteen phase values, depending upon the requirement.

BPSK Modulator

The block diagram of the Binary Phase Shift Keying modulator consists of a balanced modulator, which has the carrier sine wave as one input and the binary sequence as the other input. Following is the diagrammatic representation.

The modulation of BPSK is done using a balanced modulator, which multiplies the two signals applied at its input. For a binary input of zero, the phase is 0°, and for a high input, the phase reversal is 180°.

Following is the diagrammatic representation of the BPSK modulated output wave along with its input.

The output sine wave of the modulator is either the direct input carrier or the inverted (180° phase-shifted) input carrier, as a function of the data signal.

BPSK Demodulator

The block diagram of the BPSK demodulator consists of a mixer with a local oscillator circuit, a bandpass filter, and a two-input detector circuit. The diagram is as follows.

By recovering the band-limited message signal with the help of the mixer circuit and the bandpass filter, the first stage of demodulation is completed. The band-limited baseband signal is obtained, and this signal is used to regenerate the binary message bit stream.

In the next stage of demodulation, the bit clock rate is needed at the detector circuit to produce the original binary message signal. If the bit rate is a sub-multiple of the carrier frequency, the bit clock regeneration is simplified. To make the circuit easy to understand, a decision-making circuit may also be inserted at the second stage of detection.
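As an illustration of the two stages described above, the following Python sketch generates a BPSK waveform with a balanced-modulator style multiplication and recovers the bits with a coherent correlator. The sample rate, carrier frequency, and bit rate are assumed values, chosen so that each bit spans a whole number of carrier cycles.

```python
# Minimal BPSK sketch: multiply the carrier by +/-1 symbols, then detect
# coherently by correlating each bit interval with the local carrier.
import numpy as np

fs, fc, bit_rate = 8000, 1000, 250            # assumed sample rate, carrier, bit rate
spb = fs // bit_rate                          # samples per bit (32)
bits = np.array([1, 0, 1, 1, 0])

t = np.arange(len(bits) * spb) / fs
carrier = np.cos(2 * np.pi * fc * t)
symbols = np.repeat(2 * bits - 1, spb)        # 0 -> 180 deg (-1), 1 -> 0 deg (+1)
bpsk = symbols * carrier                      # balanced-modulator output

correlated = (bpsk * carrier).reshape(len(bits), spb).sum(axis=1)
recovered = (correlated > 0).astype(int)      # decision per bit interval
print(recovered)                              # [1 0 1 1 0]
```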
Digital Communication – Quick Guide

Digital Communication – Analog to Digital

The communication that occurs in our day-to-day life is in the form of signals. These signals, such as sound signals, are generally analog in nature. When communication needs to be established over a distance, the analog signals are sent through wire, using different techniques for effective transmission.

The Necessity of Digitization

Conventional methods of communication used analog signals for long-distance communication, which suffer from many losses such as distortion, interference, and other impairments, including security breaches. In order to overcome these problems, the signals are digitized using different techniques. The digitized signals allow the communication to be clearer and more accurate, with fewer losses.

The following figure indicates the difference between analog and digital signals. Digital signals consist of 1s and 0s, which indicate High and Low values respectively.

Advantages of Digital Communication

As the signals are digitized, there are many advantages of digital communication over analog communication, such as −

The effect of distortion, noise, and interference is much less in digital signals, as they are less affected.

Digital circuits are more reliable.

Digital circuits are easier to design and cheaper than analog circuits.

The hardware implementation of digital circuits is more flexible than that of analog circuits.

The occurrence of cross-talk is very rare in digital communication.

The signal is unaltered, as a pulse needs a large disturbance to alter its properties, which is very difficult.

Signal processing functions such as encryption and compression are employed in digital circuits to maintain the secrecy of the information.

The probability of error occurrence is reduced by employing error-detecting and error-correcting codes.

Spread spectrum techniques are used to avoid signal jamming.

Combining digital signals using Time Division Multiplexing (TDM) is easier than combining analog signals using Frequency Division Multiplexing (FDM).

The configuration process of digital signals is easier than that of analog signals.

Digital signals can be stored and retrieved more conveniently than analog signals.

Many digital circuits use nearly common encoding techniques, and hence similar devices can be used for a number of purposes.

The capacity of the channel is effectively utilized by digital signals.

Elements of Digital Communication

The elements which form a digital communication system are represented by the following block diagram for ease of understanding. Following are the sections of a digital communication system.

Source

The source can be an analog signal. Example: a sound signal.

Input Transducer

This is a transducer which takes a physical input and converts it into an electrical signal (example: a microphone). This block also contains an analog-to-digital converter, used where a digital signal is needed for further processing. A digital signal is generally represented by a binary sequence.

Source Encoder

The source encoder compresses the data into the minimum number of bits. This process helps in effective utilization of the bandwidth. It removes the redundant bits (unnecessary excess bits, i.e., zeroes).

Channel Encoder

The channel encoder performs coding for error correction. During the transmission of the signal, due to the noise in the channel, the signal may get altered; to guard against this, the channel encoder adds some redundant bits to the transmitted data. These are the error-correcting bits.
Digital Modulator

The signal to be transmitted is modulated here by a carrier. The signal is also converted from the digital sequence to analog form, in order to make it travel through the channel or medium.

Channel

The channel or medium allows the analog signal to travel from the transmitter end to the receiver end.

Digital Demodulator

This is the first step at the receiver end. The received signal is demodulated and converted back from analog to digital. The signal gets reconstructed here.

Channel Decoder

The channel decoder, after detecting the sequence, performs error correction. The distortions which might have occurred during transmission are corrected using the redundant bits added by the channel encoder. This helps in the complete recovery of the original signal.

Source Decoder

The source decoder recreates the source output, so that a pure digital output is obtained without loss of information.

Output Transducer

This is the last block, which converts the signal back into its original physical form, as it was at the input of the transmitter. It converts the electrical signal into a physical output (example: a loudspeaker).

Output Signal

This is the output produced after the whole process. Example − the sound signal received.

This unit has dealt with the introduction, the digitization of signals, the advantages, and the elements of digital communication. In the coming chapters, we will learn about the concepts of digital communication in detail.

Pulse Code Modulation

Modulation is the process of varying one or more parameters of a carrier signal in accordance with the instantaneous values of the message signal. The message signal is the signal which is being transmitted for communication, and the carrier signal is a high-frequency signal which carries no data but is used for long-distance transmission.

There are many modulation techniques, which are classified according to the type of modulation employed. Of them all, the basic digital modulation technique is Pulse Code Modulation (PCM).

A signal is pulse code modulated to convert its analog information into a binary sequence, i.e., 1s and 0s. The output of PCM resembles a binary sequence. The following figure shows an example of PCM output with respect to the instantaneous values of a given sine wave.

Instead of a pulse train, PCM produces a series of numbers or digits, and hence this process is called digital. Each of these digits, though in binary code, represents the approximate amplitude of the signal sample at that instant.

In Pulse Code Modulation, the message signal is represented by a sequence of coded pulses. This is achieved by representing the signal in discrete form in both time and amplitude.

Basic Elements of PCM

The transmitter section of a Pulse Code Modulation system consists of sampling, quantizing, and encoding of the message signal.
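To illustrate the sampling, quantizing, and encoding steps just mentioned, here is a compact Python sketch that converts a short sine-wave segment into 4-bit PCM code words. The sampling rate, tone frequency, and word length are assumed example values.

```python
# Illustrative PCM sketch: sample a sine wave, quantize each sample to one of
# 2^n uniform levels, and emit the corresponding binary code words.
import numpy as np

fs, f, n_bits = 8000, 100, 4                  # assumed sampling rate, tone, bits per sample
t = np.arange(0, 0.01, 1 / fs)                # 10 ms of signal
x = np.sin(2 * np.pi * f * t)                 # "analog" message in [-1, 1]

levels = 2 ** n_bits
q = np.clip(np.round((x + 1) / 2 * (levels - 1)), 0, levels - 1).astype(int)
codewords = [format(int(v), f"0{n_bits}b") for v in q]
print(codewords[:8])                          # first eight 4-bit PCM words
```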
Digital Communication – Error Control Coding

Noise or error is the main problem in the signal, disturbing the reliability of the communication system. Error control coding is the coding procedure done to control the occurrence of errors. These techniques help in error detection and error correction.

There are many different error-correcting codes, depending upon the mathematical principles applied to them. Historically, these codes have been classified into linear block codes and convolution codes.

Linear Block Codes

In linear block codes, the parity bits and message bits have a linear combination, which means that any linear combination (sum) of two code words is also a code word.

Let us consider some blocks of data, which contain k bits in each block. These bits are mapped into blocks which have n bits in each block, where n is greater than k. The transmitter adds redundant bits, which are the (n − k) bits. The ratio k/n is the code rate. It is denoted by r, and its value satisfies r < 1.

The (n − k) bits added here are parity bits. Parity bits help in error detection and error correction, and also in locating the data. In the data being transmitted, the leftmost bits of the code word correspond to the message bits, and the rightmost bits of the code word correspond to the parity bits.

Systematic Code

Any linear block code can be a systematic code, as long as it is not altered. Hence, an unaltered block code is called a systematic code. Following is the representation of the structure of the code word, according to the allocation of its bits.

If the message bits are not altered, the code is called a systematic code. It means that the encoding of the data should not change the message bits themselves.

Convolution Codes

So far, in the linear block codes, we have discussed that the systematic, unaltered form is preferred. Here, if a total of n bits is transmitted, k bits are message bits and (n − k) bits are parity bits. In the process of encoding, the parity bits are derived from each block of message bits, appended to it, and the whole block is transmitted before the next block is processed.

The following figure quotes an example of blocks of data and a stream of data used for the transmission of information.

The whole block-based process stated above is tedious and has drawbacks. The allotment of a buffer is a main problem here, when the system is busy. This drawback is removed in convolution codes, where the whole stream of data is assigned symbols and then transmitted. As the data is a stream of bits, there is no need of a buffer for storage.

Hamming Codes

The linearity property of the code words is that the sum of two code words is also a code word. Hamming codes are a type of linear error-correcting code which can detect up to two-bit errors, or correct one-bit errors without the detection of uncorrected errors. While using Hamming codes, extra parity bits are used to identify a single-bit error.

To get from one bit pattern to another, a few bits have to be changed in the data. This number of bits is termed the Hamming distance. If the parity scheme has a distance of 2, a one-bit flip can be detected but cannot be corrected, and any two-bit flips cannot be detected. However, the Hamming code is a better procedure than the previously discussed ones for error detection and correction.
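As a concrete instance of a linear block code, the sketch below implements a systematic Hamming (7,4) code: 4 message bits, 3 parity bits, and single-error correction by syndrome decoding. The particular parity sub-matrix used here is one common choice, assumed only for this example.

```python
# Hamming (7,4) sketch: encode with G = [I | P], flip one bit, and correct it by
# matching the syndrome against the columns of the parity-check matrix H.
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])                         # assumed parity sub-matrix
G = np.hstack([np.eye(4, dtype=int), P])          # generator matrix
H = np.hstack([P.T, np.eye(3, dtype=int)])        # parity-check matrix

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2

received = codeword.copy()
received[2] ^= 1                                  # simulate a single-bit channel error

syndrome = received @ H.T % 2                     # nonzero syndrome flags an error
for pos, col in enumerate(H.T):                   # the matching column gives its position
    if np.array_equal(col, syndrome):
        received[pos] ^= 1
        break
print("decoded message bits:", received[:4])      # [1 0 1 1]
```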
BCH Codes

BCH codes are named after their inventors Bose, Chaudhuri, and Hocquenghem. During the BCH code design, there is control over the number of symbols to be corrected, and hence multiple-bit correction is possible. BCH codes form a powerful class of error-correcting codes.

For any positive integers m ≥ 3 and t < 2^(m−1), there exists a binary BCH code with the following parameters.

Block length: n = 2^m − 1

Number of parity-check digits: n − k ≤ mt

Minimum distance: d_min ≥ 2t + 1

Such a code is called a t-error-correcting BCH code.

Cyclic Codes

The cyclic property of code words is that any cyclic shift of a code word is also a code word. Cyclic codes follow this cyclic property (a small sketch at the end of this section checks this property on an example code).

For a linear code C, if every code word, i.e., C = (C1, C2, …, Cn) from C, remains a code word after a cyclic right shift of its components, then C is a cyclic code. One cyclic right shift is equal to n − 1 cyclic left shifts; hence, such a code is invariant under any shift. So, a linear code C that is invariant under any cyclic shift is called a Cyclic code.

Cyclic codes are used for error correction. They are mainly used to correct double errors and burst errors.

Hence, these are a few error-correcting codes which are to be decoded at the receiver. These codes keep errors from being introduced and disturbing the communication. To also prevent the signal from being tapped by unwanted receivers, there is a class of signaling techniques, which is discussed in the next chapter.
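As referenced above, the following sketch checks the cyclic-shift property on an example code. It assumes the (7,4) code obtained by multiplying 4-bit message polynomials by the generator polynomial g(x) = x^3 + x + 1 over GF(2); neither this generator nor the encoding rule comes from the text, they are chosen only to demonstrate the property.

```python
# Illustrative check of the cyclic property: for the (7,4) code generated by
# g(x) = x^3 + x + 1 (an assumed example), every cyclic shift of a code word is
# again a code word.
import numpy as np

def encode(msg, g=(1, 0, 1, 1)):                   # multiply m(x) by g(x) over GF(2)
    return np.convolve(msg, g) % 2                 # 7-bit code word

codewords = {tuple(encode([(m >> i) & 1 for i in range(4)])) for m in range(16)}
ok = all(tuple(np.roll(np.array(c), s)) in codewords
         for c in codewords for s in range(7))
print("every cyclic shift is a code word:", ok)    # True
```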
Spread Spectrum Modulation

A collective class of signaling techniques, employed before transmitting a signal to provide secure communication, is known as Spread Spectrum Modulation. The main advantage of the spread spectrum communication technique is to prevent “interference”, whether it is intentional or unintentional. The signals modulated with these techniques are hard to interfere with and cannot easily be jammed. An intruder with no official access finds it very difficult to crack them. Hence, these techniques are used for military purposes. Spread spectrum signals transmit at low power density and are spread over a wide bandwidth.

Pseudo-Noise Sequence

A coded sequence of 1s and 0s with certain auto-correlation properties, called a Pseudo-Noise (PN) coding sequence, is used in spread spectrum techniques. It is a maximum-length sequence, which is a type of cyclic code.

Narrow-band and Spread-spectrum Signals

Both narrow-band and spread-spectrum signals can be understood easily by observing their frequency spectra, as shown in the following figures.

Narrow-band Signals

Narrow-band signals have their signal strength concentrated, as shown in the following frequency spectrum figure. Following are some of their features −

The signals occupy a narrow range of frequencies.

The power density is high.

The spread of energy is low and concentrated.

Though these features are good, such signals are prone to interference.

Spread Spectrum Signals

Spread spectrum signals have their signal strength distributed, as shown in the following frequency spectrum figure. Following are some of their features −

The signals occupy a wide range of frequencies.

The power density is very low.

The energy is widely spread.

With these features, spread spectrum signals are highly resistant to interference and jamming. Since multiple users can share the same spread spectrum bandwidth without interfering with one another, these can be called multiple access techniques.

FHSS and DSSS / CDMA

Spread spectrum multiple access techniques use signals which have a transmission bandwidth of a magnitude greater than the minimum required RF bandwidth. These are of two types.

Frequency Hopped Spread Spectrum (FHSS)

Direct Sequence Spread Spectrum (DSSS)

Frequency Hopped Spread Spectrum (FHSS)

This is a frequency hopping technique, in which the users are made to change the frequency of usage from one to another in a specified time interval; hence it is called frequency hopping. For example, a frequency is allotted to sender 1 for a particular period of time. After a while, sender 1 hops to another frequency, and sender 2 uses the first frequency, which was previously used by sender 1. This is called frequency reuse.

The frequencies of the data are hopped from one to another in order to provide secure transmission. The amount of time spent on each frequency hop is called the Dwell time.

Direct Sequence Spread Spectrum (DSSS)

Whenever a user wants to send data using the DSSS technique, each and every bit of the user data is multiplied by a secret code, called the chipping code. This chipping code is nothing but the spreading code, which is multiplied with the original message and transmitted. The receiver uses the same code to retrieve the original message.
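To make the chipping-code idea concrete, the toy Python sketch below spreads each data bit with a short ±1 chip sequence and despreads it by correlation at the receiver. The 7-chip sequence is an arbitrary example, not a standardized spreading code.

```python
# Toy DSSS sketch: multiply each bipolar data bit by a chipping code, then
# despread by correlating every chip block with the same code.
import numpy as np

chips = np.array([1, -1, 1, 1, -1, -1, 1])         # assumed +/-1 chipping sequence
bits = np.array([1, 0, 1])
symbols = 2 * bits - 1                              # bipolar data

spread = (symbols[:, None] * chips).ravel()         # transmitted chip stream

blocks = spread.reshape(len(bits), len(chips))
recovered = ((blocks @ chips) > 0).astype(int)      # correlate and decide
print(recovered)                                    # [1 0 1]
```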
Comparison between FHSS and DSSS/CDMA

Both spread spectrum techniques are popular for their characteristics. To have a clear understanding, let us take a look at their comparison.

FHSS | DSSS / CDMA
Multiple frequencies are used | A single frequency is used
It is hard to find the user's frequency at any instant of time | The user frequency, once allotted, is always the same
Frequency reuse is allowed | Frequency reuse is not allowed
The sender need not wait | The sender has to wait if the spectrum is busy
The power strength of the signal is high | The power strength of the signal is low
It is stronger and penetrates through obstacles | It is weaker compared to FHSS
It is never affected by interference | It can be affected by interference
It is cheaper | It is expensive
This is the commonly used technique | This technique is not frequently used

Advantages of Spread Spectrum

Following are the advantages of spread spectrum −

Cross-talk elimination

Better output with data integrity

Reduced effect of multipath fading

Better security

Reduction in noise

Co-existence with other systems

Longer operating distances

Hard to detect

Not easy to demodulate/decode

Difficult to jam the signals

Although spread spectrum techniques were originally designed for military use, they are now being used widely for commercial purposes.
Digital Communication – Delta Modulation

The sampling rate of a signal should be higher than the Nyquist rate, to achieve better sampling. If this sampling interval in Differential PCM is reduced considerably, the sample-to-sample amplitude difference becomes very small, as if the difference were a 1-bit quantization; the step size is then very small, i.e., Δ (delta).

Delta Modulation

The type of modulation in which the sampling rate is much higher and the step size after quantization is a small value Δ is termed delta modulation.

Features of Delta Modulation

Following are some of the features of delta modulation.

An over-sampled input is taken to make full use of the signal correlation.

The quantization design is simple.

The input sampling rate is much higher than the Nyquist rate.

The quality is moderate.

The design of the modulator and the demodulator is simple.

The output waveform is a stair-case approximation.

The step size is very small, i.e., Δ (delta).

The bit rate can be decided by the user.

The implementation is simpler.

Delta Modulation is a simplified form of the DPCM technique, also viewed as a 1-bit DPCM scheme. As the sampling interval is reduced, the signal correlation becomes higher.

Delta Modulator

The Delta Modulator comprises a 1-bit quantizer and a delay circuit, along with two summer circuits. Following is the block diagram of a delta modulator.

The predictor circuit in DPCM is replaced by a simple delay circuit in DM. From the above diagram, we have the notations −

$x(nT_{s})$ = over-sampled input

$e_{p}(nT_{s})$ = summer output and quantizer input

$e_{q}(nT_{s})$ = quantizer output = $v(nT_s)$

$\widehat{x}(nT_{s})$ = output of the delay circuit

$u(nT_{s})$ = input of the delay circuit

Using these notations, we shall now try to figure out the process of delta modulation.

$$e_{p}(nT_{s}) = x(nT_{s}) - \widehat{x}(nT_{s})$$ ———equation 1

$$= x(nT_{s}) - u([n - 1]T_{s})$$

$$= x(nT_{s}) - [\widehat{x}[[n - 1]T_{s}] + v[[n-1]T_{s}]]$$ ———equation 2

Further,

$$v(nT_{s}) = e_{q}(nT_{s}) = S \cdot sig[e_{p}(nT_{s})]$$ ———equation 3

$$u(nT_{s}) = \widehat{x}(nT_{s}) + e_{q}(nT_{s})$$

Where,

$\widehat{x}(nT_{s})$ = the previous value of the delay circuit

$e_{q}(nT_{s})$ = quantizer output = $v(nT_s)$

Hence,

$$u(nT_{s}) = u([n-1]T_{s}) + v(nT_{s})$$ ———equation 4

This means,

The present input of the delay unit = (the previous output of the delay unit) + (the present quantizer output)

Assuming a zero initial condition of accumulation,

$$u(nT_{s}) = S \displaystyle\sum\limits_{j=1}^n sig[e_{p}(jT_{s})]$$

Accumulated version of the DM output = $\displaystyle\sum\limits_{j = 1}^n v(jT_{s})$ ———equation 5

Now, note that

$$\widehat{x}(nT_{s}) = u([n-1]T_{s}) = \displaystyle\sum\limits_{j = 1}^{n - 1} v(jT_{s})$$ ———equation 6

The delay unit output is the accumulator output lagging by one sample.

From equations 5 and 6, we get a possible structure for the demodulator.

A stair-case approximated waveform is the output of the delta modulator, with the step size as delta (Δ). The output quality of the waveform is moderate.
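The equations above can be turned into a very small simulation. The sketch below encodes a sine wave with one bit per sample and accumulates ±Δ to form the stair-case approximation; the sampling rate, tone frequency, and step size are assumed example values.

```python
# Illustrative delta-modulation sketch: the 1-bit quantizer sends the sign of
# e_p = x - x_hat, and the accumulator u(nTs) = u((n-1)Ts) +/- delta builds the
# stair-case approximation (equation 4 above).
import numpy as np

fs, f, delta = 1000, 5, 0.15                       # assumed sample rate, tone, step size
t = np.arange(0, 0.4, 1 / fs)
x = np.sin(2 * np.pi * f * t)

bits, approx, u = [], [], 0.0                      # u is the delay-unit (accumulator) state
for sample in x:
    bit = 1 if sample >= u else 0                  # 1-bit quantizer output
    u += delta if bit else -delta                  # accumulate +/- delta
    bits.append(bit)
    approx.append(u)

print(bits[:20])                                   # transmitted bit stream (first 20 bits)
print(np.round(approx[:5], 3))                     # start of the stair-case approximation
```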
Delta Demodulator

The delta demodulator comprises a low pass filter, a summer, and a delay circuit. The predictor circuit is eliminated here, and hence no assumed input is given to the demodulator. Following is the diagram of the delta demodulator.

From the above diagram, we have the notations −

$\widehat{v}(nT_{s})$ is the input sample

$\widehat{u}(nT_{s})$ is the summer output

$\bar{x}(nT_{s})$ is the delayed output

A binary sequence is given as input to the demodulator. The stair-case approximated output is given to the LPF. The low pass filter is used for many reasons, but the prominent one is the elimination of out-of-band noise. The step-size error that may occur at the transmitter is called granular noise, which is eliminated here. If there is no noise present, then the modulator output equals the demodulator input.

Advantages of DM over DPCM

1-bit quantizer

Very easy design of the modulator and the demodulator

However, there exists some noise in DM, namely −

Slope overload distortion (when Δ is small)

Granular noise (when Δ is large)

Adaptive Delta Modulation (ADM)

In delta modulation, we have come across the problem of determining the step size, which influences the quality of the output wave. A larger step size is needed in the steep slopes of the modulating signal, and a smaller step size is needed where the message has a small slope. With a fixed step size, the minute details get missed. So, it would be better if we could control the adjustment of the step size according to our requirement, in order to obtain the sampling in the desired fashion. This is the concept of Adaptive Delta Modulation.

Following is the block diagram of an adaptive delta modulator.

The gain of the voltage controlled amplifier is adjusted by the output signal from the sampler. The amplifier gain determines the step size, and the two are proportional.

ADM quantizes the difference between the value of the current sample and the predicted value of the next sample. It uses a variable step height to predict the next values, for faithful reproduction of fast-varying values.
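One common way to adapt the step size (an assumed rule used here only for illustration, not necessarily the one realized in the figure) is to enlarge it while successive output bits agree, which indicates a steep slope, and to shrink it when they alternate. A brief sketch follows.

```python
# Illustrative ADM sketch with an assumed adaptation rule: grow the step size
# while consecutive bits agree (steep slope), shrink it when they alternate.
import numpy as np

fs, f = 1000, 5                                    # assumed sample rate and tone
t = np.arange(0, 0.4, 1 / fs)
x = np.sin(2 * np.pi * f * t)

step, u, prev_bit = 0.05, 0.0, 1                   # assumed initial step and state
min_step, max_step, approx = 0.01, 0.5, []
for sample in x:
    bit = 1 if sample >= u else 0
    step = min(step * 1.5, max_step) if bit == prev_bit else max(step * 0.5, min_step)
    u += step if bit else -step
    prev_bit = bit
    approx.append(u)

print(np.round(approx[:10], 3))                    # adaptive stair-case approximation
```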
Data Encoding Techniques

Encoding is the process of converting data or a given sequence of characters, symbols, alphabets, etc., into a specified format, for the secure transmission of data. Decoding is the reverse process of encoding, which extracts the information from the converted format.

Data Encoding

Encoding is the process of using various patterns of voltage or current levels to represent the 1s and 0s of a digital signal on the transmission link. The common types of line encoding are Unipolar, Polar, Bipolar, and Manchester.

Encoding Techniques

The data encoding techniques are divided into the following types, depending upon the type of data conversion.

Analog data to Analog signals − The modulation techniques such as Amplitude Modulation, Frequency Modulation, and Phase Modulation of analog signals fall under this category.

Analog data to Digital signals − This process can be termed digitization, which is done by Pulse Code Modulation (PCM). Hence, it is nothing but digital modulation. As we have already discussed, sampling and quantization are the important factors in this. Delta Modulation gives a better output than PCM.

Digital data to Analog signals − The modulation techniques such as Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK), Phase Shift Keying (PSK), etc., fall under this category. These are discussed in subsequent chapters.

Digital data to Digital signals − These are covered in this section. There are several ways to map digital data to digital signals. Some of them are −

Non Return to Zero (NRZ)

NRZ codes have 1 for a High voltage level and 0 for a Low voltage level. The main behavior of NRZ codes is that the voltage level remains constant during the bit interval. The end or start of a bit is not indicated, and the same voltage state is maintained if the value of the previous bit and the value of the present bit are the same.

The following figure explains the concept of NRZ coding. If the above example is considered, where there is a long sequence at a constant voltage level, clock synchronization may be lost due to the absence of bit boundaries, and it becomes difficult for the receiver to differentiate between 0 and 1.

There are two variations of NRZ, namely −

NRZ – L (NRZ – LEVEL)

There is a change in the polarity of the signal only when the incoming signal changes from 1 to 0 or from 0 to 1. It is the same as NRZ; however, the first bit of the input signal should have a change of polarity.

NRZ – I (NRZ – INVERTED)

If a 1 occurs in the incoming signal, then a transition occurs at the beginning of the bit interval. For a 0 in the incoming signal, there is no transition at the beginning of the bit interval.

NRZ codes have the disadvantage that the synchronization of the transmitter clock with the receiver clock gets completely disturbed when there is a long string of 1s or 0s. Hence, a separate clock line needs to be provided.

Bi-phase Encoding

The signal level is checked twice for every bit time, both initially and in the middle. Hence, the clock rate is double the data transfer rate, and thus the modulation rate is also doubled. The clock is taken from the signal itself. The bandwidth required for this coding is greater.

There are two types of Bi-phase Encoding −

Bi-phase Manchester

Differential Manchester

Bi-phase Manchester

In this type of coding, the transition is made in the middle of the bit interval. The transition of the resultant pulse is from High to Low in the middle of the interval for the input bit 1.
For the input bit 0, the transition is from Low to High.

Differential Manchester

In this type of coding, a transition always occurs in the middle of the bit interval. If a transition occurs at the beginning of the bit interval, then the input bit is 0. If no transition occurs at the beginning of the bit interval, then the input bit is 1.

The following figure illustrates the waveforms of NRZ-L, NRZ-I, Bi-phase Manchester, and Differential Manchester coding for different digital inputs.

Block Coding

Among the types of block coding, the famous ones are 4B/5B encoding and 8B/6T encoding. The number of bits is processed in a different manner in each of these processes.

4B/5B Encoding

In Manchester encoding, a clock of double the speed is required to send the data, compared to NRZ coding. Here, as the name implies, each group of 4 data bits is mapped to a 5-bit code word that contains a guaranteed minimum number of 1 bits.

The clock synchronization problem of NRZ-I encoding is avoided by assigning an equivalent 5-bit word in place of each block of 4 consecutive bits. These 5-bit words are predetermined in a dictionary. The basic idea in selecting a 5-bit code is that it should have no more than one leading 0 and no more than two trailing 0s. Hence, the words are chosen such that sufficient transitions take place in every block of bits.

8B/6T Encoding

We have used two voltage levels to send a single bit over a single signal. But if we use more than two voltage levels, we can send more bits per signal element. For example, if 8 bits are mapped onto 6 signal elements that each take one of three voltage levels, such encoding is termed 8B/6T encoding. In this method, we have as many as 729 (3^6) signal combinations available to represent the 256 (2^8) bit combinations.

These are the techniques mostly used for converting digital data into digital signals, by compressing or coding them for the reliable transmission of data.
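As a summary of the line codes described in this chapter, the sketch below generates NRZ-L, NRZ-I, Bi-phase Manchester, and Differential Manchester level sequences for a short bit stream, similar to the waveforms in the figure referred to above. The ±1 level convention and the initial line levels are assumptions, since several conventions exist in practice.

```python
# Illustrative line-coding sketch (level conventions and initial states assumed).
def nrz_l(bits):
    return [+1 if b else -1 for b in bits]          # one level per bit

def nrz_i(bits, level=-1):
    out = []
    for b in bits:
        if b:                                       # transition at the start only for bit 1
            level = -level
        out.append(level)
    return out

def manchester(bits):
    # bit 1: High -> Low at mid-interval; bit 0: Low -> High (as described above)
    return [half for b in bits for half in ((+1, -1) if b else (-1, +1))]

def diff_manchester(bits, level=+1):
    out = []
    for b in bits:
        if b == 0:                                  # transition at the start only for bit 0
            level = -level
        out.extend((level, -level))                 # guaranteed mid-interval transition
        level = -level
    return out

bits = [1, 0, 1, 1, 0]
print(nrz_l(bits), nrz_i(bits), manchester(bits), diff_manchester(bits), sep="\n")
```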
Source Coding Theorem

The code produced by a discrete memoryless source has to be efficiently represented, which is an important problem in communications. For this to happen, there are code words which represent the source symbols.

For example, in telegraphy, we use Morse code, in which the letters are denoted by marks and spaces. If the letter E is considered, which is used most often, it is denoted by “.”, whereas the letter Q, which is used rarely, is denoted by “--.-”.

Let us take a look at the block diagram.

Here, $S_k$ is the output of the discrete memoryless source and $b_k$ is the output of the source encoder, which is represented by 0s and 1s. The encoded sequence is such that it is conveniently decoded at the receiver.

Let us assume that the source has an alphabet with k different symbols and that the kth symbol $S_k$ occurs with probability $p_k$, where k = 0, 1, …, k − 1. Let the binary code word assigned to symbol $S_k$ by the encoder have length $l_k$, measured in bits.

Hence, we define the average code word length $\overline{L}$ of the source encoder as

$$\overline{L} = \displaystyle\sum\limits_{k=0}^{k-1} p_k l_k$$

$\overline{L}$ represents the average number of bits per source symbol.

If $L_{min}$ = minimum possible value of $\overline{L}$, then the coding efficiency can be defined as

$$\eta = \frac{L_{min}}{\overline{L}}$$

With $\overline{L} \geq L_{min}$ we will have $\eta \leq 1$.

However, the source encoder is considered efficient when $\eta = 1$. For this, the value $L_{min}$ has to be determined.

Let us refer to the definition: “Given a discrete memoryless source of entropy $H(\delta)$, the average code-word length $\overline{L}$ for any source encoding is bounded as $\overline{L} \geq H(\delta)$.”

In simpler words, the code word (example: the Morse code for the word QUEUE is --.- ..- . ..- . ) is always at least as long as the source word (QUEUE in the example). This means the number of symbols in the code word is greater than or equal to the number of letters in the source word.

Hence, with $L_{min} = H(\delta)$, the efficiency of the source encoder in terms of the entropy $H(\delta)$ may be written as

$$\eta = \frac{H(\delta)}{\overline{L}}$$

This source coding theorem is called the noiseless coding theorem, as it establishes error-free encoding. It is also called Shannon's first theorem.
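A small numeric example may help fix these quantities. The probabilities and code-word lengths below are made up for illustration (they correspond to a simple prefix code such as 0, 10, 110, 111); the script computes the entropy, the average code-word length, and the resulting efficiency.

```python
# Illustrative computation of entropy H, average code-word length L-bar, and
# coding efficiency eta = H / L-bar, for assumed probabilities and lengths.
from math import log2

probs   = [0.5, 0.25, 0.125, 0.125]          # assumed symbol probabilities
lengths = [1, 2, 3, 3]                       # assumed code-word lengths (e.g. 0, 10, 110, 111)

H = -sum(p * log2(p) for p in probs)         # source entropy in bits/symbol
L_bar = sum(p * l for p, l in zip(probs, lengths))
print(f"H = {H} bits, L-bar = {L_bar} bits, efficiency = {H / L_bar:.2f}")
```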
Digital Modulation Techniques

Digital-to-analog conversion of signals is the next topic we will discuss in this chapter. These techniques are also called Digital Modulation techniques.

Digital modulation provides more information capacity, high data security, and quicker system availability with great quality communication. Hence, digital modulation techniques have a greater demand, for their capacity to convey larger amounts of data than analog modulation techniques.

There are many types of digital modulation techniques, and also combinations of them, depending upon the need. Of them all, we will discuss the prominent ones.

ASK – Amplitude Shift Keying

The amplitude of the resultant output depends upon the input data: it is either a zero level or a variation of positive and negative, depending upon the carrier frequency.

FSK – Frequency Shift Keying

The frequency of the output signal will be either high or low, depending upon the input data applied.

PSK – Phase Shift Keying

The phase of the output signal gets shifted depending upon the input. These are mainly of two types, namely Binary Phase Shift Keying (BPSK) and Quadrature Phase Shift Keying (QPSK), according to the number of phase shifts. Another variant is Differential Phase Shift Keying (DPSK), which changes the phase according to the previous value.

M-ary Encoding

M-ary encoding techniques are methods in which more than two bits are made to transmit simultaneously on a single signal. This helps in the reduction of bandwidth.

The types of M-ary techniques are −

M-ary ASK

M-ary FSK

M-ary PSK

All of these are discussed in subsequent chapters.
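To contrast the three basic keying rules before the detailed chapters, the short sketch below generates ASK, FSK, and BPSK waveforms from the same bit stream. The carrier frequency, sample rate, frequency pair, and samples per bit are arbitrary example values, and the FSK waveform is generated in a simple non-phase-continuous form.

```python
# Illustrative ASK / FSK / PSK waveforms from one bit stream (parameter values
# are arbitrary; the FSK here simply switches frequency without phase continuity).
import numpy as np

fs, fc, spb = 8000, 1000, 40                  # assumed sample rate, carrier, samples per bit
bits = np.array([1, 0, 1, 1, 0])
t = np.arange(len(bits) * spb) / fs
d = np.repeat(bits, spb)                      # rectangular data waveform

ask = d * np.cos(2 * np.pi * fc * t)                          # carrier on/off with the bit
fsk = np.cos(2 * np.pi * np.where(d == 1, 1.5 * fc, fc) * t)  # two frequencies
psk = np.cos(2 * np.pi * fc * t + np.pi * (1 - d))            # 0 deg / 180 deg phase
print(ask.shape, fsk.shape, psk.shape)        # three 200-sample waveforms
```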
Discuss Digital Communication

Digital communication is the process of devices communicating information digitally. This tutorial helps readers get a good idea of how signals are digitized and why digitization is needed. On completing this tutorial, the reader will be able to understand the conceptual details involved in digital communication.