DIP – Concept of Dithering

Concept of Dithering

In the last two tutorials, on quantization and contouring, we have seen that reducing the number of gray levels of an image reduces the number of colors required to denote it. If the gray levels are reduced to 2, the resulting image has little spatial resolution and is not very appealing.

Dithering

Dithering is the process by which we create the illusion of colors that are not actually present. It is done by the random arrangement of pixels.

For example, consider this image. It contains only black and white pixels. Its pixels are arranged in an order that forms another image, shown below. Note that the arrangement of the pixels has been changed, but not their quantity.

Why dithering?

Why do we need dithering? The answer lies in its relation with quantization.

Dithering with quantization

When we perform quantization down to the last level (level 2), the image looks like this. As we can see, the picture is not very clear, especially if you look at the left arm and back of the image of Einstein. This picture also does not carry much detail of Einstein. If we want to turn this image into one that gives more detail, we have to perform dithering.

Performing dithering

First of all, we will work on thresholding. Dithering is usually used to improve thresholding. During thresholding, sharp edges appear where gradients are smooth in an image. In thresholding, we simply choose a constant value: all the pixels above that value are considered 1, and all the values below it are considered 0. We got this image after thresholding. Since the values in this image are already 0 and 1 (black and white), there is not much change. Now we perform some random dithering on it, that is, a random rearrangement of pixels.
We got an image that gives slightly more detail, but its contrast is very low. So we do some more dithering to increase the contrast. The image that we got is this. Now we mix the concept of random dithering with thresholding, and we get an image like this. As you can see, we got all these images just by rearranging the pixels of an image. This rearranging could be random or could be according to some measure.
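The thresholding and random-dithering steps described above can be sketched in Python with NumPy. This is only a sketch: the noise range of ±64 and the function names are our own assumptions, not values from the text.

```python
import numpy as np

def threshold(img, t=128):
    # Plain thresholding: pixels above t become white (1), the rest black (0).
    return (img > t).astype(np.uint8)

def random_dither(img, t=128, noise_range=64, seed=0):
    # Random dithering: add uniform noise before thresholding, so that
    # mid-gray regions turn into a mix of black and white pixels whose
    # density approximates the original intensity.
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-noise_range, noise_range, size=img.shape)
    return (img + noise > t).astype(np.uint8)

# A smooth horizontal gradient: plain thresholding gives one hard edge,
# while random dithering scatters pixels in proportion to brightness.
gradient = np.tile(np.linspace(0, 255, 64), (64, 1))
hard = threshold(gradient)
dithered = random_dither(gradient)
```

Both outputs contain only 0s and 1s, like the black-and-white images above; the dithered one trades the single hard edge for a scattered pattern.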

DIP – Pixels Dots and Lines per inch

Pixels, Dots and Lines Per Inch

In the previous tutorial on spatial resolution, we gave a brief introduction to PPI, DPI, and LPI. Now we are going to discuss each of them formally.

Pixels per inch (PPI)

Pixel density, or pixels per inch, is a measure of spatial resolution for devices such as tablets and mobile phones. The higher the PPI, the higher the quality. To understand how it is calculated, let's calculate the PPI of a mobile phone.

Calculating the PPI of the Samsung Galaxy S4

The Samsung Galaxy S4 has a PPI, or pixel density, of 441. But how is it calculated? First we use the Pythagorean theorem to calculate the diagonal resolution in pixels:

c = sqrt(a^2 + b^2)

where a and b are the height and width resolutions in pixels and c is the diagonal resolution in pixels. For the Samsung Galaxy S4, the resolution is 1080 x 1920 pixels. Putting those values in the equation gives

c = 2202.90717

Now we calculate the PPI:

PPI = c / diagonal size in inches

The diagonal size of the Samsung Galaxy S4 is 5.0 inches, which can be confirmed anywhere.

PPI = 2202.90717 / 5.0 = 440.58 ≈ 441

That means the pixel density of the Samsung Galaxy S4 is 441 PPI.

Dots per inch (DPI)

DPI is often related to PPI, but there is a difference between the two. DPI, or dots per inch, is a measure of the spatial resolution of printers. In the case of printers, DPI means how many dots of ink are printed per inch when an image is printed out. Remember, it is not necessary that each pixel is printed by one dot; many dots may be used for printing one pixel. The reason behind this is that most color printers use the CMYK model, in which the colors are limited. The printer has to mix these colors to make the color of each pixel, whereas on a PC you have hundreds of thousands of colors.
The higher the DPI of the printer, the higher the quality of the printed document or image. Usually some laser printers have a DPI of 300 and some have 600 or more.

Lines per inch (LPI)

While DPI refers to dots per inch, LPI refers to lines of dots per inch. The resolution of a halftone screen is measured in lines per inch. The following table shows the lines-per-inch capacity of some printers.

Printer                          LPI
Screen printing                  45-65 lpi
Laser printer (300 dpi)          65 lpi
Laser printer (600 dpi)          85-105 lpi
Offset press (newsprint paper)   85 lpi
Offset press (coated paper)      85-185 lpi
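The PPI calculation worked through above can be reproduced in a few lines of Python (a sketch; the `ppi` helper is our own name):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # Diagonal resolution in pixels via the Pythagorean theorem,
    # divided by the physical diagonal in inches.
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_inches

# Samsung Galaxy S4: 1080 x 1920 pixels on a 5.0-inch diagonal.
print(round(ppi(1080, 1920, 5.0)))  # → 441
```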

DIP – Quick Guide

DIP – Quick Guide

Digital Image Processing Introduction

Introduction

Signal processing is a discipline in electrical engineering and in mathematics that deals with the analysis and processing of analog and digital signals, including storing, filtering, and other operations on signals. These signals include transmission signals, sound or voice signals, image signals, and others. Out of all these signals, the field that deals with signals for which the input is an image and the output is also an image is image processing. As its name suggests, it deals with processing images. It can be further divided into analog image processing and digital image processing.

Analog image processing

Analog image processing is done on analog signals. It includes processing of two-dimensional analog signals. In this type of processing, the images are manipulated by electrical means, by varying the electrical signal. A common example is the television image. Digital image processing has dominated over analog image processing with the passage of time due to its wider range of applications.

Digital image processing

Digital image processing deals with developing a digital system that performs operations on a digital image.

What is an image?

An image is nothing more than a two-dimensional signal. It is defined by the mathematical function f(x,y), where x and y are the two coordinates, horizontal and vertical. The value of f(x,y) at any point gives the pixel value of the image at that point.

The figure above is an example of a digital image that you are now viewing on your computer screen. But actually, this image is nothing but a two-dimensional array of numbers ranging between 0 and 255:

128  30 123 232
123 231 123  77
 89  80 255 255

Each number represents the value of the function f(x,y) at a point. In this case the values 128, 30, and 123 each represent an individual pixel value.
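The idea that a digital image is just a two-dimensional array of numbers can be seen directly in code. A small NumPy sketch (the sample values mirror the illustrative matrix above, which is assumed, not taken from a real image):

```python
import numpy as np

# A grayscale image is a 2-D array f(x, y) of intensities in [0, 255].
img = np.array([[128,  30, 123, 232],
                [123, 231, 123,  77],
                [ 89,  80, 255, 255]], dtype=np.uint8)

print(img.shape)    # dimensions of the picture = dimensions of the array
print(img[0, 0])    # f(0, 0) = 128, an individual pixel value
```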
The dimensions of the picture are actually the dimensions of this two-dimensional array.

Relationship between a digital image and a signal

If the image is a two-dimensional array, then what does it have to do with a signal? To understand that, we first need to understand what a signal is.

Signal

In the physical world, any quantity measurable through time, over space, or over any higher dimension can be taken as a signal. A signal is a mathematical function that conveys some information. A signal can be one-dimensional, two-dimensional, or higher-dimensional. A one-dimensional signal is measured over time; the common example is a voice signal. Two-dimensional signals are measured over other physical quantities; the example of a two-dimensional signal is a digital image. We will look in more detail in the next tutorial at how one-dimensional, two-dimensional, and higher-dimensional signals are formed and interpreted.

Relationship

Anything that conveys information or broadcasts a message between two observers in the physical world is a signal. That includes speech (the human voice) or an image. When we speak, our voice is converted to a sound wave and transmitted with respect to time to the person we are speaking to. Similarly, the way a digital camera works, acquiring an image involves the transfer of a signal from one part of the system to another.

How a digital image is formed

Capturing an image with a camera is a physical process. Sunlight is used as a source of energy, and a sensor array is used for the acquisition of the image. When sunlight falls upon an object, the amount of light reflected by that object is sensed by the sensors, and a continuous voltage signal is generated from the sensed data. In order to create a digital image, we need to convert this data into digital form. This involves sampling and quantization.
(They are discussed later on.) Sampling and quantization result in a two-dimensional array or matrix of numbers, which is nothing but a digital image.

Overlapping fields

Machine/computer vision

Machine vision or computer vision deals with developing a system in which the input is an image and the output is some information. For example: developing a system that scans a human face and opens a lock. Such a system would look something like this.

Computer graphics

Computer graphics deals with the formation of images from object models, rather than capturing the image with some device. For example: object rendering, i.e. generating an image from an object model. Such a system would look something like this.

Artificial intelligence

Artificial intelligence is, more or less, the study of putting human intelligence into machines. It has many applications in image processing, for example developing computer-aided diagnosis systems that help doctors interpret images such as X-rays and MRIs, highlighting conspicuous sections to be examined by the doctor.

Signal processing

Signal processing is an umbrella, and image processing lies under it. The amount of light reflected by an object in the physical (3D) world passes through the lens of the camera and becomes a 2D signal, hence resulting in image formation. This image is then digitized using methods of signal processing, and the resulting digital image is manipulated in digital image processing.

Signals and Systems Introduction

This tutorial covers the basics of signals and systems necessary for understanding the concepts of digital image processing. Before going into the detailed concepts, let's first define the simple terms.

Signals

In electrical engineering, the fundamental quantity representing some information is called a signal. It does not matter what the information is, i.e. analog or digital information. In mathematics, a signal is a function

DIP – Histogram Equalization

Histogram Equalization

We have already seen that contrast can be increased using histogram stretching. In this tutorial we will see how histogram equalization can be used to enhance contrast. Before performing histogram equalization, you must know two important concepts used in equalizing histograms: PMF and CDF. They are discussed in our tutorials of PMF and CDF; please visit them in order to successfully grasp the concept of histogram equalization.

Histogram equalization

Histogram equalization is used to enhance contrast. It is not necessary that contrast will always be increased; there may be cases where histogram equalization makes things worse, and in those cases the contrast is decreased.

Let's start histogram equalization by taking the simple image below.

Image

Histogram of this image

The histogram of this image is shown below. Now we will perform histogram equalization on it.

PMF

First we have to calculate the PMF (probability mass function) of all the pixels in this image. If you do not know how to calculate the PMF, please visit our tutorial of PMF calculation.

CDF

Our next step involves calculating the CDF (cumulative distribution function). Again, if you do not know how to calculate the CDF, please visit our tutorial of CDF calculation.

Calculate the CDF according to gray levels

Let's assume, for instance, that the CDF calculated in the second step looks like this:

Gray Level Value   CDF
0                  0.11
1                  0.22
2                  0.55
3                  0.66
4                  0.77
5                  0.88
6                  0.99
7                  1

Then in this step you multiply the CDF value by (number of gray levels - 1). Considering we have a 3 bpp image, the number of levels is 8, and 8 minus 1 is 7, so we multiply the CDF by 7. Here is what we got after multiplying:

Gray Level Value   CDF    CDF * (Levels-1)
0                  0.11   0
1                  0.22   1
2                  0.55   3
3                  0.66   4
4                  0.77   5
5                  0.88   6
6                  0.99   6
7                  1      7

Now comes the last step, in which we map the new gray level values onto the numbers of pixels.
Let's assume our old gray level values have these numbers of pixels:

Gray Level Value   Frequency
0                  2
1                  4
2                  6
3                  8
4                  10
5                  12
6                  14
7                  16

Now if we map our new values onto them, this is what we get:

Gray Level Value   New Gray Level Value   Frequency
0                  0                      2
1                  1                      4
2                  3                      6
3                  4                      8
4                  5                      10
5                  6                      12
6                  6                      14
7                  7                      16

Now map these new values onto the histogram, and you are done. Let's apply this technique to our original image. After applying it, we got the following image and its histogram.

Histogram equalized image

Cumulative distribution function of this image

Histogram equalization histogram

Comparing both the histograms and images

Conclusion

As you can clearly see from the images, the contrast of the new image has been enhanced and its histogram has also been equalized. One more important thing to note here: during histogram equalization the overall shape of the histogram changes, whereas in histogram stretching the overall shape of the histogram remains the same.
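The worked example above can be reproduced in a few lines of NumPy. This is a sketch that truncates CDF * (levels - 1) the way the tables do; the CDF values are the ones assumed in the example, not measured from a real image.

```python
import numpy as np

# CDF values for gray levels 0..7, taken from the worked example.
cdf = np.array([0.11, 0.22, 0.55, 0.66, 0.77, 0.88, 0.99, 1.00])
levels = 8  # 3 bpp image

# Multiply the CDF by (levels - 1) and truncate, as in the tables.
mapping = np.floor(cdf * (levels - 1)).astype(int)
print(mapping)  # → [0 1 3 4 5 6 6 7]

# Re-map the old gray levels; pixel counts travel with their level.
freq = [2, 4, 6, 8, 10, 12, 14, 16]
new_hist = np.zeros(levels, dtype=int)
for old_level, count in enumerate(freq):
    new_hist[mapping[old_level]] += count
print(new_hist)
```

Note that old levels 5 and 6 both map to new level 6, so their pixel counts merge (12 + 14 = 26); the total number of pixels is unchanged.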

DIP – Concept of Blurring

Concept of Blurring

A brief introduction to blurring was given in our previous tutorial on the concept of masks, but here we are going to discuss it formally.

Blurring

In blurring, we simply blur an image. An image looks sharper or more detailed if we are able to perceive all the objects and their shapes in it correctly. For example, an image with a face looks clear when we are able to identify the eyes, ears, nose, lips, forehead, etc. very clearly. The shape of an object is due to its edges. So in blurring, we simply reduce the edge content and make the transition from one color to another very smooth.

Blurring vs zooming

You might have seen a blurred image when you zoom an image. When you zoom an image using pixel replication and increase the zooming factor, you see a blurred image. This image also has less detail, but it is not true blurring. In zooming, you add new pixels to an image, which increases the overall number of pixels, whereas in blurring the number of pixels of the normal image and the blurred image remains the same.

Common example of a blurred image

Types of filters

Blurring can be achieved in many ways. The common types of filters used to perform blurring are:

Mean filter
Weighted average filter
Gaussian filter

Out of these three, we are going to discuss the first two here; the Gaussian filter will be discussed in upcoming tutorials.

Mean filter

The mean filter is also known as the box filter or average filter. A mean filter has the following properties:

It must be odd ordered
The sum of all the elements should be 1
All the elements should be the same

If we follow these rules for a 3x3 mask, we get the following result:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Since it is a 3x3 mask, it has 9 cells. The condition that the sum of all elements should be equal to 1 is achieved by dividing each value by 9, as
1/9 + 1/9 + 1/9 + 1/9 + 1/9 + 1/9 + 1/9 + 1/9 + 1/9 = 9/9 = 1.

The result of a 3x3 mask on an image is shown below.

Original Image    Blurred Image

Maybe the results are not very clear. Let's increase the blurring. Blurring can be increased by increasing the size of the mask: the greater the size of the mask, the greater the blurring, because a greater mask caters to a greater number of pixels and defines one smooth transition.

The result of a 5x5 mask on an image is shown below.

Original Image    Blurred Image

In the same way, if we increase the mask further, the blurring increases; the results are shown below.

The result of a 7x7 mask on an image is shown below.

Original Image    Blurred Image

The result of a 9x9 mask on an image is shown below.

Original Image    Blurred Image

The result of an 11x11 mask on an image is shown below.

Original Image    Blurred Image

Weighted average filter

In a weighted average filter, we give more weight to the center value, so the contribution of the center becomes greater than that of the rest of the values. Through weighted average filtering, we can actually control the blurring. The properties of the weighted average filter are:

It must be odd ordered
The sum of all the elements should be 1
The weight of the center element should be greater than that of all the other elements

Filter 1

1 1 1
1 2 1
1 1 1

Two of the properties are satisfied (1 and 3), but property 2 is not. To satisfy it, we simply divide the whole filter by 10 (the sum of its elements), i.e. multiply it by 1/10.

Filter 2

1  1  1
1 10  1
1  1  1

Dividing factor = 18.
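A mean filter like the masks above can be sketched in NumPy. This is only a sketch: the `mode='edge'` border handling is our own choice, not something specified in the text.

```python
import numpy as np

def mean_filter(img, size=3):
    # Box/mean filter: each output pixel is the average of a size x size
    # neighborhood. Odd size, equal weights, weights summing to 1.
    assert size % 2 == 1, "mask must be odd ordered"
    k = size // 2
    padded = np.pad(img.astype(float), k, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            # Accumulate the shifted copies, then divide by the cell count.
            out += padded[k + dy : k + dy + h, k + dx : k + dx + w]
    return out / size**2
```

Applying it to a flat image leaves it unchanged, while a sharp black-to-white step becomes a smooth transition, which is exactly the edge-softening described above.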

DIP – Frequency Domain Analysis

Introduction to Frequency Domain

We have dealt with images in many domains; now we are going to process signals (images) in the frequency domain. Since Fourier series and the frequency domain are pure mathematics, we will try to minimize the math part and focus more on its use in DIP.

Frequency domain analysis

Until now, in all the domains in which we have analyzed a signal, we analyzed it with respect to time. But in the frequency domain we don't analyze a signal with respect to time, but with respect to frequency.

Difference between spatial domain and frequency domain

In the spatial domain, we deal with images as they are: the values of the pixels change with respect to the scene. In the frequency domain, we deal with the rate at which the pixel values change in the spatial domain. For simplicity, let's put it this way.

Spatial domain

In the simple spatial domain, we deal directly with the image matrix.

Frequency domain

In the frequency domain, we first transform the image to its frequency distribution. Then our black box system performs whatever processing it has to perform; the output of the black box in this case is not an image but a transform. After performing the inverse transformation, it is converted back into an image, which is then viewed in the spatial domain. It can be pictorially viewed as shown.

Here we have used the word transformation. What does it actually mean?

Transformation

A signal can be converted from the time domain into the frequency domain using mathematical operators called transforms. There are many kinds of transformations that do this. Some of them are given below:

Fourier series
Fourier transform
Laplace transform
Z transform

Out of all of these, we will thoroughly discuss the Fourier series and Fourier transform in our next tutorial.

Frequency components

Any image in the spatial domain can be represented in the frequency domain. But what do these frequencies actually mean?
We divide the frequency components into two major categories.

High frequency components

High frequency components correspond to edges in an image.

Low frequency components

Low frequency components in an image correspond to smooth regions.
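The pipeline above — forward transform, processing, inverse transform — can be sketched with NumPy's FFT. A sketch only: nothing is done in the "black box" here, so the image comes back unchanged, and the smooth test image is our own choice.

```python
import numpy as np

img = np.outer(np.hanning(32), np.hanning(32))  # a smooth 32x32 test image

F = np.fft.fft2(img)             # forward transform: frequency distribution
F_centered = np.fft.fftshift(F)  # shift low frequencies to the center

# A smooth image concentrates its energy in the low frequencies
# (the center of the shifted spectrum), as described above.
low = np.abs(F_centered[14:18, 14:18]).sum()
total = np.abs(F_centered).sum()

restored = np.fft.ifft2(F).real  # inverse transform: back to spatial domain
```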

DIP – Perspective Transformation

Perspective Transformation

When human eyes see things that are near, they look bigger compared to things that are far away. This is called perspective in a general way, whereas a transformation is the transfer of an object from one state to another. So overall, the perspective transformation deals with the conversion of the 3D world into a 2D image. This is the same principle on which human vision works, and the same principle on which a camera works.

We will see in detail why this happens: why objects that are near you look bigger, while those that are far away look smaller, even though they look bigger when you reach them. We will start this discussion with the concept of a frame of reference.

Frame of reference

A frame of reference is basically a set of values in relation to which we measure something.

5 frames of reference

In order to analyze a 3D world/image/scene, 5 different frames of reference are required:

Object
World
Camera
Image
Pixel

Object coordinate frame

The object coordinate frame is used for modeling objects, for example checking whether a particular object is in a proper place with respect to another object. It is a 3D coordinate system.

World coordinate frame

The world coordinate frame is used for co-relating objects in a three-dimensional world. It is a 3D coordinate system.

Camera coordinate frame

The camera coordinate frame is used to relate objects with respect to the camera. It is a 3D coordinate system.

Image coordinate frame

This is not a 3D coordinate system; rather, it is a 2D system. It is used to describe how 3D points are mapped onto a 2D image plane.

Pixel coordinate frame

This is also a 2D coordinate system. Each pixel has a pair of pixel coordinates.

Transformation between these 5 frames

That is how a 3D scene is transformed into a 2D image of pixels. Now we will explain this concept mathematically.
y = -f * Y / Z

where

Y = size of the 3D object
y = size of the 2D image
f = focal length of the camera
Z = distance between the object and the camera

There are two different angles formed in this transformation, both represented by Q. The first angle is

tan(Q) = -y / f

where the minus sign denotes that the image is inverted. The second angle is

tan(Q) = Y / Z

Comparing these two equations, we get

-y / f = Y / Z, i.e. y = -f * Y / Z

From this equation we can see that when the rays of light reflected from the object pass through the camera, an inverted image is formed. We can understand this better with an example.

Example: calculating the size of the image formed

Suppose a picture has been taken of a person 5 m tall, standing at a distance of 50 m from the camera, with a camera of focal length 50 mm. What is the size of the image of the person?

Solution: Since the focal length is in millimeters, we have to convert everything into millimeters in order to calculate it. So

Y = 5000 mm
f = 50 mm
Z = 50000 mm

Putting the values into the formula, we get

y = -(50 * 5000) / 50000 = -5 mm

Again, the minus sign indicates that the image is inverted.
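The worked example can be checked with a one-line function (a sketch; `image_size` is our own helper name):

```python
def image_size(Y_mm, Z_mm, f_mm):
    # Perspective (pinhole) projection: y = -f * Y / Z.
    # The minus sign means the image is inverted.
    return -f_mm * Y_mm / Z_mm

# Person 5 m tall, 50 m away, 50 mm focal length (the example above).
print(image_size(5000, 50000, 50))  # → -5.0
```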


DIP – Color Codes Conversion

Color Codes Conversion

In this tutorial, we will see how different color codes can be combined to make other colors, and how we can convert RGB color codes to hex and vice versa.

Different color codes

All the colors here are in the 24-bit format; that means each color has 8 bits of red, 8 bits of green, and 8 bits of blue in it. Or we can say each color has three different portions; you just have to change the quantities of these three portions to make any color.

Binary color format

Color: Black
Decimal code: (0, 0, 0)
Explanation: As explained in the previous tutorials, in an 8-bit format, 0 refers to black. So to make pure black, we set all three portions R, G, and B to 0.

Color: White
Decimal code: (255, 255, 255)
Explanation: Each portion of R, G, and B is an 8-bit portion, and in 8 bits white is represented by 255, as explained in the tutorial of pixels. So to make white we set each portion to 255, and that is how we get white.

RGB color model

Color: Red
Decimal code: (255, 0, 0)
Explanation: Since we need only red, we zero out the other two portions, green and blue, and set the red portion to its maximum, which is 255.

Color: Green
Decimal code: (0, 255, 0)
Explanation: Since we need only green, we zero out the other two portions, red and blue, and set the green portion to its maximum, which is 255.

Color: Blue
Decimal code: (0, 0, 255)
Explanation: Since we need only blue, we zero out the other two portions, red and green, and set the blue portion to its maximum, which is 255.

Gray color

Color: Gray
Decimal code: (128, 128, 128)
Explanation: As we defined in our tutorial of pixels, gray is actually the mid point.
In an 8-bit format, the mid point is 128 or 127; in this case we choose 128. So we set each portion to the mid point, 128, which results in an overall mid value, and we get gray.

CMYK color model

CMYK is another color model, where C stands for cyan, M stands for magenta, Y stands for yellow, and K for black. The CMYK model is commonly used in color printers, in which two cartridges of color are used: one consists of CMY and the other consists of black. The colors of CMY can also be made by changing the quantity or portion of red, green, and blue.

Color: Cyan
Decimal code: (0, 255, 255)
Explanation: Cyan is formed from the combination of two colors, green and blue. So we set those two to maximum and we zero out the portion of red, and we get cyan.

Color: Magenta
Decimal code: (255, 0, 255)
Explanation: Magenta is formed from the combination of two colors, red and blue. So we set those two to maximum and we zero out the portion of green, and we get magenta.

Color: Yellow
Decimal code: (255, 255, 0)
Explanation: Yellow is formed from the combination of two colors, red and green. So we set those two to maximum and we zero out the portion of blue, and we get yellow.

Conversion

Now we will see how colors are converted from one format to another.

Conversion from RGB to hex code

Conversion from RGB to hex is done through this method:

Take a color, e.g. white = (255, 255, 255).
Take the first portion, e.g. 255, and divide it by 16. Take the quotient and the remainder: here both are 15, which in hex is F, so 255 becomes FF.
Repeat the previous step for the other two portions.
Combine all the hex codes into one.

Answer: #FFFFFF

Conversion from hex to RGB

Conversion from a hex code to RGB decimal format is done in this way. Take a hex number.
E.g. #FFFFFF. Break this number into 3 parts: FF FF FF. Take the first part and separate its components: F F. Convert each component separately into binary: (1111) (1111). Now combine the individual binaries into one: 11111111. Convert this binary into decimal: 255. Repeat for the remaining two parts. The value from the first part is R, the second is G, and the third belongs to B.

Answer: (255, 255, 255)

Common colors and their hex codes are given in this table.

Color     Hex Code
Black     #000000
White     #FFFFFF
Gray      #808080
Red       #FF0000
Green     #00FF00
Blue      #0000FF
Cyan      #00FFFF
Magenta   #FF00FF
Yellow    #FFFF00
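Both conversion procedures can be sketched in a few lines of Python (the function names are our own):

```python
def rgb_to_hex(r, g, b):
    # Each 8-bit channel becomes two hex digits: the quotient and remainder
    # of dividing by 16 (e.g. 255 -> 15 and 15 -> "FF").
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(code):
    # Split "#RRGGBB" into three 2-digit parts and convert each to decimal.
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(255, 255, 255))  # → #FFFFFF
print(hex_to_rgb("#808080"))      # → (128, 128, 128)
```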

DIP – JPEG compression

Introduction to JPEG Compression

In our last tutorial on image compression, we discussed some of the techniques used for compression. Here we are going to discuss JPEG compression, which is a lossy compression, as some data is lost in the end. Let's first discuss what image compression is.

Image compression

Image compression is the application of data compression to digital images. The main objectives of image compression are to:

Store data in an efficient form
Transmit data in an efficient form

Image compression can be lossy or lossless.

JPEG compression

JPEG stands for Joint Photographic Experts Group. It was the first international standard in image compression, and it is widely used today. It can be lossy as well as lossless, but the technique we are going to discuss here is the lossy compression technique.

How JPEG compression works

The first step is to divide the image into blocks, each with dimensions of 8 x 8. Let's say, for the record, that this 8x8 block contains the following values. The range of the pixel intensities is 0 to 255; we change the range to -128 to 127 by subtracting 128 from each pixel value. After subtracting 128 from each pixel value, we got the following results.

Now we compute the 2-D discrete cosine transform (DCT) of the block; the result is stored in, let's say, a matrix A(j,k). There is a standard matrix used in JPEG compression for quantizing the coefficients, called the luminance matrix, which is given below. Applying the quantization formula, we got this result.

Now we perform the real trick of JPEG compression, which is the zig-zag movement. The zig-zag sequence for the above matrix is shown below. You have to follow the zig-zag scan until you find all zeroes ahead. Hence our image is now compressed.

Summarizing JPEG compression

The first step is to convert the image to Y'CbCr, pick just the Y' channel, and break it into 8 x 8 blocks.
Then, starting from the first block, map the range from -128 to 127. After that, compute the discrete cosine transform (DCT) of the matrix. The result of this should be quantized. The last step is to apply encoding in the zig-zag manner, continuing until you find all zeroes. Save this one-dimensional array and you are done.

Note: You have to repeat this procedure for every 8 x 8 block.
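The level-shift and DCT steps above can be sketched in NumPy. A sketch only: real JPEG codecs use fast DCT implementations and then quantize with the luminance matrix; the flat sample block is our own assumption.

```python
import numpy as np

def dct2(block):
    # 2-D DCT-II of an 8x8 block, the transform used in JPEG.
    n = block.shape[0]
    k = np.arange(n)
    # Orthonormal DCT-II basis matrix: rows are frequencies, columns samples.
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1 / n)
    return C @ block @ C.T

block = np.full((8, 8), 200.0)   # a flat 8x8 block of gray pixels
shifted = block - 128            # map the 0..255 range to -128..127
coeffs = dct2(shifted)

# A flat block puts all its energy in the single DC coefficient; every other
# coefficient is (numerically) zero, so the zig-zag scan ends immediately.
print(round(coeffs[0, 0]))  # → 576
```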