Robinson Compass Mask

Robinson compass masks are another type of derivative mask used for edge detection. This operator is also known as a direction mask. In this operator we take one mask and rotate it in all eight major compass directions:

North
North West
West
South West
South
South East
East
North East

There is no fixed mask. You can take any mask and rotate it to find edges in all of the directions mentioned above. All the masks are rotated on the basis of the direction of the zero columns.

For example, let's take the following mask, which is in the north direction, and rotate it to make all the direction masks.

North direction mask
-1  0  1
-2  0  2
-1  0  1

North west direction mask
 0  1  2
-1  0  1
-2 -1  0

West direction mask
 1  2  1
 0  0  0
-1 -2 -1

South west direction mask
 2  1  0
 1  0 -1
 0 -1 -2

South direction mask
 1  0 -1
 2  0 -2
 1  0 -1

South east direction mask
 0 -1 -2
 1  0 -1
 2  1  0

East direction mask
-1 -2 -1
 0  0  0
 1  2  1

North east direction mask
-2 -1  0
-1  0  1
 0  1  2

As you can see, all the directions are covered on the basis of the direction of the zeros. Each mask gives you the edges in its own direction.

Now let's see the result of applying all of the above masks. Suppose we have a sample picture in which we have to find all the edges. Applying each of the eight filters to this image produces one edge map per direction: north, north west, west, south west, south, south east, east and north east.

As you can see, by applying all of the above masks you get edges in every direction. The result also depends on the image: if an image has no north east direction edges, for example, then that particular mask will be ineffective.
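Below is a minimal sketch in Python with NumPy and SciPy of how the eight Robinson compass masks could be generated by rotating the north mask and applied to a grayscale image. The rotation helper, the random stand-in image and the choice of combining the responses with a maximum are illustrative assumptions, not part of the original tutorial.

import numpy as np
from scipy.signal import convolve2d

north = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=float)

def rotate_mask(mask, steps):
    # Rotate the eight outer coefficients of the 3x3 mask; the centre stays 0.
    ring_idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    ring = [mask[i, j] for i, j in ring_idx]
    rotated = mask.copy()
    for k, (i, j) in enumerate(ring_idx):
        rotated[i, j] = ring[(k + steps) % 8]
    return rotated

# One rotation step per listed direction reproduces the eight masks above.
directions = ["N", "NW", "W", "SW", "S", "SE", "E", "NE"]
masks = {d: rotate_mask(north, s) for s, d in enumerate(directions)}

def robinson_edges(img):
    # img: 2-D grayscale array; returns one edge map per compass direction.
    return {d: convolve2d(img, m, mode="same", boundary="symm")
            for d, m in masks.items()}

img = np.random.rand(64, 64)          # stand-in for the sample picture
edges = robinson_edges(img)
# One simple way to combine the eight responses into a single edge map.
combined = np.max(np.abs(np.stack(list(edges.values()))), axis=0)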
DIP – Applications and Usage
Applications and Usage

Since digital image processing has very wide applications and almost all technical fields are impacted by DIP, we will discuss only some of the major applications of DIP.

Digital image processing is not just limited to adjusting the spatial resolution of everyday images captured by a camera, or to increasing the brightness of a photo. It is far more than that.

Electromagnetic waves can be thought of as a stream of particles, where each particle moves with the speed of light and carries a bundle of energy called a photon. The electromagnetic spectrum can be ordered according to the energy of the photon.

In this electromagnetic spectrum, we are only able to see the visible spectrum. The visible spectrum mainly includes seven different colors, commonly abbreviated as VIBGYOR: violet, indigo, blue, green, yellow, orange and red. But that does not nullify the existence of the other parts of the spectrum. The human eye can only see the visible portion, in which we see all objects. A camera, however, can see things that the naked eye cannot, for example X rays, gamma rays, etc. Hence the analysis of all of that is also done in digital image processing.

This discussion leads to another question: why do we need to analyze the rest of the EM spectrum too? The answer lies in the fact that other radiation, such as X rays, is widely used in the medical field. The analysis of gamma rays is necessary because they are used widely in nuclear medicine and astronomical observation. The same goes for the rest of the EM spectrum.

Applications of Digital Image Processing

Some of the major fields in which digital image processing is widely used are mentioned below:

Image sharpening and restoration
Medical field
Remote sensing
Transmission and encoding
Machine/Robot vision
Color processing
Pattern recognition
Video processing
Microscopic imaging
Others

Image sharpening and restoration

Image sharpening and restoration here refers to processing images captured by a modern camera to make them better, or manipulating them to achieve a desired result. It refers to what Photoshop usually does. This includes zooming, blurring, sharpening, gray scale to color conversion and vice versa, detecting edges, image retrieval and image recognition. Common examples are the original image, the zoomed image, a blurred image, a sharpened image and an edge map.

Medical field

The common applications of DIP in the medical field are:

Gamma ray imaging
PET scan
X ray imaging
Medical CT
UV imaging

Remote sensing

In the field of remote sensing, an area of the earth is scanned by a satellite or from very high ground and then analyzed to obtain information about it. One particular application of digital image processing in remote sensing is detecting the infrastructure damage caused by an earthquake.

It takes a long time to grasp the extent of damage, even when only serious damage is considered. The area affected by an earthquake is sometimes so wide that it is not possible to examine it with the human eye to estimate the damage, and even when it is possible, it is a very hectic and time-consuming procedure. A solution is found in digital image processing: an image of the affected area is captured from above and then analyzed to detect the various types of damage done by the earthquake.
The key steps included in the analysis are:

The extraction of edges
Analysis and enhancement of various types of edges

Transmission and encoding

The very first image transmitted over a wire was sent from London to New York via a submarine cable, and it took three hours to reach the other side. Now consider that today we can watch a live video feed or live CCTV footage from one continent to another with a delay of mere seconds. It means that a lot of work has been done in this field too. This field does not focus only on transmission, but also on encoding. Many different formats have been developed for high and low bandwidth to encode photos and then stream them over the internet.

Machine/Robot vision

Apart from the many challenges a robot faces today, one of the biggest is still to improve the vision of the robot: making the robot able to see things, identify them, identify obstacles, etc. Much work has been contributed by this field, and a complete other field, computer vision, has been introduced to work on it.

Hurdle detection

Hurdle detection is one of the common tasks done through image processing, by identifying the different types of objects in an image and then calculating the distance between the robot and the hurdles.

Line follower robot

Most robots today work by following a line and are thus called line follower robots. This helps a robot move along its path and perform some tasks. This has also been achieved through image processing.

Color processing

Color processing includes the processing of colored images and the different color spaces that are used, for example the RGB color model, YCbCr and HSV. It also involves studying the transmission, storage and encoding of these color images.

Pattern recognition

Pattern recognition involves study from image processing and from various other fields, including machine learning (a branch of artificial intelligence). In pattern recognition, image processing is used to identify the objects in an image, and machine learning is then used to train the system for changes in the pattern. Pattern recognition is used in computer-aided diagnosis, among other applications.
DIP – Concept of Masks
Concept of Mask

What is a mask?

A mask is a filter. The concept of masking is also known as spatial filtering, and masking is also known as filtering. In this concept we deal only with filtering operations that are performed directly on the image.

A sample mask is shown below:

-1  0  1
-1  0  1
-1  0  1

What is filtering?

The process of filtering is also known as convolving a mask with an image. As this process is the same as convolution, filter masks are also known as convolution masks.

How is it done?

The general process of filtering and applying masks consists of moving the filter mask from point to point in the image. At each point (x,y) of the original image, the response of the filter is calculated by a predefined relationship. All the filter values are predefined and standard.

Types of filters

Generally there are two types of filters. One type is called linear filters or smoothing filters, and the other is called frequency domain filters.

Why are filters used?

Filters are applied to images for multiple purposes. The two most common uses are:

Filters are used for blurring and noise reduction
Filters are used for edge detection and sharpness

Blurring and noise reduction

Filters are most commonly used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as the removal of small details from an image prior to large object extraction.

Masks for blurring

The common masks for blurring are:

Box filter
Weighted average filter

In the process of blurring we reduce the edge content of an image and try to make the transitions between different pixel intensities as smooth as possible. Noise reduction is also possible with the help of blurring.

Edge detection and sharpness

Masks or filters can also be used for edge detection in an image and to increase the sharpness of an image.

What are edges?

Sudden changes or discontinuities in an image are called edges; significant transitions in an image are called edges. A picture and the same picture with its edges detected illustrate this.
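A minimal sketch, in Python with NumPy, of the filtering process described above: the mask is moved from point to point over the image, and the response at each point is the sum of the mask values multiplied by the underlying pixels. The function name apply_mask, the edge-replication border handling and the random test image are assumptions made for illustration.

import numpy as np

def apply_mask(img, mask):
    # Slide the mask over every pixel (spatial filtering); border pixels are
    # handled by replicating the edge rows and columns.
    kh, kw = mask.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            region = padded[y:y + kh, x:x + kw]
            out[y, x] = np.sum(region * mask)
    return out

box_blur = np.ones((3, 3)) / 9.0              # box filter
weighted = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 16.0       # weighted average filter

img = np.random.rand(32, 32)                  # stand-in grayscale image
smoothed = apply_mask(img, weighted)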
DIP – Gray Level Resolution
Gray Level Resolution

Image resolution

A resolution can be defined as the total number of pixels in an image. This has been discussed in the Image resolution tutorial. We have also discussed that the clarity of an image does not depend on the number of pixels but on the spatial resolution of the image; this was covered in the spatial resolution tutorial. Here we are going to discuss another type of resolution, called gray level resolution.

Gray level resolution

Gray level resolution refers to the predictable or deterministic change in the shades or levels of gray in an image. In short, gray level resolution is equal to the number of bits per pixel. We have already discussed bits per pixel in our tutorial on bits per pixel and image storage requirements, but we will define bpp here briefly.

BPP

The number of different colors in an image depends on the color depth, or bits per pixel.

Mathematically

The mathematical relation between gray level resolution and bits per pixel is:

L = 2^k

In this equation L refers to the number of gray levels, which can also be described as the shades of gray, and k refers to bpp, or bits per pixel. So 2 raised to the power of the bits per pixel is equal to the gray level resolution.

For example, the image of Einstein used in earlier tutorials is a gray scale image with 8 bits per pixel, or 8 bpp. To calculate its gray level resolution:

L = 2^8 = 256

It means its gray level resolution is 256; in other words, this image has 256 different shades of gray. The higher the bits per pixel of an image, the higher its gray level resolution.

Defining gray level resolution in terms of bpp

It is not necessary that gray level resolution be defined only in terms of levels; we can also define it in terms of bits per pixel. For example, if you are given an image of 4 bpp and are asked to calculate its gray level resolution, there are two answers: 16 levels, or 4 bits.

Finding bpp from gray level resolution

You can also find the bits per pixel from a given gray level resolution. For this, we just have to rearrange the formula a little.

Equation 1: L = 2^k. This formula finds the levels.

To find the bits per pixel, in this case k, we simply invert it:

Equation 2: k = log2(L)

Because in the first equation the relationship between the levels L and the bits per pixel k is exponential, we invert it, and the inverse of an exponential is the logarithm.

Let's take an example of finding the bits per pixel from the gray level resolution. If you are given an image with 256 levels, what is the bits per pixel required for it? Putting 256 in the equation, we get:

k = log2(256) = 8

So the answer is 8 bits per pixel.

Gray level resolution and quantization

Quantization will be formally introduced in the next tutorial; here we just explain the relationship between gray level resolution and quantization. Gray level resolution is found on the y axis of the signal. In the tutorial Introduction to signals and systems, we studied that digitizing an analog signal requires two steps: sampling and quantization. Sampling is done on the x axis, and quantization is done on the y axis. That means digitizing the gray level resolution of an image is done in quantization.
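The relation L = 2^k and its inverse k = log2(L) can be checked with a few lines of Python; the function names below are just illustrative.

import math

def gray_levels(bpp):
    # L = 2 ** k : number of gray levels for k bits per pixel
    return 2 ** bpp

def bits_per_pixel(levels):
    # k = log2(L) : bits per pixel needed for L gray levels
    return int(math.log2(levels))

print(gray_levels(8))        # 256 shades of gray for an 8 bpp image
print(bits_per_pixel(256))   # 8 bits per pixel for 256 levels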
Gray Level Transformation

We have discussed some of the basic transformations in our tutorial on basic transformation. In this tutorial we will look at some of the basic gray level transformations.

Image enhancement

Enhancing an image provides better contrast and a more detailed image compared to the non-enhanced image. Image enhancement has many applications. It is used to enhance medical images, images captured in remote sensing, images from satellites, etc.

The transformation function is:

s = T(r)

where r is a pixel of the input image and s is the corresponding pixel of the output image. T is a transformation function that maps each value of r to a value of s. Image enhancement can be done through the gray level transformations discussed below.

Gray level transformation

There are three basic gray level transformations:

Linear
Logarithmic
Power-law

Linear transformation

First we will look at the linear transformation. Linear transformation includes the simple identity and negative transformations. The identity transformation has been discussed in our tutorial on image transformation, but a brief description is given here. The identity transition is shown by a straight line: each value of the input image is directly mapped to the same value of the output image, resulting in an output image identical to the input image. Hence it is called the identity transformation.

Negative transformation

The second linear transformation is the negative transformation, which is the inverse of the identity transformation. In the negative transformation, each value of the input image is subtracted from L-1 and mapped onto the output image:

s = (L - 1) - r

Since the input image of Einstein is an 8 bpp image, the number of levels in this image is 256. Putting 256 in the equation, we get:

s = 255 - r

So each value is subtracted from 255. What happens is that the lighter pixels become dark and the darker pixels become light, and the result is an image negative.

Logarithmic transformations

Logarithmic transformation contains two further types: the log transformation and the inverse log transformation.

Log transformation

The log transformation can be defined by the formula

s = c log(r + 1)

where s and r are the pixel values of the output and input images and c is a constant. The value 1 is added to each pixel value of the input image because if a pixel has intensity 0, log(0) is undefined, so 1 is added to make the minimum argument at least 1.

During the log transformation, the dark pixels in an image are expanded compared to the higher pixel values, while the higher pixel values are compressed. This results in image enhancement. The value of c in the log transform adjusts the kind of enhancement you are looking for. The inverse log transform is the opposite of the log transform.

Power-law transformations

There are two further transformations in the power-law family: the nth power and nth root transformations.
These transformations can be given by the expression:

s = c * r^γ

The symbol γ is called gamma, which is why this transformation is also known as the gamma transformation. Varying the value of γ varies the enhancement of the image. Different display devices and monitors have their own gamma correction, which is why they display images at different intensities. This type of transformation is used to enhance images for different types of display devices, and the gamma of different display devices is different. For example, the gamma of a CRT lies between 1.8 and 2.5, which means the image displayed on a CRT is dark.

Correcting gamma

s = c * r^γ
s = c * r^(1/2.5)

The same image rendered with different gamma values, for example gamma = 10, gamma = 8 and gamma = 6, shows how the enhancement changes.
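The negative, log and power-law (gamma) transformations above can be sketched in Python with NumPy as follows. The function names, the toy 8 bpp test image and the normalisation used in the gamma transform are assumptions for illustration, not a prescribed implementation.

import numpy as np

def negative(img, levels=256):
    # s = (L - 1) - r
    return (levels - 1) - img

def log_transform(img, c=1.0):
    # s = c * log(r + 1); the +1 avoids log(0)
    return c * np.log1p(img.astype(float))

def gamma_transform(img, gamma, c=1.0, levels=256):
    # s = c * r ** gamma, applied on intensities normalised to [0, 1]
    r = img.astype(float) / (levels - 1)
    return c * (levels - 1) * np.power(r, gamma)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)   # toy 8 bpp image
neg = negative(img)
crt_corrected = gamma_transform(img, 1 / 2.5)           # gamma correction for a CRT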
Introduction to Probability

PMF and CDF are both terms that belong to probability and statistics. The question that should arise in your mind is why we are studying probability. It is because these two concepts, PMF and CDF, are going to be used in the next tutorial on histogram equalization. If you do not know how to calculate a PMF and a CDF, you cannot apply histogram equalization to your image.

What is PMF?

PMF stands for probability mass function. As its name suggests, it gives the probability of each value in the data set; in other words, it is based on the count or frequency of each element.

How PMF is calculated

We will calculate the PMF in two different ways: first from a matrix, because in the next tutorial we have to calculate the PMF from a matrix, and an image is nothing more than a two dimensional matrix, and then from a histogram.

Consider this matrix:

1 2 7 5 6
7 2 3 4 5
0 1 5 7 3
1 2 5 6 7
6 1 0 3 4

To calculate the PMF of this matrix, we take the first value in the matrix and count how many times this value appears in the whole matrix, and so on for each value. After counting, the values can be represented either in a histogram or in a table like the one below.

Value   Count   PMF
0       2       2/25
1       4       4/25
2       3       3/25
3       3       3/25
4       2       2/25
5       4       4/25
6       3       3/25
7       4       4/25

Note that the sum of the counts must be equal to the total number of values.

Calculating PMF from a histogram

A histogram shows the frequency of gray level values for an 8 bits per pixel image. If we have to calculate its PMF, we simply read the count of each bar from the vertical axis and divide it by the total count. That gives the PMF of the histogram.

Another important thing to note is that the PMF is not monotonically increasing. In order to obtain a monotonically increasing function, we calculate its CDF.

What is CDF?

CDF stands for cumulative distribution function. It is a function that calculates the cumulative sum of the values given by the PMF: each entry adds up all the previous ones.

How is it calculated?

We calculate the CDF from the PMF. We simply keep the first value as it is, then to the second value we add the first, and so on. In the resulting CDF, the first value of the PMF remains as it is, the second value of the PMF is added to the first, the third value is added to the running sum, and so on, until the last value equals 1. The function now grows monotonically, which is a necessary condition for histogram equalization.

PMF and CDF usage in histogram equalization

Histogram equalization

Histogram equalization is discussed in the next tutorial, but a brief introduction is given here: histogram equalization is used for enhancing the contrast of images. PMF and CDF are both used in histogram equalization, as described at the beginning of this tutorial. In histogram equalization, the first and second steps are the PMF and the CDF. Since in histogram equalization we have to equalize all the pixel values of an image, the PMF helps us calculate the probability of each pixel value in the image, and the CDF gives us the cumulative sum of these values.
Further on, this CDF is multiplied by the number of levels to find the new pixel intensities, which are then mapped onto the old values, and your histogram is equalized.
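Here is a short Python/NumPy sketch of the PMF and CDF of the 5 x 5 matrix used above, followed by the equalization mapping just described. Treating the matrix as a 3 bpp image with 8 gray levels is an assumption made so the example is self-contained.

import numpy as np

img = np.array([[1, 2, 7, 5, 6],
                [7, 2, 3, 4, 5],
                [0, 1, 5, 7, 3],
                [1, 2, 5, 6, 7],
                [6, 1, 0, 3, 4]])           # the 5x5 matrix from the text

levels = 8                                   # assume 3 bpp, values 0..7
counts = np.bincount(img.ravel(), minlength=levels)
pmf = counts / counts.sum()                  # probability of each gray level
cdf = np.cumsum(pmf)                         # monotonically non-decreasing

# Histogram equalization: map each old level through the scaled CDF.
new_levels = np.round(cdf * (levels - 1)).astype(int)
equalized = new_levels[img]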
DIP – Concept of Pixel
Concept of Pixel

Pixel

A pixel is the smallest element of an image. Each pixel corresponds to one value. In an 8-bit gray scale image, the value of a pixel lies between 0 and 255. The value of a pixel at any point corresponds to the intensity of the light photons striking that point; each pixel stores a value proportional to the light intensity at that particular location.

PEL

A pixel is also known as a PEL. In a typical photograph there may be thousands of pixels that together make up the image; zooming far enough into such an image reveals the individual pixel divisions.

Relationship with CCD array

We have seen how an image is formed in the CCD array. So a pixel can also be defined as follows: the smallest division of the CCD array is also known as a pixel. Each division of the CCD array contains a value corresponding to the intensity of the photons striking it, and this value can also be called a pixel.

Calculation of total number of pixels

We have defined an image as a two dimensional signal or matrix. In that case the number of PELs equals the number of rows multiplied by the number of columns. This can be mathematically represented as:

Total number of pixels = number of rows X number of columns

Or we can say that the number of (x,y) coordinate pairs makes up the total number of pixels. We will look in more detail, in the tutorial on image types, at how pixels are calculated for a color image.

Gray level

The value of a pixel at any point denotes the intensity of the image at that location, and is also known as the gray level. We will see more detail about pixel values in the image storage and bits per pixel tutorial, but for now we will just look at one particular pixel value.

Pixel value 0

As already defined at the beginning of this tutorial, each pixel has only one value, and each value denotes the intensity of light at that point of the image. We will now look at a very particular value, 0. The value 0 means absence of light: 0 denotes dark, so whenever a pixel has a value of 0, black is formed at that point.

Have a look at this image matrix:

0 0 0
0 0 0
0 0 0

This image matrix is entirely filled with 0; all the pixels have a value of 0. If we were to calculate the total number of pixels from this matrix, this is how we do it:

Total no. of pixels = total no. of rows X total no. of columns = 3 X 3 = 9

It means that an image would be formed with 9 pixels, that the image would have a dimension of 3 rows and 3 columns, and, most importantly, that the image would be black, because all the pixels in the image have a value of 0.
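A tiny Python/NumPy sketch of the 3 x 3 all-zero image discussed above; the array name and the use of NumPy are illustrative assumptions.

import numpy as np

img = np.zeros((3, 3), dtype=np.uint8)       # 3 rows x 3 columns, every pixel 0
total_pixels = img.shape[0] * img.shape[1]   # rows x columns = 9
print(total_pixels)                          # 9
# All nine values are 0 (absence of light), so the image renders as pure black.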
Grayscale to RGB Conversion

We have already defined the RGB color model and the gray scale format in our tutorial on image types. Now we will convert a color image into a grayscale image. There are two methods to convert it, and both have their own merits and demerits:

Average method
Weighted method or luminosity method

Average method

The average method is the simplest one: you just take the average of the three colors. Since it is an RGB image, you add R, G and B and then divide by 3 to get your grayscale image:

Grayscale = (R + G + B) / 3

If you convert a color image to grayscale using this average method, a grayscale image does result, so the method works, but the result is often not what you expect: the converted image tends to come out rather dark.

Problem

This problem arises from the fact that we take a plain average of the three colors. Since the three colors have three different wavelengths and each makes its own contribution to the formation of the image, we should average according to their contributions rather than equally. Right now what we are doing is:

33% of Red, 33% of Green, 33% of Blue

We take 33% of each, which means each component contributes equally to the image. In reality that is not the case. The solution is given by the luminosity method.

Weighted method or luminosity method

You have seen the problem that occurs with the average method; the weighted method is the solution to it. Red has the longest wavelength of the three colors, and green is not only shorter in wavelength than red but is also the color that gives the most soothing effect to the eyes. So we decrease the contribution of red, increase the contribution of green, and put the contribution of blue between the two. The new equation is:

New grayscale image = (0.3 * R) + (0.59 * G) + (0.11 * B)

According to this equation, red contributes 30%, green contributes 59%, which is the greatest of the three, and blue contributes 11%. Applying this equation converts the image properly to grayscale; compared to the result of the average method, the image is brighter.
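Both conversion methods can be sketched in Python with NumPy as below. The function names and the random stand-in color image are assumptions for illustration; the weights 0.3, 0.59 and 0.11 are the ones given above.

import numpy as np

def average_method(rgb):
    # Grayscale = (R + G + B) / 3 : every channel contributes equally
    return rgb.astype(float).mean(axis=2)

def luminosity_method(rgb):
    # Weighted method: 0.30 R + 0.59 G + 0.11 B
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return 0.30 * r + 0.59 * g + 0.11 * b

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)   # toy color image
gray_avg = average_method(rgb)
gray_lum = luminosity_method(rgb)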
DIP – Concept of Sampling
Concept of Sampling

Conversion of an analog signal to a digital signal

The output of most image sensors is an analog signal, and we cannot apply digital processing to it because we cannot store it: storing a signal that can take infinitely many values would require infinite memory. So we have to convert the analog signal into a digital signal.

To create a digital image, we need to convert continuous data into digital form. There are two steps in which this is done:

Sampling
Quantization

We will discuss sampling now; quantization will be discussed later on, but for now we will touch briefly on the difference between the two and the need for both steps.

Basic idea

The basic idea behind converting an analog signal to a digital signal is to convert both of its axes (x, y) into a digital format. Since an image is continuous not just in its coordinates (x axis) but also in its amplitude (y axis), the part that deals with digitizing the coordinates is known as sampling, and the part that deals with digitizing the amplitude is known as quantization.

Sampling

Sampling has already been introduced in our tutorial on introduction to signals and systems, but we are going to discuss it here in more depth. Here is what we have said about sampling:

The term sampling refers to taking samples
We digitize the x axis in sampling
It is done on the independent variable
In the case of the equation y = sin(x), it is done on the x variable
It is further divided into two parts: up sampling and down sampling

A captured signal contains some random variations due to noise. In sampling we reduce this noise by taking samples. It is obvious that the more samples we take, the better the quality of the image will be and the more noise is removed, and the reverse also holds. However, sampling on the x axis alone does not convert the signal to digital format; you also have to sample the y axis, which is known as quantization.

More samples eventually means you are collecting more data, and in the case of an image, it means more pixels.

Relationship with pixels

A pixel is the smallest element of an image, and the total number of pixels in an image can be calculated as:

Pixels = total no. of rows * total no. of columns

Let's say we have a total of 25 pixels; that means we have a square image of 5 X 5. As discussed above, more samples eventually result in more pixels: taking 25 samples of our continuous signal on the x axis corresponds to the 25 pixels of this image. This leads to another conclusion: since a pixel is also the smallest division of a CCD array, sampling has a relationship with the CCD array too, which can be explained as follows.

Relationship with CCD array

The number of sensors on a CCD array is directly equal to the number of pixels. And since we have concluded that the number of pixels is directly equal to the number of samples, the number of samples is directly equal to the number of sensors on the CCD array.

Oversampling

At the beginning we noted that sampling is further categorized into two types: up sampling and down sampling. Up sampling is also called oversampling. Oversampling has a very important application in image processing, known as zooming.

Zooming

We will formally introduce zooming in an upcoming tutorial, but for now we will just briefly explain it.
Zooming refers to increasing the quantity of pixels, so that when you zoom an image, you see more detail. The increase in the quantity of pixels is done through oversampling. One way to zoom, or to increase samples, is to zoom optically, through the motor movement of the lens, and then capture the image; here, however, we have to do it after the image has already been captured.

There is a difference between zooming and sampling. The concept is the same, which is to increase samples, but the key difference is that while sampling is done on signals, zooming is done on the digital image.
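A small Python/NumPy sketch of the ideas above: sampling y = sin(x) at two different rates, and up sampling (oversampling) an already-captured row of pixels by repetition, the simplest form of zooming. The sample counts and the pixel-replication approach are illustrative assumptions, not the method prescribed by the tutorial.

import numpy as np

# Sample the continuous signal y = sin(x) over one period at two rates.
# More samples on the x axis means more data, and for an image, more pixels.
x_coarse = np.linspace(0, 2 * np.pi, 25)    # 25 samples, like a 5 x 5 image
x_fine = np.linspace(0, 2 * np.pi, 100)     # oversampling: 4x more samples

y_coarse = np.sin(x_coarse)
y_fine = np.sin(x_fine)

# Up sampling (oversampling) a row of already-captured pixels by repetition,
# the idea behind pixel-replication zooming.
row = np.array([10, 50, 200, 90], dtype=np.uint8)
zoomed_row = np.repeat(row, 2)              # each sample duplicated -> 2x zoom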
DIP – Types of Images
Types of Images

There are many types of images, and we will look in detail at the different types of images and the color distribution in them.

The binary image

The binary image, as its name states, contains only two pixel values: 0 and 1. In our previous tutorial on bits per pixel, we explained in detail the representation of pixel values by their respective colors. Here 0 refers to black and 1 refers to white. It is also known as monochrome.

Black and white image

The resulting image therefore consists only of black and white, and can thus also be called a black and white image.

No gray level

One of the interesting things about the binary image is that there is no gray level in it; only the two colors black and white are found in it.

Format

Binary images have the format PBM (Portable Bit Map).

2, 3, 4, 5, 6 bit color formats

Images with a color format of 2, 3, 4, 5 or 6 bits are not widely used today. They were used in old times for old TV or monitor displays. Each of these formats has more than two levels and hence, unlike the binary image, contains shades of gray. A 2 bit image has 4 different colors, a 3 bit image 8, a 4 bit image 16, a 5 bit image 32 and a 6 bit image 64.

8 bit color format

The 8 bit color format is one of the most famous image formats. It has 256 different shades of colors and is commonly known as a grayscale image. The range of colors in 8 bits varies from 0 to 255, where 0 stands for black, 255 stands for white and 127 stands for gray. This format was used initially by early models of the operating system UNIX and the early color Macintoshes.

Format

The format of these images is PGM (Portable Gray Map). This format is not supported by default in Windows; in order to view a gray scale image you need an image viewer or an image processing toolbox such as Matlab.

Behind a gray scale image

As we have explained several times in previous tutorials, an image is nothing but a two dimensional function and can be represented by a two dimensional array or matrix. So in the case of a grayscale image such as the Einstein image, there is a two dimensional matrix behind it with values ranging between 0 and 255. That is not the case with color images.

16 bit color format

This is a color image format with 65,536 different colors, also known as the high color format. It has been used by Microsoft in systems that support more than the 8 bit color format. The 16 bit format and the 24 bit format discussed next are both color formats.

The distribution of color in a color image is not as simple as it was in the grayscale image. A 16 bit format is actually divided into three further channels: Red, Green and Blue, the famous RGB format.

Now the question arises: how would you distribute 16 bits among three channels? If you allocate

5 bits for R, 5 bits for G, 5 bits for B

then one bit remains at the end. So the distribution of the 16 bits is done like this:

5 bits for R, 6 bits for G, 5 bits for B

The additional bit that was left over is added to the green channel, because green is the color that is most soothing to the eyes of all three. Note that this distribution is not followed by all systems; some have introduced an alpha channel in the 16 bit format.
Another distribution of the 16 bit format is:

4 bits for R, 4 bits for G, 4 bits for B, 4 bits for the alpha channel

Or some distribute it like this:

5 bits for R, 5 bits for G, 5 bits for B, 1 bit for the alpha channel

24 bit color format

The 24 bit color format is also known as the true color format. Like the 16 bit color format, the 24 bits are distributed among the three channels Red, Green and Blue. Since 24 is evenly divisible by 8, the bits are distributed equally between the three color channels:

8 bits for R, 8 bits for G, 8 bits for B

Behind a 24 bit image

Unlike an 8 bit gray scale image, which has one matrix behind it, a 24 bit image has three different matrices, one each for R, G and B.

Format

It is the most commonly used format. Its format is PPM (Portable PixMap), which is supported by the Linux operating system. Windows has its own format for it, which is BMP (Bitmap).
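As a small illustration of the bit-depth arithmetic above, the following Python sketch computes the number of colors for a given bits per pixel and packs 8-bit R, G, B values into the 5-6-5 layout of the 16 bit format. The function names are assumptions made for illustration.

def colors_for_bpp(bpp):
    # Number of distinct colors (or gray shades) for a given bit depth.
    return 2 ** bpp

def rgb_to_565(r, g, b):
    # Pack 8-bit R, G, B into the common 16-bit 5-6-5 layout
    # (5 bits red, 6 bits green, 5 bits blue; green gets the extra bit).
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(colors_for_bpp(8))             # 256
print(colors_for_bpp(16))            # 65536
print(hex(rgb_to_565(255, 0, 0)))    # 0xf800 : pure red in the 5-6-5 layout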