Learning DIP – Camera Mechanism

Camera Mechanism

In this tutorial, we will discuss some basic camera concepts, such as aperture, shutter, shutter speed and ISO, and how these concepts are used together to capture a good image.

Aperture

The aperture is a small opening that allows light to travel into the camera. A picture of an aperture is shown here. You will see small blade-like structures inside the aperture. These blades form an octagonal shape that can be opened and closed. It follows that the more the blades open, the bigger the hole through which the light passes, and the bigger the hole, the more light is allowed to enter.

Effect

The aperture directly controls the brightness or darkness of an image. If the aperture opening is wide, more light is allowed into the camera. More light means more photons, which ultimately results in a brighter image. Consider the two example photos below. The one on the right looks brighter, which means that when it was captured the aperture was wide open. The picture on the left is very dark in comparison, which shows that when it was captured the aperture was not wide open.

Size

Now let's discuss the mathematics behind the aperture. The size of the aperture is denoted by an f-value, and it is inversely proportional to the opening of the aperture. The following two relations best explain this concept:

Large aperture size = small f-value
Small aperture size = large f-value

Shutter

After the aperture comes the shutter. The light that is allowed to pass through the aperture falls directly onto the shutter. The shutter is a cover, a closed window, which can be thought of as a curtain. Recall the CCD array sensor on which the image is formed: the sensor sits behind the shutter. So the shutter is the only thing between the image formation and the light passing through the aperture. As soon as the shutter opens, light falls on the image sensor and the image is formed on the array.

Effect

If the shutter allows light to pass a bit longer, the image will be brighter. Similarly, a darker picture is produced when the shutter moves very quickly: the light that is allowed in carries fewer photons, and the image formed on the CCD array sensor is very dark.

The shutter involves two further concepts:

Shutter speed
Shutter time

Shutter speed

Shutter speed refers to the number of times the shutter opens and closes. Note that we are not talking about how long the shutter stays open.

Shutter time

Shutter time is the amount of time the shutter remains open before it closes. Here we are not talking about how many times the shutter opens and closes, but about how long it remains open each time.

For example, we can understand these two concepts this way: suppose a shutter opens 15 times, and each time it stays open for 1 second before closing. In this example, 15 is the shutter speed and 1 second is the shutter time.

Relationship

Shutter speed and shutter time are inversely proportional to each other. This relationship can be expressed as:

More shutter speed = less shutter time
Less shutter speed = more shutter time
Explanation: the less time required, the greater the speed; and the more time required, the less the speed.

Applications

Together these two concepts enable a variety of applications. Some are given below.

Fast moving objects: Suppose you want to capture the image of a fast moving object; it could be a car or anything. The adjustment of shutter speed and shutter time matters a lot. To capture an image like this, we make two adjustments:

Increase shutter speed
Decrease shutter time

When we increase the shutter speed, the shutter opens and closes more often, which means different samples of light are allowed to pass in. When we decrease the shutter time, the scene is captured immediately and the shutter gate is closed. If you do this, you get a crisp image of a fast moving object.

To understand this, consider the example of capturing a fast moving waterfall. You set your shutter speed to 1 second and capture a photo; the result is blurred. Then you set a faster shutter speed and capture again, and then an even faster one. You can see in the last picture that the shutter speed is very fast, meaning the shutter opens and closes within 1/200th of a second, and so we get a crisp image.

ISO

The ISO factor is measured in numbers. It denotes the camera's sensitivity to light. If the ISO number is low, the camera is less sensitive to light; if the ISO number is high, it is more sensitive.

Effect

The higher the ISO, the brighter the picture. If the ISO is set to 1600, the picture will be very bright, and vice versa.

Side effect

As the ISO increases, the noise in the image also increases. Today most of the camera manufacturing
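To make the interplay of these three settings concrete, here is a minimal Python sketch. It assumes the common rule of thumb that the light reaching the sensor scales with shutter time over the square of the f-value, and that brightness scales with ISO; the function name relative_exposure and the ISO-100 normalization are my own, and this is a simplified illustration rather than a radiometric model.

```python
# Relative brightness ~ (shutter time / f_value^2) * ISO sensitivity.
# Simplified illustration only; relative_exposure is a hypothetical helper.
def relative_exposure(f_value, shutter_time, iso):
    return (shutter_time / f_value ** 2) * (iso / 100.0)

# A wide aperture (small f-value) with a very short shutter time can admit
# about as much light as a narrow aperture held open much longer:
print(relative_exposure(2.0, 1 / 200, 100))   # 0.00125
print(relative_exposure(8.0, 1 / 10, 100))    # 0.0015625
```

This matches the relations above: a smaller f-value (larger aperture) or a longer shutter time both raise the exposure, and raising the ISO brightens the result at the cost of noise.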

Learning DIP – Laplacian Operator

Laplacian Operator

The Laplacian operator is also a derivative operator, used to find edges in an image. The major difference between the Laplacian and operators like Prewitt, Sobel, Robinson and Kirsch is that those are all first order derivative masks, whereas the Laplacian is a second order derivative mask. This mask has two further classifications: the positive Laplacian operator and the negative Laplacian operator.

Another difference is that, unlike the other operators, the Laplacian does not take out edges in any particular direction; instead it takes out edges of the following two kinds:

Inward edges
Outward edges

Let's see how the Laplacian operator works.

Positive Laplacian Operator

In the positive Laplacian we have a standard mask in which the center element is negative and the corner elements are zero:

0 1 0
1 -4 1
0 1 0

The positive Laplacian operator is used to take out outward edges in an image.

Negative Laplacian Operator

In the negative Laplacian operator we also have a standard mask, in which the center element is positive, all the corner elements are zero, and the rest of the elements are -1:

0 -1 0
-1 4 -1
0 -1 0

The negative Laplacian operator is used to take out inward edges in an image.

How it works

The Laplacian is a derivative operator; it highlights gray level discontinuities in an image and de-emphasizes regions with slowly varying gray levels. The result is an image with grayish edge lines and other discontinuities on a dark background, containing the inward and outward edges.

The important thing is how to apply these filters to an image. Remember that we cannot apply both the positive and negative Laplacian operators to the same image; we apply just one. The thing to remember is that if we apply the positive Laplacian operator to the image, we subtract the resulting image from the original image to get the sharpened image. Similarly, if we apply the negative Laplacian operator, we add the resulting image to the original image to get the sharpened image.

Let's apply these filters to an image and see how they extract inward and outward edges. Suppose we have a sample image; applying the positive Laplacian operator yields one edge image, and applying the negative Laplacian operator yields the other.
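The following Python sketch implements the two masks and the sharpening rule described above. It assumes numpy and scipy are available; the function name laplacian_sharpen is my own, and clipping to 0-255 is an assumption for 8-bit grayscale images.

```python
import numpy as np
from scipy.ndimage import convolve

# Positive Laplacian mask: negative center, zero corners
POSITIVE = np.array([[0,  1, 0],
                     [1, -4, 1],
                     [0,  1, 0]])
# Negative Laplacian mask: positive center, zero corners, -1 elsewhere
NEGATIVE = -POSITIVE

def laplacian_sharpen(img, mask):
    f = img.astype(np.float64)
    lap = convolve(f, mask, mode='reflect')   # the extracted edges
    # Per the rule above: subtract for the positive mask, add for the negative
    g = f - lap if mask[1, 1] < 0 else f + lap
    return np.clip(g, 0, 255).astype(np.uint8)
```

Here img would be a 2-D grayscale array; calling laplacian_sharpen(img, POSITIVE) or laplacian_sharpen(img, NEGATIVE) gives the same sharpened result from the two complementary directions, as the tutorial describes.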

Learning DIP – Zooming Methods

Zooming Methods

In this tutorial we formally introduce the three methods of zooming that were mentioned in the tutorial Introduction to Zooming.

Methods

Pixel replication (nearest neighbor interpolation)
Zero order hold method
Zooming K times

Each method has its own advantages and disadvantages. We will start by discussing pixel replication.

Method 1: Pixel replication

Introduction: It is also known as nearest neighbor interpolation. As its name suggests, in this method we simply replicate the neighboring pixels. As we have already discussed in the tutorial on Sampling, zooming is nothing but increasing the number of samples or pixels. This algorithm works on the same principle.

Working: In this method we create new pixels from the already given pixels. Each pixel is replicated n times row wise and column wise, and you get a zoomed image. It is as simple as that.

For example, if you have an image of 2 rows and 2 columns and you want to zoom it twice (2 times) using pixel replication, here is how it is done. For better understanding, the image is shown as a matrix of pixel values:

1 2
3 4

The above image has two rows and two columns. We will first zoom it row wise.

Row wise zooming: When we zoom row wise, we simply copy each row pixel to its adjacent new cell:

1 1 2 2
3 3 4 4

As you can see in the above matrix, each pixel is replicated twice in the rows.

Column wise zooming: The next step is to replicate each pixel column wise; we simply copy each column pixel to its adjacent new column, or simply below it:

1 1 2 2
1 1 2 2
3 3 4 4
3 3 4 4

New image size: As can be seen from the above example, an original image of 2 rows and 2 columns is converted into 4 rows and 4 columns after zooming. That means the new image has dimensions of

(Original image rows * zooming factor, Original image cols * zooming factor)

Advantage and disadvantage: One advantage of this zooming technique is that it is very simple; you just have to copy the pixels and nothing else. The disadvantage is that the image gets zoomed but the output is very blurry, and as the zooming factor increases, the image gets more and more blurred, eventually resulting in a fully blurred image.

Method 2: Zero order hold

Introduction: The zero order hold method is another method of zooming. It is also known as "zoom twice", because it can only zoom by a factor of two. We will see in the example below why that is.

Working: In the zero order hold method, we pick two adjacent elements from a row, add them, divide the result by two, and place the result between those two elements. We first do this row wise and then column wise.

For example, let's take an image of 2 rows and 2 columns and zoom it twice using zero order hold:

1 2
3 4

First we zoom it row wise and then column wise.

Row wise zooming:

1 1 2
3 3 4

We take the first two numbers, (1 + 2) = 3, and divide by 2 to get 1.5, which is approximated down to 1. The same method is applied in row 2.

Column wise zooming:

1 1 2
2 2 3
3 3 4

We take two adjacent column pixel values, 1 and 3. We add them and get 4; 4 divided by 2 gives 2, which is placed between them. The same method is applied in all columns.
New image size: As you can see, the dimensions of the new image are 3 x 3 while the original image is 2 x 2. So the dimensions of the new image are given by the following formula:

(2 * number of rows - 1) X (2 * number of columns - 1)

Advantages and disadvantages: One advantage of this zooming technique is that it does not produce as blurry a picture as the nearest neighbor interpolation method. But it has the disadvantage that it can only run on powers of 2, as demonstrated below.

Reason behind twice zooming: Consider the above image of 2 rows and 2 columns. If we have to zoom it 6 times using the zero order hold method, we cannot do it, as the formula shows. It can only zoom by powers of 2: 2, 4, 8, 16, 32 and so on. Even if you try, you cannot reach 6 times: when you first zoom twice, the result is as shown in the column wise zooming, with dimensions 3 x 3. Zooming again gives dimensions 5 x 5, and again gives 9 x 9. Whereas, according to the formula, the answer for 6-times zooming should be 11 x 11, since (6(2) - 1) X (6(2) - 1) gives 11 x 11.

Method 3: K-times zooming

Introduction: K-times zooming is the third zooming method we are going to discuss. It is one of the most nearly perfect zooming algorithms discussed so far. It caters to the challenges of both twice zooming and pixel replication. K in this zooming algorithm stands for the zooming factor.

Working: It works like this. First of all, you take two adjacent pixels, as you did in zooming twice. Then you subtract the smaller
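The two complete methods above translate directly into short numpy code. Here is a minimal sketch; the function names are my own, integer division reproduces the "approximated down" averaging in the worked examples, and casting to int avoids uint8 overflow when adding neighbors.

```python
import numpy as np

def pixel_replication(img, k):
    # Replicate every pixel k times row wise and column wise
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

def zero_order_hold(img):
    # Place the (floored) average of each adjacent pair between the pair,
    # first row wise then column wise: (R x C) -> (2R-1) x (2C-1)
    def expand(m):                    # inserts averaged rows along axis 0
        out = np.empty((2 * m.shape[0] - 1, m.shape[1]), dtype=int)
        out[::2] = m
        out[1::2] = (m[:-1] + m[1:]) // 2
        return out
    a = img.astype(int)
    a = expand(a.T).T                 # row wise pass
    return expand(a)                  # column wise pass

img = np.array([[1, 2], [3, 4]])
print(pixel_replication(img, 2))  # [[1 1 2 2] [1 1 2 2] [3 3 4 4] [3 3 4 4]]
print(zero_order_hold(img))       # [[1 1 2] [2 2 3] [3 3 4]]
```

Both printed results match the matrices worked out by hand above.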

Learning DIP – Pixel Resolution

Pixel Resolution

Before we define pixel resolution, it is necessary to define a pixel.

Pixel

We have already defined a pixel in the tutorial Concept of Pixel, where we defined a pixel as the smallest element of an image. We also noted that a pixel can store a value proportional to the light intensity at that particular location. Now that we have defined a pixel, we are going to define resolution.

Resolution

Resolution can be defined in many ways, such as pixel resolution, spatial resolution, temporal resolution and spectral resolution. Here we discuss pixel resolution. You have probably seen monitor resolutions such as 800 x 600 or 640 x 480 in your own computer settings.

In pixel resolution, the term resolution refers to the total number of pixels in a digital image. For example, if an image has M rows and N columns, its resolution can be defined as M X N. If we define resolution as the total number of pixels, then pixel resolution can be specified with a pair of numbers: the first number is the width of the picture, or the pixels across its columns, and the second number is the height of the picture, or the pixels down its rows. We can say that the higher the pixel resolution, the higher the quality of the image. For example, we can define the pixel resolution of an image as 4500 X 5500.

Megapixels

We can calculate the megapixels of a camera from the pixel resolution:

Megapixels = column pixels (width) X row pixels (height) / 1 million

The size of an image can be defined by its pixel resolution:

Size = pixel resolution X bpp (bits per pixel)

Calculating the megapixels of a camera

Let's say we have an image of dimensions 2500 X 3192. Its pixel resolution = 2500 * 3192 = 7980000 pixels. Dividing by 1 million gives 7.98, or approximately 8 megapixels.

Aspect ratio

Another important concept related to pixel resolution is aspect ratio. Aspect ratio is the ratio between the width of an image and its height. It is commonly written as two numbers separated by a colon (8:9). This ratio differs between images and between screens. Common aspect ratios are 1.33:1, 1.37:1, 1.43:1, 1.50:1, 1.56:1, 1.66:1, 1.75:1, 1.78:1, 1.85:1, 2.00:1, etc.

Advantage

The aspect ratio maintains a balance in the appearance of an image on the screen; that is, it maintains the ratio between horizontal and vertical pixels. It keeps the image from getting distorted when it is resized.

For example, consider a sample image with 100 rows and 100 columns. If we wish to make it smaller, under the condition that the quality remains the same — in other words that the image does not get distorted — we change the rows and columns while maintaining the aspect ratio (for example in MS Paint), and we get a smaller image with the same balance. You have probably seen aspect ratios in video players, where you can adjust the video according to your screen resolution.

Finding the dimensions of the image from the aspect ratio

The aspect ratio tells us many things. With the aspect ratio, you can calculate the dimensions of the image along with its size.

For example, suppose you are given an image with an aspect ratio of 6:2 and a pixel resolution of 480000 pixels, and the image is a grayscale image. You are asked to calculate two things:
Resolve the pixel resolution to calculate the dimensions of the image
Calculate the size of the image

Solution:

Given:
Aspect ratio: c : r = 6 : 2
Pixel resolution: c * r = 480000
Bits per pixel: grayscale image = 8 bpp

Find: number of rows = ?, number of columns = ?

Solving the first part: from the aspect ratio, c = 3r. Substituting into c * r = 480000 gives 3r^2 = 480000, so r^2 = 160000 and r = 400. Then c = 480000 / 400 = 1200. So the image has 400 rows and 1200 columns.

Solving the second part:
Size = rows * cols * bpp
Size of image in bits = 400 * 1200 * 8 = 3840000 bits
Size of image in bytes = 480000 bytes
Size of image in kilobytes = 468.75, or approximately 469 kb.
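Here is a short Python sketch that reproduces this calculation. The function name dims_from_aspect is my own, and it assumes the aspect ratio divides the pixel count into a perfect square, as in this example.

```python
import math

def dims_from_aspect(ratio_w, ratio_h, total_pixels, bpp=8):
    # c : r = ratio_w : ratio_h  and  c * r = total_pixels
    # => r^2 = total_pixels * ratio_h / ratio_w  (assumes an exact square)
    r = math.isqrt(total_pixels * ratio_h // ratio_w)
    c = total_pixels // r
    size_bits = c * r * bpp
    return c, r, size_bits

c, r, bits = dims_from_aspect(6, 2, 480000)   # grayscale, 8 bpp
print(c, r)             # 1200 400
print(bits)             # 3840000 bits
print(bits / 8 / 1024)  # 468.75 kilobytes
```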

Learning DIP – Concept of Zooming

Concept of Zooming

In this tutorial we introduce the concept of zooming and the common techniques that are used to zoom an image.

Zooming

Zooming simply means enlarging a picture so that the details in the image become more visible and clear. Zooming an image has many wide applications, ranging from zooming through a camera lens to zooming an image on the internet.

You can zoom at two different stages. The first is zooming before a particular image is taken. This is known as pre-processing zoom, and it involves hardware and mechanical movement. The second is zooming once an image has been captured. This is done through many different algorithms in which we manipulate pixels to zoom in on the required portion. We discuss them in detail in the next tutorial.

Optical zoom vs digital zoom

Both types of zoom are supported by cameras.

Optical zoom: Optical zoom is achieved using the movement of the lens of your camera. Optical zoom is true zoom, and its result is far better than that of digital zoom. In optical zoom, an image is magnified by the lens in such a way that the objects in the image appear closer to the camera; the lens is physically extended to zoom in on or magnify an object.

Digital zoom: Digital zoom is basically image processing within the camera. During a digital zoom, the center of the image is magnified and the edges of the picture are cropped out. Due to the magnified center, it looks like the object is closer to you. Because the pixels get expanded, the quality of the image is compromised. The same effect can be achieved after the image is taken, on your computer, using an image processing toolbox or software such as Photoshop; a digital zoom is the result of one of the zooming methods given below (see the sketch after the list).

Since we are learning digital image processing, we will not focus on how an image can be zoomed optically using a lens or other hardware. Rather, we will focus on the methods that zoom a digital image.

Zooming methods

Although there are many methods that do this job, we are going to discuss the most common of them here:

Pixel replication (nearest neighbor interpolation)
Zero order hold method
Zooming K times

All three methods are formally introduced in the next tutorial.
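As a minimal sketch of the digital-zoom idea described above (crop the center, then enlarge it back), here is a hypothetical numpy helper; the function name digital_zoom is my own, and it uses pixel replication, the first method listed above, as its enlargement step.

```python
import numpy as np

def digital_zoom(img, factor):
    # Crop the central 1/factor portion of the frame...
    h, w = img.shape[:2]
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    # ...then enlarge it back to (roughly) the original size by pixel
    # replication, which is why quality is lost: no new detail is created
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)
```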

Learning DIP – Concept of Dimensions

Concept of Dimensions

We will look at an example in order to understand the concept of dimension. Suppose you have a friend who lives on the moon, and he wants to send you a gift for your birthday. He asks you about your residence on earth. The only problem is that the courier service on the moon does not understand alphabetical addresses; it only understands numerical coordinates. So how do you send him your position on earth?

That is where the concept of dimensions comes in. Dimensions define the minimum number of points required to specify the position of any particular object within a space.

So let's go back to our example, in which you have to send your position on earth to your friend on the moon. You send him three coordinates. The first is called longitude, the second latitude, and the third altitude. These three coordinates define your position on the earth: the first two define your location, and the third defines your height above sea level. So only three coordinates are required to define your position on earth, which means you live in a world that is three dimensional. This not only answers the question about dimension, but also explains why we live in a 3d world.

Since we are studying this concept in reference to digital image processing, we now relate the concept of dimension to an image.

Dimensions of an image

If we live in a 3d world, a three dimensional world, then what are the dimensions of an image that we capture? An image is two dimensional, which is why we also define an image as a two dimensional signal. An image has only height and width; it does not have depth. Have a look at the image below: it has only two axes, the height and width axes. You cannot perceive depth from this image. That is why we say that an image is a two dimensional signal. Our eye, however, is able to perceive three dimensional objects; this is explained further in the next tutorial on how the camera works and how an image is perceived.

This discussion leads to another question: how is a three dimensional system formed from two dimensions?

How does television work?

If we look at a two dimensional image, then to convert it into three dimensions we need one more dimension. Let's take time as the third dimension; we can then move the two dimensional image over the third dimension, time. The same concept applies in television and helps us perceive the depth of different objects on a screen. Does that mean that what we see on the TV screen is 3d? We can say yes. The reason is that, in the case of TV, a video is nothing but two dimensional pictures moving over the time dimension. Since two dimensional objects are moving over a third dimension, time, we can say it is three dimensional.

Different dimensions of signals

1 dimensional signal

The common example of a one dimensional signal is a waveform. It can be mathematically represented as

F(x) = waveform

where x is an independent variable. Since it is a one dimensional signal, only one variable, x, is used. A pictorial representation of a one dimensional signal is given in the figure above.
Now this leads to another question: even though it is a one dimensional signal, why does it have two axes? The answer is that although the signal is one dimensional, we are drawing it in a two dimensional space; the space in which we represent the signal is two dimensional, which is why it looks like a two dimensional signal.

Perhaps you can understand the concept of one dimension better by considering a real line with positive numbers from one point to another. If we have to specify the location of any point on this line, we need only one number, which means only one dimension.

2 dimensional signal

The common example of a two dimensional signal is an image, which has already been discussed above. An image is a two dimensional signal, i.e. it has two dimensions. It can be mathematically represented as

F(x, y) = image

where x and y are two variables. The concept of two dimensions can also be explained in terms of mathematics. Consider a square and label its four corners A, B, C and D. If we call one line segment in the figure AB and the other CD, we can see that these two parallel segments join up and make a square. Each line segment corresponds to one dimension, so the two line segments correspond to two dimensions.

3 dimensional signal

A three dimensional signal, as the name suggests, is a signal with three dimensions. The most common example was discussed at the beginning: our world, which we live in, is three dimensional. Another example of a three dimensional signal is a cube, or volumetric data, or, most commonly, an animated or 3d cartoon character. The mathematical representation of a three dimensional signal is:

F(x, y, z) = animated character

Another axis or dimension, Z, is involved in three dimensions, which gives the illusion of

Learning DIP – Spatial Resolution

Spatial Resolution

Image resolution

Image resolution can be defined in many ways. One type, pixel resolution, has been discussed in the tutorial on pixel resolution and aspect ratio. In this tutorial we define another type of resolution: spatial resolution.

Spatial resolution

Spatial resolution reflects the fact that the clarity of an image cannot be determined by pixel resolution alone; the number of pixels in an image does not settle it by itself. Spatial resolution can be defined as the smallest discernible detail in an image (Digital Image Processing – Gonzalez, Woods – 2nd Edition). Alternatively, we can define spatial resolution as the number of independent pixel values per inch.

In short, what spatial resolution tells us is that we cannot compare two different types of images to see which one is clearer. If we have to compare two images, to see which one is clearer or which has higher spatial resolution, we have to compare two images of the same size.

For example, you cannot compare two pictures of different sizes to judge clarity. Although both images may be of the same person, that is not the condition we are judging on. Suppose the picture on the left is a zoomed-out picture of Einstein with dimensions 227 x 222, while the picture on the right has dimensions 980 X 749 and is also a zoomed image. We cannot compare them to see which one is clearer. Remember, the zoom factor does not matter in this condition; the only thing that matters is that these two pictures are not of equal size.

To measure spatial resolution, pictures of the same dimensions serve the purpose. Compare two pictures that both have dimensions 227 X 222: you will see that the picture on the left has more spatial resolution, that is, it is clearer than the picture on the right, because the picture on the right is a blurred image.

Measuring spatial resolution

Since spatial resolution refers to clarity, different measures are used for different devices. For example:

Dots per inch
Lines per inch
Pixels per inch

They are discussed in more detail in the next tutorial; a brief introduction is given below.

Dots per inch: Dots per inch, or DPI, is usually used for monitors.
Lines per inch: Lines per inch, or LPI, is usually used for laser printers.
Pixels per inch: Pixels per inch, or PPI, is a measure for devices such as tablets, mobile phones, etc.
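As a small worked illustration of the per-inch idea, PPI is commonly computed as the diagonal pixel count divided by the diagonal screen size in inches. The following sketch assumes that standard formula; the function name pixels_per_inch is my own.

```python
import math

# PPI = length of the screen diagonal in pixels / diagonal size in inches
def pixels_per_inch(width_px, height_px, diagonal_inches):
    return math.hypot(width_px, height_px) / diagonal_inches

# A 5.5-inch 1920 x 1080 phone screen works out to roughly 401 PPI
print(round(pixels_per_inch(1920, 1080, 5.5)))   # 401
```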

Learning DIP – Histogram Stretching

Histogram stretching

One of the other advantages of histograms that we discussed in the tutorial Introduction to Histograms is contrast enhancement. There are two methods of enhancing contrast. The first is called histogram stretching, which increases contrast. The second is called histogram equalization, which enhances contrast and is discussed in the tutorial on histogram equalization. Before we discuss histogram stretching, we briefly define contrast.

Contrast

Contrast is the difference between the maximum and minimum pixel intensity. Consider an image whose histogram spans intensities 0 to 225; its contrast is 225. Now we will increase the contrast of the image.

Increasing the contrast of the image

The formula for stretching the histogram of the image to increase the contrast is

g(x, y) = ((f(x, y) - min) / (max - min)) * (levels of gray - 1)

The formula requires finding the minimum and maximum pixel intensity and multiplying by the levels of gray. In our case the image is 8 bpp, so the levels of gray are 256; the minimum value is 0 and the maximum value is 225. Here f(x, y) denotes the value of each pixel intensity. For each f(x, y) in the image, we evaluate this formula, and after doing so we are able to enhance the contrast.

The image after applying histogram stretching has a stretched, expanded histogram. Note the shape and symmetry of the histogram. In this case the contrast of the image can be calculated as contrast = 240; hence we can say that the contrast of the image is increased.

Note: this method of increasing contrast does not always work; it fails in some cases.

Failing of histogram stretching

As discussed, the algorithm fails in some cases, namely images in which both pixel intensities 0 and 255 are present. When pixel intensities 0 and 255 are present in an image, they become the minimum and maximum pixel intensity, and the formula reduces to

g(x, y) = ((f(x, y) - 0) / (255 - 0)) * 255 = f(x, y)

That means the output image is equal to the original image, i.e. histogram stretching has no effect on such an image.
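The stretching formula above is a one-liner in numpy. Here is a minimal sketch; the function name stretch_histogram is my own, and the flat-image guard is an assumption to avoid division by zero.

```python
import numpy as np

def stretch_histogram(img, levels=256):
    # g(x, y) = (f(x, y) - min) / (max - min) * (levels - 1)
    f = img.astype(np.float64)
    fmin, fmax = f.min(), f.max()
    if fmax == fmin:
        return img.copy()          # flat image: nothing to stretch
    g = (f - fmin) / (fmax - fmin) * (levels - 1)
    return g.astype(np.uint8)

# Failing case from the text: 0 and 255 are already present,
# so min = 0 and max = 255 and the output equals the input.
a = np.array([[0, 100], [200, 255]], dtype=np.uint8)
print(stretch_histogram(a))        # unchanged
```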

Learning DIP – Image Formation on Camera

Image Formation on Camera

How the human eye works

Before we discuss image formation on analog and digital cameras, we first have to discuss image formation on the human eye, because the basic principle followed by cameras has been taken from the way the human eye works.

When light falls on a particular object, it is reflected back after striking the object. The rays of light, when passed through the lens of the eye, form a particular angle, and the image is formed on the retina, which is the back wall of the eye. The image that is formed is inverted. This image is then interpreted by the brain, which makes us able to understand things. Due to the angle formation, we are able to perceive the height and depth of the object we are seeing. This is explained further in the tutorial on perspective transformation.

As you can see in the figure above, when sunlight falls on the object (in this case a face), it is reflected back, and different rays form different angles as they pass through the lens, so that an inverted image of the object is formed on the back wall. The last portion of the figure shows that the object is interpreted by the brain and re-inverted.

Image formation on analog cameras

In analog cameras, image formation is due to a chemical reaction that takes place on the strip used for image formation. A 35mm strip is used in an analog camera; it is denoted in the figure by the 35mm film cartridge. This strip is coated with silver halide (a chemical substance).

Light is nothing but small particles known as photons. When these photon particles pass through the camera, they react with the silver halide particles on the strip, producing silver, which is the negative of the image. To understand it better, have a look at this equation:

Photons (light particles) + silver halide → silver → image negative

This is just the basics. Image formation involves many other concepts regarding the passage of light inside the camera, and the concepts of shutter, shutter speed, aperture and its opening, but for now we will move on to the next part. Most of these concepts have been discussed in our tutorial on shutter and aperture.

Image formation on digital cameras

In digital cameras, image formation is not due to a chemical reaction; it is a bit more complex than that. In a digital camera, a CCD array of sensors is used for image formation.

Image formation through a CCD array

CCD stands for charge-coupled device. It is an image sensor, and like other sensors it senses values and converts them into an electric signal; in the case of the CCD, it senses the image and converts it into an electric signal. The CCD is actually in the shape of an array or rectangular grid. It is like a matrix, with each cell of the matrix containing a sensor that senses the intensity of photons.
As with analog cameras, in the digital case too, when light falls on the object it reflects back after striking the object and is allowed to enter the camera. Each sensor of the CCD array is itself an analog sensor. When photons of light strike the chip, a small electrical charge is held in each photo sensor. The response of each sensor is directly proportional to the amount of light or photon energy that strikes the surface of the sensor.

Since we have already defined an image as a two dimensional signal, and due to the two dimensional formation of the CCD array, a complete image can be obtained from this CCD array. The array has a limited number of sensors, which means only limited detail can be captured by it. Also, each sensor can hold only one value for the photon particles that strike it. So the number of photons striking (the current) is counted and stored. To measure this accurately, external CMOS sensors are also attached to the CCD array.

Introduction to pixel

The value of each sensor of the CCD array corresponds to the value of an individual pixel. The number of sensors = the number of pixels. It also means that each sensor can have one and only one value.

Storing the image

The charges stored by the CCD array are converted to voltage one pixel at a time. With the help of additional circuits, this voltage is converted into digital information and then stored. Each company that manufactures digital cameras makes its own CCD sensors, including Sony, Mitsubishi, Nikon, Samsung, Toshiba, FujiFilm, Canon, etc. Apart from other factors, the quality of the captured image also depends on the type and quality of the CCD array that has been used.
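To make the readout step concrete, here is a purely hypothetical simulation of the voltage-to-digital conversion described above: a small grid of analog readings is quantized to 8-bit pixel values. The 0-1 V range and the random values are assumptions for illustration only, not a model of any real sensor.

```python
import numpy as np

# Each cell's accumulated charge is read out as a voltage and quantized
# by an ADC into a digital pixel value: one sensor -> one pixel.
rng = np.random.default_rng(0)
voltages = rng.uniform(0.0, 1.0, size=(4, 4))        # simulated readout, 0-1 V
pixels = np.round(voltages * 255).astype(np.uint8)   # 8-bit quantization
print(pixels)                                        # a tiny 4 x 4 "image"
```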

Learning DIP – Concept of Bits Per Pixel

Concept of Bits Per Pixel

Bpp, or bits per pixel, denotes the number of bits per pixel. The number of different colors in an image depends on the depth of color, or bits per pixel.

Bits in mathematics

It is just like playing with binary bits. How many numbers can be represented by one bit?

0
1

How many two-bit combinations can be made?

00
01
10
11

If we devise a formula for the total number of combinations that can be made from bits, it is:

Number of colors = 2^bpp

where bpp denotes bits per pixel. Put 1 in the formula and you get 2; put in 2 and you get 4. It grows exponentially.

Number of different colors

As we said in the beginning, the number of different colors depends on the number of bits per pixel. The table for some bit depths and their colors is given below.

Bits per pixel | Number of colors
1 bpp | 2 colors
2 bpp | 4 colors
3 bpp | 8 colors
4 bpp | 16 colors
5 bpp | 32 colors
6 bpp | 64 colors
7 bpp | 128 colors
8 bpp | 256 colors
10 bpp | 1024 colors
16 bpp | 65536 colors
24 bpp | 16777216 colors (16.7 million colors)
32 bpp | 4294967296 colors (4294 million colors)

This table shows different bits per pixel and the number of colors they can represent.

Shades

You can easily notice the pattern of exponential growth. The famous grayscale image is 8 bpp, meaning it has 256 different colors, or 256 shades. Color images are usually of the 24 bpp format, or 16 bpp. We will see more about other color formats and image types in the tutorial on image types.

Color values

We have previously seen, in the tutorial Concept of Pixel, that a pixel value of 0 denotes the color black.

Black color: Remember, a pixel value of 0 always denotes black. But there is no single fixed value that denotes white.

White color: The value that denotes white can be calculated as:

White = 2^bpp - 1

In the case of 1 bpp, 0 denotes black and 1 denotes white. In the case of 8 bpp, 0 denotes black and 255 denotes white.

Gray color: Once you have calculated the black and white values, you can calculate the pixel value of gray. Gray is the midpoint of black and white: in the case of 8 bpp, the pixel value that denotes gray is 127 or 128 (if you count from 1 instead of 0).

Image storage requirements

After the discussion of bits per pixel, we now have everything we need to calculate the size of an image.

Image size

The size of an image depends on three things:

Number of rows
Number of columns
Number of bits per pixel

The formula for calculating the size is:

Size of an image = rows * cols * bpp

Suppose we have an image with 1024 rows and 1024 columns. Since it is a grayscale image, it has 256 different shades of gray, i.e. 8 bits per pixel. Putting these values in the formula, we get

Size of an image = rows * cols * bpp = 1024 * 1024 * 8 = 8388608 bits

But since that is not a unit we normally use, we convert it into our format:

Converting to bytes = 8388608 / 8 = 1048576 bytes
Converting to kilobytes = 1048576 / 1024 = 1024 kb
Converting to megabytes = 1024 / 1024 = 1 Mb

That is how an image size is calculated and stored. Given the size of the image and the bits per pixel, you can also calculate the rows and columns of the image, provided the image is square (same number of rows and columns). A small worked sketch follows below.
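The following Python sketch ties the chapter's formulas together: the number of colors, the white value, and the storage size. The function name image_stats is my own.

```python
def image_stats(rows, cols, bpp):
    colors = 2 ** bpp          # number of representable colors
    white = colors - 1         # pixel value that denotes white
    size_bits = rows * cols * bpp   # storage requirement in bits
    return colors, white, size_bits

colors, white, bits = image_stats(1024, 1024, 8)
print(colors, white)                 # 256 255
print(bits)                          # 8388608 bits
print(bits / 8 / 1024 / 1024, "MB")  # 1.0 MB
```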