Line Generation Algorithm

A line connects two points and is a basic element in graphics. To draw a line, you need the two end points between which it is drawn. In the following algorithms, we refer to the first point of the line as $(X_{0}, Y_{0})$ and the second point as $(X_{1}, Y_{1})$.

DDA Algorithm

The Digital Differential Analyzer (DDA) algorithm is the simplest line generation algorithm. It is explained step by step here.

Step 1 − Get the input of the two end points $(X_{0}, Y_{0})$ and $(X_{1}, Y_{1})$.

Step 2 − Calculate the difference between the two end points.

```
dx = X1 - X0
dy = Y1 - Y0
```

Step 3 − Based on the difference calculated in step 2, identify the number of steps needed to put pixels. If |dx| > |dy|, you need more steps in the x coordinate; otherwise in the y coordinate.

```
if (absolute(dx) > absolute(dy))
   Steps = absolute(dx);
else
   Steps = absolute(dy);
```

Step 4 − Calculate the increment in the x coordinate and the y coordinate.

```
Xincrement = dx / (float) Steps;
Yincrement = dy / (float) Steps;
```

Step 5 − Put the pixel by successively incrementing the x and y coordinates, and complete the drawing of the line.

```
x = X0;
y = Y0;
putpixel(Round(x), Round(y));
for (int v = 0; v < Steps; v++) {
   x = x + Xincrement;
   y = y + Yincrement;
   putpixel(Round(x), Round(y));
}
```

Bresenham’s Line Generation

The Bresenham algorithm is another incremental scan conversion algorithm. Its big advantage is that it uses only integer calculations. It moves across the x axis in unit intervals and at each step chooses between two different y coordinates. For example, as shown in the following illustration, from position (2, 3) you need to choose between (3, 3) and (3, 4). You would like the point that is closer to the original line. At sample position $X_{k}+1$, the vertical separations from the mathematical line are labelled $d_{upper}$ and $d_{lower}$.
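The DDA steps above can be sketched as a short program. The following is a minimal Python sketch (the function name and the use of a list instead of a real `putpixel` call are my own illustration, not part of the original tutorial); it rounds half up for non-negative coordinates:

```python
def dda_line(x0, y0, x1, y1):
    """Return the pixel positions for a line from (x0, y0) to (x1, y1)
    using the DDA algorithm described above."""
    dx = x1 - x0
    dy = y1 - y0
    # Step 3: the axis with the larger absolute difference decides the step count.
    steps = max(abs(dx), abs(dy))
    if steps == 0:                      # both end points coincide
        return [(x0, y0)]
    # Step 4: per-step floating-point increments.
    x_inc = dx / steps
    y_inc = dy / steps
    # Step 5: successively increment and round to pixel positions.
    x, y = float(x0), float(y0)
    pixels = [(int(x + 0.5), int(y + 0.5))]
    for _ in range(steps):
        x += x_inc
        y += y_inc
        pixels.append((int(x + 0.5), int(y + 0.5)))
    return pixels

print(dda_line(2, 3, 8, 6))
```

Note that the line is traced in max(|dx|, |dy|) + 1 pixel positions, including both end points, and that each step requires a floating-point addition and a rounding operation, which is the main cost Bresenham's algorithm avoids.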
From the above illustration, the y coordinate on the mathematical line at $x_{k}+1$ is −

$$y = m(x_{k}+1) + b$$

So, $d_{lower}$ and $d_{upper}$ are given as follows −

$$d_{lower} = y - y_{k} = m(x_{k} + 1) + b - y_{k}$$

and

$$d_{upper} = (y_{k} + 1) - y = y_{k} + 1 - m(x_{k} + 1) - b$$

You can use these to make a simple decision about which pixel is closer to the mathematical line. This decision is based on the difference between the two pixel positions.

$$d_{lower} - d_{upper} = 2m(x_{k} + 1) - 2y_{k} + 2b - 1$$

Let us substitute m with dy/dx, where dx and dy are the differences between the end points.

$$dx(d_{lower} - d_{upper}) = dx\left(2\frac{dy}{dx}(x_{k} + 1) - 2y_{k} + 2b - 1\right)$$

$$= 2dy.x_{k} - 2dx.y_{k} + 2dy + dx(2b - 1)$$

$$= 2dy.x_{k} - 2dx.y_{k} + C$$

So, a decision parameter $p_{k}$ for the kth step along the line is given by −

$$p_{k} = dx(d_{lower} - d_{upper}) = 2dy.x_{k} - 2dx.y_{k} + C$$

The sign of the decision parameter $p_{k}$ is the same as that of $d_{lower} - d_{upper}$. If $p_{k}$ is negative, then choose the lower pixel; otherwise choose the upper pixel. Remember, the coordinate changes occur along the x axis in unit steps, so everything can be done with integer calculations. At step k+1, the decision parameter is given as −

$$p_{k+1} = 2dy.x_{k+1} - 2dx.y_{k+1} + C$$

Subtracting $p_{k}$ from this, we get −

$$p_{k+1} - p_{k} = 2dy(x_{k+1} - x_{k}) - 2dx(y_{k+1} - y_{k})$$

But $x_{k+1}$ is the same as $x_{k} + 1$. So −

$$p_{k+1} = p_{k} + 2dy - 2dx(y_{k+1} - y_{k})$$

where $y_{k+1} - y_{k}$ is either 0 or 1, depending on the sign of $p_{k}$. The first decision parameter $p_{0}$, evaluated at $(x_{0}, y_{0})$, is given as −

$$p_{0} = 2dy - dx$$

Now, keeping in mind all the above points and calculations, here is the Bresenham algorithm for slope m < 1 −

Step 1 − Input the two end points of the line, storing the left end point in $(x_{0}, y_{0})$.

Step 2 − Plot the point $(x_{0}, y_{0})$.
Step 3 − Calculate the constants dx, dy, 2dy, and (2dy − 2dx), and get the first value of the decision parameter as −

$$p_{0} = 2dy - dx$$

Step 4 − At each $x_{k}$ along the line, starting at k = 0, perform the following test −

If $p_{k}$ < 0, the next point to plot is $(x_{k}+1, y_{k})$ and

$$p_{k+1} = p_{k} + 2dy$$

Otherwise, the next point to plot is $(x_{k}+1, y_{k}+1)$ and

$$p_{k+1} = p_{k} + 2dy - 2dx$$

Step 5 − Repeat step 4 (dx − 1) times.

For m > 1, find out whether you need to increment x each time you increment y. After solving, the equation for the decision parameter $p_{k}$ will be very similar; just the x and y in the equation get interchanged.

Mid-Point Algorithm

The mid-point algorithm is due to Bresenham and was modified by Pitteway and Van Aken. Assume that you have already put the point P at coordinate (x, y) and that the slope of the line satisfies 0 ≤ k ≤ 1, as shown in the following illustration. Now you need to decide whether to put the next point at E or N. This can be chosen by identifying the intersection point Q closest to the point N or E. If the intersection point Q is closer to the point N, then N is taken as the next point; otherwise E. To determine that, first calculate the mid-point M(x+1, y+½). If the intersection point Q of the line with the vertical line connecting E and N is below M, then take E as the next point; otherwise take N as the next point. In order to check this, consider the implicit equation −

F(x, y) = mx + b − y

For positive m, at any given x: if the point (x, y) is on the line, then F(x, y) = 0; if it is above the line, then F(x, y) < 0; and if it is below the line, then F(x, y) > 0.
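The Bresenham steps 1–5 above can be turned into a short program. The following is a minimal Python sketch (the function name is my own, not from the tutorial) for the case 0 < m < 1 with the left end point given first; the loop runs dx times so that the right end point is also plotted:

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham's algorithm for slope 0 < m < 1 and x0 < x1,
    using only integer arithmetic as described above."""
    dx = x1 - x0
    dy = y1 - y0
    p = 2 * dy - dx              # Step 3: initial decision parameter p0
    x, y = x0, y0
    pixels = [(x, y)]            # Step 2: plot the first point
    for _ in range(dx):          # Steps 4-5: one unit step in x per iteration
        x += 1
        if p < 0:
            p += 2 * dy                  # keep the same y (lower pixel)
        else:
            y += 1
            p += 2 * dy - 2 * dx         # move to the upper pixel
        pixels.append((x, y))
    return pixels

print(bresenham_line(0, 0, 8, 4))
```

Notice that, unlike the DDA sketch, the loop body contains only integer additions and comparisons, which is exactly the advantage claimed for the algorithm.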
Computer Graphics Basics
Computer graphics is the art of drawing pictures on computer screens with the help of programming. It involves the computation, creation, and manipulation of data. In other words, computer graphics is a rendering tool for the generation and manipulation of images.

Cathode Ray Tube

The primary output device in a graphical system is the video monitor. The main element of a video monitor is the Cathode Ray Tube (CRT), shown in the following illustration. The operation of a CRT is very simple −

The electron gun emits a beam of electrons (cathode rays).

The electron beam passes through focusing and deflection systems that direct it towards specified positions on the phosphor-coated screen.

When the beam hits the screen, the phosphor emits a small spot of light at each position contacted by the electron beam.

The picture is redrawn by quickly directing the electron beam back over the same screen points.

There are two ways (random scan and raster scan) by which we can display an object on the screen.

Raster Scan

In a raster scan system, the electron beam is swept across the screen, one row at a time from top to bottom. As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots. The picture definition is stored in a memory area called the Refresh Buffer or Frame Buffer. This memory area holds the set of intensity values for all the screen points. The stored intensity values are then retrieved from the refresh buffer and “painted” on the screen one row (scan line) at a time, as shown in the following illustration. Each screen point is referred to as a pixel (picture element) or pel. At the end of each scan line, the electron beam returns to the left side of the screen to begin displaying the next scan line.
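The frame buffer idea described above can be illustrated with a tiny Python sketch (my own illustration, not part of the tutorial): the buffer is just a 2D array of intensity values, and “painting” the screen amounts to reading it back one scan line at a time, top to bottom:

```python
WIDTH, HEIGHT = 8, 4

# The frame buffer holds one intensity value per screen point (0 = off, 1 = on).
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def putpixel(x, y, intensity=1):
    """Store an intensity value at screen position (x, y)."""
    frame_buffer[y][x] = intensity

# Turn on a horizontal run of pixels on scan line 2.
for x in range(3, 7):
    putpixel(x, 2)

# "Painting": retrieve the buffer one scan line at a time, top to bottom.
for scan_line in frame_buffer:
    print("".join("#" if v else "." for v in scan_line))
```

A real raster display repeats this read-out 30 or more times per second to refresh the screen; the sketch only shows the data layout.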
Random Scan (Vector Scan)

In this technique, the electron beam is directed only to the parts of the screen where the picture is to be drawn, rather than scanning from left to right and top to bottom as in raster scan. It is also called vector display, stroke-writing display, or calligraphic display. The picture definition is stored as a set of line-drawing commands in an area of memory referred to as the refresh display file. To display a specified picture, the system cycles through the set of commands in the display file, drawing each component line in turn. After all the line-drawing commands are processed, the system cycles back to the first line command in the list. Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each second.

Application of Computer Graphics

Computer graphics has numerous applications, some of which are listed below −

Graphical user interfaces (GUIs) − A graphic, mouse-oriented paradigm which allows the user to interact with a computer.

Business presentation graphics − “A picture is worth a thousand words”.

Cartography − Drawing maps.

Weather maps − Real-time mapping, symbolic representations.

Satellite imaging − Geodesic images.

Photo enhancement − Sharpening blurred photos.

Medical imaging − MRIs, CAT scans, etc. − non-invasive internal examination.

Engineering drawings − Mechanical, electrical, civil, etc. − replacing the blueprints of the past.

Typography − The use of character images in publishing − replacing the hard type of the past.

Architecture − Construction plans, exterior sketches − replacing the blueprints and hand drawings of the past.

Art − Computers provide a new medium for artists.

Training − Flight simulators, computer-aided instruction, etc.

Entertainment − Movies and games.

Simulation and modeling − Replacing physical modeling and enactments.
Visible Surface Detection
When we view a picture containing non-transparent objects and surfaces, we cannot see those objects that lie behind the objects closer to the eye. We must remove these hidden surfaces to get a realistic screen image. The identification and removal of these surfaces is called the hidden-surface problem.

There are two approaches for solving the hidden-surface problem − the object-space method and the image-space method. The object-space method is implemented in the physical coordinate system and the image-space method is implemented in the screen coordinate system. When we want to display a 3D object on a 2D screen, we need to identify those parts of the screen that are visible from a chosen viewing position.

Depth Buffer (Z-Buffer) Method

This method was developed by Catmull. It is an image-space approach. The basic idea is to test the Z-depth of each surface to determine the closest (visible) surface. In this method, each surface is processed separately, one pixel position at a time across the surface. The depth values for a pixel are compared and the closest surface determines the color to be displayed in the frame buffer. It is applied very efficiently to polygon surfaces. Surfaces can be processed in any order. To distinguish the closer polygons from the farther ones, two buffers, named the frame buffer and the depth buffer, are used.

The depth buffer is used to store a depth value for each (x, y) position as surfaces are processed (0 ≤ depth ≤ 1). The frame buffer is used to store the intensity or color value at each position (x, y). The z-coordinates are usually normalized to the range [0, 1]. A z-coordinate of 0 indicates the back clipping plane and a z-coordinate of 1 indicates the front clipping plane.

Algorithm

Step 1 − Set the buffer values −

Depthbuffer (x, y) = 0
Framebuffer (x, y) = background color

Step 2 − Process each polygon (one at a time) −

For each projected (x, y) pixel position of a polygon, calculate depth z.
If z > depthbuffer (x, y), compute the surface color and set

depthbuffer (x, y) = z
framebuffer (x, y) = surfacecolor (x, y)

Advantages

It is easy to implement.

It reduces the speed problem if implemented in hardware.

It processes one object at a time.

Disadvantages

It requires large memory.

It is a time-consuming process.

Scan-Line Method

It is an image-space method for identifying visible surfaces. This method keeps depth information for only a single scan line. To obtain one scan line of depth values, we must group and process all polygons intersecting a given scan line at the same time before processing the next scan line. Two important tables, the edge table and the polygon table, are maintained for this.

The Edge Table − It contains the coordinate end points of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.

The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and possibly pointers into the edge table.

To facilitate the search for surfaces crossing a given scan line, an active list of edges is formed. The active list stores only those edges that cross the scan line, in order of increasing x. Also, a flag is set for each surface to indicate whether a position along a scan line is inside or outside the surface. Pixel positions across each scan line are processed from left to right. At the left intersection with a surface, the surface flag is turned on, and at the right intersection, it is turned off. You only need to perform depth calculations when multiple surfaces have their flags turned on at a certain scan-line position.

Area-Subdivision Method

The area-subdivision method takes advantage of locating those view areas that represent part of a single surface. Divide the total viewing area into smaller and smaller rectangles until each small area is the projection of part of a single visible surface or no surface at all.
Continue this process until the subdivisions are easily analyzed as belonging to a single surface or until they are reduced to the size of a single pixel. An easy way to do this is to successively divide the area into four equal parts at each step. There are four possible relationships that a surface can have with a specified area boundary −

Surrounding surface − One that completely encloses the area.

Overlapping surface − One that is partly inside and partly outside the area.

Inside surface − One that is completely inside the area.

Outside surface − One that is completely outside the area.

The tests for determining surface visibility within an area can be stated in terms of these four classifications. No further subdivision of a specified area is needed if one of the following conditions is true −

All surfaces are outside surfaces with respect to the area.

Only one inside, overlapping, or surrounding surface is in the area.

A surrounding surface obscures all other surfaces within the area boundaries.

Back-Face Detection

A fast and simple object-space method for identifying the back faces of a polyhedron is based on the “inside-outside” tests. A point (x, y, z) is “inside” a polygon surface with plane parameters A, B, C, and D if

$$Ax + By + Cz + D < 0$$

When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position). We can simplify this test by considering the normal vector N to the polygon surface, which has Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye (or “camera”) position, then this polygon is a back face if

V.N > 0

Furthermore, if object descriptions are converted to projection coordinates and the viewing direction is parallel to the viewing z-axis, then V = (0, 0, V_z) and V.N = V_z C, so that we only need to consider the sign of C, the z-component of the normal vector N.
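The back-face test above is easy to sketch in code. The following is a minimal Python illustration (the function names are my own, and a viewer looking down the negative z-axis is an assumed convention, not stated in the tutorial):

```python
def dot(u, v):
    """Dot product of two 3D vectors."""
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def is_back_face(view_dir, normal):
    """A polygon is a back face if the viewing-direction vector V and the
    surface normal N = (A, B, C) satisfy V . N > 0, as derived above."""
    return dot(view_dir, normal) > 0

# Assumed setup: the viewer looks down the negative z-axis, so V = (0, 0, -1).
V = (0, 0, -1)
print(is_back_face(V, (0, 0, 1)))    # normal points toward the viewer: front face
print(is_back_face(V, (0, 0, -1)))   # normal points away from the viewer: back face
```

With V parallel to the z-axis, the test reduces to checking the sign of the normal's z-component C, matching the simplification in the text.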
3D Transformation
Rotation

3D rotation is not the same as 2D rotation. In 3D rotation, we have to specify the angle of rotation along with the axis of rotation. We can perform 3D rotation about the X, Y, and Z axes. The rotations are represented in matrix form as below −

$$R_{x}(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & \cos\theta & -\sin\theta & 0\\ 0 & \sin\theta & \cos\theta & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$R_{y}(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0\\ 0 & 1 & 0 & 0\\ -\sin\theta & 0 & \cos\theta & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$R_{z}(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0\\ \sin\theta & \cos\theta & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The following figure explains the rotation about the various axes.

Scaling

You can change the size of an object using a scaling transformation. In the scaling process, you either expand or compress the dimensions of the object. Scaling is achieved by multiplying the original coordinates of the object by the scaling factors to get the desired result. The following figure shows the effect of 3D scaling.

In a 3D scaling operation, three coordinates are used. Let us assume that the original coordinates are (X, Y, Z), the scaling factors are $(S_{x}, S_{y}, S_{z})$ respectively, and the produced coordinates are (X′, Y′, Z′). This can be represented mathematically as shown below −

$$S = \begin{bmatrix} S_{x} & 0 & 0 & 0\\ 0 & S_{y} & 0 & 0\\ 0 & 0 & S_{z} & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$P' = P \cdot S$$

$$[X' \quad Y' \quad Z' \quad 1] = [X \quad Y \quad Z \quad 1] \begin{bmatrix} S_{x} & 0 & 0 & 0\\ 0 & S_{y} & 0 & 0\\ 0 & 0 & S_{z} & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} = [X.S_{x} \quad Y.S_{y} \quad Z.S_{z} \quad 1]$$

Shear

A transformation that slants the shape of an object is called a shear transformation. As in 2D shear, we can shear an object along the X-axis, Y-axis, or Z-axis in 3D. As shown in the above figure, there is a coordinate P.
You can shear it to get a new coordinate P′, which can be represented in 3D matrix form as below −

$$Sh = \begin{bmatrix} 1 & sh_{x}^{y} & sh_{x}^{z} & 0\\ sh_{y}^{x} & 1 & sh_{y}^{z} & 0\\ sh_{z}^{x} & sh_{z}^{y} & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$P' = P \cdot Sh$$

$$X' = X + sh_{x}^{y}Y + sh_{x}^{z}Z$$
$$Y' = sh_{y}^{x}X + Y + sh_{y}^{z}Z$$
$$Z' = sh_{z}^{x}X + sh_{z}^{y}Y + Z$$

Transformation Matrices

A transformation matrix is the basic tool for transformation. A matrix with n × m dimensions is multiplied with the coordinates of objects. Usually 3 × 3 or 4 × 4 matrices are used for transformation. For example, consider the following matrices for the various operations.

Translation matrix −

$$T = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ t_{x} & t_{y} & t_{z} & 1 \end{bmatrix}$$

Scaling matrix −

$$S = \begin{bmatrix} S_{x} & 0 & 0 & 0\\ 0 & S_{y} & 0 & 0\\ 0 & 0 & S_{z} & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Shear matrix −

$$Sh = \begin{bmatrix} 1 & sh_{x}^{y} & sh_{x}^{z} & 0\\ sh_{y}^{x} & 1 & sh_{y}^{z} & 0\\ sh_{z}^{x} & sh_{z}^{y} & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Rotation matrices −

$$R_{x}(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & \cos\theta & -\sin\theta & 0\\ 0 & \sin\theta & \cos\theta & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$R_{y}(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0\\ 0 & 1 & 0 & 0\\ -\sin\theta & 0 & \cos\theta & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$R_{z}(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0\\ \sin\theta & \cos\theta & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$
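The row-vector convention used in this chapter (P′ = P ∙ M, with the translation components in the bottom row of T) can be checked with a small Python sketch (my own illustration; the helper names are not from the tutorial):

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(point, m):
    """Apply a 4x4 matrix to a 3D point as a row vector: P' = P . M."""
    p = [point[0], point[1], point[2], 1]          # homogeneous row vector
    out = [sum(p[k] * m[k][j] for k in range(4)) for j in range(4)]
    return tuple(out[:3])

# Scaling matrix with Sx = Sy = Sz = 2.
S = [[2, 0, 0, 0],
     [0, 2, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 1]]

# Translation matrix with (tx, ty, tz) = (1, 0, 0) in the bottom row.
T = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 1]]

print(transform((1, 2, 3), S))              # scaling only
print(transform((1, 2, 3), mat_mul(S, T)))  # scale, then translate
```

With row vectors, composing transformations reads left to right: P ∙ (S ∙ T) scales first and translates second, which is the opposite ordering from the column-vector convention used in some other texts.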
Computer Graphics Home
Computer Graphics Tutorial

Displaying a picture of any size on a computer screen is a difficult process. Computer graphics is used to simplify this process. Various algorithms and techniques are used to generate graphics on computers. This tutorial will help you understand how all of these are processed by the computer to give the user a rich visual experience.

Audience

This tutorial has been prepared for students who don’t know how graphics are used in computers. It explains the basics of graphics and how they are implemented in computers to generate various visuals.

Prerequisites

Before you start proceeding with this tutorial, we assume that you are already aware of the basic concepts of the C programming language and basic mathematics.