
Mahotas – Distance Transform

Distance transformation is a technique that calculates the distance between each pixel and the nearest background pixel. It is based on distance metrics, which define how the distance between two points is measured in a space.

Applying distance transformation to an image produces a distance map. In the distance map, dark shades are assigned to pixels near the boundary, indicating a short distance, and lighter shades are assigned to pixels further away, indicating a larger distance to the nearest background pixel.

Distance Transform in Mahotas

In Mahotas, we can use the mahotas.distance() function to perform distance transformation on an image. It uses an iterative approach to create a distance map. The function first initializes the distance values for all pixels in the image: the background pixels are assigned a distance value of zero, while the foreground pixels are initially assigned a distance value of infinity. Then, the function updates the distance value of each foreground pixel based on the distances of its neighboring pixels. This continues until the distance values of all the foreground pixels have been computed.

The mahotas.distance() function

The mahotas.distance() function takes an image as input and returns a distance map as output. The distance map is an image that contains the distance between each pixel in the input image and the nearest background pixel.

Syntax

Following is the basic syntax of the distance() function in mahotas −

mahotas.distance(bw, metric='euclidean2')

Where,

bw − It is the input image.

metric (optional) − It specifies the distance metric used to determine the distance between a pixel and a background pixel (default is 'euclidean2', the squared Euclidean distance).

Example

In the following example, we are performing distance transformation on an image using the mh.distance() function.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Finding the distance map
distance_map = mh.distance(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the distance transformed image
axes[1].imshow(distance_map)
axes[1].set_title("Distance Transformed Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Using Labeled Image

We can also perform distance transformation using a labeled image. A labeled image is an image where distinct regions are assigned unique labels, segmenting the image into different regions.

In mahotas, we can apply distance transformation to an input image by first reducing its noise using the mh.gaussian_filter() function. Then, we use the mh.label() function to separate the foreground regions from the background regions. We can then create a distance map using the mh.distance() function. This calculates the distance between the pixels of the foreground regions and the pixels of the background region.

Example

In the example mentioned below, we are finding the distance map of a filtered labeled image.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying gaussian filtering
gauss_image = mh.gaussian_filter(image, 3)
gauss_image = (gauss_image > gauss_image.mean())

# Converting it to a labeled image
labeled, num_objects = mh.label(gauss_image)

# Finding the distance map
distance_map = mh.distance(labeled)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the distance transformed image
axes[1].imshow(distance_map)
axes[1].set_title("Distance Transformed Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Using Euclidean Distance

Another way of performing distance transformation on an image is using Euclidean distance. Euclidean distance is the straight-line distance between two points in a coordinate system, calculated as the square root of the sum of the squared differences between the coordinates.

For example, let's say there are two points A and B with coordinates (2, 3) and (5, 7) respectively. The squared differences of the x and y coordinates are (5 − 2)² = 9 and (7 − 3)² = 16. Their sum is 9 + 16 = 25, and the square root of this is 5, which is the Euclidean distance between point A and point B.

In mahotas, we can use Euclidean distance instead of the default euclidean2 as the distance metric. To do this, we pass the value 'euclidean' to the metric parameter.

Note − The metric should be written as 'euclidean' (in quotes), since the data type of the metric parameter is string.

Example

In this example, we are using the euclidean distance type for distance transformation of an image.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sun.png")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying gaussian filtering
gauss_image = mh.gaussian_filter(image, 3)
gauss_image = (gauss_image > gauss_image.mean())

# Finding the distance map with the euclidean metric
distance_map = mh.distance(gauss_image, metric="euclidean")

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the distance transformed image
axes[1].imshow(distance_map)
axes[1].set_title("Distance Transformed Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −
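The relationship between the 'euclidean2' and 'euclidean' metrics described above can be checked with a small brute-force sketch in plain NumPy (no Mahotas call involved). For every foreground pixel it computes the distance to the nearest background pixel under both metrics; the helper name and the tiny test image are illustrative, not part of the Mahotas API:

```python
import numpy as np

def brute_force_distance(bw, metric="euclidean2"):
    """Distance of each foreground (non-zero) pixel to the
    nearest background (zero) pixel, computed by brute force."""
    bg = np.argwhere(bw == 0)                  # background coordinates
    out = np.zeros(bw.shape, dtype=float)
    for y, x in np.argwhere(bw != 0):          # foreground pixels only
        d2 = ((bg - (y, x)) ** 2).sum(axis=1)  # squared distances to all bg pixels
        out[y, x] = d2.min() if metric == "euclidean2" else np.sqrt(d2.min())
    return out

# A tiny binary image: a single background pixel in the top-left corner
bw = np.ones((3, 3), dtype=int)
bw[0, 0] = 0

d2 = brute_force_distance(bw, "euclidean2")
d = brute_force_distance(bw, "euclidean")
print(d2[2, 2], d[2, 2])   # squared distance 8.0, euclidean distance sqrt(8)
```

On small inputs, mh.distance(bw) should agree with this brute-force result on the foreground pixels, which makes the sketch a handy sanity check when experimenting with the metric parameter.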


Mahotas – Zernike Moments

Like Zernike features, Zernike moments are a set of mathematical values that describe the shape of objects in an image. They provide specific details about the shape, such as how round or symmetrical it is, or whether particular patterns are present.

Zernike moments have some special properties, as discussed below −

Size Invariance − They can describe a shape regardless of its size. So, whether you have a small or a large object, the Zernike moments will still capture its shape accurately.

Rotation Invariance − If you rotate the object in the image, the Zernike moments will remain the same.

Scaling Invariance − If you resize the object in the image, the Zernike moments will remain the same.

Zernike moments break the shape down into smaller pieces using special mathematical functions called Zernike polynomials. These polynomials act like building blocks; by combining different Zernike polynomials, we can recreate and represent the unique features of the object's shape.

Zernike Moments in Mahotas

To calculate the Zernike moments in mahotas, we can use the mahotas.features.zernike_moments() function. In Mahotas, Zernike moments are calculated by generating a set of Zernike polynomials, which are special mathematical functions representing various shapes and contours. These polynomials act as building blocks for analyzing the object's shape. After that, the Zernike moments are computed by projecting the shape of the object onto the Zernike polynomials. These moments capture important shape characteristics.

The mahotas.features.zernike_moments() function

The mahotas.features.zernike_moments() function takes two main arguments: the image and the maximum radius for the Zernike polynomials. It returns a 1-D array of the Zernike moments of the image.
Syntax

Following is the basic syntax of the mahotas.features.zernike_moments() function −

mahotas.features.zernike_moments(im, radius, degree=8, cm={center_of_mass(im)})

Where,

im − It is the input image on which the Zernike moments will be computed.

radius − It defines the radius, in pixels, of the circular region over which the Zernike moments will be calculated. The area outside the circle defined by this radius, centered around the center of mass, is ignored.

degree (optional) − It specifies the maximum degree of the Zernike polynomials to use, which determines how many moments are calculated. By default, the degree value is 8.

cm (optional) − It specifies the center of mass to use. By default, the center of mass of the image is used.

Example

Following is a basic example of calculating the Zernike moments of an image with the default degree value −

import mahotas as mh

# Loading the image
image = mh.imread("sun.png", as_grey=True)

# Computing the Zernike moments
moments = mh.features.zernike_moments(image, radius=10)

# Printing the moments for shape recognition
print(moments)

Output

After executing the above code, we get the output as follows −

[0.31830989 0.00534998 0.00281258 0.0057374 0.01057919 0.00429721 0.00178094 0.00918145 0.02209622 0.01597089 0.00729495 0.00831211 0.00364554 0.01171028 0.02789188 0.01186194 0.02081316 0.01146935 0.01319499 0.03367388 0.01580632 0.01314671 0.02947629 0.01304526 0.00600012]

Using Custom Center of Mass

The center of mass of an image is the point where its mass is evenly distributed. A custom center of mass is a point that is not necessarily the center of mass of the image. This can be useful when you want to use a different center for your calculations; for example, you might want to use the center of mass of a particular object in an image to calculate the Zernike moments of that object.
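Before supplying a custom center, it can help to see what a center of mass actually is. As an illustrative sketch (not the Mahotas implementation), the intensity-weighted center of mass of an image can be computed with NumPy; the helper name is hypothetical:

```python
import numpy as np

def center_of_mass(im):
    """Intensity-weighted center of mass: the average (row, col)
    coordinate, weighted by pixel intensity."""
    total = im.sum()
    rows, cols = np.indices(im.shape)
    return (rows * im).sum() / total, (cols * im).sum() / total

# A tiny image with all of its mass concentrated in one pixel
im = np.zeros((5, 5))
im[1, 3] = 10.0
r, c = center_of_mass(im)
print(r, c)   # the center of mass lies at row 1.0, column 3.0
```

Mahotas ships its own mahotas.center_of_mass() function for this purpose; the sketch only shows the idea behind the default value of the cm parameter.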
To calculate the Zernike moments of an image using a custom center of mass in mahotas, we need to pass the cm parameter to the mahotas.features.zernike_moments() function. The cm parameter takes a tuple of two numbers, which represent the coordinates of the custom center of mass.

Example

Here, we are trying to calculate the Zernike moments of an image using a custom center of mass −

import mahotas
import numpy as np

# Loading the image
image = mahotas.imread("nature.jpeg", as_grey=True)

# Defining a custom center of mass
center_of_mass = np.array([100, 100])

# Calculating the Zernike moments of the image, using the custom center of mass
zernike_moments = mahotas.features.zernike_moments(image, radius=5, cm=center_of_mass)

# Printing the Zernike moments
print(zernike_moments)

Output

Following is the output of the above code −

[3.18309886e-01 3.55572603e-04 3.73132619e-02 5.98944983e-04 3.23622041e-04 1.72293481e-04 9.16757235e-02 3.35704966e-04 7.09426259e-02 1.17847972e-04 2.12625026e-04 3.06537827e-04 1.94379185e-01 1.32093249e-04 8.54616882e-02 1.83274207e-04 1.86728282e-04 3.08004108e-04 4.79437809e-04 1.97726337e-04 3.61630733e-01 5.27467687e-04 8.25534856e-02 7.75593823e-06 1.99419391e-01]

Using a Specific Order

The order (degree) of a Zernike moment is a measure of the complexity of the shape it can represent: the higher the order, the more complex the shape. To compute the Zernike moments of an image up to a specific order in mahotas, we need to pass the degree parameter to the mahotas.features.zernike_moments() function.

Example

In the following example, we are trying to compute the Zernike moments of an image with a specified order.
import mahotas
import numpy as np

# Loading the image
image = mahotas.imread("nature.jpeg", as_grey=True)

# Calculating the Zernike moments of the image with radius 1 and degree 4
zernike_moments = mahotas.features.zernike_moments(image, 1, 4)

# Printing the Zernike moments
print(zernike_moments)

Output

Output of the above code is as shown below −

[0.31830989 0.17086131 0.03146824 0.1549947 0.30067136 0.5376049 0.30532715 0.33032683 0.47908119]
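The lengths of the two output arrays above (25 values for the default degree 8, 9 values for degree 4) follow from the standard Zernike indexing, under which each degree n contributes floor(n/2) + 1 moments (one per repetition m with 0 ≤ m ≤ n and n − m even). A short sketch, assuming that conventional indexing:

```python
def num_zernike_moments(degree):
    """Count of Zernike moments up to a maximum polynomial degree,
    assuming one moment per (n, m) pair with 0 <= m <= n and n - m even."""
    return sum(n // 2 + 1 for n in range(degree + 1))

print(num_zernike_moments(8))  # 25, matching the default-degree output above
print(num_zernike_moments(4))  # 9, matching the degree-4 output above
```

This makes it easy to predict the length of the feature vector before computing it, which matters when concatenating Zernike moments with other features.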


Mahotas – Regional Maxima of Image

Regional maxima are regions of an image where the pixel intensity is higher than in all immediately surrounding pixels. In an image, the regions which form regional maxima are brighter than every region touching them.

A regional maximum is a connected set of pixels of constant intensity whose entire outer boundary has strictly lower intensity. This differs from a local maximum, which compares a single pixel only against its immediate neighborhood. Regional maxima are a subset of local maxima: every regional maximum is also a local maximum, but not every local maximum is a regional maximum.

An image can contain multiple regional maxima, and different regional maxima can have different intensities − each one only needs to be higher than its own surroundings, not the highest in the entire image.

Regional Maxima of Image in Mahotas

In Mahotas, we can find the regional maxima in an image using the mahotas.regmax() function. Regional maxima are identified as intensity peaks within an image because they represent high intensity regions. In the output, the regional maxima points are highlighted in white while all other points are colored black.

The mahotas.regmax() function

The mahotas.regmax() function extracts regional maxima from an input grayscale image. It outputs an image where 1s represent the presence of regional maxima points and 0s represent normal points.

The regmax() function uses a morphological reconstruction-based approach to find the regional maxima. In this approach, each candidate maximum region is compared with its neighbors; if a neighbor is found to have a higher intensity, the candidate is discarded. This process continues until no candidate with a higher-intensity neighbor remains, so that only the true regional maxima are left.

Syntax

Following is the basic syntax of the regmax() function in mahotas −

mahotas.regmax(f, Bc={3x3 cross}, out={np.empty(f.shape, bool)})

Where,

f − It is the input grayscale image.

Bc (optional) − It is the structuring element used for connectivity.
out (optional) − It is the output array of Boolean data type (defaults to a new array of the same size as f).

Example

In the following example, we are getting the regional maxima of an image using the mh.regmax() function.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Getting the regional maxima
regional_maxima = mh.regmax(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the regional maxima
axes[1].imshow(regional_maxima, cmap="gray")
axes[1].set_title("Regional Maxima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Using Custom Structuring Element

We can also use a custom structuring element to get the regional maxima from an image. A structuring element is a binary array of odd dimensions, consisting of ones and zeroes, that defines the connectivity pattern of the neighborhood pixels. The ones indicate the neighboring pixels that are included in the connectivity analysis, while the zeros represent the neighbors that are excluded or ignored.

In mahotas, while extracting regional maxima we can use a custom structuring element to define the connectivity of neighboring pixels. We do this by first creating an odd-dimension structuring element using the numpy.array() function. Then, we pass this custom structuring element to the Bc parameter of the regmax() function.

For example, let's consider the custom structuring element: [[0, 0, 0, 0, 1], [0, 0, 1, 0, 0], [1, 0, 0, 0, 0], [0, 0, 0, 1, 0], [0, 1, 0, 0, 0]].
This structuring element defines a sparse, custom connectivity pattern: for each pixel, only the positions marked with a 1 (relative to the center of the element) are considered its neighbors.

Example

In this example, we are using a custom structuring element to get the regional maxima of an image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sun.png")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Setting a custom structuring element
struct_element = np.array([[0, 0, 0, 0, 1],
                           [0, 0, 1, 0, 0],
                           [1, 0, 0, 0, 0],
                           [0, 0, 0, 1, 0],
                           [0, 1, 0, 0, 0]])

# Getting the regional maxima
regional_maxima = mh.regmax(image, Bc=struct_element)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the regional maxima
axes[1].imshow(regional_maxima, cmap="gray")
axes[1].set_title("Regional Maxima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Using a Specific Region of an Image

We can also find the regional maxima of a specific region of an image. A specific region refers to a small part of a larger image, and it can be extracted by cropping the original image to remove unnecessary areas.

In mahotas, we can find the regional maxima within a portion of an image. First, we crop the original image by specifying the required ranges of rows and columns. Then we pass the cropped image to the regmax() function.

For example, let's say we specify [:800, 70:] as the crop. The first index selects rows, so the cropped image keeps rows 0 up to 800; the second index selects columns, keeping column 70 up to the last column.

Example

In this example, we are getting the regional maxima within a specific region of an image.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Using a specific region of the image
image = image[:800, 70:]

# Getting the regional maxima
regional_maxima = mh.regmax(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the regional maxima
axes[1].imshow(regional_maxima, cmap="gray")
axes[1].set_title("Regional Maxima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −
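The slicing convention used in the crop above is worth spelling out: for a NumPy image array, the first index selects rows and the second selects columns. A quick sketch with a hypothetical 1000 x 500 stand-in image:

```python
import numpy as np

# A stand-in grayscale image: 1000 rows by 500 columns
image = np.zeros((1000, 500))

# Keep rows 0-799 and columns 70 to the end
cropped = image[:800, 70:]
print(cropped.shape)  # (800, 430)
```

Because slicing returns a view of the same data, cropping before calling regmax() costs nothing extra and restricts the computation to the region of interest.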


Mahotas – RGB to LAB Conversion

The LAB color space is a color model that approximates human perception of color. It separates color information into three channels −

L (Lightness) − Represents the perceived lightness (brightness) of the color. It ranges from 0 (darkest black) to 100 (brightest white).

A (Green-Red axis) − Represents the color's position on the green-red axis. Negative values indicate green, and positive values indicate red.

B (Blue-Yellow axis) − Represents the color's position on the blue-yellow axis. Negative values indicate blue, and positive values indicate yellow.

In the process of converting from RGB to LAB, each RGB pixel value is normalized to a range of 0 to 1. Then, various mathematical transformations are applied − adjusting the brightness, making the colors more accurate to how we perceive them, and converting them to LAB values. These adjustments represent colors in a way that matches how humans see them.

RGB to LAB Conversion in Mahotas

In Mahotas, we can convert an RGB image to a LAB image using the colors.rgb2lab() function. The RGB to LAB conversion in Mahotas involves the following steps −

Normalize RGB values − The RGB values of each pixel are first adjusted to a standardized range between 0 and 1.

Gamma correction − Gamma correction is applied to the normalized RGB values to adjust the brightness levels of the image.

Linearize RGB values − The gamma-corrected RGB values are transformed into a linear RGB color space, ensuring a linear relationship between the input and output values.

Convert to XYZ color space − Using a transformation matrix, the linear RGB values are converted to the XYZ color space, which represents the image's color information.

Calculate LAB values − From the XYZ values, LAB values are calculated using specific formulas that account for how our eyes perceive colors. The LAB color space separates brightness (L) from the color components (A and B).
Apply reference white values − The LAB values are adjusted based on reference white values to ensure accurate color representation.

LAB representation − The resulting LAB values represent the image's color information. The L channel represents lightness, while the A and B channels carry the color information along the two axes.

Using the mahotas.colors.rgb2lab() Function

The mahotas.colors.rgb2lab() function takes an RGB image as input and returns the LAB color space version of the image. The resulting LAB image retains the structure and content of the original RGB image while providing an enhanced color representation.

Syntax

Following is the basic syntax of the rgb2lab() function in mahotas −

mahotas.colors.rgb2lab(rgb, dtype={float})

Where,

rgb − It is the input image in RGB color space.

dtype (optional) − It is the data type of the returned image (default is float).

Example

In the following example, we are converting an RGB image to a LAB image using the mh.colors.rgb2lab() function −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sea.bmp")

# Converting it to LAB
lab_image = mh.colors.rgb2lab(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original RGB image
axes[0].imshow(image)
axes[0].set_title("RGB Image")
axes[0].set_axis_off()

# Displaying the LAB image
axes[1].imshow(lab_image)
axes[1].set_title("LAB Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

RGB to LAB Conversion of a Random Image

We can convert a randomly generated RGB image to LAB color space as follows −

First, we define the desired size of the image, specifying its width and height. We also determine the color depth, usually 8-bit, which ranges from 0 to 255. Next, we generate random RGB values for each pixel in the image using the numpy.random.randint() function.
Once we have the RGB image, we proceed to convert it to the LAB color space. The resulting image will be in the LAB color space, where the image's lightness and color information are separated into distinct channels.

Example

The following example shows the conversion of a randomly generated RGB image to an image in LAB color space −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Creating a random RGB image
image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)

# Converting it to LAB
lab_image = mh.colors.rgb2lab(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original RGB image
axes[0].imshow(image)
axes[0].set_title("RGB Image")
axes[0].set_axis_off()

# Displaying the LAB image
axes[1].imshow(lab_image)
axes[1].set_title("LAB Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −
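The "normalize and gamma-correct" steps listed earlier can be made concrete. One common formulation is the sRGB one sketched below (Mahotas' exact constants may differ, so treat this as an illustration of the idea rather than its implementation): each 8-bit channel is normalized to [0, 1] and then linearized with a piecewise formula.

```python
import numpy as np

def srgb_to_linear(channel):
    """Normalize an 8-bit channel to [0, 1] and undo the sRGB gamma,
    using the standard sRGB piecewise formula."""
    c = channel / 255.0                     # step 1: normalize to [0, 1]
    return np.where(c <= 0.04045,           # step 2: piecewise linearization
                    c / 12.92,
                    ((c + 0.055) / 1.055) ** 2.4)

values = np.array([0, 128, 255])
linear = srgb_to_linear(values)
print(linear)   # 0 maps to 0.0, 255 maps to 1.0; mid-grey falls well below 0.5
```

The linearized values are what get multiplied by the RGB-to-XYZ matrix in the next step of the pipeline.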


Mahotas – Mean Filter

The mean filter is used to smooth out an image in order to reduce noise. It works by calculating the average value of all the pixels within a specified neighborhood and then replacing the value of the original pixel with that average.

Let's imagine we have a grayscale image with varying intensity values, where some pixels have a higher intensity than others. The mean filter creates a more uniform appearance of the pixels by slightly blurring the image.

Mean Filter in Mahotas

To apply the mean filter in mahotas, we can use the mean_filter() function. The mean filter in Mahotas uses a structuring element to examine pixels in a neighborhood; each pixel value is replaced with the average value of its neighboring pixels.

The size of the structuring element determines the extent of smoothing. A larger neighborhood results in a stronger smoothing effect, while reducing some finer details, whereas a smaller neighborhood results in less smoothing but maintains more details.

The mahotas.mean_filter() function

The mean_filter() function applies the mean filter to the input image using the specified neighborhood. It replaces each pixel value with the average value among its neighbors, and stores the filtered image in the output array.

Syntax

Following is the basic syntax of the mean_filter() function in mahotas −

mahotas.mean_filter(f, Bc, mode='ignore', cval=0.0, out=None)

Where,

f − It is the input image.

Bc − It is the structuring element that defines the neighbourhood.

mode (optional) − It specifies how the function handles the borders of the image. It can take values such as 'reflect', 'constant', 'nearest', 'mirror' or 'wrap'. By default, it is set to 'ignore', which means the filter ignores pixels beyond the image's borders.

cval (optional) − The value to be used when mode='constant'. The default value is 0.0.
out (optional) − It specifies the output array where the filtered image will be stored. It must be of the same shape as the input image.

Example

Following is a basic example of filtering an image using the mean_filter() function −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff", as_grey=True)

# Applying the mean filter with a disk-shaped neighborhood
structuring_element = mh.disk(12)
filtered_image = mh.mean_filter(image, structuring_element)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the mean filtered image
axes[1].imshow(filtered_image, cmap="gray")
axes[1].set_title("Mean Filtered")
axes[1].axis("off")

mtplt.show()

Output

After executing the above code, we get the following output −

Mean Filter with Reflect Mode

When we apply the mean filter to an image, we need to consider the neighboring pixels around each pixel to calculate the average. However, at the edges of the image there are pixels that don't have neighbors on one or more sides. To address this issue, we use the 'reflect' mode.

Reflect mode creates a mirror-like effect along the edges of the image. It allows us to virtually extend the image by duplicating its pixels in a mirrored manner, so that the mean filter has neighboring pixels to work with even at the edges.

By reflecting the image values, the mean filter can treat these mirrored pixels as if they were real neighbors. It calculates the average value using these virtual neighbors, resulting in a more accurate smoothing process at the image edges.
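NumPy's pad function shows what this mirrored extension looks like on a 1-D signal. Mirroring can either repeat the edge pixel or not, and the sketch below shows both variants so the difference is visible (how Mahotas' 'reflect' and 'mirror' names map onto these two variants is best checked against the Mahotas documentation):

```python
import numpy as np

row = np.array([1, 2, 3, 4])

# Mirror without repeating the edge value (numpy calls this 'reflect')
print(np.pad(row, 1, mode="reflect"))    # [2 1 2 3 4 3]

# Mirror including the edge value (numpy calls this 'symmetric')
print(np.pad(row, 1, mode="symmetric"))  # [1 1 2 3 4 4]
```

Either way, the filter now has a full neighborhood available at every border pixel, instead of having to skip the missing neighbors.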
Example

Here, we are trying to calculate the mean filter with the reflect mode −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg", as_grey=True)

# Applying the mean filter with reflect mode
structuring_element = mh.morph.dilate(mh.disk(12), Bc=mh.disk(12))
filtered_image = mh.mean_filter(image, structuring_element, mode="reflect")

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the mean filtered image
axes[1].imshow(filtered_image, cmap="gray")
axes[1].set_title("Mean Filtered")
axes[1].axis("off")

mtplt.show()

Output

Output of the above code is as follows −

By Storing Result in an Output Array

We can also store the result of the mean filter in an output array using Mahotas. To achieve this, we first create an empty array with the NumPy library, initialized with the same shape as the input image, to hold the resultant filtered image. The data type of the array is float (the default). Finally, we store the filtered image in this array by passing it to the out parameter of the mean_filter() function.

Example

Now, we are trying to apply the mean filter to a grayscale image and store the result in a specific output array −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("pic.jpg", as_grey=True)

# Creating an output array for the filtered image
output = np.empty(image.shape)

# Storing the result in the output array
mh.mean_filter(image, Bc=mh.disk(12), out=output)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the mean filtered image
axes[1].imshow(output, cmap="gray")
axes[1].set_title("Mean Filtered")
axes[1].axis("off")

mtplt.show()

Output

Following is the output of the above code −
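To make the averaging step itself concrete, here is a brute-force sketch of a mean filter over a 3x3 cross neighborhood on a tiny array. Out-of-bounds neighbors are simply skipped, mimicking the spirit of the default 'ignore' border mode; this is an illustration, not the Mahotas implementation:

```python
import numpy as np

def mean_filter_cross(img):
    """Mean filter over a 3x3 cross: each pixel is replaced by the
    average of itself and its in-bounds up/down/left/right neighbors."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    for y in range(h):
        for x in range(w):
            vals = [img[y + dy, x + dx]
                    for dy, dx in offsets
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y, x] = sum(vals) / len(vals)
    return out

img = np.array([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 9.]])
out = mean_filter_cross(img)
print(out[1, 1])   # center: (5 + 2 + 8 + 4 + 6) / 5 = 5.0
print(out[0, 0])   # corner: (1 + 2 + 4) / 3, only in-bounds neighbors counted
```

The corner value shows why the border mode matters: with 'ignore', the average is taken over fewer pixels near the edges, whereas a padding mode such as 'reflect' supplies mirrored values instead.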


Mahotas – Local Minima in Image

Local minima in an image refer to regions where the pixel intensity is the lowest within a local neighborhood. A local neighborhood consists of only the immediate neighbors of a pixel; hence only a portion of the image is used when identifying local minima.

An image can contain multiple local minima, each having a different intensity. This happens because a region can have a lower intensity than its neighboring regions while not being the region with the lowest intensity in the entire image.

Local Minima in Image in Mahotas

In Mahotas, we can find the local minima of an image using the mahotas.locmin() function. The local minima regions are the regions having intensity values no higher than any of their neighboring pixels in the image.

The mahotas.locmin() function

The mahotas.locmin() function takes an image as input and finds the local minima regions. It returns a binary image where each local minima region is represented by 1. The function works in the following way to find local minima in an image −

It first applies a morphological erosion to the input image, which replaces each pixel with the minimum value found in its neighborhood.

Next, it compares the eroded image with the original image. Wherever the original pixel value equals the eroded value, no neighbor is lower, so the pixel is a local minimum.

Finally, a binary array is generated where 1 corresponds to the presence of a local minimum and 0 elsewhere.

Syntax

Following is the basic syntax of the locmin() function in mahotas −

mahotas.locmin(f, Bc={3x3 cross}, out={np.empty(f.shape, bool)})

Where,

f − It is the input grayscale image.

Bc (optional) − It is the structuring element used for connectivity.

out (optional) − It is the output array of Boolean data type (defaults to a new array of the same size as f).

Example

In the following example, we are getting the local minima of an image using the mh.locmin() function.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sun.png")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Getting the local minima
local_minima = mh.locmin(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the local minima
axes[1].imshow(local_minima, cmap="gray")
axes[1].set_title("Local Minima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Using Custom Structuring Element

We can use a custom structuring element to get the local minima of an image. A structuring element is a binary array of odd dimensions, consisting of ones and zeroes, that defines the connectivity pattern of the neighborhood pixels. The ones indicate the neighboring pixels that are included in the connectivity analysis, while the zeros represent the neighbors that are excluded or ignored.

In mahotas, we can use a custom structuring element to define the neighboring pixels of an image during local minima extraction. This allows us to find local minima regions as per our requirements. To use a structuring element, we pass it to the Bc parameter of the locmin() function.

For example, let's consider the custom structuring element: [[1, 0, 0, 0, 1], [0, 1, 0, 1, 0], [0, 0, 1, 0, 0], [0, 1, 0, 1, 1], [1, 0, 0, 1, 0]]. This structuring element mixes horizontal, vertical, and diagonal positions: for each pixel in the image, only the positions marked with a 1 are treated as its neighbors during local minima extraction.

Example

In the example below, we are defining a custom structuring element to get the local minima of an image.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Setting custom structuring element
struct_element = np.array([[1, 0, 0, 0, 1],
                           [0, 1, 0, 1, 0],
                           [0, 0, 1, 0, 0],
                           [0, 1, 0, 1, 1],
                           [1, 0, 0, 1, 0]])

# Getting the local minima
local_minima = mh.locmin(image, Bc=struct_element)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the local minima
axes[1].imshow(local_minima, cmap="gray")
axes[1].set_title("Local Minima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Using a Binary Image

We can also find the local minima in a binary image. A binary image is an image where each pixel is represented by a 1 or a 0, indicating foreground or background respectively. Binary images can be created using the array() function of the numpy library.

In mahotas, we can use the locmin() function to find the local minima regions of a binary image. Since binary images consist of only 1s and 0s, the pixels with value 0 form the local minima, as they have the lowest intensity present in the image. For example, let's assume a binary image is created from the following array:

[[0, 1, 0, 0],
 [1, 0, 1, 0],
 [0, 1, 0, 0],
 [1, 0, 1, 0]]

The arrangement of the 0s then determines the local minima regions − each connected group of 0-valued pixels forms one regional minimum.

Example

Here, we are getting the local minima in a binary image.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Creating a binary image
binary_image = np.array([[0, 1, 0, 0],
                         [1, 0, 1, 0],
                         [0, 1, 0, 0],
                         [1, 0, 1, 0]], dtype=np.uint8)

# Getting the local minima
local_minima = mh.locmin(binary_image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the binary image
axes[0].imshow(binary_image, cmap="gray")
axes[0].set_title("Binary Image")
axes[0].set_axis_off()

# Displaying the local minima
axes[1].imshow(local_minima, cmap="gray")
axes[1].set_title("Local Minima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the output as shown below −
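As a recap of how locmin() works internally, the erode-and-compare procedure described earlier can be sketched in pure NumPy. This is a simplified toy that flags strict minima over a 3×3 cross (centre excluded); Mahotas' actual implementation also handles plateaus and arbitrary structuring elements.

```python
import numpy as np

def local_minima_sketch(image):
    # Pad with +inf so border pixels are compared only against real neighbors
    padded = np.pad(image.astype(float), 1, mode="constant", constant_values=np.inf)
    # Erosion with a 3x3 cross (centre excluded): minimum over the 4-neighborhood
    neighbor_min = np.minimum.reduce([
        padded[:-2, 1:-1],   # pixel above
        padded[2:, 1:-1],    # pixel below
        padded[1:-1, :-2],   # pixel to the left
        padded[1:-1, 2:],    # pixel to the right
    ])
    # A pixel is a strict local minimum if it is lower than all its neighbors
    return image < neighbor_min

img = np.array([[9, 9, 9, 9],
                [9, 1, 9, 9],
                [9, 9, 9, 2],
                [9, 9, 9, 9]])
print(local_minima_sketch(img).astype(int))
```

Here the two low-intensity pixels (values 1 and 2) are the only positions marked as minima.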


Mahotas – Wavelet Transforms

Wavelet transforms are mathematical techniques used to break images into different frequency components. Wavelet transforms capture both the local and the global details of an image. They use small wave-shaped functions, known as wavelets, to analyze signals. These wavelets are scaled and translated to match different patterns present in an image.

Wavelet transformation involves modifying the high- and low-frequency coefficients of the frequency components to identify patterns and enhance an image. The original image can be recovered by inverting the wavelet transform. Let us discuss the wavelet transformation techniques along with their inverse variations.

Daubechies Transformation

The Daubechies transformation is a wavelet transformation technique used to break a signal into different frequency components. It allows us to analyze the signals in both the time and the frequency domains. Let's see the Daubechies transformed image below −

Inverse Daubechies Transformation

The inverse Daubechies transformation is the reverse process of the Daubechies transformation. It reconstructs the original image from the individual frequency components obtained through the Daubechies transformation. By applying the inverse transform, we can recover the signal while preserving important details. Here, we look at the inverse of the Daubechies transformation −

Haar Transformation

The Haar transformation breaks down an image into different frequency components by dividing it into sub-regions. It then calculates the averages of, and differences between, neighboring values to apply the wavelet transformation to an image. In the image below, we see the Haar transformed image −

Inverse Haar Transformation

The inverse Haar transformation reconstructs the original image from the frequency components obtained through the Haar transformation. It is the reverse operation of the Haar transformation.
Let's look at the inverse of the Haar transformation −

Example

In the following example, we are trying to perform all the above explained wavelet transformations −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

image = mh.imread("sun.png", as_grey=True)

# Daubechies transformation
daubechies = mh.daubechies(image, "D6")
mtplt.imshow(daubechies)
mtplt.title("Daubechies Transformation")
mtplt.axis("off")
mtplt.show()

# Inverse Daubechies transformation
daubechies = mh.daubechies(image, "D6")
inverse_daubechies = mh.idaubechies(daubechies, "D6")
mtplt.imshow(inverse_daubechies)
mtplt.title("Inverse Daubechies Transformation")
mtplt.axis("off")
mtplt.show()

# Haar transformation
haar = mh.haar(image)
mtplt.imshow(haar)
mtplt.title("Haar Transformation")
mtplt.axis("off")
mtplt.show()

# Inverse Haar transformation
haar = mh.haar(image)
inverse_haar = mh.ihaar(haar)
mtplt.imshow(inverse_haar)
mtplt.title("Inverse Haar Transformation")
mtplt.axis("off")
mtplt.show()

Output

The output obtained is as shown below −

Daubechies Transformation:
Inverse Daubechies Transformation:
Haar Transformation:
Inverse Haar Transformation:

We will discuss all the wavelet transformations in detail in the remaining chapters.
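The average-and-difference idea behind the Haar transformation and its inverse can be sketched in one dimension. This is an unnormalised toy for illustration; mahotas' haar() and ihaar() work on 2-D images and use a scaled variant of these coefficients.

```python
import numpy as np

def haar_1d(signal):
    # One level of the (unnormalised) Haar transform:
    # pairwise averages capture the low-frequency content,
    # pairwise differences capture the high-frequency detail
    s = np.asarray(signal, dtype=float)
    averages = (s[0::2] + s[1::2]) / 2
    details = (s[0::2] - s[1::2]) / 2
    return averages, details

def ihaar_1d(averages, details):
    # Inverse: each pair is recovered exactly from its average and difference
    out = np.empty(2 * len(averages))
    out[0::2] = averages + details
    out[1::2] = averages - details
    return out

signal = [4, 2, 6, 8]
avg, det = haar_1d(signal)
print(avg, det)             # [3. 7.] [ 1. -1.]
print(ihaar_1d(avg, det))   # [4. 2. 6. 8.]
```

Because the inverse rebuilds every pair exactly from its average and difference, the round trip is lossless, mirroring how the inverse wavelet transform recovers the original image.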


Mahotas – Conditional Watershed of Image

The term "watershed" is derived from the concept of a physical watershed, which is the boundary line separating different drainage basins. Similarly, the watershed algorithm aims to find boundaries or regions of separation in an image. The watershed algorithm is a popular method for image segmentation, which is the process of dividing an image into different regions. Therefore, in image processing, a watershed image refers to an image that has undergone watershed segmentation.

The watershed segmentation technique treats the pixel intensities in the image as a topographic surface, where the bright areas represent high elevations and the dark areas represent low elevations.

Watershed in Mahotas

Mahotas provides the conditional watershed function instead of the traditional watershed algorithm. The conditional watershed in Mahotas is an enhanced version of the watershed algorithm that allows us to guide the segmentation process by providing specific markers. Let us see the step-by-step procedure of how the conditional watershed algorithm works in Mahotas −

Step 1 − Imagine we have an image and we want to divide it into different regions. With conditional watershed, we can mark certain areas in the image as markers that represent the regions we are interested in.

Step 2 − The algorithm then starts by filling these marked areas with water. The water will only flow within each marked region and won't cross the boundaries of other markers.

Step 3 − The result is a segmented image where each region is delineated by the boundaries defined by the markers we provided.

The mahotas.cwatershed() function

The cwatershed() function in Mahotas takes two inputs − the input image and a marker image − and returns an output image segmented into distinct regions. The marker image is a binary image where the foreground pixels (Boolean value 1) represent the seeds for the different regions.
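The three flooding steps above can be sketched as a toy one-dimensional marker-based flooding in plain Python. This is an illustration of the idea only, not Mahotas' implementation: labelled seeds grow outwards, always flooding the lowest unlabelled neighbour first, and never overwriting another marker's label.

```python
import heapq

def cwatershed_1d(surface, markers):
    # Marker-based flooding on a 1-D "elevation profile".
    # markers: 0 = unlabelled, any other value = a seed label.
    labels = list(markers)
    heap = [(surface[i], i) for i, m in enumerate(markers) if m != 0]
    heapq.heapify(heap)
    while heap:
        _, i = heapq.heappop(heap)
        for j in (i - 1, i + 1):
            if 0 <= j < len(surface) and labels[j] == 0:
                labels[j] = labels[i]
                heapq.heappush(heap, (surface[j], j))
    return labels

surface = [1, 2, 5, 2, 1]   # two basins separated by a ridge at index 2
markers = [1, 0, 0, 0, 2]   # one seed in each basin
print(cwatershed_1d(surface, markers))  # [1, 1, 1, 2, 2]
```

Each basin is flooded from its own seed, and the boundary between the two labels falls at the ridge, which is exactly the behavior the steps above describe.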
Syntax

Following is the basic syntax of the cwatershed() function in mahotas −

W = mahotas.cwatershed(surface, markers, Bc=None, return_lines=False)
W, WL = mahotas.cwatershed(surface, markers, Bc=None, return_lines=True)

Parameters

The parameters accepted by the cwatershed() function are as follows −

surface − It represents the input image on which watershed segmentation will be performed. It is usually a grayscale image.

markers − It represents the markers (seeds) for the watershed segmentation. The markers indicate regions of interest in an image.

Bc (optional) − It represents the structuring element used for neighborhood operations. If set to None, a default connectivity is used.

return_lines (optional) − It is a Boolean flag that specifies whether to return the watershed lines in addition to the labeled image. If True, the function returns both the labeled image and the watershed lines. If False, only the labeled image is returned. By default, it is set to False.

Return Values

W − It represents the labeled image obtained from the watershed segmentation, where each region is assigned a unique label. The shape of the labeled image is the same as the input image.

WL (optional) − It is only returned when the return_lines parameter is set to True. It represents the watershed lines, which are the boundaries between the segmented regions in the image.
Example

In the following example, we are trying to display the basic conditional watershed segmentation of an image −

import mahotas as mh
import matplotlib.pyplot as plt

# Loading the input image
image = mh.imread("sea.bmp")

# Creating markers or seeds
markers = mh.imread("tree.tiff")

# Perform conditional watershed segmentation
segmented_image = mh.cwatershed(image, markers)

# Display all three images in one plot
plt.figure(figsize=(10, 5))

# Display image1
plt.subplot(1, 3, 1)
plt.imshow(image)
plt.title("Sea")
plt.axis("off")

# Display image2
plt.subplot(1, 3, 2)
plt.imshow(markers)
plt.title("Tree")
plt.axis("off")

# Display the segmented image
plt.subplot(1, 3, 3)
plt.imshow(segmented_image, cmap="gray")
plt.title("Segmented Image")
plt.axis("off")

plt.tight_layout()
plt.show()

Output

The output produced is as follows −

Conditional Watershed with Custom Structuring Element

A structuring element is a small binary image commonly represented as a matrix. It is used to analyze the local neighborhood of a reference pixel. In the context of conditional watershed, a custom structuring element allows us to define the connectivity between pixels during the watershed process. By customizing the structuring element, we can control how the neighborhood of each pixel influences the segmentation of an image.

Example

import mahotas as mh
import numpy as np
from pylab import imshow, show

# Load the image
image = mh.imread("nature.jpeg")

# Convert the image to grayscale
image_gray = mh.colors.rgb2grey(image).astype(np.uint8)

# Threshold the image
threshold = mh.thresholding.otsu(image_gray)
image_thresholded = image_gray > threshold

# Perform conditional watershed with custom structuring element
struct_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
labels, _ = mh.label(image_thresholded, struct_element)
watershed = mh.cwatershed(image_gray.max() - image_gray, labels)

# Show the result
imshow(watershed)
show()

Output

Output of the above code is as follows −


Mahotas – Bernsen Local Thresholding

Bernsen local thresholding is a technique used for segmenting images into foreground and background regions. It uses intensity variations in a local neighborhood to assign a threshold value to each pixel in an image. The size of the local neighborhood is determined by a window. A large window size considers more neighboring pixels in the thresholding process, creating smoother transitions between regions but removing finer details. On the other hand, a small window size captures more details but may be susceptible to noise.

The difference between Bernsen local thresholding and other thresholding techniques is that Bernsen local thresholding uses a dynamic, per-pixel threshold value, while global thresholding techniques use a single threshold value to separate foreground and background regions.

Bernsen Local Thresholding in Mahotas

In Mahotas, we can use the thresholding.bernsen() and thresholding.gbernsen() functions to apply Bernsen local thresholding on an image. These functions create a window of a fixed size and calculate the local contrast range of each pixel within the window to segment the image. The local contrast range spans the minimum and maximum grayscale values within the window. The threshold value is then calculated as the average of the minimum and maximum grayscale values. If the pixel intensity is higher than the threshold, it is assigned to the foreground (white); otherwise it is assigned to the background (black).

The window is then moved across the image to cover all the pixels, creating a binary image where the foreground and background regions are separated based on local intensity variations.

The mahotas.thresholding.bernsen() function

The mahotas.thresholding.bernsen() function takes a grayscale image as input and applies Bernsen local thresholding on it. It outputs an image where each pixel is assigned a value of 0 (black) or 255 (white).
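The per-pixel midrange rule described above can be sketched in pure NumPy. This is a toy, not Mahotas' implementation; in particular, the fallback to the global gthresh for low-contrast windows is an assumption about how the contrast_threshold and gthresh parameters interact, included here for illustration.

```python
import numpy as np

def bernsen_sketch(image, radius, contrast_threshold, gthresh=128):
    # For each pixel, the threshold is the midrange (min + max) / 2 of its window.
    # If the local contrast (max - min) is below contrast_threshold, the window
    # is treated as homogeneous and the global gthresh is used instead.
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            window = image[max(0, i - radius):i + radius + 1,
                           max(0, j - radius):j + radius + 1]
            lo, hi = int(window.min()), int(window.max())
            t = gthresh if hi - lo < contrast_threshold else (lo + hi) / 2
            out[i, j] = 255 if image[i, j] > t else 0
    return out

img = np.array([[ 10,  10, 200],
                [ 10, 100, 200],
                [200, 200, 200]], dtype=np.uint8)
print(bernsen_sketch(img, radius=1, contrast_threshold=50))
```

Note how the centre pixel (100) falls below its window's midrange of 105 and is sent to the background, even though it is well above the darkest pixels; that is the "local" part of the method.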
The foreground pixels correspond to the regions of the image that have a relatively high intensity, whereas the background pixels correspond to the regions of the image that have a relatively low intensity.

Syntax

Following is the basic syntax of the bernsen() function in mahotas −

mahotas.thresholding.bernsen(f, radius, contrast_threshold, gthresh={128})

Where,

f − It is the input grayscale image.
radius − It is the size of the window around each pixel.
contrast_threshold − It is the local contrast threshold value.
gthresh (optional) − It is the global threshold value (default is 128).

Example

The following example shows the usage of the mh.thresholding.bernsen() function to apply Bernsen local thresholding on an image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Creating Bernsen threshold image
threshold_image = mh.thresholding.bernsen(image, 5, 200)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap="gray")
axes[1].set_title("Bernsen Threshold Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

The mahotas.thresholding.gbernsen() function

The mahotas.thresholding.gbernsen() function also applies Bernsen local thresholding on an input grayscale image. It is the generalized version of the Bernsen local thresholding algorithm. It outputs a segmented image where each pixel is assigned a value of 0 or 255 depending on whether it is background or foreground.
The difference between gbernsen() and bernsen() is that the gbernsen() function uses a structuring element to define the local neighborhood, while the bernsen() function uses a window of fixed size to define the local neighborhood around a pixel. Also, gbernsen() calculates the threshold value based on both the contrast threshold and the global threshold, while bernsen() only uses the contrast threshold to calculate the threshold value of each pixel.

Syntax

Following is the basic syntax of the gbernsen() function in mahotas −

mahotas.thresholding.gbernsen(f, se, contrast_threshold, gthresh)

Where,

f − It is the input grayscale image.
se − It is the structuring element.
contrast_threshold − It is the local contrast threshold value.
gthresh − It is the global threshold value.

Example

In this example, we are using the mh.thresholding.gbernsen() function to apply generalized Bernsen local thresholding on an image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Creating a structuring element
structuring_element = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1]])

# Creating generalized Bernsen threshold image
threshold_image = mh.thresholding.gbernsen(image, structuring_element, 200, 128)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap="gray")
axes[1].set_title("Generalized Bernsen Threshold Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Bernsen Local Thresholding using Mean

We can apply Bernsen local thresholding using the mean value of the pixel intensities as the threshold value.
The mean refers to the average intensity of an image and is calculated by summing the intensity values of all pixels and then dividing the sum by the total number of pixels.

In mahotas, we can do this by first finding the mean intensity of all the pixels using the numpy.mean() function. Then, we define a window size to get the local neighborhood of a pixel. Finally, we set the mean value as the threshold value by passing it to the contrast_threshold parameter of the bernsen() or gbernsen() function.

Example

Here, we are applying Bernsen local thresholding on an image where the threshold value is the mean of all pixel intensities.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Calculating mean pixel value
mean = np.mean(image)

# Creating Bernsen threshold image
threshold_image = mh.thresholding.bernsen(image, 15, mean)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap="gray")
axes[1].set_title("Bernsen Threshold Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()


Mahotas – Haralick Features

Haralick features describe the texture of an image. Texture refers to the patterns in an image that give it a specific look, such as the smoothness of a surface or the arrangement of objects.

To work with Haralick features, we use a special matrix known as the gray-level co-occurrence matrix (GLCM). It is a matrix that represents the relationship between pairs of pixel intensities in an image. It provides information about how frequently different combinations of pixel intensity values occur at specific distances and directions within the image.

Haralick Features in Mahotas

To calculate Haralick features using Mahotas, create the GLCM by specifying the distance and direction of pixel pairs. Next, use the GLCM to calculate various statistical measures that form the Haralick features. These measures include contrast, correlation, energy, entropy, homogeneity, and more. Finally, retrieve the computed Haralick features.

For example, the contrast feature in Haralick's texture analysis tells us how much the brightness or darkness of neighboring pixels varies in an image. To calculate this feature, we analyze the GLCM, which records how often pairs of pixels with different brightness levels appear together in the image.

We can use the mahotas.features.haralick() function to calculate the Haralick features in mahotas.

The mahotas.features.haralick() function

The haralick() function takes a grayscale image as input and returns the calculated Haralick features. Haralick features are calculated based on grayscale images. Mahotas allows us to calculate Haralick features by analyzing the GLCM of an image. In this way, we can extract information about the texture patterns present in an image.
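To make the GLCM concrete, here is a minimal NumPy sketch that builds the co-occurrence matrix for horizontally adjacent pixels and derives the contrast measure from it. This is an illustration only; haralick() itself computes thirteen features over four directions.

```python
import numpy as np

def glcm_sketch(image, levels):
    # Count how often intensity a appears immediately to the left of b
    # (distance 1, direction 0 degrees)
    glcm = np.zeros((levels, levels), dtype=int)
    for row in image:
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1
    return glcm

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 2, 2]])

glcm = glcm_sketch(img, levels=3)
print(glcm)

# One Haralick-style measure: contrast = sum of P(a, b) * (a - b)^2
# over the normalised co-occurrence matrix P
p = glcm / glcm.sum()
i, j = np.indices(glcm.shape)
print(float((p * (i - j) ** 2).sum()))
```

In this tiny image, three of the six horizontal pairs have identical intensities, so the contrast stays low; an image with many large intensity jumps between neighbors would score higher.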
Syntax

Following is the basic syntax of the haralick() function in mahotas −

mahotas.features.haralick(f, ignore_zeros=False, preserve_haralick_bug=False,
compute_14th_feature=False, return_mean=False, return_mean_ptp=False,
use_x_minus_y_variance=False, distance=1)

Parameters

Following are the parameters accepted by the haralick() function in mahotas −

f − It is the input image.

ignore_zeros (optional) − It specifies whether zero values in the input matrix should be ignored (True) or considered (False) when computing the Haralick features.

preserve_haralick_bug (optional) − It determines whether to replicate Haralick's typo in the equations.

compute_14th_feature (optional) − It indicates whether to compute the 14th Haralick feature (the maximal correlation coefficient) or not. By default, it is set to False.

use_x_minus_y_variance (optional) − By default, mahotas uses VAR[P(|x−y|)], but if this parameter is True, it uses VAR[|x−y|].

distance (optional) − It represents the pixel distance used when computing the GLCM. It determines the neighborhood size considered when analyzing the spatial relationship between pixels. By default, it is set to 1.

return_mean (optional) − When set to True, the function returns the mean value across all the directions.

return_mean_ptp (optional) − When set to True, the function returns the mean value and the point-to-point (ptp) value (the difference between max() and min()) across all the directions.
Example

Following is a basic example of calculating the Haralick features in mahotas −

import mahotas
import numpy as np
import matplotlib.pyplot as mtplt

# Load a grayscale image
image = mahotas.imread("nature.jpeg", as_grey=True).astype(np.uint8)

# Compute Haralick texture features
features = mahotas.features.haralick(image)
print(features)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the Haralick feature matrix
axes[1].imshow(features, cmap="gray")
axes[1].set_title("Haralick Feature")
axes[1].axis("off")

mtplt.show()

Output

After executing the above code, we get the output as shown below −

[[ 2.77611344e-03 2.12394600e+02 9.75234595e-01 4.28813094e+03 4.35886838e-01 2.69140151e+02 1.69401291e+04 8.31764345e+00 1.14305862e+01 6.40277627e-04 4.00793348e+00 -4.61407168e-01 9.99473205e-01]
 [ 1.61617121e-03 3.54272691e+02 9.58677001e-01 4.28662846e+03 3.50998369e-01 2.69132899e+02 1.67922411e+04 8.38274113e+00 1.20062562e+01 4.34549344e-04 4.47398649e+00 -3.83903098e-01 9.98332575e-01]
 [ 1.92630414e-03 2.30755916e+02 9.73079650e-01 4.28590105e+03 3.83777866e-01 2.69170823e+02 1.69128483e+04 8.37735303e+00 1.17467122e+01 5.06580792e-04 4.20197981e+00 -4.18866103e-01 9.99008620e-01]
 [ 1.61214638e-03 3.78211585e+02 9.55884630e-01 4.28661922e+03 3.49497239e-01 2.69133049e+02 1.67682653e+04 8.38060403e+00 1.20309899e+01 4.30756183e-04 4.49912123e+00 -3.80573424e-01 9.98247930e-01]]

The image displayed is as shown below −

Haralick Features with Ignore Zeros

In certain image analysis scenarios, it is required to ignore specific pixel values during the computation of Haralick texture features. One common case is when zero values represent a specific background or noise that should be excluded from the analysis. In Mahotas, we can ignore zero values by setting the ignore_zeros parameter to True. This will disregard zero values.
Example

Here, we are trying to calculate the Haralick features of an image while ignoring its zero values −

import mahotas
import numpy as np
import matplotlib.pyplot as mtplt

# Load a grayscale image
image = mahotas.imread("sun.png", as_grey=True).astype(np.uint8)

# Compute Haralick texture features while ignoring zero pixels
features = mahotas.features.haralick(image, ignore_zeros=True)
print(features)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the Haralick feature matrix
axes[1].imshow(features, cmap="gray")
axes[1].set_title("Haralick Feature")
axes[1].axis("off")

mtplt.show()

Output

Following is the output of the above code −

[[ 2.67939014e-03 5.27444410e+01 9.94759846e-01 5.03271870e+03 5.82786178e-01 2.18400839e+02 2.00781303e+04 8.26680366e+00 1.06263358e+01 1.01107651e-03 2.91875064e+00 -5.66759616e-01 9.99888025e-01]
 [ 2.00109668e-03 1.00750583e+02 9.89991374e-01 5.03318740e+03 4.90503673e-01 2.18387049e+02 2.00319990e+04 8.32862989e+00 1.12183182e+01 7.15118996e-04 3.43564495e+00 -4.86983515e-01 9.99634586e-01]
 [ 2.29690324e-03 6.34944689e+01 9.93691944e-01 5.03280779e+03 5.33850851e-01 2.18354256e+02 2.00677367e+04 8.30278737e+00 1.09228656e+01 8.42614942e-04 3.16166477e+00 -5.26842246e-01 9.99797686e-01]
 [ 2.00666032e-03 1.07074413e+02 9.89363195e-01 5.03320370e+03 4.91882840e-01 2.18386605e+02 2.00257404e+04 8.32829316e+00 1.12259184e+01 7.18459598e-04 3.44609033e+00 -4.85960134e-01 9.99629000e-01]]

The image obtained is as follows −

Computing Haralick Features with the 14th Feature

The 14th feature, the maximal correlation coefficient, measures the complexity of the texture − roughly, how strongly a pixel's intensity depends on that of its neighbors. It is often omitted in practice because it is computationally expensive and can be numerically unstable.
A high value indicates a more complex texture with strong dependencies between pixel intensities, whereas a low value indicates a simpler, more uniform texture. In Mahotas, we can compute Haralick's 14th feature by setting the compute_14th_feature parameter to True.

Example

Now, we are computing the 14th Haralick feature of an image −

import mahotas
import numpy as np
import matplotlib.pyplot as mtplt

# Load a grayscale image
image = mahotas.imread("tree.tiff", as_grey=True).astype(np.uint8)

# Compute Haralick texture features and include the 14th feature
features = mahotas.features.haralick(image, compute_14th_feature=True)
print(features)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the Haralick feature matrix
axes[1].imshow(features, cmap="gray")
axes[1].set_title("Haralick Feature")
axes[1].axis("off")

mtplt.show()

Output

The output produced is as shown below −

[[