Learn Mahotas – Wavelet Transforms

Mahotas – Wavelet Transforms

Wavelet transforms are mathematical techniques used to break images into different frequency components, capturing both the local and the global details of an image. Wavelet transforms use small wave−shaped functions, known as wavelets, to analyze signals. These wavelets are scaled and translated to match different patterns present in an image.

Wavelet transformation involves modifying the high and low frequency coefficients of the frequency components to identify patterns and enhance an image. The original image can be recovered by applying the inverse wavelet transform.

Let us discuss the wavelet transformation techniques along with their inverse variations.

Daubechies Transformation

The Daubechies transformation is a wavelet transformation technique used to break a signal into different frequency components. It allows us to analyze the signals in both the time and the frequency domains. Let's see the Daubechies transformed image below −

Inverse Daubechies Transformation

The inverse Daubechies transformation is the reverse process of the Daubechies transformation. It reconstructs the original image from the individual frequency components obtained through the Daubechies transformation. By applying the inverse transform, we can recover the signal while preserving important details. Here, we look at the inverse of the Daubechies transformation −

Haar Transformation

The Haar transformation breaks down an image into different frequency components by dividing it into sub−regions. It then calculates the difference between the average values to apply the wavelet transformation on an image. In the image below, we see the Haar transformed image −

Inverse Haar Transformation

The inverse Haar transformation reconstructs the original image from the frequency components obtained through the Haar transformation. It is the reverse operation of the Haar transformation.
Let's look at the inverse of the Haar transformation −

Example

In the following example, we perform all the wavelet transformations explained above −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

image = mh.imread('sun.png', as_grey=True)

# Daubechies transformation
daubechies = mh.daubechies(image, 'D6')
mtplt.imshow(daubechies)
mtplt.title('Daubechies Transformation')
mtplt.axis('off')
mtplt.show()

# Inverse Daubechies transformation
daubechies = mh.daubechies(image, 'D6')
inverse_daubechies = mh.idaubechies(daubechies, 'D6')
mtplt.imshow(inverse_daubechies)
mtplt.title('Inverse Daubechies Transformation')
mtplt.axis('off')
mtplt.show()

# Haar transformation
haar = mh.haar(image)
mtplt.imshow(haar)
mtplt.title('Haar Transformation')
mtplt.axis('off')
mtplt.show()

# Inverse Haar transformation
haar = mh.haar(image)
inverse_haar = mh.ihaar(haar)
mtplt.imshow(inverse_haar)
mtplt.title('Inverse Haar Transformation')
mtplt.axis('off')
mtplt.show()

Output

The output obtained is as shown below −

Daubechies Transformation:
Inverse Daubechies Transformation:
Haar Transformation:
Inverse Haar Transformation:

We will discuss all the wavelet transformations in detail in the remaining chapters.
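Before moving on, the averaging−and−differencing idea behind the Haar transform can be sketched in plain NumPy. This is an illustrative one−level 1D version; the function names haar_1d and ihaar_1d are our own and are not part of the Mahotas API −

```python
import numpy as np

def haar_1d(signal):
    # One level of the Haar transform: pairwise averages (low
    # frequencies) followed by pairwise differences (high frequencies).
    s = np.asarray(signal, dtype=float)
    avg = (s[0::2] + s[1::2]) / 2.0
    diff = (s[0::2] - s[1::2]) / 2.0
    return np.concatenate([avg, diff])

def ihaar_1d(coeffs):
    # Inverse: recover each pair from its average and difference,
    # which reconstructs the original signal exactly.
    c = np.asarray(coeffs, dtype=float)
    half = len(c) // 2
    avg, diff = c[:half], c[half:]
    out = np.empty(len(c))
    out[0::2] = avg + diff
    out[1::2] = avg - diff
    return out
```

For example, haar_1d([4, 2, 6, 8]) gives the averages [3, 7] followed by the differences [1, -1], and ihaar_1d recovers the original signal, mirroring how mh.ihaar() undoes mh.haar().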

Learn Mahotas – Conditional Watershed of Image

Mahotas – Conditional Watershed of Image

The term "watershed" is derived from the concept of a physical watershed, which is the boundary line separating different drainage basins. Similarly, the watershed algorithm aims to find boundaries or regions of separation in an image.

The watershed algorithm is a popular method for image segmentation, which is the process of dividing an image into different regions. Therefore, in image processing, a watershed image refers to an image that has undergone watershed segmentation. The watershed segmentation technique treats the pixel intensities in the image as a topographic surface, where the bright areas represent high elevations and the dark areas represent low elevations.

Watershed in Mahotas

Mahotas provides the conditional watershed function instead of the traditional watershed algorithm. The conditional watershed in Mahotas is an enhanced version of the watershed algorithm that allows us to guide the segmentation process by providing specific markers.

Let us see the step−by−step procedure of how the conditional watershed algorithm works in Mahotas −

Step 1 − Imagine we have an image and we want to divide it into different regions. With conditional watershed, we can mark certain areas in the image as markers that represent the regions we are interested in.

Step 2 − The algorithm then starts by filling these marked areas with water. The water will only flow within each marked region and won't cross the boundaries of other markers.

Step 3 − The result is a segmented image where each region is delineated by the boundaries defined by the markers you provided.

The mahotas.cwatershed() function

The cwatershed() function in Mahotas takes two inputs − the input image and a marker image − and returns an output image which is segmented into distinct regions. The marker image is a binary image where the foreground pixels (Boolean value 1) represent the seeds for the different regions.
Syntax

Following is the basic syntax of the cwatershed() function in mahotas −

mahotas.cwatershed(surface, markers, Bc=None, return_lines=False)
W, WL = cwatershed(surface, markers, Bc=None, return_lines=True)

Parameters

The parameters accepted by the cwatershed() function are as follows −

surface − It represents the input image on which watershed segmentation will be performed. It is usually a grayscale image.

markers − It represents the markers for the watershed segmentation. The markers indicate regions of interest in an image.

Bc (optional) − It represents the structuring element used for neighborhood operations. If set to None, a default connectivity is used.

return_lines (optional) − It is a boolean flag that specifies whether to return the watershed lines in addition to the labeled image. If True, the function returns both the labeled image and the watershed lines. If False, only the labeled image is returned. By default, it is set to False.

Return Values

W − It represents the labeled image obtained from the watershed segmentation, where each region is assigned a unique label. The shape of the labeled image is the same as that of the input image.

WL (optional) − It is only returned when the return_lines parameter is set to True. It represents the watershed lines, which are the boundaries between the segmented regions in the image.
Example

In the following example, we are trying to display the basic conditional watershed segmentation of an image −

import mahotas as mh
import matplotlib.pyplot as plt

# Loading the input image
image = mh.imread('sea.bmp')

# Creating markers or seeds
markers = mh.imread('tree.tiff')

# Perform conditional watershed segmentation
segmented_image = mh.cwatershed(image, markers)

# Display all three images in one plot
plt.figure(figsize=(10, 5))

# Display image1
plt.subplot(1, 3, 1)
plt.imshow(image)
plt.title('Sea')
plt.axis('off')

# Display image2
plt.subplot(1, 3, 2)
plt.imshow(markers)
plt.title('Tree')
plt.axis('off')

# Display the segmented image
plt.subplot(1, 3, 3)
plt.imshow(segmented_image, cmap='gray')
plt.title('Segmented Image')
plt.axis('off')

plt.tight_layout()
plt.show()

Output

The output produced is as follows −

Conditional Watershed with Custom Structuring Element

A structuring element is a small binary image, commonly represented as a matrix. It is used to analyze the local neighborhood of a reference pixel. In the context of conditional watershed, a custom structuring element allows us to define the connectivity between pixels during the watershed process. By customizing the structuring element, we can control how the neighborhood of each pixel influences the segmentation of an image.

Example

import mahotas as mh
import numpy as np
from pylab import imshow, show

# Load the image
image = mh.imread('nature.jpeg')

# Convert the image to grayscale
image_gray = mh.colors.rgb2grey(image).astype(np.uint8)

# Threshold the image
threshold = mh.thresholding.otsu(image_gray)
image_thresholded = image_gray > threshold

# Perform conditional watershed with custom structuring element
struct_element = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
labels, _ = mh.label(image_thresholded, struct_element)
watershed = mh.cwatershed(image_gray.max() - image_gray, labels)

# Show the result
imshow(watershed)
show()

Output

Output of the above code is as follows −
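The marker−driven flooding described in steps 1−3 of this chapter can be sketched as a small priority−flood in plain Python. This toy marker_watershed function is our own illustrative version, not the Mahotas implementation: it floods outwards from the labelled seeds in order of increasing surface height, and water from one seed never overwrites another label −

```python
import heapq
import numpy as np

def marker_watershed(surface, markers):
    # Toy marker-based (conditional) watershed: labels spread from
    # the non-zero marker pixels, lowest surface values first.
    labels = markers.copy()
    rows, cols = surface.shape
    heap = []
    for r in range(rows):
        for c in range(cols):
            if markers[r, c] != 0:
                heapq.heappush(heap, (surface[r, c], r, c))
    while heap:
        h, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] == 0:
                # Unlabelled neighbour: the water keeps its marker's label
                labels[nr, nc] = labels[r, c]
                heapq.heappush(heap, (surface[nr, nc], nr, nc))
    return labels
```

On a tiny 1×5 "valley−ridge−valley" surface with seeds labelled 1 and 2 at the two ends, the ridge pixel is claimed by whichever flood reaches it first, giving two contiguous regions, which is the behaviour mh.cwatershed() generalizes.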

Learn Mahotas – Bernsen Local Thresholding

Mahotas – Bernsen Local Thresholding

Bernsen local thresholding is a technique used for segmenting images into foreground and background regions. It uses intensity variations in a local neighborhood to assign a threshold value to each pixel in an image. The size of the local neighborhood is determined using a window.

A large window size considers more neighboring pixels in the thresholding process, creating smoother transitions between regions but removing finer details. On the other hand, a small window size captures more details but may be susceptible to noise.

The difference between Bernsen local thresholding and other thresholding techniques is that Bernsen local thresholding uses a dynamic threshold value, while many other thresholding techniques use a single global threshold value to separate foreground and background regions.

Bernsen Local Thresholding in Mahotas

In Mahotas, we can use the thresholding.bernsen() and thresholding.gbernsen() functions to apply Bernsen local thresholding on an image. These functions create a window of a fixed size and calculate the local contrast range of each pixel within the window to segment the image. The local contrast range consists of the minimum and maximum grayscale values within the window.

The threshold value is then calculated as the average of the minimum and maximum grayscale values. If the pixel intensity is higher than the threshold, it is assigned to the foreground (white); otherwise, it is assigned to the background (black).

The window is then moved across the image to cover all the pixels, creating a binary image where the foreground and background regions are separated based on local intensity variations.

The mahotas.thresholding.bernsen() function

The mahotas.thresholding.bernsen() function takes a grayscale image as input and applies Bernsen local thresholding on it. It outputs an image where each pixel is assigned a value of 0 (black) or 255 (white).
The foreground pixels correspond to the regions of the image that have a relatively high intensity, whereas the background pixels correspond to the regions of the image that have a relatively low intensity.

Syntax

Following is the basic syntax of the bernsen() function in mahotas −

mahotas.thresholding.bernsen(f, radius, contrast_threshold, gthresh=128)

Where,

f − It is the input grayscale image.

radius − It is the size of the window around each pixel.

contrast_threshold − It is the local contrast threshold value.

gthresh (optional) − It is the global threshold value (default is 128).

Example

The following example shows the usage of the mh.thresholding.bernsen() function to apply Bernsen local thresholding on an image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread('nature.jpeg')

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Creating Bernsen threshold image
threshold_image = mh.thresholding.bernsen(image, 5, 200)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap='gray')
axes[1].set_title('Bernsen Threshold Image')
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

The mahotas.thresholding.gbernsen() function

The mahotas.thresholding.gbernsen() function also applies Bernsen local thresholding on an input grayscale image. It is the generalized version of the Bernsen local thresholding algorithm. It outputs a segmented image where each pixel is assigned a value of 0 or 255 depending on whether it is background or foreground.
The difference between the gbernsen() and bernsen() functions is that gbernsen() uses a structuring element to define the local neighborhood, while bernsen() uses a window of fixed size to define the local neighborhood around a pixel. Also, gbernsen() calculates the threshold value based on the contrast threshold and the global threshold, while bernsen() only uses the contrast threshold to calculate the threshold value of each pixel.

Syntax

Following is the basic syntax of the gbernsen() function in mahotas −

mahotas.thresholding.gbernsen(f, se, contrast_threshold, gthresh)

Where,

f − It is the input grayscale image.

se − It is the structuring element.

contrast_threshold − It is the local contrast threshold value.

gthresh (optional) − It is the global threshold value.

Example

In this example, we are using the mh.thresholding.gbernsen() function to apply generalized Bernsen local thresholding on an image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread('nature.jpeg')

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Creating a structuring element
structuring_element = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1]])

# Creating generalized Bernsen threshold image
threshold_image = mh.thresholding.gbernsen(image, structuring_element, 200, 128)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap='gray')
axes[1].set_title('Generalized Bernsen Threshold Image')
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Bernsen Local Thresholding using Mean

We can apply Bernsen local thresholding using the mean value of the pixel intensities as the threshold value.
It refers to the average intensity of an image and is calculated by summing the intensity values of all pixels and then dividing the sum by the total number of pixels.

In mahotas, we can do this by first finding the mean pixel intensity of all the pixels using the numpy.mean() function. Then, we define a window size to get the local neighborhood of a pixel. Finally, we set the mean value as the threshold value by passing it to the contrast_threshold parameter of the bernsen() or gbernsen() function.

Example

Here, we are applying Bernsen local thresholding on an image where the threshold value is the mean value of all pixel intensities.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread('nature.jpeg')

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Calculating mean pixel value
mean = np.mean(image)

# Creating Bernsen threshold image
threshold_image = mh.thresholding.bernsen(image, 15, mean)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap='gray')
axes[1].set_title('Bernsen Threshold Image')
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()
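The local min/max rule described in this chapter can be sketched in plain NumPy. This illustrative bernsen_sketch function is our own simplified version, not the Mahotas internals: each pixel is compared with the midpoint of the min and max grey values in its window, and low−contrast windows fall back to the global threshold gthresh −

```python
import numpy as np

def bernsen_sketch(image, radius, contrast_threshold, gthresh=128):
    # Illustrative re-implementation of the Bernsen rule.
    img = np.asarray(image, dtype=float)
    rows, cols = img.shape
    out = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            window = img[max(0, r - radius):r + radius + 1,
                         max(0, c - radius):c + radius + 1]
            lo, hi = window.min(), window.max()
            if hi - lo < contrast_threshold:
                t = gthresh          # homogeneous region: global threshold
            else:
                t = (lo + hi) / 2.0  # local midpoint threshold
            out[r, c] = 255 if img[r, c] > t else 0
    return out
```

In a high−contrast window the local midpoint decides foreground versus background; in a flat window every pixel is classified by the single global value, which is exactly the role of the gthresh parameter above.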

Learn Mahotas – 2D Laplacian Filter

Mahotas – 2D Laplacian Filter

A Laplacian filter is used to detect edges and changes in intensity within an image. Mathematically, the Laplacian filter is defined as the sum of the second derivatives of the image in the x and y directions. The second derivative provides information about the rate of change of intensity at each pixel.

The Laplacian filter emphasizes regions of an image where the intensity changes rapidly, such as edges and corners. It works by subtracting the average of the surrounding pixels from the center pixel, which gives a measure of the second derivative of intensity.

In 2D, the Laplacian filter is typically represented by a square matrix, often 3×3 or 5×5. Here's an example of a 3×3 Laplacian filter −

0  1  0
1 -4  1
0  1  0

2D Laplacian Filter in Mahotas

To apply the 2D Laplacian filter in mahotas, we can use the mahotas.laplacian_2D() function. Here is an overview of the working of the 2D Laplacian filter in Mahotas −

Input Image − The filter takes a grayscale image as input.

Convolution − The Laplacian filter applies a convolution operation on the input image using a kernel. The kernel determines the weights applied to the neighboring pixels during convolution. The convolution operation involves sliding the kernel over the entire image. At each pixel position, the Laplacian filter multiplies the corresponding kernel weights with the pixel values in the neighborhood and calculates the sum.

Laplacian Response − The Laplacian response is obtained by applying the Laplacian operator to the image. It represents the intensity changes or discontinuities in the image, which are associated with edges.

The mahotas.laplacian_2D() function

The mahotas.laplacian_2D() function takes a grayscale image as input and performs a 2D Laplacian operation on it. The resulting image highlights regions of rapid intensity changes, such as edges.
Syntax

Following is the basic syntax of the laplacian_2D() function in mahotas −

mahotas.laplacian_2D(array, alpha=0.2)

Where,

array − It is the input image.

alpha (optional) − It is a scalar value between 0 and 1 that controls the shape of the Laplacian filter. A larger value of alpha increases the sensitivity of the Laplacian filter to the edges. The default value is 0.2.

Example

Following is a basic example of detecting edges in an image using the laplacian_2D() function −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

image = mh.imread('picture.jpg', as_grey=True)

# Applying a laplacian filter
filtered_image = mh.laplacian_2D(image)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].axis('off')

# Displaying the laplacian filtered image
axes[1].imshow(filtered_image, cmap='gray')
axes[1].set_title('Laplacian Filtered')
axes[1].axis('off')

mtplt.show()

Output

After executing the above code, we get the following output −

Scaling the Laplacian Response

By varying the alpha parameter, we can control the scale of the Laplacian response. A smaller alpha value, such as 0.2, produces a relatively subtle response, highlighting finer details and edges. On the other hand, a larger alpha value, like 0.8, amplifies the response, making it more pronounced and emphasizing more prominent edges and structures. Hence, we can use different alpha values in the Laplacian filter to reflect the changes in edge detection.
Example

Here, we are using different alpha values in the Laplacian filter to reflect the changes in edge detection −

import mahotas as mh
import matplotlib.pyplot as plt

# Load an example image
image = mh.imread('pic.jpg', as_grey=True)

# Apply the Laplacian filter with different alpha values
filtered_image_1 = mh.laplacian_2D(image, alpha=0)
filtered_image_2 = mh.laplacian_2D(image, alpha=0.5)
filtered_image_3 = mh.laplacian_2D(image, alpha=1)

# Display the original and filtered images with different scales
fig, axes = plt.subplots(1, 4, figsize=(12, 3))

axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].axis('off')

axes[1].imshow(filtered_image_1, cmap='gray')
axes[1].set_title('Filtered Image (alpha=0)')
axes[1].axis('off')

axes[2].imshow(filtered_image_2, cmap='gray')
axes[2].set_title('Filtered Image (alpha=0.5)')
axes[2].axis('off')

axes[3].imshow(filtered_image_3, cmap='gray')
axes[3].set_title('Filtered Image (alpha=1)')
axes[3].axis('off')

plt.show()

Output

Following is the output of the above code −

Using a Randomly Generated Array

We can apply the Laplacian filter on a randomly generated array as well. To achieve this, first we need to create a random 2D array. We can create the array using the np.random.rand() function of the NumPy library. This function generates values between 0 and 1, representing the pixel intensities of the array.

Next, we pass the randomly generated array to the mahotas.laplacian_2D() function. This function applies the Laplacian filter to the input array and returns the filtered array.
Example

Now, we are trying to apply the Laplacian filter to a randomly generated array −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as plt

# Generate a random 2D array
array = np.random.rand(100, 100)

# Apply the Laplacian filter
filtered_array = mh.laplacian_2D(array)

# Display the original and filtered arrays
plt.figure(figsize=(10, 5))

plt.subplot(1, 2, 1)
plt.imshow(array, cmap='gray')
plt.title('Original Array')
plt.axis('off')

plt.subplot(1, 2, 2)
plt.imshow(filtered_array, cmap='gray')
plt.title('Filtered Array')
plt.axis('off')

plt.show()

Output

Output of the above code is as follows −
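The convolution step described in this chapter can also be sketched directly with the 3×3 kernel shown at the start of the chapter. This illustrative laplacian_sketch function is our own, not the actual mahotas.laplacian_2D implementation (which additionally depends on alpha); it simply slides the kernel over the image −

```python
import numpy as np

def laplacian_sketch(image):
    # Convolve with the 3x3 Laplacian kernel: this mirrors the
    # "subtract the neighbourhood from the centre" idea in the text.
    kernel = np.array([[0, 1, 0],
                       [1, -4, 1],
                       [0, 1, 0]], dtype=float)
    img = np.asarray(image, dtype=float)
    padded = np.pad(img, 1, mode='edge')  # replicate the border pixels
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * kernel)
    return out
```

On a perfectly flat image the response is zero everywhere, while an isolated bright pixel produces a strong negative response at its centre, which is why the filter highlights rapid intensity changes.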

Learn Mahotas – Hit & Miss Transform

Mahotas – Hit & Miss Transform

The Hit & Miss transform is a binary morphological operation that detects specific patterns or shapes within an image. The operation compares a structuring element with the input binary image. The structuring element consists of foreground (1) and background (0) pixels arranged in a specific pattern that represents the desired shape or pattern to be detected.

The Hit & Miss transform performs a pixel−wise logical AND operation between the structuring element and the image, and then checks if the result matches a pre−defined condition. The condition specifies the exact arrangement of foreground and background pixels that should be present in the matched pattern. If the condition is satisfied, the output pixel is set to 1, indicating a match; otherwise, it is set to 0.

Hit & Miss Transform in Mahotas

In Mahotas, we can use the mahotas.hitmiss() function to perform the Hit & Miss transformation on an image. The function uses a structuring element 'Bc' to determine whether a specific pattern exists in the input image.

The structuring element in Mahotas can take on three values: 0, 1, or 2. A value of 1 indicates the foreground of the structuring element, while 0 represents the background. The value 2 is used as a "don't care" value, meaning that a match should not be performed for that particular pixel.

To identify a match, the structuring element's values must overlap with the corresponding pixel values in the input image. If the overlap satisfies the conditions specified by the structuring element, the pixel is considered a match.

The mahotas.hitmiss() function

The mahotas.hitmiss() function takes a grayscale image as input and returns a binary image as output. The white pixels represent areas where there is a match between the structuring element and the input image, while the black pixels represent areas where there is no match.
Syntax

Following is the basic syntax of the hitmiss() function in mahotas −

mahotas.hitmiss(input, Bc, out=np.zeros_like(input))

Where,

input − It is the input grayscale image.

Bc − It is the pattern that needs to be matched in the input image. It can have values of 0, 1, or 2.

out (optional) − It defines the array in which to store the output image (default is an array of the same size as input).

Example

The following example shows the Hit & Miss transformation on an image using the mh.hitmiss() function.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread('tree.tiff')

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying thresholding
threshold_image = mh.thresholding.bernsen(image, 5, 200)

# Creating hit & miss template
template = np.array([[1, 2, 1, 2, 1], [2, 1, 1, 1, 2], [2, 2, 1, 2, 2]])

# Applying hit & miss transformation
hit_miss = mh.hitmiss(threshold_image, template)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].set_axis_off()

# Displaying the hit & miss transformed image
axes[1].imshow(hit_miss, cmap='gray')
axes[1].set_title('Hit & Miss Transformed Image')
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

By Detecting Edges

We can also detect the edges of an image by applying the Hit & Miss transformation. The edges represent the boundaries between different regions in an image. These are areas where the difference in intensity values between neighboring pixels is high.

In mahotas, to detect edges using the Hit & Miss transform, we first create a structuring element. This structuring element matches the edges of the template with the input image.
We then apply thresholding on the image and pass the structuring element as the Bc parameter to the hitmiss() function. For example, the following structuring element can be used to detect edges in an input image −

[[1, 2, 1]
 [2, 2, 2]
 [1, 2, 1]]

Here, the 1s are present at the top−right, top−left, bottom−right, and bottom−left positions of the structuring element. The edges are usually present at these locations in an image. The 1s in the structuring element match the pixels having an intensity value of 1 in the image, thus highlighting the edges as the foreground.

Example

In this example, we are trying to detect the edges of an image by applying the Hit & Miss transformation −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread('nature.jpeg')

# Converting it to grayscale
image = mh.colors.rgb2gray(image).astype(np.uint8)

# Applying thresholding
threshold_value = mh.thresholding.rc(image)
threshold_image = image > threshold_value

# Creating hit & miss template
template = np.array([[1, 2, 1], [2, 2, 2], [1, 2, 1]])

# Applying hit & miss transformation
hit_miss = mh.hitmiss(threshold_image, template)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].set_axis_off()

# Displaying the hit & miss transformed image
axes[1].imshow(hit_miss, cmap='gray')
axes[1].set_title('Hit & Miss Transformed Image')
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

By Detecting Diagonals

We can use the Hit & Miss transformation to detect the diagonals of an image as well. Diagonals are indicated by a linear pattern connecting opposite corners of an image. These are the regions where the pixel intensity changes along a diagonal path.
In mahotas, we first apply thresholding on the input image. We then pass a structuring element as the Bc parameter to the hitmiss() function. This structuring element matches the diagonals of the template with the diagonals of the input image. For example, the following structuring element can be used to detect diagonals in an input image −

[[0, 2, 0]
 [2, 0, 2]
 [0, 2, 0]]

Here, the 0s run along a diagonal path from the top−left to the bottom−right position, and from the top−right to the bottom−left position. The diagonals are usually present at these locations in an image. The 0s in the structuring element match the pixels having an intensity value of 0 in the image.
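The matching rule used throughout this chapter − 1 must match foreground, 0 must match background, and 2 is a "don't care" − can be sketched in plain NumPy. This illustrative hitmiss_sketch function is our own simplified version, not the Mahotas implementation −

```python
import numpy as np

def hitmiss_sketch(binary, Bc):
    # A pixel is a hit when every non-"don't care" entry of Bc
    # (i.e. every entry that is not 2) equals the underlying pixel.
    img = np.asarray(binary)
    br, bc = Bc.shape
    pr, pc = br // 2, bc // 2
    padded = np.pad(img, ((pr, pr), (pc, pc)), constant_values=0)
    care = Bc != 2  # positions where a match is actually required
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + br, c:c + bc]
            out[r, c] = int(np.all(win[care] == Bc[care]))
    return out
```

Running it with a small cross−shaped template over a cross−shaped image marks only the centre pixel, since that is the only position where every required 1 lines up with a foreground pixel.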

Learn Mahotas – Soft Threshold

Mahotas – Soft Threshold

Soft threshold refers to decreasing the noise (denoising) of an image to improve its quality. It assigns a continuous range of values to the pixels based on their closeness to the threshold value. This results in a gradual transition between the foreground and background regions.

In soft thresholding, the threshold value determines the balance between denoising and image preservation. A higher threshold value results in stronger denoising but leads to loss of information. Conversely, a lower threshold value retains more information but may leave unwanted noise.

Soft Threshold in Mahotas

In Mahotas, we can use the thresholding.soft_threshold() function to apply a soft threshold on an image. It dynamically adjusts the threshold value based on the neighboring pixels to enhance images with non−uniform noise levels.

By using dynamic adjustment, the function proportionally reduces the intensity of those pixels whose intensity exceeds the threshold value and assigns them to the foreground. On the other hand, if a pixel's intensity is below the threshold, it is assigned to the background.

The mahotas.thresholding.soft_threshold() function

The mahotas.thresholding.soft_threshold() function takes a grayscale image as input and returns an image on which soft thresholding has been applied. It works by comparing each pixel's intensity with the provided threshold value.

Syntax

Following is the basic syntax of the soft_threshold() function in mahotas −

mahotas.thresholding.soft_threshold(f, tval)

Where,

f − It is the input grayscale image.

tval − It is the threshold value.

Example

In the following example, we are applying a soft threshold on a grayscale image using the mh.thresholding.soft_threshold() function.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread('tree.tiff')

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Setting threshold value
tval = 150

# Applying soft threshold on the image
threshold_image = mh.thresholding.soft_threshold(image, tval)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap='gray')
axes[1].set_title('Soft Threshold Image')
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Soft Threshold using the Mean Value

We can apply a soft threshold using the mean value of the pixel intensities of an image. The mean value refers to the average intensity of an image. It is calculated by summing the intensity values of all pixels and then dividing the sum by the total number of pixels.

In mahotas, we can find the mean pixel intensity of an image using the numpy.mean() function. The mean value can then be passed to the tval parameter of the mahotas.thresholding.soft_threshold() function to generate a soft threshold image. This approach maintains a good balance between denoising and image quality, as the threshold value is neither too high nor too low.

Example

The following example shows a soft threshold being applied on a grayscale image where the threshold is the mean value of the pixel intensities.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread('tree.tiff')

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Setting mean threshold value
tval = np.mean(image)

# Applying soft threshold on the image
threshold_image = mh.thresholding.soft_threshold(image, tval)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original Image')
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap='gray')
axes[1].set_title('Soft Threshold Image')
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Soft Threshold using the Percentile Value

In addition to the mean value, we can also apply a soft threshold using a percentile value of the pixel intensities of an image. A percentile refers to the value below which a given percentage of data falls; in image processing, it refers to the distribution of pixel intensities in an image.

For example, let's set the threshold percentile to 85. This means that only pixels with intensities greater than 85% of the other pixels in the image will be classified as foreground, while the remaining pixels will be classified as background.

In mahotas, we can use the numpy.percentile() function to set a threshold value based on a percentile of pixel intensity. This value is then used in the soft_threshold() function to apply a soft threshold on an image.

Example

In this example, we show how a soft threshold is applied when the threshold is found using a percentile value.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Setting percentile threshold value
tval = np.percentile(image, 85)

# Applying soft threshold on the image
threshold_image = mh.thresholding.soft_threshold(image, tval)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the threshold image
axes[1].imshow(threshold_image, cmap="gray")
axes[1].set_title("Soft Threshold Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −
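The soft−threshold arithmetic used in these examples can be checked on a tiny array. The sketch below assumes the standard soft−threshold rule for non−negative image data, max(x − t, 0); it illustrates the idea only and is not the mahotas implementation:

```python
import numpy as np

# A tiny "image" of pixel intensities
pixels = np.array([10., 50., 120., 200., 240.])

# Threshold at the mean intensity, as in the example above
tval = pixels.mean()  # (10 + 50 + 120 + 200 + 240) / 5 = 124.0

# Soft threshold (assumed rule, not mahotas code): shrink every
# value toward zero by tval, clipping negative results to zero
soft = np.maximum(pixels - tval, 0)

print(tval)  # 124.0
print(soft)  # [  0.   0.   0.  76. 116.]
```

Unlike a hard threshold, surviving pixels are not kept at their original values: every pixel above the threshold is also shrunk by the threshold amount, which is what produces the denoising effect.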


Mahotas – Distance Transform

Distance transformation is a technique that calculates the distance between each pixel and the nearest background pixel. It works on distance metrics, which define how the distance between two points in a space is calculated. Applying distance transformation on an image creates a distance map. In the distance map, dark shades are assigned to pixels near the boundary, indicating a short distance, and lighter shades are assigned to pixels further away, indicating a larger distance to the nearest background pixel.

Distance Transform in Mahotas

In Mahotas, we can use the mahotas.distance() function to perform distance transformation on an image. It uses an iterative approach to create a distance map. The function first initializes the distance values for all pixels in the image: the foreground pixels are assigned a distance value of infinity, while the background pixels are assigned a distance value of zero. Then, the function updates the distance value of each foreground pixel based on the distances of its neighboring pixels. This continues until the distance values of all the foreground pixels have been computed.

The mahotas.distance() function

The mahotas.distance() function takes an image as input and returns a distance map as output. The distance map is an image that contains the distance between each pixel in the input image and the nearest background pixel.

Syntax

Following is the basic syntax of the distance() function in mahotas −

mahotas.distance(bw, metric="euclidean2")

Where,

bw − It is the input image.

metric (optional) − It specifies the distance metric used to compute the distance between a pixel and a background pixel (default is "euclidean2", the squared euclidean distance).

Example

In the following example, we are performing distance transformation on an image using the mh.distance() function.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Finding distance map
distance_map = mh.distance(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the distance transformed image
axes[1].imshow(distance_map)
axes[1].set_title("Distance Transformed Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Using Labeled Image

We can also perform distance transformation using a labeled image. A labeled image refers to an image where distinct regions are assigned unique labels, segmenting the image into different regions.

In mahotas, we can apply distance transformation on an input image by first reducing its noise using the mh.gaussian_filter() function. Then, we use the mh.label() function to separate the foreground regions from the background regions. We can then create a distance map using the mh.distance() function. This will calculate the distance between the pixels of the foreground regions and the pixels of the background region.

Example

In the example mentioned below, we are finding the distance map of a filtered labeled image.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying gaussian filtering
gauss_image = mh.gaussian_filter(image, 3)
gauss_image = (gauss_image > gauss_image.mean())

# Converting it to a labeled image
labeled, num_objects = mh.label(gauss_image)

# Finding distance map
distance_map = mh.distance(labeled)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the distance transformed image
axes[1].imshow(distance_map)
axes[1].set_title("Distance Transformed Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Using Euclidean Distance

Another way of performing distance transformation on an image is using euclidean distance. Euclidean distance is the straight−line distance between two points in a coordinate system. It is calculated as the square root of the sum of squared differences between the coordinates.

For example, let's say there are two points A and B with coordinates (2, 3) and (5, 7) respectively. The squared differences of the x and y coordinates are (5 − 2)² = 9 and (7 − 3)² = 16. Their sum is 9 + 16 = 25, and the square root of this is 5, which is the euclidean distance between point A and point B.

In mahotas, we can use euclidean distance instead of the default euclidean2 (squared euclidean) as the distance metric. To do this, we pass the value "euclidean" to the metric parameter.

Note − The metric should be passed as the string "euclidean" (in quotes), since the metric parameter expects a string value.

Example

In this example, we are using the euclidean distance metric for distance transformation of an image.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sun.png")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying gaussian filtering
gauss_image = mh.gaussian_filter(image, 3)
gauss_image = (gauss_image > gauss_image.mean())

# Finding distance map
distance_map = mh.distance(gauss_image, metric="euclidean")

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the distance transformed image
axes[1].imshow(distance_map)
axes[1].set_title("Distance Transformed Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −
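The behaviour of the two metrics can be reproduced by brute force on a tiny binary array. The following sketch is purely illustrative (plain NumPy, not the iterative algorithm mahotas uses internally):

```python
import numpy as np

# Tiny binary image: 0 = background, 1 = foreground
bw = np.array([[0, 1, 1],
               [1, 1, 1],
               [1, 1, 0]])

# Coordinates of all background (zero) pixels
bg = np.argwhere(bw == 0)

# Brute-force distance map with the default 'euclidean2' metric:
# squared euclidean distance from each pixel to the nearest background pixel
dmap = np.zeros(bw.shape)
for y in range(bw.shape[0]):
    for x in range(bw.shape[1]):
        dmap[y, x] = min((y - by) ** 2 + (x - bx) ** 2 for by, bx in bg)

print(dmap)
# [[0. 1. 4.]
#  [1. 2. 1.]
#  [4. 1. 0.]]

# The 'euclidean' metric is simply the square root of the above,
# e.g. for the points A = (2, 3) and B = (5, 7) from the text:
print(np.sqrt((5 - 2) ** 2 + (7 - 3) ** 2))  # 5.0
```

Note how background pixels get a distance of zero and values grow with distance from the background, exactly as the distance-map shading described above.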


Mahotas – Zernike Moments

Like Zernike features, Zernike moments are also a set of mathematical values that describe the shape of objects in an image. They provide specific details about the shape, such as how round or symmetrical it is, or whether particular patterns are present.

Zernike moments have some special properties, as discussed below −

Scale Invariance − They can describe a shape regardless of its size. Whether the object is small or large, the Zernike moments still capture its shape accurately.

Rotation Invariance − If you rotate the object in the image, the Zernike moments remain the same.

Zernike moments break down the shape into smaller pieces using special mathematical functions called Zernike polynomials. These polynomials act like building blocks; by combining different Zernike polynomials, we can recreate and represent the unique features of the object's shape.

Zernike Moments in Mahotas

To calculate the Zernike moments in mahotas, we can use the mahotas.features.zernike_moments() function. Zernike moments are calculated by generating a set of Zernike polynomials, which are special mathematical functions representing various shapes and contours. The moments are then computed by projecting the shape of the object onto these polynomials; they capture important shape characteristics.

The mahotas.features.zernike_moments() function

The mahotas.features.zernike_moments() function takes two main arguments: the image object and the maximum radius for the Zernike polynomials. The function returns a 1−D array of the Zernike moments of the image.
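The projection mentioned above has a standard closed form. As a sketch (this is the conventional Zernike formulation from the image−analysis literature, not code taken from the mahotas source), the moment of order n with repetition m of an image f over the unit disk is −

```latex
A_{nm} = \frac{n+1}{\pi} \int_{0}^{2\pi}\!\!\int_{0}^{1}
         f(\rho,\theta)\, V_{nm}^{*}(\rho,\theta)\, \rho \, d\rho \, d\theta,
\qquad
V_{nm}(\rho,\theta) = R_{nm}(\rho)\, e^{\,im\theta}

R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2}
  \frac{(-1)^{s}\,(n-s)!}
       {s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}
  \;\rho^{\,n-2s}
```

Rotating the image by an angle θ₀ only multiplies A_nm by the phase factor e^(−imθ₀), so the magnitudes |A_nm| are unchanged − which is what makes the returned moment values rotation invariant.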
Syntax

Following is the basic syntax of the mahotas.features.zernike_moments() function −

mahotas.features.zernike_moments(im, radius, degree=8, cm={center_of_mass(im)})

Where,

im − It is the input image on which the Zernike moments will be computed.

radius − It defines the radius, in pixels, of the circular region over which the Zernike moments are calculated. The area outside the circle defined by this radius, centered around the center of mass, is ignored.

degree (optional) − It specifies the maximum degree of the Zernike polynomials to use; higher degrees produce more moments. By default, the degree value is 8.

cm (optional) − It specifies the center of mass to use. By default, the center of mass of the image is used.

Example

Following is a basic example of calculating the Zernike moments of an image with the default degree value −

import mahotas as mh

# Load the image as grayscale
image = mh.imread("sun.png", as_grey=True)

# Compute Zernike moments
moments = mh.features.zernike_moments(image, radius=10)

# Print the moments for shape recognition
print(moments)

Output

After executing the above code, we get the output as follows −

[0.31830989 0.00534998 0.00281258 0.0057374 0.01057919 0.00429721
 0.00178094 0.00918145 0.02209622 0.01597089 0.00729495 0.00831211
 0.00364554 0.01171028 0.02789188 0.01186194 0.02081316 0.01146935
 0.01319499 0.03367388 0.01580632 0.01314671 0.02947629 0.01304526
 0.00600012]

Using Custom Center of Mass

The center of mass of an image is the point where its mass is evenly distributed. A custom center of mass is a point that is not necessarily the center of mass of the image. This can be useful when you want to use a different reference point for your calculations. For example, you might want to use the center of mass of a particular object in an image to calculate the Zernike moments of that object.
To calculate the Zernike moments of an image using a custom center of mass in mahotas, we pass the cm parameter to the mahotas.features.zernike_moments() function. The cm parameter takes a tuple of two numbers, which represent the coordinates of the custom center of mass.

Example

Here, we are calculating the Zernike moments of an image using a custom center of mass −

import mahotas
import numpy as np

# Load the image
image = mahotas.imread("nature.jpeg", as_grey=True)

# Define a custom center of mass
center_of_mass = np.array([100, 100])

# Calculate the Zernike moments of the image, using the custom center of mass
zernike_moments = mahotas.features.zernike_moments(image, radius=5, cm=center_of_mass)

# Print the Zernike moments
print(zernike_moments)

Output

Following is the output of the above code −

[3.18309886e-01 3.55572603e-04 3.73132619e-02 5.98944983e-04
 3.23622041e-04 1.72293481e-04 9.16757235e-02 3.35704966e-04
 7.09426259e-02 1.17847972e-04 2.12625026e-04 3.06537827e-04
 1.94379185e-01 1.32093249e-04 8.54616882e-02 1.83274207e-04
 1.86728282e-04 3.08004108e-04 4.79437809e-04 1.97726337e-04
 3.61630733e-01 5.27467687e-04 8.25534856e-02 7.75593823e-06
 1.99419391e-01]

Using a Specific Order

The order (degree) of a Zernike moment is a measure of the complexity of the shape that it can represent: the higher the order, the more complex the shape. To compute the Zernike moments of an image up to a specific order in mahotas, we pass the degree parameter to the mahotas.features.zernike_moments() function.

Example

In the following example, we are computing the Zernike moments of an image with a specified order.
import mahotas
import numpy as np

# Load the image
image = mahotas.imread("nature.jpeg", as_grey=True)

# Calculate the Zernike moments of the image, using radius 1 and degree 4
zernike_moments = mahotas.features.zernike_moments(image, radius=1, degree=4)

# Print the Zernike moments
print(zernike_moments)

Output

Output of the above code is as shown below −

[0.31830989 0.17086131 0.03146824 0.1549947 0.30067136 0.5376049
 0.30532715 0.33032683 0.47908119]
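The role of the radius and cm parameters can be visualised with a small sketch: the moments are computed only over a circular region around the chosen centre, and everything outside that circle is ignored. This illustrates the masking idea only; it is not mahotas code:

```python
import numpy as np

# A small 9x9 "image" grid of pixel coordinates
h, w = 9, 9
ys, xs = np.mgrid[0:h, 0:w]

# Centre of mass, chosen manually here (like the cm parameter)
cy, cx = 4, 4
radius = 3

# Only pixels inside this circle would contribute to the moments;
# everything outside the radius is ignored
mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2

print(mask.sum(), "of", h * w, "pixels fall inside the circle")
# 29 of 81 pixels fall inside the circle
```

Choosing the radius too small therefore discards parts of the object, while choosing it too large includes background pixels in the shape description.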


Mahotas – Regional Maxima of Image

Regional maxima are regions of an image whose pixel intensity is higher than that of all their surrounding pixels. In an image, the regions which form regional maxima are the brightest within their surroundings.

Regional maxima differ from local maxima: a local maximum only considers a small neighborhood around a pixel, while a regional maximum takes the whole connected region into account. Regional maxima are a subset of local maxima, so all regional maxima are local maxima, but not all local maxima are regional maxima. An image can contain multiple regional maxima, and different regional maxima may have different intensity values; within a single regional maximum, however, all pixels share the same intensity.

Regional Maxima of Image in Mahotas

In Mahotas, we can find the regional maxima in an image using the mahotas.regmax() function. Regional maxima are identified as intensity peaks within an image because they represent high intensity regions. In the output, the regional maxima points are highlighted in white while all other points are colored black.

The mahotas.regmax() function

The mahotas.regmax() function extracts regional maxima from an input grayscale image. It outputs an image where 1's represent the presence of regional maxima points and 0's represent normal points.

The regmax() function uses a morphological reconstruction−based approach to find the regional maxima. In this approach, the intensity value of each local maxima region is compared with its surroundings; a candidate region is discarded whenever a connected path leads to a higher intensity. This continues until no higher−intensity region remains reachable, at which point the regional maxima have been found.

Syntax

Following is the basic syntax of the regmax() function in mahotas −

mahotas.regmax(f, Bc={3×3 cross}, out={np.empty(f.shape, bool)})

Where,

f − It is the input grayscale image.

Bc (optional) − It is the structuring element used for connectivity.
out (optional) − It is the output array of Boolean data type (defaults to a new array of the same size as f).

Example

In the following example, we are getting the regional maxima of an image using the mh.regmax() function.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Getting the regional maxima
regional_maxima = mh.regmax(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the regional maxima
axes[1].imshow(regional_maxima, cmap="gray")
axes[1].set_title("Regional Maxima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Using Custom Structuring Element

We can also use a custom structuring element to get the regional maxima from an image. A structuring element is a binary array of odd dimensions consisting of ones and zeroes, which defines the connectivity pattern of the neighborhood pixels during image labeling. The ones indicate the neighboring pixels that are included in the connectivity analysis, while the zeros represent the neighbors that are excluded or ignored.

In mahotas, while extracting regional maxima we can use a custom structuring element to define the connectivity of neighboring pixels. We do this by first creating an odd−dimension structuring element using the numpy.array() function. Then, we pass this custom structuring element to the Bc parameter of the regmax() function.

For example, let's consider the custom structuring element: [[0, 0, 0, 0, 1], [0, 0, 1, 0, 0], [1, 0, 0, 0, 0], [0, 0, 0, 1, 0], [0, 1, 0, 0, 0]].
This structuring element defines a sparse, irregular connectivity pattern: only the neighbors at the positions marked with 1 are treated as connected to the central pixel.

Example

In this example, we are using a custom structuring element to get the regional maxima of an image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sun.png")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Setting custom structuring element
struct_element = np.array([[0, 0, 0, 0, 1],
                           [0, 0, 1, 0, 0],
                           [1, 0, 0, 0, 0],
                           [0, 0, 0, 1, 0],
                           [0, 1, 0, 0, 0]])

# Getting the regional maxima
regional_maxima = mh.regmax(image, Bc=struct_element)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the regional maxima
axes[1].imshow(regional_maxima, cmap="gray")
axes[1].set_title("Regional Maxima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Using a Specific Region of an Image

We can also find the regional maxima of a specific region of an image. A specific region refers to a small part of a larger image, which can be extracted by cropping the original image to remove unnecessary areas.

In mahotas, we can find the regional maxima within a portion of an image. First, we crop the original image by specifying the required slice along each axis. Then we pass the cropped image to the regmax() function.

For example, let's say we specify [:800, 70:] as the slice. Then the cropped image ranges from 0 to 800 pixels along the first axis, and from 70 to the maximum dimension along the second axis.

Example

In this example, we are getting the regional maxima within a specific region of an image.
import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Using a specific region of the image
image = image[:800, 70:]

# Getting the regional maxima
regional_maxima = mh.regmax(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the regional maxima
axes[1].imshow(regional_maxima, cmap="gray")
axes[1].set_title("Regional Maxima")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −
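The reconstruction−based idea described in this chapter can be sketched in pure NumPy on a tiny array. This is an illustrative, unoptimised version (mahotas implements this in compiled code, and details such as border handling may differ):

```python
import numpy as np

def dilate_cross(img):
    """Grayscale dilation with a 3x3 cross structuring element."""
    p = np.pad(img, 1, mode='constant', constant_values=img.min() - 1)
    return np.max(np.stack([p[1:-1, 1:-1],   # centre
                            p[:-2, 1:-1],    # up
                            p[2:, 1:-1],     # down
                            p[1:-1, :-2],    # left
                            p[1:-1, 2:]]),   # right
                  axis=0)

def regional_maxima(f):
    """Regional maxima via morphological reconstruction (illustrative)."""
    rec = f - 1                                   # seed: image lowered by 1
    while True:
        nxt = np.minimum(dilate_cross(rec), f)    # geodesic dilation under f
        if np.array_equal(nxt, rec):
            break
        rec = nxt
    return f > rec          # pixels the reconstruction cannot fully reach

f = np.array([[0, 0, 0, 0, 0],
              [0, 2, 2, 0, 0],
              [0, 2, 2, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0]])

maxima = regional_maxima(f)
print(maxima.astype(int))
# [[0 0 0 0 0]
#  [0 1 1 0 0]
#  [0 1 1 0 0]
#  [0 0 0 1 0]
#  [0 0 0 0 0]]
```

In this sketch, both the plateau of 2's and the single 1 are detected: each is brighter than everything around it, so each forms its own regional maximum.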


Mahotas – Haralick Features

Haralick features describe the texture of an image. Texture refers to the patterns in an image that give it a specific look, such as the smoothness of a surface or the arrangement of objects.

To work with Haralick features, we use a special matrix known as the gray−level co−occurrence matrix (GLCM). It is a matrix that represents the relationship between pairs of pixel intensities in an image: it records how frequently different combinations of pixel intensity values occur at a specific distance and direction within the image.

Haralick Features in Mahotas

To calculate Haralick features using Mahotas, first create the GLCM by specifying the distance and direction of pixel pairs. Next, use the GLCM to calculate various statistical measures that form the Haralick features. These measures include contrast, correlation, energy, entropy, homogeneity, and more. Finally, retrieve the computed Haralick features.

For example, the contrast feature in Haralick's texture analysis tells us how much the brightness or darkness of neighboring pixels varies in an image. To calculate this feature, we analyze the GLCM, which shows how often pairs of pixels with different brightness levels appear together at a given offset in the image.

We can use the mahotas.features.haralick() function to calculate the Haralick features in mahotas.

The mahotas.features.haralick() function

The haralick() function takes a grayscale image as input and returns the calculated Haralick features. Mahotas computes the features by analyzing the GLCM of the image; in this way, we can extract information about the texture patterns present in an image.
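The GLCM described above can be built by hand on a tiny image. The following sketch counts co−occurring pixel pairs in the horizontal direction at distance 1 (mahotas computes one such matrix for each of several directions; this is an illustration of the counting idea, not the mahotas implementation):

```python
import numpy as np

# A tiny 4-level grayscale image (intensity values 0..3)
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

levels = 4
glcm = np.zeros((levels, levels), dtype=int)

# Count horizontal neighbour pairs at distance 1:
# glcm[a, b] = number of times intensity b appears
# immediately to the right of intensity a
for y in range(img.shape[0]):
    for x in range(img.shape[1] - 1):
        glcm[img[y, x], img[y, x + 1]] += 1

print(glcm)
# [[2 2 1 0]
#  [0 2 0 0]
#  [0 0 3 1]
#  [0 0 0 1]]
```

Statistics such as contrast, energy, and entropy are then computed from this matrix (usually after normalising it so its entries sum to one), which is exactly what the features returned by haralick() summarise.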
Syntax

Following is the basic syntax of the haralick() function in mahotas −

mahotas.features.haralick(f, ignore_zeros=False, preserve_haralick_bug=False, compute_14th_feature=False, return_mean=False, return_mean_ptp=False, use_x_minus_y_variance=False, distance=1)

Parameters

Following are the parameters accepted by the haralick() function in mahotas −

f − It is the input image.

ignore_zeros (optional) − It specifies whether zero values in the input matrix should be ignored (True) or considered (False) when computing the Haralick features.

preserve_haralick_bug (optional) − It determines whether to replicate a typo in Haralick's original equations. By default, it is set to False.

compute_14th_feature (optional) − It indicates whether to compute the 14th Haralick feature (the maximal correlation coefficient). By default, it is set to False.

use_x_minus_y_variance (optional) − By default, mahotas uses VAR[P(|x−y|)]; if this parameter is True, it uses VAR[|x−y|].

distance (optional) − It represents the pixel distance used when computing the GLCM. It determines the neighborhood size considered when analyzing the spatial relationship between pixels. By default, it is set to 1.

return_mean (optional) − When set to True, the function returns the mean value across all the directions.

return_mean_ptp (optional) − When set to True, the function returns the mean value and the point−to−point (ptp) value (the difference between max() and min()) across all the directions.
Example

Following is a basic example of calculating the Haralick features in mahotas −

import mahotas
import numpy as np
import matplotlib.pyplot as mtplt

# Load a grayscale image
image = mahotas.imread("nature.jpeg", as_grey=True).astype(np.uint8)

# Compute Haralick texture features
features = mahotas.features.haralick(image)
print(features)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the Haralick feature matrix
axes[1].imshow(features, cmap="gray")
axes[1].set_title("Haralick Feature")
axes[1].axis("off")

mtplt.show()

Output

After executing the above code, we get the output as shown below −

[[ 2.77611344e-03  2.12394600e+02  9.75234595e-01  4.28813094e+03
   4.35886838e-01  2.69140151e+02  1.69401291e+04  8.31764345e+00
   1.14305862e+01  6.40277627e-04  4.00793348e+00 -4.61407168e-01
   9.99473205e-01]
 [ 1.61617121e-03  3.54272691e+02  9.58677001e-01  4.28662846e+03
   3.50998369e-01  2.69132899e+02  1.67922411e+04  8.38274113e+00
   1.20062562e+01  4.34549344e-04  4.47398649e+00 -3.83903098e-01
   9.98332575e-01]
 [ 1.92630414e-03  2.30755916e+02  9.73079650e-01  4.28590105e+03
   3.83777866e-01  2.69170823e+02  1.69128483e+04  8.37735303e+00
   1.17467122e+01  5.06580792e-04  4.20197981e+00 -4.18866103e-01
   9.99008620e-01]
 [ 1.61214638e-03  3.78211585e+02  9.55884630e-01  4.28661922e+03
   3.49497239e-01  2.69133049e+02  1.67682653e+04  8.38060403e+00
   1.20309899e+01  4.30756183e-04  4.49912123e+00 -3.80573424e-01
   9.98247930e-01]]

The image displayed is as shown below −

Haralick Features with Ignore Zeros

In certain image analysis scenarios, it is necessary to ignore specific pixel values during the computation of Haralick texture features. One common case is when zero values represent a specific background or noise that should be excluded from the analysis. In Mahotas, we can ignore zero values by setting the ignore_zeros parameter to True.
Example

Here, we are calculating the Haralick features of an image while ignoring its zero values −

import mahotas
import numpy as np
import matplotlib.pyplot as mtplt

# Load a grayscale image
image = mahotas.imread("sun.png", as_grey=True).astype(np.uint8)

# Compute Haralick texture features while ignoring zero pixels
features = mahotas.features.haralick(image, ignore_zeros=True)
print(features)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the Haralick feature matrix
axes[1].imshow(features, cmap="gray")
axes[1].set_title("Haralick Feature")
axes[1].axis("off")

mtplt.show()

Output

Following is the output of the above code −

[[ 2.67939014e-03  5.27444410e+01  9.94759846e-01  5.03271870e+03
   5.82786178e-01  2.18400839e+02  2.00781303e+04  8.26680366e+00
   1.06263358e+01  1.01107651e-03  2.91875064e+00 -5.66759616e-01
   9.99888025e-01]
 [ 2.00109668e-03  1.00750583e+02  9.89991374e-01  5.03318740e+03
   4.90503673e-01  2.18387049e+02  2.00319990e+04  8.32862989e+00
   1.12183182e+01  7.15118996e-04  3.43564495e+00 -4.86983515e-01
   9.99634586e-01]
 [ 2.29690324e-03  6.34944689e+01  9.93691944e-01  5.03280779e+03
   5.33850851e-01  2.18354256e+02  2.00677367e+04  8.30278737e+00
   1.09228656e+01  8.42614942e-04  3.16166477e+00 -5.26842246e-01
   9.99797686e-01]
 [ 2.00666032e-03  1.07074413e+02  9.89363195e-01  5.03320370e+03
   4.91882840e-01  2.18386605e+02  2.00257404e+04  8.32829316e+00
   1.12259184e+01  7.18459598e-04  3.44609033e+00 -4.85960134e-01
   9.99629000e-01]]

The image obtained is as follows −

Computing Haralick Features with the 14th Feature

The 14th Haralick feature, the maximal correlation coefficient, measures the most complex form of linear dependency between the gray levels of neighboring pixels; it is derived from the eigenvalues of a matrix computed from the GLCM. A high value indicates a strong dependency between neighboring gray levels, while a low value indicates that neighboring gray levels vary largely independently. It is the most expensive Haralick feature to compute, which is why it is not calculated by default.

In Mahotas, we can compute the 14th Haralick feature by setting the compute_14th_feature parameter to True.

Example

Now, we are computing the Haralick features of an image, including the 14th feature −

import mahotas
import numpy as np
import matplotlib.pyplot as mtplt

# Load a grayscale image
image = mahotas.imread("tree.tiff", as_grey=True).astype(np.uint8)

# Compute Haralick texture features and include the 14th feature
features = mahotas.features.haralick(image, compute_14th_feature=True)
print(features)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the Haralick feature matrix
axes[1].imshow(features, cmap="gray")
axes[1].set_title("Haralick Feature")
axes[1].axis("off")

mtplt.show()

Output

The output produced is as shown below −

[[