
Mahotas – Labeled Max Array

A labeled max array refers to an array that stores the maximum intensity value of each region in a labeled image. To find the maximum intensity value of a region, every pixel in that region is examined, and the intensity value of the brightest pixel is selected as the maximum. In simple terms, labeled max arrays are used to find the brightest regions of an image.

For example, let's assume we have a region consisting of three pixels with intensity values 0.5, 0.2, and 0.8. The maximum intensity value of that region is then 0.8.

Labeled Max Array in Mahotas

In Mahotas, we can use the mahotas.labeled.labeled_max() function to create a labeled max array. The function searches for the brightest pixel in each region and stores its intensity value in an array. The resulting array is a labeled max array, holding the maximum intensity value of every region of the image.

The mahotas.labeled.labeled_max() function

The mahotas.labeled.labeled_max() function takes an image and a labeled image as inputs. It returns an array that contains the maximum intensity value of each labeled region.

Syntax

Following is the basic syntax of the labeled_max() function in mahotas −

mahotas.labeled.labeled_max(array, labeled, minlength=None)

Where,

array − It is the input image.
labeled − It is the labeled image.
minlength (optional) − It specifies the minimum number of regions to include in the output array (default is None).

Example

In the following example, we are finding the labeled max array of a labeled image using the labeled_max() function.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sea.bmp")

# Converting it to grayscale
image = mh.colors.rgb2gray(image).astype(np.uint8)

# Applying thresholding
threshold = mh.thresholding.rc(image)
threshold_image = image > threshold

# Labeling the image
label, num_objects = mh.label(threshold_image)

# Getting the labeled max array
labeled_max = mh.labeled.labeled_max(image, label)

# Printing the labeled max array
print("Labeled max array:", labeled_max)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the labeled image
axes[1].imshow(label, cmap="gray")
axes[1].set_title("Labeled Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Labeled max array: [107 111 129 … 141 119 109]

The image obtained is as follows −

Labeled Max Arrays of a Random Boolean Image

We can also find the labeled max array of a random Boolean image. A random Boolean image refers to an image where each pixel has a value of either 0 or 1. The foreground pixels are represented by 1, and the background pixels are represented by 0.

In mahotas, to find the labeled max array of a random Boolean image, we first create an all-zero image of a specific size using the np.zeros() function. This image initially consists of only background pixels. We then assign values to a few portions of the image to create distinct regions. After labeling the image, we find the labeled max array using the labeled_max() function, as shown in the example after the sketch below.
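Before moving on to the full example, here is a minimal sketch of the idea using a tiny hand-built intensity array and label array (both hypothetical, chosen so the expected maxima are easy to verify by eye). Region 1 matches the three-pixel example from the introduction.

import mahotas as mh
import numpy as np

# Hypothetical strip of six pixels forming two labeled regions
# (label 0 is the background)
intensities = np.array([[0.5, 0.2, 0.8, 0.1, 0.9, 0.3]])
labels = np.array([[1, 1, 1, 0, 2, 2]])

print(mh.labeled.labeled_max(intensities, labels))
# Expected: [0.1 0.8 0.9] - background max, region 1 max, region 2 max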
Example

In the example mentioned below, we are finding the labeled max array of a random Boolean image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Creating a random image
image = np.zeros((10, 10), bool)

# Assigning values to the regions
image[:2, :2] = 1
image[4:6, 4:6] = 1
image[8:, 8:] = 1

# Labeling the image
label, num_objects = mh.label(image)

# Random sampling
random_sample = np.random.random_sample(image.shape)

# Getting the labeled max array
labeled_max = mh.labeled.labeled_max(random_sample, label)

# Printing the labeled max array
print("Labeled max array")
for i, intensity in enumerate(labeled_max):
    print("Region", i, ":", intensity)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the labeled image
axes[1].imshow(label)
axes[1].set_title("Labeled Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Labeled max array
Region 0 : 0.9950607583625318
Region 1 : 0.8626363785944107
Region 2 : 0.6343883551171169
Region 3 : 0.8162320509314726

We get the following image as output −


Mahotas – Computing Linear Binary Patterns

Linear Binary Patterns (LBP) is a technique used to analyze the texture patterns in an image. It compares the intensity value of a central pixel with its neighboring pixels and encodes the results as binary values (either 0 or 1).

Imagine you have a grayscale image, where each pixel represents a shade of gray ranging from black to white. LBP divides the image into small regions. For each region, it looks at the central pixel and compares its brightness with the neighboring pixels. If a neighboring pixel is brighter than or equal to the central pixel, it is assigned a value of 1; otherwise, it is assigned a value of 0. This process is repeated for all the neighboring pixels, creating a binary pattern.

Computing Linear Binary Patterns in Mahotas

In Mahotas, we can use the features.lbp() function to compute linear binary patterns in an image. The function compares the brightness of the central pixel with its neighbors and assigns binary values (0 or 1) based on the comparisons. These binary values are then combined to create a binary pattern that describes the texture in each region. By doing this for all regions, a histogram is created to count the occurrence of each pattern in the image. The histogram helps us understand the distribution of textures in the image.

The mahotas.features.lbp() function

The mahotas.features.lbp() function takes a grayscale image as input, computes the LBP code of each pixel, and returns a histogram of these codes. The x-axis of the histogram represents the computed LBP value, while the y-axis represents the frequency of that value.

Syntax

Following is the basic syntax of the lbp() function in mahotas −

mahotas.features.lbp(image, radius, points, ignore_zeros=False)

Where,

image − It is the input grayscale image.
radius − It specifies the size of the region considered for comparing pixel intensities.
points − It determines the number of neighboring pixels that should be considered when computing LBP for each pixel.
ignore_zeros (optional) − It is a flag which specifies whether to ignore zero valued pixels (default is False).

Example

In the following example, we are computing linear binary patterns using the mh.features.lbp() function.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Computing linear binary patterns
lbp = mh.features.lbp(image, 5, 5)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the linear binary patterns
axes[1].hist(lbp)
axes[1].set_title("Linear Binary Patterns")
axes[1].set_xlabel("LBP Value")
axes[1].set_ylabel("Frequency")

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Ignoring the Zero Valued Pixels

We can ignore the zero valued pixels when computing linear binary patterns. Zero valued pixels are pixels having an intensity value of 0. They usually represent the background of an image but may also represent noise. In grayscale images, zero valued pixels appear black.

In mahotas, we can set the ignore_zeros parameter to the boolean value True to exclude zero valued pixels in the mh.features.lbp() function.
Example

The following example shows computation of linear binary patterns while ignoring the zero valued pixels.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sea.bmp")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Computing linear binary patterns
lbp = mh.features.lbp(image, 20, 10, ignore_zeros=True)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the linear binary patterns
axes[1].hist(lbp)
axes[1].set_title("Linear Binary Patterns")
axes[1].set_xlabel("LBP Value")
axes[1].set_ylabel("Frequency")

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

LBP of a Specific Region

We can also compute the linear binary patterns of a specific region in the image. A specific region is simply a portion of the image of any size, obtained by cropping the original image.

In mahotas, to compute linear binary patterns of a specific region, we first extract the region of interest from the image by specifying the starting and ending pixel values for the rows and columns. Then we compute the LBP of this region using the lbp() function. For example, if we specify the slice [300:800], the region includes rows 300 up to (but not including) row 800 of the image, that is, a band in the vertical direction (y-axis).

Example

Here, we are computing the LBP of a specific portion of the specified grayscale image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Specifying a region of interest
image = image[300:800]

# Computing linear binary patterns
lbp = mh.features.lbp(image, 20, 10)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the linear binary patterns
axes[1].hist(lbp)
axes[1].set_title("Linear Binary Patterns")
axes[1].set_xlabel("LBP Value")
axes[1].set_ylabel("Frequency")

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −
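Finally, because LBP descriptors computed with the same radius and points always have the same length, they can be compared directly to judge how similar two textures are. The following minimal sketch illustrates this on two small synthetic images (hypothetical data, not the image files used in this tutorial):

import mahotas as mh
import numpy as np

# Two hypothetical textures: a smooth horizontal gradient and pure noise
smooth = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
noisy = np.random.randint(0, 256, (64, 64)).astype(np.uint8)

# Same parameters give descriptors of the same length
lbp_smooth = mh.features.lbp(smooth, radius=3, points=8)
lbp_noisy = mh.features.lbp(noisy, radius=3, points=8)
print(lbp_smooth.shape, lbp_noisy.shape)

# A simple distance between the two histograms; a larger value
# means the textures are more different
print(np.linalg.norm(lbp_smooth - lbp_noisy))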


Mahotas – Gaussian Filtering

Gaussian filtering is a technique used to blur or smoothen an image. It reduces the noise in the image and softens sharp edges.

Imagine your image as a grid of tiny dots, where each dot represents a pixel. Gaussian filtering works by taking each pixel and adjusting its value based on the surrounding pixels. It calculates a weighted average of the pixel values in its neighborhood, placing more emphasis on the closer pixels and less emphasis on the farther ones. By repeating this process for every pixel in the image, the Gaussian filter blurs the image by smoothing out the sharp transitions between different areas and reducing the noise. The size of the filter determines the extent of blurring: a larger filter means a broader region is considered, resulting in more significant blurring.

In simpler terms, Gaussian filtering makes an image look smoother by averaging the nearby pixel values, giving more importance to the closer pixels and less to the ones farther away. This helps to reduce noise and makes the image less sharp.

Gaussian Filtering in Mahotas

In Mahotas, we can perform Gaussian filtering on an image using the mahotas.gaussian_filter() function. This function applies a blurring effect to an image by using a special matrix called a Gaussian kernel.

A Gaussian kernel is a matrix with numbers arranged in a specific way, where each number represents a weight. The kernel is placed over each pixel in the image, and the values of the neighboring pixels are multiplied by their corresponding weights in the kernel. The multiplied values are then summed and assigned as the new value of the central pixel. This process is repeated for every pixel in the image, resulting in a blurred image where sharp details and noise are reduced.

The mahotas.gaussian_filter() function

The mahotas.gaussian_filter() function takes a grayscale image as input and returns a blurred version of the image as output. The amount of blurring is determined by the sigma value: the higher the sigma value, the more blurring is applied to the output image.

Syntax

Following is the basic syntax of the gaussian_filter() function in mahotas −

mahotas.gaussian_filter(array, sigma, order=0, mode="reflect", cval=0., out={np.empty_like(array)})

Where,

array − It is the input image.
sigma − It determines the standard deviation of the Gaussian kernel.
order (optional) − It specifies the order of the Gaussian filter, that is, which derivative of the Gaussian is used. Its value can be 0, 1, 2, or 3 (default is 0).
mode (optional) − It specifies how the border should be handled (default is "reflect").
cval (optional) − It represents the padding value applied when mode is "constant" (default is 0).
out (optional) − It specifies where to store the output image (default is an array of the same size as array).
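The effect of sigma can also be checked numerically: the larger the sigma, the smoother (and therefore less variable) the filtered image becomes. The following minimal sketch assumes a small random image rather than a file from disk:

import mahotas as mh
import numpy as np

# Hypothetical noisy grayscale image with values in [0, 255]
noisy = np.random.randint(0, 256, (128, 128)).astype(float)

# Larger sigma -> stronger smoothing -> lower variance in the result
for sigma in (1, 4, 8):
    blurred = mh.gaussian_filter(noisy, sigma)
    print(sigma, round(blurred.var(), 2))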
Example

In the following example, we are applying Gaussian filtering on an image using the mh.gaussian_filter() function.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying gaussian filtering
gauss_filter = mh.gaussian_filter(image, 4)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the gaussian filtered image
axes[1].imshow(gauss_filter)
axes[1].set_title("Gaussian Filtered Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Filtering with Different Order

We can perform Gaussian filtering on an image with a different order. The order selects which derivative of the Gaussian is used for the convolution. An order of 0 corresponds to plain Gaussian smoothing, while orders 1, 2, and 3 correspond to convolution with the first, second, and third derivatives of the Gaussian. With a higher order, the filter no longer simply blurs the image; it responds to intensity changes such as edges instead of only suppressing noise.

In mahotas, to perform Gaussian filtering with a different order, we pass any value other than 0 as the order parameter to the gaussian_filter() function.

Example

In the example mentioned below, we are applying Gaussian filtering on an image with a different order.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sea.bmp")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying gaussian filtering
gauss_filter = mh.gaussian_filter(image, 3, 1)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the gaussian filtered image
axes[1].imshow(gauss_filter)
axes[1].set_title("Gaussian Filtered Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Filtering with "Mirror" Mode

When applying filters to images, it is important to decide how to handle the borders of the image. The mirror mode is a common approach that handles border pixels by mirroring the image content at the borders. This means that the values beyond the image boundaries are obtained by mirroring the nearest pixels within the image along its edges. This mirroring technique ensures a smooth transition between the actual image and the mirrored content, resulting in better continuity.
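To see what a border mode actually changes, it helps to look at a corner pixel of a tiny image under a few different modes. This is only a sketch with a hypothetical 5x5 ramp image, assuming the usual border modes ("reflect", "mirror", "nearest", "constant") are available in your Mahotas version; the exact numbers depend on sigma:

import mahotas as mh
import numpy as np

# A tiny ramp image makes the border handling easy to inspect
img = np.arange(25, dtype=float).reshape(5, 5)

# The corner value differs depending on how pixels beyond the
# border are assumed to look
for mode in ("reflect", "mirror", "nearest", "constant"):
    out = mh.gaussian_filter(img, 1, mode=mode)
    print(mode, round(out[0, 0], 3))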
Example

In here, we are applying Gaussian filtering on an image with "mirror" mode.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sun.png")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying gaussian filtering
gauss_filter = mh.gaussian_filter(image, 3, 0, mode="mirror")

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the gaussian filtered image
axes[1].imshow(gauss_filter)
axes[1].set_title("Gaussian Filtered Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −


Mahotas – Median Filter

The median filter is another commonly used technique for noise reduction in an image. It works by calculating the middle (median) value among the neighboring pixels and replacing the original pixel value with that middle value.

To understand the median filter, let's consider the same black-and-white image scenario with small black dots representing noise. Each pixel in the image has a binary value: white (representing the object of interest) or black (representing the background). For each pixel, the median filter takes the pixel values of its neighbors within the window and arranges them in ascending order of intensity. It then selects the middle value, which is the median, and replaces the original pixel value with it.

Median Filter in Mahotas

To apply the median filter in Mahotas, you can use the median_filter() function. The median filter in Mahotas uses a structuring element to define the neighborhood examined around each pixel, and replaces the value of each pixel with the median value within that neighborhood.

The size of the structuring element determines the extent of smoothing applied by the median filter. A larger neighborhood results in a stronger smoothing effect, while reducing finer details of the image. A smaller neighborhood results in less smoothing but preserves more detail.

The mahotas.median_filter() function

The median_filter() function applies the median filter to the input image using the specified neighborhood size. It replaces each pixel value with the median value calculated among its neighbors. The filtered image is stored in the output array.

Syntax

Following is the basic syntax of the median filter function in mahotas −

mahotas.median_filter(img, Bc={square}, mode="reflect", cval=0.0, out={np.empty(f.shape, f.dtype)})

Where,

img − It is the input image.
Bc − It is the structuring element that defines the neighbourhood. By default, it is a square of side 3.
mode (optional) − It specifies how the function handles the borders of the image. It can take different values such as "reflect", "constant", "nearest", "mirror" or "wrap". By default, it is set to "reflect".
cval (optional) − The value to be used when mode="constant". The default value is 0.0.
out (optional) − It specifies the output array where the filtered image will be stored. It must be of the same shape and data type as the input image.

Example

Following is a basic example of filtering an image using the median_filter() function −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

image = mh.imread("tree.tiff", as_grey=True)
structuring_element = mh.disk(12)
filtered_image = mh.median_filter(image, structuring_element)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the median filtered image
axes[1].imshow(filtered_image, cmap="gray")
axes[1].set_title("Median Filtered")
axes[1].axis("off")

mtplt.show()

Output

After executing the above code, we get the following output −

Median Filter with Reflect Mode

When we apply the median filter to an image, we need to consider the neighboring pixels around each pixel to calculate the median. However, at the edges of the image, there are pixels that don't have neighbors on one or more sides. To address this issue, we use the "reflect" mode.
Reflect mode creates a mirror-like effect along the edges of the image. It allows us to virtually extend the image by duplicating its pixels in a mirrored manner. This way, we can provide the median filter with neighboring pixels even at the edges.

By reflecting the image values, the median filter can treat these mirrored pixels as if they were real neighbors. It calculates the median value using these virtual neighbors, resulting in a more accurate smoothing process at the image edges.

Example

In here, we are applying the median filter with the reflect mode −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

image = mh.imread("nature.jpeg", as_grey=True)
structuring_element = mh.morph.dilate(mh.disk(12), Bc=mh.disk(12))
filtered_image = mh.median_filter(image, structuring_element, mode="reflect")

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the median filtered image
axes[1].imshow(filtered_image, cmap="gray")
axes[1].set_title("Median Filtered")
axes[1].axis("off")

mtplt.show()

Output

Output of the above code is as follows −

By Storing Result in an Output Array

We can also store the result of the median filter in an output array using Mahotas. To achieve this, we first create an empty array using the NumPy library, with the same shape and data type as the input image. Finally, we store the resulting filtered image in this array by passing it as the out parameter to the median_filter() function.

Example

Now, we are trying to apply the median filter to a grayscale image and store the result in a specific output array −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

image = mh.imread("pic.jpg", as_grey=True)

# Create an output array for the filtered image
output = np.empty(image.shape)

# store the result in the output array
mh.median_filter(image, Bc=mh.disk(12), out=output)

# Displaying the original image
fig, axes = mtplt.subplots(1, 2, figsize=(9, 4))
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].axis("off")

# Displaying the median filtered image
axes[1].imshow(output, cmap="gray")
axes[1].set_title("Median Filtered")
axes[1].axis("off")

mtplt.show()

Output

Following is the output of the above code −
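As a quick illustration of why the median filter is popular for noise reduction, the sketch below adds salt-and-pepper noise to a flat synthetic image (hypothetical data, not one of the tutorial's image files) and shows that the filtered result is far less spread out:

import mahotas as mh
import numpy as np

# Hypothetical flat gray image corrupted with salt-and-pepper noise
img = np.full((50, 50), 128, dtype=np.uint8)
rows = np.random.randint(0, 50, 100)
cols = np.random.randint(0, 50, 100)
img[rows[:50], cols[:50]] = 255   # salt
img[rows[50:], cols[50:]] = 0     # pepper

# A small disk neighborhood is enough to remove isolated outliers
denoised = mh.median_filter(img, Bc=mh.disk(3))

# The standard deviation drops sharply once the outliers are gone
print(img.std(), denoised.std())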


Mahotas – Haar Transform

Haar transform is a technique used to convert an image from pixel intensity values to wavelet coefficients. Wavelet coefficients are numerical values representing the contribution of different frequencies to an image.

In the Haar transform, an image is broken into a set of orthonormal basis functions called Haar wavelets. An orthonormal basis function is a mathematical function that satisfies two important properties: it is perpendicular (orthogonal) to the other basis functions, and it has a length (norm) of 1. The basis functions are generated from a single wavelet by scaling and shifting. Scaling refers to changing the duration of the wavelet function, while shifting involves moving the wavelet function along the x-axis.

Haar Transform in Mahotas

In Mahotas, we can perform a Haar transformation by using the mahotas.haar() function on an image. Following is the basic approach to performing a Haar transformation on an image −

Image Partitioning − The first step involves dividing the input image into non-overlapping blocks of equal size.
Averaging and Differencing − Next, the low and high frequency coefficients are computed within each block. The low frequency coefficient represents the smooth, global features of the image and is calculated as the average of pixel intensities. The high frequency coefficient represents the sharp, local features of the image and is calculated from the differences between neighboring pixels.
Subsampling − The resulting low and high frequency coefficients are then down-sampled (degraded) by discarding alternate values in each row and column.

Steps 2 and 3 are repeated until the entire image has been transformed.

The mahotas.haar() function

The mahotas.haar() function takes a grayscale image as input and returns the wavelet coefficients as an image of the same size. Within this coefficient image, one region holds the low-frequency coefficients and the remaining regions hold the high-frequency coefficients.

Syntax

Following is the basic syntax of the haar() function in mahotas −

mahotas.haar(f, preserve_energy=True, inline=False)

Where,

f − It is the input image.
preserve_energy (optional) − It specifies whether to preserve the energy of the output image (default is True).
inline (optional) − It specifies whether to return a new image or modify the input image (default is False).

Example

In the following example, we are applying a Haar transformation on an image using the mh.haar() function.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sea.bmp")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying Haar transformation
haar_transform = mh.haar(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the Haar transformed image
axes[1].imshow(haar_transform, cmap="gray")
axes[1].set_title("Haar Transformed Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Without Preserving Energy

We can also perform a Haar transformation on an image without preserving energy. The energy of the image refers to its brightness, and it can change when an image is transformed from one domain to another.
In mahotas, the preserve_energy parameter of the mh.haar() function determines whether to preserve the energy of the output image. If we don't want to preserve the energy, we can set this parameter to False; the brightness of the output image will then differ from the brightness of the input image. If this parameter is set to True, the output image and the input image will have the same brightness.

Example

In the example mentioned below, we are performing a Haar transformation on an image without preserving its energy.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying Haar transformation
haar_transform = mh.haar(image, preserve_energy=False)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image, cmap="gray")
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the Haar transformed image
axes[1].imshow(haar_transform, cmap="gray")
axes[1].set_title("Haar Transformed Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Inline Haar Transformation

We can also perform an inline Haar transformation on an input image. Inline refers to applying the transformation to the original image itself, without creating a new image. This saves memory when applying transformations.

In mahotas, an inline Haar transformation can be achieved by setting the inline parameter to the boolean value True in the mh.haar() function. This way, a new image need not be created to store the output.

Example

In here, we are performing an inline Haar transformation on an input image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sun.png")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Applying Haar transformation
mh.haar(image, preserve_energy=False, inline=True)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 1)

# Displaying the transformed image
axes.imshow(image, cmap="gray")
axes.set_title("Haar Transformed Image")
axes.set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −

Note − Since the input image is overwritten during the transformation, the output screen only contains a single image, as seen above.
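As a small sanity check of the transform itself, the sketch below applies mh.haar() to a hypothetical random image and, assuming the inverse transform mh.ihaar() is available in your Mahotas version, verifies that the round trip approximately recovers the original values:

import mahotas as mh
import numpy as np

# Hypothetical random grayscale image with even dimensions
img = np.random.rand(64, 64)

# Forward transform: same shape as the input, but wavelet coefficients
coeffs = mh.haar(img)
print(coeffs.shape)

# Inverse transform (assumed available as mh.ihaar); the round trip
# should give back approximately the original image
restored = mh.ihaar(coeffs)
print(np.allclose(img, restored))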


Mahotas – Speeded-Up Robust Features

Speeded-Up Robust Features (SURF) is an algorithm used to detect distinctive features (keypoints) in images. SURF identifies keypoints by analyzing the intensity changes in an image at multiple scales. It assigns orientations to these points and generates descriptors that capture their unique characteristics. The descriptors are computed within local regions around the keypoints and can then be used for various applications.

SURF in Mahotas is exposed through two related techniques: SURF dense and SURF integral. Both techniques are discussed in detail in the upcoming chapters.

SURF Surf

SURF surf is a technique that combines the detection and description of keypoints in an image. It generates descriptors that encode the properties of these keypoints. The function takes an image as input and returns a set of SURF descriptors.

Syntax

Following is the basic syntax of the surf.surf() function in mahotas −

mahotas.features.surf.surf(f, nr_octaves=4, nr_scales=6, initial_step_size=1, threshold=0.1, max_points=1024, descriptor_only=False)

Where,

f − It is the input image.
nr_octaves (optional) − It defines the number of octaves to be used in the SURF algorithm. An octave represents the image at a different level of resolution (default is 4).
nr_scales (optional) − It determines the number of scales per octave. The scales are used to detect features at different levels of detail (default is 6).
initial_step_size (optional) − It determines the initial step size between consecutive scales. A smaller step size allows detection of more detailed features (default is 1).
threshold (optional) − It is the threshold value used to filter out weak SURF features (default is 0.1).
max_points (optional) − It defines the maximum number of SURF points that will be returned (default is 1024).
descriptor_only (optional) − It is a flag that determines whether to return only the descriptors. When set to True, only the descriptors of the detected features will be returned. If set to False, both the keypoints and descriptors will be returned (default is False).

We can see the surf image below −

SURF Dense

SURF Dense is a technique used by the SURF algorithm in which keypoints are densely sampled across the image. In other words, instead of searching for specific interesting points, SURF Dense calculates the descriptors for a grid of pixels in the image. This helps to capture information about the entire image. In the following image, we can see the SURF dense image −

SURF Integral

The SURF integral technique improves the SURF algorithm's computational efficiency by utilizing integral images. Integral images pre-calculate the cumulative sum of pixel intensities up to specific areas of an image. This pre-calculation eliminates redundant computations, enabling faster and more efficient feature detection and description. As a result, the SURF algorithm becomes well suited for real-time applications and large-scale datasets.
The following is the image for SURF integral −

Example

In the following example, we are performing the different SURF functions discussed above on an image −

import mahotas as mh
from mahotas.features import surf
import numpy as np
import matplotlib.pyplot as mtplt

image = mh.imread("tree.tiff", as_grey=True)

# SURF dense
surf_dense = surf.dense(image, 100)
mtplt.imshow(surf_dense)
mtplt.title("SURF Dense Image")
mtplt.axis("off")
mtplt.show()

# SURF integral
surf_integral = surf.integral(image)
mtplt.imshow(surf_integral)
mtplt.title("SURF Integral Image")
mtplt.axis("off")
mtplt.show()

# SURF surf
surf_surf = surf.surf(image)
mtplt.imshow(surf_surf)
mtplt.title("SURF Surf Image")
mtplt.axis("off")
mtplt.show()

Output

The output obtained is as shown below −

SURF Dense Image:
SURF Integral Image:
SURF Surf Image:

We will discuss the SURF Dense and the SURF Integral techniques in detail in the further chapters.
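To get a feel for what surf.surf() actually returns, the sketch below runs it on a hypothetical random image and inspects the shape of the result. Each row describes one detected keypoint (its location and scale information followed by a descriptor, typically 64 values), and descriptor_only=True keeps only the descriptor part:

import numpy as np
from mahotas.features import surf

# Hypothetical random grayscale image
img = np.random.rand(256, 256)

# Full result: one row per detected keypoint, location/scale values
# followed by the descriptor
points = surf.surf(img)
print(points.shape)

# Only the descriptors, without the keypoint metadata
descriptors = surf.surf(img, descriptor_only=True)
print(descriptors.shape)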


Mahotas – Mean Value of Image

The mean value of an image refers to the average brightness of all the pixels of an image. Brightness is a property of an image that determines how light or dark the image appears to the human eye. It is determined by the pixel intensity values: higher pixel intensity values represent brighter areas, while lower pixel intensity values represent darker areas.

The mean value of an image is widely used in image segmentation, which involves dividing an image into distinct regions. It can also be used in image thresholding, which refers to converting an image into a binary image consisting of foreground and background pixels.

Mean Value of Image in Mahotas

Mahotas does not have a built-in function to find the mean value of an image. However, we can find it by using the mahotas and numpy libraries together. The mean() function in the numpy library computes the mean pixel intensity value of an image: it sums the intensity value of every pixel and then divides the sum by the total number of pixels.

The mean pixel intensity value of an image can be calculated using the following formula −

Mean = Sum of all pixel values / Total number of pixels

For example, let's assume an image is composed of 2 pixels, each with an intensity value of 5. Then the mean is calculated as follows −

Mean = 10 / 2
Mean = 5

The numpy.mean() function

The numpy.mean() function takes an image as input and returns the average brightness of all its pixels as a decimal number. The mean function works on any type of input image, such as RGB, grayscale or labeled.

Syntax

Following is the basic syntax of the mean() function in numpy −

numpy.mean(image)

Where,

image − It is the input image.

Example

In the following example, we are finding the average pixel intensity value of an image using the np.mean() function.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sun.png")

# Finding the mean value
mean_value = np.mean(image)

# Printing the mean value
print("Mean value of the image is =", mean_value)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 1)

# Displaying the original image
axes.imshow(image)
axes.set_title("Original Image")
axes.set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Mean value of the image is = 105.32921300415184

The image obtained is as shown below −

Mean Value of each Channel

We can also find the mean value of each channel of an RGB image in Mahotas. RGB images refer to images having three color channels: Red, Green, and Blue. Each pixel in an RGB image has three intensity values, one for each color channel. The channel index of red is 0, green is 1 and blue is 2. These indices can be used to separate an RGB image into its individual color components.

In mahotas, to find the mean pixel intensity value of each channel of an RGB image, we first separate the RGB image into its channels by specifying the channel index. Once the channels are separated, we can find their mean values individually.
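The channel indexing described above can be illustrated on a tiny hand-built RGB array (hypothetical values, chosen so the means are easy to verify):

import numpy as np

# Hypothetical 2x2 RGB image
img = np.array([[[10, 20, 30], [50, 60, 70]],
                [[90, 100, 110], [130, 140, 150]]], dtype=np.uint8)

# Overall mean over all pixels and channels
print(np.mean(img))            # 80.0

# Per-channel means, using the channel indices described above
print(np.mean(img[:, :, 0]))   # red   -> 70.0
print(np.mean(img[:, :, 1]))   # green -> 80.0
print(np.mean(img[:, :, 2]))   # blue  -> 90.0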
Example

In the example mentioned below, we are finding the mean pixel intensity value of each channel of an RGB image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Getting the red channel
red_channel = image[:, :, 0]

# Getting the green channel
green_channel = image[:, :, 1]

# Getting the blue channel
blue_channel = image[:, :, 2]

# Finding the mean value of each channel
mean_red = np.mean(red_channel)
mean_green = np.mean(green_channel)
mean_blue = np.mean(blue_channel)

# Printing the mean value of each channel
print("Mean value of the Red channel is =", mean_red)
print("Mean value of the Green channel is =", mean_green)
print("Mean value of the Blue channel is =", mean_blue)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 1)

# Displaying the original image
axes.imshow(image)
axes.set_title("Original Image")
axes.set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

Mean value of the Red channel is = 135.4501688464837
Mean value of the Green channel is = 139.46532482847343
Mean value of the Blue channel is = 109.7802007397084

The image produced is as follows −

Mean Value of Grayscale Image

We can find the mean value of a grayscale image as well. Grayscale images are images having only a single color channel, where each pixel is represented by a single intensity value. The intensity value of a grayscale image ranges from 0 (black) to 255 (white); any value in between produces a shade of gray, with lower values producing darker shades and higher values producing lighter shades.

In mahotas, we first convert the input RGB image to grayscale using the mh.colors.rgb2gray() function. Then, we find its mean pixel intensity value using the mean() function.

Example

In this example, we are finding the mean pixel intensity value of a grayscale image.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
grayscale_image = mh.colors.rgb2gray(image)

# Finding the mean value of the grayscale image
mean_value = np.mean(grayscale_image)

# Printing the mean value of the image
print("Mean value of the grayscale image is =", mean_value)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 1)

# Displaying the grayscale image
axes.imshow(grayscale_image, cmap="gray")
axes.set_title("Grayscale Image")
axes.set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −

Mean value of the grayscale


Mahotas – Local Binary Patterns

Local Binary Patterns (LBP) is a method that generates a binary pattern by comparing the intensity value of a central pixel with its neighbors. Each pixel in the neighborhood is assigned a value of 1 if it is greater than or equal to the center pixel's intensity, and 0 otherwise.

The binary patterns are used to compute statistical measures or histogram representations that capture the texture information in the image. The resulting descriptors can be used in various applications, such as texture classification, object recognition, and image retrieval.

Local Binary Patterns uses a technique known as Linear Binary Patterns. The Linear Binary Pattern considers a linear (straight) neighborhood for creating a binary pattern. Let us briefly discuss linear binary patterns below.

Linear Binary Patterns

Linear Binary Patterns are used to describe the texture of an image. They work by comparing the intensity values of pixels in a neighborhood around a central pixel and encoding the result as a binary number. In simpler terms, LBP looks at the pattern formed by the pixel values around a particular pixel and represents that pattern with a series of 0s and 1s.

Here, we look at the linear binary patterns of an image −

Example

In the example mentioned below, we compute the linear binary patterns of an image and plot their histogram.

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

image = mh.imread("nature.jpeg", as_grey=True)

# Linear Binary Patterns
lbp = mh.features.lbp(image, 5, 5)

mtplt.hist(lbp)
mtplt.title("Linear Binary Patterns")
mtplt.xlabel("LBP Value")
mtplt.ylabel("Frequency")
mtplt.show()

Output

After executing the above code, we obtain the following output −

We will discuss the linear binary patterns in detail in the further chapter.
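The bit-by-bit comparison described above can also be reproduced by hand for a single pixel. The following sketch uses a hypothetical 3x3 neighborhood and shows how the eight comparisons become one binary code:

import numpy as np

# Hypothetical 3x3 neighborhood; 5 is the central pixel
patch = np.array([[6, 2, 7],
                  [1, 5, 9],
                  [3, 5, 4]])

center = patch[1, 1]
# Neighbors read clockwise from the top-left corner
neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]

# 1 if the neighbor is greater than or equal to the center, else 0
bits = [1 if n >= center else 0 for n in neighbors]
print(bits)                           # [1, 0, 1, 1, 0, 1, 0, 0]

# Interpreting the bits as a binary number gives the LBP code
value = int("".join(map(str, bits)), 2)
print(value)                          # 180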


Mahotas – XYZ to RGB Conversion

We learnt about the XYZ color space, the RGB color space, and RGB to XYZ conversion in our previous tutorial. Now let us discuss the conversion of the XYZ color space to the RGB color space.

When we convert from XYZ to RGB, we take the XYZ values of a color, which represent its perceptual properties, and transform them into red, green, and blue values. This conversion allows us to represent the color in a format that is suitable for display on a particular device or screen.

XYZ to RGB Conversion in Mahotas

In Mahotas, we can convert an XYZ image to an RGB image using the colors.xyz2rgb() function. The XYZ to RGB conversion involves the following steps −

Normalize the XYZ values − Normalize the X, Y, and Z values so that they range between 0 and 1. This step ensures that the XYZ values are relative to a reference white point and allows for consistent color calculations.
Convert normalized XYZ to linear RGB − Next, use a conversion matrix to convert the normalized XYZ values to linear RGB values. The conversion matrix specifies how the XYZ coordinates contribute to the red, green, and blue components of the resulting color. The matrix multiplication is performed to obtain the linear RGB values.
Apply gamma correction − Gamma correction adjusts the brightness of the RGB values to match the response of the human visual system.
Scale the RGB values − After gamma correction, the RGB values are typically in the range of 0 to 1. To represent the colors in the 8-bit range (0-255), multiply each of the gamma-corrected RGB values by 255.
Result − Once the scaling is applied, you have obtained the RGB color values. These values represent the intensities of the red, green, and blue channels of the resulting color.

Using the mahotas.colors.xyz2rgb() Function

The mahotas.colors.xyz2rgb() function takes an XYZ image as input and returns the RGB color space version of the image. The resulting RGB image retains the structure and content of the original XYZ image, although some color detail is lost.

Syntax

Following is the basic syntax of the xyz2rgb() function in mahotas −

mahotas.colors.xyz2rgb(xyz, dtype={float})

Where,

xyz − It is the input image in XYZ color space.
dtype (optional) − It is the data type of the returned image (default is float).

Example

In the following example, we are converting an XYZ image to an RGB image using the mh.colors.xyz2rgb() function −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("nature.jpeg")

# Converting it to XYZ
xyz_image = mh.colors.rgb2xyz(image)

# Converting back to RGB (lossy)
rgb_image = mh.colors.xyz2rgb(xyz_image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original XYZ image
axes[0].imshow(xyz_image)
axes[0].set_title("XYZ Image")
axes[0].set_axis_off()

# Displaying the RGB image
axes[1].imshow(rgb_image)
axes[1].set_title("RGB Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

Using Transformation Matrix

We can also use a transformation matrix to convert an XYZ image to an RGB image. The transformation matrix holds a set of values that convert an XYZ pixel to an RGB pixel. The XYZ pixels are converted to RGB pixels by matrix multiplication between the transformation matrix and the XYZ image.
We achieve this by using the dot() function in the numpy library. The resulting values are then scaled from the XYZ intensity range to the 0 to 255 range of RGB colors by multiplying by 255 and dividing by the maximum value, which produces the RGB image.

Example

The following example shows conversion of an XYZ image to an RGB image using a transformation matrix −

import mahotas as mh
import numpy as np
import matplotlib.pyplot as mtplt

# Function to convert XYZ to RGB
def xyz_to_rgb(xyz_image):
    # XYZ to RGB conversion matrix
    xyz_to_rgb_matrix = np.array([[3.2406, -1.5372, -0.4986],
                                  [-0.9689, 1.8758, 0.0415],
                                  [0.0557, -0.2040, 1.0570]])

    # Perform the XYZ to RGB conversion using matrix multiplication
    rgb_image = np.dot(xyz_image, xyz_to_rgb_matrix.T)

    # Scale the RGB values from the range [0, 1] to [0, 255]
    rgb_image = (rgb_image * 255.0 / np.max(rgb_image)).astype(np.uint8)

    return rgb_image

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to XYZ
xyz_image = mh.colors.rgb2xyz(image)

# Converting back to RGB (lossy)
rgb_image = xyz_to_rgb(xyz_image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original XYZ image
axes[0].imshow(xyz_image)
axes[0].set_title("XYZ Image")
axes[0].set_axis_off()

# Displaying the RGB image
axes[1].imshow(rgb_image)
axes[1].set_title("RGB Image")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −
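A quick way to exercise both directions of the conversion is a round trip on a small synthetic image. The sketch below uses hypothetical random values; since the conversion is lossy, the recovered image is only expected to be close to, not identical to, the original:

import mahotas as mh
import numpy as np

# Hypothetical random RGB image
rgb = np.random.randint(0, 256, (32, 32, 3)).astype(np.uint8)

# RGB -> XYZ -> RGB round trip
xyz = mh.colors.rgb2xyz(rgb)
back = mh.colors.xyz2rgb(xyz)

# Inspect the value ranges before and after the round trip
print(rgb.min(), rgb.max())
print(back.min(), back.max())
print(back.dtype, back.shape)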


Mahotas – SURF Integral

SURF, which stands for Speeded-Up Robust Features, is an algorithm used to detect features. The SURF integral is a key concept within this algorithm.

To understand the SURF integral, let's start with the idea of an image. An image is composed of pixels, tiny dots that store the intensity of the image at each location. Now, imagine dividing the image into small local neighborhoods. The SURF integral is a way to efficiently calculate the total pixel value of each local neighborhood.

SURF Integral in Mahotas

In Mahotas, we can use the mahotas.features.surf.integral() function to compute the SURF integral of an image. Following is the basic approach of how the function works −

Initialization − First, the function initializes the integral image by setting all the pixel values to zero. Integral images are images that store the sum of all pixels up to a certain point.
Recursive Sum Calculation − The function then calculates the sum of pixels for each point in the integral image. It does this recursively, computing the sum at each point from the previously computed sums.

Because integral images store the sum of all pixels up to a specific point, they can significantly speed up the computation of SURF descriptors. Since the function uses recursion, however, it can be slow when computing the sums of large images.

The mahotas.features.surf.integral() function

The mahotas.features.surf.integral() function takes a grayscale image as input and returns an integral image as output. The returned result is a new image, typically in the form of a NumPy array, where each pixel value corresponds to the sum of pixel intensities up to that pixel location.

Syntax

Following is the basic syntax of the surf.integral() function in mahotas −

mahotas.features.surf.integral(f, in_place=False, dtype=<class "numpy.float64">)

Where,

f − It is the input image.
in_place (optional) − It is a flag which determines whether to overwrite the input image (default is False).
dtype (optional) − It specifies the data type of the output image (default is float64).

Example

In the following example, we are calculating the SURF integral of an image using the mh.features.surf.integral() function.

import mahotas as mh
from mahotas.features import surf
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("sea.bmp")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Getting the SURF integral
surf_integral = surf.integral(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the surf integral
axes[1].imshow(surf_integral)
axes[1].set_title("SURF Integral")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Following is the output of the above code −

SURF Integral of a Random Image

We can also compute the SURF integral of a randomly generated two-dimensional image. A two-dimensional random image is an image where each pixel is assigned a random intensity value, ranging from 0 (black) to 255 (white).

In mahotas, to create a 2-D random image we first specify its dimensions. Then, we pass these dimensions along with the intensity range of the pixels to the np.random.randint() function. After that, we can compute the SURF integral of the image using the surf.integral() function.
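Before the full example, the defining property of an integral image can be checked directly on a small random array (a hypothetical 8x8 image): every entry equals the sum of all pixels above and to the left of it, so the bottom-right entry equals the sum of the whole image.

import numpy as np
from mahotas.features import surf

# Hypothetical small random image
img = np.random.rand(8, 8)

ii = surf.integral(img)

# Bottom-right entry = sum of every pixel in the image
print(np.isclose(ii[-1, -1], img.sum()))

# Any entry = sum of the rectangle ending at that pixel
print(np.isclose(ii[3, 3], img[:4, :4].sum()))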
Example

In the example mentioned below, we are computing the SURF integral of a randomly generated 2-D image.

import mahotas as mh
from mahotas.features import surf
import numpy as np
import matplotlib.pyplot as mtplt

# Specifying dimensions of image
l, w = 1000, 1000

# Creating a random 2-D image
image = np.random.randint(0, 256, (l, w))

# Getting the SURF integral
surf_integral = surf.integral(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the surf integral
axes[1].imshow(surf_integral)
axes[1].set_title("SURF Integral")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

Output of the above code is as follows −

SURF Integral of a Threshold Image

In addition to random 2-D images, we can also compute the SURF integral of a threshold image. A threshold image is a binary image where the pixels are classified into the foreground or the background. The foreground pixels are white and represented by the value 1, while the background pixels are black and represented by the value 0.

In mahotas, we first threshold the input image using any thresholding algorithm; let us assume Bernsen thresholding. This can be done by using the mh.thresholding.bernsen() function on a grayscale image. Then, we can compute the SURF integral of the threshold image using the surf.integral() function.

Example

In here, we are calculating the SURF integral of a threshold image.

import mahotas as mh
from mahotas.features import surf
import numpy as np
import matplotlib.pyplot as mtplt

# Loading the image
image = mh.imread("tree.tiff")

# Converting it to grayscale
image = mh.colors.rgb2gray(image)

# Thresholding the image
image = mh.thresholding.bernsen(image, 5, 5)

# Getting the SURF integral
surf_integral = surf.integral(image)

# Creating a figure and axes for subplots
fig, axes = mtplt.subplots(1, 2)

# Displaying the original image
axes[0].imshow(image)
axes[0].set_title("Original Image")
axes[0].set_axis_off()

# Displaying the surf integral
axes[1].imshow(surf_integral)
axes[1].set_title("SURF Integral")
axes[1].set_axis_off()

# Adjusting spacing between subplots
mtplt.tight_layout()

# Showing the figures
mtplt.show()

Output

After executing the above code, we get the following output −