Randomized Quick Sort Algorithm

Quicksort is a popular sorting algorithm that chooses a pivot element and partitions the input list around that pivot. Randomized quick sort is designed to decrease the chance of the algorithm running in its worst-case time complexity of O(n²). The worst case of quick sort arises when the input is an already sorted list, leading to n(n − 1)/2 comparisons.

There are two ways to randomize the quicksort −

Randomly shuffling the inputs: Randomization is done on the input list so that a sorted input is jumbled again, which reduces the time complexity. However, this is not usually performed in randomized quick sort.

Randomly choosing the pivot element: Making the pivot element a random variable is the commonly used method in randomized quick sort. Here, even if the input is sorted, the pivot is chosen randomly, so the worst-case time complexity is avoided.

Randomized Quick Sort Algorithm

The algorithm exactly follows the standard quicksort algorithm, except that it randomizes the pivot selection.

Pseudocode

partition-left(arr[], low, high)
   pivot = arr[high]
   i = low   // place for swapping
   for j := low to high - 1 do
      if arr[j] <= pivot then
         swap arr[i] with arr[j]
         i = i + 1
   swap arr[i] with arr[high]
   return i

partition-right(arr[], low, high)
   r = Random Number from low to high
   Swap arr[r] and arr[high]
   return partition-left(arr, low, high)

quicksort(arr[], low, high)
   if low < high
      p = partition-right(arr, low, high)
      quicksort(arr, low, p - 1)
      quicksort(arr, p + 1, high)

Example

Let us look at an example to understand how randomized quicksort avoids the worst-case time complexity. Since randomized algorithms are designed to decrease the occurrence of the worst case, let us take a sorted list as the input for this example. The sorted input list is 3, 5, 7, 8, 12, 15.
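The pseudocode above can be sketched in Python as follows. This is a minimal illustration whose names simply mirror the pseudocode, not a production implementation:

```python
import random

def partition_left(arr, low, high):
    # classic Lomuto partition: arr[high] is the pivot
    pivot = arr[high]
    i = low  # place for swapping
    for j in range(low, high):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[high] = arr[high], arr[i]
    return i

def partition_right(arr, low, high):
    # move a randomly chosen element into the pivot position
    r = random.randint(low, high)
    arr[r], arr[high] = arr[high], arr[r]
    return partition_left(arr, low, high)

def quicksort(arr, low, high):
    if low < high:
        p = partition_right(arr, low, high)
        quicksort(arr, low, p - 1)
        quicksort(arr, p + 1, high)

data = [3, 5, 7, 8, 12, 15]   # the already-sorted input from the example
quicksort(data, 0, len(data) - 1)
print(data)
```

Even on an already sorted input, the random pivot keeps the expected recursion depth around O(log n), which is exactly the point of the randomization.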
We need to apply the quick sort algorithm to sort the list.

Step 1

Considering the worst case possible, if the random pivot chosen is also the highest number in the list, it is compared with all the other numbers and another pivot is selected. Since 15 is greater than all the other numbers in the list, it won't be swapped, and another pivot is chosen.

Step 2

This time, if the random pivot function chooses 7 as the pivot number −

Now the pivot divides the list into halves, so standard quick sort is carried out as usual. However, the time taken is lower than in the worst case. It is to be noted that the worst-case time complexity of quick sort always remains O(n²), but with randomization we decrease the occurrence of that worst case.

Implementation

Following are the implementations of the above approach in various programming languages −

C C++ Java Python

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Function to swap two elements
void swap(int* a, int* b) {
   int t = *a;
   *a = *b;
   *b = t;
}

// Function to partition the array
int partition_left(int arr[], int low, int high) {
   int pivot = arr[high];
   int i = low;
   for (int j = low; j < high; j++) {
      if (arr[j] <= pivot) {
         swap(&arr[i], &arr[j]);
         i++;
      }
   }
   swap(&arr[i], &arr[high]);
   return i;
}

// Function to perform random partition
int partition_right(int arr[], int low, int high) {
   int r = low + rand() % (high - low + 1);
   swap(&arr[r], &arr[high]);
   return partition_left(arr, low, high);
}

// Recursive function for quicksort
void quicksort(int arr[], int low, int high) {
   if (low < high) {
      int p = partition_right(arr, low, high);
      quicksort(arr, low, p - 1);
      quicksort(arr, p + 1, high);
   }
}

// Function to print the array
void printArray(int arr[], int size) {
   for (int i = 0; i < size; i++)
      printf("%d ", arr[i]);
   printf("\n");
}

// Driver code
int main() {
   srand(time(NULL));   // seed the generator once, not on every partition
   int arr[] = { 6, 4, 12, 8, 15, 16 };
   int n = sizeof(arr) / sizeof(arr[0]);
   printf("Original array: ");
   printArray(arr, n);
   quicksort(arr, 0, n - 1);
   printf("Sorted array: ");
   printArray(arr, n);
   return 0;
}

Output

Original array: 6 4 12 8 15 16
Sorted array: 4 6 8 12 15 16

#include <iostream>
#include <cstdlib>
#include <ctime>

// Function to swap two elements
void swap(int arr[], int i, int j) {
   int temp = arr[i];
   arr[i] = arr[j];
   arr[j] = temp;
}

// Function to partition the array
int partitionLeft(int arr[], int low, int high) {
   int pivot = arr[high];
   int i = low;
   for (int j = low; j < high; j++) {
      if (arr[j] <= pivot) {
         swap(arr, i, j);
         i++;
      }
   }
   swap(arr, i, high);
   return i;
}

// Function to perform random partition
int partitionRight(int arr[], int low, int high) {
   int r = low + rand() % (high - low + 1);
   swap(arr, r, high);
   return partitionLeft(arr, low, high);
}

// Recursive function for quicksort
void quicksort(int arr[], int low, int high) {
   if (low < high) {
      int p = partitionRight(arr, low, high);
      quicksort(arr, low, p - 1);
      quicksort(arr, p + 1, high);
   }
}

// Function to print the array
void printArray(int arr[], int size) {
   for (int i = 0; i < size; i++)
      std::cout << arr[i] << " ";
   std::cout << std::endl;
}

// Driver code
int main() {
   srand(time(NULL));   // seed the generator once
   int arr[] = {6, 4, 12, 8, 15, 16};
   int n = sizeof(arr) / sizeof(arr[0]);
   std::cout << "Original array: ";
   printArray(arr, n);
   quicksort(arr, 0, n - 1);
   std::cout << "Sorted array: ";
   printArray(arr, n);
   return 0;
}

Output

Original array: 6 4 12 8 15 16
Sorted array: 4 6 8 12 15 16
Selection Sort Algorithm

Selection sort is a simple sorting algorithm. This sorting algorithm, like insertion sort, is an in-place comparison-based algorithm in which the list is divided into two parts, the sorted part at the left end and the unsorted part at the right end. Initially, the sorted part is empty and the unsorted part is the entire list. The smallest element is selected from the unsorted part and swapped with its leftmost element, and that element becomes part of the sorted part. This process continues, moving the boundary of the unsorted part one element to the right. This algorithm is not suitable for large data sets as its average and worst case complexities are O(n²), where n is the number of items.

Selection Sort Algorithm

This type of sorting is called Selection Sort as it works by repeatedly selecting the smallest remaining element. That is: we first find the smallest value in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and we continue the process in this way until the entire array is sorted.

1. Set MIN to location 0.
2. Search for the minimum element in the list.
3. Swap it with the value at location MIN.
4. Increment MIN to point to the next element.
5. Repeat until the list is sorted.

Pseudocode

Algorithm: Selection-Sort (A)
for i ← 1 to n-1 do
   min j ← i
   min x ← A[i]
   for j ← i + 1 to n do
      if A[j] < min x then
         min j ← j
         min x ← A[j]
   A[min j] ← A[i]
   A[i] ← min x

Analysis

Selection sort is among the simplest of sorting techniques and it works very well for small files. It has a quite important application as each item is actually moved at most once. Selection sort is a method of choice for sorting files with very large objects (records) and small keys. The worst case occurs when the array is already sorted in descending order and we want to sort it in ascending order.
Nonetheless, the time required by the selection sort algorithm is not very sensitive to the original order of the array to be sorted: the test A[j] < min x is executed exactly the same number of times in every case. Selection sort spends most of its time trying to find the minimum element in the unsorted part of the array. This shows the similarity between selection sort and bubble sort: bubble sort selects the maximum remaining element at each stage, but wastes some effort imparting some order to the unsorted part of the array.

Selection sort is quadratic in both the worst and the average case, and requires no extra memory. For each i from 1 to n − 1, there is one exchange and n − i comparisons, so there is a total of n − 1 exchanges and (n − 1) + (n − 2) + … + 2 + 1 = n(n − 1)/2 comparisons. These observations hold no matter what the input data is. The number of times the running minimum is updated could be quadratic in the worst case, but on average it is O(n log n). This implies that the running time of selection sort is quite insensitive to the input.

Example

Consider the following depicted array as an example.

For the first position in the sorted list, the whole list is scanned sequentially. At the first position, where 14 is stored presently, we search the whole list and find that 10 is the lowest value. So we swap 14 with 10. After one iteration 10, which happens to be the minimum value in the list, appears in the first position of the sorted list.

For the second position, where 33 is residing, we start scanning the rest of the list in a linear manner. We find that 14 is the second lowest value in the list and it should appear at the second place. We swap these values. After two iterations, the two least values are positioned at the beginning in a sorted manner.

The same process is applied to the rest of the items in the array −

Implementation

The selection sort algorithm is implemented in four different programming languages below.
The given program selects the minimum number of the array and swaps it with the element in the first index. The second minimum number is swapped with the element present in the second index. The process goes on until the end of the array is reached.

C C++ Java Python

#include <stdio.h>

void selectionSort(int array[], int size) {
   int i, j, imin;
   for (i = 0; i < size-1; i++) {
      imin = i;   // get index of minimum data
      for (j = i+1; j < size; j++)
         if (array[j] < array[imin])
            imin = j;
      // placing in correct position
      int temp = array[i];
      array[i] = array[imin];
      array[imin] = temp;
   }
}

int main() {
   int n = 5;
   int arr[5] = {12, 19, 55, 2, 16};   // initialize the array
   printf("Array before Sorting: ");
   for (int i = 0; i < n; i++)
      printf("%d ", arr[i]);
   printf("\n");
   selectionSort(arr, n);
   printf("Array after Sorting: ");
   for (int i = 0; i < n; i++)
      printf("%d ", arr[i]);
   printf("\n");
}

Output

Array before Sorting: 12 19 55 2 16
Array after Sorting: 2 12 16 19 55

#include <iostream>
using namespace std;
void swapping(int &a, int &b) {
   // swap the content of a and b
   int temp;
   temp = a;
   a = b;
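For reference, the same selection procedure can be condensed into a few lines of Python. This is a sketch mirroring the C listing above, not the tutorial's own Python listing:

```python
def selection_sort(arr):
    # repeatedly select the minimum of the unsorted part
    # and swap it to the front of that part
    for i in range(len(arr) - 1):
        imin = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[imin]:
                imin = j
        arr[i], arr[imin] = arr[imin], arr[i]
    return arr

print(selection_sort([12, 19, 55, 2, 16]))   # [2, 12, 16, 19, 55]
```

Note that the swap happens once per position, which is the "each item is moved at most once" property mentioned in the analysis.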
Longest Common Subsequence Algorithm

The longest common subsequence problem is finding the longest sequence which exists in both the given strings. But before we understand the problem, let us understand what the term subsequence is −

Let us consider a sequence S = <s1, s2, s3, s4, …, sn>. A sequence Z = <z1, z2, z3, …, zm> over S is called a subsequence of S, if and only if it can be derived from S by deleting some elements. In other words, a subsequence keeps the relative order of the remaining elements; unlike a substring, its elements need not be consecutive in the original sequence.

Common Subsequence

Suppose X and Y are two sequences over a finite set of elements. We can say that Z is a common subsequence of X and Y, if Z is a subsequence of both X and Y.

Longest Common Subsequence

If a set of sequences is given, the longest common subsequence problem is to find a common subsequence of all the sequences that is of maximal length.

Naive Method

Let X be a sequence of length m and Y a sequence of length n. Check every subsequence of X for whether it is a subsequence of Y, and return the longest common subsequence found. There are 2^m subsequences of X. Testing whether a sequence is a subsequence of Y takes O(n) time. Thus, the naive algorithm would take O(n·2^m) time.

Longest Common Subsequence Algorithm

Let X = <x1, x2, x3, …, xm> and Y = <y1, y2, y3, …, yn> be the sequences. To compute the length of the longest common subsequence, the following algorithm is used.

Step 1 − Construct a table of size (m + 1) × (n + 1), where m = size of sequence X and n = size of sequence Y. The rows in the table represent the elements in sequence X and the columns represent the elements in sequence Y.

Step 2 − The zeroth row and column must be filled with zeroes. The remaining values are filled in based on different cases, by maintaining a counter value.
Case 1 − If the counter encounters a common element in both X and Y sequences, increment the counter by 1.

Case 2 − If the counter does not encounter common elements in X and Y sequences at T[i, j], find the maximum value between T[i-1, j] and T[i, j-1] and fill it in T[i, j].

Step 3 − Once the table is filled, backtrack from the last value in the table. Backtracking here is done by tracing the path where the counter was incremented first.

Step 4 − The longest common subsequence is obtained by noting the elements in the traced path.

Pseudocode

In this procedure, table C[m, n] is computed in row-major order and another table B[m, n] is computed to construct the optimal solution.

Algorithm: LCS-Length-Table-Formulation (X, Y)
m := length(X)
n := length(Y)
for i = 1 to m do
   C[i, 0] := 0
for j = 1 to n do
   C[0, j] := 0
for i = 1 to m do
   for j = 1 to n do
      if xi = yj
         C[i, j] := C[i - 1, j - 1] + 1
         B[i, j] := 'D'
      else if C[i - 1, j] ≥ C[i, j - 1]
         C[i, j] := C[i - 1, j]
         B[i, j] := 'U'
      else
         C[i, j] := C[i, j - 1]
         B[i, j] := 'L'
return C and B

Algorithm: Print-LCS (B, X, i, j)
if i = 0 or j = 0
   return
if B[i, j] = 'D'
   Print-LCS(B, X, i-1, j-1)
   Print(xi)
else if B[i, j] = 'U'
   Print-LCS(B, X, i-1, j)
else
   Print-LCS(B, X, i, j-1)

This algorithm will print the longest common subsequence of X and Y.

Analysis

To populate the table, the outer for loop iterates m times and the inner for loop iterates n times. Hence, the complexity of the algorithm is O(mn), where m and n are the lengths of the two strings.

Example

In this example, we have two strings, X = BDCB and Y = BACDB, for which to find the longest common subsequence.

Following the algorithm, we need to calculate two tables. Given m = length of X and n = length of Y: X = BDCB, Y = BACDB.

Constructing the LCS Tables

In the table below, the zeroth row and column are filled with zeroes. The remaining values are filled by incrementing and choosing the maximum values according to the algorithm.
Once the values are filled, the path is traced back from the last value in the table, at T[4, 5]. From the traced path, the longest common subsequence is found by choosing the values where the counter was first incremented. In this example, the final count is 3, so the counter is incremented at 3 places, i.e., B, C, B. Therefore, the longest common subsequence of sequences X and Y is BCB.

Implementation

Following is the final implementation to find the Longest Common Subsequence using the Dynamic Programming Approach −

C C++ Java Python

#include <stdio.h>
#include <string.h>

int max(int a, int b);

int lcs(char* X, char* Y, int m, int n) {
   int L[m + 1][n + 1];
   int i, j, index;
   for (i = 0; i <= m; i++) {
      for (j = 0; j <= n; j++) {
         if (i == 0 || j == 0)
            L[i][j] = 0;
         else if (X[i - 1] == Y[j - 1]) {
            L[i][j] = L[i - 1][j - 1] + 1;
         } else
            L[i][j] = max(L[i - 1][j], L[i][j - 1]);
      }
   }
   index = L[m][n];
   char LCS[index + 1];
   LCS[index] = '\0';
   i = m, j = n;
   while (i >
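The same table-filling and backtracking procedure can be sketched compactly in Python. This is an illustration, with the tie-break in the backtrack chosen so that it reproduces the BCB answer from the example above (BDB is an equally long common subsequence, so a different tie-break can legitimately return it instead):

```python
def lcs(X, Y):
    m, n = len(X), len(Y)
    # C[i][j] holds the LCS length of X[:i] and Y[:j]
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    # backtrack from C[m][n] to recover one longest common subsequence
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif C[i - 1][j] > C[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

print(lcs("BDCB", "BACDB"))   # BCB
```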
Vertex Cover Algorithm

Have you ever wondered about the placement of traffic cameras, and how they are placed efficiently without wasting too much of the government's budget? The answer to that comes in the form of the vertex cover algorithm. The positions of the cameras are chosen in such a way that one camera covers as many roads as possible, i.e., we choose junctions and make sure the camera covers as much area as possible.

A vertex cover of an undirected graph G = (V, E) is a subset of vertices V′ ⊆ V such that for every edge (u, v) in the graph, either u ∈ V′ or v ∈ V′. A junction is treated as a node of the graph and the roads as its edges. The algorithm finds a minimum set of junctions that covers the maximum number of roads. It is a minimization problem, since we seek the minimum size of the vertex cover − the size of a vertex cover is the number of vertices in it. The optimization problem is NP-Complete and hence cannot be solved in polynomial time; but what can be found in polynomial time is a near-optimal solution.

Vertex Cover Algorithm

The vertex cover approximation algorithm takes an undirected graph as input and is executed to obtain a set of vertices that is at most twice the size of the optimal vertex cover; it is a 2-approximation algorithm.

Algorithm

Step 1 − Select any random edge from the input graph and mark all the edges that are incident on the vertices corresponding to the selected edge.

Step 2 − Add the vertices of the arbitrary edge to an output set.

Step 3 − Repeat Step 1 on the remaining unmarked edges of the graph, adding the vertices chosen to the output set, until there is no edge left unmarked.

Step 4 − The final output set achieved is the near-optimal vertex cover.
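The four steps above can be sketched in Python over an edge list. This is a minimal illustration in which the "arbitrary" edge is simply the first remaining one, which matches the pick order of the worked example below:

```python
def approx_vertex_cover(edges):
    # edges: list of (u, v) pairs of an undirected graph
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]          # pick an arbitrary (here: first) edge
        cover.update((u, v))
        # discard every edge incident on u or v
        remaining = [(a, b) for (a, b) in remaining
                     if a not in (u, v) and b not in (u, v)]
    return cover

edges = [(1, 6), (1, 2), (1, 4), (2, 3), (2, 4),
         (6, 7), (4, 7), (7, 8), (3, 5), (8, 5)]
print(sorted(approx_vertex_cover(edges)))   # [1, 2, 3, 4, 5, 6, 7, 8]
```

Since both endpoints of each chosen edge go into the cover, and any optimal cover must contain at least one endpoint of each of those (disjoint) edges, the result is at most twice the optimum.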
Pseudocode

APPROX-VERTEX_COVER (G: Graph)
c ← { }
E' ← E[G]
while E' is not empty do
   Let (u, v) be an arbitrary edge of E'
   c ← c U {u, v}
   Remove from E' every edge incident on either u or v
return c

Example

The set of edges of the given graph is −

{(1,6),(1,2),(1,4),(2,3),(2,4),(6,7),(4,7),(7,8),(3,5),(8,5)}

Now, we start by selecting an arbitrary edge (1,6). We eliminate all the edges which are incident to vertex 1 or 6, and we add the vertices of (1,6) to the cover. In the next step, we choose another edge (2,3) at random. Now we select another edge (4,7). We select another edge (8,5). Hence, the vertex cover of this graph is {1,6,2,3,4,7,5,8}.

Analysis

It is easy to see that the running time of this algorithm is O(V + E), using an adjacency list to represent E'.

Implementation

Following are the implementations of the above approach in various programming languages −

C C++ Java Python

#include <stdio.h>
#include <stdbool.h>
#define MAX_VERTICES 100

int graph[MAX_VERTICES][MAX_VERTICES];
bool included[MAX_VERTICES];

// Function to find Vertex Cover using the APPROX-VERTEX_COVER algorithm
void approxVertexCover(int vertices) {
   bool edgesRemaining[MAX_VERTICES][MAX_VERTICES];
   for (int i = 1; i <= vertices; i++) {
      for (int j = 1; j <= vertices; j++) {
         edgesRemaining[i][j] = graph[i][j];
      }
   }
   while (true) {
      int u = -1, v = -1;
      // find the first remaining edge (stop both loops once found)
      for (int i = 1; i <= vertices && u == -1; i++) {
         for (int j = 1; j <= vertices; j++) {
            if (edgesRemaining[i][j]) {
               u = i;
               v = j;
               break;
            }
         }
      }
      if (u == -1) break;   // no edge left unmarked
      included[u] = included[v] = true;
      for (int i = 1; i <= vertices; i++) {
         edgesRemaining[u][i] = edgesRemaining[i][u] = false;
         edgesRemaining[v][i] = edgesRemaining[i][v] = false;
      }
   }
}

int main() {
   int vertices = 8;
   int edges = 10;
   int edgesData[10][2] = { {1, 6}, {1, 2}, {1, 4}, {2, 3}, {2, 4},
                            {6, 7}, {4, 7}, {7, 8}, {3, 5}, {8, 5} };
   for (int i = 0; i < edges; i++) {
      int u = edgesData[i][0];
      int v = edgesData[i][1];
      graph[u][v] = graph[v][u] = 1;
   }
   approxVertexCover(vertices);
   printf("Vertex Cover: ");
   for (int i = 1; i <= vertices; i++) {
      if (included[i]) {
         printf("%d ", i);
      }
   }
   printf("\n");
   return 0;
}

Output

Vertex Cover: 1 2 3 4 5 7

#include <iostream>
#include <vector>
using namespace std;

const int MAX_VERTICES = 100;
vector<vector<int>> graph(MAX_VERTICES, vector<int>(MAX_VERTICES, 0));
vector<bool> included(MAX_VERTICES, false);

// Function to find Vertex Cover using the APPROX-VERTEX_COVER algorithm
void approxVertexCover(int vertices) {
   vector<vector<bool>> edgesRemaining(vertices + 1, vector<bool>(vertices + 1, false));
   for (int i = 1; i <= vertices; i++) {
      for (int j = 1; j <= vertices; j++) {
         edgesRemaining[i][j] = graph[i][j];
      }
   }
   while (true) {
      int u = -1, v = -1;
      // find the first remaining edge (stop both loops once found)
      for (int i = 1; i <= vertices && u == -1; i++) {
         for (int j = 1; j <= vertices; j++) {
            if (edgesRemaining[i][j]) {
               u = i;
               v = j;
               break;
            }
         }
      }
      if (u == -1) break;   // no edge left unmarked
      included[u] = included[v] = true;
      for (int i = 1; i <= vertices; i++) {
         edgesRemaining[u][i] = edgesRemaining[i][u] = false;
         edgesRemaining[v][i] = edgesRemaining[i][v] = false;
      }
   }
}

int main() {
   int vertices = 8;
   int edges = 10;
   int edgesData[10][2] = { {1, 6}, {1, 2}, {1, 4}, {2, 3}, {2, 4},
                            {6, 7}, {4, 7}, {7, 8}, {3, 5}, {8, 5} };
   for (int i = 0; i < edges; i++) {
      int u = edgesData[i][0];
      int v = edgesData[i][1];
      graph[u][v] = graph[v][u] = 1;
   }
   approxVertexCover(vertices);
   cout << "Vertex Cover: ";
   for (int i = 1; i <= vertices; i++) {
      if (included[i]) {
         cout << i << " ";
      }
   }
   cout << endl;
   return 0;
}

Output

Vertex Cover: 1 2 3 4 5 7
Set Cover Problem

The set cover algorithm provides a solution to many real-world resource-allocation problems. For instance, consider an airline assigning crew members to each of its airplanes such that there are enough people to fulfill the requirements of the journey. It takes into account the flight timings, the duration, the pit stops, and the availability of the crew to assign them to the flights. This is where the set cover algorithm comes into the picture.

Given a universal set U containing elements which are all divided into subsets, and considering the collection of these subsets as S = {S1, S2, S3, S4, …, Sn}, the set cover algorithm finds the minimum number of subsets such that they cover all the elements present in the universal set.

As shown in the above diagram, the dots represent the elements present in the universal set U that are divided into different sets, S = {S1, S2, S3, S4, S5, S6}. The minimum number of sets that need to be selected to cover all the elements will be the optimal output = {S1, S2, S3}.

Set Cover Algorithm

The set cover algorithm takes the collection of sets as input and returns the minimum number of sets required to include all the universal elements. The set cover problem is NP-Hard; the greedy algorithm below is an approximation algorithm that comes within a logarithmic factor (roughly ln n) of the optimal cost.

Algorithm

Step 1 − Initialize Output = {}, where Output represents the output set of elements.

Step 2 − While the Output set does not include all the elements in the universal set, do the following −

Find the cost-effectiveness of every subset not yet chosen, using the formula Cost(Si) / |Si − Output|, i.e., the cost of the subset divided by the number of elements it would newly cover.

Find the subset with minimum cost-effectiveness in each iteration.

Add that subset to the Output set.

Step 3 − Repeat Step 2 until there are no elements left in the universe. The output achieved is the final Output set.
Pseudocode

APPROX-GREEDY-SET_COVER (X, S)
U = X
OUTPUT = ф
while U ≠ ф
   select Si Є S which has maximum |Si ∩ U|
   U = U − Si
   OUTPUT = OUTPUT ∪ {Si}
return OUTPUT

Analysis

Assuming the overall number of elements equals the overall number of sets (|X| = |S|), the code runs in time O(|X|³).

Example

Let us look at an example that describes the approximation algorithm for the set covering problem in more detail.

S1 = {1, 2, 3, 4} cost(S1) = 5
S2 = {2, 4, 5, 8, 10} cost(S2) = 10
S3 = {1, 3, 5, 7, 9, 11, 13} cost(S3) = 20
S4 = {4, 8, 12, 16, 20} cost(S4) = 12
S5 = {5, 6, 7, 8, 9} cost(S5) = 15

Step 1

The output set, Output = ф. The cost-effectiveness of each set, with no elements yet covered, is its cost divided by the number of elements it would newly cover:

S1 = 5 / 4 = 1.25
S2 = 10 / 5 = 2
S3 = 20 / 7 ≈ 2.86
S4 = 12 / 5 = 2.4
S5 = 15 / 5 = 3

The minimum cost-effectiveness in this iteration is achieved at S1; therefore, the subset added to the output set is Output = {S1}, covering the elements {1, 2, 3, 4}.

Step 2

Find the cost-effectiveness of each remaining set, counting only the elements it would newly cover:

S2 = 10 / 3 ≈ 3.33 (newly covers {5, 8, 10})
S3 = 20 / 5 = 4 (newly covers {5, 7, 9, 11, 13})
S4 = 12 / 4 = 3 (newly covers {8, 12, 16, 20})
S5 = 15 / 5 = 3 (newly covers {5, 6, 7, 8, 9})

The minimum cost-effectiveness, 3, is achieved at both S4 and S5; breaking the tie by index, S4 is added, so Output = {S1, S4}, covering {1, 2, 3, 4, 8, 12, 16, 20}.

Step 3

S2 = 10 / 2 = 5 (newly covers {5, 10})
S3 = 20 / 5 = 4 (newly covers {5, 7, 9, 11, 13})
S5 = 15 / 4 = 3.75 (newly covers {5, 6, 7, 9})

The minimum cost-effectiveness is achieved at S5, so Output = {S1, S4, S5}, covering {1, 2, 3, 4, 5, 6, 7, 8, 9, 12, 16, 20}.

Step 4

S2 = 10 / 1 = 10 (newly covers {10})
S3 = 20 / 2 = 10 (newly covers {11, 13})

The two are tied; breaking the tie by index, S2 is added, so Output = {S1, S4, S5, S2}, covering {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 16, 20}.

Step 5

S3 = 20 / 2 = 10 (newly covers {11, 13})

S3 is added, so Output = {S1, S4, S5, S2, S3}, covering {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 16, 20}.

The final output that covers all the elements present in the universal finite set is {S1, S4, S5, S2, S3}.
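The weighted greedy rule can be sketched in Python as follows. This is a minimal illustration in which cost-effectiveness is the cost divided by the number of newly covered elements, and ties are broken by the listing order of the subsets; a different tie-break or a different way of counting newly covered elements can pick the sets in a different order (with these particular sets, every subset ends up selected either way):

```python
def greedy_set_cover(subsets, universe):
    # subsets maps a name to a pair (set of elements, cost)
    covered = set()
    chosen = []
    while covered != universe:
        # pick the subset with minimum cost per newly covered element;
        # min() keeps the first of any tied candidates
        best = min(
            (name for name, (s, _) in subsets.items() if s - covered),
            key=lambda name: subsets[name][1] / len(subsets[name][0] - covered),
        )
        chosen.append(best)
        covered |= subsets[best][0]
    return chosen

subsets = {
    "S1": ({1, 2, 3, 4}, 5),
    "S2": ({2, 4, 5, 8, 10}, 10),
    "S3": ({1, 3, 5, 7, 9, 11, 13}, 20),
    "S4": ({4, 8, 12, 16, 20}, 12),
    "S5": ({5, 6, 7, 8, 9}, 15),
}
universe = set().union(*(s for s, _ in subsets.values()))
print(greedy_set_cover(subsets, universe))
```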
Binary Search Tree

A Binary Search Tree (BST) is a tree in which all the nodes follow the below-mentioned properties −

The left sub-tree of a node has a key less than or equal to its parent node's key.
The right sub-tree of a node has a key greater than or equal to its parent node's key.

Thus, BST divides all its sub-trees into two segments, the left sub-tree and the right sub-tree, and can be defined as −

left_subtree (keys) ≤ node (key) ≤ right_subtree (keys)

Binary Tree Representation

BST is a collection of nodes arranged in a way that they maintain the BST properties. Each node has a key and an associated value. While searching, the desired key is compared to the keys in the BST and, if found, the associated value is retrieved. Following is a pictorial representation of a BST −

We observe that the root node key (27) has all less-valued keys in the left sub-tree and the higher-valued keys in the right sub-tree.

Basic Operations

Following are the basic operations of a Binary Search Tree −

Search − Searches for an element in the tree.
Insert − Inserts an element into the tree.
Pre-order Traversal − Traverses the tree in a pre-order manner.
In-order Traversal − Traverses the tree in an in-order manner.
Post-order Traversal − Traverses the tree in a post-order manner.

Defining a Node

Define a node that stores some data, and references to its left and right child nodes.

struct node {
   int data;
   struct node *leftChild;
   struct node *rightChild;
};

Search Operation

Whenever an element is to be searched, start searching from the root node. If the data is less than the key value, search for the element in the left subtree. Otherwise, search for the element in the right subtree. Follow the same algorithm for each node.

Algorithm

1. START
2. Check whether the tree is empty or not
3. If the tree is empty, search is not possible
4. Otherwise, first compare the key with the root of the tree
5. If the key does not match the value in the root, search its subtrees
6. If the value of the key is less than the root value, search the left subtree
7. If the value of the key is greater than the root value, search the right subtree
8. If the key is not found in the tree, return unsuccessful search
9. END

Example

Following are the implementations of this operation in various programming languages −

C C++ Java Python

#include <stdio.h>
#include <stdlib.h>

struct node {
   int data;
   struct node *leftChild, *rightChild;
};

struct node *root = NULL;

struct node *newNode(int item) {
   struct node *temp = (struct node *)malloc(sizeof(struct node));
   temp->data = item;
   temp->leftChild = temp->rightChild = NULL;
   return temp;
}

void insert(int data) {
   struct node *tempNode = newNode(data);
   struct node *current;
   struct node *parent;
   // if tree is empty
   if (root == NULL) {
      root = tempNode;
   } else {
      current = root;
      parent = NULL;
      while (1) {
         parent = current;
         // go to left of the tree
         if (data < parent->data) {
            current = current->leftChild;
            // insert to the left
            if (current == NULL) {
               parent->leftChild = tempNode;
               return;
            }
         } else {   // go to right of the tree
            current = current->rightChild;
            // insert to the right
            if (current == NULL) {
               parent->rightChild = tempNode;
               return;
            }
         }
      }
   }
}

struct node* search(int data) {
   struct node *current = root;
   // the NULL check also guards against an empty tree
   while (current != NULL && current->data != data) {
      // go to left tree, else go to right tree
      if (current->data > data) {
         current = current->leftChild;
      } else {
         current = current->rightChild;
      }
   }
   return current;   // NULL when not found
}

void printTree(struct node* Node) {
   if (Node == NULL)
      return;
   printTree(Node->leftChild);
   printf(" --%d", Node->data);
   printTree(Node->rightChild);
}

int main() {
   insert(55);
   insert(20);
   insert(90);
   insert(50);
   insert(35);
   insert(15);
   insert(65);
   printf("Insertion done");
   printf("\nBST: \n");
   printTree(root);
   struct node* k;
   int ele = 35;
   printf("\nElement to be searched: %d", ele);
   k = search(ele);
   if (k != NULL)
      printf("\nElement %d found", k->data);
   else
      printf("\nElement not found");
   return 0;
}

Output

Insertion done
BST:
 --15 --20 --35 --50 --55 --65 --90
Element to be searched: 35
Element 35 found

#include <iostream>
using namespace std;

struct Node {
   int data;
   struct Node *leftChild, *rightChild;
};

Node *root = NULL;

Node *newNode(int item) {
   Node *temp = new Node;
   temp->data = item;
   temp->leftChild = temp->rightChild = NULL;
   return temp;
}

void insert(int data) {
   Node *tempNode = newNode(data);
   Node *current;
   Node *parent;
   // if tree is empty
   if (root == NULL) {
      root = tempNode;
   } else {
      current = root;
      parent = NULL;
      while (1) {
         parent = current;
         // go to left of the tree
         if (data < parent->data) {
            current = current->leftChild;
            // insert to the left
            if (current == NULL) {
               parent->leftChild = tempNode;
               return;
            }
         } else {   // go to right of the tree
            current = current->rightChild;
            // insert to the right
            if (current == NULL) {
               parent->rightChild = tempNode;
               return;
            }
         }
      }
   }
}

Node* search(int data) {
   Node *current = root;
   // the null check also guards against an empty tree
   while (current != nullptr && current->data != data) {
      // go to left tree, else go to right tree
      if (current->data > data) {
         current = current->leftChild;
      } else {
         current = current->rightChild;
      }
   }
   return current;   // nullptr when not found
}

void printTree(Node* node) {
   if (node == nullptr)
      return;
   printTree(node->leftChild);
   cout << " --" << node->data;
   printTree(node->rightChild);
}

int
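The insert-and-search logic can be condensed in Python as below. This sketch uses recursive insertion where the C listing uses an equivalent iterative loop, and the in-order traversal plays the role of `printTree`:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = self.right = None

def insert(root, data):
    # recursive insertion: smaller keys go left, others go right
    if root is None:
        return Node(data)
    if data < root.data:
        root.left = insert(root.left, data)
    else:
        root.right = insert(root.right, data)
    return root

def search(root, data):
    # walk down the tree, returning None on an unsuccessful search
    while root is not None and root.data != data:
        root = root.left if data < root.data else root.right
    return root

def inorder(root):
    # in-order traversal of a BST yields the keys in sorted order
    if root is None:
        return []
    return inorder(root.left) + [root.data] + inorder(root.right)

root = None
for key in (55, 20, 90, 50, 35, 15, 65):
    root = insert(root, key)
print(inorder(root))                 # keys in sorted order
print(search(root, 35) is not None)  # True
```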
Breadth First Search (BFS) Algorithm

Breadth First Search (BFS) algorithm traverses a graph in a breadthward motion to search a graph data structure for a node that meets a set of criteria. It uses a queue to remember the next vertex at which to start a search when a dead end occurs in any iteration. The BFS algorithm starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level.

As in the example given above, the BFS algorithm traverses from A to B to E to F first, then to C and G, and lastly to D. It employs the following rules −

Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Insert it in a queue.

Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue.

Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty.

Step Traversal Description
1 Initialize the queue.
2 We start by visiting S (the starting node), and mark it as visited.
3 We then see an unvisited adjacent node from S. In this example, we have three nodes but alphabetically we choose A, mark it as visited and enqueue it.
4 Next, the unvisited adjacent node from S is B. We mark it as visited and enqueue it.
5 Next, the unvisited adjacent node from S is C. We mark it as visited and enqueue it.
6 Now, S is left with no unvisited adjacent nodes. So, we dequeue and get A.
7 From A we have D as an unvisited adjacent node. We mark it as visited and enqueue it.

At this stage, we are left with no unmarked (unvisited) nodes. But as per the algorithm we keep on dequeuing in order to get all unvisited nodes. When the queue gets emptied, the program is over.
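The three rules can be sketched in Python with a queue; the graph below is the S-A-B-C-D example used in the walkthrough above (edges S-A, S-B, S-C, A-D, B-D, C-D):

```python
from collections import deque

def bfs(graph, start):
    # graph: adjacency lists; neighbors are listed in visiting order
    visited = [start]
    queue = deque([start])
    while queue:
        vertex = queue.popleft()        # Rule 2: take the front of the queue
        for neighbor in graph[vertex]:  # Rule 1: visit adjacent unvisited vertices
            if neighbor not in visited:
                visited.append(neighbor)
                queue.append(neighbor)
    return visited                      # Rule 3: loop until the queue is empty

graph = {
    'S': ['A', 'B', 'C'],
    'A': ['S', 'D'],
    'B': ['S', 'D'],
    'C': ['S', 'D'],
    'D': ['A', 'B', 'C'],
}
print(bfs(graph, 'S'))   # ['S', 'A', 'B', 'C', 'D']
```

Listing each vertex's neighbors alphabetically reproduces the "choose A first" behavior of the step table.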
Example Following are the implementations of Breadth First Search (BFS) Algorithm in various programming languages − C C++ Java Python #include <stdio.h> #include <stdlib.h> #include <stdbool.h> #define MAX 5 struct Vertex { char label; bool visited; }; //queue variables int queue[MAX]; int rear = -1; int front = 0; int queueItemCount = 0; //graph variables //array of vertices struct Vertex* lstVertices[MAX]; //adjacency matrix int adjMatrix[MAX][MAX]; //vertex count int vertexCount = 0; //queue functions void insert(int data) { queue[++rear] = data; queueItemCount++; } int removeData() { queueItemCount--; return queue[front++]; } bool isQueueEmpty() { return queueItemCount == 0; } //graph functions //add vertex to the vertex list void addVertex(char label) { struct Vertex* vertex = (struct Vertex*) malloc(sizeof(struct Vertex)); vertex->label = label; vertex->visited = false; lstVertices[vertexCount++] = vertex; } //add edge to edge array void addEdge(int start,int end) { adjMatrix[start][end] = 1; adjMatrix[end][start] = 1; } //display the vertex void displayVertex(int vertexIndex) { printf("%c ", lstVertices[vertexIndex]->label); } //get the adjacent unvisited vertex int getAdjUnvisitedVertex(int vertexIndex) { int i; for(i = 0; i < vertexCount; i++) { if(adjMatrix[vertexIndex][i] == 1 && lstVertices[i]->visited == false) return i; } return -1; } void breadthFirstSearch() { int i; //mark first node as visited lstVertices[0]->visited = true; //display the vertex displayVertex(0); //insert vertex index in queue insert(0); int unvisitedVertex; while(!isQueueEmpty()) { //get the vertex at the front of the queue int tempVertex = removeData(); //visit all its adjacent unvisited vertices while((unvisitedVertex = getAdjUnvisitedVertex(tempVertex)) != -1) { lstVertices[unvisitedVertex]->visited = true; displayVertex(unvisitedVertex); insert(unvisitedVertex); } } //queue is empty, search is complete, reset the visited flag for(i = 0; i < vertexCount; i++) {
lstVertices[i]->visited = false; } } int main() { int i, j; for(i = 0; i < MAX; i++) { // set adjacency for(j = 0; j < MAX; j++) // matrix to 0 adjMatrix[i][j] = 0; } addVertex('S'); // 0 addVertex('A'); // 1 addVertex('B'); // 2 addVertex('C'); // 3 addVertex('D'); // 4 addEdge(0, 1); // S - A addEdge(0, 2); // S - B addEdge(0, 3); // S - C addEdge(1, 4); // A - D addEdge(2, 4); // B - D addEdge(3, 4); // C - D printf("\nBreadth First Search: "); breadthFirstSearch(); return 0; } Output Breadth First Search: S A B C D //C++ code for Breadth First Traversal #include <iostream> #include <cstdlib> #define MAX 5 struct Vertex { char label; bool visited; }; //queue variables int queue[MAX]; int rear = -1; int front = 0; int queueItemCount = 0; //graph variables //array of vertices struct Vertex* lstVertices[MAX]; //adjacency matrix int adjMatrix[MAX][MAX]; //vertex count int vertexCount = 0; //queue functions void insert(int data) { queue[++rear] = data; queueItemCount++; } int removeData() { queueItemCount--; return queue[front++]; } bool isQueueEmpty() { return queueItemCount == 0; } //graph functions //add vertex to the vertex list void addVertex(char label) { struct Vertex* vertex = (struct Vertex*) malloc(sizeof(struct Vertex)); vertex->label = label; vertex->visited = false; lstVertices[vertexCount++] = vertex; } //add edge to edge array void addEdge(int start,int end) { adjMatrix[start][end] = 1; adjMatrix[end][start] = 1; } //display the vertex void displayVertex(int vertexIndex) { std::cout << lstVertices[vertexIndex]->label << " "; } //get the adjacent unvisited vertex int getAdjUnvisitedVertex(int vertexIndex) { int i; for(i = 0; i < vertexCount; i++) { if(adjMatrix[vertexIndex][i] == 1 && lstVertices[i]->visited == false) return i; } return -1; } void breadthFirstSearch() { int i; //mark first node as visited lstVertices[0]->visited = true; //display the vertex displayVertex(0); //insert vertex index in queue insert(0); int unvisitedVertex;
while(!isQueueEmpty()) { //get the vertex at the front of the queue int tempVertex = removeData(); //visit all its adjacent unvisited vertices while((unvisitedVertex = getAdjUnvisitedVertex(tempVertex)) != -1) { lstVertices[unvisitedVertex]->visited = true; displayVertex(unvisitedVertex); insert(unvisitedVertex); } } //queue is empty, search is complete, reset the visited flag for(i = 0; i < vertexCount; i++) { lstVertices[i]->visited = false; } } int main() { int i, j; for(i = 0; i < MAX; i++) { // set adjacency for(j = 0; j < MAX; j++) // matrix to 0 adjMatrix[i][j] = 0; } addVertex('S'); // 0 addVertex('A'); // 1 addVertex('B'); // 2 addVertex('C'); // 3 addVertex('D'); // 4 addEdge(0, 1); // S - A addEdge(0, 2); // S - B addEdge(0,
Optimal Merge Pattern Algorithm Table of content Pseudocode Example Merge a set of sorted files of different lengths into a single sorted file. We need to find an optimal solution, where the resultant file will be generated in minimum time. If a number of sorted files is given, there are many ways to merge them into a single sorted file. This merging can be performed pair-wise, hence this type of merging is called a 2-way merge pattern. As different pairings require different amounts of time, in this strategy we want to determine an optimal way of merging many files together. At each step, the two shortest sequences are merged. Merging a p-record file and a q-record file requires p + q record moves, so the obvious choice is to merge the two smallest files together at each step. Two-way merge patterns can be represented by binary merge trees. Let us consider a set of n sorted files {f1, f2, f3, …, fn}. Initially, each element of this set is considered as a single-node binary tree. To find the optimal solution, the following algorithm is used. Pseudocode Following is the pseudocode of the Optimal Merge Pattern Algorithm − for i := 1 to n - 1 do declare new node node.leftchild := least (list) node.rightchild := least (list) node.weight := (node.leftchild).weight + (node.rightchild).weight insert (list, node); return least (list); At the end of this algorithm, the weight of the root node represents the optimal cost. Example Let us consider the given files, f1, f2, f3, f4 and f5, containing 20, 30, 10, 5 and 30 elements respectively. If merge operations are performed according to the provided sequence, then M1 = merge f1 and f2 => 20 + 30 = 50 M2 = merge M1 and f3 => 50 + 10 = 60 M3 = merge M2 and f4 => 60 + 5 = 65 M4 = merge M3 and f5 => 65 + 30 = 95 Hence, the total number of operations is 50 + 60 + 65 + 95 = 270 Now, the question arises: is there any better solution?
Sorting the numbers according to their size in ascending order, we get the following sequence − f4, f3, f1, f2, f5 Merge operations can now be performed on this sequence M1 = merge f4 and f3 => 5 + 10 = 15 M2 = merge M1 and f1 => 15 + 20 = 35 M3 = merge M2 and f2 => 35 + 30 = 65 M4 = merge M3 and f5 => 65 + 30 = 95 Therefore, the total number of operations is 15 + 35 + 65 + 95 = 210 Obviously, this is better than the previous one, but it is still not optimal: merging strictly left to right ignores the fact that after the first two merges the two smallest available files are f2 and f5 (30 records each), not the partial result M2. The algorithm instead re-selects the two smallest files at every step. (The figures for the initial set and Steps 1 to 4, which showed the binary merge tree being built step by step, are omitted here.) Hence, the optimal solution takes 15 + 35 + 60 + 95 = 205 record moves. Example Following are the implementations of the above approach in various programming languages − C C++ Java Python #include <stdio.h> #include <stdlib.h> int optimalMerge(int files[], int n) { // Sort the files in ascending order for (int i = 0; i < n - 1; i++) { for (int j = 0; j < n - i - 1; j++) { if (files[j] > files[j + 1]) { int temp = files[j]; files[j] = files[j + 1]; files[j + 1] = temp; } } } int cost = 0; while (n > 1) { // Merge the smallest two files int mergedFileSize = files[0] + files[1]; cost += mergedFileSize; // Replace the first file with the merged file size files[0] = mergedFileSize; // Shift the remaining files to the left for (int i = 1; i < n - 1; i++) { files[i] = files[i + 1]; } n--; // Reduce the number of files // Sort the files again for (int i = 0; i < n - 1; i++) { for (int j = 0; j < n - i - 1; j++) { if (files[j] > files[j + 1]) { int temp = files[j]; files[j] = files[j + 1]; files[j + 1] = temp; } } } } return cost; } int main() { int files[] = {5, 10, 20, 30, 30}; int n = sizeof(files) / sizeof(files[0]); int minCost = optimalMerge(files, n); printf("Minimum cost of merging is: %d Comparisons\n", minCost); return 0; } Output Minimum cost of merging is: 205 Comparisons #include <iostream> #include <algorithm> int optimalMerge(int files[], int n) { // Sort the files in ascending order for (int i = 0; i < n - 1;
i++) { for (int j = 0; j < n - i - 1; j++) { if (files[j] > files[j + 1]) { std::swap(files[j], files[j + 1]); } } } int cost = 0; while (n > 1) { // Merge the smallest two files int mergedFileSize = files[0] + files[1]; cost += mergedFileSize; // Replace the first file with the merged file size files[0] = mergedFileSize; // Shift the remaining files to the left for (int i = 1; i < n - 1; i++) { files[i] = files[i + 1]; } n--; // Reduce the number of files // Sort the files again for (int i = 0; i < n - 1; i++) { for (int j = 0; j < n - i - 1; j++) { if (files[j] > files[j + 1]) { std::swap(files[j], files[j
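The C and C++ versions above re-sort the whole array after every merge; a binary min-heap achieves the same "merge the two smallest" rule more directly. Following is a minimal Python sketch (the function name is ours, not from the text):

```python
import heapq

def optimal_merge_cost(files):
    """Total record moves when the two smallest files are merged at each step."""
    heap = list(files)
    heapq.heapify(heap)          # min-heap of file sizes
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)  # the two smallest files
        b = heapq.heappop(heap)
        cost += a + b            # merging them moves a + b records
        heapq.heappush(heap, a + b)
    return cost

print(optimal_merge_cost([20, 30, 10, 5, 30]))  # 205, as in the worked example
```

Each `heappop`/`heappush` is O(log n), so the whole computation is O(n log n) instead of the repeated O(n²) sorting passes of the array version.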
Strassen's Matrix Multiplication Table of content Naive Method Strassen's Matrix Multiplication Algorithm Example Strassen's Matrix Multiplication is a divide and conquer approach to solving the matrix multiplication problem. The usual matrix multiplication method multiplies each row with each column to achieve the product matrix. The time complexity of this approach is O(n^3), since three nested loops are needed to multiply. Strassen's method was introduced to reduce the time complexity from O(n^3) to O(n^log2 7). Naive Method First, we will discuss the Naive method and its complexity. Here, we are calculating Z = X × Y. Using the Naive method, two matrices (X and Y) can be multiplied if the orders of these matrices are p × q and q × r, and the resultant matrix will be of order p × r. The following pseudocode describes the Naive multiplication − Algorithm: Matrix-Multiplication (X, Y, Z) for i = 1 to p do for j = 1 to r do Z[i,j] := 0 for k = 1 to q do Z[i,j] := Z[i,j] + X[i,k] × Y[k,j] Complexity Here, we assume that integer operations take O(1) time. There are three for loops in this algorithm, nested within one another. Hence, the algorithm takes O(n^3) time to execute. Strassen's Matrix Multiplication Algorithm Using Strassen's matrix multiplication algorithm, the time consumption can be improved a little bit. Strassen's matrix multiplication can be performed only on square matrices where n is a power of 2. The order of both of the matrices is n × n.
Divide X, Y and Z into four (n/2) × (n/2) matrices as represented below − $Z = \begin{bmatrix}I & J \\ K & L\end{bmatrix}$, $X = \begin{bmatrix}A & B \\ C & D\end{bmatrix}$ and $Y = \begin{bmatrix}E & F \\ G & H\end{bmatrix}$ Using Strassen's Algorithm compute the following − $$M_{1} := (A+C) \times (E+F)$$ $$M_{2} := (B+D) \times (G+H)$$ $$M_{3} := (A-D) \times (E+H)$$ $$M_{4} := A \times (F-H)$$ $$M_{5} := (C+D) \times E$$ $$M_{6} := (A+B) \times H$$ $$M_{7} := D \times (G-E)$$ Then, $$I := M_{2} + M_{3} - M_{6} - M_{7}$$ $$J := M_{4} + M_{6}$$ $$K := M_{5} + M_{7}$$ $$L := M_{1} - M_{3} - M_{4} - M_{5}$$ Analysis $$T(n)=\begin{cases}c & \text{if } n = 1\\ 7 \times T\left(\frac{n}{2}\right) + d \times n^{2} & \text{otherwise}\end{cases}$$ where $c$ and $d$ are constants. Using this recurrence relation, we get $T(n) = O(n^{\log_{2} 7})$. Hence, the complexity of Strassen's matrix multiplication algorithm is $O(n^{\log_{2} 7}) \approx O(n^{2.81})$. Example Let us look at the implementation of Strassen's Matrix Multiplication in various programming languages: C, C++, Java, Python.
C C++ Java Python #include<stdio.h> int main(){ int z[2][2]; int i, j; int m1, m2, m3, m4, m5, m6, m7; int x[2][2] = { {12, 34}, {22, 10} }; int y[2][2] = { {3, 4}, {2, 1} }; printf("The first matrix is: "); for(i = 0; i < 2; i++) { printf("\n"); for(j = 0; j < 2; j++) printf("%d\t", x[i][j]); } printf("\nThe second matrix is: "); for(i = 0; i < 2; i++) { printf("\n"); for(j = 0; j < 2; j++) printf("%d\t", y[i][j]); } m1 = (x[0][0] + x[1][1]) * (y[0][0] + y[1][1]); m2 = (x[1][0] + x[1][1]) * y[0][0]; m3 = x[0][0] * (y[0][1] - y[1][1]); m4 = x[1][1] * (y[1][0] - y[0][0]); m5 = (x[0][0] + x[0][1]) * y[1][1]; m6 = (x[1][0] - x[0][0]) * (y[0][0] + y[0][1]); m7 = (x[0][1] - x[1][1]) * (y[1][0] + y[1][1]); z[0][0] = m1 + m4 - m5 + m7; z[0][1] = m3 + m5; z[1][0] = m2 + m4; z[1][1] = m1 - m2 + m3 + m6; printf("\nProduct achieved using Strassen's algorithm: "); for(i = 0; i < 2; i++) { printf("\n"); for(j = 0; j < 2; j++) printf("%d\t", z[i][j]); } return 0; } Output The first matrix is: 12 34 22 10 The second matrix is: 3 4 2 1 Product achieved using Strassen's algorithm: 104 82 86 98 #include<iostream> using namespace std; int main() { int z[2][2]; int i, j; int m1, m2, m3, m4, m5, m6, m7; int x[2][2] = { {12, 34}, {22, 10} }; int y[2][2] = { {3, 4}, {2, 1} }; cout << "The first matrix is: "; for(i = 0; i < 2; i++) { cout << endl; for(j = 0; j < 2; j++) cout << x[i][j] << " "; } cout << "\nThe second matrix is: "; for(i = 0; i < 2; i++){ cout << endl; for(j = 0; j < 2; j++) cout << y[i][j] << " "; } m1 = (x[0][0] + x[1][1]) * (y[0][0] + y[1][1]); m2 = (x[1][0] + x[1][1]) * y[0][0]; m3 = x[0][0] * (y[0][1] - y[1][1]); m4 = x[1][1] * (y[1][0] - y[0][0]); m5 = (x[0][0] + x[0][1]) * y[1][1]; m6 = (x[1][0] - x[0][0]) * (y[0][0] + y[0][1]); m7 = (x[0][1] - x[1][1]) * (y[1][0] + y[1][1]); z[0][0] = m1 + m4 - m5 + m7; z[0][1] = m3 + m5; z[1][0] = m2 + m4; z[1][1] = m1 - m2 + m3 + m6; cout << "\nProduct achieved using Strassen's algorithm: "; for(i = 0; i < 2; i++) { cout << endl; for(j = 0; j < 2; j++) cout << z[i][j] << " "; }
return 0; } Output The first matrix is: 12 34 22 10 The second matrix is: 3 4 2 1 Product achieved using Strassen's algorithm: 104 82 86 98 public class Strassens { public static void main(String[] args) { int[][] x = {{12, 34}, {22, 10}}; int[][] y = {{3, 4}, {2, 1}}; int z[][] = new int[2][2]; int m1, m2, m3, m4, m5, m6, m7; System.out.print("The first matrix is: "); for(int
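Because the block matrices degenerate to plain numbers in the 2 × 2 case, the seven products used by the C, C++ and Java programs above can be checked directly. The following Python sketch reproduces the printed product (function name is ours):

```python
def strassen_2x2(x, y):
    """One level of Strassen's recursion; for 2x2 inputs the blocks are scalars."""
    (a, b), (c, d) = x
    (e, f), (g, h) = y
    m1 = (a + d) * (e + h)   # the same seven products as the C code above
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[12, 34], [22, 10]], [[3, 4], [2, 1]]))
# [[104, 82], [86, 98]] -- the same product the C program prints
```

Seven multiplications replace the eight of the naive 2 × 2 product, which is exactly where the O(n^log2 7) recurrence comes from.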
Job Sequencing with Deadline Table of content Job Scheduling Algorithm Example Job scheduling algorithm is applied to schedule the jobs on a single processor to maximize the profits. The greedy approach of the job scheduling algorithm states that, "Given 'n' jobs, each taking one unit of time and carrying a deadline and a profit, schedule them in such a way that maximum profit is received, with every scheduled job finishing on or before its deadline". Job Scheduling Algorithm A set of jobs with deadlines and profits is taken as input by the job scheduling algorithm and a scheduled subset of jobs with maximum profit is obtained as the final output. Algorithm Step 1 − Find the maximum deadline value from the input set of jobs. Step 2 − Arrange the jobs in descending order of their profits. Step 3 − Take each job in that order and place it in the latest free time slot on or before its deadline; if no such slot is free, reject the job. Step 4 − The selected set of jobs is the output. Example Consider the following tasks with their deadlines and profits. Schedule the tasks in such a way that they produce maximum profit after being executed − S. No. 1 2 3 4 5 Jobs J1 J2 J3 J4 J5 Deadlines 2 2 1 3 4 Profits 20 60 40 100 80 Step 1 Find the maximum deadline value, dm, from the deadlines given. dm = 4. Since each job takes one unit of time, there are four time slots, 1 to 4. Step 2 Arrange the jobs in descending order of their profits. S. No. 1 2 3 4 5 Jobs J4 J5 J2 J3 J1 Deadlines 3 4 2 1 2 Profits 100 80 60 40 20 Step 3 Choose the job with the highest profit, J4 (deadline 3). The latest free slot on or before 3 is slot 3, so J4 is placed there. Total Profit = 100. Step 4 The next job with the highest profit is J5 (deadline 4). Slot 4 is free, so J5 is placed there. Total Profit = 100 + 80 = 180. Step 5 The next job is J2 (deadline 2). Slot 2 is free, so J2 is placed there. Total Profit = 180 + 60 = 240. Step 6 The next job is J3 (deadline 1). Slot 1 is free, so J3 is placed there. Total Profit = 240 + 40 = 280. Finally, J1 (deadline 2) finds slots 1 and 2 already occupied, so it is rejected. Since all the slots are filled, the algorithm comes to an end. The set of jobs scheduled within the deadlines is {J3, J2, J4, J5} with the maximum profit of 280. Example Following is the final implementation of Job sequencing Algorithm using Greedy Approach − C C++ Java Python #include <stdbool.h> #include <stdio.h> #include <stdlib.h> // A structure to represent a Job typedef struct Jobs { char id; // Job Id int dead; // Deadline of Job int profit; // Profit if Job is over before or on deadline } Jobs; // This function is used for sorting all Jobs according to // profit int compare(const void* a, const void* b){ Jobs* temp1 = (Jobs*)a; Jobs* temp2 = (Jobs*)b; return (temp2->profit - temp1->profit); } // Find minimum between two numbers. int min(int num1, int num2){ return (num1 > num2) ? num2 : num1; } int main(){ Jobs arr[] = { { 'a', 2, 100 }, { 'b', 2, 20 }, { 'c', 1, 40 }, { 'd', 3, 35 }, { 'e', 1, 25 } }; int n = sizeof(arr) / sizeof(arr[0]); printf("Following is maximum profit sequence of Jobs: \n"); qsort(arr, n, sizeof(Jobs), compare); int result[n]; // To store result sequence of Jobs bool slot[n]; // To keep track of free time slots // Initialize all slots to be free for (int i = 0; i < n; i++) slot[i] = false; // Iterate through all given Jobs for (int i = 0; i < n; i++) { // Find a free slot for this Job, starting from the latest possible slot for (int j = min(n, arr[i].dead) - 1; j >= 0; j--) { // Free slot found if (slot[j] == false) { result[j] = i; slot[j] = true; break; } } } // Print the result for (int i = 0; i < n; i++) if (slot[i]) printf("%c ", arr[result[i]].id); return 0; } Output Following is maximum profit sequence of Jobs: c a d #include<iostream> #include<algorithm> using namespace std; struct Job { char id; int deadLine; int profit; }; bool comp(Job j1, Job j2){ return (j1.profit
> j2.profit); //compare jobs based on profit } int min(int a, int b){ return (a < b) ? a : b; } int main(){ Job jobs[] = { { 'a', 2, 100 }, { 'b', 2, 20 }, { 'c', 1, 40 }, { 'd', 3, 35 }, { 'e', 1, 25 } }; int n = 5; cout << "Following is maximum profit sequence of Jobs: " << "\n"; sort(jobs, jobs+n, comp); //sort jobs on profit int jobSeq[n]; // To store result (Sequence of jobs) bool slot[n]; // To keep track of free time slots for (int i = 0; i < n; i++) slot[i] = false; //initially all slots are free for
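The slot-filling logic that the C and C++ programs implement can be sketched compactly in Python. Jobs are assumed to be (id, deadline, profit) tuples, each taking one unit of time; the function and variable names are ours:

```python
def job_sequencing(jobs):
    """Greedy job sequencing: place each job, highest profit first,
    in the latest free unit-time slot on or before its deadline."""
    jobs = sorted(jobs, key=lambda job: job[2], reverse=True)
    max_deadline = max(deadline for _, deadline, _ in jobs)
    slots = [None] * max_deadline        # slots[t] = job run in time unit t+1
    for job_id, deadline, _ in jobs:
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if slots[t] is None:         # latest free slot before the deadline
                slots[t] = job_id
                break                    # job placed; otherwise it is rejected
    return [job_id for job_id in slots if job_id is not None]

jobs = [("a", 2, 100), ("b", 2, 20), ("c", 1, 40), ("d", 3, 35), ("e", 1, 25)]
print(job_sequencing(jobs))  # ['c', 'a', 'd'] -- matches the C program's output
```

Filling from the latest admissible slot backwards leaves the earlier slots open for jobs with tighter deadlines, which is why the greedy order by profit works.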