DAA – Cook's Theorem

Cook's Theorem

Stephen Cook presented four theorems in his paper "The Complexity of Theorem Proving Procedures". These theorems are stated below. We do understand that many unknown terms are being used in this chapter, but we don't have any scope to discuss everything in detail. Following are the four theorems by Stephen Cook −

Theorem-1

If a set S of strings is accepted by some non-deterministic Turing machine within polynomial time, then S is P-reducible to {DNF tautologies}.

Theorem-2

The following sets are P-reducible to each other in pairs (and hence each has the same polynomial degree of difficulty): {tautologies}, {DNF tautologies}, D3, {sub-graph pairs}.

Theorem-3

For any $T_{Q}(k)$ of type Q, $\frac{T_{Q}(k)}{\sqrt{k}/(\log k)^{2}}$ is unbounded.

There is a $T_{Q}(k)$ of type Q such that $T_{Q}(k)\leqslant 2^{k(\log k)^{2}}$.

Theorem-4

If the set S of strings is accepted by a non-deterministic machine within time $T(n) = 2^{n}$, and if $T_{Q}(k)$ is an honest (i.e. real-time countable) function of type Q, then there is a constant K, so S can be recognized by a deterministic machine within time $T_{Q}(K8^{n})$.

First, he emphasized the significance of polynomial time reducibility. It means that if we have a polynomial time reduction from one problem to another, then any polynomial time algorithm for the second problem can be converted into a corresponding polynomial time algorithm for the first problem.

Second, he focused attention on the class NP of decision problems that can be solved in polynomial time by a non-deterministic computer. Most of the intractable problems belong to this class, NP.

Third, he proved that one particular problem in NP has the property that every other problem in NP can be polynomially reduced to it. If the satisfiability problem can be solved with a polynomial time algorithm, then every problem in NP can also be solved in polynomial time. Conversely, if any problem in NP is intractable, then the satisfiability problem must be intractable. Thus, the satisfiability problem is the hardest problem in NP.

Fourth, Cook suggested that other problems in NP might share with the satisfiability problem this property of being the hardest member of NP.

DAA – Extract Method

Extracting Root Element From Heap

The extract method is used to remove and return the root element of a heap. Following is the algorithm.

Pseudocode

Heap-Extract-Max (numbers[])
   max = numbers[1]
   numbers[1] = numbers[heapsize]
   heapsize = heapsize - 1
   Max-Heapify (numbers[], 1)
   return max

Example

Let us consider the same example discussed previously. Now we want to extract an element. This method will return the root element of the heap. After deletion of the root element, the last element is moved to the root position, and the Heapify function is called to restore the heap property.

Implementation

Following are the implementations of this operation in various programming languages −

C

#include <stdio.h>

void swap(int arr[], int i, int j) {
   int temp = arr[i];
   arr[i] = arr[j];
   arr[j] = temp;
}

// Restore the max-heap property for the subtree rooted at index i
void maxHeapify(int arr[], int size, int i) {
   int leftChild = 2 * i + 1;
   int rightChild = 2 * i + 2;
   int largest = i;
   if (leftChild < size && arr[leftChild] > arr[largest])
      largest = leftChild;
   if (rightChild < size && arr[rightChild] > arr[largest])
      largest = rightChild;
   if (largest != i) {
      swap(arr, i, largest);
      maxHeapify(arr, size, largest); // Recursive call to continue heapifying
   }
}

// Remove and return the maximum element; shrinks the heap by one
int extractMax(int arr[], int *heapSize) {
   if (*heapSize < 1) {
      printf("Heap underflow!\n");
      return -1;
   }
   int max = arr[0];
   arr[0] = arr[*heapSize - 1];
   (*heapSize)--;
   maxHeapify(arr, *heapSize, 0); // Heapify the updated heap
   return max;
}

int main() {
   int arr[] = { 55, 50, 30, 40, 20, 15, 10 }; // Max-Heap
   int heapSize = sizeof(arr) / sizeof(arr[0]);
   int max = extractMax(arr, &heapSize); // Extract the max element from the heap
   printf("Extracted Max Element: %d\n", max);
   // Print the updated Max-Heap
   printf("Updated Max-Heap: ");
   for (int i = 0; i < heapSize; i++)
      printf("%d ", arr[i]);
   printf("\n");
   return 0;
}

Output

Extracted Max Element: 55
Updated Max-Heap: 50 40 30 10 20 15

C++

#include <iostream>
#include <vector>

void swap(std::vector<int>& arr, int i, int j) {
   int temp = arr[i];
   arr[i] = arr[j];
   arr[j] = temp;
}

// Restore the max-heap property for the subtree rooted at index i
void maxHeapify(std::vector<int>& arr, int size, int i) {
   int leftChild = 2 * i + 1;
   int rightChild = 2 * i + 2;
   int largest = i;
   if (leftChild < size && arr[leftChild] > arr[largest])
      largest = leftChild;
   if (rightChild < size && arr[rightChild] > arr[largest])
      largest = rightChild;
   if (largest != i) {
      swap(arr, i, largest);
      maxHeapify(arr, size, largest); // Recursive call to continue heapifying
   }
}

// Remove and return the maximum element; shrinks the heap by one
int extractMax(std::vector<int>& arr, int& heapSize) {
   if (heapSize < 1) {
      std::cout << "Heap underflow!" << std::endl;
      return -1;
   }
   int max = arr[0];
   arr[0] = arr[heapSize - 1];
   heapSize--;
   maxHeapify(arr, heapSize, 0); // Heapify the updated heap
   return max;
}

int main() {
   std::vector<int> arr = { 55, 50, 30, 40, 20, 15, 10 }; // Max-Heap
   int heapSize = arr.size();
   int max = extractMax(arr, heapSize); // Extract the max element from the heap
   std::cout << "Extracted Max Element: " << max << std::endl;
   // Print the updated Max-Heap
   std::cout << "Updated Max-Heap: ";
   for (int i = 0; i < heapSize; i++)
      std::cout << arr[i] << " ";
   std::cout << std::endl;
   return 0;
}

Output

Extracted Max Element: 55
Updated Max-Heap: 50 40 30 10 20 15

Java

public class MaxHeap {
   static int heapSize; // kept as a field so extractMax can shrink it

   public static void swap(int arr[], int i, int j) {
      int temp = arr[i];
      arr[i] = arr[j];
      arr[j] = temp;
   }

   // Restore the max-heap property for the subtree rooted at index i
   public static void maxHeapify(int arr[], int size, int i) {
      int leftChild = 2 * i + 1;
      int rightChild = 2 * i + 2;
      int largest = i;
      if (leftChild < size && arr[leftChild] > arr[largest])
         largest = leftChild;
      if (rightChild < size && arr[rightChild] > arr[largest])
         largest = rightChild;
      if (largest != i) {
         swap(arr, i, largest);
         maxHeapify(arr, size, largest); // Recursive call to continue heapifying
      }
   }

   // Remove and return the maximum element; shrinks the heap by one
   public static int extractMax(int arr[]) {
      if (heapSize < 1) {
         System.out.println("Heap underflow!");
         return -1;
      }
      int max = arr[0];
      arr[0] = arr[heapSize - 1];
      heapSize--;
      maxHeapify(arr, heapSize, 0); // Heapify the updated heap
      return max;
   }

   public static void main(String args[]) {
      int arr[] = { 55, 50, 30, 40, 20, 15, 10 }; // Max-Heap
      heapSize = arr.length;
      int max = extractMax(arr); // Extract the max element from the heap
      System.out.println("Extracted Max Element: " + max);
      // Print the updated Max-Heap
      System.out.print("Updated Max-Heap: ");
      for (int i = 0; i < heapSize; i++)
         System.out.print(arr[i] + " ");
      System.out.println();
   }
}

Output

Extracted Max Element: 55
Updated Max-Heap: 50 40 30 10 20 15

Python

def swap(arr, i, j):
    arr[i], arr[j] = arr[j], arr[i]

def max_heapify(arr, size, i):
    left_child = 2 * i + 1
    right_child = 2 * i + 2
    largest = i
    if left_child < size and arr[left_child] > arr[largest]:
        largest = left_child
    if right_child < size and arr[right_child] > arr[largest]:
        largest = right_child
    if largest != i:
        swap(arr, i, largest)
        max_heapify(arr, size, largest)  # Recursive call to continue heapifying

def extract_max(arr):
    if len(arr) < 1:
        print("Heap underflow!")
        return -1
    max_element = arr[0]
    arr[0] = arr[-1]
    arr.pop()  # the heap shrinks by one element
    max_heapify(arr, len(arr), 0)  # Heapify the updated heap
    return max_element

arr = [55, 50, 30, 40, 20, 15, 10]  # Max-Heap
max_element = extract_max(arr)  # Extract the max element from the heap
print("Extracted Max Element:", max_element)
# Print the updated Max-Heap
print("Updated Max-Heap:", arr)

Output

Extracted Max Element: 55
Updated Max-Heap: [50, 40, 30, 10, 20, 15]

DAA – Insert Method

Insertion in Heaps

To insert an element in a heap, the new element is initially appended to the end of the heap as the last element of the array. After inserting this element, the heap property may be violated, hence the heap property is repaired by comparing the added element with its parent and moving the added element up a level, swapping positions with the parent. This process is called percolation up. The comparison is repeated until the parent is larger than or equal to the percolating element.

Algorithm: Max-Heap-Insert (numbers[], key)
   heapsize = heapsize + 1
   numbers[heapsize] = -∞
   i = heapsize
   numbers[i] = key
   while i > 1 and numbers[Parent(numbers[], i)] < numbers[i]
      exchange(numbers[i], numbers[Parent(numbers[], i)])
      i = Parent (numbers[], i)

Analysis

Initially, an element is added at the end of the array. If it violates the heap property, the element is exchanged with its parent. The height of the tree is log n, so at most log n operations need to be performed. Hence, the complexity of this function is O(log n).

Example

Let us consider a max-heap, as shown below, where a new element 55 needs to be added. Initially, 55 will be added at the end of this array. After insertion, it violates the heap property. Hence, the element needs to swap with its parent. After the swap, the heap looks like the following. Again, the element violates the property of heap, so it is swapped with its parent once more. Now, we have to stop.

Example

Following are the implementations of this operation in various programming languages. Here a smaller element, 5, is inserted, so no percolation is needed and it simply stays at the end of the heap −

C

#include <stdio.h>

void swap(int* a, int* b) {
   int temp = *a;
   *a = *b;
   *b = temp;
}

// Index of the parent of node i in a 0-indexed heap
int parent(int i) {
   if (i == 0)
      return -1;
   return (i - 1) / 2;
}

// Append key at the end of the heap and percolate it up
void maxHeapInsert(int arr[], int* heapSize, int key) {
   int i = *heapSize;
   arr[i] = key;
   (*heapSize)++;
   while (i > 0 && arr[parent(i)] < arr[i]) {
      swap(&arr[i], &arr[parent(i)]);
      i = parent(i);
   }
}

int main() {
   int arr[100] = { 50, 30, 40, 20, 15, 10 }; // Initial Max-Heap
   int heapSize = 6; // Current heap size

   // New element to be inserted
   int newElement = 5;

   // Insert the new element into the Max-Heap
   maxHeapInsert(arr, &heapSize, newElement);

   // Print the updated Max-Heap
   printf("Updated Max-Heap: ");
   for (int i = 0; i < heapSize; i++)
      printf("%d ", arr[i]);
   printf("\n");
   return 0;
}

Output

Updated Max-Heap: 50 30 40 20 15 10 5

C++

#include <iostream>
#include <vector>
using namespace std;

// Index of the parent of node i in a 0-indexed heap
int parent(int i) {
   if (i == 0)
      return -1;
   return (i - 1) / 2;
}

// Append key at the end of the heap and percolate it up
void maxHeapInsert(vector<int>& arr, int key) {
   arr.push_back(key);
   int i = arr.size() - 1;
   while (i > 0 && arr[parent(i)] < arr[i]) {
      swap(arr[i], arr[parent(i)]);
      i = parent(i);
   }
}

int main() {
   vector<int> arr = { 50, 30, 40, 20, 15, 10 }; // Initial Max-Heap

   // New element to be inserted
   int newElement = 5;

   // Insert the new element into the Max-Heap
   maxHeapInsert(arr, newElement);

   // Print the updated Max-Heap
   cout << "Updated Max-Heap: ";
   for (size_t i = 0; i < arr.size(); i++)
      cout << arr[i] << " ";
   cout << endl;
   return 0;
}

Output

Updated Max-Heap: 50 30 40 20 15 10 5

Java

public class MaxHeap {
   public static void swap(int arr[], int i, int j) {
      int temp = arr[i];
      arr[i] = arr[j];
      arr[j] = temp;
   }

   // Index of the parent of node i in a 0-indexed heap
   public static int parent(int i) {
      if (i == 0)
         return -1;
      return (i - 1) / 2;
   }

   // Append key at index heapSize and percolate it up; returns the new heap size
   public static int maxHeapInsert(int arr[], int heapSize, int key) {
      int i = heapSize;
      arr[i] = key;
      heapSize++;
      while (i > 0 && arr[parent(i)] < arr[i]) {
         swap(arr, i, parent(i));
         i = parent(i);
      }
      return heapSize;
   }

   public static void main(String args[]) {
      int arr[] = new int[100]; // backing array with spare capacity
      int initial[] = { 50, 30, 40, 20, 15, 10 }; // Initial Max-Heap
      System.arraycopy(initial, 0, arr, 0, initial.length);
      int heapSize = 6; // Current heap size

      // New element to be inserted
      int newElement = 5;

      // Insert the new element into the Max-Heap
      heapSize = maxHeapInsert(arr, heapSize, newElement);

      // Print the updated Max-Heap
      System.out.print("Updated Max-Heap: ");
      for (int i = 0; i < heapSize; i++)
         System.out.print(arr[i] + " ");
      System.out.println();
   }
}

Output

Updated Max-Heap: 50 30 40 20 15 10 5

Python

def swap(arr, i, j):
    arr[i], arr[j] = arr[j], arr[i]

def parent(i):
    if i == 0:
        return -1
    return (i - 1) // 2

def max_heap_insert(arr, key):
    arr.append(key)  # place the new key at the end of the heap
    i = len(arr) - 1
    while i > 0 and arr[parent(i)] < arr[i]:
        swap(arr, i, parent(i))
        i = parent(i)

if __name__ == "__main__":
    arr = [50, 30, 40, 20, 15, 10]  # Initial Max-Heap

    # New element to be inserted
    new_element = 5

    # Insert the new element into the Max-Heap
    max_heap_insert(arr, new_element)

    # Print the updated Max-Heap
    print("Updated Max-Heap:", arr)

Output

Updated Max-Heap: [50, 30, 40, 20, 15, 10, 5]

DAA – Travelling Salesman Problem using Dynamic Programming

Travelling Salesman Problem (Dynamic Approach)

Travelling salesman problem is the most notorious computational problem. We can use a brute-force approach to evaluate every possible tour and select the best one. For n vertices in a graph, there are (n − 1)! possibilities, which makes the complexity very high. However, instead of using brute-force, the dynamic programming approach obtains the solution in lesser time, though there is no polynomial time algorithm.

Travelling Salesman Dynamic Programming Algorithm

Let us consider a graph G = (V, E), where V is a set of cities and E is a set of weighted edges. An edge e(u, v) represents that vertices u and v are connected. The distance between vertices u and v is d(u, v), which should be non-negative.

Suppose we have started at city 1 and after visiting some cities now we are in city j. Hence, this is a partial tour. We certainly need to know j, since this will determine which cities are most convenient to visit next. We also need to know all the cities visited so far, so that we don't repeat any of them. Hence, this is an appropriate sub-problem.

For a subset of cities S ⊆ {1, 2, 3, ..., n} that includes 1, and j ∈ S, let C(S, j) be the length of the shortest path visiting each node in S exactly once, starting at 1 and ending at j.

When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.

Now, let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and end at j. We should select the next city in such a way that

$$C(S, j) = \min_{i \in S,\: i \neq j} \{ C(S - \{j\}, i) + d(i, j) \}$$

Algorithm: Traveling-Salesman-Problem
C ({1}, 1) = 0
for s = 2 to n do
   for all subsets S ⊆ {1, 2, 3, ..., n} of size s and containing 1
      C (S, 1) = ∞
      for all j ∈ S and j ≠ 1
         C (S, j) = min {C (S - {j}, i) + d(i, j) for i ∈ S and i ≠ j}
Return min over j of C ({1, 2, 3, ..., n}, j) + d(j, 1)

Analysis

There are at most $2^{n}\cdot n$ sub-problems and each one takes linear time to solve. Therefore, the total running time is $O(2^{n}\cdot n^{2})$.

Example

In the following example, we will illustrate the steps to solve the travelling salesman problem. From the above graph, the following table is prepared.

       1     2     3     4
1      0     10    15    20
2      5     0     9     10
3      6     13    0     12
4      8     8     9     0

S = Φ

$$Cost(2,\Phi,1) = d(2,1) = 5$$
$$Cost(3,\Phi,1) = d(3,1) = 6$$
$$Cost(4,\Phi,1) = d(4,1) = 8$$

S = 1

$$Cost(i,S,1) = \min \{ Cost(j, S - \{j\}, 1) + d[i,j] \}$$

$$Cost(2,\{3\},1) = d[2,3] + Cost(3,\Phi,1) = 9 + 6 = 15$$
$$Cost(2,\{4\},1) = d[2,4] + Cost(4,\Phi,1) = 10 + 8 = 18$$
$$Cost(3,\{2\},1) = d[3,2] + Cost(2,\Phi,1) = 13 + 5 = 18$$
$$Cost(3,\{4\},1) = d[3,4] + Cost(4,\Phi,1) = 12 + 8 = 20$$
$$Cost(4,\{3\},1) = d[4,3] + Cost(3,\Phi,1) = 9 + 6 = 15$$
$$Cost(4,\{2\},1) = d[4,2] + Cost(2,\Phi,1) = 8 + 5 = 13$$

S = 2

$$Cost(2,\{3,4\},1) = \min \begin{cases} d[2,3] + Cost(3,\{4\},1) = 9 + 20 = 29 \\ d[2,4] + Cost(4,\{3\},1) = 10 + 15 = 25 \end{cases} = 25$$

$$Cost(3,\{2,4\},1) = \min \begin{cases} d[3,2] + Cost(2,\{4\},1) = 13 + 18 = 31 \\ d[3,4] + Cost(4,\{2\},1) = 12 + 13 = 25 \end{cases} = 25$$

$$Cost(4,\{2,3\},1) = \min \begin{cases} d[4,2] + Cost(2,\{3\},1) = 8 + 15 = 23 \\ d[4,3] + Cost(3,\{2\},1) = 9 + 18 = 27 \end{cases} = 23$$

S = 3

$$Cost(1,\{2,3,4\},1) = \min \begin{cases} d[1,2] + Cost(2,\{3,4\},1) = 10 + 25 = 35 \\ d[1,3] + Cost(3,\{2,4\},1) = 15 + 25 = 40 \\ d[1,4] + Cost(4,\{2,3\},1) = 20 + 23 = 43 \end{cases} = 35$$

The minimum cost path is 35, obtained as follows. Start from Cost(1, {2, 3, 4}, 1): the minimum value is obtained for d[1, 2]. When s = 3, select the path from 1 to 2 (cost is 10), then go backwards. When s = 2, we get the minimum value for d[2, 4]; select the path from 2 to 4 (cost is 10), then go backwards. When s = 1, the smallest value at vertex 4 would come from d[4, 2], but vertex 2 has already been visited; therefore, we select d[4, 3] (two possible values are 15 for d[2, 3] and d[4, 3], and since the tour has arrived at vertex 4, the edge from 4 to 3 with cost 9 is chosen). Finally, at S = Φ, the tour returns from 3 to 1 with cost d(3, 1) = 6. Hence, the minimum cost tour is 1 → 2 → 4 → 3 → 1 with total cost 10 + 10 + 9 + 6 = 35.
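The recurrence above can be implemented directly with bitmasks (the classic Held-Karp formulation). Below is a minimal C sketch hard-coded to the 4-city distance table of this example; the names and layout are illustrative, not taken from the original chapter −

#include <stdio.h>

#define N 4
#define INF 1000000

// Distance table of the worked example above
static const int d[N][N] = {
   { 0, 10, 15, 20 },
   { 5,  0,  9, 10 },
   { 6, 13,  0, 12 },
   { 8,  8,  9,  0 }
};

// C[mask][j]: shortest path starting at city 0, visiting exactly the cities
// whose bits are set in mask (bit 0 is always set), and ending at city j
static int C[1 << N][N];

int main(void) {
   for (int mask = 0; mask < (1 << N); mask++)
      for (int j = 0; j < N; j++)
         C[mask][j] = INF;
   C[1][0] = 0; // base case: sub-tour containing only city 0, ending at 0

   for (int mask = 1; mask < (1 << N); mask++) {
      if (!(mask & 1)) continue; // every sub-tour must contain city 0
      for (int j = 0; j < N; j++) {
         if (!(mask & (1 << j)) || C[mask][j] == INF) continue;
         for (int i = 0; i < N; i++) { // try extending the sub-tour to city i
            if (mask & (1 << i)) continue;
            int next = mask | (1 << i);
            if (C[mask][j] + d[j][i] < C[next][i])
               C[next][i] = C[mask][j] + d[j][i];
         }
      }
   }

   int full = (1 << N) - 1, best = INF;
   for (int j = 1; j < N; j++) // close the tour by returning to city 0
      if (C[full][j] + d[j][0] < best)
         best = C[full][j] + d[j][0];
   printf("Minimum tour cost: %d\n", best); // prints 35 for this table
   return 0;
}

Iterating masks in increasing order guarantees that every sub-tour C[mask][j] is final before it is extended, which mirrors the S = 1, 2, 3 stages of the worked example.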

DAA – Dynamic Programming

Dynamic Programming

Dynamic programming approach is similar to divide and conquer in breaking down the problem into smaller and yet smaller possible sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Rather, results of these smaller sub-problems are remembered and used for similar or overlapping sub-problems.

Mostly, dynamic programming algorithms are used for solving optimization problems. Before solving the in-hand sub-problem, a dynamic algorithm will try to examine the results of the previously solved sub-problems. The solutions of sub-problems are combined in order to achieve the best optimal final solution. This paradigm is thus said to be using a bottom-up approach.

So we can conclude that −

The problem should be able to be divided into smaller overlapping sub-problems.

A final optimum solution can be achieved by using an optimum solution of smaller sub-problems.

Dynamic algorithms use memoization.

Two main properties of a problem suggest that it can be solved using dynamic programming. They are −

Overlapping Sub-Problems

Similar to the divide-and-conquer approach, dynamic programming also combines solutions to sub-problems. It is mainly used where the solution of one sub-problem is needed repeatedly. The computed solutions are stored in a table, so that these don't have to be re-computed. Hence, this technique is needed where overlapping sub-problems exist. For example, Binary Search does not have overlapping sub-problems, whereas a recursive program for Fibonacci numbers has many overlapping sub-problems.

Optimal Sub-Structure

A given problem has the optimal substructure property if the optimal solution of the given problem can be obtained using optimal solutions of its sub-problems. For example, the Shortest Path problem has the following optimal substructure property −

If a node x lies on the shortest path from a source node u to a destination node v, then the shortest path from u to v is the combination of the shortest path from u to x and the shortest path from x to v. The standard All Pair Shortest Path algorithms like Floyd-Warshall and Bellman-Ford are typical examples of dynamic programming.

Steps of Dynamic Programming Approach

Dynamic programming algorithm is designed using the following four steps −

Characterize the structure of an optimal solution.

Recursively define the value of an optimal solution.

Compute the value of an optimal solution, typically in a bottom-up fashion.

Construct an optimal solution from the computed information.

Dynamic Programming vs. Greedy vs. Divide and Conquer

In contrast to greedy algorithms, where local optimization is addressed, dynamic algorithms are motivated for an overall optimization of the problem. In contrast to divide and conquer algorithms, where solutions are combined to achieve an overall solution, dynamic algorithms use the output of a smaller sub-problem and then try to optimize a bigger sub-problem. Dynamic algorithms use memoization to remember the output of already solved sub-problems.
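To make the overlapping sub-problems idea concrete, here is a small C sketch contrasting the naive recursive Fibonacci mentioned above with a memoized version; the function names are illustrative −

#include <stdio.h>

// Naive recursion: fibNaive(35) recomputes the same sub-problems millions of times
long long fibNaive(int n) {
   if (n < 2) return n;
   return fibNaive(n - 1) + fibNaive(n - 2);
}

// Memoized version: each sub-problem is solved once and stored in a table
static long long memo[100];

long long fibMemo(int n) {
   if (n < 2) return n;
   if (memo[n] != 0) return memo[n]; // reuse a previously computed result
   memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
   return memo[n];
}

int main(void) {
   // Both print 9227465, but the memoized call needs only O(n) additions
   printf("fibNaive(35) = %lld\n", fibNaive(35));
   printf("fibMemo(35)  = %lld\n", fibMemo(35));
   return 0;
}

The contrast mirrors the binary search remark above: memoization pays off here only because the recursion keeps revisiting the same (n − 1) and (n − 2) sub-problems.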
Examples of Dynamic Programming

The following computer problems can be solved using the dynamic programming approach −

Matrix Chain Multiplication

Floyd Warshall Algorithm

0-1 Knapsack Problem

Longest Common Subsequence Algorithm

Travelling Salesman Problem (Dynamic Approach)

Dynamic programming can be used in both top-down and bottom-up manner. And of course, most of the time, referring to the previous solution output is cheaper than re-computing in terms of CPU cycles.

DAA – Travelling Salesperson Approximation Algorithm

Travelling Salesman using Approximation Algorithm

We have already discussed the travelling salesperson problem using the greedy and dynamic programming approaches, and it is established that solving the travelling salesperson problem for a perfect optimal solution is not possible in polynomial time. Therefore, an approximation solution is expected to find a near-optimal solution for this NP-Hard problem. However, an approximate algorithm is devised only if the cost function (which is defined as the distance between two plotted points) in the problem satisfies the triangle inequality. The triangle inequality is satisfied only if the cost function c, for all the vertices u, v and w of a triangle, satisfies the following equation

c(u, w) ≤ c(u, v) + c(v, w)

It is usually automatically satisfied in many applications.

Travelling Salesperson Approximation Algorithm

The travelling salesperson approximation algorithm requires some prerequisite algorithms to be performed so we can achieve a near-optimal solution. Let us look at those prerequisite algorithms briefly −

Minimum Spanning Tree − The minimum spanning tree is a tree data structure that contains all the vertices of the main graph with the minimum number of edges connecting them. We apply Prim's algorithm for the minimum spanning tree in this case.

Pre-order Traversal − The pre-order traversal is done on tree data structures where a pointer is walked through all the nodes of the tree in a [root − left child − right child] order.

Algorithm

Step 1 − Choose any vertex of the given graph randomly as the starting and ending point.

Step 2 − Construct a minimum spanning tree of the graph, with the chosen vertex as the root, using Prim's algorithm.

Step 3 − Once the spanning tree is constructed, pre-order traversal is performed on the minimum spanning tree obtained in the previous step.

Step 4 − The pre-order solution obtained is the Hamiltonian path of the travelling salesperson.

Pseudocode

APPROX_TSP(G, c)
   r <- root node of the minimum spanning tree
   T <- MST_Prim(G, c, r)
   H <- Preorder_Traversal(T)
   return the cycle H with r appended at the end

Analysis

The approximation algorithm of the travelling salesperson problem is a 2-approximation algorithm if the triangle inequality is satisfied. To prove this, we need to show that the approximate cost of the problem is at most double the optimal cost. A few observations that support this claim are as follows −

The cost of the minimum spanning tree is never more than the cost of the optimal Hamiltonian cycle, that is, c(T) ≤ c(H*), since deleting one edge from the optimal cycle yields a spanning tree.

The cost of a full walk is twice the cost of the minimum spanning tree. A full walk is defined as the path traced while traversing the minimum spanning tree in preorder, and it traverses every edge of the tree exactly twice. Therefore, c(W) = 2c(T).

Since the preorder walk path is shorter than the full walk path (by the triangle inequality, skipping repeated vertices cannot increase the cost), the output of the algorithm is always lower than the cost of the full walk. Combining these observations, c(H) ≤ c(W) = 2c(T) ≤ 2c(H*), which is the 2-approximation bound.

Example

Let us look at an example graph to visualize this approximation algorithm −

Solution

Consider vertex 1 from the above graph as the starting and ending point of the travelling salesperson and begin the algorithm from here.

Step 1

Starting the algorithm from vertex 1, construct a minimum spanning tree from the graph.
Step 2

Once the minimum spanning tree is constructed, consider the starting vertex as the root node (i.e., vertex 1) and walk through the spanning tree in preorder. Rotating the spanning tree for easier interpretation, we get −

The preorder traversal of the tree is found to be − 1 → 2 → 5 → 6 → 3 → 4

Step 3

Adding the root node at the end of the traced path, we get

1 → 2 → 5 → 6 → 3 → 4 → 1

This is the output Hamiltonian path of the travelling salesman approximation problem. The cost of the path would be the sum of all the costs in the minimum spanning tree, i.e., 55.

Implementation

Following is a C implementation of the above approach. The original chapter's example graph is not reproduced here, so the cost matrix in main below is purely illustrative −

#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

#define V 6 // Number of vertices in the graph

// Find the minimum key vertex from the set of vertices not yet included in MST
int findMinKey(int key[], bool mstSet[]) {
   int min = INT_MAX, min_index = 0;
   for (int v = 0; v < V; v++) {
      if (mstSet[v] == false && key[v] < min) {
         min = key[v];
         min_index = v;
      }
   }
   return min_index;
}

// Perform Prim's algorithm to find the Minimum Spanning Tree (MST)
void primMST(int graph[V][V], int parent[]) {
   int key[V];
   bool mstSet[V];
   for (int i = 0; i < V; i++) {
      key[i] = INT_MAX;
      mstSet[i] = false;
   }
   key[0] = 0;
   parent[0] = -1;
   for (int count = 0; count < V - 1; count++) {
      int u = findMinKey(key, mstSet);
      mstSet[u] = true;
      for (int v = 0; v < V; v++) {
         if (graph[u][v] && mstSet[v] == false && graph[u][v] < key[v]) {
            parent[v] = u;
            key[v] = graph[u][v];
         }
      }
   }
}

// Print the MST vertices in preorder: visit a vertex, then each of its subtrees
void printPreorderTraversal(int parent[], int u) {
   printf("%d ", u + 1); // vertices are numbered 1..V as in the example
   for (int v = 0; v < V; v++)
      if (parent[v] == u)
         printPreorderTraversal(parent, v);
}

// Main function for the Traveling Salesperson Approximation Algorithm
void tspApproximation(int graph[V][V]) {
   int parent[V];
   int root = 0; // Choosing vertex 0 (vertex 1 of the example) as start and end
   primMST(graph, parent);
   printf("Hamiltonian path of the approximate tour: ");
   printPreorderTraversal(parent, root);
   printf("%d\n", root + 1); // return to the starting vertex to close the tour
}

int main() {
   // Illustrative symmetric cost matrix (0 means no edge); not the figure
   // from the original example
   int graph[V][V] = {
      { 0, 3, 5, 8, 4, 9 },
      { 3, 0, 7, 6, 2, 8 },
      { 5, 7, 0, 2, 6, 4 },
      { 8, 6, 2, 0, 9, 3 },
      { 4, 2, 6, 9, 0, 7 },
      { 9, 8, 4, 3, 7, 0 }
   };
   tspApproximation(graph);
   return 0;
}

DAA – Matrix Chain Multiplication

Matrix Chain Multiplication Algorithm

Matrix chain multiplication is an algorithm that is applied to determine the lowest cost way of multiplying a sequence of matrices. The actual multiplication is done using the standard way of multiplying matrices, i.e., it follows the basic rule that the number of columns in one matrix must be equal to the number of rows in the next matrix. Hence, multiple scalar multiplications must be done to achieve the product.

To brief it further, consider matrices A, B, C, and D to be multiplied. There are multiple combinations of the matrices found while using the standard approach, since matrix multiplication is associative. For instance, there are five ways to multiply the four matrices given above −

(A(B(CD)))

(A((BC)D))

((AB)(CD))

((A(BC))D)

(((AB)C)D)

Now, if the sizes of matrices A, B, C, and D are l × m, m × n, n × p, p × q respectively, then the number of scalar multiplications performed depends on the chosen parenthesization. Suppose the values of l, m, n, p, q are 5, 10, 15, 20, 25 respectively; the cost of (A(B(CD))) is 7,500 + 3,750 + 1,250 = 12,500 scalar multiplications, whereas the cost of (A((BC)D)) is 3,000 + 5,000 + 1,250 = 9,250. So, the dynamic programming approach of matrix chain multiplication is adopted in order to find the combination with the lowest cost.

Matrix Chain Multiplication Algorithm

The matrix chain multiplication algorithm is only applied to find the minimum cost way to multiply a sequence of matrices. Therefore, the input taken by the algorithm is the sequence of matrices, while the output achieved is the lowest cost parenthesization.

Algorithm

Count the number of parenthesizations. Find the number of ways in which the input matrices can be multiplied using the formula −

$$P(n)=\begin{cases} 1 & if\: n=1 \\ \sum_{k=1}^{n-1} P(k)P(n-k) & if\: n\geq 2 \end{cases}$$

(or)

$$P(n)=\begin{cases} \frac{^{2(n-1)}C_{n-1}}{n} & if\: n\geq 2 \\ 1 & if\: n=1 \end{cases}$$

Once the parenthesization count is known, the optimal substructure must be devised as the first step of the dynamic programming approach, so the final product achieved is optimal. In matrix chain multiplication, the optimal substructure is found by dividing the sequence of matrices A[i...j] into two parts A[i...k] and A[k+1...j]. It must be ensured that the parts are divided in such a way that an optimal solution is achieved.

Using the formula

$$C[i,j]=\begin{cases} 0 & if\: i=j \\ \displaystyle \min_{i\leq k< j} \{ C[i,k]+C[k+1,j]+d_{i-1}d_{k}d_{j} \} & if\: i< j \end{cases}$$

find the lowest cost parenthesization of the sequence of matrices by constructing a cost table and a corresponding table of k values. Once the lowest cost is found, print the corresponding parenthesization as the output.
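As a quick check of the counting formula above, the following C sketch evaluates the recurrence directly; the function name is illustrative. It prints P(4) = 5, matching the enumeration of the four-matrix case −

#include <stdio.h>

// Number of ways to fully parenthesize a chain of n matrices,
// from the recurrence P(1) = 1, P(n) = sum over k of P(k) * P(n - k)
long long countParenthesizations(int n) {
   long long P[32] = { 0 };
   P[1] = 1;
   for (int m = 2; m <= n; m++)
      for (int k = 1; k < m; k++)
         P[m] += P[k] * P[m - k];
   return P[n];
}

int main(void) {
   printf("P(4) = %lld\n", countParenthesizations(4)); // prints P(4) = 5
   return 0;
}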
Pseudocode

Pseudocode to find the lowest cost of all the possible parenthesizations −

MATRIX-CHAIN-MULTIPLICATION(p)
   n = p.length - 1
   let m[1...n, 1...n] and s[1...n-1, 2...n] be new tables
   for i = 1 to n
      m[i, i] = 0
   for l = 2 to n // l is the chain length
      for i = 1 to n - l + 1
         j = i + l - 1
         m[i, j] = ∞
         for k = i to j - 1
            q = m[i, k] + m[k + 1, j] + p(i-1) p(k) p(j)
            if q < m[i, j]
               m[i, j] = q
               s[i, j] = k
   return m and s

Pseudocode to print the optimal output parenthesization −

PRINT-OPTIMAL-OUTPUT(s, i, j)
   if i == j
      print "A"i
   else
      print "("
      PRINT-OPTIMAL-OUTPUT(s, i, s[i, j])
      PRINT-OPTIMAL-OUTPUT(s, s[i, j] + 1, j)
      print ")"

Example

The application of the dynamic programming formula is slightly different from the theory; to understand it better, let us look at the example below.

A sequence of matrices A, B, C, D with dimensions 5 × 10, 10 × 15, 15 × 20, 20 × 25 is to be multiplied. Find the lowest cost parenthesization to multiply the given matrices using matrix chain multiplication.

Solution

Given matrices and their corresponding dimensions are −

$A_{5\times 10} \times B_{10\times 15} \times C_{15\times 20} \times D_{20\times 25}$

Find the count of parenthesizations of the 4 matrices, i.e. n = 4, using the formula −

$$P(n)=\begin{cases} 1 & if\: n=1 \\ \sum_{k=1}^{n-1}P(k)P(n-k) & if\: n\geq 2 \end{cases}$$

Since n = 4 ≥ 2, apply the second case of the formula −

$$P(4)=\sum_{k=1}^{3}P(k)P(4-k)=P(1)P(3)+P(2)P(2)+P(3)P(1)$$

Since P(1) = 1 and P(2) = 1, P(4) is calculated based on the P(3) value. Therefore, P(3) needs to be determined first −

$$P(3)=P(1)P(2)+P(2)P(1)=1+1=2$$

Therefore,

$$P(4)=P(1)P(3)+P(2)P(2)+P(3)P(1)=2+1+2=5$$

Among these 5 combinations of parentheses, the matrix chain multiplication algorithm must find the lowest cost parenthesization.

Step 1

A cost table m is created, where all the cost values calculated from the different combinations of parentheses are stored. Another table s is also created to store the k values obtained at the minimum cost of each combination.

Step 2

Applying the dynamic programming formula, the diagonal of the cost table is filled with zeros −

$C[1,1]=0$, $C[2,2]=0$, $C[3,3]=0$, $C[4,4]=0$

Step 3

Applying the dynamic programming formula to the remaining upper triangular values, with dimensions $d_{0},d_{1},d_{2},d_{3},d_{4} = 5, 10, 15, 20, 25$ −

$$C[1,2]=d_{0}d_{1}d_{2}=5\times 10\times 15=750$$

$$C[2,3]=d_{1}d_{2}d_{3}=10\times 15\times 20=3000$$

$$C[3,4]=d_{2}d_{3}d_{4}=15\times 20\times 25=7500$$

$$C[1,3]=\min\begin{cases} C[1,1]+C[2,3]+d_{0}d_{1}d_{3}=3000+1000=4000 \\ C[1,2]+C[3,3]+d_{0}d_{2}d_{3}=750+1500=2250 \end{cases}=2250,\: k=2$$

$$C[2,4]=\min\begin{cases} C[2,2]+C[3,4]+d_{1}d_{2}d_{4}=7500+3750=11250 \\ C[2,3]+C[4,4]+d_{1}d_{3}d_{4}=3000+5000=8000 \end{cases}=8000,\: k=3$$

$$C[1,4]=\min\begin{cases} C[1,1]+C[2,4]+d_{0}d_{1}d_{4}=8000+1250=9250 \\ C[1,2]+C[3,4]+d_{0}d_{2}d_{4}=750+7500+1875=10125 \\ C[1,3]+C[4,4]+d_{0}d_{3}d_{4}=2250+2500=4750 \end{cases}=4750,\: k=3$$

The minimum cost is therefore C[1,4] = 4750 scalar multiplications, and tracing the stored k values through PRINT-OPTIMAL-OUTPUT gives the optimal parenthesization (((AB)C)D).
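The pseudocode above translates almost line for line into C. Here is a minimal sketch, hard-coded to the dimensions of this example (A1..A4 correspond to A..D); it prints the minimum cost 4750 and the parenthesization (((A1A2)A3)A4) −

#include <stdio.h>
#include <limits.h>

#define N 4 // number of matrices in the chain

static int m[N + 1][N + 1]; // m[i][j]: minimum scalar multiplications for Ai..Aj
static int s[N + 1][N + 1]; // s[i][j]: split position k achieving that minimum

void matrixChainOrder(const int p[]) {
   for (int i = 1; i <= N; i++)
      m[i][i] = 0;
   for (int l = 2; l <= N; l++) {       // l is the chain length
      for (int i = 1; i <= N - l + 1; i++) {
         int j = i + l - 1;
         m[i][j] = INT_MAX;
         for (int k = i; k < j; k++) {
            int q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
            if (q < m[i][j]) {
               m[i][j] = q;
               s[i][j] = k;
            }
         }
      }
   }
}

// Print the optimal parenthesization of Ai..Aj using the split table s
void printOptimal(int i, int j) {
   if (i == j) {
      printf("A%d", i);
   } else {
      printf("(");
      printOptimal(i, s[i][j]);
      printOptimal(s[i][j] + 1, j);
      printf(")");
   }
}

int main(void) {
   int p[N + 1] = { 5, 10, 15, 20, 25 }; // dimensions from the example above
   matrixChainOrder(p);
   printf("Minimum number of scalar multiplications: %d\n", m[1][N]);
   printf("Optimal parenthesization: ");
   printOptimal(1, N);
   printf("\n");
   return 0;
}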

DAA – Randomized Algorithms

Randomized Algorithms

Randomized algorithm is a different design approach taken by the standard algorithms where a few random bits are added to a part of their logic. They are different from deterministic algorithms: deterministic algorithms follow a definite procedure to get the same output every time an input is passed, whereas randomized algorithms may produce a different output every time they are executed. It is important to note that it is not the input that is randomized, but the logic of the standard algorithm.

Figure 1: Deterministic Algorithm

Unlike deterministic algorithms, randomized algorithms consider randomized bits of the logic along with the input that in turn contribute towards obtaining the output.

Figure 2: Randomized Algorithm

However, the probability of randomized algorithms providing incorrect output cannot be ruled out. Hence, a process called amplification is performed to reduce the likelihood of these erroneous outputs. Amplification is also an algorithm that is applied to execute some parts of the randomized algorithms multiple times to increase the probability of correctness. However, too much amplification can also exceed the time constraints, making the algorithm ineffective.

Classification of Randomized Algorithms

Randomized algorithms are classified based on whether they have time constraints as the random variable or deterministic values. They are designed in their two common forms − Las Vegas and Monte Carlo.

Las Vegas − The Las Vegas method of randomized algorithms never gives incorrect outputs, making the time constraint the random variable. For example, in string matching algorithms, Las Vegas algorithms start from the beginning once they encounter an error; the output is therefore always correct, and only the running time varies. E.g., Randomized Quick Sort Algorithm.

Monte Carlo − The Monte Carlo method of randomized algorithms focuses on finishing the execution within the given time constraint. Therefore, the running time of this method is deterministic. For example, in string matching, if a Monte Carlo algorithm encounters an error, it continues from the same point rather than restarting, thus saving time at the cost of a small probability of an incorrect output. E.g., Karger's Minimum Cut Algorithm.

Need for Randomized Algorithms

This approach is usually adopted to reduce the time complexity and space complexity. But there might be some ambiguity about how adding randomness will decrease the runtime and memory stored instead of increasing them; we will understand that using game theory.

The Game Theory and Randomized Algorithms

The basic idea of game theory provides a few models that help understand how decision-makers in a game interact with each other. These game-theoretical models use assumptions to figure out the decision-making structure of the players in a game. The popular assumptions made by these theoretical models are that the players are rational and take into account what the opponent player would decide to do in a particular situation of a game. We will apply this theory to randomized algorithms.

Zero-sum game

The zero-sum game is a mathematical representation of game theory. It has two players where the result is a gain for one player while it is an equivalent loss to the other player. So, the net improvement is the sum of both players' statuses, which sums up to zero.

Randomized algorithms are based on the zero-sum game of designing an algorithm that gives the lowest time complexity for all inputs.
There are two players in the game: one designs the algorithm, and the opponent provides the inputs for the algorithm. Player two needs to give the input in such a way that it yields the worst time complexity, in order to win the game. Player one, on the other hand, needs to design an algorithm that takes minimum time to execute any input given.

For example, consider the quick sort algorithm, where the main algorithm starts by selecting the pivot element. If the player in the zero-sum game chooses a sorted list as the input, the standard algorithm yields its worst-case time complexity. Therefore, randomizing the pivot selection defeats this adversarial input, as the sketch below shows. Moreover, even if the algorithm happens to choose a bad pivot randomly and hits the worst case, executing it another time with the same input resolves the problem, since a different pivot is chosen that time.

On the other hand, for algorithms like merge sort, the time complexity does not depend on the input; even if the algorithm is randomized, the time complexity remains the same. Hence, randomization is only applied to algorithms whose complexity depends on the input.

Examples

A few popular examples of randomized algorithms are −

Randomized Quick Sort Algorithm

Karger's Minimum Cut Algorithm

Fisher-Yates Shuffle Algorithm

The Subset Sum Problem
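Following the quick sort discussion above, here is a minimal C sketch of randomized pivot selection; the structure is the standard Lomuto partition with one extra random swap, and the names are illustrative −

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void swap(int *a, int *b) {
   int t = *a;
   *a = *b;
   *b = t;
}

// Lomuto partition around arr[high] as the pivot
int partition(int arr[], int low, int high) {
   int pivot = arr[high];
   int i = low - 1;
   for (int j = low; j < high; j++)
      if (arr[j] < pivot)
         swap(&arr[++i], &arr[j]);
   swap(&arr[i + 1], &arr[high]);
   return i + 1;
}

// Pick a random pivot first, so no fixed input can force the worst case
int randomizedPartition(int arr[], int low, int high) {
   int r = low + rand() % (high - low + 1);
   swap(&arr[r], &arr[high]);
   return partition(arr, low, high);
}

void randomizedQuickSort(int arr[], int low, int high) {
   if (low < high) {
      int p = randomizedPartition(arr, low, high);
      randomizedQuickSort(arr, low, p - 1);
      randomizedQuickSort(arr, p + 1, high);
   }
}

int main(void) {
   srand((unsigned)time(NULL));
   int arr[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 }; // a sorted, adversarial input is harmless here
   int n = sizeof(arr) / sizeof(arr[0]);
   randomizedQuickSort(arr, 0, n - 1);
   for (int i = 0; i < n; i++)
      printf("%d ", arr[i]); // prints 1 2 3 4 5 6 7 8 9
   printf("\n");
   return 0;
}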

DAA – Dijkstra's Shortest Path Algorithm

Dijkstra's Shortest Path Algorithm

Dijkstra's shortest path algorithm is similar to Prim's algorithm, as they both rely on finding the shortest path locally to achieve the global solution. However, unlike Prim's algorithm, Dijkstra's algorithm does not find the minimum spanning tree; it is designed to find the shortest path in the graph from one vertex to the other remaining vertices in the graph. Dijkstra's algorithm can be performed on both directed and undirected graphs.

Since the shortest path can be calculated from a single source vertex to all the other vertices in the graph, Dijkstra's algorithm is also called the single-source shortest path algorithm. The output obtained is called the shortest path spanning tree. In this chapter, we will learn about the greedy approach of Dijkstra's algorithm.

Dijkstra's Algorithm

Dijkstra's algorithm is designed to find the shortest path between two vertices of a graph. These two vertices could either be adjacent or the farthest points in the graph. The algorithm starts from the source. The inputs taken by the algorithm are the graph G {V, E}, where V is the set of vertices and E is the set of edges, and the source vertex S. The output is the shortest path spanning tree.

Algorithm

Declare two arrays − distance[] to store the distances from the source vertex to the other vertices in the graph, and visited[] to store the visited vertices.

Set distance[S] to 0 and distance[v] = ∞, where v represents all the other vertices in the graph.

Add S to the visited[] array and find the adjacent vertex of S with the minimum distance. This adjacent vertex, say A, has the minimum distance and is not in the visited array yet; A is picked and added to the visited array, and the distance of A is changed from ∞ to the assigned distance of A, say d1, where d1 < ∞.

Repeat the process for the adjacent vertices of the visited vertices until the shortest path spanning tree is formed.

Example

To understand Dijkstra's concept better, let us analyze the algorithm with the help of an example graph −

Step 1

Initialize the distances of all the vertices as ∞, except the source node S.

Vertex     S    A    B    C    D    E
Distance   0    ∞    ∞    ∞    ∞    ∞

Now that the source vertex S is visited, add it into the visited array.

visited = {S}

Step 2

The vertex S has three adjacent vertices with various distances, and the vertex with minimum distance among them all is A. Hence, A is visited and dist[A] is changed from ∞ to 6.

S → A = 6
S → D = 8
S → E = 7

Vertex     S    A    B    C    D    E
Distance   0    6    ∞    ∞    8    7

visited = {S, A}

Step 3

There are two vertices in the visited array; therefore, the adjacent vertices must be checked for both of them. Vertex S has two more adjacent vertices to be visited yet, D and E, and vertex A has one adjacent vertex, B. Calculate the distances from S to D, E and B, and select the minimum distance −

S → D = 8 and S → E = 7
S → B = S → A + A → B = 6 + 9 = 15

The minimum is E with distance 7, so E is visited.

Vertex     S    A    B    C    D    E
Distance   0    6    15   ∞    8    7

visited = {S, A, E}

Step 4

Calculate the distances of the adjacent vertices of all the visited vertices − S, A, E − and select the vertex with minimum distance, which is D.

S → D = 8
S → B = 15
S → C = S → E + E → C = 7 + 5 = 12

Vertex     S    A    B    C    D    E
Distance   0    6    15   12   8    7

visited = {S, A, E, D}

Step 5

Recalculate the distances of the unvisited vertices, and if a distance smaller than the existing distance is found, replace the value in the distance array.
S → C = S → E + E → C = 7 + 5 = 12
S → C = S → D + D → C = 8 + 3 = 11
dist[C] = minimum (12, 11) = 11

S → B = S → A + A → B = 6 + 9 = 15
S → B = S → D + D → C + C → B = 8 + 3 + 12 = 23
dist[B] = minimum (15, 23) = 15

Vertex     S    A    B    C    D    E
Distance   0    6    15   11   8    7

visited = {S, A, E, D, C}

Step 6

The remaining unvisited vertex in the graph is B, with the minimum distance 15; it is added to the output spanning tree. The shortest path spanning tree from S is now complete, with final distances A = 6, B = 15, C = 11, D = 8 and E = 7.
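The walkthrough above can be reproduced with a short C sketch. The adjacency matrix below was reconstructed from the edge costs quoted in the example (S-A = 6, S-D = 8, S-E = 7, A-B = 9, E-C = 5, D-C = 3, C-B = 12), since the original figure is not shown; treat it as an assumption −

#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

#define V 6 // vertices S, A, B, C, D, E mapped to indices 0..5

// Index of the unvisited vertex with the smallest tentative distance
int minDistance(const int dist[], const bool visited[]) {
   int min = INT_MAX, minIndex = -1;
   for (int v = 0; v < V; v++)
      if (!visited[v] && dist[v] < min) {
         min = dist[v];
         minIndex = v;
      }
   return minIndex;
}

void dijkstra(const int graph[V][V], int src, int dist[]) {
   bool visited[V] = { false };
   for (int i = 0; i < V; i++)
      dist[i] = INT_MAX;
   dist[src] = 0;
   for (int count = 0; count < V; count++) {
      int u = minDistance(dist, visited);
      if (u == -1) break; // remaining vertices are unreachable
      visited[u] = true;
      // Relax every edge (u, v): keep the shorter of the known paths
      for (int v = 0; v < V; v++)
         if (graph[u][v] && !visited[v] && dist[u] + graph[u][v] < dist[v])
            dist[v] = dist[u] + graph[u][v];
   }
}

int main(void) {
   // Adjacency matrix of the example graph (0 means no edge)
   const int graph[V][V] = {
      // S  A   B   C  D  E
      {  0, 6,  0,  0, 8, 7 }, // S
      {  6, 0,  9,  0, 0, 0 }, // A
      {  0, 9,  0, 12, 0, 0 }, // B
      {  0, 0, 12,  0, 3, 5 }, // C
      {  8, 0,  0,  3, 0, 0 }, // D
      {  7, 0,  0,  5, 0, 0 }  // E
   };
   const char *names = "SABCDE";
   int dist[V];
   dijkstra(graph, 0, dist);
   for (int i = 0; i < V; i++)
      printf("%c: %d\n", names[i], dist[i]); // S:0 A:6 B:15 C:11 D:8 E:7
   return 0;
}

Running it reproduces the final distance row of Step 6: A = 6, B = 15, C = 11, D = 8, E = 7.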

DAA – Divide & Conquer Algorithm

Divide & Conquer Algorithm

Using the divide and conquer approach, the problem in hand is divided into smaller sub-problems, and then each sub-problem is solved independently. When we keep dividing the sub-problems into even smaller sub-problems, we may eventually reach a stage where no more division is possible. Those smallest possible sub-problems are solved directly, since solving them takes lesser time. The solutions of all sub-problems are finally merged in order to obtain the solution of the original problem.

Broadly, we can understand the divide-and-conquer approach as a three-step process.

Divide/Break

This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the original problem. This step generally takes a recursive approach to divide the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic in size but still represent some part of the actual problem.

Conquer/Solve

This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the problems are considered 'solved' on their own.

Merge/Combine

When the smaller sub-problems are solved, this stage recursively combines them until they formulate a solution of the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely that they appear as one.

Arrays as Input

There are various ways in which algorithms can take input such that they can be solved using the divide and conquer technique. Arrays are one of them. In algorithms that require input to be in the form of a list, like various sorting algorithms, array data structures are most commonly used.

In a sorting algorithm, for instance, the array input is divided into sub-problems until they cannot be divided further; the sub-problems are then sorted (the conquer step) and merged to rebuild the solution for the original array (the combine step), as the merge sort sketch after this section shows. Since arrays are indexed and linear data structures, sorting algorithms most popularly use array data structures to receive input.

Linked Lists as Input

Another data structure that can be used to take input for divide and conquer algorithms is a linked list (for example, merge sort using linked lists). Like arrays, linked lists are also linear data structures that store data sequentially. Consider the merge sort algorithm on a linked list: following the very popular tortoise and hare algorithm, the list is divided until it cannot be divided further. Then, the nodes in the list are sorted (conquered). These nodes are then combined (merged) recursively until the final solution is achieved.

Various searching algorithms can also be performed on linked list data structures with a slightly different technique, as linked lists are not indexed linear data structures. They must be handled using the pointers available in the nodes of the list.

Pros and cons of Divide and Conquer Approach

The divide and conquer approach supports parallelism, as sub-problems are independent. Hence, an algorithm designed using this technique can run on a multiprocessor system or on different machines simultaneously.

In this approach, most of the algorithms are designed using recursion, hence memory usage is very high. Since a stack is used for recursive function calls, the function state needs to be stored there.
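Here is the merge sort sketch referred to above: a minimal C version in which each function maps onto one of the three steps. The array contents are illustrative −

#include <stdio.h>
#include <string.h>

// Combine step: merge the two sorted halves arr[l..m] and arr[m+1..r]
void merge(int arr[], int l, int m, int r) {
   int tmp[r - l + 1];
   int i = l, j = m + 1, k = 0;
   while (i <= m && j <= r)
      tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
   while (i <= m) tmp[k++] = arr[i++];
   while (j <= r) tmp[k++] = arr[j++];
   memcpy(&arr[l], tmp, sizeof(tmp));
}

// Divide step: split until sub-arrays are atomic, then merge back
void mergeSort(int arr[], int l, int r) {
   if (l >= r) return; // a single element is already "solved" (conquer step)
   int m = l + (r - l) / 2;
   mergeSort(arr, l, m);     // solve the left half
   mergeSort(arr, m + 1, r); // solve the right half
   merge(arr, l, m, r);      // combine the two solved halves
}

int main(void) {
   int arr[] = { 38, 27, 43, 3, 9, 82, 10 };
   int n = sizeof(arr) / sizeof(arr[0]);
   mergeSort(arr, 0, n - 1);
   for (int i = 0; i < n; i++)
      printf("%d ", arr[i]); // prints 3 9 10 27 38 43 82
   printf("\n");
   return 0;
}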
Examples of Divide and Conquer Approach

The following computer algorithms are based on the divide-and-conquer programming approach −

Max-Min Problem

Merge Sort Algorithm

Quick Sort Algorithm

Binary Search Algorithm

Strassen's Matrix Multiplication

Karatsuba Algorithm

Closest pair (points)

There are various ways available to solve any computer problem, but the ones mentioned are a good example of the divide and conquer approach.