DAA – Insertion Sort Algorithm

Insertion sort is a simple method for sorting numbers in ascending or descending order. It follows the incremental approach and can be compared with the way cards are sorted in hand while playing a game. It is an in-place, comparison-based sorting algorithm. Here, a sub-list is maintained which is always sorted; for example, the lower part of the array is kept sorted. An element that is to be "inserted" into this sorted sub-list has to find its appropriate place and is then inserted there. Hence the name, insertion sort.

The array is scanned sequentially and unsorted items are moved and inserted into the sorted sub-list (in the same array). This algorithm is not suitable for large data sets, as its average and worst case complexity are O(n²), where n is the number of items.

Insertion Sort Algorithm

Now that we have a bigger picture of how this sorting technique works, we can derive the simple steps by which insertion sort is achieved.

Step 1 − If it is the first element, it is already sorted.
Step 2 − Pick the next element.
Step 3 − Compare it with all elements in the sorted sub-list.
Step 4 − Shift all the elements in the sorted sub-list that are greater than the value to be sorted.
Step 5 − Insert the value.
Step 6 − Repeat until the list is sorted.

Pseudocode

Algorithm: Insertion-Sort(A)
for j = 2 to A.length
   key = A[j]
   i = j - 1
   while i > 0 and A[i] > key
      A[i + 1] = A[i]
      i = i - 1
   A[i + 1] = key

Analysis

The run time of this algorithm depends heavily on the given input. If the given numbers are already sorted, the algorithm runs in O(n) time. If the given numbers are in reverse order, it runs in O(n²) time.

Example

We take an unsorted array for our example, whose first elements are 14, 33, 27 and 10.

Insertion sort compares the first two elements. It finds that 14 and 33 are already in ascending order, so for now 14 is in the sorted sub-list.

Insertion sort moves ahead and compares 33 with 27, and finds that 33 is not in the correct position. It swaps 33 with 27. It also checks all the elements of the sorted sub-list. Here the sorted sub-list has only one element, 14, and 27 is greater than 14. Hence, the sorted sub-list remains sorted after the swap.

By now we have 14 and 27 in the sorted sub-list. Next, insertion sort compares 33 with 10. These values are not in sorted order, so they are swapped. However, the swap makes 27 and 10 unsorted, so we swap them too. Again, 14 and 10 are in unsorted order, so we swap them as well. By the end of the third iteration, we have a sorted sub-list of 4 items.

This process goes on until all the unsorted values are covered by the sorted sub-list. Now we shall see some programming aspects of insertion sort.

Implementation

Since insertion sort is an in-place sorting algorithm, it is implemented by choosing each element of the array in turn as the key element and comparing it with the elements before it to find its position. If the key element is not smaller than its predecessor, no shifting is done; otherwise, the greater elements are shifted one place to the right and the key is inserted into the gap.
Insertion sort is implemented below in C, C++ and Java −

C

#include <stdio.h>
void insertionSort(int array[], int size){
   int key, j;
   for(int i = 1; i < size; i++) {
      key = array[i]; // take value
      j = i;
      while(j > 0 && array[j-1] > key) {
         array[j] = array[j-1];
         j--;
      }
      array[j] = key; // insert in right place
   }
}
int main(){
   int n = 5;
   int arr[5] = {67, 44, 82, 17, 20}; // initialize the array
   printf("Array before Sorting: ");
   for(int i = 0; i < n; i++)
      printf("%d ", arr[i]);
   printf("\n");
   insertionSort(arr, n);
   printf("Array after Sorting: ");
   for(int i = 0; i < n; i++)
      printf("%d ", arr[i]);
   printf("\n");
}

Output

Array before Sorting: 67 44 82 17 20
Array after Sorting: 17 20 44 67 82

C++

#include<iostream>
using namespace std;
void insertionSort(int *array, int size){
   int key, j;
   for(int i = 1; i < size; i++) {
      key = array[i]; // take value
      j = i;
      while(j > 0 && array[j-1] > key) {
         array[j] = array[j-1];
         j--;
      }
      array[j] = key; // insert in right place
   }
}
int main(){
   int n = 5;
   int arr[5] = {67, 44, 82, 17, 20}; // initialize the array
   cout << "Array before Sorting: ";
   for(int i = 0; i < n; i++)
      cout << arr[i] << " ";
   cout << endl;
   insertionSort(arr, n);
   cout << "Array after Sorting: ";
   for(int i = 0; i < n; i++)
      cout << arr[i] << " ";
   cout << endl;
}

Output

Array before Sorting: 67 44 82 17 20
Array after Sorting: 17 20 44 67 82

Java

import java.io.*;
public class InsertionSort {
   public static void main(String args[]) {
      int n = 5;
      int[] arr = {67, 44, 82, 17, 20}; // initialize an array
      System.out.print("Array before Sorting: ");
      for(int i = 0; i < n; i++)
         System.out.print(arr[i] + " ");
      System.out.println();
      for(int i = 1; i < n; i++) {
         int key = arr[i]; // take value
         int j = i;
         while(j > 0 && arr[j-1] > key) {
            arr[j] = arr[j-1];
            j--;
         }
         arr[j] = key; // insert in right place
      }
      System.out.print("Array after Sorting: ");
      for(int i = 0; i < n; i++)
         System.out.print(arr[i] + " ");
      System.out.println();
   }
}

Output

Array before Sorting: 67 44 82 17 20
Array after Sorting: 17 20 44 67 82
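A quick way to confirm the analysis above is to count how many element shifts the inner loop performs on a sorted input versus a reversed one. The following is a minimal C sketch of our own (the helper insertionSortSteps and the input size are illustrative, not part of the tutorial's listings) −

#include <stdio.h>

// Count how many times the inner loop shifts an element.
long insertionSortSteps(int a[], int n) {
   long steps = 0;
   for (int i = 1; i < n; i++) {
      int key = a[i], j = i;
      while (j > 0 && a[j - 1] > key) {
         a[j] = a[j - 1];
         j--;
         steps++; // one shift of an element
      }
      a[j] = key;
   }
   return steps;
}

int main(void) {
   enum { N = 1000 };
   int sorted[N], reversed[N];
   for (int i = 0; i < N; i++) {
      sorted[i] = i;       // already ascending: best case
      reversed[i] = N - i; // descending: worst case
   }
   printf("shifts on sorted input:   %ld\n", insertionSortSteps(sorted, N));
   printf("shifts on reversed input: %ld\n", insertionSortSteps(reversed, N));
   // Expected: 0 versus N*(N-1)/2 = 499500
   return 0;
}

For a sorted array the loop body never runs, while a reversed array of n elements performs n(n − 1)/2 shifts, matching the O(n) and O(n²) bounds stated above.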

DAA – Approximation Algorithms

Approximation Algorithms

Approximation algorithms are algorithms designed to find, in polynomial time, approximate solutions to problems that cannot be solved exactly in polynomial time. Such problems are known as NP-complete problems. Since many of them model real-world problems effectively, it becomes important to solve them using a different approach. NP-complete problems can still be handled in three cases: the input may be so small that the execution time stays acceptable; some instances may fall into subclasses that can be solved in polynomial time; or approximation algorithms can be used to find near-optimal solutions. This leads to the concept of performance ratios of an approximation algorithm.

Performance Ratios

The main idea behind calculating the performance ratio of an approximation algorithm, also called its approximation ratio, is to measure how close the approximate solution is to the optimal solution. The approximation ratio is represented using ρ(n), where n is the input size of the algorithm, C is the near-optimal cost obtained by the algorithm, and C* is the optimal cost for the problem. The algorithm has an approximation ratio of ρ(n) if and only if −

$$\max\left\{\frac{C}{C^{\ast}}, \frac{C^{\ast}}{C}\right\} \leq \rho(n)$$

The algorithm is then called a ρ(n)-approximation algorithm.

Approximation algorithms can be applied to two types of optimization problems: minimization problems and maximization problems. If the optimal solution of the problem is to find the maximum cost, the problem is known as a maximization problem; if the optimal solution is to find the minimum cost, then it is a minimization problem. For maximization problems, the approximation ratio is C*/C, since 0 ≤ C ≤ C*. For minimization problems, it is C/C*, since 0 ≤ C* ≤ C. Assuming that the costs of approximation algorithms are all positive, the performance ratio is well defined and never less than 1. If its value is 1, the approximation algorithm produces the exact optimal solution.

Examples

A few popular examples of approximation algorithms are −

Vertex Cover Algorithm
Set Cover Problem
Travelling Salesman Problem (Approximation Approach)
The Subset Sum Problem
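To make the definition of ρ(n) concrete, the following is a minimal C sketch of our own (the helper name and the costs are made-up illustrations): it evaluates the ratio for a minimization instance, say a vertex cover of size 6 found for a graph whose optimal cover has size 4 −

#include <stdio.h>

// Performance ratio rho = max(C/C*, C*/C), assuming positive costs.
double performance_ratio(double approx, double optimal) {
   double r1 = approx / optimal;
   double r2 = optimal / approx;
   return r1 > r2 ? r1 : r2;
}

int main(void) {
   // C = 6 (approximate cover size), C* = 4 (optimal cover size)
   printf("rho = %.2f\n", performance_ratio(6.0, 4.0));
   return 0;
}

The printed ratio of 1.50 lies within the bound ρ(n) = 2 guaranteed by the vertex cover approximation algorithm discussed later in this tutorial.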

DAA – Fisher-Yates Shuffle Algorithm

The Fisher-Yates shuffle algorithm shuffles a finite sequence of elements by generating a random permutation, with every permutation equally likely to occur. The algorithm can be pictured as storing the elements of the sequence in a sack and drawing each element randomly from the sack to form the shuffled sequence.

Named after Ronald Fisher and Frank Yates, who designed the original method of the shuffle, the algorithm is unbiased: it generates all permutations under the same conditions, so the output is not influenced in any way. However, the modern version of the Fisher-Yates algorithm is more efficient than the original one.

Fisher-Yates Algorithm

The Original Method

The original method of the shuffle algorithm involved a pen and paper to generate a random permutation of a finite sequence. The algorithm to generate the random permutation is as follows −

Step 1 − Write down all the elements in the finite sequence. Declare a separate list to store the output.
Step 2 − Choose an element i randomly from the input sequence and add it to the output list. Mark the element i as visited.
Step 3 − Repeat Step 2 until all the elements in the finite sequence have been visited and added to the output list randomly.
Step 4 − The output list generated after the process terminates is the random permutation.

The Modern Algorithm

The modern algorithm is a slightly modified version of the original Fisher-Yates shuffle. The main goal of the modification is to computerize the original algorithm and reduce its time complexity. The modern method was developed by Richard Durstenfeld and popularized by Donald E. Knuth. It uses swapping instead of maintaining a separate output list for the random permutation, which reduces the time complexity to O(n) from O(n²). The algorithm goes as follows −

Step 1 − Write down the elements 1 to n in the finite sequence.
Step 2 − Choose an element i randomly from the input sequence and swap it with the last unvisited element in the list.
Step 3 − Repeat Step 2 until all the elements in the finite sequence have been visited and swapped.
Step 4 − The list generated after the process terminates is the random permutation sequence.

Pseudocode

Shuffling is done from the highest index to the lowest index of the array in the following modern-method pseudocode.

Fisher-Yates Shuffle (array of n elements):
for i from n−1 down to 1 do
   j ← random integer such that 0 ≤ j ≤ i
   exchange a[j] and a[i]

Shuffling is done from the lowest index to the highest index of the array in the following modern-method pseudocode.

Fisher-Yates Shuffle (array of n elements):
for i from 0 to n−2 do
   j ← random integer such that i ≤ j < n
   exchange a[i] and a[j]

Original Method Example

To describe the algorithm better, let us permute the given finite sequence of the first six letters of the alphabet.

Input sequence: A B C D E F

Step 1 − This is known as the pen-and-paper method. We consider an input array holding the finite sequence and an output array to store the result.

Step 2 − Choose any element randomly, mark it as checked, and add it to the output list. In this case, we choose element C.

Step 3 − The next element chosen randomly is E, which is marked and added to the output list.

Step 4 − The random function then picks the next element, A, marks it visited and adds it to the output array.
Step 5 − Then F is selected from the remaining elements in the input sequence, marked visited and added to the output.

Step 6 − The next element chosen for the random permutation is D. It is marked and added to the output array.

Step 7 − The last element remaining in the input list is B, so finally it is marked and added to the output list.

Modern Method Example

The modern algorithm was introduced to reduce the time complexity of the original method. The modern method uses swapping to shuffle the sequence − for example, the algorithm works like shuffling a pack of cards by swapping their places in the original deck. Let us look at an example to understand how the modern version of the Fisher-Yates algorithm works.

Step 1 − Consider the first few letters of the alphabet as the input, and shuffle them using the modern method.

Step 2 − Randomly choose the element D and swap it with the last unmarked element in the sequence, in this case F.

Step 3 − For the next step, we choose element B to swap with the last unmarked element, E, since F had moved to D's place after the swap in the previous step.

Step 4 − We next swap the element A with F, since it is the last unmarked element in the list.

Step 5 − Then the element F is swapped with the last unmarked element, C.

Step 6 − The remaining elements in the sequence could be swapped, but since the random function chose E as the element, it is left as it is.

Step 7 − The remaining element, C, is left as it is without swapping. The array obtained after the swaps is the final output array.

Example
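Following is a minimal C sketch of the modern (Durstenfeld) method, shuffling from the highest index downwards as in the first pseudocode above. It is our own illustration rather than the tutorial's original listing; rand() is used for brevity, ignoring its slight modulo bias −

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Modern Fisher-Yates shuffle: iterate from the highest index down,
// swapping a[i] with a uniformly random a[j], 0 <= j <= i.
void fisherYatesShuffle(char a[], int n) {
   for (int i = n - 1; i >= 1; i--) {
      int j = rand() % (i + 1); // random index in [0, i]
      char tmp = a[i];
      a[i] = a[j];
      a[j] = tmp;
   }
}

int main(void) {
   char seq[] = {'A', 'B', 'C', 'D', 'E', 'F'};
   int n = 6;
   srand((unsigned) time(NULL)); // seed the generator once
   fisherYatesShuffle(seq, n);
   printf("Shuffled sequence: ");
   for (int i = 0; i < n; i++)
      printf("%c ", seq[i]);
   printf("\n");
   return 0;
}

Each run prints one of the 720 equally likely permutations of A B C D E F.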

DAA – Randomized Quick Sort Algorithm

Quicksort is a popular sorting algorithm that chooses a pivot element and sorts the input list around that pivot. Randomized quick sort is designed to decrease the chance of the algorithm being executed in its worst case time complexity of O(n²). The worst case of quick sort arises when the input is an already sorted list, leading to n(n − 1)/2 comparisons. There are two ways to randomize the quicksort −

Randomly shuffling the inputs: randomization is done on the input list so that a sorted input is jumbled again, which avoids the worst case. However, this is not how randomized quick sort is usually performed.

Randomly choosing the pivot element: making the pivot element a random variable is the commonly used method in randomized quick sort. Here, even if the input is sorted, the pivot is chosen randomly, so the worst case is avoided.

Randomized Quick Sort Algorithm

The algorithm exactly follows the standard quick sort algorithm, except that it randomizes the pivot selection.

Pseudocode

partition-left(arr[], low, high)
   pivot = arr[high]
   i = low   // place for swapping
   for j := low to high - 1 do
      if arr[j] <= pivot then
         swap arr[i] with arr[j]
         i = i + 1
   swap arr[i] with arr[high]
   return i

partition-right(arr[], low, high)
   r = random number from low to high
   swap arr[r] and arr[high]
   return partition-left(arr, low, high)

quicksort(arr[], low, high)
   if low < high
      p = partition-right(arr, low, high)
      quicksort(arr, low, p - 1)
      quicksort(arr, p + 1, high)

Example

Let us look at an example to understand how randomized quicksort avoids the worst case time complexity. Since randomized algorithms are designed to decrease the occurrence of the worst case, let us take a sorted list as the input for this example.

The sorted input list is 3, 5, 7, 8, 12, 15. We need to apply the quick sort algorithm to sort it.

Step 1 − Considering the worst case possible, if the random pivot chosen is also the highest number, it is compared with all the other numbers and another pivot is selected. Since 15 is greater than all the other numbers in the list, it is not swapped, and another pivot is chosen.

Step 2 − This time, suppose the random pivot function chooses 7 as the pivot. The pivot now divides the list into two halves, so standard quick sort is carried out as usual, and the running time is lower than the worst case. It is to be noted that the worst case time complexity of quick sort always remains O(n²), but with randomization we decrease the occurrence of that worst case.
Implementation

Following are the implementations of the above approach in various programming languages −

C

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Function to swap two elements
void swap(int* a, int* b) {
   int t = *a;
   *a = *b;
   *b = t;
}

// Function to partition the array
int partition_left(int arr[], int low, int high) {
   int pivot = arr[high];
   int i = low;
   for (int j = low; j < high; j++) {
      if (arr[j] <= pivot) {
         swap(&arr[i], &arr[j]);
         i++;
      }
   }
   swap(&arr[i], &arr[high]);
   return i;
}

// Function to perform random partition
int partition_right(int arr[], int low, int high) {
   int r = low + rand() % (high - low); // random index in [low, high-1]
   swap(&arr[r], &arr[high]);
   return partition_left(arr, low, high);
}

// Recursive function for quicksort
void quicksort(int arr[], int low, int high) {
   if (low < high) {
      int p = partition_right(arr, low, high);
      quicksort(arr, low, p - 1);
      quicksort(arr, p + 1, high);
   }
}

// Function to print the array
void printArray(int arr[], int size) {
   for (int i = 0; i < size; i++)
      printf("%d ", arr[i]);
   printf("\n");
}

// Driver code
int main() {
   srand(time(NULL)); // seed the random generator once
   int arr[] = {6, 4, 12, 8, 15, 16};
   int n = sizeof(arr) / sizeof(arr[0]);
   printf("Original array: ");
   printArray(arr, n);
   quicksort(arr, 0, n - 1);
   printf("Sorted array: ");
   printArray(arr, n);
   return 0;
}

Output

Original array: 6 4 12 8 15 16
Sorted array: 4 6 8 12 15 16

C++

#include <iostream>
#include <cstdlib>
#include <ctime>

// Function to swap two elements
void swap(int arr[], int i, int j) {
   int temp = arr[i];
   arr[i] = arr[j];
   arr[j] = temp;
}

// Function to partition the array
int partitionLeft(int arr[], int low, int high) {
   int pivot = arr[high];
   int i = low;
   for (int j = low; j < high; j++) {
      if (arr[j] <= pivot) {
         swap(arr, i, j);
         i++;
      }
   }
   swap(arr, i, high);
   return i;
}

// Function to perform random partition
int partitionRight(int arr[], int low, int high) {
   int r = low + rand() % (high - low); // random index in [low, high-1]
   swap(arr, r, high);
   return partitionLeft(arr, low, high);
}

// Recursive function for quicksort
void quicksort(int arr[], int low, int high) {
   if (low < high) {
      int p = partitionRight(arr, low, high);
      quicksort(arr, low, p - 1);
      quicksort(arr, p + 1, high);
   }
}

// Function to print the array
void printArray(int arr[], int size) {
   for (int i = 0; i < size; i++)
      std::cout << arr[i] << " ";
   std::cout << std::endl;
}

// Driver code
int main() {
   srand(time(NULL)); // seed the random generator once
   int arr[] = {6, 4, 12, 8, 15, 16};
   int n = sizeof(arr) / sizeof(arr[0]);
   std::cout << "Original array: ";
   printArray(arr, n);
   quicksort(arr, 0, n - 1);
   std::cout << "Sorted array: ";
   printArray(arr, n);
   return 0;
}

Output

Original array: 6 4 12 8 15 16
Sorted array: 4 6 8 12 15 16

DAA – Selection Sort Algorithm

Selection sort is a simple sorting algorithm. Like insertion sort, it is an in-place comparison-based algorithm in which the list is divided into two parts, the sorted part at the left end and the unsorted part at the right end. Initially, the sorted part is empty and the unsorted part is the entire list.

The smallest element is selected from the unsorted array and swapped with the leftmost element, and that element becomes part of the sorted array. This process continues, moving the unsorted array boundary one element to the right. This algorithm is not suitable for large data sets, as its average and worst case complexities are O(n²), where n is the number of items.

Selection Sort Algorithm

This type of sorting is called selection sort because it works by repeatedly selecting elements. That is, we first find the smallest value in the array and exchange it with the element in the first position, then find the second smallest element and exchange it with the element in the second position, and we continue the process in this way until the entire array is sorted.

1. Set MIN to location 0.
2. Search for the minimum element in the list.
3. Swap it with the value at location MIN.
4. Increment MIN to point to the next element.
5. Repeat until the list is sorted.

Pseudocode

Algorithm: Selection-Sort (A)
for i ← 1 to n - 1 do
   min_j ← i
   min_x ← A[i]
   for j ← i + 1 to n do
      if A[j] < min_x then
         min_j ← j
         min_x ← A[j]
   A[min_j] ← A[i]
   A[i] ← min_x

Analysis

Selection sort is among the simplest of sorting techniques, and it works very well for small files. It also has an important application: since each item is moved at most once, selection sort is a method of choice for sorting files with very large objects (records) and small keys.

The worst case occurs when the array is already sorted in descending order and we want to sort it in ascending order. Nonetheless, the time required by the selection sort algorithm is not very sensitive to the original order of the array: the test "if A[j] < min_x" is executed exactly the same number of times in every case.

Selection sort spends most of its time trying to find the minimum element in the unsorted part of the array. This clearly shows the similarity between selection sort and bubble sort: bubble sort selects the maximum remaining element at each stage, but wastes some effort imparting order to the unsorted part of the array.

Selection sort is quadratic in both the worst and the average case, and requires no extra memory. For each i from 1 to n − 1, there is one exchange and n − i comparisons, so there is a total of n − 1 exchanges and (n − 1) + (n − 2) + … + 2 + 1 = n(n − 1)/2 comparisons. These observations hold no matter what the input data is. Only the number of times the running minimum is updated varies: in the worst case it could be quadratic, but in the average case it is O(n log n). This implies that the running time of selection sort is quite insensitive to the input.

Example

Consider as an example an array that begins with the elements 14, 33, 27, 10, with 10 the smallest value in the list.

For the first position in the sorted list, the whole list is scanned sequentially. While 14 is currently stored at the first position, we search the whole list and find that 10 is the lowest value. So we replace 14 with 10. After one iteration, 10, which happens to be the minimum value in the list, appears in the first position of the sorted list.

For the second position, where 33 is residing, we start scanning the rest of the list in a linear manner.
We find that 14 is the second lowest value in the list and that it should appear at the second place. We swap these values. After two iterations, the two smallest values are positioned at the beginning in sorted order. The same process is applied to the rest of the items in the array.

Implementation

The selection sort algorithm is implemented below. The given program selects the minimum element of the array and swaps it with the element at the first index. The second minimum is then swapped with the element at the second index. The process goes on until the end of the array is reached.

C

#include <stdio.h>
void selectionSort(int array[], int size){
   int i, j, imin;
   for(i = 0; i < size-1; i++) {
      imin = i; // get index of minimum data
      for(j = i+1; j < size; j++)
         if(array[j] < array[imin])
            imin = j;
      // placing in correct position
      int temp = array[i];
      array[i] = array[imin];
      array[imin] = temp;
   }
}
int main(){
   int n = 5;
   int arr[5] = {12, 19, 55, 2, 16}; // initialize the array
   printf("Array before Sorting: ");
   for(int i = 0; i < n; i++)
      printf("%d ", arr[i]);
   printf("\n");
   selectionSort(arr, n);
   printf("Array after Sorting: ");
   for(int i = 0; i < n; i++)
      printf("%d ", arr[i]);
   printf("\n");
}

Output

Array before Sorting: 12 19 55 2 16
Array after Sorting: 2 12 16 19 55

C++

#include<iostream>
using namespace std;
void swapping(int &a, int &b) {
   // swap the content of a and b
   int temp = a;
   a = b;
   b = temp;
}

DAA – Map Colouring Algorithm

The map colouring problem states that, given a graph G = (V, E), where V and E are the sets of vertices and edges of the graph, all vertices in V need to be coloured in such a way that no two adjacent vertices have the same colour. Real-world applications of this algorithm include assigning mobile radio frequencies, making schedules, designing Sudoku puzzles, allocating registers, etc.

Map Colouring Algorithm

The map colouring algorithm takes a graph G and the colours to be applied as input, and produces a coloured graph in which no two adjacent vertices share the same colour.

Algorithm

1. Initialize all the vertices in the graph.
2. Select the node with the highest degree and colour it with any colour.
3. Choose the colour to be used with the help of the colour-selection function, so that no adjacent vertex has the same colour.
4. Check whether the colour can be added; if it can, add it to the solution set.
5. Repeat the process from step 2 until the output set is ready.

Example

Step 1 − Find the degrees of all the vertices −

A − 4
B − 2
C − 2
D − 3
E − 3

Step 2 − Choose the vertex with the highest degree to colour first, i.e., A, and choose a colour using the colour-selection function. Check whether the colour can be added to the vertex; since it can, add it to the solution set.

Step 3 − Select the vertex with the next highest degree from the remaining vertices and colour it using the colour-selection function. D and E both have the next highest degree, 3, so choose either of them, say D. D is adjacent to A, therefore it cannot be coloured with the same colour as A. Hence, choose a different colour using the colour-selection function.

Step 4 − The next highest degree vertex is E, hence choose E. E is adjacent to both A and D, therefore it cannot be coloured with the colours of A or D. Choose a different colour using the colour-selection function.

Step 5 − The next highest degree vertices are B and C; choose either of them at random. B is adjacent to both A and E, so it cannot take their colours, but it is not adjacent to D, so it can be coloured with D's colour.

Step 6 − The last remaining vertex is C, which is adjacent to both A and D, so it cannot use their colours. But it is not adjacent to E, so it can be coloured with E's colour.

Example

Following is an implementation of graph colouring in various programming languages, where a graph is coloured by backtracking over a given number of colours so that no two adjacent vertices have the same colour.
C

#include <stdio.h>
#include <stdbool.h>
#define V 4
bool graph[V][V] = {
   {0, 1, 1, 0},
   {1, 0, 1, 1},
   {1, 1, 0, 1},
   {0, 1, 1, 0},
};
// check whether putting colour c on vertex v is valid
bool isValid(int v, int color[], int c){
   for (int i = 0; i < V; i++)
      if (graph[v][i] && c == color[i])
         return false;
   return true;
}
bool mColoring(int colors, int color[], int vertex){
   if (vertex == V) // when all vertices are considered
      return true;
   for (int col = 1; col <= colors; col++) {
      if (isValid(vertex, color, col)) { // check whether colour col is valid or not
         color[vertex] = col;
         if (mColoring(colors, color, vertex+1) == true) // go for additional vertices
            return true;
         color[vertex] = 0;
      }
   }
   return false; // when no colours can be assigned
}
int main(){
   int colors = 3; // number of colours
   int color[V]; // colour of each vertex
   for (int i = 0; i < V; i++)
      color[i] = 0; // initially set to 0
   if (mColoring(colors, color, 0) == false) { // check graph colouring starting from vertex 0
      printf("Solution does not exist.");
      return 0;
   }
   printf("Assigned Colors are: \n");
   for (int i = 0; i < V; i++)
      printf("%d ", color[i]);
   return 0;
}

Output

Assigned Colors are: 
1 2 3 1

C++

#include<iostream>
using namespace std;
#define V 4
bool graph[V][V] = {
   {0, 1, 1, 0},
   {1, 0, 1, 1},
   {1, 1, 0, 1},
   {0, 1, 1, 0},
};
// check whether putting colour c on vertex v is valid
bool isValid(int v, int color[], int c){
   for (int i = 0; i < V; i++)
      if (graph[v][i] && c == color[i])
         return false;
   return true;
}
bool mColoring(int colors, int color[], int vertex){
   if (vertex == V) // when all vertices are considered
      return true;
   for (int col = 1; col <= colors; col++) {
      if (isValid(vertex, color, col)) { // check whether colour col is valid or not
         color[vertex] = col;
         if (mColoring(colors, color, vertex+1) == true) // go for additional vertices
            return true;
         color[vertex] = 0;
      }
   }
   return false; // when no colours can be assigned
}
int main(){
   int colors = 3; // number of colours
   int color[V]; // colour of each vertex
   for (int i = 0; i < V; i++)
      color[i] = 0; // initially set to 0
   if (mColoring(colors, color, 0) == false) { // check graph colouring starting from vertex 0
      cout << "Solution does not exist.";
      return 0;
   }
   cout << "Assigned Colors are: \n";
   for (int i = 0; i < V; i++)
      cout << color[i] << " ";
   return 0;
}

Output

Assigned Colors are: 
1 2 3 1
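The listings above colour the graph by backtracking over a fixed number of colours. The degree-first greedy procedure described at the start of this chapter can also be sketched directly. The following is a minimal C sketch of our own; the edge set (A-B, A-C, A-D, A-E, B-E, C-D, D-E) is a reconstruction from the degrees and adjacencies quoted in the worked example, since the original figure is not shown −

#include <stdio.h>

#define N 5

// Adjacency matrix reconstructed from the worked example (A..E).
int adj[N][N] = {
   /*        A  B  C  D  E */
   /* A */ { 0, 1, 1, 1, 1 },
   /* B */ { 1, 0, 0, 0, 1 },
   /* C */ { 1, 0, 0, 1, 0 },
   /* D */ { 1, 0, 1, 0, 1 },
   /* E */ { 1, 1, 0, 1, 0 }
};

int main(void) {
   const char *names = "ABCDE";
   int degree[N] = {0}, order[N], color[N];

   // Compute degrees and order the vertices by decreasing degree.
   for (int i = 0; i < N; i++)
      for (int j = 0; j < N; j++)
         degree[i] += adj[i][j];
   for (int i = 0; i < N; i++) order[i] = i;
   for (int i = 0; i < N - 1; i++)
      for (int j = i + 1; j < N; j++)
         if (degree[order[j]] > degree[order[i]]) {
            int t = order[i]; order[i] = order[j]; order[j] = t;
         }

   // Greedily assign the smallest colour not used by a neighbour.
   for (int i = 0; i < N; i++) color[i] = 0; // 0 = uncoloured
   for (int k = 0; k < N; k++) {
      int v = order[k];
      for (int c = 1; c <= N; c++) { // try colours 1, 2, ...
         int ok = 1;
         for (int u = 0; u < N; u++)
            if (adj[v][u] && color[u] == c) { ok = 0; break; }
         if (ok) { color[v] = c; break; }
      }
   }

   for (int i = 0; i < N; i++)
      printf("%c -> colour %d\n", names[i], color[i]);
   return 0;
}

On this graph the sketch reproduces the worked example: A gets colour 1, D and B share colour 2, and E and C share colour 3.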

DAA – Vertex Cover Problem

Have you ever wondered about the placement of traffic cameras, and how they are placed efficiently without wasting too much of the government's budget? The answer to that comes in the form of the vertex cover algorithm. The positions of the cameras are chosen so that one camera covers as many roads as possible; that is, we choose junctions and make sure each camera covers as much area as possible.

A vertex cover of an undirected graph G = (V, E) is a subset V′ of the vertices such that, for every edge (u, v) of the graph, either u ∈ V′ or v ∈ V′. The junctions are treated as the nodes of a graph and the roads as its edges. The algorithm finds a small set of junctions that covers all the roads. It is a minimization problem, since we look for a vertex cover of minimum size − the size of a vertex cover being the number of vertices in it. The optimization problem is NP-complete and hence cannot be solved in polynomial time; what can be found in polynomial time is a near-optimal solution.

Vertex Cover Algorithm

The vertex cover approximation algorithm takes an undirected graph as input and produces a vertex cover whose size is guaranteed to be at most twice the size of an optimal vertex cover. It is a 2-approximation algorithm.

Algorithm

Step 1 − Select any random edge from the input graph and mark all the edges incident on the two vertices of the selected edge.
Step 2 − Add the vertices of the chosen edge to the output set.
Step 3 − Repeat Step 1 on the remaining unmarked edges of the graph, adding the chosen vertices to the output, until no edge is left unmarked.
Step 4 − The final output set is a near-optimal vertex cover.

Pseudocode

APPROX-VERTEX_COVER (G: Graph)
c ← { }
E' ← E[G]
while E' is not empty do
   Let (u, v) be an arbitrary edge of E'
   c ← c U {u, v}
   Remove from E' every edge incident on either u or v
return c

Example

The set of edges of the given graph is −

{(1,6),(1,2),(1,4),(2,3),(2,4),(6,7),(4,7),(7,8),(3,5),(8,5)}

We start by selecting an arbitrary edge, (1,6). We eliminate all the edges incident on vertex 1 or vertex 6, and we add the edge (1,6) to the cover. In the next step, we choose another edge, (2,3), at random. Then we select another edge, (4,7), and finally the edge (8,5). Hence, the vertex cover of this graph is {1, 6, 2, 3, 4, 7, 5, 8}.

Analysis

It is easy to see that the running time of this algorithm is O(V + E), using an adjacency list to represent E′.
Implementation

Following are the implementations of the above approach in various programming languages −

C

#include <stdio.h>
#include <stdbool.h>

#define MAX_VERTICES 100

int graph[MAX_VERTICES][MAX_VERTICES];
bool included[MAX_VERTICES];

// Function to find a vertex cover using the APPROX-VERTEX_COVER algorithm
void approxVertexCover(int vertices, int edges) {
   bool edgesRemaining[MAX_VERTICES][MAX_VERTICES];
   for (int i = 0; i < vertices; i++) {
      for (int j = 0; j < vertices; j++) {
         edgesRemaining[i][j] = graph[i][j];
      }
   }
   while (edges > 0) {
      int u, v;
      for (int i = 0; i < vertices; i++) {
         for (int j = 0; j < vertices; j++) {
            if (edgesRemaining[i][j]) {
               u = i;
               v = j;
               break;
            }
         }
      }
      included[u] = included[v] = true;
      for (int i = 0; i < vertices; i++) {
         edgesRemaining[u][i] = edgesRemaining[i][u] = false;
         edgesRemaining[v][i] = edgesRemaining[i][v] = false;
      }
      edges--;
   }
}

int main() {
   int vertices = 8;
   int edges = 10;
   int edgesData[10][2] = {
      {1, 6}, {1, 2}, {1, 4}, {2, 3}, {2, 4},
      {6, 7}, {4, 7}, {7, 8}, {3, 5}, {8, 5}};
   for (int i = 0; i < edges; i++) {
      int u = edgesData[i][0];
      int v = edgesData[i][1];
      graph[u][v] = graph[v][u] = 1;
   }
   approxVertexCover(vertices, edges);
   printf("Vertex Cover: ");
   for (int i = 1; i <= vertices; i++) {
      if (included[i]) {
         printf("%d ", i);
      }
   }
   printf("\n");
   return 0;
}

Output

Vertex Cover: 1 3 4 5 6 7

C++

#include <iostream>
#include <vector>
using namespace std;

const int MAX_VERTICES = 100;

vector<vector<int>> graph(MAX_VERTICES, vector<int>(MAX_VERTICES, 0));
vector<bool> included(MAX_VERTICES, false);

// Function to find a vertex cover using the APPROX-VERTEX_COVER algorithm
void approxVertexCover(int vertices, int edges) {
   vector<vector<bool>> edgesRemaining(vertices, vector<bool>(vertices, false));
   for (int i = 0; i < vertices; i++) {
      for (int j = 0; j < vertices; j++) {
         edgesRemaining[i][j] = graph[i][j];
      }
   }
   while (edges > 0) {
      int u, v;
      for (int i = 0; i < vertices; i++) {
         for (int j = 0; j < vertices; j++) {
            if (edgesRemaining[i][j]) {
               u = i;
               v = j;
               break;
            }
         }
      }
      included[u] = included[v] = true;
      for (int i = 0; i < vertices; i++) {
         edgesRemaining[u][i] = edgesRemaining[i][u] = false;
         edgesRemaining[v][i] = edgesRemaining[i][v] = false;
      }
      edges--;
   }
}

int main() {
   int vertices = 8;
   int edges = 10;
   int edgesData[10][2] = {
      {1, 6}, {1, 2}, {1, 4}, {2, 3}, {2, 4},
      {6, 7}, {4, 7}, {7, 8}, {3, 5}, {8, 5}};
   for (int i = 0; i < edges; i++) {
      int u = edgesData[i][0];
      int v = edgesData[i][1];
      graph[u][v] = graph[v][u] = 1;
   }
   approxVertexCover(vertices, edges);
   cout << "Vertex Cover: ";
   for (int i = 1; i <= vertices; i++) {
      if (included[i]) {
         cout << i << " ";
      }
   }
   cout << endl;
   return 0;
}

Output

Vertex Cover: 1 3 4 5 6 7

DAA – Fractional Knapsack

The knapsack problem states that, given a set of items with weights and profit values, one must determine the subset of items to be added to a knapsack such that the total weight of the items does not exceed the limit of the knapsack and the total profit value is maximum. It is one of the most popular problems solved with the greedy approach, where it is known as the fractional knapsack problem.

To explain this problem a little more easily, consider a test with 12 questions, 10 marks each, out of which only 10 should be attempted to get the maximum mark of 100. The test taker must now pick the most profitable questions − the ones he is confident about − to achieve the maximum mark. However, he cannot attempt all 12 questions, since no extra marks are awarded for the additional answers. This is the most basic real-world application of the knapsack problem.

Knapsack Algorithm

The weights (Wi) and profit values (Pi) of the items to be added to the knapsack are taken as input for the fractional knapsack algorithm, and the output is the subset of items that fits in the knapsack without exceeding its limit while giving maximum profit.

Algorithm

1. Consider all the items with their weights and profits mentioned respectively.
2. Calculate Pi/Wi for all the items and sort the items in descending order of their Pi/Wi values.
3. Without exceeding the limit, add the items to the knapsack in that order.
4. If the knapsack can still hold some weight but the whole weight of the next item exceeds the limit, add the fractional part of that item. Hence the name, fractional knapsack problem.

Examples

For the given set of items and a knapsack of capacity 10 kg, find the subset of items to be added to the knapsack such that the profit is maximum.

Items            1    2    3    4    5
Weights (in kg)  3    3    2    5    1
Profits          10   15   10   12   8

Solution

Step 1 − Given n = 5, Wi = {3, 3, 2, 5, 1} and Pi = {10, 15, 10, 12, 8}, calculate Pi/Wi for all the items.

Items            1    2    3    4    5
Weights (in kg)  3    3    2    5    1
Profits          10   15   10   12   8
Pi/Wi            3.3  5    5    2.4  8

Step 2 − Arrange all the items in descending order of Pi/Wi.

Items            5    2    3    1    4
Weights (in kg)  1    3    2    3    5
Profits          8    15   10   10   12
Pi/Wi            8    5    5    3.3  2.4

Step 3 − Without exceeding the knapsack capacity, insert the items into the knapsack in this order for maximum profit.

Knapsack = {5, 2, 3, 1}

These four items weigh 1 + 3 + 2 + 3 = 9 kg, so the knapsack can still hold 1 kg, but the next item weighs 5 kg and would exceed the capacity. Therefore, only 1 kg of the 5 kg of item 4 is added to the knapsack.

Items            5    2    3    1    4
Weights (in kg)  1    3    2    3    5
Profits          8    15   10   10   12
Knapsack         1    1    1    1    1/5

Hence, the knapsack holds the weight (1 × 1) + (1 × 3) + (1 × 2) + (1 × 3) + (1/5 × 5) = 10 kg, with the maximum profit of (1 × 8) + (1 × 15) + (1 × 10) + (1 × 10) + (1/5 × 12) = 45.4.

Example

Following is an implementation of the fractional knapsack algorithm using the greedy approach −

C

#include <stdio.h>

int n = 5;
int w[10] = {3, 3, 2, 5, 1};     // weights of the items
int p[10] = {10, 15, 10, 12, 8}; // profit values of the items
int W = 10;                      // capacity of the knapsack

int main(){
   int cur_w;       // remaining capacity
   float tot_v = 0; // total value collected
   int i, maxi;
   int used[10];
   for (i = 0; i < n; ++i)
      used[i] = 0;
   cur_w = W;
   while (cur_w > 0) {
      maxi = -1;
      // pick the unused item with the best profit/weight ratio
      for (i = 0; i < n; ++i)
         if ((used[i] == 0) &&
             ((maxi == -1) || ((float)p[i]/w[i] > (float)p[maxi]/w[maxi])))
            maxi = i;
      used[maxi] = 1;
      cur_w -= w[maxi];
      tot_v += p[maxi];
      if (cur_w >= 0)
         printf("Added object %d (%d, %d) completely in the bag. Space left: %d.\n",
            maxi + 1, p[maxi], w[maxi], cur_w);
      else {
         printf("Added %d%% (%d, %d) of object %d in the bag.\n",
            (int)((1 + (float)cur_w/w[maxi]) * 100), p[maxi], w[maxi], maxi + 1);
         tot_v -= p[maxi];
         tot_v += (1 + (float)cur_w/w[maxi]) * p[maxi];
      }
   }
   printf("Filled the bag with objects worth %.2f.\n", tot_v);
   return 0;
}

Output

Added object 5 (8, 1) completely in the bag. Space left: 9.
Added object 2 (15, 3) completely in the bag. Space left: 6.
Added object 3 (10, 2) completely in the bag. Space left: 4.
Added object 1 (10, 3) completely in the bag. Space left: 1.
Added 19% (12, 5) of object 4 in the bag.
Filled the bag with objects worth 45.40.
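The listing above detects the fractional step by letting cur_w go negative. A sketch with more explicit bookkeeping may be easier to follow; it is our own illustration (variable names are ours) of the same greedy strategy on the same data −

#include <stdio.h>

int main(void) {
   int n = 5;
   double weight[] = {3, 3, 2, 5, 1};
   double profit[] = {10, 15, 10, 12, 8};
   double capacity = 10, total = 0;
   int used[5] = {0};

   while (capacity > 0) {
      // Pick the unused item with the best profit/weight ratio.
      int best = -1;
      for (int i = 0; i < n; i++)
         if (!used[i] && (best < 0 ||
             profit[i] / weight[i] > profit[best] / weight[best]))
            best = i;
      if (best < 0) break; // no items left
      used[best] = 1;
      if (weight[best] <= capacity) { // take the whole item
         capacity -= weight[best];
         total += profit[best];
      } else {                        // take only a fraction
         total += profit[best] * (capacity / weight[best]);
         capacity = 0;
      }
   }
   printf("Maximum profit: %.2f\n", total); // 45.40 for this data
   return 0;
}

It prints "Maximum profit: 45.40", agreeing with the worked example above.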

DAA – Optimal Merge Pattern

Merge a set of sorted files of different lengths into a single sorted file. We need to find an optimal solution, where the resultant file will be generated in minimum time.

If the number of sorted files is given, there are many ways to merge them into a single sorted file. The merging can be performed pair-wise; hence, this type of merging is called a 2-way merge pattern. Since different pairings require different amounts of time, in this strategy we want to determine an optimal way of merging many files together. At each step, the two shortest sequences are merged.

To merge a p-record file and a q-record file requires possibly p + q record moves, so the obvious choice is to merge the two smallest files together at each step. Two-way merge patterns can be represented by binary merge trees. Let us consider a set of n sorted files {f1, f2, f3, …, fn}. Initially, each of them is considered as a single-node binary tree. To find the optimal solution, the following algorithm is used.

Pseudocode

Following is the pseudocode of the optimal merge pattern algorithm −

for i := 1 to n - 1 do
   declare new node
   node.leftchild := least (list)
   node.rightchild := least (list)
   node.weight := (node.leftchild).weight + (node.rightchild).weight
   insert (list, node)
return least (list)

At the end of this algorithm, the weight of the root node represents the optimal cost.

Examples

Let us consider the given files f1, f2, f3, f4 and f5 with 20, 30, 10, 5 and 30 elements respectively. If the merge operations are performed according to the provided sequence, then

M1 = merge f1 and f2 => 20 + 30 = 50
M2 = merge M1 and f3 => 50 + 10 = 60
M3 = merge M2 and f4 => 60 + 5 = 65
M4 = merge M3 and f5 => 65 + 30 = 95

Hence, the total number of operations is 50 + 60 + 65 + 95 = 270.

Now the question arises: is there any better solution? Sorting the files according to their size in ascending order, we get the sequence f4, f3, f1, f2, f5, and the merge operations can be performed on this sequence:

M1 = merge f4 and f3 => 5 + 10 = 15
M2 = merge M1 and f1 => 15 + 20 = 35
M3 = merge M2 and f2 => 35 + 30 = 65
M4 = merge M3 and f5 => 65 + 30 = 95

Therefore, the total number of operations is 15 + 35 + 65 + 95 = 210, which is obviously better than the previous one.

In this context, we now solve the problem using the algorithm above, always merging the two smallest remaining files:

merge f4 and f3 => 5 + 10 = 15
merge 15 and f1 => 15 + 20 = 35
merge f2 and f5 => 30 + 30 = 60
merge 35 and 60 => 35 + 60 = 95

Hence, the solution takes 15 + 35 + 60 + 95 = 205 comparisons.
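The reference implementations in the next section re-sort the whole list after every merge. A min-heap performs the same greedy extraction more directly; the following is a minimal C sketch of our own, using the file sizes from the example above −

#include <stdio.h>

// Restore the min-heap property downwards from index i.
void siftDown(int heap[], int n, int i) {
   while (1) {
      int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
      if (l < n && heap[l] < heap[smallest]) smallest = l;
      if (r < n && heap[r] < heap[smallest]) smallest = r;
      if (smallest == i) break;
      int t = heap[i]; heap[i] = heap[smallest]; heap[smallest] = t;
      i = smallest;
   }
}

// Remove and return the smallest element.
int popMin(int heap[], int *n) {
   int min = heap[0];
   heap[0] = heap[--(*n)];
   siftDown(heap, *n, 0);
   return min;
}

// Insert x and sift it up to its place.
void pushHeap(int heap[], int *n, int x) {
   int i = (*n)++;
   heap[i] = x;
   while (i > 0 && heap[(i - 1) / 2] > heap[i]) {
      int t = heap[i]; heap[i] = heap[(i - 1) / 2]; heap[(i - 1) / 2] = t;
      i = (i - 1) / 2;
   }
}

int main(void) {
   int heap[5] = {5, 10, 20, 30, 30}; // file sizes from the example
   int n = 5;
   for (int i = n / 2 - 1; i >= 0; i--) // build the min-heap
      siftDown(heap, n, i);
   int cost = 0;
   while (n > 1) {
      int a = popMin(heap, &n);  // two smallest files
      int b = popMin(heap, &n);
      cost += a + b;             // cost of merging them
      pushHeap(heap, &n, a + b); // the merged file goes back
   }
   printf("Minimum cost of merging: %d\n", cost); // prints 205
   return 0;
}

It prints 205, matching the result above, and runs in O(n log n) instead of re-sorting after every merge.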
Example

Following are the implementations of the above approach in various programming languages −

C

#include <stdio.h>
#include <stdlib.h>

int optimalMerge(int files[], int n) {
   // Sort the files in ascending order
   for (int i = 0; i < n - 1; i++) {
      for (int j = 0; j < n - i - 1; j++) {
         if (files[j] > files[j + 1]) {
            int temp = files[j];
            files[j] = files[j + 1];
            files[j + 1] = temp;
         }
      }
   }
   int cost = 0;
   while (n > 1) {
      // Merge the smallest two files
      int mergedFileSize = files[0] + files[1];
      cost += mergedFileSize;
      // Replace the first file with the merged file size
      files[0] = mergedFileSize;
      // Shift the remaining files to the left
      for (int i = 1; i < n - 1; i++) {
         files[i] = files[i + 1];
      }
      n--; // Reduce the number of files
      // Sort the files again
      for (int i = 0; i < n - 1; i++) {
         for (int j = 0; j < n - i - 1; j++) {
            if (files[j] > files[j + 1]) {
               int temp = files[j];
               files[j] = files[j + 1];
               files[j + 1] = temp;
            }
         }
      }
   }
   return cost;
}

int main() {
   int files[] = {5, 10, 20, 30, 30};
   int n = sizeof(files) / sizeof(files[0]);
   int minCost = optimalMerge(files, n);
   printf("Minimum cost of merging is: %d Comparisons\n", minCost);
   return 0;
}

Output

Minimum cost of merging is: 205 Comparisons

C++

#include <iostream>
#include <algorithm>

int optimalMerge(int files[], int n) {
   // Sort the files in ascending order
   for (int i = 0; i < n - 1; i++) {
      for (int j = 0; j < n - i - 1; j++) {
         if (files[j] > files[j + 1]) {
            std::swap(files[j], files[j + 1]);
         }
      }
   }
   int cost = 0;
   while (n > 1) {
      // Merge the smallest two files
      int mergedFileSize = files[0] + files[1];
      cost += mergedFileSize;
      // Replace the first file with the merged file size
      files[0] = mergedFileSize;
      // Shift the remaining files to the left
      for (int i = 1; i < n - 1; i++) {
         files[i] = files[i + 1];
      }
      n--; // Reduce the number of files
      // Sort the files again
      for (int i = 0; i < n - 1; i++) {
         for (int j = 0; j < n - i - 1; j++) {
            if (files[j] > files[j + 1]) {
               std::swap(files[j], files[j + 1]);
            }
         }
      }
   }
   return cost;
}

int main() {
   int files[] = {5, 10, 20, 30, 30};
   int n = sizeof(files) / sizeof(files[0]);
   int minCost = optimalMerge(files, n);
   std::cout << "Minimum cost of merging is: " << minCost << " Comparisons" << std::endl;
   return 0;
}

Output

Minimum cost of merging is: 205 Comparisons

DAA – Cook”s Theorem

Cook's Theorem

Stephen Cook presented four theorems in his paper "The Complexity of Theorem Proving Procedures". Many terms used in these statements may be unfamiliar, but there is no scope here to discuss all of them in detail. Following are the four theorems by Stephen Cook −

Theorem-1

If a set S of strings is accepted by some non-deterministic Turing machine within polynomial time, then S is P-reducible to {DNF tautologies}.

Theorem-2

The following sets are P-reducible to each other in pairs (and hence each has the same polynomial degree of difficulty): {tautologies}, {DNF tautologies}, D3, {sub-graph pairs}.

Theorem-3

For any $T_{Q}(k)$ of type Q, $\frac{T_{Q}(k)}{\sqrt{k}/(\log k)^{2}}$ is unbounded.

There is a $T_{Q}(k)$ of type Q such that $T_{Q}(k) \leqslant 2^{k(\log k)^{2}}$.

Theorem-4

If the set S of strings is accepted by a non-deterministic machine within time $T(n) = 2^{n}$, and if $T_{Q}(k)$ is an honest (i.e. real-time countable) function of type Q, then there is a constant K such that S can be recognized by a deterministic machine within time $T_{Q}(K8^{n})$.

First, Cook emphasized the significance of polynomial time reducibility: if we have a polynomial time reduction from one problem to another, then any polynomial time algorithm for the second problem can be converted into a corresponding polynomial time algorithm for the first problem.

Second, he focused attention on the class NP of decision problems that can be solved in polynomial time by a non-deterministic computer. Most of the intractable problems belong to this class.

Third, he proved that one particular problem in NP, the satisfiability problem, has the property that every other problem in NP can be polynomially reduced to it. If the satisfiability problem can be solved with a polynomial time algorithm, then every problem in NP can also be solved in polynomial time; and if any problem in NP is intractable, then the satisfiability problem must be intractable. Thus, the satisfiability problem is the hardest problem in NP.

Fourth, Cook suggested that other problems in NP might share with the satisfiability problem this property of being the hardest member of NP.