DSA – Travelling Salesman Problem (Dynamic Approach)

Travelling Salesman Problem (Dynamic Approach)

The travelling salesman problem is one of the most notorious computational problems. A brute-force approach evaluates every possible tour and selects the best one: for n vertices in a graph there are (n − 1)! possibilities, so the complexity grows very quickly. Using the dynamic programming approach instead obtains the solution in less time, although no polynomial-time algorithm is known for the problem.

Travelling Salesman Dynamic Programming Algorithm

Let us consider a graph G = (V, E), where V is a set of cities and E is a set of weighted edges. An edge e(u, v) represents that vertices u and v are connected, and the distance d(u, v) between them is non-negative. Suppose we have started at city 1 and, after visiting some cities, are now in city j; this is a partial tour. We certainly need to know j, since it determines which cities are most convenient to visit next. We also need to know the set of cities visited so far, so that we do not repeat any of them. Hence, this is an appropriate sub-problem.

For a subset of cities S ⊆ {1, 2, 3, …, n} that includes 1, and j ∈ S, let C(S, j) be the length of the shortest path visiting each node in S exactly once, starting at 1 and ending at j. When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1. Now let us express C(S, j) in terms of smaller sub-problems: we need to start at 1 and end at j.
We should select the next-to-last city i in such a way that

C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j }

Algorithm: Traveling-Salesman-Problem

C({1}, 1) = 0
for s = 2 to n do
   for all subsets S ⊆ {1, 2, 3, …, n} of size s and containing 1 do
      C(S, 1) = ∞
      for all j ∈ S and j ≠ 1 do
         C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S and i ≠ j }
return min over j of C({1, 2, 3, …, n}, j) + d(j, 1)

Analysis

There are at most 2^n · n sub-problems, and each one takes linear time to solve. Therefore, the total running time is O(2^n · n^2).

Example

In the following example, we illustrate the steps to solve the travelling salesman problem. From the above graph, the following distance table is prepared.

      1    2    3    4
1     0   10   15   20
2     5    0    9   10
3     6   13    0   12
4     8    8    9    0

S = Φ

Cost(2, Φ, 1) = d(2, 1) = 5
Cost(3, Φ, 1) = d(3, 1) = 6
Cost(4, Φ, 1) = d(4, 1) = 8

|S| = 1

Cost(i, S, 1) = min { Cost(j, S − {j}, 1) + d[i, j] }

Cost(2, {3}, 1) = d[2, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(2, {4}, 1) = d[2, 4] + Cost(4, Φ, 1) = 10 + 8 = 18
Cost(3, {2}, 1) = d[3, 2] + Cost(2, Φ, 1) = 13 + 5 = 18
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 12 + 8 = 20
Cost(4, {3}, 1) = d[4, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(4, {2}, 1) = d[4, 2] + Cost(2, Φ, 1) = 8 + 5 = 13

|S| = 2

Cost(2, {3, 4}, 1) = min { d[2, 3] + Cost(3, {4}, 1) = 9 + 20 = 29,
                           d[2, 4] + Cost(4, {3}, 1) = 10 + 15 = 25 } = 25

Cost(3, {2, 4}, 1) = min { d[3, 2] + Cost(2, {4}, 1) = 13 + 18 = 31,
                           d[3, 4] + Cost(4, {2}, 1) = 12 + 13 = 25 } = 25

Cost(4, {2, 3}, 1) = min { d[4, 2] + Cost(2, {3}, 1) = 8 + 15 = 23,
                           d[4, 3] + Cost(3, {2}, 1) = 9 + 18 = 27 } = 23

|S| = 3

Cost(1, {2, 3, 4}, 1) = min { d[1, 2] + Cost(2, {3, 4}, 1) = 10 + 25 = 35,
                              d[1, 3] + Cost(3, {2, 4}, 1) = 15 + 25 = 40,
                              d[1, 4] + Cost(4, {2, 3}, 1) = 20 + 23 = 43 } = 35

The minimum cost of the tour is 35. Starting from Cost(1, {2, 3, 4}, 1), we get the minimum value for d[1, 2]; so when s = 3 we select the path from 1 to 2 (cost 10) and go backwards. When s = 2, we get the minimum value for d[4, 2], so we select the path from 2 to 4 (cost 10) and go backwards again. When s = 1, the minimum value would again come from d[4, 2], but 2 is already on the tour, so we select d[4, 3] (cost 9) and finally return from 3 to 1 (cost 6). The optimal tour is therefore 1 → 2 → 4 → 3 → 1, with cost 10 + 10 + 9 + 6 = 35.
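The recurrence and the O(2^n · n^2) analysis above can be sketched as a small Held-Karp style program. This Python sketch (the function and variable names are our own, not from the text) encodes the subset S as a bitmask and reproduces the worked example's answer; unreachable states, such as ending at city 1 with |S| > 1, default to infinity, matching C(S, 1) = ∞:

```python
from itertools import combinations

INF = float("inf")

def tsp_dp(d):
    """Held-Karp DP for the travelling salesman problem.
    d[i][j] is the distance from city i to city j. Cities are 0-indexed,
    so "city 1" in the text is city 0 here. Returns the minimum tour cost."""
    n = len(d)
    # C[(mask, j)]: cost of the shortest path that starts at city 0,
    # visits exactly the cities in bitmask 'mask', and ends at city j.
    C = {(1, 0): 0}  # base case: only city 0 visited, cost 0
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            mask = 1  # city 0 is always included
            for city in subset:
                mask |= 1 << city
            for j in subset:
                prev = mask & ~(1 << j)
                # states not in C (e.g. ending at 0 with |S| > 1) are INF
                C[(mask, j)] = min(
                    C.get((prev, i), INF) + d[i][j]
                    for i in range(n) if i != j
                )
    full = (1 << n) - 1
    # close the tour by returning to city 0
    return min(C[(full, j)] + d[j][0] for j in range(1, n))

distances = [[0, 10, 15, 20],
             [5, 0, 9, 10],
             [6, 13, 0, 12],
             [8, 8, 9, 0]]
print(tsp_dp(distances))  # 35, matching the worked example
```
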


DSA – Map Colouring Algorithm

Map Colouring Algorithm

The map colouring problem states that, given a graph G(V, E) where V and E are the sets of vertices and edges of the graph, all vertices in V need to be coloured in such a way that no two adjacent vertices have the same colour. Real-world applications of this algorithm include assigning mobile radio frequencies, making schedules, designing Sudoku puzzles, allocating registers, etc.

Map Colouring Algorithm

The map colouring algorithm takes a graph G and the colours to be added to the graph as input, and produces a coloured graph in which no two adjacent vertices have the same colour.

Algorithm

Initialize all the vertices in the graph.
Select the uncoloured vertex with the highest degree.
Choose a colour for it with the help of the selection colour function, so that no adjacent vertex has the same colour.
Check whether the colour can be added; if it can, add it to the solution set.
Repeat from step 2 until every vertex is coloured.

Example

Step 1 − Find the degrees of all the vertices −
A – 4, B – 2, C – 2, D – 3, E – 3

Step 2 − Choose the vertex with the highest degree to colour first, i.e., A, and choose a colour using the selection colour function. Check whether the colour can be added to the vertex; if yes, add it to the solution set.

Step 3 − Select a vertex with the next highest degree from the remaining vertices and colour it using the selection colour function. D and E both have the next highest degree, 3, so choose either of them, say D. D is adjacent to A, therefore it cannot be coloured with the same colour as A; choose a different colour using the selection colour function.

Step 4 − The next highest degree vertex is E, hence choose E. E is adjacent to both A and D, therefore it cannot be coloured with the colours of A or D. Choose a third colour using the selection colour function.
Step 5 − The next highest degree vertices are B and C; choose either, say B. B is adjacent to both A and E, so it cannot take their colours, but it is not adjacent to D, so it can be coloured with D's colour.

Step 6 − The last remaining vertex is C, which is adjacent to both A and D and so cannot take their colours. It is not adjacent to E, so it can be coloured with E's colour.

Example

Following is the complete implementation of the map colouring algorithm in various programming languages, where a graph is coloured in such a way that no two adjacent vertices have the same colour.

C

#include <stdio.h>
#include <stdbool.h>
#define V 4

bool graph[V][V] = {
   {0, 1, 1, 0},
   {1, 0, 1, 1},
   {1, 1, 0, 1},
   {0, 1, 1, 0},
};

// check whether assigning colour c to vertex v is valid
bool isValid(int v, int color[], int c) {
   for (int i = 0; i < V; i++)
      if (graph[v][i] && c == color[i])
         return false;
   return true;
}

bool mColoring(int colors, int color[], int vertex) {
   if (vertex == V)   // all vertices are coloured
      return true;
   for (int col = 1; col <= colors; col++) {
      if (isValid(vertex, color, col)) {
         color[vertex] = col;
         if (mColoring(colors, color, vertex + 1))   // colour the remaining vertices
            return true;
         color[vertex] = 0;   // backtrack
      }
   }
   return false;   // no colour can be assigned
}

int main() {
   int colors = 3;   // number of colours
   int color[V];     // colour assigned to each vertex
   for (int i = 0; i < V; i++)
      color[i] = 0;  // initially uncoloured
   if (mColoring(colors, color, 0) == false) {
      printf("Solution does not exist.");
      return 0;
   }
   printf("Assigned Colors are: \n");
   for (int i = 0; i < V; i++)
      printf("%d ", color[i]);
   return 0;
}

Output

Assigned Colors are: 
1 2 3 1

C++

#include <iostream>
using namespace std;
#define V 4

bool graph[V][V] = {
   {0, 1, 1, 0},
   {1, 0, 1, 1},
   {1, 1, 0, 1},
   {0, 1, 1, 0},
};

// check whether assigning colour c to vertex v is valid
bool isValid(int v, int color[], int c) {
   for (int i = 0; i < V; i++)
      if (graph[v][i] && c == color[i])
         return false;
   return true;
}

bool mColoring(int colors, int color[], int vertex) {
   if (vertex == V)   // all vertices are coloured
      return true;
   for (int col = 1; col <= colors; col++) {
      if (isValid(vertex, color, col)) {
         color[vertex] = col;
         if (mColoring(colors, color, vertex + 1))   // colour the remaining vertices
            return true;
         color[vertex] = 0;   // backtrack
      }
   }
   return false;   // no colour can be assigned
}

int main() {
   int colors = 3;   // number of colours
   int color[V];     // colour assigned to each vertex
   for (int i = 0; i < V; i++)
      color[i] = 0;  // initially uncoloured
   if (mColoring(colors, color, 0) == false) {
      cout << "Solution does not exist.";
      return 0;
   }
   cout << "Assigned Colors are: \n";
   for (int i = 0; i < V; i++)
      cout << color[i] << " ";
   return 0;
}

Output

Assigned Colors are: 
1 2 3 1
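A Python counterpart of the backtracking program above can be sketched as follows (an illustrative translation of the C version, not part of the original listing):

```python
V = 4
graph = [
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
]

def is_valid(v, color, c):
    # colour c is valid for vertex v if no neighbour already has it
    return all(not (graph[v][i] and color[i] == c) for i in range(V))

def m_coloring(colors, color, vertex):
    if vertex == V:             # all vertices are coloured
        return True
    for c in range(1, colors + 1):
        if is_valid(vertex, color, c):
            color[vertex] = c
            if m_coloring(colors, color, vertex + 1):
                return True
            color[vertex] = 0   # backtrack
    return False

color = [0] * V
if m_coloring(3, color, 0):
    print("Assigned Colors are:", *color)  # Assigned Colors are: 1 2 3 1
else:
    print("Solution does not exist.")
```
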

DSA – Sublist Search

Sublist Search Algorithm

Until now in this tutorial, we have only seen how to search for a single element in a sequence of elements. The sublist search algorithm, however, provides a procedure to search for one linked list within another linked list. It works like a simple pattern-matching algorithm, where the aim is to determine whether one list is present in the other. The algorithm walks through the linked lists: the first element of the second list is compared with the first element of the first list; if a match is not found, it is compared with the second element of the first list, and so on. This process continues until a full match is found or the end of a list is reached.

For example, consider two linked lists with values {4, 6, 7, 3, 8, 2, 6} and {3, 8, 2}. Sublist search checks whether the values of the second list are present in the first linked list. The output is a Boolean value {True, False}; the algorithm does not return the position of the sublist, as the linked list is not an indexed data structure. Note − the output is True only if the second linked list is present in the first list in exactly the same order.

Sublist Search Algorithm

The main aim of this algorithm is to prove that one linked list is a sublist of another. Searching in this process is done linearly, checking each element of the linked list one by one; if the output is true, the second list is proven to be a sublist of the first linked list.

The procedure for the sublist search algorithm is as follows −

Step 1 − Maintain two pointers, each pointing to one list. These pointers are used to traverse the linked lists.

Step 2 − Check for the base cases of the linked lists −

If both linked lists are empty, return true.

If the second list is not empty but the first list is empty, return false.
If the first list is not empty but the second list is empty, we return false.

Step 3 − Once it is established that both lists are non-empty, use the pointers to traverse the lists element by element.

Step 4 − Compare the current element of the first linked list with the current element of the second linked list; if they match, advance both pointers to the next node of their respective lists.

Step 5 − If they do not match, keep the pointer in the second list at its first element, move the starting point in the first list one node forward, and compare the elements again.

Step 6 − Repeat steps 4 and 5 until the end of the lists is reached.

Step 7 − If the whole second list is matched, TRUE is returned; otherwise, FALSE.

Pseudocode

Begin Sublist Search
   ptr1 := list_ptr    // points to the first list
   ptr2 := sub_ptr     // points to the second list
   if list_ptr == NULL and sub_ptr == NULL then
      return TRUE
   else if sub_ptr == NULL or list_ptr == NULL then
      return FALSE
   while list_ptr != NULL do
      ptr1 := list_ptr
      ptr2 := sub_ptr    // restart matching the second list
      while ptr2 != NULL do
         if ptr1 == NULL then
            return FALSE
         else if ptr2->data == ptr1->data then
            ptr2 := ptr2->next
            ptr1 := ptr1->next
         else
            break
      done
      if ptr2 == NULL then
         return TRUE
      list_ptr := list_ptr->next
   done
   return FALSE
End

Analysis

The time complexity of sublist search depends on the number of elements in the two linked lists involved. The worst-case running time is O(m*n), where m is the number of elements in the first linked list and n is the number of elements in the second.

Example

Suppose we have two linked lists with elements given as −

List 1 = {2, 5, 3, 3, 6, 7, 0}
List 2 = {6, 7, 0}

Using sublist search, we need to find out whether List 2 is present in List 1.

Step 1 − Compare the first element of List 2 with the first element of List 1. It is not a match, so the pointer in List 1 moves to the next node.
Step 2 − The second element of List 1 is compared with the first element of List 2. It is not a match, so the pointer in List 1 moves forward.

Step 3 − The third element of List 1 is compared with the first element of List 2. Since it is not a match, the pointer in List 1 moves forward.

Step 4 − The fourth element of List 1 is compared with the first element of List 2. Since it is not a match, the pointer in List 1 moves forward.

Step 5 − The fifth element of List 1 is compared with the first element of List 2. Since it is a match, the pointers in both List 1 and List 2 move forward.

Step 6 − The sixth element of List 1 is compared with the second element of List 2. Since it is also a match, the pointers in both lists move forward.

Step 7 − The seventh element of List 1 is compared with the third element of List 2. Since it is also a match, and the pointer in List 2 has now reached the end of its list, the algorithm returns TRUE: List 2 is present in List 1.
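The steps above can be sketched in Python with a minimal singly linked list (Node, make_list and sublist_search are our own helper names; the base cases follow the text, so an empty second list with a non-empty first list returns False):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def make_list(values):
    """Build a singly linked list from a Python list and return its head."""
    head = None
    for v in reversed(values):
        node = Node(v)
        node.next = head
        head = node
    return head

def sublist_search(list_head, sub_head):
    # base cases, as described in step 2
    if list_head is None and sub_head is None:
        return True
    if sub_head is None or list_head is None:
        return False
    while list_head is not None:
        ptr1, ptr2 = list_head, sub_head   # restart matching here
        while ptr2 is not None:
            if ptr1 is None:
                return False               # first list exhausted mid-match
            if ptr1.data == ptr2.data:
                ptr1, ptr2 = ptr1.next, ptr2.next
            else:
                break
        if ptr2 is None:                   # whole second list matched
            return True
        list_head = list_head.next         # move one node forward and retry
    return False

print(sublist_search(make_list([2, 5, 3, 3, 6, 7, 0]), make_list([6, 7, 0])))  # True
print(sublist_search(make_list([4, 6, 7, 3, 8, 2, 6]), make_list([3, 8, 2])))  # True
```
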

DSA – Greedy Algorithms

Greedy Algorithms

Among all the algorithmic approaches, the simplest and most straightforward is the greedy method. In this approach, each decision is taken on the basis of the currently available information, without worrying about the effect of that decision in the future. Greedy algorithms build a solution part by part, choosing the next part in such a way that it gives an immediate benefit. This approach never reconsiders the choices taken previously and is mainly used to solve optimization problems. The greedy method is easy to implement and quite efficient in most cases. Hence, we can say that a greedy algorithm is an algorithmic paradigm based on a heuristic that follows the locally optimal choice at each step in the hope of finding a globally optimal solution. For many problems it does not produce an optimal solution, but it gives an approximate (near-optimal) solution in a reasonable time.

Components of Greedy Algorithm

Greedy algorithms have the following five components −

A candidate set − A solution is created from this set.
A selection function − Used to choose the best candidate to be added to the solution.
A feasibility function − Used to determine whether a candidate can be used to contribute to the solution.
An objective function − Used to assign a value to a solution or a partial solution.
A solution function − Used to indicate whether a complete solution has been reached.

Areas of Application

The greedy approach is used to solve many problems, such as −

Finding the shortest path between two vertices using Dijkstra's algorithm.
Finding the minimal spanning tree of a graph using Prim's or Kruskal's algorithm, etc.
Counting Coins Problem

The counting coins problem is to count up to a desired value using the fewest possible coins; the greedy approach always picks the largest coin that still fits. If we are provided coins of denominations 1, 2, 5 and 10 and are asked to count 18, the greedy procedure is −

1 − Select one 10 coin; the remaining count is 8.
2 − Then select one 5 coin; the remaining count is 3.
3 − Then select one 2 coin; the remaining count is 1.
4 − Finally, selecting one 1 coin solves the problem.

This works fine here: we pick only 4 coins, which is optimal. But if we slightly change the problem, the same approach may not produce the optimal result. For a currency system with coins of value 1, 7 and 10, counting coins for the value 18 is still optimal (10 + 7 + 1), but for a count like 15 the greedy approach uses more coins than necessary: it picks 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins, whereas the same value can be made with only 3 coins (7 + 7 + 1). Hence, we may conclude that the greedy approach picks an immediately optimal choice and may fail where global optimization is the major concern.

Where Greedy Approach Fails

For many problems, a greedy algorithm fails to find an optimal solution, and it may even produce a very poor solution. Problems like Travelling Salesman and Knapsack cannot be solved exactly using this approach.

Examples of Greedy Algorithm

Many networking and classical algorithms use the greedy approach. Here is a list of a few of them −

Travelling Salesman Problem (approximate solutions)
Prim's Minimal Spanning Tree Algorithm
Kruskal's Minimal Spanning Tree Algorithm
Dijkstra's Shortest Path Algorithm
Graph − Map Colouring
Graph − Vertex Cover
Fractional Knapsack Problem
Job Sequencing with Deadline
Optimal Merge Pattern Algorithm

We will discuss these examples elaborately in the further chapters of this tutorial.
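The coin-counting procedure can be sketched as follows (the function name is our own). The two calls reproduce the text's examples: greedy is optimal for {1, 2, 5, 10} and 18, but wasteful for {1, 7, 10} and 15:

```python
def greedy_coin_count(denominations, amount):
    """Greedily pick the largest coin that still fits, as described above."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins.append(coin)
    return coins

print(greedy_coin_count([1, 2, 5, 10], 18))  # [10, 5, 2, 1] -> 4 coins, optimal
print(greedy_coin_count([1, 7, 10], 15))     # [10, 1, 1, 1, 1, 1] -> 6 coins, but 7+7+1 needs only 3
```
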

DSA – Jump Search Algorithm

Jump Search Algorithm

The jump search algorithm is a slightly modified version of the linear search algorithm. The main idea behind it is to reduce the time complexity by performing fewer comparisons than linear search. The input array must be sorted, and it is divided into blocks so that the search can jump through these blocks. For example, in the example below, the sorted input array is searched in blocks of 3 elements each; the desired key is found after only 2 comparisons rather than the 6 comparisons of linear search.

This raises the question of how to divide the array into blocks. If the input array is of size n, the blocks are divided in intervals of √n. The first element of every block is compared with the key until a block element greater than the key is found; since the input is sorted, linear search is then performed only on the previous block. If the element is found there, the search is successful; otherwise, an unsuccessful search is returned. The jump search algorithm is discussed in detail further in this chapter.

Jump Search Algorithm

The jump search algorithm takes a sorted array as input, which is divided into smaller blocks to make the search simpler. The algorithm is as follows −

Step 1 − If the size of the input array is n, then the size of each block is √n. Set i = 0.

Step 2 − The key to be searched is compared with the ith element of the array. If it is a match, the position of the element is returned; otherwise, i is incremented by the block size.

Step 3 − Step 2 is repeated until the ith element is greater than the key element.

Step 4 − The element, if present, must now be in the previous block, since the input array is sorted. Therefore, linear search is applied on that block to find the element.

Step 5 − If the element is found, its position is returned.
If the element is not found, an unsuccessful search is reported.

Pseudocode

Begin
   blockSize := √size
   start := 0
   end := blockSize
   while array[end] <= key AND end < size do
      start := end
      end := end + blockSize
      if end > size - 1 then
         end := size
   done
   for i := start to end - 1 do
      if array[i] = key then
         return i
   done
   return invalid location
End

Analysis

The time complexity of the jump search technique is O(√n) and its space complexity is O(1).

Example

Let us understand the jump search algorithm by searching for the element 66 in the sorted array A given below −

Step 1 − Initialize i = 0; the size of the input array is n = 12. If the block size is m, then m = √n = √12 ≈ 3.

Step 2 − Compare A[0] with the key element: A[0] = 0 ≠ 66. Therefore, i is incremented by the block size, 3. The next element compared with the key is A[3].

Step 3 − A[3] = 14 ≠ 66. Since it is not a match, i is again incremented by 3.

Step 4 − A[6] = 48 ≠ 66. i is incremented by 3 again, and A[9] is compared with the key element.

Step 5 − A[9] = 88 ≠ 66. Since 88 is greater than 66, linear search is applied on the previous block, starting at index 6.

Step 6 − During the linear search, the pointer moves from the 6th index to the 7th, and A[7] is compared with the key element. A[7] is the required element, hence the program returns the 7th index as the output.

Implementation

The jump search algorithm is an extended variant of linear search. It divides the input array into multiple small blocks and performs linear search on the single block that is assumed to contain the element. If the element is not found in that block, an unsuccessful search is returned. The output prints the position of the element in the array rather than its index: indexing starts from 0, while position is the (1-based) place where the element is stored.
C

#include <stdio.h>
#include <math.h>

int jump_search(int[], int, int);

int main() {
   int n, key, index;
   int arr[12] = {0, 6, 12, 14, 19, 22, 48, 66, 79, 88, 104, 126};
   int len = sizeof(arr) / sizeof(arr[0]);
   printf("Array elements are:");
   for (int j = 0; j < len; j++)
      printf(" %d", arr[j]);
   n = 12;
   key = 66;
   printf("\nThe element to be searched: %d\n", key);
   index = jump_search(arr, n, key);
   if (index >= 0)
      printf("The element is found at position %d", index + 1);
   else
      printf("Unsuccessful Search");
   return 0;
}

int jump_search(int arr[], int n, int key) {
   int i, j, m, k;
   i = 0;
   m = sqrt(n);   // block size
   k = m;
   while (arr[m] <= key && m < n) {
      i = m;      // remember the start of the previous block
      m += k;
      if (m > n - 1)
         return -1;
   }
   // linear search on the block
   for (j = i; j < m; j++) {
      if (arr[j] == key)
         return j;
   }
   return -1;
}

Output

Array elements are: 0 6 12 14 19 22 48 66 79 88 104 126
The element to be searched: 66
The element is found at position 8
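The same jump search can be sketched compactly in Python (an illustrative sketch, not the chapter's C listing; math.isqrt is used for the block size, and the function returns a 0-based index or -1):

```python
import math

def jump_search(arr, key):
    """Jump search on a sorted array: jump in blocks of size sqrt(n),
    then linear-search the block where the key must lie. Returns the
    0-based index of key, or -1 if it is absent."""
    n = len(arr)
    step = max(1, math.isqrt(n))   # block size = floor(sqrt(n))
    prev, curr = 0, step
    # jump forward while the element at the block boundary is still <= key
    while curr < n and arr[curr] <= key:
        prev = curr
        curr += step
    # linear search inside the previous block
    for i in range(prev, min(curr, n)):
        if arr[i] == key:
            return i
    return -1

A = [0, 6, 12, 14, 19, 22, 48, 66, 79, 88, 104, 126]
print(jump_search(A, 66))   # 7 (position 8), matching the worked example
```
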

DSA – Kruskal's Minimal Spanning Tree

Kruskal's Minimal Spanning Tree Algorithm

Kruskal's minimal spanning tree algorithm is one of the efficient methods for finding the minimum spanning tree of a graph. A minimum spanning tree is a subgraph that connects all the vertices of the main graph with the fewest possible edges and minimum cost (the sum of the weights assigned to the edges). The algorithm starts from a forest − defined as a subgraph containing only the vertices of the main graph − and keeps adding the least-cost edges until the minimum spanning tree is created, without forming cycles in the graph. Kruskal's algorithm is easier to implement than Prim's algorithm, but can have higher complexity on dense graphs.

Kruskal's Algorithm

The input taken by Kruskal's algorithm is the graph G(V, E), where V is the set of vertices and E is the set of edges; the minimum spanning tree of graph G is obtained as output.

Algorithm

Sort all the edges of the graph in ascending order of cost and store them in an array edge[].
Construct the forest of the graph on a plane, with all the vertices in it.
Select the least-cost edge from the edge[] array and add it to the forest, provided it does not form a cycle.
Mark the vertices visited by adding them to the visited[] array.
Repeat steps 3 and 4 until all the vertices are visited, without any cycles forming in the graph.
When all the vertices are visited, the minimum spanning tree is formed; calculate the minimum cost of the output spanning tree.

Example

Construct a minimum spanning tree using Kruskal's algorithm for the graph given below −

Solution

As the first step, sort all the edges of the given graph in ascending order and store the values in an array.

Edge   B→D   A→B   C→F   F→E   B→C   G→F   A→G   C→D   D→E   C→G
Cost    5     6     9    10    11    12    15    17    22    25

Then, construct a forest of the given graph on a single plane.
From the list of sorted edge costs, select the least-cost edge and add it onto the forest in the output graph.

B → D = 5
Minimum cost = 5
Visited array, v = {B, D}

Similarly, the next least-cost edge is B → A = 6, so we add it onto the output graph.

Minimum cost = 5 + 6 = 11
Visited array, v = {B, D, A}

The next least-cost edge is C → F = 9; add it onto the output graph.

Minimum cost = 5 + 6 + 9 = 20
Visited array, v = {B, D, A, C, F}

The next edge to be added onto the output graph is F → E = 10.

Minimum cost = 5 + 6 + 9 + 10 = 30
Visited array, v = {B, D, A, C, F, E}

The next edge from the least-cost array is B → C = 11, hence we add it to the output graph; it joins the two partial trees without forming a cycle.

Minimum cost = 5 + 6 + 9 + 10 + 11 = 41
Visited array, v = {B, D, A, C, F, E}

The last edge from the least-cost array to be added to the output graph is F → G = 12.

Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
Visited array, v = {B, D, A, C, F, E, G}

The obtained result is the minimum spanning tree of the given graph, with cost = 53.

Example

The final program implements Kruskal's minimum spanning tree algorithm; it takes the cost adjacency matrix as input and prints the spanning tree edges as output, along with the minimum cost.
C

#include <stdio.h>
#include <stdlib.h>

const int inf = 999999;
int a, b, u, v, n, ne = 1;
int mincost = 0;
int cost[3][3] = {{0, 10, 20}, {12, 0, 15}, {16, 18, 0}};
int p[9] = {0};   // parent array used for cycle detection (union-find)

int applyfind(int i) {
   while (p[i] != 0)
      i = p[i];
   return i;
}

int applyunion(int i, int j) {
   if (i != j) {
      p[j] = i;
      return 1;
   }
   return 0;
}

int main() {
   n = 3;
   int i, j;
   for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
         if (cost[i][j] == 0)
            cost[i][j] = inf;   // treat 0 as "no edge"
   printf("Minimum Cost Spanning Tree:\n");
   while (ne < n) {
      int min_val = inf;
      // pick the cheapest remaining edge
      for (i = 0; i < n; i++) {
         for (j = 0; j < n; j++) {
            if (cost[i][j] < min_val) {
               min_val = cost[i][j];
               a = u = i;
               b = v = j;
            }
         }
      }
      u = applyfind(u);
      v = applyfind(v);
      if (applyunion(u, v) != 0) {   // accept the edge only if no cycle forms
         printf("%d -> %d\n", a, b);
         mincost += min_val;
      }
      cost[a][b] = cost[b][a] = 999;   // remove the edge from consideration
      ne++;
   }
   printf("Minimum cost = %d", mincost);
   return 0;
}

Output

Minimum Cost Spanning Tree:
0 -> 1
1 -> 2
Minimum cost = 25
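For contrast with the matrix-based program above, here is a sketch of Kruskal's algorithm in its textbook form — sort the edge list, then use union-find to reject cycle-forming edges — run on the worked example's edge list (the function names are our own):

```python
def kruskal(vertices, edges):
    """Kruskal's MST: scan edges in ascending order of weight and add each
    edge that joins two different components (checked with union-find)."""
    parent = {v: v for v in vertices}

    def find(v):                            # root of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):           # ascending order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                        # no cycle is formed
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# edge list from the worked example above
edges = [(5, "B", "D"), (6, "A", "B"), (9, "C", "F"), (10, "F", "E"),
         (11, "B", "C"), (12, "G", "F"), (15, "A", "G"), (17, "C", "D"),
         (22, "D", "E"), (25, "C", "G")]
mst, total = kruskal("ABCDEFG", edges)
print(total)  # 53, matching the worked example
```
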

DSA – Queue Data Structure

Queue Data Structure

What is a Queue?

A queue is a linear data structure in which elements are stored according to the FIFO (First In, First Out) principle: the first element inserted is the first element to be accessed. A queue is an Abstract Data Type (ADT) similar to a stack; what makes a queue different from a stack is that a queue is open at both its ends. Data is inserted into the queue through one end and deleted from it using the other end. Queues are used very frequently in most programming languages.

A real-world example of a queue is a single-lane one-way road, where the vehicle that enters first exits first. More real-world examples are the queues at ticket windows and bus stops.

Representation of Queues

Similar to the stack ADT, a queue ADT can be implemented using arrays, linked lists, or pointers. As a small example, in this tutorial we implement queues using a one-dimensional array.

Basic Operations in Queue

Queue operations also include initialization of a queue, usage, and permanently deleting the data from memory. The most fundamental operations in the queue ADT are enqueue(), dequeue(), peek(), isFull() and isEmpty(). These built-in operations carry out data manipulation and check the status of the queue.

A queue uses two pointers − front and rear. The front pointer accesses the data at the front end (helping in dequeueing), while the rear pointer accesses data at the rear end (helping in enqueueing).

Queue Insertion Operation: enqueue()

The enqueue() operation is used to insert elements into the queue. The following algorithm describes it in a simple way.

Algorithm

1. START
2. Check if the queue is full.
3. If the queue is full, produce an overflow error and exit.
4. If the queue is not full, increment the rear pointer to point to the next empty space.
5. Add the data element to the queue location where rear is pointing.
6. Return success.
7.
END

Example

Following are the implementations of this operation in various programming languages −

C C++ Java Python

#include <stdio.h>
#include <stdbool.h>
#define MAX 6

int intArray[MAX];
int front = 0;
int rear = -1;
int itemCount = 0;

bool isFull() { return itemCount == MAX; }
bool isEmpty() { return itemCount == 0; }

int removeData() {
   int data = intArray[front++];
   if (front == MAX)
      front = 0;
   itemCount--;
   return data;
}

void insert(int data) {
   if (!isFull()) {
      if (rear == MAX - 1)
         rear = -1;
      intArray[++rear] = data;
      itemCount++;
   }
}

int main() {
   insert(3); insert(5); insert(9); insert(1); insert(12); insert(15);
   printf("Queue: ");
   while (!isEmpty()) {
      int n = removeData();
      printf("%d ", n);
   }
   return 0;
}

Output
Queue: 3 5 9 1 12 15

#include <iostream>
using namespace std;
#define MAX 6

int intArray[MAX];
int front = 0;
int rear = -1;
int itemCount = 0;

bool isFull() { return itemCount == MAX; }
bool isEmpty() { return itemCount == 0; }

int removeData() {
   int data = intArray[front++];
   if (front == MAX)
      front = 0;
   itemCount--;
   return data;
}

void insert(int data) {
   if (!isFull()) {
      if (rear == MAX - 1)
         rear = -1;
      intArray[++rear] = data;
      itemCount++;
   }
}

int main() {
   insert(3); insert(5); insert(9); insert(1); insert(12); insert(15);
   cout << "Queue: ";
   while (!isEmpty()) {
      int n = removeData();
      cout << n << " ";
   }
   return 0;
}

Output
Queue: 3 5 9 1 12 15

import java.util.*;

public class Demo {
   static final int MAX = 6;
   static int intArray[] = new int[MAX];
   static int front = 0;
   static int rear = -1;
   static int itemCount = 0;

   public static boolean isFull() { return itemCount == MAX; }
   public static boolean isEmpty() { return itemCount == 0; }

   public static int removeData() {
      int data = intArray[front++];
      if (front == MAX)
         front = 0;
      itemCount--;
      return data;
   }

   public static void insert(int data) {
      if (!isFull()) {
         if (rear == MAX - 1)
            rear = -1;
         intArray[++rear] = data;
         itemCount++;
      }
   }

   public static void main(String[] args) {
      insert(3); insert(5); insert(9); insert(1); insert(12); insert(15);
      System.out.print("Queue: ");
      while (!isEmpty()) {
         int n = removeData();
         System.out.print(n + " ");
      }
   }
}

Output
Queue: 3 5 9 1 12 15

MAX = 6
intArray = [0] * MAX
front = 0
rear = -1
itemCount = 0

def isFull():
    return itemCount == MAX

def isEmpty():
    return itemCount == 0

def removeData():
    global front, itemCount
    data = intArray[front]
    front = front + 1
    if front == MAX:
        front = 0
    itemCount = itemCount - 1
    return data

def insert(data):
    global rear, itemCount
    if not isFull():
        if rear == MAX - 1:
            rear = -1
        rear = rear + 1
        intArray[rear] = data
        itemCount = itemCount + 1

insert(3); insert(5); insert(9); insert(1); insert(12); insert(15)
print("Queue: ", end="")
while not isEmpty():
    n = removeData()
    print(n, end=" ")

Output
Queue: 3 5 9 1 12 15

Queue Deletion Operation: dequeue()

The dequeue() operation removes an element from the queue. The following algorithm describes the dequeue() operation in a simple way.

Algorithm
1. START
2. Check if the queue is empty.
3. If the queue is empty, produce an underflow error and exit.
4. If the queue is not empty, access the data where front is pointing.
5. Increment the front pointer to point to the next available data element.
6. Return success.
7. END

Example

Following are the implementations of this operation in various programming languages −

C C++ Java Python

#include <stdio.h>
#include <stdbool.h>
#define MAX 6

int intArray[MAX];
int front = 0;
int rear = -1;
int itemCount = 0;

bool isFull() { return itemCount == MAX; }
bool isEmpty() { return itemCount == 0; }

void
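The C listing above is cut off at this point. As a compact, hedged sketch of both queue operations on the same circular array (the array layout and MAX = 6 follow the examples above; the names enqueue/dequeue and the exception choices are illustrative):

```python
# Minimal circular-array queue sketch, mirroring the array-based examples above.
MAX = 6
int_array = [0] * MAX
front, rear, item_count = 0, -1, 0

def is_full():
    return item_count == MAX

def is_empty():
    return item_count == 0

def enqueue(data):
    """Insert at the rear; wrap the rear pointer when it reaches the end."""
    global rear, item_count
    if is_full():
        raise OverflowError("queue is full")  # illustrative overflow handling
    if rear == MAX - 1:
        rear = -1
    rear += 1
    int_array[rear] = data
    item_count += 1

def dequeue():
    """Remove from the front; wrap the front pointer when it reaches the end."""
    global front, item_count
    if is_empty():
        raise IndexError("queue is empty")  # illustrative underflow handling
    data = int_array[front]
    front = (front + 1) % MAX
    item_count -= 1
    return data

for v in (3, 5, 9, 1, 12, 15):
    enqueue(v)
out = []
while not is_empty():
    out.append(dequeue())
print(out)  # → [3, 5, 9, 1, 12, 15]
```

Elements come out in the same order they went in, which is exactly the FIFO behavior described above.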

DSA – Linear Search Algorithm

Linear Search Algorithm

Linear search is a type of sequential searching algorithm. In this method, every element within the input array is traversed and compared with the key element to be found. If a match is found in the array, the search is said to be successful; if there is no match found, the search is said to be unsuccessful and gives the worst-case time complexity.

For instance, in the given animated diagram, we are searching for an element 33. Therefore, the linear search method searches for it sequentially from the very first element until it finds a match. This returns a successful search. In the same diagram, if we have to search for an element 46, then it returns an unsuccessful search, since 46 is not present in the input.

Linear Search Algorithm

The algorithm for linear search is relatively simple. The procedure starts at the very first index of the input array to be searched.

Step 1 − Start from the 0th index of the input array; compare the key value with the value present at the 0th index.
Step 2 − If the value matches the key, return the position at which the value was found.
Step 3 − If the value does not match the key, compare the next element in the array.
Step 4 − Repeat Step 3 until a match is found. Return the position at which the match was found.
Step 5 − If it is an unsuccessful search, print that the element is not present in the array and exit the program.

Pseudocode

procedure linear_search (list, value)
   for each item in the list
      if match item == value
         return the item's location
      end if
   end for
end procedure

Analysis

Linear search traverses through every element sequentially; therefore, the best case is when the element is found in the very first iteration. The best-case time complexity would be O(1).
However, the worst case of the linear search method is an unsuccessful search that does not find the key value in the array; it performs n iterations. Therefore, the worst-case time complexity of the linear search algorithm is O(n).

Example

Let us look at the step-by-step searching of the key element (say 47) in an array using the linear search method.

Step 1 − The linear search starts from the 0th index. Compare the key element with the value at the 0th index, 34. Since 47 ≠ 34, it moves to the next element.

Step 2 − Now, the key is compared with the value at the 1st index of the array. Still, 47 ≠ 10, making the algorithm move on for another iteration.

Step 3 − The next element, 66, is compared with 47. They do not match, so the algorithm compares the further elements.

Step 4 − Now the element at the 3rd index, 27, is compared with the key value, 47. They are not equal, so the algorithm moves on to check the next element.

Step 5 − Comparing the element at the 4th index of the array, 47, with the key, 47, we find that both elements match. Now, the position at which 47 is present, i.e., 4, is returned. The output achieved is "Element found at 4th index".

Implementation

In this tutorial, the Linear Search program can be seen implemented in four programming languages. The function compares the elements of the input with the key value and returns the position of the key in the array, or an unsuccessful search prompt if the key is not present in the array.
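The walkthrough above can be sketched directly in Python (the array and key are taken from the worked example; the function name and the -1 sentinel for an unsuccessful search are illustrative choices):

```python
def linear_search(arr, key):
    """Return the index of key in arr, or -1 on an unsuccessful search."""
    for i, value in enumerate(arr):
        if value == key:  # compare each element with the key, in order
            return i
    return -1

# The example array: 47 sits at index 4, so five comparisons are made.
arr = [34, 10, 66, 27, 47]
print(linear_search(arr, 47))  # → 4
print(linear_search(arr, 46))  # → -1 (unsuccessful search)
```

Because every element before the match must be inspected, the number of comparisons grows linearly with the position of the key, which is where the O(n) worst case comes from.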
C C++ Java Python

#include <stdio.h>

void linear_search(int a[], int n, int key) {
   int i, count = 0;
   for (i = 0; i < n; i++) {
      if (a[i] == key) { // compares each element of the array
         printf("The element is found at %d position\n", i + 1);
         count = count + 1;
      }
   }
   if (count == 0) // for unsuccessful search
      printf("The element is not present in the array\n");
}

int main() {
   int n = 6;
   int a[10] = {12, 44, 32, 18, 4, 10};
   int key = 18;
   linear_search(a, n, key);
   key = 23;
   linear_search(a, n, key);
   return 0;
}

Output
The element is found at 4 position
The element is not present in the array

#include <iostream>
using namespace std;

void linear_search(int a[], int n, int key) {
   int i, count = 0;
   for (i = 0; i < n; i++) {
      if (a[i] == key) { // compares each element of the array
         cout << "The element is found at position " << i + 1 << endl;
         count = count + 1;
      }
   }
   if (count == 0) // for unsuccessful search
      cout << "The element is not present in the array" << endl;
}

int main() {
   int n = 6;
   int a[10] = {12, 44, 32, 18, 4, 10};
   int key = 18;
   linear_search(a, n, key);
   key = 23;
   linear_search(a, n, key);
   return 0;
}

Output
The element is found at position 4
The element is not present in the array

import java.io.*;
import java.util.*;

public class LinearSearch {
   static void linear_search(int a[], int n, int key) {
      int i, count = 0;
      for (i = 0; i < n; i++) {
         if (a[i] == key) { // compares each element of the array
            System.out.println("The element is found at position " + (i + 1));
            count = count + 1;
         }
      }
      if (count == 0) // for unsuccessful search
         System.out.println("The element is not present in the array");
   }
   public static void main(String args[])

DSA – Interpolation Search

Interpolation Search Algorithm

Interpolation search is an improved variant of binary search. This search algorithm works on the probing position of the required value. For this algorithm to work properly, the data collection should be sorted and equally distributed.

Binary search has a huge advantage of time complexity over linear search. Linear search has a worst-case complexity of O(n), whereas binary search has O(log n).

There are cases where the location of the target data may be known in advance. For example, in the case of a telephone directory, if we want to search for the telephone number of "Morpheus", linear search and even binary search will seem slow, as we could directly jump to the memory space where the names starting with 'M' are stored.

Positioning in Binary Search

In binary search, if the desired data is not found, then the rest of the list is divided into two parts, lower and higher, and the search is carried out in one of them. Even when the data is sorted, binary search does not take advantage of the data's distribution to probe the position of the desired data.

Position Probing in Interpolation Search

Interpolation search finds a particular item by computing the probe position. Initially, the probe position is the position of the middle-most item of the collection. If a match occurs, then the index of the item is returned. To split the list into two parts, we use the following formula −

$$mid = Lo + \frac{\left( Hi - Lo \right)\ast \left( X - A\left[ Lo \right] \right)}{A\left[ Hi \right] - A\left[ Lo \right]}$$

where −

A = the list
Lo = lowest index of the list
Hi = highest index of the list
A[n] = value stored at index n in the list
X = the target value

If the middle item is smaller than the target item, then the probe position is again calculated in the sub-array to the right of the middle item.
Otherwise, the item is searched for in the sub-array to the left of the middle item. This process continues on the sub-array until the size of the sub-array reduces to zero.

Interpolation Search Algorithm

As it is an improvement on the existing binary search algorithm, we list the steps to find the index of the target data value, using position probing −

1. Start searching data from the middle of the list.
2. If it is a match, return the index of the item, and exit.
3. If it is not a match, probe the position.
4. Divide the list using the probing formula and find the new middle.
5. If data is greater than middle, search in the higher sub-list.
6. If data is smaller than middle, search in the lower sub-list.
7. Repeat until a match is found.

Pseudocode

A → Array list
N → Size of A
X → Target Value

Procedure Interpolation_Search()
   Set Lo → 0
   Set Mid → -1
   Set Hi → N-1
   While X does not match
      if Lo equals to Hi OR A[Lo] equals to A[Hi]
         EXIT: Failure, Target not found
      end if
      Set Mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])
      if A[Mid] = X
         EXIT: Success, Target found at Mid
      else
         if A[Mid] < X
            Set Lo to Mid+1
         else if A[Mid] > X
            Set Hi to Mid-1
         end if
      end if
   End While
End Procedure

Analysis

The runtime complexity of the interpolation search algorithm is O(log (log n)), as compared to O(log n) for binary search, in favorable situations.

Example

To understand the step-by-step process involved in interpolation search, let us look at an example and work through it. Consider the array of sorted elements given below −

10, 14, 19, 26, 27, 31, 33, 35, 42, 44

Let us search for the element 19.
Solution

Unlike binary search, the middle point in this approach is chosen using the formula −

$$mid = Lo + \frac{\left( Hi - Lo \right)\ast \left( X - A\left[ Lo \right] \right)}{A\left[ Hi \right] - A\left[ Lo \right]}$$

So for this given array input,

Lo = 0, A[Lo] = 10
Hi = 9, A[Hi] = 44
X = 19

Applying the formula to find the middle point in the list, we get

$$mid = 0 + \frac{\left( 9 - 0 \right)\ast \left( 19 - 10 \right)}{44 - 10}$$

$$mid = \frac{9\ast 9}{34}$$

$$mid = \frac{81}{34} = 2.38$$

Since mid is an index value, we only consider the integer part of the decimal. That is, mid = 2.

Comparing the given key element, 19, to the element present at the mid index, it is found that both elements match. Therefore, the element is found at index 2.

Implementation

Interpolation search is an improved variant of binary search. This search algorithm works on the probing position of the required value. For this algorithm to work properly, the data collection should be sorted and equally distributed.

C C++ Java Python

#include <stdio.h>
#define MAX 10

// array of items on which the search will be conducted
int list[MAX] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44 };

int interpolation_search(int data) {
   int lo = 0;
   int hi = MAX - 1;
   int mid = -1;
   int comparisons = 1;
   int index = -1;
   while (lo <= hi) {
      printf("Comparison %d\n", comparisons);
      printf("lo : %d, list[%d] = %d\n", lo, lo, list[lo]);
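The C listing above is truncated at this point. As a hedged sketch, the same probing logic can be written compactly in Python (the array is the one from the worked example; the function name, the -1 sentinel, and the equal-endpoints guard are illustrative choices):

```python
def interpolation_search(arr, x):
    """Return the index of x in a sorted, roughly uniformly
    distributed list arr, or -1 if x is not present."""
    lo, hi = 0, len(arr) - 1
    # Only probe while x could lie between arr[lo] and arr[hi].
    while lo <= hi and arr[lo] <= x <= arr[hi]:
        if arr[lo] == arr[hi]:  # guard against division by zero
            return lo if arr[lo] == x else -1
        # Probe position: lo + (hi - lo) * (x - arr[lo]) / (arr[hi] - arr[lo]),
        # keeping only the integer part, as in the worked example.
        mid = lo + (hi - lo) * (x - arr[lo]) // (arr[hi] - arr[lo])
        if arr[mid] == x:
            return mid
        if arr[mid] < x:      # target lies in the right sub-array
            lo = mid + 1
        else:                 # target lies in the left sub-array
            hi = mid - 1
    return -1

arr = [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]
print(interpolation_search(arr, 19))  # → 2, matching the worked example
print(interpolation_search(arr, 20))  # → -1 (not present)
```

For the key 19, the very first probe lands on index 2 (81 // 34 = 2), so the element is found in a single comparison, illustrating why interpolation search can beat binary search on uniformly distributed data.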