Approximation Algorithms

Approximation algorithms are designed to find near-optimal solutions to problems that cannot be solved optimally in polynomial time. Such problems are known as NP-complete problems. Many of them model important real-world tasks, so it becomes important to handle them by some other approach. NP-complete problems can still be dealt with in three cases: the input may be small enough that the execution time stays manageable, some instances can be reduced to special cases solvable in polynomial time, or approximation algorithms can be used to find near-optimal solutions. This leads to the concept of the performance ratio of an approximation algorithm.

Performance Ratios

The performance ratio of an approximation algorithm, also called its approximation ratio, measures how close the approximate solution is to the optimal solution. The ratio is written ρ(n), where n is the input size of the algorithm, C is the cost of the near-optimal solution produced by the algorithm, and C* is the cost of the optimal solution. The algorithm has an approximation ratio of ρ(n) if and only if −

$$\max\left\{\frac{C}{C^{\ast}}, \frac{C^{\ast}}{C}\right\} \leq \rho\left(n\right)$$

The algorithm is then called a ρ(n)-approximation algorithm. Approximation algorithms can be applied to two types of optimization problems: minimization problems and maximization problems. If the goal is to find the maximum cost, the problem is a maximization problem; if the goal is to find the minimum cost, it is a minimization problem. For maximization problems, the approximation ratio is C*/C, since 0 ≤ C ≤ C*.
For minimization problems, the approximation ratio is C/C*, since 0 ≤ C* ≤ C. Assuming that all costs are positive, the performance ratio is well defined and never less than 1; a ratio of exactly 1 means the approximation algorithm produced the optimal solution.

Examples

A few popular examples of approximation algorithms are −

Vertex Cover Algorithm
Set Cover Problem
Travelling Salesman Problem (Approximation Approach)
The Subset Sum Problem
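To make the ratio concrete, here is a minimal Python sketch of the classic greedy 2-approximation for Vertex Cover, one of the examples listed above (the function name and test graph are ours, not from this tutorial): repeatedly pick any uncovered edge and add both of its endpoints to the cover. The resulting cover C satisfies C ≤ 2·C*, so this is a ρ(n) = 2 approximation algorithm.

```python
# Greedy 2-approximation for Vertex Cover: for every edge whose
# endpoints are both still uncovered, take BOTH endpoints.

def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))      # take both endpoints of the edge
    return cover

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = approx_vertex_cover(edges)
# every edge has at least one endpoint in the cover
assert all(u in cover or v in cover for u, v in edges)
```

For this graph the optimal cover {0, 3} has size 2, while the greedy result has size 4 — exactly the factor-2 bound.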
DSA – Shell Sort Algorithm
Shell Sort Algorithm

Shell sort is a highly efficient sorting algorithm based on insertion sort. It avoids the large shifts that insertion sort performs when a small value sits far to the right and must be moved far to the left. The algorithm first applies insertion sort to widely spaced elements, then progressively sorts elements with smaller spacing. This spacing is termed the interval, and it is calculated using Knuth's formula −

h = h * 3 + 1

where h is the interval, with an initial value of 1. Shell sort is quite efficient for medium-sized data sets; its running time depends on the gap sequence used, and with Knuth's sequence the worst-case complexity is O(n^(3/2)), where n is the number of items.

Shell Sort Algorithm

Following is the algorithm for shell sort.

1. Initialize the value of h.
2. Divide the list into smaller sub-lists of equal interval h.
3. Sort these sub-lists using insertion sort.
4. Repeat until the complete list is sorted.

Pseudocode

Following is the pseudocode for shell sort.

procedure shellSort()
   A : array of items
   /* calculate interval */
   while interval < A.length / 3 do:
      interval = interval * 3 + 1
   end while
   while interval > 0 do:
      for outer = interval; outer < A.length; outer++ do:
         /* select value to be inserted */
         valueToInsert = A[outer]
         inner = outer
         /* shift elements towards right */
         while inner > interval - 1 && A[inner - interval] >= valueToInsert do:
            A[inner] = A[inner - interval]
            inner = inner - interval
         end while
         /* insert the number at hole position */
         A[inner] = valueToInsert
      end for
      /* calculate interval */
      interval = (interval - 1) / 3
   end while
end procedure

Example

Let us consider the following example to get an idea of how shell sort works. We take the same array we have used in our previous examples. For ease of understanding, we take an interval of 4. Make a virtual sub-list of all values located at an interval of 4 positions.
Here these values are {35, 14}, {33, 19}, {42, 27} and {10, 44}. We compare the values in each sub-list and swap them (if necessary) in the original array. After this step, the new array should look like this −

Then, we take an interval of 2, and this gap generates two sub-lists − {14, 27, 35, 42} and {19, 10, 33, 44}. We compare and swap the values, if required, in the original array. After this step, the array should look like this −

Finally, we sort the rest of the array using an interval of value 1. Shell sort uses insertion sort to sort the array. Following is the step-by-step depiction −

We see that it required only four swaps to sort the rest of the array.

Implementation

Shell sort is a highly efficient sorting algorithm based on insertion sort. It avoids the large shifts that insertion sort performs when a small value sits far to the right and must be moved far to the left.

C C++ Java Python

#include <stdio.h>

void shellSort(int arr[], int n) {
   int gap, j, k;
   for (gap = n/2; gap > 0; gap = gap/2) {   // initially gap = n/2, halved each pass
      for (j = gap; j < n; j++) {
         for (k = j - gap; k >= 0; k -= gap) {
            if (arr[k + gap] >= arr[k])
               break;
            else {
               int temp = arr[k + gap];
               arr[k + gap] = arr[k];
               arr[k] = temp;
            }
         }
      }
   }
}

int main() {
   int n = 5;
   int arr[5] = {33, 45, 62, 12, 98};   // initialize the array
   printf("Array before Sorting: ");
   for (int i = 0; i < n; i++)
      printf("%d ", arr[i]);
   printf("\n");
   shellSort(arr, n);
   printf("Array after Sorting: ");
   for (int i = 0; i < n; i++)
      printf("%d ", arr[i]);
   printf("\n");
}

Output

Array before Sorting: 33 45 62 12 98
Array after Sorting: 12 33 45 62 98

#include <iostream>
using namespace std;

void shellSort(int *arr, int n) {
   int gap, j, k;
   for (gap = n/2; gap > 0; gap = gap/2) {   // initially gap = n/2, halved each pass
      for (j = gap; j < n; j++) {
         for (k = j - gap; k >= 0; k -= gap) {
            if (arr[k + gap] >= arr[k])
               break;
            else {
               int temp = arr[k + gap];
               arr[k + gap] = arr[k];
               arr[k] = temp;
            }
         }
      }
   }
}

int main() {
   int n = 5;
   int
arr[5] = {33, 45, 62, 12, 98};   // initialize the array
   cout << "Array before Sorting: ";
   for (int i = 0; i < n; i++)
      cout << arr[i] << " ";
   cout << endl;
   shellSort(arr, n);
   cout << "Array after Sorting: ";
   for (int i = 0; i < n; i++)
      cout << arr[i] << " ";
   cout << endl;
}

Output

Array before Sorting: 33 45 62 12 98
Array after Sorting: 12 33 45 62 98

import java.io.*;
import java.util.*;

public class ShellSort {
   public static void main(String args[]) {
      int n = 5;
      int[] arr = {33, 45, 62, 12, 98};   // initialize an array
      System.out.print("Array before Sorting: ");
      for (int i = 0; i < n; i++)
         System.out.print(arr[i] + " ");
      System.out.println();
      int gap;
      for (gap = n/2; gap > 0; gap = gap/2) {   // initially gap = n/2, halved each pass
         for (int j = gap; j < n; j++) {
            for (int k = j - gap; k >= 0; k -= gap) {
               if (arr[k + gap] >= arr[k])
                  break;
               else {
                  int temp = arr[k + gap];
                  arr[k + gap] = arr[k];
                  arr[k] = temp;
               }
            }
         }
      }
      System.out.print("Array After Sorting: ");
      for (int i = 0; i < n; i++)
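The algorithm described above can also be sketched directly in Python, using Knuth's gap sequence h = 3*h + 1 from the text (function name is ours):

```python
# Shell sort with Knuth's gap sequence: start from the largest
# gap below n/3, do a gapped insertion sort, then shrink the gap.

def shell_sort(a):
    n = len(a)
    h = 1
    while h < n // 3:
        h = 3 * h + 1                 # Knuth's sequence: 1, 4, 13, 40, ...
    while h >= 1:
        for i in range(h, n):         # gapped insertion sort
            value = a[i]
            j = i
            while j >= h and a[j - h] > value:
                a[j] = a[j - h]       # shift larger elements right by h
                j -= h
            a[j] = value
        h //= 3                       # shrink the interval
    return a

print(shell_sort([33, 45, 62, 12, 98]))   # [12, 33, 45, 62, 98]
```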
DSA – Sublist Search
Sublist Search Algorithm

Until now in this tutorial, we have only seen how to search for a single element in a sequence of elements. The sublist search algorithm instead provides a procedure for searching for one linked list inside another linked list. It works like a simple pattern-matching algorithm whose aim is to determine whether one list is present in the other. The algorithm walks through the linked lists, comparing the first element of the second list with successive elements of the first list; whenever a match fails, the comparison restarts at the next element of the first list. This process continues until a full match is found or the end of a list is reached.

For example, consider two linked lists with values {4, 6, 7, 3, 8, 2, 6} and {3, 8, 2}. Sublist search checks whether the values of the second list are present, in order, in the first linked list. The output is a Boolean value {True, False}. It cannot return the position of the sub-list, as a linked list is not an indexed data structure.

Note − The output is True only if the second linked list is present in the first list in exactly the same order.

Sublist Search Algorithm

The main aim of this algorithm is to determine whether one linked list is a sub-list of another. The search is linear, checking each element of the linked list one by one; if the output is True, the second list is proven to be a sub-list of the first. The procedure for the sublist search algorithm is as follows −

Step 1 − Maintain two pointers, each pointing to one list. These pointers are used to traverse the linked lists.

Step 2 − Check the base cases of the linked lists −

If both linked lists are empty, return True.
If the second list is not empty but the first list is empty, return False.
If the first list is not empty but the second list is empty, return False.

Step 3 − Once it is established that both lists are non-empty, use the pointers to traverse the lists element by element.

Step 4 − Compare the current element of the first linked list with the current element of the second linked list; if they match, advance both pointers to the next node of their respective lists.

Step 5 − If they do not match, keep the pointer in the second list at its first element but move the pointer in the first list forward, and compare the elements again.

Step 6 − Repeat Steps 4 and 5 until the end of a list is reached.

Step 7 − If a full match is found, return TRUE; otherwise return FALSE.

Pseudocode

Begin Sublist Search
   list_ptr -> points to the first list
   sub_ptr  -> points to the second list
   if list_ptr == NULL and sub_ptr == NULL then
      return TRUE
   else if sub_ptr == NULL or list_ptr == NULL then
      return FALSE
   while list_ptr != NULL do
      ptr1 = list_ptr
      ptr2 = sub_ptr
      while ptr2 != NULL do
         if ptr1 == NULL then
            return FALSE
         else if ptr2->data == ptr1->data then
            ptr2 = ptr2->next
            ptr1 = ptr1->next
         else
            break
      done
      if ptr2 == NULL then
         return TRUE
      list_ptr = list_ptr->next
   done
   return FALSE
End

Analysis

The time complexity of sublist search depends on the number of elements in both linked lists involved. The worst-case running time is O(m*n), where m is the number of elements in the first linked list and n is the number of elements in the second linked list.

Example

Suppose we have two linked lists with elements given as −

List 1 = {2, 5, 3, 3, 6, 7, 0}
List 2 = {6, 7, 0}

Using sublist search, we need to find out whether List 2 is present in List 1.

Step 1

Compare the first element of List 2 with the first element of List 1. It is not a match, so the pointer in List 1 moves to the next node.
Step 2

In this step, the second element of List 1 is compared with the first element of List 2. It is not a match, so the pointer in List 1 moves to the next node.

Step 3

Now the third element of List 1 is compared with the first element of List 2. Since it is not a match, the pointer in List 1 moves forward.

Step 4

Now the fourth element of List 1 is compared with the first element of List 2. Since it is not a match, the pointer in List 1 moves forward.

Step 5

Now the fifth element of List 1 is compared with the first element of List 2. Since it is a match, the pointers in both List 1 and List 2 move forward.

Step 6

The sixth element of List 1 is compared with the second element of List 2. Since it is also a match, the pointers in both lists move forward.

Step 7

The seventh element of List 1 is compared with the third element of List 2. Since it is also a match, and every element of List 2 has now been matched in order, TRUE is returned.
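The pointer walk in the steps above can be sketched in Python on ordinary lists rather than linked lists (function name is ours); the base cases follow Step 2 of the procedure:

```python
# Sublist search: is `sub` present in `main`, contiguously and in
# the same order? Mirrors Steps 1-7 of the procedure above.

def is_sublist(main, sub):
    if not main and not sub:
        return True                    # both empty -> True (Step 2)
    if not main or not sub:
        return False                   # exactly one empty -> False
    for start in range(len(main) - len(sub) + 1):
        # compare element by element from this starting position
        if main[start:start + len(sub)] == sub:
            return True
    return False

print(is_sublist([4, 6, 7, 3, 8, 2, 6], [3, 8, 2]))   # True
print(is_sublist([4, 6, 7, 3, 8, 2, 6], [3, 2, 8]))   # False
```

The slice comparison makes the O(m*n) worst case from the Analysis section visible: up to m starting positions, each compared against up to n elements.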
DSA – Greedy Algorithms
Greedy Algorithms

Among all algorithmic approaches, the simplest and most straightforward is the greedy method. In this approach, each decision is taken on the basis of the currently available information, without worrying about the effect of that decision on the future. Greedy algorithms build a solution part by part, choosing each next part so that it gives an immediate benefit. This approach never reconsiders choices made previously, and it is mainly used to solve optimization problems. The greedy method is easy to implement and quite efficient in most cases. Hence, we can say that a greedy algorithm is an algorithmic paradigm based on a heuristic that makes the locally optimal choice at each step in the hope of finding a globally optimal solution. For many problems it does not produce an optimal solution, though it gives an approximate (near-optimal) solution in a reasonable time.

Components of Greedy Algorithm

Greedy algorithms have the following five components −

A candidate set − A solution is created from this set.
A selection function − Used to choose the best candidate to be added to the solution.
A feasibility function − Used to determine whether a candidate can contribute to the solution.
An objective function − Used to assign a value to a solution or a partial solution.
A solution function − Used to indicate whether a complete solution has been reached.

Areas of Application

The greedy approach is used to solve many problems, such as −

Finding the shortest path between two vertices using Dijkstra's algorithm.
Finding the minimal spanning tree of a graph using Prim's or Kruskal's algorithm.
Counting Coins Problem

The counting coins problem is to reach a desired value using the fewest possible coins, and the greedy approach forces the algorithm to pick the largest usable coin at each step. If we are provided coins of denominations 1, 2, 5 and 10 and asked to count 18, the greedy procedure is −

1 − Select one 10 coin; the remaining count is 8.
2 − Then select one 5 coin; the remaining count is 3.
3 − Then select one 2 coin; the remaining count is 1.
4 − Finally, the selection of one 1 coin solves the problem.

This works well here: we need to pick only 4 coins, which is optimal. But if we slightly change the problem, the same approach may not produce an optimal result. For a currency system with coins of value 1, 7 and 10, counting 18 greedily is still optimal (10 + 7 + 1), but for a count like 15 it may use more coins than necessary. The greedy approach picks 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins, whereas the same value can be reached with only 3 coins (7 + 7 + 1). Hence, we may conclude that the greedy approach picks an immediately optimal choice and may fail where global optimization is the major concern.

Where Greedy Approach Fails

For many problems, the greedy algorithm fails to find an optimal solution; moreover, it may produce a very poor one. Problems like Travelling Salesman and 0-1 Knapsack cannot be solved optimally using this approach.

Examples of Greedy Algorithm

Many networking algorithms use the greedy approach. Here is a list of a few of them −

Travelling Salesman Problem
Prim's Minimal Spanning Tree Algorithm
Kruskal's Minimal Spanning Tree Algorithm
Dijkstra's Shortest Path Algorithm
Graph − Map Coloring
Graph − Vertex Cover
DSA − Fractional Knapsack Problem
DSA − Job Sequencing with Deadline
DSA − Optimal Merge Pattern Algorithm

We will discuss these examples elaborately in the further chapters of this tutorial.
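The counting-coins walk-through above can be captured in a short Python sketch (function name is ours): greedy always takes the largest coin that still fits, which is optimal for {1, 2, 5, 10} but not for {1, 7, 10}.

```python
# Greedy coin counting: repeatedly take the largest denomination
# that does not exceed the remaining amount.

def greedy_coins(denominations, amount):
    picked = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            picked.append(coin)
            amount -= coin
    return picked

print(greedy_coins([1, 2, 5, 10], 18))   # [10, 5, 2, 1] -- optimal, 4 coins
print(greedy_coins([1, 7, 10], 15))      # [10, 1, 1, 1, 1, 1] -- 6 coins,
                                         # though 7 + 7 + 1 needs only 3
```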
DSA – Jump Search Algorithm
Jump Search Algorithm

The jump search algorithm is a slightly modified version of the linear search algorithm. The main idea is to reduce the running time by making fewer comparisons than linear search. The input array must be sorted, and it is divided into blocks so that the search can jump through these blocks. For example, in the diagram below, the sorted input array is searched in blocks of 3 elements each; the desired key is found after only 2 comparisons rather than the 6 comparisons of linear search.

This raises the question of how to divide the array into blocks. If the input array is of size n, the blocks are divided at intervals of √n. The first element of every block is compared with the key until a block element's value exceeds the key. Linear search is then performed only on the previous block, since the input is sorted. If the element is found, the search is successful; otherwise, an unsuccessful search is returned. The jump search algorithm is discussed in detail further in this chapter.

Jump Search Algorithm

The jump search algorithm takes a sorted array as input, which is divided into smaller blocks to make the search simpler. The algorithm is as follows −

Step 1 − If the size of the input array is n, the block size is √n. Set i = 0.

Step 2 − Compare the key with the ith element of the array. If it matches, return the position of the element; otherwise increment i by the block size.

Step 3 − Repeat Step 2 until the ith element is greater than the key.

Step 4 − The element, if present, must lie in the previous block, since the input array is sorted. Therefore, apply linear search on that block to find the element.

Step 5 − If the element is found, return its position.
If the element is not found, an unsuccessful search is reported.

Pseudocode

Begin
   blockSize := √size
   start := 0
   end := blockSize
   while end < size AND array[end] <= key do
      start := end
      end := end + blockSize
      if end > size - 1 then
         end := size
   done
   for i := start to end - 1 do
      if array[i] = key then
         return i
   done
   return invalid location
End

Analysis

The time complexity of the jump search technique is O(√n) and its space complexity is O(1).

Example

Let us understand the jump search algorithm by searching for the element 66 in the given sorted array, A, below −

Step 1

Initialize i = 0; the size of the input array is n = 12. Let the block size be m. Then m = √n = √12 ≈ 3.

Step 2

Compare A[0] with the key element and check whether it matches: A[0] = 0 ≠ 66. Therefore, i is incremented by the block size, 3. The element now compared with the key is A[3].

Step 3

A[3] = 14 ≠ 66. Since it is not a match, i is again incremented by 3.

Step 4

A[6] = 48 ≠ 66. i is incremented by 3 again, so A[9] is compared with the key element.

Step 5

A[9] = 88 ≠ 66. However, 88 is greater than 66, so linear search is applied to the previous block.

Step 6

Applying linear search, the pointer moves from the 6th index to the 7th, so A[7] is compared with the key element. We find that A[7] is the required element, hence the program returns index 7 as the output.

Implementation

The jump search algorithm is an extended variant of linear search. It divides the input array into multiple small blocks and performs linear search on the single block that is assumed to contain the element. If the element is not found in that block, an unsuccessful search is returned. The output prints the position of the element in the array rather than its index: indexing starts from 0, while position counts from 1.
C C++ Java Python

#include <stdio.h>
#include <math.h>

int jump_search(int[], int, int);

int main() {
   int n, key, index;
   int arr[12] = {0, 6, 12, 14, 19, 22, 48, 66, 79, 88, 104, 126};
   int len = sizeof(arr) / sizeof(arr[0]);
   printf("Array elements are:");
   for (int j = 0; j < len; j++)
      printf(" %d", arr[j]);
   n = 12;
   key = 66;
   printf("\nThe element to be searched: %d\n", key);
   index = jump_search(arr, n, key);
   if (index >= 0)
      printf("The element is found at position %d", index + 1);
   else
      printf("Unsuccessful Search");
   return 0;
}

int jump_search(int arr[], int n, int key) {
   int i, j, m, k;
   i = 0;
   m = sqrt(n);
   k = m;
   while (m < n && arr[m] <= key) {   // jump ahead block by block
      i = m;
      m += k;
   }
   if (m > n)
      m = n;                          // clamp the last block to the array end
   // linear search on the block
   for (j = i; j < m; j++) {
      if (arr[j] == key)
         return j;
   }
   return -1;
}

Output

Array elements are: 0 6 12 14 19 22 48 66 79 88 104 126
The element to be searched: 66
The element is found at position 8

#include <iostream>
#include <cmath>
using namespace std;

int jump_search(int[], int, int);

int main() {
   int i, n, key, index;
   int arr[12] = {0, 6, 12,
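A Python sketch of the same procedure (function name is ours), matching Steps 1-5 above: jump ahead in blocks of ⌊√n⌋, then finish with a linear scan inside the block that may hold the key.

```python
import math

# Jump search on a sorted list: skip ahead while the last element
# of the current block is still smaller than the key, then do a
# linear search inside that block.

def jump_search(arr, key):
    n = len(arr)
    step = int(math.sqrt(n))
    prev = 0
    # jump block by block while the block's last value < key
    while prev < n and arr[min(prev + step, n) - 1] < key:
        prev += step
    # linear search inside the candidate block
    for i in range(prev, min(prev + step, n)):
        if arr[i] == key:
            return i
    return -1                          # unsuccessful search

a = [0, 6, 12, 14, 19, 22, 48, 66, 79, 88, 104, 126]
print(jump_search(a, 66))              # index 7, i.e. position 8
```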
Kruskal's Minimal Spanning Tree Algorithm

Kruskal's minimal spanning tree algorithm is one of the efficient methods for finding the minimum spanning tree of a graph. A minimum spanning tree is a subgraph that connects all the vertices of the main graph with the fewest possible edges and minimum cost (the sum of the weights assigned to each edge). The algorithm starts from the forest − defined as a subgraph containing only the vertices of the main graph − and adds least-cost edges one by one until the minimum spanning tree is created, without forming cycles in the graph. Kruskal's algorithm is easier to implement than Prim's algorithm, but has higher complexity.

Kruskal's Algorithm

The inputs taken by Kruskal's algorithm are the graph G(V, E), where V is the set of vertices and E is the set of edges; the minimum spanning tree of graph G is obtained as output.

Algorithm

Sort all the edges of the graph in ascending order and store them in an array edge[].
Construct the forest of the graph on a plane with all the vertices in it.
Select the least-cost edge from the edge[] array and add it to the forest of the graph.
Mark the vertices visited by adding them to the visited[] array.
Repeat the previous two steps until all the vertices are visited, without any cycles forming in the graph.
When all the vertices are visited, the minimum spanning tree is formed. Calculate the minimum cost of the resulting spanning tree.

Example

Construct a minimum spanning tree using Kruskal's algorithm for the graph given below −

Solution

As the first step, sort all the edges in the given graph in ascending order and store the values in an array.

Edge:  B→D  A→B  C→F  F→E  B→C  G→F  A→G  C→D  D→E  C→G
Cost:   5    6    9   10   11   12   15   17   22   25

Then, construct a forest of the given graph on a single plane.
From the list of sorted edge costs, select the least-cost edge and add it onto the forest in the output graph.

B → D = 5
Minimum cost = 5
Visited array, v = {B, D}

Similarly, the next least-cost edge is B → A = 6; so we add it onto the output graph.

Minimum cost = 5 + 6 = 11
Visited array, v = {B, D, A}

The next least-cost edge is C → F = 9; add it onto the output graph.

Minimum cost = 5 + 6 + 9 = 20
Visited array, v = {B, D, A, C, F}

The next edge to be added onto the output graph is F → E = 10.

Minimum cost = 5 + 6 + 9 + 10 = 30
Visited array, v = {B, D, A, C, F, E}

The next edge from the least-cost array is B → C = 11, hence we add it to the output graph.

Minimum cost = 5 + 6 + 9 + 10 + 11 = 41
Visited array, v = {B, D, A, C, F, E}

The last edge from the least-cost array to be added to the output graph is G → F = 12.

Minimum cost = 5 + 6 + 9 + 10 + 11 + 12 = 53
Visited array, v = {B, D, A, C, F, E, G}

The obtained result is the minimum spanning tree of the given graph, with cost = 53.

Example

The final program implements Kruskal's minimum spanning tree problem, taking the cost adjacency matrix as input and printing the spanning tree as output along with the minimum cost.
C C++ Java Python

#include <stdio.h>
#include <stdlib.h>

const int inf = 999999;
int k, a, b, u, v, n, ne = 1;
int mincost = 0;
int cost[3][3] = {{0, 10, 20}, {12, 0, 15}, {16, 18, 0}};
int p[9] = {0};

int applyfind(int i) {
   while (p[i] != 0)
      i = p[i];
   return i;
}

int applyunion(int i, int j) {
   if (i != j) {
      p[j] = i;
      return 1;
   }
   return 0;
}

int main() {
   n = 3;
   int i, j;
   for (i = 0; i < n; i++) {
      for (j = 0; j < n; j++) {
         if (cost[i][j] == 0)
            cost[i][j] = inf;
      }
   }
   printf("Minimum Cost Spanning Tree:\n");
   while (ne < n) {
      int min_val = inf;
      for (i = 0; i < n; i++) {
         for (j = 0; j < n; j++) {
            if (cost[i][j] < min_val) {
               min_val = cost[i][j];
               a = u = i;
               b = v = j;
            }
         }
      }
      u = applyfind(u);
      v = applyfind(v);
      if (applyunion(u, v) != 0) {
         printf("%d -> %d\n", a, b);
         mincost += min_val;
      }
      cost[a][b] = cost[b][a] = 999;
      ne++;
   }
   printf("Minimum cost = %d", mincost);
   return 0;
}

Output

Minimum Cost Spanning Tree:
0 -> 1
1 -> 2
Minimum cost = 25

#include <iostream>
using namespace std;

const int inf = 999999;
int k, a, b, u, v, n, ne = 1;
int mincost = 0;
int cost[3][3] = {{0, 10, 20}, {12, 0, 15}, {16, 18, 0}};
int p[9] = {0};

int applyfind(int i) {
   while (p[i] != 0) {
      i = p[i];
   }
   return i;
}

int applyunion(int i, int j) {
   if
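The worked example above can also be checked with a short Python sketch of Kruskal's algorithm using union-find (function names are ours; the edge list is taken from the example graph):

```python
# Kruskal's algorithm: take edges in ascending cost order and add
# each edge whose endpoints are in different union-find components.

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):                       # walk up to the component root
        while parent[v] != v:
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):      # ascending edge costs
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding this edge forms no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(5, 'B', 'D'), (6, 'A', 'B'), (9, 'C', 'F'), (10, 'F', 'E'),
         (11, 'B', 'C'), (12, 'G', 'F'), (15, 'A', 'G'), (17, 'C', 'D'),
         (22, 'D', 'E'), (25, 'C', 'G')]
mst, total = kruskal('ABCDEFG', edges)
print(total)                           # 53, matching the walk-through
```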
DSA – Queue Data Structure
Queue Data Structure

What is a Queue?

A queue is a linear data structure in which elements are stored on the FIFO (First In, First Out) principle: the first element inserted is the first element to be accessed. A queue is an Abstract Data Type (ADT) similar to a stack; what makes a queue different from a stack is that a queue is open at both ends. Data is inserted into the queue through one end and deleted from it at the other end. Queues are very frequently used in most programming languages.

A real-world example of a queue is a single-lane one-way road, where the vehicle that enters first exits first. More real-world examples can be seen in queues at ticket windows and bus stops.

Representation of Queues

Similar to the stack ADT, a queue ADT can also be implemented using arrays, linked lists, or pointers. As a small example in this tutorial, we implement queues using a one-dimensional array.

Basic Operations in Queue

Queue operations also include initialization of a queue, usage, and permanently deleting the data from memory. The most fundamental operations of the queue ADT are enqueue(), dequeue(), peek(), isFull(), and isEmpty(). These operations carry out data manipulation and check the status of the queue.

A queue uses two pointers − front and rear. The front pointer accesses data from the front end (used in dequeuing), while the rear pointer accesses data from the rear end (used in enqueueing).

Queue Insertion Operation: enqueue()

The enqueue() operation inserts an element into the queue. The following algorithm describes the enqueue() operation in a simple way.

Algorithm

1. START
2. Check if the queue is full.
3. If the queue is full, produce an overflow error and exit.
4. If the queue is not full, increment the rear pointer to point to the next empty space.
5. Add the data element to the queue location where rear is pointing.
6. Return success.
7.
END

Example

Following are the implementations of this operation in various programming languages −

C C++ Java Python

#include <stdio.h>
#include <stdbool.h>

#define MAX 6

int intArray[MAX];
int front = 0;
int rear = -1;
int itemCount = 0;

bool isFull() {
   return itemCount == MAX;
}

bool isEmpty() {
   return itemCount == 0;
}

int removeData() {
   int data = intArray[front++];
   if (front == MAX) {
      front = 0;   // wrap around (circular buffer)
   }
   itemCount--;
   return data;
}

void insert(int data) {
   if (!isFull()) {
      if (rear == MAX - 1) {
         rear = -1;   // wrap around before incrementing
      }
      intArray[++rear] = data;
      itemCount++;
   }
}

int main() {
   insert(3); insert(5); insert(9); insert(1); insert(12); insert(15);
   printf("Queue: ");
   while (!isEmpty()) {
      int n = removeData();
      printf("%d ", n);
   }
}

Output

Queue: 3 5 9 1 12 15

#include <iostream>
#define MAX 6

int intArray[MAX];
int front = 0;
int rear = -1;
int itemCount = 0;

bool isFull() {
   return itemCount == MAX;
}

bool isEmpty() {
   return itemCount == 0;
}

int removeData() {
   int data = intArray[front++];
   if (front == MAX) {
      front = 0;
   }
   itemCount--;
   return data;
}

void insert(int data) {
   if (!isFull()) {
      if (rear == MAX - 1) {
         rear = -1;
      }
      intArray[++rear] = data;
      itemCount++;
   }
}

int main() {
   insert(3); insert(5); insert(9); insert(1); insert(12); insert(15);
   std::cout << "Queue: ";
   while (!isEmpty()) {
      std::cout << removeData() << " ";
   }
}

Output

Queue: 3 5 9 1 12 15

import java.util.*;

public class Demo {
   static final int MAX = 6;
   static int intArray[] = new int[MAX];
   static int front = 0;
   static int rear = -1;
   static int itemCount = 0;

   public static boolean isFull() {
      return itemCount == MAX;
   }

   public static boolean isEmpty() {
      return itemCount == 0;
   }

   public static int removeData() {
      int data = intArray[front++];
      if (front == MAX) {
         front = 0;
      }
      itemCount--;
      return data;
   }

   public static void insert(int data) {
      if (!isFull()) {
         if (rear == MAX - 1) {
            rear = -1;
         }
         intArray[++rear] = data;
         itemCount++;
      }
   }

   public static void main(String[] args) {
      insert(3);
      insert(5);
      insert(9);
insert(1);
      insert(12);
      insert(15);
      System.out.print("Queue: ");
      while (!isEmpty()) {
         int n = removeData();
         System.out.print(n + " ");
      }
   }
}

Output

Queue: 3 5 9 1 12 15

MAX = 6
intArray = [0] * MAX
front = 0
rear = -1
itemCount = 0

def isFull():
    return itemCount == MAX

def isEmpty():
    return itemCount == 0

def removeData():
    global front, itemCount
    data = intArray[front]
    front = (front + 1) % MAX    # wrap around (circular buffer)
    itemCount -= 1
    return data

def insert(data):
    global rear, itemCount
    if not isFull():
        rear = (rear + 1) % MAX  # wrap around before storing
        intArray[rear] = data
        itemCount += 1

insert(3); insert(5); insert(9); insert(1); insert(12); insert(15)
print("Queue: ", end="")
while not isEmpty():
    print(removeData(), end=" ")

Output

Queue: 3 5 9 1 12 15

Queue Deletion Operation: dequeue()

The dequeue() operation removes an element from the queue. The following algorithm describes the dequeue() operation in a simple way.

Algorithm

1. START
2. Check if the queue is empty.
3. If the queue is empty, produce an underflow error and exit.
4. If the queue is not empty, access the data where front is pointing.
5. Increment the front pointer to point to the next available data element.
6. Return success.
7. END

Example

Following are the implementations of this operation in various programming languages −

C C++ Java Python

#include <stdio.h>
#include <stdbool.h>

#define MAX 6

int intArray[MAX];
int front = 0;
int rear = -1;
int itemCount = 0;

bool isFull() {
   return itemCount == MAX;
}

bool isEmpty() {
   return itemCount == 0;
}

void
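In everyday Python code, the same FIFO behavior is usually obtained from the standard library's collections.deque, which avoids manual front/rear bookkeeping; a minimal sketch:

```python
from collections import deque

# A FIFO queue on deque: append() enqueues at the rear,
# popleft() dequeues from the front, both in O(1).

queue = deque()
for item in (3, 5, 9, 1, 12, 15):
    queue.append(item)                 # enqueue at the rear

out = []
while queue:
    out.append(queue.popleft())        # dequeue from the front

print(out)                             # [3, 5, 9, 1, 12, 15] -- FIFO order
```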
Linear Search Algorithm

Linear search is a type of sequential searching algorithm. In this method, every element of the input array is traversed and compared with the key element to be found. If a match is found in the array, the search is said to be successful; if no match is found, the search is said to be unsuccessful and gives the worst-case time complexity.

For instance, in the given animated diagram, we are searching for the element 33. The linear search method searches for it sequentially from the very first element until it finds a match, returning a successful search. In the same diagram, if we search for the element 46, the search is unsuccessful, since 46 is not present in the input.

Linear Search Algorithm

The algorithm for linear search is relatively simple. The procedure starts at the very first index of the input array to be searched.

Step 1 − Start from the 0th index of the input array and compare the key value with the value present at the 0th index.

Step 2 − If the value matches the key, return the position at which the value was found.

Step 3 − If the value does not match the key, compare the next element in the array.

Step 4 − Repeat Step 3 until a match is found. Return the position at which the match was found.

Step 5 − If the search is unsuccessful, print that the element is not present in the array and exit the program.

Pseudocode

procedure linear_search (list, value)
   for each item in the list
      if match item == value
         return the item's location
      end if
   end for
end procedure

Analysis

Linear search traverses every element sequentially; therefore, the best case is when the element is found in the very first iteration, giving a best-case time complexity of O(1). However, the worst case of the linear search method is an unsuccessful search that does not find the key value in the array, performing n iterations.
Therefore, the worst-case time complexity of the linear search algorithm is O(n).

Example

Let us look at the step-by-step searching of the key element (say 47) in an array using the linear search method.

Step 1 − The linear search starts from the 0th index. Compare the key element with the value at the 0th index, 34. However, 47 ≠ 34, so the algorithm moves to the next element.

Step 2 − Now, the key is compared with the value at the 1st index of the array. Still, 47 ≠ 10, so the algorithm moves on to another iteration.

Step 3 − The next element, 66, is compared with 47. They do not match, so the algorithm compares the further elements.

Step 4 − Now the element at the 3rd index, 27, is compared with the key value, 47. They are not equal, so the algorithm checks the next element.

Step 5 − The element at the 4th index of the array, 47, is compared with the key 47. Both elements match. The position at which 47 is present, i.e., 4, is returned. The output achieved is "Element found at 4th index".

Implementation

In this tutorial, the Linear Search program can be seen implemented in four programming languages. The function compares the elements of the input with the key value and returns the position of the key in the array, or an unsuccessful search prompt if the key is not present in the array.
C C++ Java Python

#include <stdio.h>
void linear_search(int a[], int n, int key){
   int i, count = 0;
   for(i = 0; i < n; i++) {
      if(a[i] == key) { // compares each element of the array
         printf("The element is found at %d position\n", i+1);
         count = count + 1;
      }
   }
   if(count == 0) // for unsuccessful search
      printf("The element is not present in the array\n");
}
int main(){
   int n, key;
   n = 6;
   int a[10] = {12, 44, 32, 18, 4, 10};
   key = 18;
   linear_search(a, n, key);
   key = 23;
   linear_search(a, n, key);
   return 0;
}

Output
The element is found at 4 position
The element is not present in the array

#include <iostream>
using namespace std;
void linear_search(int a[], int n, int key){
   int i, count = 0;
   for(i = 0; i < n; i++) {
      if(a[i] == key) { // compares each element of the array
         cout << "The element is found at position " << i+1 << endl;
         count = count + 1;
      }
   }
   if(count == 0) // for unsuccessful search
      cout << "The element is not present in the array" << endl;
}
int main(){
   int n, key;
   n = 6;
   int a[10] = {12, 44, 32, 18, 4, 10};
   key = 18;
   linear_search(a, n, key);
   key = 23;
   linear_search(a, n, key);
   return 0;
}

Output
The element is found at position 4
The element is not present in the array

import java.io.*;
import java.util.*;
public class LinearSearch {
   static void linear_search(int a[], int n, int key) {
      int i, count = 0;
      for(i = 0; i < n; i++) {
         if(a[i] == key) { // compares each element of the array
            System.out.println("The element is found at position " + (i+1));
            count = count + 1;
         }
      }
      if(count == 0) // for unsuccessful search
         System.out.println("The element is not present in the array");
   }
   public static void main(String args[]) {
      int n = 6;
      int a[] = {12, 44, 32, 18, 4, 10};
      linear_search(a, n, 18);
      linear_search(a, n, 23);
   }
}

Output
The element is found at position 4
The element is not present in the array
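The Python tab's listing appears truncated in this copy; a minimal Python equivalent of the C version above, using the same input array and key values, might look like this:

```python
def linear_search(a, n, key):
    # compare each element of the array with the key
    count = 0
    for i in range(n):
        if a[i] == key:
            print("The element is found at position", i + 1)
            count += 1
    if count == 0:  # unsuccessful search
        print("The element is not present in the array")

a = [12, 44, 32, 18, 4, 10]
linear_search(a, 6, 18)
linear_search(a, 6, 23)
```

As in the other versions, the function reports a 1-based position for a successful search and a fixed message for an unsuccessful one.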
Interpolation Search Algorithm

Interpolation search is an improved variant of binary search. This search algorithm works on the probing position of the required value. For this algorithm to work properly, the data collection should be sorted and equally distributed. Binary search has a huge advantage of time complexity over linear search: linear search has a worst-case complexity of O(n), whereas binary search has O(log n). There are cases where the location of the target data may be known in advance. For example, in the case of a telephone directory, if we want to search for the telephone number of "Morpheus", linear search and even binary search will seem slow, as we could directly jump to the memory space where the names starting with 'M' are stored.

Positioning in Binary Search

In binary search, if the desired data is not found, the rest of the list is divided into two parts, lower and higher, and the search is carried out in one of them. Even when the data is sorted, binary search does not take advantage of probing the position of the desired data.

Position Probing in Interpolation Search

Interpolation search finds a particular item by computing the probe position. Initially, the probe position is the position of the middle-most item of the collection. If a match occurs, then the index of the item is returned. To split the list into two parts, we use the following formula −

$$mid = Lo + \frac{(Hi - Lo)\ast (X - A[Lo])}{A[Hi] - A[Lo]}$$

where −

A = list
Lo = Lowest index of the list
Hi = Highest index of the list
A[n] = Value stored at index n in the list

If the middle item is smaller than the target item, then the probe position is again calculated in the sub-array to the right of the middle item.
Otherwise, the item is searched in the sub-array to the left of the middle item. This process continues on the sub-array until the size of the sub-array reduces to zero.

Interpolation Search Algorithm

As it is an improvisation of the existing binary search algorithm, the steps to search for the 'target' data value index, using position probing, are as follows −

1. Start searching data from the middle of the list.
2. If it is a match, return the index of the item, and exit.
3. If it is not a match, probe the position.
4. Divide the list using the probing formula and find the new middle.
5. If the data is greater than the middle, search in the higher sub-list.
6. If the data is smaller than the middle, search in the lower sub-list.
7. Repeat until a match is found.

Pseudocode

A → Array list
N → Size of A
X → Target Value

Procedure Interpolation_Search()
   Set Lo → 0
   Set Mid → -1
   Set Hi → N-1
   While X does not match
      if Lo equals to Hi OR A[Lo] equals to A[Hi]
         EXIT: Failure, Target not found
      end if
      Set Mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])
      if A[Mid] = X
         EXIT: Success, Target found at Mid
      else
         if A[Mid] < X
            Set Lo to Mid+1
         else if A[Mid] > X
            Set Hi to Mid-1
         end if
      end if
   End While
End Procedure

Analysis

The runtime complexity of the interpolation search algorithm is O(log (log n)), as compared to O(log n) of binary search, in favorable situations.

Example

To understand the step-by-step process involved in interpolation search, let us look at an example and work through it. Consider the sorted array of elements given below −

10, 14, 19, 26, 27, 31, 33, 35, 42, 44

Let us search for the element 19.
Solution

Unlike binary search, the middle point in this approach is chosen using the formula −

$$mid = Lo + \frac{(Hi - Lo)\ast (X - A[Lo])}{A[Hi] - A[Lo]}$$

So for the given input array,

Lo = 0, A[Lo] = 10
Hi = 9, A[Hi] = 44
X = 19

Applying the formula to find the middle point in the list, we get

$$mid = 0 + \frac{(9 - 0)\ast (19 - 10)}{44 - 10}$$

$$mid = \frac{9\ast 9}{34}$$

$$mid = \frac{81}{34} = 2.38$$

Since mid is an index value, we only consider the integer part of the decimal. That is, mid = 2. Comparing the given key element, 19, to the element present at the mid index, it is found that both elements match. Therefore, the element is found at index 2.

Implementation

The function computes the probe position on each iteration and compares the element at that position with the key; it returns the position of the key in the array, or an unsuccessful search prompt if the key is not present.

C C++ Java Python

#include <stdio.h>
#define MAX 10

// array of items on which interpolation search will be conducted
int list[MAX] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44 };

int interpolation_search(int data){
   int lo = 0;
   int hi = MAX - 1;
   int mid = -1;
   int comparisons = 1;
   int index = -1;
   while(lo <= hi) {
      printf("Comparison %d\n", comparisons++);
      printf("lo : %d, list[%d] = %d\n", lo, lo, list[lo]);
      // probe the position using the interpolation formula
      mid = lo + (((double)(hi - lo) / (list[hi] - list[lo])) * (data - list[lo]));
      if(list[mid] == data) { index = mid; break; }   // data found
      else if(list[mid] < data) lo = mid + 1;          // search the right sub-array
      else hi = mid - 1;                               // search the left sub-array
   }
   return index;
}

int main(){
   int location = interpolation_search(19);
   if(location != -1) printf("Element found at location: %d\n", location);
   else printf("Element not found.\n");
   return 0;
}
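The remaining tabs are cut off in this copy; a minimal Python version of the same interpolation search, assuming the same sorted list, might look like this:

```python
LIST = [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]

def interpolation_search(data):
    lo, hi = 0, len(LIST) - 1
    # the range check also rejects keys outside [LIST[lo], LIST[hi]]
    while lo <= hi and LIST[lo] <= data <= LIST[hi]:
        if LIST[hi] == LIST[lo]:
            return lo  # all candidates equal; the range check guarantees a match
        # probe the position using the interpolation formula
        mid = lo + (hi - lo) * (data - LIST[lo]) // (LIST[hi] - LIST[lo])
        if LIST[mid] == data:
            return mid
        elif LIST[mid] < data:
            lo = mid + 1   # search the right sub-array
        else:
            hi = mid - 1   # search the left sub-array
    return -1  # unsuccessful search

print("Element found at location:", interpolation_search(19))
```

For the key 19, the very first probe lands on mid = 0 + 9 × (19 − 10) // (44 − 10) = 2, matching the worked example above.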
Exponential Search Algorithm

The exponential search algorithm targets a range of the input array in which it assumes that the required element must be present, and performs a binary search on that particular small range. This algorithm is also known as doubling search or finger search. It is similar to jump search in dividing the sorted input into multiple blocks and conducting a smaller-scale search. However, it differs in how the blocks are divided and in the type of smaller-scale search applied (jump search applies linear search, while exponential search applies binary search). Hence, this algorithm jumps exponentially in powers of 2. In simpler words, the search is performed on blocks divided using pow(2, k), where k is an integer greater than or equal to 0. Once the element at position pow(2, k) is greater than the key element, binary search is performed on the current block.

Exponential Search Algorithm

In the exponential search algorithm, the jump starts from the 1st index of the array, so we manually compare the first element as the first step of the algorithm.

Step 1 − Compare the first element in the array with the key; if a match is found, return the 0th index.
Step 2 − Initialize i = 1 and compare the ith element of the array with the key to be searched. If it matches, return the index.
Step 3 − If the element does not match, jump through the array exponentially in powers of 2. The algorithm now compares the element present at the incremental position.
Step 4 − If a match is found, the index is returned. Otherwise, Step 3 is repeated iteratively until the element at the incremental position becomes greater than the key to be searched.
Step 5 − Since the element at the incremented position is greater than the key and the input is sorted, the algorithm applies binary search on the current block.
Step 6 − The index at which the key is present is returned if a match is found; otherwise, it is determined to be an unsuccessful search.

Pseudocode

Begin
   start := 1
   m := pow(2, start)      // m is the upper bound of the current block
   if array[0] == key
      return 0
   while m < size AND array[m] <= key do   // size is the size of the input
      start := start + 1
      m := pow(2, start)
   low := m / 2
   high := minimum(m, size - 1)
   while low <= high do    // binary search on the current block
      mid := low + (high - low) / 2
      if array[mid] == key
         return mid
      else if array[mid] < key
         low := mid + 1
      else
         high := mid - 1
   done
   return invalid location
End

Analysis

Even though it is called exponential search, the algorithm does not run in exponential time. As we know, the basic search being performed in this algorithm is binary search. Therefore, the time complexity of the exponential search algorithm is the same as the binary search algorithm's, O(log n).

Example

To understand the exponential search algorithm better, let us search for an element in an example input array. The sorted input array given to the search algorithm is −

Let us search for the position of element 81 in the given array.

Step 1 − Compare the first element of the array with the key element 81. The first element of the array is 6, but the key element to be searched is 81; hence, the jump starts from the 1st index, as no match is found.

Step 2 − After initializing i = 1, the key element is compared with the element at the first index. Here, the element at the 1st index does not match the key element, so the index is again incremented exponentially in powers of 2. The index is incremented to 2^1 = 2, and the element at the 2nd index is compared with the key element. It is still not a match, so the index is incremented once again.

Step 3 − The index is incremented in powers of 2 again: 2^2 = 4. The element at the 4th index is compared with the key element, and a match is not found yet.

Step 4 − The index is incremented exponentially once again.
This time, the element at the 8th index is compared with the key element, and a match is not found. However, the element at the 8th index is greater than the key element. Hence, binary search is applied on the current block of elements.

Step 5 − The current block of elements includes the elements at indices [4, 5, 6, 7]. A small-scale binary search is applied on this block of elements, where the mid is calculated to be the 5th element.

Step 6 − The match is not found at the mid element, and it is determined that the desired element is greater than the mid element. Hence, the search takes place in the right half of the block, and the mid is now set as the 6th element.

Step 7 − The element is still not found at the 6th element, so the algorithm now searches in the right half of the mid element. The next mid is set as the 7th element. Here, the element is found at the 7th index.

Implementation

In the implementation of the exponential search algorithm, the program checks for matches at every exponential jump in powers of 2. If a match is found, the location of the key is returned; otherwise, it prompts an unsuccessful search.
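The implementation listing is cut off in this copy; a minimal Python sketch of exponential search, following the pseudocode above, might look like this. The array is assumed for illustration: the text only states that its first element is 6 and that the key 81 sits at index 7.

```python
def binary_search(arr, low, high, key):
    # standard binary search on arr[low..high]
    while low <= high:
        mid = low + (high - low) // 2
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # unsuccessful search

def exponential_search(arr, key):
    if arr[0] == key:
        return 0
    # jump exponentially in powers of 2 until the element exceeds the key
    m = 1
    while m < len(arr) and arr[m] <= key:
        m *= 2
    # binary search on the current block [m // 2, min(m, len(arr) - 1)]
    return binary_search(arr, m // 2, min(m, len(arr) - 1), key)

# assumed example array; only 6 (index 0) and 81 (index 7) appear in the text
arr = [6, 11, 19, 24, 33, 54, 67, 81, 94, 102]
print("Element found at index", exponential_search(arr, 81))
```

For the key 81, the jump visits indices 1, 2, 4, and 8; since arr[8] exceeds the key, binary search runs on the block starting at index 4 and finds 81 at index 7, matching the worked example.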