DSA – Max-Min Problem

Max-Min Problem Table of content Max-Min Problem Naive Method Divide and Conquer Approach ”; Previous Next Let us consider a simple problem that can be solved by divide and conquer technique. Max-Min Problem The Max-Min Problem in algorithm analysis is finding the maximum and minimum value in an array. Solution To find the maximum and minimum numbers in a given array numbers[] of size n, the following algorithm can be used. First we are representing the naive method and then we will present divide and conquer approach. Naive Method Naive method is a basic method to solve any problem. In this method, the maximum and minimum number can be found separately. To find the maximum and minimum numbers, the following straightforward algorithm can be used. Algorithm: Max-Min-Element (numbers[]) max := numbers[1] min := numbers[1] for i = 2 to n do if numbers[i] > max then max := numbers[i] if numbers[i] < min then min := numbers[i] return (max, min) Example Following are the implementations of the above approach in various programming languages − C C++ Java Python #include <stdio.h> struct Pair { int max; int min; }; // Function to find maximum and minimum using the naive algorithm struct Pair maxMinNaive(int arr[], int n) { struct Pair result; result.max = arr[0]; result.min = arr[0]; // Loop through the array to find the maximum and minimum values for (int i = 1; i < n; i++) { if (arr[i] > result.max) { result.max = arr[i]; // Update the maximum value if a larger element is found } if (arr[i] < result.min) { result.min = arr[i]; // Update the minimum value if a smaller element is found } } return result; // Return the pair of maximum and minimum values } int main() { int arr[] = {6, 4, 26, 14, 33, 64, 46}; int n = sizeof(arr) / sizeof(arr[0]); struct Pair result = maxMinNaive(arr, n); printf(“Maximum element is: %dn”, result.max); printf(“Minimum element is: %dn”, result.min); return 0; } Output Maximum element is: 64 Minimum element is: 4 #include <iostream> using namespace std; struct Pair { int max; int min; }; // Function to find maximum and minimum using the naive algorithm Pair maxMinNaive(int arr[], int n) { Pair result; result.max = arr[0]; result.min = arr[0]; // Loop through the array to find the maximum and minimum values for (int i = 1; i < n; i++) { if (arr[i] > result.max) { result.max = arr[i]; // Update the maximum value if a larger element is found } if (arr[i] < result.min) { result.min = arr[i]; // Update the minimum value if a smaller element is found } } return result; // Return the pair of maximum and minimum values } int main() { int arr[] = {6, 4, 26, 14, 33, 64, 46}; int n = sizeof(arr) / sizeof(arr[0]); Pair result = maxMinNaive(arr, n); cout << “Maximum element is: ” << result.max << endl; cout << “Minimum element is: ” << result.min << endl; return 0; } Output Maximum element is: 64 Minimum element is: 4 public class MaxMinNaive { static class Pair { int max; int min; } // Function to find maximum and minimum using the naive algorithm static Pair maxMinNaive(int[] arr) { Pair result = new Pair(); result.max = arr[0]; result.min = arr[0]; // Loop through the array to find the maximum and minimum values for (int i = 1; i < arr.length; i++) { if (arr[i] > result.max) { result.max = arr[i]; // Update the maximum value if a larger element is found } if (arr[i] < result.min) { result.min = arr[i]; // Update the minimum value if a smaller element is found } } return result; // Return the pair of maximum and minimum values } public static void main(String[] args) { int[] arr = 
{6, 4, 26, 14, 33, 64, 46}; Pair result = maxMinNaive(arr); System.out.println(“Maximum element is: ” + result.max); System.out.println(“Minimum element is: ” + result.min); } } Output Maximum element is: 64 Minimum element is: 4 def max_min_naive(arr): max_val = arr[0] min_val = arr[0] # Loop through the array to find the maximum and minimum values for i in range(1, len(arr)): if arr[i] > max_val: max_val = arr[i] # Update the maximum value if a larger element is found if arr[i] < min_val: min_val = arr[i] # Update the minimum value if a smaller element is found return max_val, min_val # Return the pair of maximum and minimum values arr = [6, 4, 26, 14, 33, 64, 46] max_val, min_val = max_min_naive(arr) print(“Maximum element is:”, max_val) print(“Minimum element is:”, min_val) Output Maximum element is: 64 Minimum element is: 4 Analysis The number of comparison in Naive method is 2n – 2. The number of comparisons can be reduced using the divide and conquer approach. Following is the technique. Divide and Conquer Approach In this approach, the array is divided into two halves. Then using recursive approach maximum and minimum numbers in each halves are found. Later, return the maximum of two maxima of each half and the minimum of two minima of each half. In this given problem, the number of elements in an array is $y – x + 1$, where y is greater than or equal to x. $mathbf{mathit{Max – Min(x, y)}}$ will return the maximum and minimum values of an array $mathbf{mathit{numbers[x…y]}}$. Algorithm: Max – Min(x, y) if y – x ≤ 1 then return (max(numbers[x],numbers[y]),min((numbers[x],numbers[y])) else (max1, min1):= maxmin(x, ⌊((x + y)/2)⌋) (max2, min2):= maxmin(⌊((x + y)/2) + 1)⌋,y) return (max(max1, max2), min(min1, min2)) Example Following are implementations of the above approach in various programming languages − C C++ Java Python #include <stdio.h> // Structure to store both maximum and minimum elements struct Pair { int max; int min; }; struct Pair maxMinDivideConquer(int arr[], int low, int high) { struct Pair result; struct Pair left; struct Pair right; int mid; // If only one element in the array if (low == high) { result.max = arr[low]; result.min = arr[low]; return result; } // If
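The following Python sketch illustrates the divide-and-conquer Max-Min procedure described above. It follows the Max-Min(x, y) pseudocode; the function name and driver code are illustrative, not part of the original listings.

def max_min(numbers, x, y):
    # Return (maximum, minimum) of numbers[x..y] using divide and conquer
    if y - x <= 1:
        # Base case: one or two elements
        return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))
    mid = (x + y) // 2
    # Conquer each half recursively
    max1, min1 = max_min(numbers, x, mid)
    max2, min2 = max_min(numbers, mid + 1, y)
    # Combine: two more comparisons merge the partial results
    return (max(max1, max2), min(min1, min2))

arr = [6, 4, 26, 14, 33, 64, 46]
maximum, minimum = max_min(arr, 0, len(arr) - 1)
print("Maximum element is:", maximum)   # 64
print("Minimum element is:", minimum)   # 4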

DSA – Doubly Linked List Data Structure

Doubly Linked List Data Structure What is Doubly Linked List? A doubly linked list is a variation of the linked list in which navigation is possible in both directions, forward as well as backward, unlike a singly linked list. The following terms are important to understand the concept of a doubly linked list. Link − Each link of a linked list can store a data item called an element. Next − Each link of a linked list contains a link to the next link called Next. Prev − Each link of a linked list contains a link to the previous link called Prev. Linked List − A linked list contains the connection link to the first link called First and to the last link called Last. Doubly Linked List Representation The following points are important to consider. A doubly linked list contains link elements called first and last. Each link carries a data field (or fields) and two link fields called next and prev. Each link is linked with its next link using its next link. Each link is linked with its previous link using its prev link. The last link carries a null next link to mark the end of the list. Basic Operations in Doubly Linked List Following are the basic operations supported by the list. Insertion − Adds an element at the beginning of the list. Insert Last − Adds an element at the end of the list. Insert After − Adds an element after an item of the list. Deletion − Deletes an element at the beginning of the list. Delete Last − Deletes an element from the end of the list. Delete − Deletes an element from the list using the key. Display forward − Displays the complete list in a forward manner. Display backward − Displays the complete list in a backward manner. Doubly Linked List – Insertion at the Beginning In this operation, we create a new node with three compartments: one containing the data, and the other two containing the addresses of its previous and next nodes in the list. This new node is inserted at the beginning of the list. Algorithm 1. START 2. Create a new node with three variables: prev, data, next. 3. Store the new data in the data variable. 4. If the list is empty, make the new node the head. 5. Otherwise, link the address of the existing first node to the next variable of the new node, and assign null to the prev variable. 6. Point the head to the new node. 7.
END Example Following are the implementations of this operation in various programming languages − C C++ Java Python #include <stdio.h> #include <string.h> #include <stdlib.h> #include <stdbool.h> struct node { int data; int key; struct node *next; struct node *prev; }; //this link always point to first Link struct node *head = NULL; //this link always point to last Link struct node *last = NULL; struct node *current = NULL; //is list empty bool isEmpty(){ return head == NULL; } //display the doubly linked list void printList(){ struct node *ptr = head; while(ptr != NULL) { printf(“(%d,%d) “,ptr->key,ptr->data); ptr = ptr->next; } } //insert link at the first location void insertFirst(int key, int data){ //create a link struct node *link = (struct node*) malloc(sizeof(struct node)); link->key = key; link->data = data; if(isEmpty()) { //make it the last link last = link; } else { //update first prev link head->prev = link; } //point it to old first link link->next = head; //point first to new first link head = link; } void main(){ insertFirst(1,10); insertFirst(2,20); insertFirst(3,30); insertFirst(4,1); insertFirst(5,40); insertFirst(6,56); printf(“nDoubly Linked List: “); printList(); } Output Doubly Linked List: (6,56) (5,40) (4,1) (3,30) (2,20) (1,10) #include <iostream> #include <cstring> #include <cstdlib> #include <cstdbool> struct node { int data; int key; struct node *next; struct node *prev; }; //this link always point to first Link struct node *head = NULL; //this link always point to last Link struct node *last = NULL; struct node *current = NULL; //is list empty bool isEmpty(){ return head == NULL; } //display the doubly linked list void printList(){ struct node *ptr = head; while(ptr != NULL) { printf(“(%d,%d) “,ptr->key,ptr->data); ptr = ptr->next; } } //insert link at the first location void insertFirst(int key, int data){ //create a link struct node *link = (struct node*) malloc(sizeof(struct node)); link->key = key; link->data = data; if(isEmpty()) { //make it the last link last = link; } else { //update first prev link head->prev = link; } //point it to old first link link->next = head; //point first to new first link head = link; } int main(){ insertFirst(1,10); insertFirst(2,20); insertFirst(3,30); insertFirst(4,1); insertFirst(5,40); insertFirst(6,56); printf(“Doubly Linked List: “); printList(); return 0; } Output Doubly Linked List: (6,56) (5,40) (4,1) (3,30) (2,20) (1,10) //Java code for doubly linked list import java.util.*; class Node { public int data; public int key; public Node next; public Node prev; public Node(int data, int key) { this.data = data; this.key = key; this.next = null; this.prev = null; } } public class Main { //this link always point to first Link static Node head = null; //this link always point to last Link static Node last = null; static Node current = null; // is list empty public static boolean is_empty() { return head == null; } //display the doubly linked list public static void print_list() { Node ptr = head; while (ptr != null) { System.out.println(“(” + ptr.key
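As a compact, self-contained illustration of the same insert-at-beginning operation, here is a Python sketch; the class and method names are illustrative, not from the original listings.

class Node:
    def __init__(self, key, data):
        self.key = key
        self.data = data
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.last = None

    def is_empty(self):
        return self.head is None

    def insert_first(self, key, data):
        # Create the new node
        node = Node(key, data)
        if self.is_empty():
            # The first node inserted is also the last node
            self.last = node
        else:
            # Link the old first node back to the new node
            self.head.prev = node
        # Point the new node to the old first node and move the head
        node.next = self.head
        self.head = node

    def print_list(self):
        ptr = self.head
        while ptr is not None:
            print(f"({ptr.key},{ptr.data})", end=" ")
            ptr = ptr.next
        print()

dll = DoublyLinkedList()
for key, data in [(1, 10), (2, 20), (3, 30), (4, 1), (5, 40), (6, 56)]:
    dll.insert_first(key, data)
print("Doubly Linked List:", end=" ")
dll.print_list()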


DSA – Fibonacci Search

Fibonacci Search Algorithm As the name suggests, the Fibonacci search algorithm uses Fibonacci numbers to search for an element in a sorted input array. But first, let us revise our knowledge of Fibonacci numbers − The Fibonacci series is a series of numbers that starts with the two seed values 0 and 1. Each successive number is the sum of the two preceding numbers in the series. The series is infinite, and its values are fixed. The first few numbers in the Fibonacci series are − 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89… The main idea behind Fibonacci search, as in other searching techniques, is to eliminate as many candidate positions as possible in each step. In that sense it acts like a divide-and-conquer algorithm, with logic closest to the binary search algorithm. Like jump search and exponential search, it also skips through the indices of the input array while searching. Fibonacci Search Algorithm The Fibonacci search algorithm makes use of the Fibonacci series to shrink the range of the array on which the search is performed. With every iteration the search range decreases, making it easier to locate the element. The detailed procedure is as follows − Step 1 − Find the smallest Fibonacci number that is greater than or equal to the size of the input array, and also keep the two preceding numbers; that is, we hold Fm, Fm-1 and Fm-2 from the Fibonacci series. Step 2 − Initialize the offset value as -1, since at the beginning the entire array is the search range. Step 3 − Until Fm-2 becomes 0, perform the following steps − Compare the key element with the element at index [min(offset + Fm-2, n-1)]. If a match is found, return the index. If the key element is smaller than this element, reduce the search range to the part before this element and move the Fibonacci numbers two steps down, i.e. Fm = Fm-2. If the key element is greater than this element, remove the elements up to and including this element from the search range, move the Fibonacci numbers one step down, i.e. Fm = Fm-1, and set the offset to the index of this element. Step 4 − Because there are two 1s in the Fibonacci series, a case arises where Fm-1 becomes 1 and only one element is left to be searched. We compare the key element with that element and return its index on a match; otherwise the algorithm reports an unsuccessful search. Pseudocode Begin Fibonacci Search n := size of the input array offset := -1 Fm2 := 0 Fm1 := 1 Fm := Fm2 + Fm1 while Fm < n do: Fm2 := Fm1 Fm1 := Fm Fm := Fm2 + Fm1 done while Fm > 1 do: i := minimum of (offset + Fm2, n - 1) if (A[i] < x) then: Fm := Fm1 Fm1 := Fm2 Fm2 := Fm - Fm1 offset := i else if (A[i] > x) then: Fm := Fm2 Fm1 := Fm1 - Fm2 Fm2 := Fm - Fm1 else return i end if done if (Fm1 = 1 and A[offset + 1] = x) then: return offset + 1 end if return invalid location End Analysis The Fibonacci search algorithm takes logarithmic time to search for an element. Since it is based on a divide-and-conquer approach similar in idea to binary search, its worst-case running time is O(log n).
Example Suppose we have a sorted array of elements {12, 14, 16, 17, 20, 24, 31, 43, 50, 62} and need to identify the location of element 24 in it using Fibonacci search. Step 1 The size of the input array is 10. The smallest Fibonacci number greater than 10 is 13. Therefore, Fm = 13, Fm-1 = 8, Fm-2 = 5. We initialize offset = -1. Step 2 In the first iteration, compare the key with the element at index = minimum (offset + Fm-2, n - 1) = minimum (-1 + 5, 9) = minimum (4, 9) = 4. The element at index 4 of the array is 20, which is not a match and is less than the key element. Step 3 In the second iteration, update the offset value and the Fibonacci numbers. Since the key is greater, the offset value becomes the index of that element, i.e. 4, and the Fibonacci numbers are updated as Fm = Fm-1 = 8, Fm-1 = 5, Fm-2 = 3. Now, compare the key with the element at index = minimum (offset + Fm-2, n - 1) = minimum (4 + 3, 9) = minimum (7, 9) = 7. The element at index 7 of the array is 43, which is not a match and is greater than the key. Step 4 We discard the elements after index 7, so n = 7 and the offset value remains 4. The Fibonacci numbers are pushed two steps backward, i.e. Fm = Fm-2 = 3, Fm-1 = 2, Fm-2 = 1. Now, compare the key with the element at index = minimum (offset + Fm-2, n - 1) = minimum (4 + 1, 6) = 5. The element at index 5 of the array is 24, which matches the key. Therefore, the element is found at index 5.
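A minimal Python sketch of the Fibonacci search procedure described above, assuming a sorted input list; the variable names follow the pseudocode and the function name is illustrative. Running it on the example array with key 24 returns index 5, matching the walk-through.

def fibonacci_search(arr, x):
    n = len(arr)
    # Find the smallest Fibonacci number Fm >= n, keeping Fm-1 and Fm-2
    fm2, fm1 = 0, 1
    fm = fm2 + fm1
    while fm < n:
        fm2, fm1 = fm1, fm
        fm = fm2 + fm1
    offset = -1
    while fm > 1:
        i = min(offset + fm2, n - 1)
        if arr[i] < x:
            # Key lies after index i: move the range start past i
            fm = fm1
            fm1 = fm2
            fm2 = fm - fm1
            offset = i
        elif arr[i] > x:
            # Key lies before index i: shrink the range from the right
            fm = fm2
            fm1 = fm1 - fm2
            fm2 = fm - fm1
        else:
            return i
    # At most one candidate element can remain
    if fm1 == 1 and offset + 1 < n and arr[offset + 1] == x:
        return offset + 1
    return -1

arr = [12, 14, 16, 17, 20, 24, 31, 43, 50, 62]
print(fibonacci_search(arr, 24))   # prints 5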

DSA – B+ Trees

B+ Trees B+ trees are extensions of B trees designed to make the insertion, deletion and searching operations more efficient. The properties of B+ trees are similar to those of B trees, except that B trees can store keys and records in all internal nodes and leaf nodes, while B+ trees store records only in leaf nodes and keys in internal nodes. One important property of the B+ tree is that all the leaf nodes are connected to each other in a singly linked list, and a data pointer is available to point to the data present in the disk file. This helps fetch the records in an equal number of disk accesses. Since the size of main memory is limited, B+ trees act as the data storage for the records that could not be stored in main memory. For this, the internal nodes are stored in main memory and the leaf nodes are stored in secondary storage. Properties of B+ trees Every node in a B+ tree of order m, except the root, holds a maximum of m children and (m - 1) keys, and a minimum of $\lceil \frac{m}{2} \rceil$ children and $\lceil \frac{m-1}{2} \rceil$ keys. The root node must have no fewer than two children and at least one search key. All the paths in a B+ tree must end at the same level, i.e. the leaf nodes must all be at the same level. A B+ tree always maintains sorted data. Basic Operations of B+ Trees The operations supported by B+ trees are insertion, deletion and searching, each with a time complexity of O(log n). They are almost identical to the B tree operations, as the basic idea of storing data is the same in both data structures. The difference arises because the data is stored only in the leaf nodes of a B+ tree, unlike B trees. Insertion operation The insertion into a B+ tree starts at a leaf node. Step 1 − Determine the maximum and minimum number of keys allowed in a B+ tree node. Step 2 − Insert the elements one by one into a leaf node until it exceeds the maximum number of keys. Step 3 − The node is split in half: the left child keeps the minimum number of keys and the remaining keys are stored in the right child. Step 4 − If the internal node also exceeds the maximum key property, it is likewise split in half, with the left child keeping the minimum keys and the remaining keys stored in the right child; here, the smallest key of the right child is moved up to become the parent. Step 5 − If both the leaf node and the internal node hold the maximum number of keys, both are split in a similar manner and the smallest key of the right child is added to the parent node.
Example Following are the implementations of this operation in various programming languages − C C++ Java Python // C program for Bplus tree #include <stdio.h> #include <stdlib.h> struct BplusTree { int *d; struct BplusTree **child_ptr; int l; int n; }; struct BplusTree *r = NULL, *np = NULL, *x = NULL; struct BplusTree* init() { //to create nodes int i; np = (struct BplusTree*)malloc(sizeof(struct BplusTree)); np->d = (int*)malloc(6 * sizeof(int)); // order 6 np->child_ptr = (struct BplusTree**)malloc(7 * sizeof(struct BplusTree*)); np->l = 1; np->n = 0; for (i = 0; i < 7; i++) { np->child_ptr[i] = NULL; } return np; } void traverse(struct BplusTree *p) { //traverse tree printf(“n”); int i; for (i = 0; i < p->n; i++) { if (p->l == 0) { traverse(p->child_ptr[i]); } printf(” %d”, p->d[i]); } if (p->l == 0) { traverse(p->child_ptr[i]); } printf(“n”); } void sort(int *p, int n) { int i, j, t; for (i = 0; i < n; i++) { for (j = i; j <= n; j++) { if (p[i] > p[j]) { t = p[i]; p[i] = p[j]; p[j] = t; } } } } int split_child(struct BplusTree *x, int i) { int j, mid; struct BplusTree *np1, *np3, *y; np3 = init(); np3->l = 1; if (i == -1) { mid = x->d[2]; x->d[2] = 0; x->n–; np1 = init(); np1->l = 0; x->l = 1; for (j = 3; j < 6; j++) { np3->d[j – 3] = x->d[j]; np3->child_ptr[j – 3] = x->child_ptr[j]; np3->n++; x->d[j] = 0; x->n–; } for (j = 0; j < 6; j++) { x->child_ptr[j] = NULL; } np1->d[0] = mid; np1->child_ptr[np1->n] = x; np1->child_ptr[np1->n + 1] = np3; np1->n++; r = np1; } else { y = x->child_ptr[i]; mid = y->d[2]; y->d[2] = 0; y->n–; for (j = 3; j < 6; j++) { np3->d[j – 3] = y->d[j]; np3->n++; y->d[j] = 0; y->n–; } x->child_ptr[i + 1] = y; x->child_ptr[i + 1] = np3; } return mid; } void insert(int a) { int i, t; x = r; if (x == NULL) { r = init(); x = r; } else { if (x->l == 1 && x->n == 6) { t = split_child(x, -1); x = r; for (i = 0; i < x->n; i++) { if (a > x->d[i] && a < x->d[i + 1]) { i++; break; } else if (a < x->d[0]) { break; } else { continue; } } x = x->child_ptr[i]; } else { while (x->l == 0) { for (i = 0; i < x->n; i++) { if (a > x->d[i] && a < x->d[i + 1]) { i++; break; } else if
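The listing above is a fairly involved tree implementation. As a much smaller illustration of just the leaf-splitting rule from steps 3-5, the following Python sketch splits an over-full sorted leaf and reports the key that would be copied up to the parent. The function and its conventions are illustrative assumptions, not a full B+ tree.

def split_leaf(keys):
    # keys is the over-full, sorted key list of a leaf after an insert
    split = len(keys) // 2
    left, right = keys[:split], keys[split:]
    # The smallest key of the right half is copied up to the parent;
    # in a B+ tree it also stays in the right leaf (unlike a B tree)
    return left, right, right[0]

# A leaf of order 4 holds at most 3 keys; inserting 25 into [10, 20, 30]
# makes it over-full, so it is split
left, right, up = split_leaf([10, 20, 25, 30])
print(left, right, "key copied to parent:", up)
# [10, 20] [25, 30] key copied to parent: 25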

DSA – Tower of Hanoi Using Recursion

Tower of Hanoi Using Recursion Tower of Hanoi Tower of Hanoi is a mathematical puzzle which consists of three towers (pegs) and more than one ring, as depicted below − These rings are of different sizes and stacked upon one another in ascending order, i.e. the smaller one sits over the larger one. There are other variations of the puzzle where the number of disks increases, but the tower count remains the same. Rules The mission is to move all the disks to another tower without violating the sequence of arrangement. A few rules to be followed for Tower of Hanoi are − Only one disk can be moved among the towers at any given time. Only the "top" disk can be removed. No large disk can sit over a small disk. Following is an animated representation of solving a Tower of Hanoi puzzle with three disks. A Tower of Hanoi puzzle with n disks can be solved in a minimum of 2^n - 1 steps. This presentation shows that a puzzle with 3 disks takes 2^3 - 1 = 7 steps. Algorithm To write an algorithm for Tower of Hanoi, first we need to learn how to solve this problem with a smaller number of disks, say 1 or 2. We mark the three towers with the names source, destination and aux (the aux tower is only used to help move the disks). If we have only one disk, it can easily be moved from the source to the destination peg. If we have 2 disks − First, we move the smaller (top) disk to the aux peg. Then, we move the larger (bottom) disk to the destination peg. And finally, we move the smaller disk from the aux peg to the destination peg. So now, we are in a position to design an algorithm for Tower of Hanoi with more than two disks. We divide the stack of disks into two parts. The largest disk (the nth disk) is in one part and all other (n - 1) disks are in the second part. Our ultimate aim is to move disk n from source to destination and then put all the other (n - 1) disks onto it. We can apply the same idea recursively for every given set of disks.
The steps to follow are − Step 1 − Move n-1 disks from source to aux Step 2 − Move nth disk from source to dest Step 3 − Move n-1 disks from aux to dest A recursive algorithm for Tower of Hanoi can be driven as follows − START Procedure Hanoi(disk, source, dest, aux) IF disk == 1, THEN move disk from source to dest ELSE Hanoi(disk – 1, source, aux, dest) // Step 1 move disk from source to dest // Step 2 Hanoi(disk – 1, aux, dest, source) // Step 3 END IF END Procedure STOP Example Following are the implementations of this approach in various programming languages − C C++ Java Python #include <stdio.h> void hanoi(int n, char from, char to, char via) { if(n == 1){ printf(“Move disk 1 from %c to %cn”, from, to); } else{ hanoi(n-1, from, via, to); printf(“Move disk %d from %c to %cn”, n, from, to); hanoi(n-1, via, to, from); } } int main() { int n = 3; char from = ”A”; char to = ”B”; char via = ”C”; //calling hanoi() method hanoi(n, from, via, to); } #include <iostream> using namespace std; void hanoi(int n, char from, char to, char via) { if(n == 1){ cout<<“Move disk 1 from “<<from<<” to “<<to<<endl; } else{ hanoi(n-1, from, via, to); cout<<“Move disk “<<n<<” from “<<from<<” to “<<to<<endl; hanoi(n-1, via, to, from); } } int main() { int n = 3; char from = ”A”; char to = ”B”; char via = ”C”; //calling hanoi() method hanoi(n, from , via, to); } import java.util.*; public class Demo { public static void hanoi(int n, String from, String to, String via) { if(n == 1){ System.out.println(“Move disk 1 from ” + from + ” to ” + to); } else{ hanoi(n-1, from, via, to); System.out.println(“Move disk ” + n + ” from ” + from + ” to ” + to); hanoi(n-1, via, to, from); } } public static void main(String[] args) { int n = 3; String from = “A”; String to = “B”; String via = “C”; //calling hanoi() metod hanoi(n, from, via, to); } } def hanoi(n, f, to, via): if n == 1: print(“Move disk 1 from”,f,”to”,to); else: hanoi(n-1, f, via, to) print(“Move disk”,n,”from”,f,”to”,to); hanoi(n-1, via, to, f) n = 3 f = ”A” to = ”B” via = ”C” hanoi(n, f, via, to) Output Move disk 1 from A to C Move disk 2 from A to B Move disk 1 from C to B Move disk 3 from A to C Move disk 1 from B to A Move disk 2 from B to C Move disk 1 from A to C Print Page Previous Next Advertisements ”;
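The 2^n - 1 bound stated above follows directly from the recursion: solving n disks means solving n - 1 disks twice plus one extra move. A short Python check of this relationship (illustrative only):

def hanoi_moves(n):
    # Moves made by the recursive procedure: move n-1 disks aside,
    # move the largest disk, then move the n-1 disks back on top
    if n == 1:
        return 1
    return 2 * hanoi_moves(n - 1) + 1

for n in range(1, 6):
    print(n, "disks need", hanoi_moves(n), "moves; 2^n - 1 =", 2**n - 1)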

DSA – Radix Sort Algorithm

Radix Sort Algorithm Radix sort is a step-wise sorting algorithm that starts the sorting from the least significant digit of the input elements. Like counting sort and bucket sort, radix sort also makes an assumption about the input elements: that they are all k-digit numbers. The sorting starts with the least significant digit of each element. These least significant digits are all treated as individual keys and sorted first, followed by the second least significant digits. This process continues until all the digits of the input elements are sorted. Note − If the elements do not have the same number of digits, find the maximum number of digits among the input elements and add leading zeroes to the elements having fewer digits. This does not change the values of the elements but still makes them k-digit numbers. Radix Sort Algorithm The radix sort algorithm makes use of the counting sort algorithm in every pass. The detailed steps are as follows − Step 1 − Check whether all the input elements have the same number of digits. If not, find the maximum number of digits in the list and add leading zeroes to the ones that have fewer. Step 2 − Take the least significant digit of each element. Step 3 − Sort these digits using counting sort logic and reorder the elements based on the output achieved. For example, if the input elements are decimal numbers, the possible values each digit can take are 0-9, so index the digits based on these values. Step 4 − Repeat steps 2 and 3 for the next least significant digit until all the digits in the elements are sorted. Step 5 − The list of elements obtained after the kth pass is the sorted output. Pseudocode Algorithm: RadixSort(a[], n): // Find the maximum element of the list max = a[0] for (i = 1 to n-1): if (a[i] > max): max = a[i] // apply counting sort for each digit position of the input list pos = 1 while (max / pos > 0): countSort(a, n, pos) pos = pos * 10 The countSort algorithm called would be − Algorithm: countSort(a, n, pos) Initialize count[0...9] with zeroes for i = 0 to n-1: count[(a[i]/pos) % 10]++ for i = 1 to 9: count[i] = count[i] + count[i-1] for i = n-1 down to 0: output[count[(a[i]/pos) % 10] - 1] = a[i] count[(a[i]/pos) % 10] = count[(a[i]/pos) % 10] - 1 for i = 0 to n-1: a[i] = output[i] Analysis Given that there are k digits in the input elements, the running time taken by the radix sort algorithm is Θ(k(n + b)). Here, n is the number of elements in the input list while b is the number of possible values each digit of a number can take (the base). Example For the given unsorted list of elements, 236, 143, 26, 42, 1, 99, 765, 482, 3, 56, we need to perform radix sort and obtain the sorted output list − Step 1 Check for the element with the maximum number of digits, which is 3. So we add leading zeroes to the numbers that do not have 3 digits. The list we obtain is − 236, 143, 026, 042, 001, 099, 765, 482, 003, 056 Step 2 Construct a table to store the values based on their indexing. Since the inputs given are decimal numbers, the indexing is done based on the possible values of these digits, i.e., 0-9. Step 3 Based on the least significant digit of all the numbers, place the numbers at their respective indices. The elements sorted after this step are 001, 042, 482, 143, 003, 765, 236, 026, 056, 099. Step 4 The input for this step is the output of the previous step. Now, we perform the sorting using the second least significant digit.
The order of the output achieved is 001, 003, 026, 236, 042, 143, 056, 765, 482, 099. Step 5 The input list after the previous step is rearranged as − 001, 003, 026, 236, 042, 143, 056, 765, 482, 099 Now, we need to sort the last digits of the input elements. Since there are no further digits in the input elements, the output achieved in this step is considered as the final output. The final sorted output is − 1, 3, 26, 42, 56, 99, 143, 236, 482, 765 Implementation The counting sort algorithm assists the radix sort to perform sorting on multiple d-digit numbers iteratively for ‘d’ loops. Radix sort is implemented in four programming languages in this tutorial: C, C++, Java, Python. C C++ Java Python #include <stdio.h> void countsort(int a[], int n, int pos){ int output[n + 1]; int max = (a[0] / pos) % 10; for (int i = 1; i < n; i++) { if (((a[i] / pos) % 10) > max) max = a[i]; } int count[max + 1]; for (int i = 0; i < max; ++i) count[i] = 0; for (int i = 0; i < n; i++) count[(a[i] / pos) % 10]++; for (int i = 1; i < 10; i++) count[i] += count[i – 1]; for (int i = n – 1; i >= 0; i–) { output[count[(a[i] / pos) % 10] – 1] = a[i]; count[(a[i] / pos) % 10]–; } for (int i = 0; i < n; i++) a[i] = output[i]; } void radixsort(int a[], int n){ int max = a[0]; for (int i = 1; i < n; i++) if (a[i] > max) max = a[i]; for (int pos = 1; max / pos > 0; pos *= 10) countsort(a, n,
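For reference, here is a compact Python sketch of the same approach, a stable counting sort applied digit by digit, following the pseudocode given earlier; the function names are illustrative.

def counting_sort_by_digit(a, pos):
    # Stable counting sort of list a on the decimal digit selected by pos
    n = len(a)
    output = [0] * n
    count = [0] * 10
    # Count occurrences of each digit value (0-9)
    for value in a:
        count[(value // pos) % 10] += 1
    # Prefix sums give the final position of each digit value
    for d in range(1, 10):
        count[d] += count[d - 1]
    # Build the output in reverse to keep the sort stable
    for value in reversed(a):
        digit = (value // pos) % 10
        count[digit] -= 1
        output[count[digit]] = value
    a[:] = output

def radix_sort(a):
    if not a:
        return
    largest = max(a)
    pos = 1
    # One counting-sort pass per digit, least significant first
    while largest // pos > 0:
        counting_sort_by_digit(a, pos)
        pos *= 10

data = [236, 143, 26, 42, 1, 99, 765, 482, 3, 56]
radix_sort(data)
print(data)   # [1, 3, 26, 42, 56, 99, 143, 236, 482, 765]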

DSA – Approximation Algorithms

Approximation Algorithms Approximation Algorithms Approximation algorithms are algorithms designed to find approximate solutions to optimization problems that are not known to be solvable exactly in polynomial time. Such problems are typically NP-complete. Many of them model important real-world problems, so it becomes important to handle them with a different approach. NP-complete problems can still be dealt with in three ways: the input may be small enough that an exponential-time algorithm finishes quickly; some important special cases of the problem may be solvable in polynomial time; or approximation algorithms can be used to find near-optimal solutions in polynomial time. This leads to the concept of the performance ratio of an approximation algorithm. Performance Ratios The main idea behind calculating the performance ratio of an approximation algorithm, also called the approximation ratio, is to measure how close the approximate solution is to the optimal solution. The approximation ratio is represented using ρ(n), where n is the input size of the algorithm, C is the cost of the near-optimal solution obtained by the algorithm, and C* is the cost of the optimal solution. The algorithm has an approximation ratio of ρ(n) if and only if − $$\max\left\{\frac{C}{C^{\ast}}, \frac{C^{\ast}}{C}\right\} \leq \rho(n)$$ The algorithm is then called a ρ(n)-approximation algorithm. Approximation algorithms can be applied to two types of optimization problems: minimization problems and maximization problems. If the optimal solution of the problem is the one with the maximum cost, the problem is known as a maximization problem; if the optimal solution is the one with the minimum cost, the problem is known as a minimization problem. For maximization problems, the approximation ratio is given by C*/C, since 0 < C ≤ C*. For minimization problems, the approximation ratio is given by C/C*, since 0 < C* ≤ C. Assuming that the costs of approximation algorithms are all positive, the performance ratio is well defined and is never less than 1. If the ratio equals 1, the approximation algorithm produces the exact optimal solution. Examples A few popular examples of approximation algorithms are − Vertex Cover Algorithm Set Cover Problem Travelling Salesman Problem (Approximation Approach) The Subset Sum Problem
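As a concrete illustration of a ρ(n)-approximation, the classic 2-approximation for Vertex Cover (the first example listed above) repeatedly picks an uncovered edge and adds both of its endpoints to the cover. A minimal Python sketch; the edge-list representation is an assumption chosen for illustration.

def approx_vertex_cover(edges):
    # 2-approximation: the returned cover is at most twice the size
    # of an optimal vertex cover
    cover = set()
    for u, v in edges:
        # Skip edges already covered by a previously chosen vertex
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(approx_vertex_cover(edges))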

DSA – Shell Sort Algorithm

Shell Sort Algorithm Shell sort is a highly efficient sorting algorithm based on the insertion sort algorithm. It avoids the large shifts that insertion sort performs when a small value is far to the right and has to be moved to the far left. The algorithm first applies insertion sort to elements that are widely spread apart, and then to elements that are less widely spaced. This spacing is termed the interval. The interval is calculated based on Knuth's formula as − h = h * 3 + 1 where − h is the interval, with an initial value of 1. The algorithm is quite efficient for medium-sized data sets; its running time depends on the chosen interval sequence and is considerably better than insertion sort's O(n^2) in practice (with Knuth's sequence the worst case is O(n^(3/2)), where n is the number of items). Shell Sort Algorithm Following is the algorithm for shell sort. 1. Initialize the value of h. 2. Divide the list into smaller sub-lists of equal interval h. 3. Sort these sub-lists using insertion sort. 4. Repeat until the complete list is sorted. Pseudocode Following is the pseudocode for shell sort. procedure shellSort() A : array of items /* calculate interval */ while interval < A.length / 3 do: interval = interval * 3 + 1 end while while interval > 0 do: for outer = interval; outer < A.length; outer++ do: /* select value to be inserted */ valueToInsert = A[outer] inner = outer /* shift elements towards right */ while inner > interval - 1 && A[inner - interval] >= valueToInsert do: A[inner] = A[inner - interval] inner = inner - interval end while /* insert the number at hole position */ A[inner] = valueToInsert end for /* calculate interval */ interval = (interval - 1) / 3 end while end procedure Example Let us consider the following example to get an idea of how shell sort works. We take the same array we have used in our previous examples, {35, 33, 42, 10, 14, 19, 27, 44}. For our example and ease of understanding, we take an interval of 4. Make a virtual sub-list of all values located at an interval of 4 positions. Here these values are {35, 14}, {33, 19}, {42, 27} and {10, 44}. We compare the values in each sub-list and swap them (if necessary) in the original array. Then, we take an interval of 2, and this gap generates two sub-lists: {14, 27, 35, 42} and {19, 10, 33, 44}. We compare and swap the values, if required, in the original array. Finally, we sort the rest of the array using an interval of value 1; shell sort uses insertion sort for this final pass. We see that it required only four swaps to sort the rest of the array. Implementation Following are the implementations of shell sort in various programming languages −
C C++ Java Python #include <stdio.h> void shellSort(int arr[], int n){ int gap, j, k; for(gap = n/2; gap > 0; gap = gap / 2) { //initially gap = n/2, decreasing by gap /2 for(j = gap; j<n; j++) { for(k = j-gap; k>=0; k -= gap) { if(arr[k+gap] >= arr[k]) break; else { int temp; temp = arr[k+gap]; arr[k+gap] = arr[k]; arr[k] = temp; } } } } } int main(){ int n; n = 5; int arr[5] = {33, 45, 62, 12, 98}; // initialize the array printf(“Array before Sorting: “); for(int i = 0; i<n; i++) printf(“%d “,arr[i]); printf(“n”); shellSort(arr, n); printf(“Array after Sorting: “); for(int i = 0; i<n; i++) printf(“%d “, arr[i]); printf(“n”); } Output Array before Sorting: 33 45 62 12 98 Array after Sorting: 12 33 45 62 98 #include<iostream> using namespace std; void shellSort(int *arr, int n){ int gap, j, k; for(gap = n/2; gap > 0; gap = gap / 2) { //initially gap = n/2, decreasing by gap /2 for(j = gap; j<n; j++) { for(k = j-gap; k>=0; k -= gap) { if(arr[k+gap] >= arr[k]) break; else { int temp; temp = arr[k+gap]; arr[k+gap] = arr[k]; arr[k] = temp; } } } } } int main(){ int n; n = 5; int arr[5] = {33, 45, 62, 12, 98}; // initialize the array cout << “Array before Sorting: “; for(int i = 0; i<n; i++) cout << arr[i] << ” “; cout << endl; shellSort(arr, n); cout << “Array after Sorting: “; for(int i = 0; i<n; i++) cout << arr[i] << ” “; cout << endl; } Output Array before Sorting: 33 45 62 12 98 Array after Sorting: 12 33 45 62 98 import java.io.*; import java.util.*; public class ShellSort { public static void main(String args[]) { int n = 5; int[] arr = {33, 45, 62, 12, 98}; //initialize an array System.out.print(“Array before Sorting: “); for(int i = 0; i<n; i++) System.out.print(arr[i] + ” “); System.out.println(); int gap; for(gap = n/2; gap > 0; gap = gap / 2) { //initially gap = n/2, decreasing by gap /2 for(int j = gap; j<n; j++) { for(int k = j-gap; k>=0; k -= gap) { if(arr[k+gap] >= arr[k]) break; else { int temp; temp = arr[k+gap]; arr[k+gap] = arr[k]; arr[k] = temp; } } } } System.out.print(“Array After Sorting: “); for(int i = 0; i<n; i++)
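A Python version of the same logic, for completeness; this is a minimal sketch following the gap = n/2, n/4, ..., 1 scheme used in the listings above.

def shell_sort(arr):
    n = len(arr)
    gap = n // 2
    # Keep halving the gap until it reaches 0
    while gap > 0:
        # Gapped insertion sort: each slice arr[k], arr[k+gap], ... is sorted
        for j in range(gap, n):
            value = arr[j]
            k = j
            # Shift earlier gap-spaced elements that are larger than value
            while k >= gap and arr[k - gap] > value:
                arr[k] = arr[k - gap]
                k -= gap
            arr[k] = value
        gap //= 2
    return arr

data = [33, 45, 62, 12, 98]
print("Array before Sorting:", data)
shell_sort(data)
print("Array after Sorting:", data)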

DSA – Quick Sort Algorithm

Quick Sort Algorithm Quick sort is a highly efficient sorting algorithm and is based on partitioning an array of data into smaller arrays. A large array is partitioned into two arrays, one of which holds values smaller than the specified value, say pivot, based on which the partition is made, and the other holds values greater than the pivot value. Quicksort partitions an array and then calls itself recursively twice to sort the two resulting subarrays. This algorithm is quite efficient for large-sized data sets, as its average-case complexity is O(n log n) and its worst-case complexity is O(n^2). Partition in Quick Sort The pivot value divides the list into two parts. And recursively, we find the pivot for each sub-list until all lists contain only one element. Quick Sort Pivot Algorithm Based on our understanding of partitioning in quick sort, we will now try to write an algorithm for it, which is as follows. 1. Choose the highest index value as pivot. 2. Take two variables to point left and right of the list, excluding the pivot. 3. Left points to the low index. 4. Right points to the high index. 5. While the value at left is less than the pivot, move right. 6. While the value at right is greater than the pivot, move left. 7. If neither step 5 nor step 6 matches, swap the left and right values. 8. If left ≥ right, the point where they met is the new pivot position. Quick Sort Pivot Pseudocode The pseudocode for the above algorithm can be derived as − function partitionFunc(left, right, pivot) leftPointer = left rightPointer = right - 1 while True do while A[++leftPointer] < pivot do //do-nothing end while while rightPointer > 0 && A[--rightPointer] > pivot do //do-nothing end while if leftPointer >= rightPointer break else swap leftPointer,rightPointer end if end while swap leftPointer,right return leftPointer end function Quick Sort Algorithm Using the pivot algorithm recursively, we end up with smaller possible partitions. Each partition is then processed for quick sort. We define the recursive algorithm for quicksort as follows − 1. Make the right-most index value the pivot. 2. Partition the array using the pivot value. 3. Quicksort the left partition recursively. 4. Quicksort the right partition recursively. Quick Sort Pseudocode To get more into it, let us see the pseudocode for the quick sort algorithm − procedure quickSort(left, right) if right - left <= 0 return else pivot = A[right] partition = partitionFunc(left, right, pivot) quickSort(left, partition - 1) quickSort(partition + 1, right) end if end procedure Analysis The worst-case complexity of the quick sort algorithm is O(n^2). However, in the average case this technique generally produces the output in O(n log n) time.
Implementation Following are the implementations of Quick Sort algorithm in various programming languages − C C++ Java Python #include <stdio.h> #include <stdbool.h> #define MAX 7 int intArray[MAX] = { 4,6,3,2,1,9,7 }; void printline(int count) { int i; for (i = 0; i < count – 1; i++) { printf(“=”); } printf(“=n”); } void display() { int i; printf(“[“); // navigate through all items for (i = 0; i < MAX; i++) { printf(“%d “, intArray[i]); } printf(“]n”); } void swap(int num1, int num2) { int temp = intArray[num1]; intArray[num1] = intArray[num2]; intArray[num2] = temp; } int partition(int left, int right, int pivot) { int leftPointer = left – 1; int rightPointer = right; while (true) { while (intArray[++leftPointer] < pivot) { //do nothing } while (rightPointer > 0 && intArray[–rightPointer] > pivot) { //do nothing } if (leftPointer >= rightPointer) { break; } else { printf(” item swapped :%d,%dn”, intArray[leftPointer], intArray[rightPointer]); swap(leftPointer, rightPointer); } } printf(” pivot swapped :%d,%dn”, intArray[leftPointer], intArray[right]); swap(leftPointer, right); printf(“Updated Array: “); display(); return leftPointer; } void quickSort(int left, int right) { if (right – left <= 0) { return; } else { int pivot = intArray[right]; int partitionPoint = partition(left, right, pivot); quickSort(left, partitionPoint – 1); quickSort(partitionPoint + 1, right); } } int main() { printf(“Input Array: “); display(); printline(50); quickSort(0, MAX – 1); printf(“Output Array: “); display(); printline(50); } Output Input Array: [4 6 3 2 1 9 7 ] ================================================== pivot swapped :9,7 Updated Array: [4 6 3 2 1 7 9 ] pivot swapped :4,1 Updated Array: [1 6 3 2 4 7 9 ] item swapped :6,2 pivot swapped :6,4 Updated Array: [1 2 3 4 6 7 9 ] pivot swapped :3,3 Updated Array: [1 2 3 4 6 7 9 ] Output Array: [1 2 3 4 6 7 9 ] ================================================== #include <iostream> using namespace std; #define MAX 7 int intArray[MAX] = {4,6,3,2,1,9,7}; void display() { int i; cout << “[“; // navigate through all items for(i = 0;i < MAX;i++) { cout << intArray[i] << ” “; } cout << “]n”; } void swap(int num1, int num2) { int temp = intArray[num1]; intArray[num1] = intArray[num2]; intArray[num2] = temp; } int partition(int left, int right, int pivot) { int leftPointer = left -1; int rightPointer = right; while(true) { while(intArray[++leftPointer] < pivot) { //do nothing } while(rightPointer > 0 && intArray[–rightPointer] > pivot) { //do nothing } if(leftPointer >= rightPointer) { break; } else { cout << “item swapped : ” << intArray[leftPointer] << “,” << intArray[rightPointer] << endl; swap(leftPointer, rightPointer); } } cout << “npivot swapped : ” << intArray[leftPointer] << “,” << intArray[right] << endl; swap(leftPointer,right); cout << “Updated Array: “; display(); return leftPointer; }
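For completeness, here is a minimal Python sketch of quick sort with the right-most element as the pivot. It uses the simpler Lomuto partitioning scheme rather than the two-pointer scan shown in the listings above, so the intermediate swaps differ, but the overall recursive structure matches the pseudocode.

def partition(arr, left, right):
    # Lomuto partition: pivot is arr[right]; returns its final index
    pivot = arr[right]
    i = left - 1
    for j in range(left, right):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Place the pivot between the smaller and larger elements
    arr[i + 1], arr[right] = arr[right], arr[i + 1]
    return i + 1

def quick_sort(arr, left, right):
    if left < right:
        p = partition(arr, left, right)
        quick_sort(arr, left, p - 1)    # sort elements before the pivot
        quick_sort(arr, p + 1, right)   # sort elements after the pivot

data = [4, 6, 3, 2, 1, 9, 7]
quick_sort(data, 0, len(data) - 1)
print(data)   # [1, 2, 3, 4, 6, 7, 9]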