DAA – Max Cliques

In an undirected graph, a clique is a complete sub-graph of the given graph, that is, a sub-graph in which every vertex is connected to every other vertex of the sub-graph. The Max-Clique problem is the computational problem of finding a maximum clique of the graph. Max clique is used in many real-world problems. Consider a social networking application in which vertices represent people's profiles and edges represent mutual acquaintance. In this graph, a clique represents a subset of people who all know each other. To find a maximum clique, one can systematically inspect all subsets, but this sort of brute-force search is too time-consuming for networks comprising more than a few dozen vertices.

Max-Clique Algorithm

The algorithm to find the maximum clique of a graph is relatively simple. The steps of the procedure are given below −

Step 1: Take a graph as an input to the algorithm, with a non-empty set of vertices and edges.

Step 2: Create an output set and add vertices to it as long as they form a clique of the graph.

Step 3: Repeat Step 2 iteratively until all the vertices of the graph are checked and the set cannot be extended to a larger clique.

Step 4: Backtrack over the output sets to check which clique has the maximum number of vertices.

Pseudocode

The following non-deterministic pseudocode guesses a set S of k vertices and then verifies that S forms a clique.

Algorithm: Max-Clique (G, n, k)
S := Φ
for i = 1 to k do
   t := choice(1…n)
   if t ∈ S then
      return failure
   S := S ∪ {t}
for all pairs (i, j) such that i ∈ S and j ∈ S and i ≠ j do
   if (i, j) is not an edge of the graph then
      return failure
return success

Analysis

Max-Clique is formulated here as a non-deterministic algorithm: first we guess a set of k distinct vertices, and then we test whether these vertices form a complete sub-graph. There is no known polynomial time deterministic algorithm to solve this problem. The problem is NP-Complete.

Example

Take a look at the following graph. Here, the sub-graph containing vertices 2, 3, 4 and 6 forms a complete graph. Hence, this sub-graph is a clique. As this is the maximum complete sub-graph of the provided graph, it is a 4-Clique.
Implementation

Following are the implementations of the above approach −

C

#include <stdio.h>
#define MAX 100

int store[MAX], n;
int graph[MAX][MAX];
int d[MAX];

int max(int a, int b) {
   if (a > b) {
      return a;
   } else {
      return b;
   }
}

// Check whether the vertices stored in store[1..b-1] form a clique
int is_clique(int b) {
   for (int i = 1; i < b; i++) {
      for (int j = i + 1; j < b; j++) {
         if (graph[store[i]][store[j]] == 0) {
            return 0;
         }
      }
   }
   return 1;
}

// Try to extend the current clique of size l-1 with vertices greater than i
int maxCliques(int i, int l) {
   int max_ = 0;
   for (int j = i + 1; j <= n; j++) {
      store[l] = j;
      if (is_clique(l + 1)) {
         max_ = max(max_, l);
         max_ = max(max_, maxCliques(j, l + 1));
      }
   }
   return max_;
}

int main() {
   int edges[][2] = { { 1, 4 }, { 4, 6 }, { 1, 6 }, { 3, 3 }, { 4, 2 }, { 8, 12 } };
   int size = sizeof(edges) / sizeof(edges[0]);
   n = 6;
   for (int i = 0; i < size; i++) {
      graph[edges[i][0]][edges[i][1]] = 1;
      graph[edges[i][1]][edges[i][0]] = 1;
      d[edges[i][0]]++;
      d[edges[i][1]]++;
   }
   printf("Max clique: %d\n", maxCliques(0, 1));
   return 0;
}

Output

Max clique: 3

C++

#include <iostream>
#include <algorithm>
using namespace std;

const int MAX = 100;
// Storing the vertices
int store[MAX], n;
// Graph
int graph[MAX][MAX];
// Degree of the vertices
int d[MAX];

// Function to check if the given set of vertices in store array is a clique or not
bool is_clique(int b) {
   // Run a loop over all pairs of chosen vertices
   for (int i = 1; i < b; i++) {
      for (int j = i + 1; j < b; j++)
         // If any edge is missing
         if (graph[store[i]][store[j]] == 0)
            return false;
   }
   return true;
}

// Function to find the size of the maximum clique
int maxCliques(int i, int l) {
   // Maximal clique size
   int max_ = 0;
   // Check if any vertex from i+1 onwards can be inserted
   for (int j = i + 1; j <= n; j++) {
      // Add the vertex to store
      store[l] = j;
      // If the current set is not a clique of size l,
      // it cannot become a clique by adding another vertex
      if (is_clique(l + 1)) {
         // Update max
         max_ = max(max_, l);
         // Check if another vertex can be added
         max_ = max(max_, maxCliques(j, l + 1));
      }
   }
   return max_;
}

// Driver code
int main() {
   int edges[][2] = { { 1, 4 }, { 4, 6 }, { 1, 6 }, { 3, 3 }, { 4, 2 }, { 8, 12 } };
   int size = sizeof(edges) / sizeof(edges[0]);
   n = 6;
   for (int i = 0; i < size; i++) {
      graph[edges[i][0]][edges[i][1]] = 1;
      graph[edges[i][1]][edges[i][0]] = 1;
      d[edges[i][0]]++;
      d[edges[i][1]]++;
   }
   cout << "Max clique: " << maxCliques(0, 1);
   return 0;
}

Output

Max clique: 3


DAA – Discussion

An algorithm is a sequence of steps to solve a problem. The design and analysis of algorithms is central to computer science and information technology, since it provides systematic methods for constructing algorithms for different types of problems. This tutorial introduces the fundamental concepts of design strategies and complexity analysis of algorithms, followed by problems on graph theory and sorting methods, and it also covers the basic concepts of complexity theory.

DAA – Quick Guide

Introduction to Algorithms

An algorithm is a set of steps of operations to solve a problem by performing calculation, data processing, and automated reasoning tasks. An algorithm is a method that can be expressed within a finite amount of time and space. It is a way to represent the solution of a particular problem in a simple and efficient manner. If we have an algorithm for a specific problem, we can implement it in any programming language, meaning that the algorithm is independent of any particular programming language.

Algorithm Design

The important aspects of algorithm design include creating an efficient algorithm that solves a problem using minimum time and space. To solve a problem, different approaches can be followed. Some of them can be efficient with respect to time consumption, whereas other approaches may be memory efficient. However, time consumption and memory usage often cannot be optimized simultaneously: if we require an algorithm to run in less time, we usually have to invest more memory, and if we require an algorithm to run with less memory, it usually needs more time.

Problem Development Steps

The following steps are involved in solving computational problems.

Problem definition
Development of a model
Specification of an algorithm
Designing an algorithm
Checking the correctness of an algorithm
Analysis of an algorithm
Implementation of an algorithm
Program testing
Documentation

How to Write an Algorithm?

There are no well-defined standards for writing algorithms. Rather, it is problem and resource dependent. Algorithms are never written to support a particular programming language. All programming languages share basic code constructs like loops (do, for, while) and flow-control (if-else), and these common constructs can be used to write an algorithm. We usually write algorithms in a step-by-step manner, but that is not always the case. Algorithm writing is a process that is carried out after the problem domain is well-defined; that is, we should know the problem domain for which we are designing a solution.

Example

Let us try to learn algorithm-writing by using an example.

Problem − Design an algorithm to add two numbers and display the result.

Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP

Algorithms tell the programmers how to code the program. Alternatively, the algorithm can be written as −

Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP

In design and analysis of algorithms, usually the second method is used to describe an algorithm. It makes it easy for the analyst to analyze the algorithm while ignoring all unwanted definitions, and to observe which operations are being used and how the process flows.

Characteristics of Algorithms

Not all procedures can be called an algorithm. An algorithm should have the following characteristics −

Unambiguous − The algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
Input − An algorithm should have 0 or more well-defined inputs.
Output − An algorithm should have 1 or more well-defined outputs, and should match the desired output.
Finiteness − Algorithms must terminate after a finite number of steps.
Feasibility − The algorithm should be feasible with the available resources.
Independent − An algorithm should have step-by-step directions, which should be independent of any programming code.

Pseudocode

Pseudocode gives a high-level description of an algorithm without the ambiguity associated with plain text, but also without the need to know the syntax of a particular programming language. The running time can be estimated in a general manner by using pseudocode to represent the algorithm as a set of fundamental operations which can then be counted.

Difference between Algorithm and Pseudocode

An algorithm is a formal definition with some specific characteristics that describes a process which could be executed by a Turing-complete machine to perform a specific task. Generally, the word "algorithm" can be used to describe any high-level task in computer science. On the other hand, pseudocode is an informal and often rudimentary human-readable description of an algorithm that leaves out many of its granular details. Writing pseudocode has no restriction of style; its only objective is to describe the high-level steps of the algorithm in natural language, in a form that is close to how it would actually be implemented.

For example, the following is an algorithm for Insertion Sort.

Algorithm: Insertion-Sort
Input: A list L of integers of length n
Output: A sorted list L1 containing those integers present in L

Step 1: Keep a sorted list L1 which starts off empty
Step 2: Perform Step 3 for each element in the original list L
Step 3: Insert it into the correct position in the sorted list L1
Step 4: Return the sorted list
Step 5: Stop

Here is a pseudocode which describes how the high-level abstract process mentioned above in the algorithm Insertion-Sort could be described in a more concrete way (assuming a 0-indexed array A).

for i <- 1 to length(A) - 1
   x <- A[i]
   j <- i
   while j > 0 and A[j-1] > x
      A[j] <- A[j-1]
      j <- j - 1
   A[j] <- x

In this tutorial, algorithms will be presented in the form of pseudocode that is similar in many respects to C, C++, Java, Python, and other programming languages.

Example

C

#include <stdio.h>

void insertionSort(int arr[], int n) {
   int i, j, key;
   for (i = 1; i < n; i++) {
      key = arr[i];
      j = i - 1;
      // Move elements of arr[0..i-1] that are greater than key
      // to one position ahead of their current position
      while (j >= 0 && arr[j] > key) {
         arr[j + 1] = arr[j];
         j = j - 1;
      }
      arr[j + 1] = key;
   }
}
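To try the function out, a minimal driver such as the following can be used; the sample array and its contents here are chosen purely for illustration.

int main() {
   int data[] = { 12, 11, 13, 5, 6 };      // sample input, chosen for illustration
   int n = sizeof(data) / sizeof(data[0]);

   insertionSort(data, n);

   // Print the sorted array: expected output is 5 6 11 12 13
   for (int i = 0; i < n; i++)
      printf("%d ", data[i]);
   printf("\n");
   return 0;
}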

DAA – Binary Heap

There are several types of heaps; however, in this chapter we are going to discuss the binary heap. A binary heap is a data structure shaped like a complete binary tree, and it obeys the ordering properties discussed below.

Generally, a heap is represented by an array. In this chapter, we represent a heap by H. As the elements of a heap are stored in an array, considering the starting index as 1, the position of the parent node of the ith element can be found at ⌊ i/2 ⌋. The left child and right child of the ith node are at positions 2i and 2i + 1, respectively.

A binary heap can be classified further as either a max-heap or a min-heap based on the ordering property.

Max-Heap

In this heap, the key value of a node is greater than or equal to the key values of its children. Hence, H[Parent(i)] ≥ H[i].

Min-Heap

In a min-heap, the key value of a node is less than or equal to the key values of its children. Hence, H[Parent(i)] ≤ H[i].

In this context, basic operations are shown below with respect to the max-heap. Insertion and deletion of elements in and from heaps need rearrangement of elements. Hence, the Heapify function needs to be called.

Array Representation

A complete binary tree can be represented by an array, storing its elements using level order traversal. Let us consider a heap (as shown below) which will be represented by an array H. Considering the starting index as 1, using level order traversal, the elements are kept in the array as follows.

Index      1    2    3    4    5    6    7    8    9   …
Elements   70   30   50   12   20   35   25   4    8   …

In this context, operations on the heap are represented with respect to the max-heap. To find the index of the parent of an element at index i, the following algorithm Parent (numbers[], i) is used.

Algorithm: Parent (numbers[], i)
if i == 1
   return NULL
else
   return ⌊ i / 2 ⌋

The index of the left child of an element at index i can be found using the following algorithm, Left-Child (numbers[], i).

Algorithm: Left-Child (numbers[], i)
if 2 * i ≤ heapsize
   return 2 * i
else
   return NULL

The index of the right child of an element at index i can be found using the following algorithm, Right-Child (numbers[], i).

Algorithm: Right-Child (numbers[], i)
if 2 * i + 1 ≤ heapsize
   return 2 * i + 1
else
   return NULL
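The index arithmetic above can be checked with a small C sketch. The sentinel NO_NODE, the helper names, and the 1-based array layout (index 0 left unused) are assumptions made for this illustration; the array holds the example heap from the table.

#include <stdio.h>

#define NO_NODE -1

int heapsize = 9;
// Index 0 is unused so that the 1-based formulas apply directly
int H[] = { 0, 70, 30, 50, 12, 20, 35, 25, 4, 8 };

int parent(int i)     { return (i == 1) ? NO_NODE : i / 2; }
int leftChild(int i)  { return (2 * i <= heapsize) ? 2 * i : NO_NODE; }
int rightChild(int i) { return (2 * i + 1 <= heapsize) ? 2 * i + 1 : NO_NODE; }

int main() {
   // For node 4 (value 12): its parent is node 2 (value 30),
   // and its children are nodes 8 (value 4) and 9 (value 8)
   printf("parent(4) = %d, left(4) = %d, right(4) = %d\n",
          parent(4), leftChild(4), rightChild(4));
   return 0;
}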

DAA – Heapify Method

The Heapify method rearranges the elements of an array so that the sub-tree rooted at the ith element obeys the heap property, assuming that the left and right sub-trees of the ith element already obey it.

Heapify Pseudocode

Max-Heapify(numbers[], i)
   leftchild := 2i
   rightchild := 2i + 1
   if leftchild ≤ numbers[].size and numbers[leftchild] > numbers[i]
      largest := leftchild
   else
      largest := i
   if rightchild ≤ numbers[].size and numbers[rightchild] > numbers[largest]
      largest := rightchild
   if largest ≠ i
      swap numbers[i] with numbers[largest]
      Max-Heapify(numbers, largest)

When the provided array does not obey the heap property, a heap is built using the following algorithm Build-Max-Heap (numbers[]).

Algorithm: Build-Max-Heap(numbers[])
   numbers[].size := numbers[].length
   for i = ⌊ numbers[].length/2 ⌋ to 1 by -1
      Max-Heapify (numbers[], i)

Example

Following are the implementations of this operation in various programming languages −

C

#include <stdio.h>

void swap(int arr[], int i, int j) {
   int temp = arr[i];
   arr[i] = arr[j];
   arr[j] = temp;
}

void maxHeapify(int arr[], int size, int i) {
   int leftChild = 2 * i + 1;
   int rightChild = 2 * i + 2;
   int largest = i;
   if (leftChild < size && arr[leftChild] > arr[largest])
      largest = leftChild;
   if (rightChild < size && arr[rightChild] > arr[largest])
      largest = rightChild;
   if (largest != i) {
      swap(arr, i, largest);
      maxHeapify(arr, size, largest); // Recursive call to continue heapifying
   }
}

void buildMaxHeap(int arr[], int size) {
   for (int i = size / 2 - 1; i >= 0; i--)
      maxHeapify(arr, size, i); // Start heapifying from the parent nodes in bottom-up order
}

int main() {
   int arr[] = { 3, 10, 4, 5, 1 }; // Input array
   int size = sizeof(arr) / sizeof(arr[0]);
   buildMaxHeap(arr, size); // Build the Max-Heap from the given array
   printf("Max Heap: ");
   for (int i = 0; i < size; i++)
      printf("%d ", arr[i]); // Print the updated Max-Heap
   printf("\n");
   return 0;
}

Output

Max Heap: 10 5 4 3 1

C++

#include <iostream>
#include <vector>
using namespace std;

void swap(vector<int>& arr, int i, int j) {
   int temp = arr[i];
   arr[i] = arr[j];
   arr[j] = temp;
}

void maxHeapify(vector<int>& arr, int size, int i) {
   int leftChild = 2 * i + 1;
   int rightChild = 2 * i + 2;
   int largest = i;
   if (leftChild < size && arr[leftChild] > arr[largest])
      largest = leftChild;
   if (rightChild < size && arr[rightChild] > arr[largest])
      largest = rightChild;
   if (largest != i) {
      swap(arr, i, largest);
      maxHeapify(arr, size, largest); // Recursive call to continue heapifying
   }
}

void buildMaxHeap(vector<int>& arr, int size) {
   for (int i = size / 2 - 1; i >= 0; i--)
      maxHeapify(arr, size, i); // Start heapifying from the parent nodes in bottom-up order
}

int main() {
   vector<int> arr = { 3, 10, 4, 5, 1 }; // Input array
   int size = arr.size();
   buildMaxHeap(arr, size); // Build the Max-Heap from the given array
   cout << "Max Heap: ";
   for (int i = 0; i < size; i++)
      cout << arr[i] << " "; // Print the updated Max-Heap
   cout << endl;
   return 0;
}

Output

Max Heap: 10 5 4 3 1

Java

public class MaxHeap {
   public static void swap(int arr[], int i, int j) {
      int temp = arr[i];
      arr[i] = arr[j];
      arr[j] = temp;
   }

   public static void maxHeapify(int arr[], int size, int i) {
      int leftChild = 2 * i + 1;
      int rightChild = 2 * i + 2;
      int largest = i;
      if (leftChild < size && arr[leftChild] > arr[largest])
         largest = leftChild;
      if (rightChild < size && arr[rightChild] > arr[largest])
         largest = rightChild;
      if (largest != i) {
         swap(arr, i, largest);
         maxHeapify(arr, size, largest); // Recursive call to continue heapifying
      }
   }

   public static void buildMaxHeap(int arr[]) {
      int size = arr.length;
      for (int i = size / 2 - 1; i >= 0; i--)
         maxHeapify(arr, size, i); // Start heapifying from the parent nodes in bottom-up order
   }

   public static void main(String args[]) {
      int arr[] = { 3, 10, 4, 5, 1 }; // Input array
      buildMaxHeap(arr); // Build the Max-Heap from the given array
      System.out.print("Max Heap: ");
      for (int i = 0; i < arr.length; i++)
         System.out.print(arr[i] + " "); // Print the updated Max-Heap
      System.out.println();
   }
}

Output

Max Heap: 10 5 4 3 1

Python

def swap(arr, i, j):
    arr[i], arr[j] = arr[j], arr[i]

def max_heapify(arr, size, i):
    left_child = 2 * i + 1
    right_child = 2 * i + 2
    largest = i
    if left_child < size and arr[left_child] > arr[largest]:
        largest = left_child
    if right_child < size and arr[right_child] > arr[largest]:
        largest = right_child
    if largest != i:
        swap(arr, i, largest)
        max_heapify(arr, size, largest)  # Recursive call to continue heapifying

def build_max_heap(arr):
    size = len(arr)
    for i in range(size // 2 - 1, -1, -1):
        max_heapify(arr, size, i)  # Start heapifying from the parent nodes in bottom-up order

arr = [3, 10, 4, 5, 1]  # Input array
build_max_heap(arr)  # Build the Max-Heap from the given array
print("Max Heap:", arr)  # Print the updated Max-Heap

Output

Max Heap: [10, 5, 4, 3, 1]

DAA – Jump Search

The Jump Search algorithm is a slightly modified version of the linear search algorithm. The main idea behind this algorithm is to reduce the time complexity by performing fewer comparisons than the linear search algorithm. The input array must be sorted, and it is divided into blocks so that the search can jump through these blocks instead of examining every element. For example, in the example below, the sorted input array is searched in blocks of 3 elements each; the desired key is found after only 2 comparisons rather than the 6 comparisons of the linear search.

Here, there arises a question about how to divide these blocks. If the input array is of size 'n', the blocks are divided in intervals of √n. The first element of every block is compared with the key element until a block element greater than the key is found. Linear search is then performed only on the previous block, since the input is sorted. If the element is found, it is a successful search; otherwise, an unsuccessful search is returned. The jump search algorithm is discussed in detail further into this chapter.

Jump Search Algorithm

The jump search algorithm takes a sorted array as an input which is divided into smaller blocks to make the search simpler. The algorithm is as follows −

Step 1 − If the size of the input array is 'n', then the size of the block is √n. Set i = 0.

Step 2 − The key to be searched is compared with the ith element of the array. If it is a match, the position of the element is returned; otherwise i is incremented by the block size.

Step 3 − Step 2 is repeated until the ith element is greater than the key element.

Step 4 − Now, the element is known to be in the previous block, since the input array is sorted. Therefore, linear search is applied on that block to find the element.

Step 5 − If the element is found, its position is returned. If the element is not found, an unsuccessful search is reported.

Pseudocode

Begin
   blockSize := √size
   start := 0
   end := blockSize
   while end < size AND array[end] <= key do
      start := end
      end := end + blockSize
      if end > size - 1 then
         end := size
   done
   for i := start to end - 1 do
      if array[i] = key then
         return i
   done
   return invalid location
End

Analysis

The time complexity of the jump search technique is O(√n) and its space complexity is O(1); a short calculation justifying the choice of block size √n is given at the end of this chapter.

Example

Let us understand the jump search algorithm by searching for element 66 in the given sorted array, A, below −

Step 1 − Initialize i = 0, and the size of the input array 'n' = 12. Suppose the block size is represented as 'm'. Then, m = ⌊√n⌋ = ⌊√12⌋ = 3.

Step 2 − Compare A[0] with the key element and check whether it matches. A[0] = 0 ≠ 66. Therefore, i is incremented by the block size, 3. Now the element compared with the key element is A[3].

Step 3 − A[3] = 14 ≠ 66. Since it is not a match, i is again incremented by 3.

Step 4 − A[6] = 48 ≠ 66. i is incremented by 3 again. A[9] is compared with the key element.

Step 5 − A[9] = 88 ≠ 66. However, 88 is greater than 66; therefore, linear search is applied on the previous block, starting at index 6.

Step 6 − After applying linear search, the pointer moves from the 6th index to the 7th. A[7] is compared with the key element, and we find that A[7] is the required element; hence the program returns the 7th index as the output.

Implementation

The jump search algorithm is an extended variant of linear search.
The algorithm divides the input array into multiple small blocks and performs a linear search on the single block that is assumed to contain the element. If the element is not found in that block, it returns an unsuccessful search. The output prints the position of the element in the array instead of its index: indexing starts from 0, while the position is the 1-based place where the element is stored.

C

#include <stdio.h>
#include <math.h>

int jump_search(int[], int, int);

int main() {
   int n, key, index;
   int arr[12] = {0, 6, 12, 14, 19, 22, 48, 66, 79, 88, 104, 126};
   int len = sizeof(arr) / sizeof(arr[0]);
   printf("Array elements are:");
   for (int j = 0; j < len; j++) {
      printf(" %d", arr[j]);
   }
   n = 12;
   key = 66;
   printf("\nThe element to be searched: %d\n", key);
   index = jump_search(arr, n, key);
   if (index >= 0)
      printf("The element is found at position %d", index + 1);
   else
      printf("Unsuccessful Search");
   return 0;
}

int jump_search(int arr[], int n, int key) {
   int i = 0;                 // start of the block to be scanned linearly
   int step = (int)sqrt(n);   // block size
   int m = step;              // index of the first element of the next block
   while (m < n && arr[m] <= key) {
      i = m;
      m += step;
   }
   if (m > n)
      m = n;                  // clamp so that the last block is still scanned
   // linear search on the block [i, m)
   for (int j = i; j < m; j++) {
      if (arr[j] == key)
         return j;
   }
   return -1;
}

Output

Array elements are: 0 6 12 14 19 22 48 66 79 88 104 126
The element to be searched: 66
The element is found at position 8
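As noted in the Analysis above, the efficiency of jump search comes from the choice of block size. A short worked calculation makes this precise: with block size m, the worst case costs about n/m jumps plus up to m - 1 comparisons inside the final block, so the total number of comparisons is roughly

$$f(m) = \frac{n}{m} + (m - 1)$$

Setting the derivative to zero, -n/m² + 1 = 0, gives m = √n, for a total of about 2√n - 1 comparisons, which is O(√n).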

DAA – Linear Search

Linear search is a type of sequential searching algorithm. In this method, every element of the input array is traversed and compared with the key element to be found. If a match is found in the array, the search is said to be successful; if no match is found, the search is said to be unsuccessful, which is also the worst case in terms of running time.

For instance, in the given animated diagram, we are searching for the element 33. The linear search method looks for it sequentially from the very first element until it finds a match, and this returns a successful search. In the same diagram, if we have to search for the element 46, an unsuccessful search is returned, since 46 is not present in the input.

Linear Search Algorithm

The algorithm for linear search is relatively simple. The procedure starts at the very first index of the input array to be searched.

Step 1 − Start from the 0th index of the input array, and compare the key value with the value present in the 0th index.

Step 2 − If the value matches the key, return the position at which the value was found.

Step 3 − If the value does not match the key, compare the next element in the array.

Step 4 − Repeat Step 3 until a match is found. Return the position at which the match was found.

Step 5 − If it is an unsuccessful search, print that the element is not present in the array and exit the program.

Pseudocode

procedure linear_search (list, value)
   for each item in the list
      if item == value
         return the item's location
      end if
   end for
end procedure

Analysis

Linear search traverses every element sequentially; therefore, the best case is when the element is found in the very first iteration. The best-case time complexity is O(1). However, the worst case of the linear search method is an unsuccessful search that does not find the key value in the array, which requires n iterations. Therefore, the worst-case time complexity of the linear search algorithm is O(n).

Example

Let us look at the step-by-step searching of the key element (say 47) in an array using the linear search method.

Step 1 − The linear search starts from the 0th index. Compare the key element with the value in the 0th index, 34. However, 47 ≠ 34, so it moves to the next element.

Step 2 − Now, the key is compared with the value in the 1st index of the array. Still, 47 ≠ 10, making the algorithm move on to another iteration.

Step 3 − The next element, 66, is compared with 47. They are not a match, so the algorithm compares the further elements.

Step 4 − Now the element in the 3rd index, 27, is compared with the key value, 47. They are not equal, so the algorithm is pushed forward to check the next element.

Step 5 − Comparing the element in the 4th index of the array, 47, to the key 47, we find that the elements match. Now, the position at which 47 is present, i.e., 4, is returned. The output achieved is "Element found at 4th index".

Implementation

In this tutorial, the Linear Search program can be seen implemented in multiple programming languages. The function compares the elements of the input with the key value and returns the position of the key in the array, or an unsuccessful search prompt if the key is not present in the array.
C

#include <stdio.h>

void linear_search(int a[], int n, int key) {
   int i, count = 0;
   for (i = 0; i < n; i++) {
      if (a[i] == key) { // compares each element of the array
         printf("The element is found at %d position\n", i + 1);
         count = count + 1;
      }
   }
   if (count == 0) // for unsuccessful search
      printf("The element is not present in the array\n");
}

int main() {
   int n, key;
   n = 6;
   int a[10] = {12, 44, 32, 18, 4, 10};
   key = 18;
   linear_search(a, n, key);
   key = 23;
   linear_search(a, n, key);
   return 0;
}

Output

The element is found at 4 position
The element is not present in the array

C++

#include <iostream>
using namespace std;

void linear_search(int a[], int n, int key) {
   int i, count = 0;
   for (i = 0; i < n; i++) {
      if (a[i] == key) { // compares each element of the array
         cout << "The element is found at position " << i + 1 << endl;
         count = count + 1;
      }
   }
   if (count == 0) // for unsuccessful search
      cout << "The element is not present in the array" << endl;
}

int main() {
   int n, key;
   n = 6;
   int a[10] = {12, 44, 32, 18, 4, 10};
   key = 18;
   linear_search(a, n, key);
   key = 23;
   linear_search(a, n, key);
   return 0;
}

Output

The element is found at position 4
The element is not present in the array

Java

import java.io.*;
import java.util.*;

public class LinearSearch {
   static void linear_search(int a[], int n, int key) {
      int i, count = 0;
      for (i = 0; i < n; i++) {
         if (a[i] == key) { // compares each element of the array
            System.out.println("The element is found at position " + (i + 1));
            count = count + 1;
         }
      }
      if (count == 0) // for unsuccessful search
         System.out.println("The element is not present in the array");
   }

   public static void main(String args[]) {
      int n = 6;
      int a[] = {12, 44, 32, 18, 4, 10};
      int key = 18;
      linear_search(a, n, key);
      key = 23;
      linear_search(a, n, key);
   }
}

Output

The element is found at position 4
The element is not present in the array

DAA – Deterministic vs. Nondeterministic Computations

To understand the classes P and NP, we first need a computational model. Hence, in this chapter we will discuss two important computational models.

Deterministic Computation and the Class P

A deterministic computation always produces the same output for the same initial state and input data; since no randomness is involved, each step is fully determined by the current configuration. There are various models available to perform deterministic computation.

Deterministic Turing Machine

One of these models is the deterministic one-tape Turing machine. This machine consists of a finite state control, a read-write head, and a two-way infinite tape divided into cells. Following is the schematic diagram of a deterministic one-tape Turing machine.

A program for a deterministic Turing machine specifies the following information −

A finite set of tape symbols (input symbols and a blank symbol)
A finite set of states
A transition function

In algorithmic analysis, if a problem is solvable in polynomial time by a deterministic one-tape Turing machine, the problem belongs to the class P. A small code sketch of such a machine program is given at the end of this chapter.

Nondeterministic Computation and the Class NP

Nondeterministic Turing Machine

Another model used to solve computational problems is the Non-deterministic Turing Machine (NDTM). The structure of an NDTM is similar to that of a DTM; however, it has an additional module known as the guessing module, which is associated with a write-only head. Following is the schematic diagram.

If a problem is solvable in polynomial time by a non-deterministic Turing machine, the problem belongs to the class NP.
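To make the notion of a deterministic Turing machine program concrete, here is a small hypothetical sketch (not from the original text): its tape symbols are 0, 1 and a blank, it has two states, and its transition function simply inverts every bit while moving right, halting when it reads a blank.

#include <stdio.h>

/* A minimal sketch of a deterministic one-tape Turing machine program.
   The machine, its states, and its symbols are invented for illustration. */

enum state { SCAN, HALT };

struct rule {          /* one entry of the transition function */
   enum state next;    /* next state */
   char write;         /* symbol written on the tape */
   int move;           /* head movement: +1 = right, -1 = left, 0 = stay */
};

/* Transition function delta(state, symbol) */
struct rule delta(enum state s, char symbol) {
   if (s == SCAN && symbol == '0') return (struct rule){ SCAN, '1', +1 };
   if (s == SCAN && symbol == '1') return (struct rule){ SCAN, '0', +1 };
   return (struct rule){ HALT, symbol, 0 };   /* blank symbol: halt */
}

int main() {
   char tape[32] = "10110_";   /* input string followed by a blank ('_') */
   int head = 0;
   enum state s = SCAN;

   while (s != HALT) {                        /* one deterministic step at a time */
      struct rule r = delta(s, tape[head]);
      tape[head] = r.write;
      head += r.move;
      s = r.next;
   }
   printf("Tape after halting: %s\n", tape);  /* prints 01001_ */
   return 0;
}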

DAA – NP Hard & NP-Complete Classes

A problem is in the class NPC if it is in NP and is as hard as any problem in NP. A problem is NP-hard if all problems in NP are polynomial time reducible to it, even though it may not be in NP itself. If a polynomial time algorithm existed for any of these problems, all problems in NP would be polynomial time solvable. Problems that are both NP-hard and in NP are called NP-complete. The phenomenon of NP-completeness is important for both theoretical and practical reasons.

Definition of NP-Completeness

A language B is NP-complete if it satisfies two conditions −

B is in NP
Every A in NP is polynomial time reducible to B.

If a language satisfies the second property, but not necessarily the first one, the language B is known as NP-Hard. Informally, a search problem B is NP-Hard if there exists some NP-Complete problem A that Turing reduces to B. An NP-hard problem cannot be solved in polynomial time unless P = NP. If a problem is proved to be NPC, there is no need to waste time on trying to find an efficient exact algorithm for it. Instead, we can focus on designing approximation algorithms.

NP-Complete Problems

Following are some NP-Complete problems, for which no polynomial time algorithm is known −

Determining whether a graph has a Hamiltonian cycle
Determining whether a Boolean formula is satisfiable, etc.

NP-Hard Problems

The following problems are NP-Hard −

The circuit-satisfiability problem
Set Cover
Vertex Cover
Travelling Salesman Problem

In this context, we will now show that TSP is NP-Complete.

TSP is NP-Complete

The traveling salesman problem consists of a salesman and a set of cities. The salesman has to visit each one of the cities, starting from a certain one and returning to the same city. The challenge of the problem is that the traveling salesman wants to minimize the total length of the trip.

Proof

To prove TSP is NP-Complete, first we have to prove that TSP belongs to NP. In TSP, we find a tour, check that the tour contains each vertex exactly once, and calculate the total cost of the edges of the tour. Finally, we check whether the cost is at most a given bound. All of this can be completed in polynomial time. Thus the decision version of TSP belongs to NP.

Secondly, we have to prove that TSP is NP-hard. To prove this, one way is to show that Hamiltonian cycle ≤p TSP (as we know that the Hamiltonian cycle problem is NP-complete).

Assume G = (V, E) to be an instance of the Hamiltonian cycle problem. Hence, an instance of TSP is constructed. We create the complete graph G' = (V, E'), where

$$E' = \lbrace (i, j) : i, j \in V \text{ and } i \neq j \rbrace$$

Thus, the cost function is defined as follows −

$$t(i, j) = \begin{cases} 0 & \text{if } (i, j) \in E \\ 1 & \text{otherwise} \end{cases}$$

Now, suppose that a Hamiltonian cycle h exists in G. It is clear that the cost of each edge in h is 0 in G', as each edge belongs to E. Therefore, h has a cost of 0 in G'. Thus, if graph G has a Hamiltonian cycle, then graph G' has a tour of cost 0.

Conversely, we assume that G' has a tour h' of cost at most 0. The costs of edges in E' are 0 and 1 by definition. Hence, each edge of h' must have a cost of 0, as the cost of h' is 0. We therefore conclude that h' contains only edges in E. We have thus proven that G has a Hamiltonian cycle if and only if G' has a tour of cost at most 0. Hence, TSP is NP-complete.
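The construction used in this reduction can also be sketched in code. The following C fragment is an illustrative sketch rather than part of the proof: the sample graph, its size N, and the chosen tour are assumptions made for the example. It builds the cost matrix t of the complete graph G' from the adjacency matrix of G exactly as defined above, and checks that a Hamiltonian cycle of G yields a tour of cost 0 in G'.

#include <stdio.h>

#define N 4   /* number of vertices, chosen for illustration */

int main() {
   /* Adjacency matrix of a sample graph G that contains the
      Hamiltonian cycle 0 - 1 - 2 - 3 - 0 */
   int G[N][N] = {
      { 0, 1, 0, 1 },
      { 1, 0, 1, 0 },
      { 0, 1, 0, 1 },
      { 1, 0, 1, 0 }
   };
   int t[N][N]; /* cost matrix of the complete graph G' */

   /* t(i, j) = 0 if (i, j) is an edge of G, and 1 otherwise
      (diagonal entries are never used by a valid tour) */
   for (int i = 0; i < N; i++)
      for (int j = 0; j < N; j++)
         t[i][j] = (i != j && G[i][j] == 1) ? 0 : 1;

   /* The tour 0 -> 1 -> 2 -> 3 -> 0 follows edges of G only,
      so its cost in G' is 0, as the proof requires. */
   int tour[] = { 0, 1, 2, 3, 0 };
   int cost = 0;
   for (int k = 0; k < N; k++)
      cost += t[tour[k]][tour[k + 1]];
   printf("Cost of the tour in G': %d\n", cost);
   return 0;
}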