DAA – Job Sequencing with Deadline

The job scheduling algorithm is applied to schedule jobs on a single processor in order to maximize profit. The greedy approach to job scheduling states: "Given 'n' jobs, each with a deadline and a profit, schedule the jobs in such a way that maximum profit is obtained within the maximum deadline."

Job Scheduling Algorithm

A set of jobs with deadlines and profits is taken as the input to the job scheduling algorithm, and a scheduled subset of jobs with maximum profit is obtained as the final output.

Algorithm

Step 1 − Find the maximum deadline value from the input set of jobs.
Step 2 − Once the maximum deadline is decided, arrange the jobs in descending order of their profits.
Step 3 − Select the jobs with the highest profits whose time periods do not exceed the maximum deadline.
Step 4 − The selected set of jobs is the output.

Examples

Consider the following tasks with their deadlines and profits. Schedule the tasks in such a way that they produce maximum profit after being executed −

S. No.      1    2    3    4    5
Jobs        J1   J2   J3   J4   J5
Deadlines   2    2    1    3    4
Profits     20   60   40   100  80

Step 1

Find the maximum deadline value, dm, from the deadlines given: dm = 4.

Step 2

Arrange the jobs in descending order of their profits.

S. No.      1    2    3    4    5
Jobs        J4   J5   J2   J3   J1
Deadlines   3    4    2    1    2
Profits     100  80   60   40   20

The maximum deadline, dm, is 4, so all the tasks must end before time 4. Choose the job with the highest profit, J4. It takes up 3 units of the maximum deadline, so the next job must fit within the remaining 1 unit of time. Total Profit = 100.

Step 3

The next job with the highest profit is J5. But the time taken by J5 is 4, which exceeds the remaining time by 3. Therefore, it cannot be added to the output set.

Step 4

The next job with the highest profit is J2. The time taken by J2 is 2, which also exceeds the remaining time by 1. Therefore, it cannot be added to the output set.

Step 5

The next job with the highest profit is J3. The time taken by J3 is 1, which does not exceed the remaining time. Therefore, J3 is added to the output set. Total Profit: 100 + 40 = 140.

Step 6

Since the maximum deadline is now met, the algorithm comes to an end. The output set of jobs scheduled within the deadline is {J4, J3}, with the maximum profit of 140.

Example

Following is the final implementation of the job sequencing algorithm using the greedy approach, in C and C++ −
C

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

// A structure to represent a job
typedef struct Jobs {
   char id;      // Job id
   int dead;     // Deadline of the job
   int profit;   // Profit if the job is finished before or on the deadline
} Jobs;

// This function is used for sorting all jobs according to profit
int compare(const void* a, const void* b){
   Jobs* temp1 = (Jobs*)a;
   Jobs* temp2 = (Jobs*)b;
   return (temp2->profit - temp1->profit);
}

// Find the minimum of two numbers
int min(int num1, int num2){
   return (num1 > num2) ? num2 : num1;
}

int main(){
   Jobs arr[] = { { 'a', 2, 100 },
                  { 'b', 2, 20 },
                  { 'c', 1, 40 },
                  { 'd', 3, 35 },
                  { 'e', 1, 25 } };
   int n = sizeof(arr) / sizeof(arr[0]);
   printf("Following is maximum profit sequence of Jobs: \n");
   qsort(arr, n, sizeof(Jobs), compare);

   int result[n];   // To store the result sequence of jobs
   bool slot[n];    // To keep track of free time slots

   // Initialize all slots to be free
   for (int i = 0; i < n; i++)
      slot[i] = false;

   // Iterate through all given jobs
   for (int i = 0; i < n; i++) {
      // Find a free slot for this job, starting from the last possible slot
      for (int j = min(n, arr[i].dead) - 1; j >= 0; j--) {
         // Free slot found
         if (slot[j] == false) {
            result[j] = i;
            slot[j] = true;
            break;
         }
      }
   }

   // Print the result
   for (int i = 0; i < n; i++)
      if (slot[i])
         printf("%c ", arr[result[i]].id);
   return 0;
}

Output

Following is maximum profit sequence of Jobs: c a d

C++

#include <iostream>
#include <algorithm>
using namespace std;

struct Job {
   char id;
   int deadLine;
   int profit;
};

bool comp(Job j1, Job j2){
   return (j1.profit > j2.profit);   // compare jobs based on profit
}

int min(int a, int b){
   return (a < b) ? a : b;
}

int main(){
   Job jobs[] = { { 'a', 2, 100 },
                  { 'b', 2, 20 },
                  { 'c', 1, 40 },
                  { 'd', 3, 35 },
                  { 'e', 1, 25 } };
   int n = 5;
   cout << "Following is maximum profit sequence of Jobs: " << "\n";
   sort(jobs, jobs + n, comp);   // sort jobs on profit

   int jobSeq[n];   // To store the result (sequence of jobs)
   bool slot[n];    // To keep track of free time slots
   for (int i = 0; i < n; i++)
      slot[i] = false;   // initially all slots are free

   // Iterate through all given jobs
   for (int i = 0; i < n; i++) {
      // Find a free slot for this job, starting from the last possible slot
      for (int j = min(n, jobs[i].deadLine) - 1; j >= 0; j--) {
         if (slot[j] == false) {
            jobSeq[j] = i;
            slot[j] = true;
            break;
         }
      }
   }

   // Print the result
   for (int i = 0; i < n; i++)
      if (slot[i])
         cout << jobs[jobSeq[i]].id << " ";
   return 0;
}

DAA – Asymptotic Notations & Apriori Analysis

Asymptotic Analysis

Asymptotic analysis of an algorithm refers to defining the mathematical framing of its run-time performance. Using asymptotic analysis, we can conclude the best-case, average-case, and worst-case scenarios of an algorithm. Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded to work in constant time. Other than the input, all other factors are considered constant.

Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation. For example, the running time of one operation may be computed as f(n) while that of another operation is computed as g(n²). This means the running time of the first operation will increase linearly with the increase in n, while the running time of the second operation will increase quadratically as n increases. Similarly, the running times of both operations will be nearly the same if n is significantly small.

Usually, the time required by an algorithm falls under three types −

Best Case − Minimum time required for program execution.
Average Case − Average time required for program execution.
Worst Case − Maximum time required for program execution.

Asymptotic Notations

The execution time of an algorithm depends on the instruction set, processor speed, disk I/O speed, etc. Hence, we estimate the efficiency of an algorithm asymptotically. The time function of an algorithm is represented by T(n), where n is the input size. Different types of asymptotic notations are used to represent the complexity of an algorithm. The following asymptotic notations are used to express the running-time complexity of an algorithm −

O − Big Oh Notation
Ω − Big Omega Notation
θ − Big Theta Notation
o − Little Oh Notation
ω − Little Omega Notation

Big Oh, O: Asymptotic Upper Bound

The notation O(n) is the formal way to express the upper bound of an algorithm's running time. It is the most commonly used notation. It measures the worst-case time complexity, or the longest amount of time an algorithm can possibly take to complete. A function f(n) can be represented as being of the order of g(n), that is O(g(n)), if there exist a positive integer n0 and a positive constant c such that −

$f(n) \leqslant c.g(n)$ for $n > n_{0}$ in all cases

Hence, the function g(n) is an upper bound for the function f(n), as g(n) grows at least as fast as f(n).

Example

Let us consider a given function, $f(n) = 4.n^3 + 10.n^2 + 5.n + 1$

Considering $g(n) = n^3$, $f(n) \leqslant 5.g(n)$ for all values of $n \geqslant 11$

Hence, the complexity of f(n) can be represented as $O(g(n))$, i.e. $O(n^3)$

Big Omega, Ω: Asymptotic Lower Bound

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best-case time complexity, or the minimum amount of time an algorithm can possibly take to complete. We say that $f(n) = \Omega(g(n))$ when there exists a constant c such that $f(n) \geqslant c.g(n)$ for all sufficiently large values of n. Here n is a positive integer. It means the function g is a lower bound for the function f; after a certain value of n, f will never go below g.

Example

Let us consider a given function, $f(n) = 4.n^3 + 10.n^2 + 5.n + 1$.
Considering $g(n) = n^3$, $f(n) \geqslant 4.g(n)$ for all values of $n > 0$.

Hence, the complexity of f(n) can be represented as $\Omega(g(n))$, i.e. $\Omega(n^3)$

Theta, θ: Asymptotic Tight Bound

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. Some may confuse the theta notation with the average-case time complexity; while big theta notation can often describe the average case quite accurately, other notations could be used as well. We say that $f(n) = \theta(g(n))$ when there exist constants c1 and c2 such that $c_{1}.g(n) \leqslant f(n) \leqslant c_{2}.g(n)$ for all sufficiently large values of n. Here n is a positive integer. This means the function g is a tight bound for the function f.

Example

Let us consider a given function, $f(n) = 4.n^3 + 10.n^2 + 5.n + 1$

Considering $g(n) = n^3$, $4.g(n) \leqslant f(n) \leqslant 5.g(n)$ for all large values of n.

Hence, the complexity of f(n) can be represented as $\theta(g(n))$, i.e. $\theta(n^3)$.

Little Oh, o

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound $2.n^2 = O(n^2)$ is asymptotically tight, but the bound $2.n = O(n^2)$ is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define $f(n) = o(g(n))$ (little-oh of g of n) as follows: for any positive constant $c > 0$, there exists a value $n_{0} > 0$ such that $0 \leqslant f(n) < c.g(n)$ for all $n > n_{0}$.

Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,

$$\lim_{n \rightarrow \infty}\left(\frac{f(n)}{g(n)}\right) = 0$$

Example

Let us consider the same function, $f(n) = 4.n^3 + 10.n^2 + 5.n + 1$

Considering $g(n) = n^{4}$,

$$\lim_{n \rightarrow \infty}\left(\frac{4.n^3 + 10.n^2 + 5.n + 1}{n^4}\right) = 0$$

Hence, the complexity of f(n) can be represented as $o(g(n))$, i.e. $o(n^{4})$.
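To make these bounds concrete, the following C++ sketch evaluates the example function f(n) = 4n³ + 10n² + 5n + 1 for a few values of n and checks it against the constants used above (c = 5 for the upper bound, c = 4 for the lower bound); it also prints f(n)/n⁴, which tends to 0 as in the little-oh example. The program is purely illustrative; the function name and the sample values of n are choices made here, not part of the original text.

#include <iostream>

// f(n) = 4n^3 + 10n^2 + 5n + 1, the function used in the examples above
double f(double n) {
    return 4*n*n*n + 10*n*n + 5*n + 1;
}

int main() {
    for (double n : {10.0, 11.0, 100.0, 1000.0}) {
        double g3 = n * n * n;   // g(n) = n^3
        double g4 = g3 * n;      // g(n) = n^4
        std::cout << "n = " << n
                  << "  f(n) <= 5*n^3: " << (f(n) <= 5*g3 ? "yes" : "no")   // Big-Oh witness (holds from n = 11 onward)
                  << "  f(n) >= 4*n^3: " << (f(n) >= 4*g3 ? "yes" : "no")   // Big-Omega witness (holds for all n > 0)
                  << "  f(n)/n^4 = " << f(n) / g4                           // shrinks toward 0 (little-oh)
                  << "\n";
    }
    return 0;
}

Running it shows the upper-bound check failing at n = 10 and succeeding from n = 11 onward, which is why the threshold n0 appears in the formal definition.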

DAA – Master's Theorem

What is Master's Theorem?

Master's theorem is one of the many methods used to calculate the time complexities of algorithms. In analysis, time complexities are calculated in order to identify the most efficient form of an algorithm. Master's theorem is applied to recurrence relations. Before we go deeper into the master's theorem, let us first revise what recurrence relations are −

Recurrence relations are equations that define a sequence of elements in which each term is a function of its preceding terms. In algorithm analysis, recurrence relations usually arise when an algorithm calls itself recursively on smaller inputs.

Problem Statement

Master's theorem can only be applied to decreasing and dividing recurrence relations. If the relation is neither decreasing nor dividing, the master's theorem must not be applied.

Master's Theorem for Dividing Functions

Consider a relation of the type −

T(n) = aT(n/b) + f(n), where a >= 1 and b > 1

Here,

n − size of the problem
a − number of sub-problems in the recursion
n/b − size of the sub-problems, based on the assumption that all sub-problems are of the same size
f(n) − cost of the work done outside the recursion, of the form $\Theta(n^{k} \log^{p} n)$, where k >= 0 and p is a real number

If the recurrence relation is in the above form, then there are three cases in the master theorem that determine the asymptotic bound −

Case 1 − If $a > b^{k}$, then $T(n) = \Theta(n^{\log_{b} a})$ [where $\log_{b} a = \log a / \log b$]

Case 2 − If $a = b^{k}$, then
   If p > -1, then $T(n) = \Theta(n^{\log_{b} a} \log^{p+1} n)$
   If p = -1, then $T(n) = \Theta(n^{\log_{b} a} \log \log n)$
   If p < -1, then $T(n) = \Theta(n^{\log_{b} a})$

Case 3 − If $a < b^{k}$, then
   If p >= 0, then $T(n) = \Theta(n^{k} \log^{p} n)$
   If p < 0, then $T(n) = \Theta(n^{k})$

Master's Theorem for Decreasing Functions

Consider a relation of the type −

T(n) = aT(n-b) + f(n), where a >= 1, b > 1, and f(n) is asymptotically positive

Here,

n − size of the problem
a − number of sub-problems in the recursion
n-b − size of the sub-problems, based on the assumption that all sub-problems are of the same size
f(n) − cost of the work done outside the recursion, of the form $\Theta(n^{k})$, where k >= 0

If the recurrence relation is in the above form, then there are three cases that determine the asymptotic bound −

If a = 1, then $T(n) = O(n^{k+1})$
If a > 1, then $T(n) = O(a^{n/b} \cdot n^{k})$
If a < 1, then $T(n) = O(n^{k})$

Examples

A few examples of applying the master's theorem to dividing recurrence relations −

Example 1

Consider a recurrence relation given as $T(n) = 8T(n/2) + n^{2}$

In this problem, a = 8, b = 2 and $f(n) = \Theta(n^{k} \log^{p} n) = n^{2}$, giving us k = 2 and p = 0.

$a = 8 > b^{k} = 2^{2} = 4$. Hence, case 1 must be applied for this equation.

To calculate, $T(n) = \Theta(n^{\log_{b} a}) = n^{\log_{2} 8} = n^{(\log 8 / \log 2)} = n^{3}$

Therefore, $T(n) = \Theta(n^{3})$ is the tight bound for this equation.

Example 2

Consider a recurrence relation given as $T(n) = 4T(n/2) + n^{2}$

In this problem, a = 4, b = 2 and $f(n) = \Theta(n^{k} \log^{p} n) = n^{2}$, giving us k = 2 and p = 0.

$a = 4 = b^{k} = 2^{2} = 4$ and p = 0 > -1. Hence, case 2(i) must be applied for this equation.

To calculate, $T(n) = \Theta(n^{\log_{b} a} \log^{p+1} n) = n^{\log_{2} 4} \log^{0+1} n = n^{2} \log n$

Therefore, $T(n) = \Theta(n^{2} \log n)$ is the tight bound for this equation.

Example 3

Consider a recurrence relation given as $T(n) = 2T(n/2) + n/\log n$

In this problem, a = 2, b = 2 and $f(n) = \Theta(n^{k} \log^{p} n) = n/\log n$, giving us k = 1 and p = -1.

$a = 2 = b^{k} = 2^{1} = 2$ and p = -1. Hence, case 2(ii) must be applied for this equation.
To calculate, $T(n) = \Theta(n^{\log_{b} a} \log \log n) = n^{\log_{2} 2} \log \log n = n \cdot \log(\log n)$

Therefore, $T(n) = \Theta(n \log(\log n))$ is the tight bound for this equation.

Example 4

Consider a recurrence relation given as $T(n) = 16T(n/4) + n^{2}/\log^{2} n$

In this problem, a = 16, b = 4 and $f(n) = \Theta(n^{k} \log^{p} n) = n^{2}/\log^{2} n$, giving us k = 2 and p = -2.

$a = 16 = b^{k} = 4^{2} = 16$ and p = -2 < -1. Hence, case 2(iii) must be applied for this equation.

To calculate, $T(n) = \Theta(n^{\log_{b} a}) = n^{\log_{4} 16} = n^{2}$

Therefore, $T(n) = \Theta(n^{2})$ is the tight bound for this equation.

Example 5

Consider a recurrence relation given as $T(n) = 2T(n/2) + n^{2}$

In this problem, a = 2, b = 2 and $f(n) = \Theta(n^{k} \log^{p} n) = n^{2}$, giving us k = 2 and p = 0.

$a = 2 < b^{k} = 2^{2} = 4$ and p = 0 >= 0. Hence, case 3(i) must be applied for this equation.

To calculate, $T(n) = \Theta(n^{k} \log^{p} n) = n^{2} \log^{0} n = n^{2}$

Therefore, $T(n) = \Theta(n^{2})$ is the tight bound for this equation.
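The case analysis for dividing recurrences can be mechanized. The C++ sketch below is an illustrative helper written for this section (the name masterBound and its string output are assumptions, not part of the original text); it takes the parameters a, b, k and p of a recurrence T(n) = aT(n/b) + Θ(n^k log^p n), applies the three cases above, and prints the resulting Θ-bound for the five example recurrences.

#include <cmath>
#include <iostream>
#include <sstream>
#include <string>

// Classify T(n) = a*T(n/b) + Theta(n^k * log^p n) using the dividing-function
// cases above and return the Theta-bound as a string. The exact floating-point
// comparison of a against b^k is adequate only for small illustrative inputs.
std::string masterBound(double a, double b, double k, double p) {
    std::ostringstream out;
    double logba = std::log(a) / std::log(b);   // log_b(a)
    double bk = std::pow(b, k);                 // b^k
    if (a > bk) {                               // Case 1: a > b^k
        out << "Theta(n^" << logba << ")";
    } else if (a == bk) {                       // Case 2: a = b^k
        if (p > -1)       out << "Theta(n^" << logba << " log^" << p + 1 << " n)";
        else if (p == -1) out << "Theta(n^" << logba << " log log n)";
        else              out << "Theta(n^" << logba << ")";
    } else {                                    // Case 3: a < b^k
        if (p > 0) out << "Theta(n^" << k << " log^" << p << " n)";
        else       out << "Theta(n^" << k << ")";   // for p = 0 the log factor is 1; for p < 0 it is dropped
    }
    return out.str();
}

int main() {
    // The five example recurrences from this section, as {a, b, k, p}
    double examples[5][4] = {
        {8, 2, 2, 0},     // T(n) = 8T(n/2) + n^2
        {4, 2, 2, 0},     // T(n) = 4T(n/2) + n^2
        {2, 2, 1, -1},    // T(n) = 2T(n/2) + n/log n
        {16, 4, 2, -2},   // T(n) = 16T(n/4) + n^2/log^2 n
        {2, 2, 2, 0}      // T(n) = 2T(n/2) + n^2
    };
    for (auto& e : examples)
        std::cout << "a=" << e[0] << ", b=" << e[1] << ", k=" << e[2] << ", p=" << e[3]
                  << "  ->  " << masterBound(e[0], e[1], e[2], e[3]) << "\n";
    return 0;
}

Run as-is, it reports the same tight bounds as Examples 1 to 5, with the log exponent written explicitly (for instance, Theta(n^2 log^1 n) for Example 2).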

Home

Design and Analysis of Algorithms Tutorial

An algorithm is a sequence of steps to solve a problem. It acts like a set of instructions describing how a program should be executed; thus, there is no single fixed structure for an algorithm. Design and Analysis of Algorithms covers the concepts of designing algorithms to solve various problems in computer science and information technology, and of analysing the complexity of the algorithms designed. The main aim of designing an algorithm is to provide an optimal solution for a problem. Not all problems have similar types of solutions; an optimal solution for one problem may not be optimal for another. Therefore, we must adopt various strategies to provide feasible solutions for all types of problems.

This tutorial introduces the fundamental concepts of design strategies and complexity analysis of algorithms, followed by problems on graph theory and sorting methods. It also includes the basic concepts of complexity theory.

DAA Online Compiler or Editor

In this tutorial, we provide online compilers and editors to execute the programs of all algorithms. The code is written in four different programming languages: C, C++, Java and Python. This eliminates the need to install a local setup for all these languages. For instance, let us execute the code for a simple linear search algorithm to work with these online compilers.
C

#include <stdio.h>

void linear_search(int a[], int n, int key){
   int i, count = 0;
   for(i = 0; i < n; i++) {
      if(a[i] == key) {   // compares each element of the array
         printf("The element is found at %d position\n", i+1);
         count = count + 1;
      }
   }
   if(count == 0)   // for unsuccessful search
      printf("The element is not present in the array\n");
}

int main(){
   int n, key;
   n = 6;
   int a[10] = {12, 44, 32, 18, 4, 10};
   key = 18;
   linear_search(a, n, key);
   key = 23;
   linear_search(a, n, key);
   return 0;
}

C++

#include <iostream>
using namespace std;

void linear_search(int a[], int n, int key){
   int i, count = 0;
   for(i = 0; i < n; i++) {
      if(a[i] == key) {   // compares each element of the array
         cout << "The element is found at position " << i+1 << endl;
         count = count + 1;
      }
   }
   if(count == 0)   // for unsuccessful search
      cout << "The element is not present in the array" << endl;
}

int main(){
   int n, key;
   n = 6;
   int a[10] = {12, 44, 32, 18, 4, 10};
   key = 18;
   linear_search(a, n, key);
   key = 23;
   linear_search(a, n, key);
   return 0;
}

Java

import java.io.*;
import java.util.*;

public class LinearSearch {
   static void linear_search(int a[], int n, int key) {
      int i, count = 0;
      for(i = 0; i < n; i++) {
         if(a[i] == key) {   // compares each element of the array
            System.out.println("The element is found at position " + (i+1));
            count = count + 1;
         }
      }
      if(count == 0)   // for unsuccessful search
         System.out.println("The element is not present in the array");
   }
   public static void main(String args[]) {
      int n, key;
      n = 6;
      int a[] = {12, 44, 32, 18, 4, 10, 66};
      key = 10;
      linear_search(a, n, key);
      key = 54;
      linear_search(a, n, key);
   }
}

Python

def linear_search(a, n, key):
    count = 0
    for i in range(n):
        if(a[i] == key):
            print("The element is found at position", (i+1))
            count = count + 1
    if(count == 0):
        print("Unsuccessful Search")

a = [14, 56, 77, 32, 84, 9, 10]
n = len(a)
key = 32
linear_search(a, n, key)
key = 3
linear_search(a, n, key)

Why Design and Analysis of Algorithms?

One computer problem might have several versions of a solution, and every approach taken to solve the problem may be correct. However, choosing the best-suited solution improves the efficiency of the program. There might be a misconception that smaller algorithms are the best-suited solutions in most cases; but a feasible solution is not judged by the length of the algorithm, rather by the efficiency of its time and space complexity. We study Design and Analysis of Algorithms to analyse the complexity of all the versions of a solution to a computer problem.

Design Strategies

There are various strategies for designing algorithms for various problems. Some of them are listed as follows −

Greedy Approach
Divide & Conquer Approach
Dynamic Programming Approach
Randomization Approach
Approximation Approach
Recursive Approach
Branch and Bound Approach

In this tutorial, we will see various algorithms of each approach to solve various problems.

Analysis of Algorithms

To analyse the feasibility of an algorithm, we calculate its complexity. This is represented using three kinds of asymptotic notations −

Worst-Case Scenario − Big Oh and Little Oh Notations
Best-Case Scenario − Big Omega and Little Omega Notations
Average-Case Scenario − Theta Notation

Each solution is analysed in all scenarios of the problem, and the solution having the best worst-case behaviour is said to be optimal, thus providing an efficient algorithm.
Applications of DAA

There are applications of Design and Analysis of Algorithms (DAA) in a wide range of

DAA – Introduction to Algorithms

An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independent of underlying languages, i.e., an algorithm can be implemented in more than one programming language.

From the data structure point of view, the following are some important categories of algorithms −

Search − Algorithm to search for an item in a data structure.
Sort − Algorithm to sort items in a certain order.
Insert − Algorithm to insert an item into a data structure.
Update − Algorithm to update an existing item in a data structure.
Delete − Algorithm to delete an existing item from a data structure.

Characteristics of an Algorithm

Not all procedures can be called an algorithm. An algorithm should have the following characteristics −

Unambiguous − The algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
Input − An algorithm should have 0 or more well-defined inputs.
Output − An algorithm should have 1 or more well-defined outputs, and they should match the desired output.
Finiteness − An algorithm must terminate after a finite number of steps.
Feasibility − It should be feasible with the available resources.
Independent − An algorithm should have step-by-step directions that are independent of any programming code.

How to Write an Algorithm?

There are no well-defined standards for writing algorithms. Rather, it is problem and resource dependent. Algorithms are never written to support a particular programming code. As we know, all programming languages share basic code constructs like loops (do, for, while), flow control (if-else), etc. These common constructs can be used to write an algorithm.

We write algorithms in a step-by-step manner, but that is not always the case. Algorithm writing is a process that is carried out after the problem domain is well-defined; that is, we should know the problem domain for which we are designing a solution.

Example

Let's try to learn algorithm-writing by using an example.

Problem − Design an algorithm to add two numbers and display the result.

Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP

Algorithms tell the programmers how to code the program. Alternatively, the algorithm can be written as −

Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP

In design and analysis of algorithms, usually the second method is used to describe an algorithm. It makes it easy for the analyst to analyze the algorithm ignoring all unwanted definitions. The analyst can observe which operations are being used and how the process flows. Writing step numbers is optional.
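As a quick illustration of how this second style of pseudocode maps onto real code, here is a minimal C++ version of the ADD algorithm above (the hard-coded values of a and b are arbitrary choices made for this sketch).

#include <iostream>

int main() {
    int a = 10, b = 20;        // Step 2 - get values of a & b (hard-coded here for simplicity)
    int c = a + b;             // Step 3 - c <- a + b
    std::cout << c << "\n";    // Step 4 - display c
    return 0;                  // Step 5 - STOP
}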
We design an algorithm to get a solution to a given problem. A problem can be solved in more than one way, so many solution algorithms can be derived for a given problem. The next step is to analyze those proposed solution algorithms and implement the best suitable one.

Algorithm Analysis

The efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation. They are the following −

A Priori Analysis − This is a theoretical analysis of an algorithm. The efficiency of an algorithm is measured by assuming that all other factors, for example processor speed, are constant and have no effect on the implementation.
A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented using a programming language and then executed on a target machine. In this analysis, actual statistics like the running time and the space required are collected.

We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or running time of the various operations involved. The running time of an operation can be defined as the number of computer instructions executed per operation.

Algorithm Complexity

Suppose X is an algorithm and n is the size of the input data. The time and space used by the algorithm X are the two main factors that decide the efficiency of X.

Time Factor − Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
Space Factor − Space is measured by counting the maximum memory space required by the algorithm.

The complexity of an algorithm f(n) gives the running time and/or the storage space required by the algorithm in terms of n as the size of the input data.

Space Complexity

The space complexity of an algorithm represents the amount of memory space required by the algorithm in its life cycle. The space required by an algorithm is equal to the sum of the following two components −

A fixed part, which is the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants used, program size, etc.
A variable part, which is the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.

The space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on the instance characteristic I. Following is a simple example that tries to explain the concept −

Algorithm: SUM(A, B)
Step 1 − START
Step 2 − C ← A + B + 10
Step 3 − Stop

Here we have three variables A, B, and C, and one constant. Hence, S(P) = 1 + 3.
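To connect the two components of space complexity with actual code, the following C++ sketch (illustrative only; the function names sumTwo and sumRecursive are assumptions made here) contrasts a routine whose extra space is a fixed constant, like the SUM algorithm above, with a recursive routine whose stack space grows with the size of the input, i.e., a variable part.

#include <iostream>
#include <vector>

// Fixed part: uses a constant amount of extra space regardless of input size,
// mirroring Algorithm SUM(A, B): C <- A + B + 10
int sumTwo(int a, int b) {
    int c = a + b + 10;
    return c;
}

// Variable part: the recursion stack grows with the number of elements,
// so the extra space required depends on the size of the problem.
int sumRecursive(const std::vector<int>& v, std::size_t i) {
    if (i == v.size()) return 0;            // base case
    return v[i] + sumRecursive(v, i + 1);   // one stack frame per element
}

int main() {
    std::cout << sumTwo(5, 7) << "\n";         // constant extra space
    std::vector<int> v = {1, 2, 3, 4, 5};
    std::cout << sumRecursive(v, 0) << "\n";   // stack space proportional to v.size()
    return 0;
}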