Binary Search Algorithm

Binary search is a fast search algorithm with a run-time complexity of Ο(log n). It works on the principle of divide and conquer, since it halves the array before searching. For this algorithm to work properly, the data collection must be sorted. Binary search looks for a particular key value by comparing it with the middlemost item of the collection. If a match occurs, the index of the item is returned. If the middle item has a value greater than the key value, the left sub-array of the middle item is searched; otherwise, the right sub-array is searched. This process continues recursively until the size of a sub-array reduces to zero.

Binary search is an interval searching method that performs the search in intervals only. The input to the binary search algorithm must always be a sorted array, since it divides the array into sub-arrays based on greater or lower values. The algorithm follows the procedure below −

Step 1 − Select the middle item in the array and compare it with the key value to be searched. If it matches, return the position of the middle item.
Step 2 − If it does not match the key value, check whether the key value is greater than or less than the middle value.
Step 3 − If the key is greater, search the right sub-array; if the key is lower than the middle value, search the left sub-array.
Step 4 − Repeat Steps 1, 2 and 3 iteratively until the key is found or the size of the sub-array reduces to zero.
Step 5 − If the key value does not exist in the array, the algorithm reports an unsuccessful search.
Pseudocode

The pseudocode of the binary search algorithm should look like this −

Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched

   Set lowerBound = 1
   Set upperBound = n

   while x not found
      if upperBound < lowerBound
         EXIT: x does not exist.

      set midPoint = lowerBound + (upperBound - lowerBound) / 2

      if A[midPoint] < x
         set lowerBound = midPoint + 1

      if A[midPoint] > x
         set upperBound = midPoint - 1

      if A[midPoint] = x
         EXIT: x found at location midPoint
   end while
end procedure

Analysis

Since the binary search algorithm discards half of the remaining input on every unsuccessful comparison, calculating the time complexity is not as easy as for the linear search algorithm. The input array is searched iteratively by dividing it into multiple sub-arrays after every unsuccessful iteration. Therefore, the recurrence relation formed is that of a dividing function. To explain it in simpler terms −

During the first iteration, the element is searched in the entire array. Therefore, the length of the array = n.
In the second iteration, only half of the original array is searched. Hence, the length of the array = n/2.
In the third iteration, half of the previous sub-array is searched. Here, the length of the array = n/4.
Similarly, in the ith iteration, the length of the array becomes n/2^i.

To achieve a successful search, after the last iteration the length of the array must be 1. Hence,

n/2^i = 1

That gives us −

n = 2^i

Applying log on both sides,

log n = log 2^i
log n = i · log 2
i = log n

The time complexity of the binary search algorithm is therefore O(log n).

Example

For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the process of binary search with a pictorial example. The following is our sorted array, and let us assume that we need to search for the location of value 31 using binary search. First, we shall determine the middle of the array by using this formula −

mid = low + (high - low) / 2

Here it is, 0 + (9 - 0) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.
Now we compare the value stored at location 4 with the value being searched, i.e. 31. We find that the value at location 4 is 27, which is not a match. As the target value 31 is greater than 27 and we have a sorted array, we know that the target value must be in the upper portion of the array. We change our low to mid + 1 and find the new mid value again.

low = mid + 1
mid = low + (high - low) / 2

Our new mid is 7 now. We compare the value stored at location 7 with our target value 31. The value stored at location 7 is not a match; rather, it is greater than what we are looking for. So, the value must be in the lower part from this location. Hence, we set high to mid - 1 and calculate the mid again. This time it is 5. We compare the value stored at location 5 with our target value. We find that it is a match. We conclude that the target value 31 is stored at location 5.

Binary search halves the number of searchable items and thus reduces the count of comparisons to be made to a very small number.

Implementation

Binary search is a fast search algorithm with a run-time complexity of Ο(log n), working on the principle of divide and conquer over a sorted collection.
DSA – Hash Table
Hash Table Data Structure

Hash Table is a data structure which stores data in an associative manner. In a hash table, data is stored in an array format, where each data value has its own unique index value. Access of data becomes very fast if we know the index of the desired data. Thus, it becomes a data structure in which insertion and search operations are very fast irrespective of the size of the data. A hash table uses an array as a storage medium and uses a hash technique to generate the index where an element is to be inserted or located.

Hashing

Hashing is a technique to convert a range of key values into a range of indexes of an array. We're going to use the modulo operator to get a range of key values. Consider an example of a hash table of size 20, where the following items are to be stored. Items are in the (key, value) format.

(1,20) (2,70) (42,80) (4,25) (12,44) (14,32) (17,11) (13,78) (37,98)

Sr.No.  Key  Hash          Array Index
1       1    1 % 20 = 1    1
2       2    2 % 20 = 2    2
3       42   42 % 20 = 2   2
4       4    4 % 20 = 4    4
5       12   12 % 20 = 12  12
6       14   14 % 20 = 14  14
7       17   17 % 20 = 17  17
8       13   13 % 20 = 13  13
9       37   37 % 20 = 17  17

Linear Probing

As we can see, the hashing technique may produce an index of the array that is already in use. In such a case, we search for the next empty location in the array by looking into the following cells until we find an empty one. This technique is called linear probing.

Sr.No.  Key  Hash          Array Index  Index After Linear Probing
1       1    1 % 20 = 1    1            1
2       2    2 % 20 = 2    2            2
3       42   42 % 20 = 2   2            3
4       4    4 % 20 = 4    4            4
5       12   12 % 20 = 12  12           12
6       14   14 % 20 = 14  14           14
7       17   17 % 20 = 17  17           17
8       13   13 % 20 = 13  13           13
9       37   37 % 20 = 17  17           18

Basic Operations

Following are the basic primary operations of a hash table.

Search − Searches for an element in a hash table.
Insert − Inserts an element in a hash table.
Delete − Deletes an element from a hash table.

DataItem

Define a data item having some data and a key, based on which the search is to be conducted in a hash table.

struct DataItem {
   int data;
   int key;
};

Hash Method

Define a hashing method to compute the hash code of the key of the data item.

int hashCode(int key) {
   return key % SIZE;
}

Search Operation

Whenever an element is to be searched, compute the hash code of the key passed and locate the element using that hash code as an index in the array. Use linear probing to move ahead in the array if the element is not found at the computed hash code.

struct DataItem *search(int key) {
   // get the hash
   int hashIndex = hashCode(key);

   // move in array until an empty cell is found
   while(hashArray[hashIndex] != NULL) {
      if(hashArray[hashIndex]->key == key)
         return hashArray[hashIndex];

      // go to next cell
      ++hashIndex;

      // wrap around the table
      hashIndex %= SIZE;
   }
   return NULL;
}

Example

Following are the implementations of this operation in various programming languages −

C C++ Java Python

#include <stdio.h>
#define SIZE 10 // Define the size of the hash table

struct DataItem {
   int key;
};

struct DataItem *hashArray[SIZE]; // Define the hash table as an array of DataItem pointers

int hashCode(int key) {
   // Return a hash value based on the key
   return key % SIZE;
}

struct DataItem *search(int key) {
   // get the hash
   int hashIndex = hashCode(key);

   // move in array until an empty slot is found or the key is found
   while (hashArray[hashIndex] != NULL) {
      // If the key is found, return the corresponding DataItem pointer
      if (hashArray[hashIndex]->key == key)
         return hashArray[hashIndex];

      // go to the next cell
      ++hashIndex;

      // wrap around the table
      hashIndex %= SIZE;
   }
   // If the key is not found, return NULL
   return NULL;
}

int main() {
   // Initializing the hash table with some sample DataItems
   struct DataItem item2 = {25}; // Assuming the key is 25
   struct DataItem item3 = {64}; // Assuming the key is 64
   struct DataItem item4 = {22}; // Assuming the key is 22

   // Calculate the hash index for each item and place them in the hash table
   int hashIndex2 = hashCode(item2.key);
   hashArray[hashIndex2] = &item2;
   int hashIndex3 = hashCode(item3.key);
   hashArray[hashIndex3] = &item3;
   int hashIndex4 = hashCode(item4.key);
   hashArray[hashIndex4] = &item4;

   // Call the search function to test it
   int keyToSearch = 64; // The key to search for in the hash table
   struct DataItem *result = search(keyToSearch);

   printf("The element to be searched: %d", keyToSearch);
   if (result != NULL) {
      printf("\nElement found");
   } else {
      printf("\nElement not found");
   }
   return 0;
}

Output

The element to be searched: 64
Element found

#include <iostream>
#include <unordered_map>
using namespace std;
#define SIZE 10 // Define the size of the hash table

struct DataItem {
   int key;
};

unordered_map<int, DataItem*> hashMap; // Define the hash table as an unordered_map

int hashCode(int key) {
   // Return a hash value based on the key
   return key % SIZE;
}

DataItem* search(int key) {
   // get the hash
   int hashIndex = hashCode(key);

   // move in the map until an empty slot is found or the key is found
   while (hashMap[hashIndex] != nullptr) {
      // If the key is found, return the corresponding DataItem pointer
      if (hashMap[hashIndex]->key == key)
         return hashMap[hashIndex];

      // go to the next cell
      ++hashIndex;

      // wrap around the table
      hashIndex %= SIZE;
   }
   // If the key is not found, return nullptr
   return nullptr;
}
DSA – Asymptotic Analysis
Data Structures – Asymptotic Analysis

Asymptotic Analysis

Asymptotic analysis of an algorithm refers to defining the mathematical foundation/framing of its run-time performance. Using asymptotic analysis, we can very well conclude the best case, average case, and worst case scenarios of an algorithm. Asymptotic analysis is input bound, i.e., if there's no input to the algorithm, it is concluded to work in constant time. Other than the "input", all other factors are considered constant.

Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation. For example, the running time of one operation may be computed as f(n) = n, while for another operation it is computed as g(n) = n². This means the running time of the first operation will increase linearly with the increase in n, and the running time of the second operation will increase quadratically when n increases. Similarly, the running times of both operations will be nearly the same if n is significantly small.

Usually, the time required by an algorithm falls under three types −

Best Case − Minimum time required for program execution.
Average Case − Average time required for program execution.
Worst Case − Maximum time required for program execution.

Asymptotic Notations

Execution time of an algorithm depends on the instruction set, processor speed, disk I/O speed, etc. Hence, we estimate the efficiency of an algorithm asymptotically. The time function of an algorithm is represented by T(n), where n is the input size. Different types of asymptotic notations are used to represent the complexity of an algorithm. The following asymptotic notations are used to calculate the running time complexity of an algorithm.
O − Big Oh Notation
Ω − Big Omega Notation
θ − Big Theta Notation
o − Little Oh Notation
ω − Little Omega Notation

Big Oh, O: Asymptotic Upper Bound

The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time and is the most commonly used notation. It measures the worst case time complexity, i.e. the longest amount of time an algorithm can possibly take to complete. A function f(n) can be represented as the order of g(n), that is O(g(n)), if there exist a positive integer $n_{0}$ and a positive constant c such that −

$f(n) \leqslant c \cdot g(n)$ for all $n > n_{0}$

Hence, function g(n) is an upper bound for function f(n), as c·g(n) grows at least as fast as f(n).

Example

Let us consider a given function, $f(n) = 4 \cdot n^3 + 10 \cdot n^2 + 5 \cdot n + 1$

Considering $g(n) = n^3$, $f(n) \leqslant 5 \cdot g(n)$ for all values of $n \geqslant 11$

Hence, the complexity of f(n) can be represented as $O(g(n))$, i.e. $O(n^3)$

Big Omega, Ω: Asymptotic Lower Bound

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity, i.e. the least amount of time an algorithm can possibly take to complete. We say that $f(n) = \Omega(g(n))$ when there exists a constant c such that $f(n) \geqslant c \cdot g(n)$ for all sufficiently large values of n. Here n is a positive integer. It means function g is a lower bound for function f; after a certain value of n, f will never go below g.

Example

Let us consider a given function, $f(n) = 4 \cdot n^3 + 10 \cdot n^2 + 5 \cdot n + 1$.

Considering $g(n) = n^3$, $f(n) \geqslant 4 \cdot g(n)$ for all values of $n > 0$.

Hence, the complexity of f(n) can be represented as $\Omega(g(n))$, i.e. $\Omega(n^3)$

Theta, θ: Asymptotic Tight Bound

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time.
Some may confuse the theta notation with the average case time complexity; while big theta notation could almost accurately be used to describe the average case, other notations could be used as well. We say that $f(n) = \theta(g(n))$ when there exist constants $c_{1}$ and $c_{2}$ such that $c_{1} \cdot g(n) \leqslant f(n) \leqslant c_{2} \cdot g(n)$ for all sufficiently large values of n. Here n is a positive integer. This means function g is a tight bound for function f.

Example

Let us consider a given function, $f(n) = 4 \cdot n^3 + 10 \cdot n^2 + 5 \cdot n + 1$

Considering $g(n) = n^3$, $4 \cdot g(n) \leqslant f(n) \leqslant 5 \cdot g(n)$ for all sufficiently large values of n.

Hence, the complexity of f(n) can be represented as $\theta(g(n))$, i.e. $\theta(n^3)$.

Little Oh, o

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound $2 \cdot n^2 = O(n^2)$ is asymptotically tight, but the bound $2 \cdot n = O(n^2)$ is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) (little-oh of g of n) as the set of functions f(n) = o(g(n)) such that for any positive constant $c > 0$ there exists a value $n_{0} > 0$ with $0 \leqslant f(n) < c \cdot g(n)$ for all $n > n_{0}$.

Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,

$$\lim_{n \rightarrow \infty}\left(\frac{f(n)}{g(n)}\right) = 0$$

Example

Let us consider the same function, $f(n) = 4 \cdot n^3 + 10 \cdot n^2 + 5 \cdot n + 1$

Considering $g(n) = n^{4}$,

$$\lim_{n \rightarrow \infty}\left(\frac{4 \cdot n^3 + 10 \cdot n^2 + 5 \cdot n + 1}{n^4}\right) = 0$$

Hence, the complexity of f(n) can be represented as $o(g(n))$, i.e. $o(n^{4})$.
DSA – Array Data Structure
Array Data Structure

What is an Array?

An array is a type of linear data structure that is defined as a collection of elements, usually of the same data type. Arrays exist in both single and multiple dimensions. These data structures come into the picture when there is a necessity to store multiple elements of a similar nature together in one place.

The difference between an array index and a memory address is that the array index acts like a key value to label the elements in the array, whereas a memory address is the actual starting address of the element in memory.

Following are the important terms needed to understand the concept of an array.

Element − Each item stored in an array is called an element.
Index − Each location of an element in an array has a numerical index, which is used to identify the element.

Syntax

Creating an array in the C and C++ programming languages −

data_type array_name[array_size] = {elements separated by commas};
or,
data_type array_name[array_size];

Creating an array in the Java programming language −

data_type[] array_name = {elements separated by commas};
or,
data_type[] array_name = new data_type[array_size];

Need for Arrays

Arrays are used as solutions to many problems, from small sorting problems to more complex problems like the travelling salesperson problem. There are many data structures other than arrays that provide efficient time and space complexity for these problems, so what makes using arrays better? The answer lies in the random access lookup time. Arrays provide O(1) random access lookup time. That means accessing the 1st index of the array and the 1000th index of the array will both take the same time. This is due to the fact that an array comes with a pointer and an offset value. The pointer points to the starting location of the memory and the offset value shows how far to look into the said memory.
array_name[index]
    |       |
 pointer  offset

Therefore, in an array with 6 elements, to access the 1st element, the array is pointed towards the 0th index. Similarly, to access the 6th element, the array is pointed towards the 5th index.

Array Representation

Arrays are represented as a collection of buckets where each bucket stores one element. These buckets are indexed from 0 to n-1, where n is the size of that particular array. For example, an array of size 10 will have buckets indexed from 0 to 9. This indexing is similar for multidimensional arrays as well. If it is a 2-dimensional array, it will have sub-buckets in each bucket. It is then indexed as array_name[m][n], where m and n are the sizes of each dimension of the array.

As per the above illustration, following are the important points to be considered.

Index starts with 0.
Array length is 9, which means it can store 9 elements.
Each element can be accessed via its index. For example, we can fetch the element at index 6 as 23.

Basic Operations in Arrays

The basic operations in arrays are insertion, deletion, searching, display, traversal, and update. These operations are usually performed either to modify the data in the array or to report the status of the array. Following are the basic operations supported by an array.

Traverse − Prints all the array elements one by one.
Insertion − Adds an element at the given index.
Deletion − Deletes an element at the given index.
Search − Searches for an element using the given index or by the value.
Update − Updates an element at the given index.
Display − Displays the contents of the array.

In C, when an array is initialized with a size but without explicit values, its elements get default values as follows.

Data Type   Default Value
bool        false
char        0
int         0
float       0.0
double      0.0
void        −
wchar_t     0

Array – Insertion Operation

In the insertion operation, we are adding one or more elements to the array.
Based on the requirement, a new element can be added at the beginning, end, or any given index of the array. This is done using the input statements of the programming languages.

Algorithm

Following is an algorithm to insert elements into a linear array until we reach the end of the array −

1. Start
2. Create an Array of a desired datatype and size.
3. Initialize a variable 'i' as 0.
4. Enter the element at the ith index of the array.
5. Increment i by 1.
6. Repeat Steps 4 & 5 until the end of the array.
7. Stop

Example

Here, we see a practical implementation of the insertion operation, where we add data at the end of the array −

C C++ Java Python

#include <stdio.h>

int main(){
   int LA[3] = {0}, i;
   printf("Array Before Insertion:\n");
   for(i = 0; i < 3; i++)
      printf("LA[%d] = %d \n", i, LA[i]);
   printf("Inserting Elements.. \n");
   printf("The array elements after insertion :\n");
   // prints array values
   for(i = 0; i < 3; i++) {
      LA[i] = i + 2;
      printf("LA[%d] = %d \n", i, LA[i]);
   }
   return 0;
}

#include <iostream>
using namespace std;

int main(){
   int LA[3] = {}, i;
   cout << "Array Before Insertion:" << endl;
   for(i = 0; i < 3; i++)
      cout << "LA[" << i << "] = " << LA[i] << endl; // prints the initial zero values
   cout << "Inserting elements.." << endl;
   cout << "Array After Insertion:" << endl;
   // prints array values
   for(i = 0; i < 3; i++) {
      LA[i] = i + 2;
      cout << "LA[" << i << "] = " << LA[i] << endl;
   }
   return 0;
}
Linked List Data Structure

What is a Linked List?

A linked list is a linear data structure which stores a collection of "nodes" connected together via links, i.e. pointers. Linked list nodes are not stored at contiguous locations; rather, they are linked using pointers to different memory locations. A node consists of the data value and a pointer to the address of the next node within the linked list.

A linked list is a dynamic linear data structure whose memory can be allocated or de-allocated at run time based on the insertion or deletion operation; this helps in using system memory efficiently. Linked lists can be used to implement various data structures like a stack, queue, graph, hash map, etc.

A linked list starts with a head node which points to the first node. Every node consists of data, which holds the actual value associated with the node, and a next pointer, which holds the memory address of the next node in the linked list. The last node is called the tail node; its next pointer points to null, indicating the end of the list.

Linked Lists vs Arrays

In the case of arrays, the size is given at the time of creation, so arrays are of fixed length, whereas linked lists are dynamic in size and any number of nodes can be added to a linked list dynamically. An array can accommodate only similar types of data, whereas a linked list can store nodes of different data types.

Types of Linked List

Following are the various types of linked list.

Singly Linked Lists

Singly linked lists contain two "buckets" in one node; one bucket holds the data and the other bucket holds the address of the next node of the list. Traversal can be done in one direction only, as there is only a single link between two nodes of the same list.

Doubly Linked Lists

Doubly linked lists contain three "buckets" in one node; one bucket holds the data and the other buckets hold the addresses of the previous and next nodes in the list.
The list can be traversed in both directions, as the nodes in the list are connected to each other from both sides.

Circular Linked Lists

Circular linked lists can exist as both singly linked lists and doubly linked lists. Since the last node and the first node of a circular linked list are connected, the traversal in this linked list will go on forever until it is broken.

Basic Operations in Linked List

The basic operations in linked lists are insertion, deletion, searching, display, and deleting an element at a given key. These operations are performed on singly linked lists as given below −

Insertion − Adds an element at the beginning of the list.
Deletion − Deletes an element at the beginning of the list.
Display − Displays the complete list.
Search − Searches for an element using the given key.
Delete − Deletes an element using the given key.

Linked List – Insertion Operation

Adding a new node to a linked list is a more-than-one-step activity. We shall learn this with diagrams here. First, create a node using the same structure and find the location where it has to be inserted. Imagine that we are inserting a node B (NewNode) between A (LeftNode) and C (RightNode). Then point B.next to C −

NewNode.next -> RightNode;

Now, the next pointer of the node on the left should point to the new node −

LeftNode.next -> NewNode;

This will put the new node in the middle of the two. The new list should look like this −

Insertion in a linked list can be done in three different ways. They are explained as follows −

Insertion at Beginning

In this operation, we are adding an element at the beginning of the list.

Algorithm

1. START
2. Create a node to store the data
3. Check if the list is empty
4. If the list is empty, add the data to the node and assign the head pointer to it.
5. If the list is not empty, add the data to a node and link it to the current head. Assign the head to the newly added node.
6.
END

Example

Following are the implementations of this operation in various programming languages −

C C++ Java Python

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

struct node {
   int data;
   struct node *next;
};

struct node *head = NULL;
struct node *current = NULL;

// display the list
void printList(){
   struct node *p = head;
   printf("\n[");

   //start from the beginning
   while(p != NULL) {
      printf(" %d ", p->data);
      p = p->next;
   }
   printf("]");
}

//insertion at the beginning
void insertatbegin(int data){
   //create a link
   struct node *lk = (struct node*) malloc(sizeof(struct node));
   lk->data = data;

   // point it to the old first node
   lk->next = head;

   // point head to the new first node
   head = lk;
}

int main(){
   insertatbegin(12);
   insertatbegin(22);
   insertatbegin(30);
   insertatbegin(44);
   insertatbegin(50);
   printf("Linked List: ");

   // print list
   printList();
   return 0;
}

Output

Linked List: [ 50 44 30 22 12 ]

#include <iostream>
using namespace std;

struct node {
   int data;
   struct node *next;
};

struct node *head = NULL;
struct node *current = NULL;

// display the list
void printList(){
   struct node *p = head;
   cout << "\n[";

   //start from the beginning
   while(p != NULL) {
      cout << " " << p->data << " ";
      p = p->next;
   }
   cout << "]";
}
DSA – Home
Data Structures and Algorithms (DSA) Tutorial

Data structures and algorithms (DSA) are two important aspects of any programming language. Every programming language has its own data structures and different types of algorithms to handle these data structures. Data structures are used to organise and store data so it can be used effectively when performing data operations. An algorithm is a step-by-step procedure which defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independently of underlying languages, i.e. an algorithm can be implemented in more than one programming language.

Almost every enterprise application uses various types of data structures in one way or another. So, as a programmer, data structures and algorithms are really important aspects of day-to-day programming. A data structure is a particular way to arrange data so it can be saved in memory and retrieved for later use, whereas an algorithm is a set of steps for solving a known problem. Data Structures and Algorithms is abbreviated as DSA in the context of Computer Science. This tutorial will give you a great understanding of the data structures needed to understand the complexity of enterprise-level applications, and of the need for algorithms and data structures.

Why Learn Data Structures & Algorithms (DSA)?

As applications are getting complex and data rich, there are three common problems that applications face nowadays.

Data Search − Consider an inventory of 1 million (10^6) items of a store. If the application is to search for an item, it has to search among the 1 million (10^6) items every time, slowing down the search. As data grows, search will become slower.

Processor Speed − Processor speed, although very high, becomes limited if the data grows to billions of records.
Multiple Requests − As thousands of users can search data simultaneously on a web server, even a fast server can fail while searching the data.

To solve the above-mentioned problems, data structures come to the rescue. Data can be organized in a data structure in such a way that not all items need to be searched, and the required data can be found almost instantly.

How to Start Learning Data Structures & Algorithms (DSA)?

The basic steps to learn DSA are as follows:

Step 1 – Learn Time and Space Complexities
Time and space complexities are the measures of the amount of time required to execute the code (time complexity) and the amount of space required to execute the code (space complexity).

Step 2 – Learn Different Data Structures
Here we learn different types of data structures like array, stack, queue, linked list, etc.

Step 3 – Learn Different Algorithms
Once you have a good understanding of the various data structures, you can start learning the associated algorithms to process the data stored in them. These algorithms include searching, sorting, and other algorithms.

Applications of Data Structures & Algorithms (DSA)

From the data structure point of view, following are some important categories of algorithms −

Search − Algorithm to search for an item in a data structure.
Sort − Algorithm to sort items in a certain order.
Insert − Algorithm to insert an item in a data structure.
Update − Algorithm to update an existing item in a data structure.
Delete − Algorithm to delete an existing item from a data structure.

The following computer problems can be solved using data structures −

Fibonacci number series
Knapsack problem
Tower of Hanoi
All-pairs shortest path by Floyd-Warshall
Shortest path by Dijkstra
Project scheduling

Who Should Learn DSA

This tutorial has been designed for computer science students as well as software professionals who are willing to learn Data Structures and Algorithms (DSA) programming in simple and easy steps.
After completing this tutorial you will be at an intermediate level of expertise, from where you can take yourself to a higher level of expertise.

DSA Online Editor & Compiler

In this tutorial, we will work with data structures and algorithms in four different programming languages: C, C++, Java, and Python. So, we provide online compilers for each of these languages to execute the given code, aiming to remove the need for a local compiler setup.

C C++ Java Python

#include <stdio.h>

int main(){
   int LA[3] = {0}, i;
   for(i = 0; i < 3; i++) {
      LA[i] = i + 2;
      printf("LA[%d] = %d \n", i, LA[i]);
   }
   return 0;
}

Output

LA[0] = 2
LA[1] = 3
LA[2] = 4

#include <iostream>
using namespace std;

int main(){
   int LA[3] = {}, i;
   for(i = 0; i < 3; i++) {
      LA[i] = i + 2;
      cout << "LA[" << i << "] = " << LA[i] << endl;
   }
   return 0;
}

Output

LA[0] = 2
LA[1] = 3
LA[2] = 4

public class ArrayDemo {
   public static void main(String []args) {
      int LA[] = new int[3];
      for(int i = 0; i < 3; i++) {
         LA[i] = i + 2;
         System.out.println("LA[" + i + "] = " + LA[i]);
      }
   }
}

Output

LA[0] = 2
LA[1] = 3
LA[2] = 4

LA = [0, 0, 0]
for x in range(len(LA)):
   LA[x] = x + 2
   print(f"LA[{x}] = {LA[x]}")

Output

LA[0] = 2
LA[1] = 3
LA[2] = 4

Prerequisites to Learn DSA

Before proceeding with this tutorial, you should have a basic understanding of the C programming language, a text editor, and the execution of programs.

DSA Online Quiz

This Data Structures and Algorithms tutorial helps you prepare for technical interviews and certification exams. We have provided various quizzes and assignments to check your learning level. The given quizzes have multiple-choice questions and their answers with short explanations. Following is a sample quiz; try to attempt any of the given answers:

Q − A complete graph can have

A − n² spanning trees