DBMS – Concurrency Control

In a multiprogramming environment where multiple transactions can be executed simultaneously, it is important to control the concurrency of transactions. Concurrency control protocols ensure the atomicity, isolation, and serializability of concurrent transactions. They can be broadly divided into two categories:

- Lock-based protocols
- Timestamp-based protocols

Lock-based Protocols

Database systems equipped with lock-based protocols use a mechanism by which a transaction cannot read or write data until it acquires an appropriate lock on it. Locks are of two kinds:

Binary locks − A lock on a data item can be in two states; it is either locked or unlocked.

Shared/exclusive locks − This type of locking mechanism differentiates locks based on their use. If a lock is acquired on a data item to perform a write operation, it is an exclusive lock, because allowing more than one transaction to write the same data item would lead the database into an inconsistent state. Read locks are shared, because no data value is being changed.

There are four types of lock protocols:

Simplistic Lock Protocol

Simplistic lock-based protocols allow a transaction to obtain a lock on every object before a write operation is performed. The transaction may unlock a data item after completing the write operation.

Pre-claiming Lock Protocol

Pre-claiming protocols evaluate their operations and create a list of data items on which locks are needed. Before initiating execution, the transaction requests the system for all the locks it will need. If all the locks are granted, the transaction executes and releases them when all its operations are over. If any lock is not granted, the transaction rolls back and waits until all the locks are granted.

Two-Phase Locking (2PL)

This locking protocol divides the execution of a transaction into three parts. In the first part, when the transaction starts executing, it seeks permission for the locks it requires. In the second part, the transaction acquires all the locks. As soon as the transaction releases its first lock, the third phase starts; in this phase, the transaction cannot demand any new locks, it can only release the locks it holds.

Two-phase locking thus has two phases: a growing phase, in which the transaction acquires its locks, and a shrinking phase, in which the locks held by the transaction are released. A transaction that holds a shared (read) lock on an item may upgrade it to an exclusive (write) lock during the growing phase.

Strict Two-Phase Locking

The first phase of Strict-2PL is the same as in 2PL. After acquiring all its locks, the transaction continues to execute normally. But in contrast to 2PL, Strict-2PL does not release a lock immediately after using it: it holds all the locks until the commit point and releases them all at once. Strict-2PL therefore avoids the cascading aborts that plain 2PL permits. A minimal sketch of this discipline follows.
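To make strict 2PL concrete, here is a minimal sketch in Python. The lock-table layout and class name are illustrative assumptions rather than any particular DBMS's design, and waiting/deadlock handling is deliberately omitted.

    class TwoPhaseTransaction:
        """One transaction under strict two-phase locking."""
        def __init__(self, lock_table):
            self.lock_table = lock_table   # shared dict: item -> (mode, set of holders)
            self.held = set()
            self.shrinking = False         # once True, no new locks may be acquired

        def lock(self, item, exclusive=False):
            if self.shrinking:
                raise RuntimeError("2PL violation: lock request in shrinking phase")
            mode, holders = self.lock_table.get(item, ("free", set()))
            ok = mode == "free" or (mode == "S" and not exclusive) or holders == {self}
            if not ok:
                return False               # caller must wait or abort (not modeled here)
            new_mode = "X" if (exclusive or mode == "X") else "S"
            self.lock_table[item] = (new_mode, holders | {self})
            self.held.add(item)
            return True

        def commit(self):
            self.shrinking = True          # strict 2PL: release everything only now
            for item in self.held:
                mode, holders = self.lock_table[item]
                holders.discard(self)
                self.lock_table[item] = (mode, holders) if holders else ("free", set())
            self.held.clear()

    locks = {}
    t1 = TwoPhaseTransaction(locks)
    t1.lock("X")                  # growing phase: shared lock on X
    t1.lock("X", exclusive=True)  # lock upgrade (t1 is the sole holder)
    t1.commit()                   # all locks released together at the commit point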
Timestamp-based Protocols

The most commonly used concurrency protocol is the timestamp-based protocol. It uses either the system time or a logical counter as a timestamp. Lock-based protocols manage the order between conflicting pairs of transactions at execution time, whereas timestamp-based protocols start working as soon as a transaction is created. Every transaction has a timestamp associated with it, and the ordering is determined by the age of the transaction: a transaction created at clock time 0002 is older than every transaction that comes after it, so a transaction "y" entering the system at 0004 is two seconds younger, and priority is given to the older one.

In addition, every data item carries the latest read and write timestamps, which let the system know when the last read and write operations were performed on it.

Timestamp Ordering Protocol

The timestamp-ordering protocol ensures serializability among transactions with conflicting read and write operations. It is the responsibility of the protocol that conflicting pairs of operations are executed according to the timestamp values of the transactions.

- The timestamp of transaction Ti is denoted TS(Ti).
- The read timestamp of data item X is denoted R-timestamp(X).
- The write timestamp of data item X is denoted W-timestamp(X).

The protocol works as follows.

If a transaction Ti issues a read(X) operation:
- If TS(Ti) < W-timestamp(X), the operation is rejected.
- If TS(Ti) >= W-timestamp(X), the operation is executed and R-timestamp(X) is updated.

If a transaction Ti issues a write(X) operation:
- If TS(Ti) < R-timestamp(X), the operation is rejected.
- If TS(Ti) < W-timestamp(X), the operation is rejected and Ti is rolled back.
- Otherwise, the operation is executed.

Thomas' Write Rule

The basic rule states that if TS(Ti) < W-timestamp(X), the write is rejected and Ti is rolled back. The timestamp-ordering rules can be modified to make the schedule view serializable: instead of rolling Ti back, the obsolete write operation itself is simply ignored. The sketch below shows both variants.
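These rules translate almost directly into code. The following is a sketch, assuming each data item carries its read and write timestamps as plain fields; the Thomas write rule variant appears in the write path.

    class Item:
        def __init__(self):
            self.r_ts = 0   # timestamp of the youngest transaction that read X
            self.w_ts = 0   # timestamp of the youngest transaction that wrote X

    def read(x, ts):
        """read(X) issued by a transaction whose timestamp is ts."""
        if ts < x.w_ts:
            return "reject"          # X was overwritten by a younger transaction
        x.r_ts = max(x.r_ts, ts)     # execute and update the read timestamp
        return "ok"

    def write(x, ts, thomas=False):
        """write(X) issued by a transaction whose timestamp is ts."""
        if ts < x.r_ts:
            return "reject"          # a younger transaction has already read X
        if ts < x.w_ts:
            # basic rule: reject and roll Ti back;
            # Thomas' write rule: silently ignore the obsolete write instead
            return "ignore" if thomas else "reject and roll back"
        x.w_ts = ts
        return "ok"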

DBMS – Data Recovery

Crash Recovery

A DBMS is a highly complex system with hundreds of transactions being executed every second. Its durability and robustness depend on its architecture and on the underlying hardware and system software. If it fails or crashes in the middle of transactions, the system is expected to apply some algorithm or technique to recover the lost data.

Failure Classification

To see where a problem has occurred, failures are generalized into the following categories.

Transaction failure

A transaction has to abort when it fails to execute or when it reaches a point from which it cannot proceed any further. This is called transaction failure, and it affects only a few transactions or processes. Reasons for a transaction failure include:

Logical errors − A transaction cannot complete because of a code error or some internal error condition.

System errors − The database system itself terminates an active transaction because it is unable to execute it, or has to stop it because of some system condition. For example, in case of deadlock or resource unavailability, the system aborts an active transaction.

System crash

Problems external to the system may cause it to stop abruptly and crash. For example, an interruption in the power supply may cause underlying hardware or software to fail; operating system errors are another example.

Disk failure

In the early days of technology, it was common for hard-disk drives and other storage drives to fail frequently. Disk failures include the formation of bad sectors, unreachability of the disk, a disk head crash, or any other failure that destroys all or part of the disk storage.

Storage Structure

We have already described the storage system. In brief, the storage structure can be divided into two categories:

Volatile storage − As the name suggests, volatile storage cannot survive system crashes. Volatile storage devices are placed very close to the CPU, normally embedded on the chipset itself. Main memory and cache memory are examples; they are fast but can store only a small amount of information.

Non-volatile storage − These memories are made to survive system crashes. They are huge in storage capacity but slower to access. Examples include hard disks, magnetic tapes, flash memory, and non-volatile (battery-backed) RAM.

Recovery and Atomicity

When a system crashes, it may have several transactions in execution and various files opened for them to modify data items. Transactions are made of operations that are atomic in nature, but according to the ACID properties of a DBMS, the atomicity of each transaction as a whole must be maintained: either all of its operations are executed or none. When a DBMS recovers from a crash, it should maintain the following:

- It should check the states of all the transactions that were being executed. A transaction may have been in the middle of some operation; the DBMS must ensure the atomicity of the transaction in this case.
- It should check whether each transaction can be completed now or needs to be rolled back.
- No transaction should be allowed to leave the DBMS in an inconsistent state.
Two types of techniques can help a DBMS recover while maintaining the atomicity of transactions:

- Maintaining a log of each transaction and writing it onto stable storage before actually modifying the database.
- Shadow paging, where the changes are made in volatile memory and the actual database is updated later.

Log-based Recovery

The log is a sequence of records that maintains a record of the actions performed by a transaction. It is important that the log records are written before the actual modification and stored on a stable, failsafe storage medium. Log-based recovery works as follows:

- The log file is kept on a stable storage medium.
- When a transaction enters the system and starts execution, it writes a log record about it: <Tn, Start>
- When the transaction modifies an item X, it writes: <Tn, X, V1, V2>, which reads as "Tn has changed the value of X from V1 to V2".
- When the transaction finishes, it logs: <Tn, commit>

The database can be modified using two approaches:

Deferred database modification − All logs are written to stable storage, and the database is updated only when the transaction commits.

Immediate database modification − Each log record is followed by the actual database modification; that is, the database is modified immediately after every operation.

Recovery with Concurrent Transactions

When more than one transaction is executed in parallel, their logs are interleaved. At recovery time, it would be hard for the recovery system to backtrack through all the logs and then start recovering. To ease this situation, most modern DBMSs use the concept of checkpoints.

Checkpoint

Keeping and maintaining logs in real time in a real environment may fill all the memory available in the system, and over time the log file may grow too big to be handled at all. A checkpoint is a mechanism by which all previous logs are removed from the system and stored permanently on disk. The checkpoint declares a point before which the DBMS was in a consistent state and all transactions were committed.

Recovery

When a system with concurrent transactions crashes and recovers, it behaves in the following manner:

- The recovery system reads the logs backwards from the end to the last checkpoint, maintaining two lists: an undo-list and a redo-list.
- If the recovery system sees a log with <Tn, Start> and <Tn, Commit>, or just <Tn, Commit>, it puts the transaction in the redo-list.
- If the recovery system sees a log with <Tn, Start> but no commit or abort record, it puts the transaction in the undo-list.

All the transactions in the undo-list are then undone and their old values restored; all the transactions in the redo-list are redone from their log records. A rough sketch of this classification follows.
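As an illustration of the backward scan described above, the following sketch classifies transactions into redo and undo lists from a list of log records. The record format mirrors the <Tn, ...> notation of this section; it is a toy model, not a real log manager.

    def classify(log):
        """log holds records such as ("T1", "start"), ("T1", "X", 5, 7), ("T1", "commit")."""
        started, committed = set(), set()
        for rec in log:
            if rec[1] == "start":
                started.add(rec[0])
            elif rec[1] == "commit":
                committed.add(rec[0])
        redo = committed               # has both <Tn, Start> and <Tn, Commit>
        undo = started - committed     # has <Tn, Start> but no commit or abort
        return redo, undo

    log = [("T1", "start"), ("T1", "X", 5, 7), ("T1", "commit"),
           ("T2", "start"), ("T2", "Y", 1, 2)]
    redo, undo = classify(log)
    print(redo, undo)   # {'T1'} {'T2'}: replay T1's writes, restore Y to 1 for T2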


DBMS – Indexing

We know that data is stored in the form of records. Every record has a key field, which helps it to be identified uniquely. Indexing is a data structure technique for efficiently retrieving records from database files based on the attributes on which the indexing has been done. Indexing in database systems is similar to the index we see in books.

Indexing is defined based on its indexing attributes and can be of the following types:

Primary index − A primary index is defined on an ordered data file that is ordered on a key field, generally the primary key of the relation.

Secondary index − A secondary index may be generated from a field that is a candidate key with a unique value in every record, or from a non-key field with duplicate values.

Clustering index − A clustering index is defined on an ordered data file that is ordered on a non-key field.

Ordered indexing is of two types:

Dense Index

In a dense index, there is an index record for every search-key value in the database. This makes searching faster but requires more space to store the index records themselves. An index record contains a search-key value and a pointer to the actual record on the disk.

Sparse Index

In a sparse index, index records are not created for every search key. An index record here contains a search key and a pointer to data on the disk. To search for a record, we first follow an index record to reach a location in the actual data; if the data we are looking for is not where the index leads us, the system performs a sequential search from there until the desired data is found.

Multilevel Index

Index records comprise search-key values and data pointers. A multilevel index is stored on the disk along with the actual database files. As the size of the database grows, so does the size of the indices. There is a strong need to keep index records in main memory so as to speed up search operations, but if a single-level index is used, a large index cannot be kept in memory, which leads to multiple disk accesses. A multilevel index breaks the index down into several smaller indices so that the outermost level becomes small enough to be saved in a single disk block, which can easily be accommodated in main memory.
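Returning to the sparse index described above, the lookup is a short jump through the index followed by a sequential scan. The sketch below uses an in-memory list as a stand-in for a sorted data file; the layout and names are made up for illustration.

    import bisect

    # Sorted "data file": (search key, record) pairs.
    data = [(5, "rec5"), (10, "rec10"), (15, "rec15"), (20, "rec20"), (25, "rec25")]

    # Sparse index: one entry per block (here, every second record).
    sparse = [(5, 0), (15, 2), (25, 4)]          # (search key, position in data file)

    def sparse_lookup(key):
        keys = [k for k, _ in sparse]
        i = bisect.bisect_right(keys, key) - 1   # largest index key <= search key
        if i < 0:
            return None                          # key is below every indexed key
        pos = sparse[i][1]
        while pos < len(data) and data[pos][0] <= key:   # sequential scan from there
            if data[pos][0] == key:
                return data[pos][1]
            pos += 1
        return None                              # scan passed the key: record absent

    print(sparse_lookup(20))   # rec20: the index points at (15, 2), a short scan finds it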
B+ Tree

A B+ tree is a balanced search tree that follows a multilevel index format; unlike a binary search tree, each node may have many children. The leaf nodes of a B+ tree hold the actual data pointers. A B+ tree ensures that all leaf nodes remain at the same height, and the leaf nodes are linked together in a linked list; therefore, a B+ tree supports both random access and sequential access.

Structure of B+ Tree

Every leaf node is at the same distance from the root node. A B+ tree is of order n, where n is fixed for the tree.

Internal nodes − Internal (non-leaf) nodes contain at least ⌈n/2⌉ pointers, except the root node, and at most n pointers.

Leaf nodes − Leaf nodes contain at least ⌈n/2⌉ record pointers and ⌈n/2⌉ key values, and at most n record pointers and n key values. Every leaf node contains one block pointer P pointing to the next leaf node, forming a linked list.

B+ Tree Insertion

B+ trees are filled from the bottom, and each entry is inserted at a leaf node.

If a leaf node overflows:
- Split the node into two parts at i = ⌊(m+1)/2⌋.
- The first i entries are stored in one node; the rest of the entries (from i+1 onwards) are moved to a new node.
- The ith key is duplicated at the parent of the leaf.

If a non-leaf node overflows:
- Split the node into two parts at i = ⌈(m+1)/2⌉.
- Entries up to i are kept in one node; the rest are moved to a new node.

B+ Tree Deletion

B+ tree entries are deleted at the leaf nodes. The target entry is searched for and deleted. If the key also appears in an internal node, it is deleted there and replaced with the entry from the left position. After deletion, the node is tested for underflow:

- If underflow occurs, redistribute the entries from the node to its left.
- If redistribution from the left is not possible, redistribute from the node to its right.
- If redistribution is possible from neither side, merge the node with its left and right siblings.
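The split points quoted above are easy to check with a small helper. This shows only the partition arithmetic, not a full B+ tree implementation; the plain lists stand in for node contents.

    import math

    def leaf_split(entries):
        """An overflowing leaf of order m holds m+1 entries; split at i = floor((m+1)/2).
        The first i entries stay, the rest move, and the ith key is copied to the parent."""
        m = len(entries) - 1
        i = (m + 1) // 2
        left, right = entries[:i], entries[i:]
        return left, right, left[-1]       # left[-1] is the ith key, duplicated upward

    def internal_split(entries):
        """An overflowing internal node splits at i = ceil((m+1)/2);
        entries up to i are kept, the rest move to a new node."""
        m = len(entries) - 1
        i = math.ceil((m + 1) / 2)
        return entries[:i], entries[i:]

    print(leaf_split([1, 2, 3, 4, 5]))       # ([1, 2], [3, 4, 5], 2) for m = 4
    print(internal_split([1, 2, 3, 4, 5]))   # ([1, 2, 3], [4, 5])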

DBMS – Hashing

For a huge database structure, it can be next to impossible to search through all the index values at every level and then reach the destination data block to retrieve the desired data. Hashing is an effective technique for calculating the direct location of a data record on the disk without using an index structure. Hashing uses hash functions with search keys as parameters to generate the address of a data record.

Hash Organization

Bucket − A hash file stores data in bucket format. A bucket is considered a unit of storage; it typically stores one complete disk block, which in turn can store one or more records.

Hash Function − A hash function h is a mapping function that maps the set of all search keys K to the addresses where the actual records are placed; that is, a function from search keys to bucket addresses.

Static Hashing

In static hashing, when a search-key value is provided, the hash function always computes the same address. For example, if a mod-4 hash function is used, it generates only four possible values, and the output address is always the same for a given key. The number of buckets never changes.

Operation

Insertion − When a record is to be inserted using static hashing, the hash function h computes the bucket address h(K) for search key K, and the record is stored in that bucket.

Search − When a record needs to be retrieved, the same hash function is used to compute the address of the bucket where the data is stored.

Delete − This is simply a search followed by a deletion operation.

Bucket Overflow

The condition of bucket overflow is known as a collision. This is a fatal state for any static hash function. In this case, overflow chaining can be used:

Overflow Chaining − When a bucket is full, a new bucket is allocated for the same hash result and is linked after the previous one. This mechanism is called closed hashing.

Linear Probing − When the hash function generates an address at which data is already stored, the next free bucket is allocated. This mechanism is called open hashing.

Dynamic Hashing

The problem with static hashing is that it does not expand or shrink dynamically as the size of the database grows or shrinks. Dynamic hashing provides a mechanism in which data buckets are added and removed dynamically, on demand. Dynamic hashing is also known as extendible hashing. The hash function in dynamic hashing is made to produce a large number of values, of which only a few are used initially.

Organization

The prefix of the entire hash value is taken as the hash index; only a portion of the hash value is used for computing bucket addresses. Every hash index has a depth value signifying how many bits are used for computing the hash function. n bits can address 2^n buckets. When all these bits are consumed, that is, when all the buckets are full, the depth value is incremented and the number of buckets is doubled.

Operation

Querying − Look at the depth value of the hash index and use those bits to compute the bucket address.

Update − Perform a query as above and update the data.

Deletion − Perform a query to locate the desired data and delete it.

Insertion − Compute the address of the bucket. If the bucket is already full, add more buckets, add additional bits to the hash value, and recompute the hash function; otherwise, add the data to the bucket. If all the buckets are full, apply the remedies of static hashing.
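Returning to static hashing, below is a minimal sketch of a mod-4 hash file with overflow chaining. The bucket size, record format, and names are illustrative assumptions.

    BUCKETS = 4        # static: the number of buckets never changes
    BUCKET_SIZE = 2    # records per bucket (think of one disk block)
    table = [[[]] for _ in range(BUCKETS)]   # each bucket is a chain of blocks

    def h(key):
        return key % BUCKETS                 # the same key always yields the same address

    def insert(key, record):
        chain = table[h(key)]
        for block in chain:
            if len(block) < BUCKET_SIZE:
                block.append((key, record))
                return
        chain.append([(key, record)])        # overflow: link a new block after the bucket

    def search(key):
        for block in table[h(key)]:
            for k, rec in block:
                if k == key:
                    return rec
        return None

    for k in (1, 5, 9):                      # 1, 5 and 9 all hash to bucket 1
        insert(k, "rec%d" % k)
    print(search(9))                         # rec9, found in the overflow block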
Hashing is not favorable when the data is organized in some order and queries require a range of data; hashing performs best when the data is discrete and random. Individual hash operations run in constant time, but unlike an ordered index, a hash structure cannot efficiently answer range queries.

DBMS – Relational Algebra

Relational database systems are expected to be equipped with a query language that assists users in querying database instances. There are two kinds of query languages: relational algebra and relational calculus.

Relational Algebra

Relational algebra is a procedural query language that takes instances of relations as input and yields instances of relations as output. It uses operators to perform queries. An operator can be either unary or binary; operators accept relations as input and yield relations as output. Relational algebra can be applied recursively to a relation, and intermediate results are themselves relations.

The fundamental operations of relational algebra are as follows:

- Select
- Project
- Union
- Set difference
- Cartesian product
- Rename

We will discuss these operations in the following sections.

Select Operation (σ)

Selects the tuples that satisfy the given predicate from a relation.

Notation − σp(r), where σ stands for the selection predicate and r stands for the relation. p is a propositional logic formula that may use connectives such as and, or, and not, and relational operators such as =, ≠, ≥, <, >, ≤.

For example:

σsubject = "database"(Books) selects the tuples from Books where the subject is "database".

σsubject = "database" and price = "450"(Books) selects the tuples from Books where the subject is "database" and the price is 450.

σsubject = "database" and price = "450" or year > "2010"(Books) selects the tuples from Books where the subject is "database" and the price is 450, or those books published after 2010.

Project Operation (∏)

Projects the column(s) that satisfy a given predicate.

Notation − ∏A1, A2, ..., An (r), where A1, A2, ..., An are attribute names of relation r. Duplicate rows are automatically eliminated, since a relation is a set.

For example, ∏subject, author (Books) selects and projects the columns named subject and author from the relation Books.

Union Operation (∪)

Performs the binary union of two relations, defined as r ∪ s = { t | t ∈ r or t ∈ s }.

Notation − r ∪ s, where r and s are either database relations or temporary relation result sets. For a union operation to be valid, the following conditions must hold:

- r and s must have the same number of attributes.
- The attribute domains must be compatible.

Duplicate tuples are automatically eliminated.

∏author(Books) ∪ ∏author(Articles) projects the names of the authors who have written a book, an article, or both.

Set Difference (−)

The result of the set difference operation is the set of tuples that are present in one relation but not in the other.

Notation − r − s, which finds all the tuples present in r but not in s.

∏author(Books) − ∏author(Articles) gives the names of authors who have written books but not articles.

Cartesian Product (Χ)

Combines the information of two different relations into one.

Notation − r Χ s, defined as r Χ s = { q t | q ∈ r and t ∈ s }.

σauthor = "tutorialspoint"(Books Χ Articles) yields a relation showing all the books and articles written by tutorialspoint.

Rename Operation (ρ)

The results of relational algebra expressions are relations, but without a name; the rename operation allows us to name the output relation. The rename operation is denoted by the small Greek letter rho (ρ).

Notation − ρx(E), where the result of expression E is saved with the name x.
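Because relations are sets of tuples, the fundamental operations can be sketched with ordinary Python sets. The tiny Books and Articles relations below are made-up examples in the spirit of the ones above; this is a pedagogical model, not how a real engine evaluates queries.

    # A relation is a set of tuples; each tuple is a frozen attribute -> value mapping.
    Books = {
        frozenset({"author": "tutorialspoint", "subject": "database"}.items()),
        frozenset({"author": "alice", "subject": "networks"}.items()),
    }
    Articles = {
        frozenset({"author": "alice", "subject": "database"}.items()),
    }

    def select(r, p):          # sigma_p(r): keep tuples satisfying the predicate
        return {t for t in r if p(dict(t))}

    def project(r, *attrs):    # pi_{A1..An}(r): duplicates vanish, since relations are sets
        return {frozenset((a, dict(t)[a]) for a in attrs) for t in r}

    db_books = select(Books, lambda t: t["subject"] == "database")
    authors = project(Books, "author") | project(Articles, "author")     # union
    only_books = project(Books, "author") - project(Articles, "author")  # set difference
    print(sorted(dict(t)["author"] for t in authors))   # ['alice', 'tutorialspoint']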
Additional operations include:

- Set intersection
- Assignment
- Natural join

Relational Calculus

In contrast to relational algebra, relational calculus is a non-procedural query language: it states what to retrieve but never explains how to retrieve it. Relational calculus exists in two forms.

Tuple Relational Calculus (TRC)

The filtering variable ranges over tuples.

Notation − { T | Condition }, which returns all tuples T that satisfy the condition.

For example, { T.name | Author(T) AND T.article = "database" } returns the names of authors who have written an article on "database".

TRC can be quantified using the existential (∃) and universal (∀) quantifiers. For example, { R | ∃T ∈ Authors (T.article = "database" AND R.name = T.name) } yields the same result as the previous query.

Domain Relational Calculus (DRC)

In DRC, the filtering variable ranges over the domains of attributes instead of entire tuple values (as in TRC, mentioned above).

Notation − { a1, a2, ..., an | P(a1, a2, ..., an) }, where a1, a2, ..., an are attributes and P stands for a formula built from inner attributes.

For example, { <article, page, subject> | <article, page, subject> ∈ TutorialsPoint ∧ subject = "database" } yields the article, page, and subject from the relation TutorialsPoint where the subject is "database".

Just like TRC, DRC can also be written using existential and universal quantifiers, and it also involves relational operators. The expressive power of tuple relational calculus and domain relational calculus is equivalent to that of relational algebra.

DBMS – Data Backup

Loss of Volatile Storage

Volatile storage such as RAM holds the active logs, disk buffers, and related data, as well as all the transactions currently being executed. What happens if such volatile storage crashes abruptly? It takes away all the logs and active copies of the database, and it makes recovery almost impossible, as everything required to recover the data is lost. The following techniques may be adopted in case of loss of volatile storage:

- Checkpoints can be taken at multiple stages so as to save the contents of the database periodically.
- The state of the active database in volatile memory can be periodically dumped onto stable storage, which may also contain logs, active transactions, and buffer blocks.
- A <dump> record can be marked in the log file whenever the database contents are dumped from volatile memory to stable storage.

Recovery

When the system recovers from a failure, it can restore the latest dump. It can maintain a redo-list and an undo-list as at a checkpoint, and it can recover the system by consulting the undo-redo lists to restore the state of all transactions up to the last checkpoint.

Database Backup and Recovery from Catastrophic Failure

A catastrophic failure is one where a stable, secondary storage device becomes corrupt; with the storage device, all the valuable data stored on it is lost. There are two different strategies to recover data from such a catastrophic failure:

Remote backup − A backup copy of the database is stored at a remote location, from where it can be restored in case of a catastrophe. Alternatively, database backups can be taken on magnetic tapes and stored at a safer place; such a backup can later be loaded onto a freshly installed database to bring it to the point of the backup.

Large databases are too bulky to be backed up frequently, but a database can be restored using just its logs. So all that needs to be done is to back up the logs at frequent intervals: the database itself can be backed up once a week, while the logs, being very small, can be backed up every day or as frequently as possible.

Remote Backup

Remote backup provides a sense of security in case the primary location of the database is destroyed. Remote backup can be offline or real-time (online). If it is offline, it is maintained manually. Online backup systems are real-time and are lifesavers for database administrators and investors: every bit of real-time data is backed up simultaneously at two distant places, one directly connected to the system and the other kept at a remote location. As soon as the primary database storage fails, the backup system senses the failure and switches the user system over to the remote storage, sometimes so quickly that users do not even notice the failure.

DBMS – File Structure

Related data and information are stored collectively in file formats. A file is a sequence of records stored in binary format. A disk drive is formatted into several blocks that can store records, and file records are mapped onto those disk blocks.

File Organization

File organization defines how file records are mapped onto disk blocks. There are four types of file organization:

Heap File Organization − When a file is created using heap file organization, the operating system allocates a memory area to that file without any further accounting details. File records can be placed anywhere in that area; it is the responsibility of the software to manage the records. A heap file does not support any ordering, sequencing, or indexing on its own.

Sequential File Organization − Every file record contains a data field (attribute) that uniquely identifies the record. In sequential file organization, records are placed in the file in some sequential order based on that unique key field, or search key. In practice, it is not possible to store all the records physically in sequence.

Hash File Organization − Hash file organization applies a hash function to some fields of the records; the output of the hash function determines the disk block where the records are placed.

Clustered File Organization − Clustered file organization is not considered good for large databases. In this mechanism, related records from one or more relations are kept in the same disk block; that is, the ordering of records is not based on a primary key or search key.

File Operations

Operations on database files can be broadly classified into two categories: update operations and retrieval operations. Update operations change data values by insertion, deletion, or update; retrieval operations do not alter the data but retrieve it, after optional conditional filtering. In both types of operation, selection plays a significant role. Besides the creation and deletion of a file, several other operations can be performed on files:

Open − A file can be opened in one of two modes, read mode or write mode. In read mode, the operating system does not allow anyone to alter the data; in other words, the data is read-only, and files opened in read mode can be shared among several entities. Write mode allows data modification; files opened in write mode can be read but cannot be shared.

Locate − Every file has a file pointer, which indicates the current position at which data is to be read or written. This pointer can be adjusted accordingly; using a find (seek) operation, it can be moved forward or backward.

Read − By default, when a file is opened in read mode, the file pointer points to the beginning of the file, though the user can tell the operating system where to position the pointer when the file is opened. The data immediately after the file pointer is read.

Write − The user can open a file in write mode, which enables them to edit its contents by deletion, insertion, or modification. The file pointer can be positioned at opening time or moved dynamically, if the operating system allows it.

Close − This is the most important operation from the operating system's point of view.
When a request to close a file is generated, the operating system removes all the locks (if the file was in shared mode), saves the data (if altered) to the secondary storage medium, and releases all the buffers and file handles associated with the file.

The organization of data inside a file plays a major role here. The process of locating the file pointer at a desired record inside a file varies based on whether the records are arranged sequentially or clustered.
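The open, locate, read, and close operations above map closely onto ordinary file APIs. Below is a small illustration using Python's file objects; the file name and record size are made up.

    RECORD_SIZE = 16                                 # fixed-length records, for illustration

    with open("data.bin", "wb") as f:                # write mode: create three records
        for i in range(3):
            f.write(("record-%d" % i).ljust(RECORD_SIZE).encode())

    with open("data.bin", "rb") as f:                # read mode: pointer starts at offset 0
        f.seek(2 * RECORD_SIZE)                      # locate: move the file pointer to record 2
        print(f.read(RECORD_SIZE).decode().strip())  # read the data right after the pointer
    # close: leaving the with-block releases the buffers and the file handle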

DBMS – Storage System

Databases are stored in file formats, which contain records. At the physical level, the actual data is stored in electromagnetic format on some device. Storage devices can be broadly categorized into three types:

Primary Storage − Memory storage that is directly accessible to the CPU comes under this category. The CPU's internal memory (registers), fast memory (cache), and main memory (RAM) are directly accessible to the CPU, as they are all placed on the motherboard or CPU chipset. This storage is typically very small, ultra-fast, and volatile; it requires a continuous power supply to maintain its state, and in case of a power failure all its data is lost.

Secondary Storage − Secondary storage devices are used to store data for future use or as backup. Secondary storage includes memory devices that are not part of the CPU chipset or motherboard, for example magnetic disks, optical disks (DVD, CD, etc.), hard disks, flash drives, and magnetic tapes.

Tertiary Storage − Tertiary storage is used to store huge volumes of data. Since such devices are external to the computer system, they are the slowest in speed. They are mostly used to back up an entire system; optical disks and magnetic tapes are widely used as tertiary storage.

Memory Hierarchy

A computer system has a well-defined hierarchy of memory. The CPU has direct access to its main memory as well as to its built-in registers. Main memory is much slower than the CPU; to minimize this speed mismatch, cache memory is introduced. Cache memory provides the fastest access time and contains the data most frequently accessed by the CPU. The memory with the fastest access is also the costliest; larger storage devices offer slower speed at lower cost, but they can store huge volumes of data compared with CPU registers or cache memory.

Magnetic Disks

Hard disk drives are the most common secondary storage devices in present-day computer systems. They are called magnetic disks because they use magnetization to store information. Hard disks consist of metal disks coated with magnetizable material, placed vertically on a spindle. A read/write head moves between the disks and is used to magnetize or de-magnetize the spot under it. A magnetized spot can be recognized as a 0 (zero) or a 1 (one).

Hard disks are formatted in a well-defined order to store data efficiently. A hard disk platter has many concentric circles on it, called tracks, and every track is further divided into sectors. A sector on a hard disk typically stores 512 bytes of data.

Redundant Array of Independent Disks

RAID, or Redundant Array of Independent Disks, is a technology for connecting multiple secondary storage devices and using them as a single storage medium. RAID consists of an array of disks connected together to achieve different goals; RAID levels define how the disk array is used.

RAID 0 − At this level, a striped array of disks is implemented. The data is broken down into blocks and the blocks are distributed among the disks. Each disk receives a block of data to write or read in parallel, which enhances the speed and performance of the storage device. There is no parity and no backup at level 0.

RAID 1 − RAID 1 uses mirroring techniques. When data is sent to the RAID controller, it sends a copy of the data to all the disks in the array.
RAID level 1 is also called mirroring and provides 100% redundancy in case of a failure.

RAID 2 − RAID 2 records an Error Correction Code (a Hamming code) for its data, striped across different disks. As at level 0, each data bit in a word is recorded on a separate disk, and the ECC codes of the data words are stored on a different set of disks. Because of its complex structure and high cost, RAID 2 is not commercially available.

RAID 3 − RAID 3 stripes the data across multiple disks, and the parity bit generated for each data word is stored on a separate disk. This technique enables it to survive single disk failures.

RAID 4 − At this level, an entire block of data is written onto the data disks, and then the parity is generated and stored on a different disk. Note that level 3 uses byte-level striping, whereas level 4 uses block-level striping. Both level 3 and level 4 require at least three disks to implement RAID.

RAID 5 − RAID 5 writes whole data blocks onto different disks, but the parity generated for each data block stripe is distributed among all the data disks rather than stored on a dedicated parity disk.

RAID 6 − RAID 6 is an extension of level 5. At this level, two independent parities are generated and stored in a distributed fashion among multiple disks. The two parities provide additional fault tolerance; this level requires at least four disk drives to implement RAID.
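The parity-based levels (3 through 6) rest on the fact that XOR-ing the surviving blocks regenerates a lost one. A toy demonstration with byte-sized "blocks":

    from functools import reduce

    data_blocks = [0b1011, 0b0110, 0b1100]             # blocks striped across three data disks
    parity = reduce(lambda a, b: a ^ b, data_blocks)   # kept on the parity disk

    # Suppose the disk holding data_blocks[1] fails:
    surviving = [data_blocks[0], data_blocks[2], parity]
    recovered = reduce(lambda a, b: a ^ b, surviving)  # XOR of everything that survived
    assert recovered == data_blocks[1]                 # the lost block is regenerated
    print(bin(recovered))                              # 0b110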

DBMS – Architecture

The design of a DBMS depends on its architecture, which can be centralized, decentralized, or hierarchical. The architecture of a DBMS can be seen as either single-tier or multi-tier. An n-tier architecture divides the whole system into n related but independent modules, each of which can be independently modified, altered, changed, or replaced.

In a 1-tier architecture, the DBMS is the only entity: the user sits directly on the DBMS and uses it, and any changes made here are done directly on the DBMS itself. This setup does not provide handy tools for end users; database designers and programmers normally prefer single-tier architecture.

If the architecture of the DBMS is 2-tier, then there must be an application through which the DBMS can be accessed. Programmers use a 2-tier architecture when they access the DBMS by means of an application; here the application tier is entirely independent of the database in terms of operation, design, and programming.

3-tier Architecture

A 3-tier architecture separates its tiers from each other based on the complexity of the users and how they use the data present in the database. It is the most widely used architecture for designing a DBMS.

Database (Data) Tier − At this tier, the database resides along with its query processing languages. The relations that define the data, together with their constraints, also live at this level.

Application (Middle) Tier − At this tier reside the application server and the programs that access the database. For a user, this application tier presents an abstracted view of the database: end users are unaware of any existence of the database beyond the application, and at the other end, the database tier is not aware of any user beyond the application tier. Hence, the application layer sits in the middle and acts as a mediator between the end user and the database.

User (Presentation) Tier − End users operate at this tier and know nothing about any existence of the database beyond this layer. At this layer, multiple views of the database can be provided by the application; all views are generated by applications that reside in the application tier.

A multi-tier database architecture is highly modifiable, as almost all of its components are independent and can be changed independently.