Compiler – Intermediate Code Generation

If source code can be translated directly into its target machine code, why do we need to translate it first into an intermediate code that is then translated into the target code? Let us see the reasons why we need an intermediate code.

If a compiler translates the source language to its target machine language without the option of generating intermediate code, then for each new machine, a full native compiler is required. Intermediate code eliminates the need for a new full compiler for every unique machine by keeping the analysis portion the same for all compilers; only the second part of the compiler, synthesis, is changed according to the target machine. It also becomes easier to improve code performance by applying code optimization techniques to the intermediate code.

Intermediate Representation

Intermediate codes can be represented in a variety of ways, and each has its own benefits.

High Level IR – High-level intermediate code representation is very close to the source language itself. It can be easily generated from the source code, and we can easily apply code modifications to enhance performance. But for target machine optimization, it is less preferred.

Low Level IR – This one is close to the target machine, which makes it suitable for register and memory allocation, instruction set selection, etc. It is good for machine-dependent optimizations.

Intermediate code can be either language specific (e.g., bytecode for Java) or language independent (three-address code).

Three-Address Code

The intermediate code generator receives input from its predecessor phase, the semantic analyzer, in the form of an annotated syntax tree. That syntax tree can then be converted into a linear representation, e.g., postfix notation. Intermediate code tends to be machine-independent; therefore, the code generator assumes an unlimited number of storage locations (registers) when generating code. For example:

a = b + c * d;

The intermediate code generator will try to divide this expression into sub-expressions and then generate the corresponding code:

r1 = c * d;
r2 = b + r1;
a  = r2

Here r1 and r2 are used as registers in the target program. A three-address code has at most three address locations to calculate the expression. A three-address code can be represented in two forms: quadruples and triples.

Quadruples

Each instruction in quadruples presentation is divided into four fields: op, arg1, arg2, and result. The above example is represented below in quadruples format:

Op    arg1    arg2    result
*     c       d       r1
+     b       r1      r2
=     r2              a

Triples

Each instruction in triples presentation has three fields: op, arg1, and arg2. The results of the respective sub-expressions are denoted by the position of the expression. Triples are similar to DAGs and syntax trees; they are equivalent to a DAG while representing expressions.

Op    arg1    arg2
*     c       d
+     b       (0)
=     a       (1)

Triples face the problem of code immovability during optimization: because results are positional, changing the order or position of an expression may cause problems.

Indirect Triples

This representation is an enhancement over the triples representation. It uses pointers instead of positions to store results. This enables the optimizers to freely re-position a sub-expression to produce optimized code.
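To make the quadruple format concrete, here is a minimal C sketch that emits the three quadruples above for a = b + c * d. The Quad structure and the emit()/newtemp() helpers are illustrative names for this sketch, not part of any standard compiler API:

#include <stdio.h>

typedef struct {
    char op;
    const char *arg1;
    const char *arg2;
    const char *result;
} Quad;

static Quad code[16];
static int ncode = 0;
static int ntemp = 0;

static const char *newtemp(void) {
    static char names[16][8];
    sprintf(names[ntemp], "r%d", ntemp + 1);   /* r1, r2, ... */
    return names[ntemp++];
}

static const char *emit(char op, const char *a1, const char *a2,
                        const char *res) {
    Quad q = { op, a1, a2, res };
    code[ncode++] = q;
    return res;
}

int main(void) {
    /* a = b + c * d, decomposed into sub-expressions */
    const char *r1 = emit('*', "c", "d", newtemp());  /* r1 = c * d */
    const char *r2 = emit('+', "b", r1, newtemp());   /* r2 = b + r1 */
    emit('=', r2, "", "a");                           /* a  = r2    */

    for (int i = 0; i < ncode; i++)
        printf("%c\t%s\t%s\t%s\n", code[i].op, code[i].arg1,
               code[i].arg2, code[i].result);
    return 0;
}

Running it prints the same quadruple table shown above, one instruction per line.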
Declarations

A variable or procedure has to be declared before it can be used. Declaration involves allocation of space in memory and entry of the type and name in the symbol table. A program may be coded and designed keeping the target machine structure in mind, but it may not always be possible to accurately convert a source code to its target language.

Taking the whole program as a collection of procedures and sub-procedures, it becomes possible to declare all the names local to a procedure. Memory allocation is done in a consecutive manner, and names are allocated memory in the sequence they are declared in the program. We use an offset variable, initialized to zero {offset = 0}, that denotes the base address. The source programming language and the target machine architecture may vary in the way names are stored, so relative addressing is used. While the first name is allocated memory starting from memory location 0 {offset = 0}, the next name declared should be allocated memory next to the first one.

Example: We take the example of the C programming language, where an integer variable is assigned 2 bytes of memory and a float variable is assigned 4 bytes of memory.

int a;
float b;

Allocation process:

{offset = 0}

int a;
id.type = int
id.width = 2
offset = offset + id.width
{offset = 2}

float b;
id.type = float
id.width = 4
offset = offset + id.width
{offset = 6}

To enter this detail into a symbol table, a procedure enter can be used. This method may have the following structure:

enter(name, type, offset)

This procedure should create an entry in the symbol table for variable name, with its type set to type and relative address offset in its data area.
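As a rough illustration of this allocation process, the following C sketch implements an enter()-style procedure that records each name with its type and relative address, advancing the offset by the width of the declared type. The widths (int = 2, float = 4) follow the example above; the table layout and field sizes are assumptions made for the sketch:

#include <stdio.h>
#include <string.h>

struct entry { char name[16]; char type[8]; int offset; };

struct entry table[32];
int nentries = 0;
int offset = 0;                        /* relative address in the data area */

void enter(const char *name, const char *type, int width) {
    strcpy(table[nentries].name, name);
    strcpy(table[nentries].type, type);
    table[nentries].offset = offset;   /* record the name's relative address */
    nentries++;
    offset += width;                   /* the next name goes right after it */
}

int main(void) {
    enter("a", "int", 2);              /* int a;   -> offset 0 */
    enter("b", "float", 4);            /* float b; -> offset 2 */
    for (int i = 0; i < nentries; i++)
        printf("%-8s %-8s %d\n",
               table[i].name, table[i].type, table[i].offset);
    printf("next free offset = %d\n", offset);   /* prints 6 */
    return 0;
}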
Compiler Design – Error Recovery

A parser should be able to detect and report any error in the program. It is expected that when an error is encountered, the parser should be able to handle it and carry on parsing the rest of the input. The parser is mostly expected to check for errors, but errors may be encountered at various stages of the compilation process. A program may have the following kinds of errors at various stages:

Lexical : name of some identifier typed incorrectly
Syntactical : missing semicolon or unbalanced parenthesis
Semantical : incompatible value assignment
Logical : code not reachable, infinite loop

There are four common error-recovery strategies that can be implemented in the parser to deal with errors in the code.

Panic mode

When a parser encounters an error anywhere in a statement, it ignores the rest of the statement by not processing input from the erroneous symbol up to a delimiter, such as a semicolon. This is the easiest way of error recovery, and it also prevents the parser from getting into infinite loops.

Statement mode

When a parser encounters an error, it tries to take corrective measures so that the rest of the statement allows the parser to parse ahead, for example, inserting a missing semicolon or replacing a comma with a semicolon. Parser designers have to be careful here, because one wrong correction may lead to an infinite loop.

Error productions

Some common errors that may occur in the code are known to the compiler designers. The designers can create an augmented grammar to be used, with productions that generate the erroneous constructs when these errors are encountered.

Global correction

The parser considers the program in hand as a whole, tries to figure out what the program is intended to do, and tries to find the closest error-free match for it. When an erroneous input (statement) X is fed, it creates a parse tree for some closest error-free statement Y. This may allow the parser to make minimal changes to the source code, but due to the complexity (time and space) of this strategy, it has not been implemented in practice yet.

Abstract Syntax Trees

Parse tree representations are not easy for a compiler to work with, as they contain more detail than is actually needed. If we look closely at a typical parse tree, we find that most of the leaf nodes are single children of their parent nodes. This information can be eliminated before feeding the tree to the next phase. By hiding the extra information, we obtain an abstract syntax tree. ASTs are important data structures in a compiler, carrying the least unnecessary information. ASTs are more compact than a parse tree and can be easily used by a compiler.
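A minimal sketch of what such a compact AST node might look like in C; the node layout, kind constants, and the node() helper are illustrative assumptions for this sketch, not a prescribed design:

#include <stdio.h>
#include <stdlib.h>

enum { AST_NUM, AST_ADD, AST_MUL };

typedef struct Ast {
    int kind;                  /* AST_NUM, AST_ADD, or AST_MUL */
    int value;                 /* used only by AST_NUM leaves */
    struct Ast *left, *right;
} Ast;

Ast *node(int kind, int value, Ast *l, Ast *r) {
    Ast *n = malloc(sizeof *n);
    n->kind = kind; n->value = value; n->left = l; n->right = r;
    return n;
}

int main(void) {
    /* 2 + 3 * 4: one node per operator and operand,
       with no single-child chains as in a parse tree */
    Ast *t = node(AST_ADD, 0,
                  node(AST_NUM, 2, NULL, NULL),
                  node(AST_MUL, 0,
                       node(AST_NUM, 3, NULL, NULL),
                       node(AST_NUM, 4, NULL, NULL)));
    printf("root kind = %d\n", t->kind);
    return 0;
}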
Compiler Design – Semantic Analysis

We have learnt how a parser constructs parse trees in the syntax analysis phase. The plain parse-tree constructed in that phase is generally of no use for a compiler, as it does not carry any information about how to evaluate the tree. The productions of context-free grammar, which make up the rules of the language, do not accommodate how to interpret them.

For example:

E → E + T

The above CFG production has no semantic rule associated with it, and it cannot help in making any sense of the production.

Semantics

Semantics of a language provide meaning to its constructs, like tokens and syntax structure. Semantics help interpret symbols, their types, and their relations with each other. Semantic analysis judges whether the syntax structure constructed in the source program derives any meaning or not.

CFG + semantic rules = Syntax Directed Definitions

For example:

int a = "value";

should not issue an error in the lexical and syntax analysis phases, as it is lexically and structurally correct, but it should generate a semantic error as the type of the assignment differs. These rules are set by the grammar of the language and evaluated in semantic analysis. The following tasks should be performed in semantic analysis:

Scope resolution
Type checking
Array-bound checking

Semantic Errors

We mention here some of the semantic errors that the semantic analyzer is expected to recognize:

Type mismatch
Undeclared variable
Reserved identifier misuse
Multiple declaration of a variable in a scope
Accessing an out-of-scope variable
Actual and formal parameter mismatch

Attribute Grammar

Attribute grammar is a special form of context-free grammar where some additional information (attributes) is appended to one or more of its non-terminals in order to provide context-sensitive information. Each attribute has a well-defined domain of values, such as integer, float, character, string, and expressions.

Attribute grammar is a medium to provide semantics to a context-free grammar, and it can help specify the syntax and semantics of a programming language. Attribute grammar (when viewed as a parse-tree) can pass values or information among the nodes of a tree.

Example:

E → E + T { E.value = E.value + T.value }

The right part of the CFG contains the semantic rules that specify how the grammar should be interpreted. Here, the values of non-terminals E and T are added together and the result is copied to the non-terminal E.

Semantic attributes are assigned values from their domain at the time of parsing and evaluated at the time of assignment or in conditions. Based on the way attributes get their values, they can be broadly divided into two categories: synthesized attributes and inherited attributes.

Synthesized attributes

These attributes get values from the attribute values of their child nodes. To illustrate, assume the following production:

S → ABC

If S is taking values from its child nodes (A, B, C), then it is said to be a synthesized attribute, as the values of A, B, and C are synthesized into S. As in our previous example (E → E + T), the parent node E gets its value from its child nodes. Synthesized attributes never take values from their parent nodes or any sibling nodes.

Inherited attributes

In contrast to synthesized attributes, inherited attributes can take values from the parent and/or siblings. As in the following production,

S → ABC

A can get values from S, B, and C. B can take values from S, A, and C. Likewise, C can take values from S, A, and B.
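To make the synthesized case concrete, here is a small C sketch that computes a value attribute bottom-up over a tiny expression tree, mirroring the rule E.value = E.value + T.value. The Node layout and eval() function are illustrative assumptions for this sketch:

#include <stdio.h>

typedef struct Node {
    char op;                   /* '+' for interior nodes, 0 for leaves */
    int value;                 /* the synthesized attribute */
    struct Node *left, *right;
} Node;

int eval(Node *n) {
    if (n->op == '+')          /* parent value synthesized from children: */
        n->value = eval(n->left) + eval(n->right);
    return n->value;           /* leaves already carry their value */
}

int main(void) {
    Node two   = { 0, 2, NULL, NULL };
    Node three = { 0, 3, NULL, NULL };
    Node sum   = { '+', 0, &two, &three };
    printf("%d\n", eval(&sum));        /* prints 5 */
    return 0;
}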
Expansion : When a non-terminal is expanded to terminals as per a grammatical rule.
Reduction : When a terminal is reduced to its corresponding non-terminal according to grammar rules.

Syntax trees are parsed top-down and left to right. Whenever reduction occurs, we apply the corresponding semantic rules (actions). Semantic analysis uses Syntax Directed Translations to perform the above tasks. The semantic analyzer receives the AST (Abstract Syntax Tree) from its previous stage (syntax analysis) and attaches attribute information to it, producing an attributed AST. Attributes are two-tuple values, <attribute name, attribute value>.

For example:

int value = 5;
<type, "integer">
<presentvalue, "5">

For every production, we attach a semantic rule.

S-attributed SDT

If an SDT uses only synthesized attributes, it is called an S-attributed SDT. In S-attributed SDTs, the semantic actions are written after the production (at the end of the right-hand side). Attributes in S-attributed SDTs are evaluated during bottom-up parsing, as the values of the parent nodes depend upon the values of the child nodes.

L-attributed SDT

This form of SDT uses both synthesized and inherited attributes, with the restriction of not taking values from right siblings. In L-attributed SDTs, a non-terminal can get values from its parent, child, and sibling nodes. As in the following production,

S → ABC

S can take values from A, B, and C (synthesized). A can take values from S only. B can take values from S and A. C can get values from S, A, and B. No non-terminal can get values from the sibling to its right.

Attributes in L-attributed SDTs are evaluated in a depth-first, left-to-right parsing manner. We may conclude that if a definition is S-attributed, then it is also L-attributed, as the class of L-attributed definitions encloses the class of S-attributed definitions.
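A small C sketch of the classic inherited-attribute situation: in a declaration such as int a, b, c; the type computed for the type specifier flows down to every identifier in the list (L.inh = T.type). The function names here are illustrative assumptions:

#include <stdio.h>

/* Semantic action for each identifier: enter it with the inherited type. */
void id_entry(const char *name, const char *type) {
    printf("%s : %s\n", name, type);
}

/* L.inh = T.type : every identifier in the list inherits the declared type. */
void decl_list(const char *names[], int n, const char *inherited_type) {
    for (int i = 0; i < n; i++)
        id_entry(names[i], inherited_type);
}

int main(void) {
    const char *ids[] = { "a", "b", "c" };
    decl_list(ids, 3, "int");          /* D -> T L with T.type = "int" */
    return 0;
}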
Compiler Design – Symbol Table

The symbol table is an important data structure created and maintained by compilers in order to store information about the occurrence of various entities such as variable names, function names, objects, classes, interfaces, etc. The symbol table is used by both the analysis and the synthesis parts of a compiler.

A symbol table may serve the following purposes, depending upon the language in hand:

To store the names of all entities in a structured form in one place.
To verify if a variable has been declared.
To implement type checking, by verifying that assignments and expressions in the source code are semantically correct.
To determine the scope of a name (scope resolution).

A symbol table is simply a table, which can be either linear or a hash table. It maintains an entry for each name in the following format:

<symbol name, type, attribute>

For example, if a symbol table has to store information about the following variable declaration:

static int interest;

then it should store an entry such as:

<interest, int, static>

The attribute clause contains the entries related to the name.

Implementation

If a compiler is to handle a small amount of data, then the symbol table can be implemented as an unordered list, which is easy to code but suitable only for small tables. A symbol table can be implemented in one of the following ways:

Linear (sorted or unsorted) list
Binary Search Tree
Hash table

Among these, symbol tables are mostly implemented as hash tables, where the source code symbol itself is treated as a key for the hash function and the return value is the information about the symbol.

Operations

A symbol table, either linear or hash, should provide the following operations.

insert()

This operation is used more frequently by the analysis phase, i.e., the first half of the compiler, where tokens are identified and names are stored in the table. This operation is used to add information to the symbol table about unique names occurring in the source code. The format or structure in which the names are stored depends upon the compiler in hand.

An attribute for a symbol in the source code is the information associated with that symbol. This information contains the value, state, scope, and type of the symbol. The insert() function takes the symbol and its attributes as arguments and stores the information in the symbol table.

For example:

int a;

should be processed by the compiler as:

insert(a, int);

lookup()

The lookup() operation is used to search a name in the symbol table to determine:

if the symbol exists in the table.
if it is declared before being used.
if the name is used in the scope.
if the symbol is initialized.
if the symbol is declared multiple times.

The format of the lookup() function varies according to the programming language. The basic format should match the following:

lookup(symbol)

This method returns 0 (zero) if the symbol does not exist in the symbol table. If the symbol exists in the symbol table, it returns its attributes stored in the table.

Scope Management

A compiler maintains two types of symbol tables: a global symbol table, which can be accessed by all the procedures, and scope symbol tables, which are created for each scope in the program. To determine the scope of a name, symbol tables are arranged in a hierarchical structure, as shown in the example below:

. . .
int value=10;

void pro_one()
{
    int one_1;
    int one_2;

    {                      /* inner scope 1 */
        int one_3;
        int one_4;
    }

    int one_5;

    {                      /* inner scope 2 */
        int one_6;
        int one_7;
    }
}

void pro_two()
{
    int two_1;
    int two_2;

    {                      /* inner scope 3 */
        int two_3;
        int two_4;
    }

    int two_5;
}
. . .

The above program can be represented as a hierarchy of symbol tables. The global symbol table contains names for one global variable (int value) and two procedure names, which should be available to all the child tables below it. The names mentioned in the pro_one symbol table (and all its child tables) are not available to the pro_two symbol table and its child tables.

This symbol table hierarchy is stored in the semantic analyzer, and whenever a name needs to be searched, it is searched using the following algorithm:

first, the symbol is searched in the current scope, i.e., the current symbol table;
if the name is found, the search is complete; otherwise, it is searched in the parent symbol table, and so on, until either the name is found or the global symbol table has been searched.
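The lookup algorithm described above can be sketched in C as a walk from the current scope's table up through its parents. The Scope structure and the linear search inside each table are simplifying assumptions; a real compiler would typically hash within each scope:

#include <stdio.h>
#include <string.h>

typedef struct Scope {
    const char *names[8];
    int count;
    struct Scope *parent;     /* enclosing scope; NULL for the global table */
} Scope;

int lookup(Scope *s, const char *name) {
    for (; s != NULL; s = s->parent)    /* current scope first, then parents */
        for (int i = 0; i < s->count; i++)
            if (strcmp(s->names[i], name) == 0)
                return 1;               /* found */
    return 0;                           /* not visible in any enclosing scope */
}

int main(void) {
    Scope global  = { { "value", "pro_one", "pro_two" }, 3, NULL };
    Scope pro_one = { { "one_1", "one_2" }, 2, &global };
    printf("%d\n", lookup(&pro_one, "value"));  /* 1: found in global scope */
    printf("%d\n", lookup(&pro_one, "two_1"));  /* 0: local to pro_two */
    return 0;
}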
Compiler Design – Syntax Analysis

Syntax analysis or parsing is the second phase of a compiler. In this chapter, we shall learn the basic concepts used in the construction of a parser.

We have seen that a lexical analyzer can identify tokens with the help of regular expressions and pattern rules. But a lexical analyzer cannot check the syntax of a given sentence due to the limitations of regular expressions. Regular expressions cannot check balancing tokens, such as parentheses. Therefore, this phase uses context-free grammar (CFG), which is recognized by push-down automata.

CFG, on the other hand, is a superset of Regular Grammar. It implies that every Regular Grammar is also context-free, but there exist some languages that are beyond the scope of Regular Grammar. CFG is a helpful tool in describing the syntax of programming languages.

Context-Free Grammar

In this section, we will first see the definition of context-free grammar and introduce the terminologies used in parsing technology. A context-free grammar has four components:

A set of non-terminals (V). Non-terminals are syntactic variables that denote sets of strings. The non-terminals define sets of strings that help define the language generated by the grammar.

A set of tokens, known as terminal symbols (Σ). Terminals are the basic symbols from which strings are formed.

A set of productions (P). The productions of a grammar specify the manner in which the terminals and non-terminals can be combined to form strings. Each production consists of a non-terminal called the left side of the production, an arrow, and a sequence of tokens and/or non-terminals, called the right side of the production.

One of the non-terminals is designated as the start symbol (S), from where the production begins. The strings are derived from the start symbol by repeatedly replacing a non-terminal (initially the start symbol) by the right side of a production for that non-terminal.

Example

We take the problem of the palindrome language, which cannot be described by means of a Regular Expression. That is, L = { w | w = w^R } is not a regular language. But it can be described by means of a CFG, as illustrated below:

G = ( V, Σ, P, S )

Where:

V = { Q, Z, N }
Σ = { 0, 1 }
P = { Q → Z | Q → N | Q → 0 | Q → 1 | Q → ε | Z → 0Q0 | N → 1Q1 }
S = { Q }

This grammar describes the palindrome language, generating strings such as 1001, 11100111, 00100, 1010101, 11111, etc.

Syntax Analyzers

A syntax analyzer or parser takes the input from a lexical analyzer in the form of token streams. The parser analyzes the source code (token stream) against the production rules to detect any errors in the code. The output of this phase is a parse tree. This way, the parser accomplishes two tasks: parsing the code while looking for errors, and generating a parse tree as the output of the phase. Parsers are expected to parse the whole code even if some errors exist in the program. Parsers use error-recovery strategies, which we will learn later.

Derivation

A derivation is basically a sequence of production rules applied in order to get the input string. During parsing, we take two decisions for some sentential form of the input:

Deciding the non-terminal which is to be replaced.
Deciding the production rule by which the non-terminal will be replaced.

To decide which non-terminal to replace, we have two options.

Left-most Derivation

If the sentential form of an input is scanned and replaced from left to right, it is called left-most derivation.
The sentential form derived by the left-most derivation is called the left-sentential form.

Right-most Derivation

If we scan and replace the input with production rules from right to left, it is known as right-most derivation. The sentential form derived from the right-most derivation is called the right-sentential form.

Example

Production rules:

E → E + E
E → E * E
E → id

Input string: id + id * id

The left-most derivation is:

E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id

Notice that the left-most non-terminal is always processed first.

The right-most derivation is:

E → E + E
E → E + E * E
E → E + E * id
E → E + id * id
E → id + id * id

Parse Tree

A parse tree is a graphical depiction of a derivation. It is convenient for seeing how strings are derived from the start symbol. The start symbol of the derivation becomes the root of the parse tree. Let us see this with the example from the last topic. We take the left-most derivation of id + id * id:

Step 1: E → E * E
Step 2: E → E + E * E
Step 3: E → id + E * E
Step 4: E → id + id * E
Step 5: E → id + id * id

In a parse tree:

All leaf nodes are terminals.
All interior nodes are non-terminals.
In-order traversal gives the original input string.

A parse tree depicts the associativity and precedence of operators. The deepest sub-tree is traversed first; therefore, the operator in that sub-tree gets precedence over the operator in the parent nodes.

Ambiguity

A grammar G is said to be ambiguous if it has more than one parse tree (left or right derivation) for at least one string.

Example

E → E + E
E → E - E
Compiler Design – Bottom-Up Parser

Bottom-up parsing starts from the leaf nodes of a tree and works in an upward direction until it reaches the root node. Here, we start from a sentence and then apply production rules in reverse in order to reach the start symbol. Bottom-up parsing is implemented in practice by shift-reduce parsers, most notably the LR family described below.

Shift-Reduce Parsing

Shift-reduce parsing uses two unique steps for bottom-up parsing. These steps are known as shift-step and reduce-step.

Shift step: The shift step refers to the advancement of the input pointer to the next input symbol, which is called the shifted symbol. This symbol is pushed onto the stack. The shifted symbol is treated as a single node of the parse tree.

Reduce step: When the parser finds a complete grammar rule (RHS) on top of the stack and replaces it with its (LHS), it is known as a reduce-step. This occurs when the top of the stack contains a handle. To reduce, a POP function is performed on the stack, which pops off the handle and replaces it with the LHS non-terminal symbol.

LR Parser

The LR parser is a non-recursive, shift-reduce, bottom-up parser. It handles a wide class of context-free grammars, which makes it one of the most efficient syntax analysis techniques. LR parsers are also known as LR(k) parsers, where L stands for left-to-right scanning of the input stream, R stands for the construction of a right-most derivation in reverse, and k denotes the number of lookahead symbols used to make decisions.

There are three widely used algorithms available for constructing an LR parser:

SLR(1) – Simple LR Parser:
Works on the smallest class of grammars
Few states, hence a very small table
Simple and fast construction

LR(1) – LR Parser:
Works on the complete set of LR(1) grammars
Generates a large table and a large number of states
Slow construction

LALR(1) – Look-Ahead LR Parser:
Works on an intermediate-sized class of grammars
The number of states is the same as in SLR(1)

LR Parsing Algorithm

Here we describe a skeleton algorithm of an LR parser:

token = next_token()
repeat forever
    s = top of stack
    if action[s, token] = "shift si" then
        PUSH token
        PUSH si
        token = next_token()
    else if action[s, token] = "reduce A ::= β" then
        POP 2 * |β| symbols
        s = top of stack
        PUSH A
        PUSH goto[s, A]
    else if action[s, token] = "accept" then
        return
    else
        error()

LL vs. LR

LL: Does a leftmost derivation.
LR: Does a rightmost derivation in reverse.

LL: Starts with the root nonterminal on the stack.
LR: Ends with the root nonterminal on the stack.

LL: Ends when the stack is empty.
LR: Starts with an empty stack.

LL: Uses the stack for designating what is still to be expected.
LR: Uses the stack for designating what has already been seen.

LL: Builds the parse tree top-down.
LR: Builds the parse tree bottom-up.

LL: Continuously pops a nonterminal off the stack and pushes the corresponding right-hand side.
LR: Tries to recognize a right-hand side on the stack, pops it, and pushes the corresponding nonterminal.

LL: Expands the non-terminals.
LR: Reduces the non-terminals.

LL: Reads the terminals when it pops one off the stack.
LR: Reads the terminals while it pushes them onto the stack.

LL: Pre-order traversal of the parse tree.
LR: Post-order traversal of the parse tree.
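To tie the shift and reduce steps together, here is a toy C sketch that parses i+i+i for the grammar E → E + id | id (with 'i' standing for id). It shifts one symbol at a time and greedily reduces whenever a handle appears on top of the stack. Greedy reduction happens to work for this particular grammar; a real LR parser decides between shifting and reducing by consulting the action table, as in the skeleton algorithm above. All names are illustrative:

#include <stdio.h>

char stack[64];
int top = 0;

void push(char c) { stack[top++] = c; }

int reduce(void) {
    /* try the longer handle first: "E+i" on top reduces to E */
    if (top >= 3 && stack[top-3] == 'E' && stack[top-2] == '+'
                 && stack[top-1] == 'i') {
        top -= 3; push('E'); return 1;
    }
    /* handle "i" on top reduces to E */
    if (top >= 1 && stack[top-1] == 'i') { top--; push('E'); return 1; }
    return 0;
}

int main(void) {
    const char *input = "i+i+i";
    for (const char *p = input; *p; p++) {
        push(*p);                  /* shift-step */
        while (reduce())           /* reduce-step while a handle is on top */
            ;
        printf("after '%c': %.*s\n", *p, top, stack);
    }
    puts(top == 1 && stack[0] == 'E' ? "accepted" : "rejected");
    return 0;
}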
Compiler Design – Regular Expressions

The lexical analyzer needs to scan and identify only a finite set of valid strings/tokens/lexemes that belong to the language in hand. It searches for the pattern defined by the language rules.

Regular expressions have the capability to express finite languages by defining a pattern for finite strings of symbols. The grammar defined by regular expressions is known as regular grammar. The language defined by regular grammar is known as a regular language.

Regular expression is an important notation for specifying patterns. Each pattern matches a set of strings, so regular expressions serve as names for sets of strings. Programming language tokens can be described by regular languages. The specification of regular expressions is an example of a recursive definition. Regular languages are easy to understand and have efficient implementations.

There are a number of algebraic laws obeyed by regular expressions, which can be used to manipulate regular expressions into equivalent forms.

Operations

The various operations on languages are:

Union of two languages L and M is written as L ∪ M = { s | s is in L or s is in M }
Concatenation of two languages L and M is written as LM = { st | s is in L and t is in M }
The Kleene Closure of a language L is written as L* = zero or more occurrences of language L

Notations

If r and s are regular expressions denoting the languages L(r) and L(s), then

Union : (r)|(s) is a regular expression denoting L(r) ∪ L(s)
Concatenation : (r)(s) is a regular expression denoting L(r)L(s)
Kleene closure : (r)* is a regular expression denoting (L(r))*
(r) is a regular expression denoting L(r)

Precedence and Associativity

*, concatenation (.), and | (pipe sign) are left associative
* has the highest precedence
Concatenation (.) has the second highest precedence
| (pipe sign) has the lowest precedence of all

Representing valid tokens of a language in regular expression

If x is a regular expression, then:

x* means zero or more occurrences of x, i.e., it can generate { ε, x, xx, xxx, xxxx, … }
x+ means one or more occurrences of x, i.e., it can generate { x, xx, xxx, xxxx, … }; equivalently, x.x*
x? means at most one occurrence of x, i.e., it can generate either { x } or { ε }

[a-z] is all lower-case letters of the English alphabet.
[A-Z] is all upper-case letters of the English alphabet.
[0-9] is all the decimal digits.

Representing occurrences of symbols using regular expressions

letter = [a-z] | [A-Z]
digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 or [0-9]
sign = [+ | -]

Representing language tokens using regular expressions

Decimal = (sign)?(digit)+
Identifier = (letter)(letter | digit)*

The only problem left with the lexical analyzer is how to verify the validity of a regular expression used in specifying the patterns of keywords of a language. A well-accepted solution is to use finite automata for verification.
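As a sketch of how such a pattern turns into a recognizer, the following C function hand-codes the Identifier = (letter)(letter | digit)* pattern. A production lexical analyzer would instead use a finite automaton built from the regular expression, as noted above; this is only an illustration:

#include <ctype.h>
#include <stdio.h>

/* Hand-coded recognizer for Identifier = (letter)(letter | digit)* */
int is_identifier(const char *s) {
    if (!isalpha((unsigned char)*s))       /* must start with a letter */
        return 0;
    for (s++; *s; s++)
        if (!isalnum((unsigned char)*s))   /* then letters or digits only */
            return 0;
    return 1;
}

int main(void) {
    printf("%d\n", is_identifier("count1"));  /* 1 */
    printf("%d\n", is_identifier("9lives"));  /* 0: starts with a digit */
    return 0;
}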
Compiler Design – Overview

Computers are a balanced mix of software and hardware. Hardware is just a piece of machinery whose functions are controlled by compatible software. Hardware understands instructions in the form of electronic charge, which is the counterpart of binary language in software programming. Binary language has only two symbols, 0 and 1. To instruct the hardware, codes must be written in binary format, which is simply a series of 1s and 0s. Writing such codes would be a difficult and cumbersome task for computer programmers, which is why we have compilers to produce them.

Language Processing System

We have learnt that any computer system is made of hardware and software. The hardware understands a language which humans cannot understand. So we write programs in a high-level language, which is easier for us to understand and remember. These programs are then fed into a series of tools and OS components to get the desired code that can be used by the machine. This is known as the Language Processing System.

The high-level language is converted into binary language in various phases. A compiler is a program that converts high-level language to assembly language. Similarly, an assembler is a program that converts assembly language to machine-level language. Let us first understand how a program, using a C compiler, is executed on a host machine:

The user writes a program in C (a high-level language).
The C compiler compiles the program and translates it into an assembly program (a low-level language).
An assembler then translates the assembly program into machine code (an object file).
A linker links all the parts of the program together for execution (executable machine code).
A loader loads the executable into memory, and then the program is executed.

Before diving straight into the concepts of compilers, we should understand a few other tools that work closely with compilers.

Preprocessor

A preprocessor, generally considered part of the compiler, is a tool that produces input for compilers. It deals with macro-processing, augmentation, file inclusion, language extension, etc.

Interpreter

An interpreter, like a compiler, translates high-level language into low-level machine language. The difference lies in the way they read the source code. A compiler reads the whole source code at once, creates tokens, checks semantics, generates intermediate code, and translates the whole program, possibly in many passes. In contrast, an interpreter reads one statement from the input, converts it to intermediate code, executes it, then takes the next statement in sequence. If an error occurs, an interpreter stops execution and reports it, whereas a compiler reads the whole program even if it encounters several errors.

Assembler

An assembler translates assembly language programs into machine code. The output of an assembler is called an object file, which contains a combination of machine instructions as well as the data required to place these instructions in memory.

Linker

A linker is a computer program that links and merges various object files together in order to make an executable file. All these files might have been compiled by separate assemblers. The major task of a linker is to search for and locate referenced modules/routines in a program and to determine the memory locations where these codes will be loaded, making the program instructions have absolute references.
Loader

The loader is a part of the operating system and is responsible for loading executable files into memory and executing them. It calculates the size of a program (instructions and data) and creates memory space for it. It initializes various registers to initiate execution.

Cross-compiler

A compiler that runs on platform (A) and is capable of generating executable code for platform (B) is called a cross-compiler.

Source-to-source Compiler

A compiler that takes the source code of one programming language and translates it into the source code of another programming language is called a source-to-source compiler.
Compiler Design – Top-Down Parser

We learnt in the last chapter that the top-down parsing technique parses the input and starts constructing a parse tree from the root node, gradually moving down to the leaf nodes. Top-down parsing is broadly divided into recursive-descent parsing (with or without back-tracking) and predictive, table-driven parsing.

Recursive Descent Parsing

Recursive descent is a top-down parsing technique that constructs the parse tree from the top, with the input read from left to right. It uses procedures for every terminal and non-terminal entity. This parsing technique recursively parses the input to build a parse tree, which may or may not require back-tracking; the grammar associated with it (if not left-factored) cannot avoid back-tracking. A form of recursive-descent parsing that does not require any back-tracking is known as predictive parsing. This parsing technique is regarded as recursive, as it uses context-free grammar, which is recursive in nature.

Back-tracking

Top-down parsers start from the root node (start symbol) and match the input string against the production rules to replace them (if matched). To understand this, take the following example of a CFG:

S → rXd | rZd
X → oa | ea
Z → ai

For the input string read, a top-down parser will behave like this:

It will start with S from the production rules and will match its yield to the left-most letter of the input, i.e., 'r'. The production S → rXd matches it, so the top-down parser advances to the next input letter, 'e'. The parser tries to expand the non-terminal X and checks its first production from the left (X → oa). This does not match the next input symbol, so the top-down parser backtracks to obtain the next production rule of X, (X → ea). Now the parser matches all the remaining input letters in order. The string is accepted.

Predictive Parser

A predictive parser is a recursive descent parser which has the capability to predict which production is to be used to replace the input string. The predictive parser does not suffer from backtracking.

To accomplish its tasks, the predictive parser uses a look-ahead pointer, which points to the next input symbols. To make the parser back-tracking free, the predictive parser puts some constraints on the grammar and accepts only a class of grammars known as LL(k) grammars.

Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree. Both the stack and the input contain an end symbol $ to denote that the stack is empty and the input is consumed. The parser refers to the parsing table to take a decision for each combination of input symbol and stack element.

In recursive descent parsing, the parser may have more than one production to choose from for a single instance of input, whereas in a predictive parser, each step has at most one production to choose. There may be instances where no production matches the input string, causing the parsing procedure to fail.

LL Parser

An LL Parser accepts LL grammar. LL grammar is a subset of context-free grammar, with some restrictions imposed to obtain a simplified version that allows an easy implementation. LL grammar can be implemented by means of two algorithms, namely recursive-descent and table-driven.

The LL parser is denoted LL(k). The first L in LL(k) stands for parsing the input from left to right, the second L stands for left-most derivation, and k itself represents the number of lookaheads. Generally k = 1, so LL(k) may also be written as LL(1).
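The back-tracking example above can be turned directly into a recursive-descent parser. Below is a minimal C sketch for the grammar S → rXd | rZd, X → oa | ea, Z → ai: each non-terminal becomes a function, and a saved input position lets the parser backtrack to try the next alternative. The function and variable names are illustrative:

#include <stdio.h>

const char *input;
int pos;

int term(char c) {                 /* match a single terminal */
    if (input[pos] == c) { pos++; return 1; }
    return 0;
}

int X(void) {
    int save = pos;
    if (term('o') && term('a')) return 1;
    pos = save;                    /* backtrack, try the next alternative */
    if (term('e') && term('a')) return 1;
    pos = save;
    return 0;
}

int Z(void) {
    int save = pos;
    if (term('a') && term('i')) return 1;
    pos = save;
    return 0;
}

int S(void) {
    int save = pos;
    if (term('r') && X() && term('d')) return 1;
    pos = save;                    /* backtrack, try S -> r Z d */
    if (term('r') && Z() && term('d')) return 1;
    pos = save;
    return 0;
}

int main(void) {
    input = "read"; pos = 0;
    puts(S() && input[pos] == '\0' ? "accepted" : "rejected");
    return 0;
}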
LL Parsing Algorithm

We may stick to deterministic LL(1) for the parser explanation, as the size of the table grows exponentially with the value of k. Secondly, if a given grammar is not LL(1), then usually it is not LL(k) for any given k. Given below is an algorithm for LL(1) parsing:

Input:
    string ω
    parsing table M for grammar G

Output:
    if ω is in L(G), a left-most derivation of ω; otherwise, an error

Initial state:
    $S on the stack (with S being the start symbol)
    ω$ in the input buffer

SET ip to point to the first symbol of ω$

repeat
    let X be the top stack symbol and a the symbol pointed to by ip
    if X ∈ Vt or X = $
        if X = a
            POP X and advance ip
        else
            error()
        endif
    else    /* X is a non-terminal */
        if M[X, a] = X → Y1 Y2 ... Yk
            POP X
            PUSH Yk, Yk-1, ..., Y1    /* Y1 on top */
            output the production X → Y1 Y2 ... Yk
        else
            error()
        endif
    endif
until X = $    /* empty stack */

A grammar G is LL(1) if, for every pair of distinct productions A → α | β of G:

for no terminal a do both α and β derive strings beginning with a;
at most one of α and β can derive the empty string;
if β ⇒* ε, then α does not derive any string beginning with a terminal in FOLLOW(A).
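Below is a minimal table-driven C sketch of this algorithm for the toy LL(1) grammar S → aSb | ε, which accepts strings of the form a^n b^n. The parsing table is hard-coded into the if/else choices (M[S,a] = S → aSb; M[S,b] = M[S,$] = S → ε); a real implementation would build M from FIRST and FOLLOW sets. All names here are illustrative:

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *w = "aabb";              /* input string ω */
    char stack[64];
    int top = 0;
    stack[top++] = '$';                  /* initial state: $S on the stack */
    stack[top++] = 'S';

    int ip = 0;
    int n = (int)strlen(w);
    while (top > 0) {
        char X = stack[--top];                   /* top stack symbol */
        char a = (ip < n) ? w[ip] : '$';         /* symbol pointed to by ip */
        if (X == 'S') {                          /* non-terminal: consult M[S, a] */
            if (a == 'a') {                      /* M[S, a] = S -> a S b */
                stack[top++] = 'b';              /* push RHS in reverse, 'a' on top */
                stack[top++] = 'S';
                stack[top++] = 'a';
                puts("output: S -> a S b");
            } else {                             /* M[S, b] = M[S, $] = S -> eps */
                puts("output: S -> eps");
            }
        } else if (X == a) {                     /* terminal or $: match input */
            if (X != '$') ip++;
        } else {
            puts("error"); return 1;
        }
    }
    puts(ip == n ? "accepted" : "error");
    return 0;
}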