Wednesday, 5 December 2012

A Tiny Lisp Interpreter

Most computer languages have a lot of syntactic notation. In the Lisp family of languages, the syntax is based on lists written in prefix notation: the operator comes first, followed by its arguments, all inside parentheses. In this tiny Lisp interpreter we include only a small subset of the keywords that make up Lisp.
       The code is written in JavaScript so it can be embedded in an HTML page for a better visual experience.

  1. Parsing: The parsing component takes an input program in the form of a sequence of characters, verifies it according to the syntactic rules of the language, and translates the program into an internal representation. In a simple interpreter, the internal representation is a tree structure that closely mirrors the nested structure of statements or expressions in the program. In a language translator called a compiler, the internal representation is a sequence of instructions that can be directly executed by the computer.
  2. Execution: The internal representation is then processed according to the semantic rules of the language, thereby carrying out the computation. Execution is implemented with the function eval.

The program takes the prefix input and, via the parse function, converts it into lists of lists. From this internal representation the execution step takes each list element and evaluates each expression. The evaluation is done by the eval function; an associative array maps each operator to the expression that implements it.
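For example, the definition typed below parses into nested JavaScript arrays; this literal is an assumed rendering of the internal representation, shown only for illustration:

      // Assumed internal representation of (define square (lambda (r) (* r r)))
      ["define", "square", ["lambda", ["r"], ["*", "r", "r"]]]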

lisp]=> (define square (lambda (r) (* r r)))
lisp]=> (square 12)
144
 When the interpreter sees "define", it checks the associative array for the second string, i.e. the name being defined. If the name is not already present, it is added, and the rest of the expression is mapped to that name in the array.
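A minimal sketch of that evaluation step is given below. It assumes an Environment object with a vars table and a find method (sketched after the next paragraph); the name lispEval and the exact branch structure are illustrative, not the code from this post:

      // Illustrative sketch only: evaluate a parsed expression in an environment.
      function lispEval(x, env) {
          if (typeof x === "string") {            // symbol: look up its value
              return env.find(x)[x];
          } else if (typeof x === "number") {     // number: evaluates to itself
              return x;
          } else if (x[0] === "define") {         // (define name expr)
              env.vars[x[1]] = lispEval(x[2], env);
          } else if (x[0] === "lambda") {         // (lambda (params) body)
              var params = x[1], body = x[2];
              return function () {
                  var args = Array.prototype.slice.call(arguments);
                  return lispEval(body, new Environment(params, args, env));
              };
          } else {                                // (proc arg ...): apply a procedure
              var exps = x.map(function (e) { return lispEval(e, env); });
              var proc = exps.shift();
              return proc.apply(null, exps);
          }
      }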

Instead of a plain associative array we use an Environment class, which is essentially an associative array extended with a find function for looking up the keys of the array.
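One way such a class might look is sketched below; the field names vars and outer, and the detail that find walks outward to enclosing environments, are assumptions rather than the exact code of this interpreter:

      // Illustrative sketch only: an environment is a table of bindings plus a
      // link to the enclosing (outer) environment.
      function Environment(params, args, outer) {
          this.vars = {};
          this.outer = outer;
          if (params) {
              for (var i = 0; i < params.length; i++) {
                  this.vars[params[i]] = args[i];
              }
          }
      }
      // find returns the innermost table of bindings that contains the name.
      Environment.prototype.find = function (name) {
          if (name in this.vars) {
              return this.vars;
          } else if (this.outer) {
              return this.outer.find(name);
          } else {
              throw new Error("unbound symbol: " + name);
          }
      };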

Now for the function parse. Parsing is traditionally separated into two parts: lexical analysis, in which the input character string is broken up into a sequence of tokens, and syntactic analysis, in which the tokens are assembled into an internal representation. For the lexical analysis we use the tokenize function.

      function tokenize(s)
      {
          // Pad parentheses with spaces, then split on spaces.
          s = s.replace(/\(/g, " ( ").replace(/\)/g, " ) ").split(" ");
          var p = [];
          // Drop the empty strings left behind by consecutive spaces.
          for (var i in s)
          {
              if (s[i] != "")
              {
                  p.push(s[i]);
              }
          }
          return p;
      }
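The syntactic analysis half then turns the token list into nested arrays. A minimal sketch is shown below, assuming tokens are consumed from the front of the array returned by tokenize; the exact parse function in the full code may differ:

      // Illustrative sketch only: build nested arrays from the token list.
      function parse(tokens) {
          if (tokens.length === 0) {
              throw new Error("unexpected EOF while reading");
          }
          var token = tokens.shift();
          if (token === "(") {
              var list = [];
              while (tokens[0] !== ")") {
                  list.push(parse(tokens));
              }
              tokens.shift();                    // drop the closing ')'
              return list;
          } else if (token === ")") {
              throw new Error("unexpected )");
          } else {
              // Atom: a number if it looks like one, otherwise a symbol string.
              var n = parseFloat(token);
              return isNaN(n) ? token : n;
          }
      }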

To provide a good interactive session we use a custom REPL that waits for user input.

  process.stdin.resume();
  process.stdout.write('lisp]=> ');
  process.stdin.on('data', function (input) {
      input = input.toString();
      // Tokenize, parse, and evaluate the line the user typed.
      var val = eval(parse(tokenize(input)));
      if (val != undefined)
      {
          process.stdout.write('Result:' + val);
      }
      else
      {
          process.stdout.write('lisp]=> ');
      }
  });

Click here for the whole code.

 
