Tuesday, 18 December 2012

Conway's Game of Life


The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970.
The "game" is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.
The Game of Life is not a game in the conventional sense. There are no players, and no winning or losing. Once the "pieces" are placed in the starting position, the rules determine everything that happens later. Nevertheless, Life is full of surprises! In most cases, it is impossible to look at a starting position (or pattern) and see what will happen in the future. The only way to find out is to follow the rules of the game.
 

Rules

The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive or dead. Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:
  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed—births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick. The rules continue to be applied repeatedly to create further generations.
Here is the Game of Life code I wrote in JavaScript.
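For illustration, here is a minimal sketch of a single generation step on a fixed-size grid (this is not the linked implementation itself, which also handles drawing and an unbounded universe):

    // One generation step of Life on a finite w x h grid of 0/1 cells
    // (cells outside the grid are treated as dead; the real universe is infinite).
    function nextGeneration(grid) {
        var h = grid.length, w = grid[0].length, next = [];
        for (var y = 0; y < h; y += 1) {
            next.push([]);
            for (var x = 0; x < w; x += 1) {
                var neighbours = 0;
                for (var dy = -1; dy <= 1; dy += 1) {
                    for (var dx = -1; dx <= 1; dx += 1) {
                        var ny = y + dy, nx = x + dx;
                        if ((dx !== 0 || dy !== 0) &&
                            ny >= 0 && ny < h && nx >= 0 && nx < w && grid[ny][nx]) {
                            neighbours += 1;
                        }
                    }
                }
                // Rules 1-4: a live cell survives with 2 or 3 neighbours,
                // a dead cell becomes alive with exactly 3 neighbours.
                var alive = grid[y][x] ? (neighbours === 2 || neighbours === 3)
                                       : (neighbours === 3);
                next[y].push(alive ? 1 : 0);
            }
        }
        return next;
    }

    // Example: a "blinker" flips between a horizontal and a vertical bar.
    var blinker = [[0, 0, 0],
                   [1, 1, 1],
                   [0, 0, 0]];
    console.log(nextGeneration(blinker));   // [[0,1,0],[0,1,0],[0,1,0]]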

Lisp Interpreter




    An interpreter is also a program that translates a high-level language into a low-level one, but it does so at the moment the program is run. You write the program using a text editor or something similar, and then instruct the interpreter to run the program. It takes the program one line at a time and translates each line before running it: it translates the first line and runs it, then translates the second line and runs it, and so on.

A language interpreter has two parts:
  1. Parsing: The parsing component takes an input program in the form of a sequence of characters, verifies it according to the syntactic rules of the language, and translates the program into an internal representation. In a simple interpreter the internal representation is a tree structure that closely mirrors the nested structure of statements or expressions in the program.
  2. Execution: The internal representation is then processed according to the semantic rules of the language, thereby carrying out the computation.
     
Here is a picture of the interpretation process and an interactive session showing how parsing and execution operate on a short program: 
 


Parsing:
Parsing is traditionally separated into two parts: lexical analysis, in which the input character string is broken up into a sequence of tokens, and syntactic analysis, in which the tokens are assembled into an internal representation.

Execution:
The translated program is processed according to the semantic rules of the language. Consider the following example program; the results of the parsing and execution steps are shown below it.

>>> program = "(begin (define r 3) (* 3.141592653 (* r r)))"
After parsing the program we get:
['begin', ['define', 'r', 3], ['*', 3.141592653, ['*', 'r', 'r']]]
After executing this, we get:

28.274333

Here is the code of the Lisp interpreter in JavaScript. Now let's go through the code.

Function Env():

This function defines the environment in which a variable is looked up during execution. The function find(variable) finds the correct environment for the variable: if the variable is defined in the current environment, the current environment is used; otherwise the function is called recursively on the outer environments until the defining environment is found, a property known as lexical scoping.
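A minimal sketch of how Env() and find() might look (the names follow the description above; the original code may differ in detail):

    // Build an environment: a mapping from variable names to values,
    // with a link (kept in a closure) to the enclosing environment.
    function Env(params, args, outer) {
        var env = {};
        params = params || [];
        args = args || [];
        for (var i = 0; i < params.length; i += 1) {
            env[params[i]] = args[i];
        }
        // Return the innermost environment in which 'variable' is bound,
        // searching outwards through the enclosing environments.
        env.find = function (variable) {
            if (env.hasOwnProperty(variable)) {
                return env;
            } else if (outer !== undefined) {
                return outer.find(variable);
            }
            throw new Error('Unbound variable: ' + variable);
        };
        return env;
    }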

Function addGlobal():

This function defines the global environment. Basic operations such as '+', '*', etc. are defined in the global environment. Besides these 21 basic functions, other math functions such as sin, cos, log, etc. are also included in the global environment, as shown below.

    // Copy the standard Math functions into the global environment.
    var functions = ['abs', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'exp',
                     'floor', 'log', 'max', 'min', 'pow', 'random', 'round', 'sin',
                     'sqrt', 'tan'];
    for (var i = 0; i < functions.length; i += 1) {
        env[functions[i]] = Math[functions[i]];
    }
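The basic arithmetic and list primitives are installed on the same env object in a similar way; a small sketch (the exact set of primitives and how they handle their arguments in the original code may differ):

    // A few of the basic primitives added to the same env object.
    env['+']    = function (a, b) { return a + b; };
    env['-']    = function (a, b) { return a - b; };
    env['*']    = function (a, b) { return a * b; };
    env['/']    = function (a, b) { return a / b; };
    env['>']    = function (a, b) { return a > b; };
    env['<']    = function (a, b) { return a < b; };
    env['=']    = function (a, b) { return a === b; };
    env['car']  = function (x) { return x[0]; };
    env['cdr']  = function (x) { return x.slice(1); };
    env['cons'] = function (a, b) { return [a].concat(b); };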

Function eval():

The eval() function defines the evaluation steps for every kind of expression; it represents the execution part of the interpreter. Nine cases are defined in the function.
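A reduced sketch covering only a subset of those nine cases is shown below (variable references, constant literals, quote, if, define, and procedure calls); it is named evaluate here to avoid clashing with JavaScript's built-in eval, and the remaining cases of the original function are not shown:

    // A reduced version of the evaluation function, covering only a
    // subset of the cases described above.
    function evaluate(x, env) {
        if (typeof x === 'string') {                 // variable reference
            return env.find(x)[x];
        } else if (typeof x === 'number') {          // constant literal
            return x;
        } else if (x[0] === 'quote') {               // (quote exp)
            return x[1];
        } else if (x[0] === 'if') {                  // (if test conseq alt)
            return evaluate(x[1], env) ? evaluate(x[2], env) : evaluate(x[3], env);
        } else if (x[0] === 'define') {              // (define var exp)
            env[x[1]] = evaluate(x[2], env);
        } else {                                     // (proc arg...)
            var exps = x.map(function (exp) { return evaluate(exp, env); });
            var proc = exps.shift();
            return proc.apply(null, exps);
        }
    }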


Functions for parsing:

Parsing is the first stage of the interpreter. It converts the input code into a sequence of tokens and assembles them according to the syntactic rules of the language. We use a number of functions for these actions, which are listed below (a sketch of them follows the list).
  • parser() : Returns the output of the readFrom() function applied to the tokenized input. 
  • tokenize() : Converts the input code into a list of tokens. A few built-in string functions are used here: s.replace() puts a space before and after every parenthesis, s.trim() removes the spaces at the beginning and end of the expression, and s.split() splits the code at every space into a list of tokens. 
  • readFrom() : The function takes the output of the tokenize() function as input and builds a nested list that drops the parentheses and mirrors the internal structure of the expression. 
  • atom() : Converts numbers in string format to numeric values; everything else stays a symbol string. 
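A sketch of how these functions fit together (simplified; the original code may handle errors and edge cases differently):

    // Break the input string into tokens by padding parentheses with spaces.
    function tokenize(s) {
        return s.replace(/\(/g, ' ( ').replace(/\)/g, ' ) ').trim().split(/\s+/);
    }

    // Convert a token that looks like a number into a number; keep symbols as strings.
    function atom(token) {
        var n = parseFloat(token);
        return isNaN(n) ? token : n;
    }

    // Assemble the flat token list into a nested array mirroring the expression.
    function readFrom(tokens) {
        if (tokens.length === 0) {
            throw new Error('unexpected EOF while reading');
        }
        var token = tokens.shift();
        if (token === '(') {
            var list = [];
            while (tokens[0] !== ')') {
                list.push(readFrom(tokens));
            }
            tokens.shift();                          // discard the closing ')'
            return list;
        } else if (token === ')') {
            throw new Error('unexpected )');
        } else {
            return atom(token);
        }
    }

    // parser: tokenize the program text and build its internal representation.
    function parser(s) {
        return readFrom(tokenize(s));
    }

    // parser("(begin (define r 3) (* 3.141592653 (* r r)))")
    // => ['begin', ['define', 'r', 3], ['*', 3.141592653, ['*', 'r', 'r']]]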
Function repl():

repl() implements the read-eval-print loop. The function is used to provide an interactive Lisp interpreter.
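In a browser this can be as small as a loop around prompt(); a sketch, assuming the parser and evaluate functions above and an already-built global environment:

    // Read-eval-print loop: read a line, parse it, evaluate it, print the result.
    function repl(globalEnv) {
        while (true) {
            var line = prompt('lis.js>');
            if (!line) { break; }                    // cancel or an empty line ends the session
            var result = evaluate(parser(line), globalEnv);
            if (result !== undefined) {
                console.log(result);
            }
        }
    }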

Monday, 17 December 2012

Huffman Coding Overview.


    Huffman coding is a statistical technique which attempts to reduce the number of bits required to represent a string of symbols. The algorithm accomplishes this by allowing code words to vary in length: shorter codes are assigned to the most frequently used symbols, and longer codes to the symbols which appear less frequently in the string.
So, when we translate the input we want to encode, the most frequent symbols take less space than before and the least frequent symbols take more; but because they are less frequent, the encoding is shorter overall.



Building a Huffman Tree

    The Huffman code for an alphabet (set of symbols) may be generated by constructing a binary tree with nodes containing the symbols to be encoded and their probabilities of occurrence. The tree may be constructed as follows:

Step 1. Create a parentless node for each symbol. Each node should include the symbol and its probability.
Step 2. Select the two parentless nodes with the lowest probabilities.
Step 3. Create a new node which is the parent of the two lowest probability nodes.
Step 4. Assign the new node a probability equal to the sum of its children's probabilities.
Step 5. Repeat from Step 2 until there is only one parentless node left.


Huffman Code: Example


    The following example is based on a data source using a set of five different symbols. The symbols' frequencies are:
Symbol  Frequency
   A       24
   B       12
   C       10
   D        8
   E        8
          ----> total 186 bit
          (62 symbols with a fixed 3-bit code word each)


    The two rarest symbols 'E' and 'D' are connected first, followed by 'C' and 'B'. The new parent nodes have the frequencies 16 and 22 respectively and are brought together in the next step. The resulting node and the remaining symbol 'A' are subordinated to the root node that is created in a final step.
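The same construction can be written as a short sketch (in JavaScript, for consistency with the earlier sections; the Python code linked at the end of the post follows the same idea, and the exact 0/1 labels depend on how ties are broken):

    // Build a Huffman tree from a {symbol: frequency} map by repeatedly
    // merging the two parentless nodes with the lowest frequencies.
    function buildHuffmanTree(freqs) {
        var nodes = Object.keys(freqs).map(function (sym) {
            return { symbol: sym, freq: freqs[sym] };      // Step 1: one leaf per symbol
        });
        while (nodes.length > 1) {
            nodes.sort(function (a, b) { return a.freq - b.freq; });
            var left = nodes.shift();                      // Step 2: the two lowest nodes
            var right = nodes.shift();
            nodes.push({ freq: left.freq + right.freq,     // Steps 3-4: new parent node
                         left: left, right: right });
        }
        return nodes[0];                                   // Step 5: the last node is the root
    }

    // Derive the code table: a left step appends '0', a right step appends '1'.
    function codeTable(node, prefix, table) {
        table = table || {};
        if (node.symbol !== undefined) {
            table[node.symbol] = prefix || '0';            // leaf: record the accumulated code
        } else {
            codeTable(node.left, (prefix || '') + '0', table);
            codeTable(node.right, (prefix || '') + '1', table);
        }
        return table;
    }

    var tree = buildHuffmanTree({ A: 24, B: 12, C: 10, D: 8, E: 8 });
    console.log(codeTable(tree));
    // Symbol A gets a 1-bit code and B, C, D, E get 3-bit codes (the exact bit
    // patterns depend on tie-breaking), giving 24*1 + (12+10+8+8)*3 = 138 bits
    // in total, as in the code table below.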


Code Tree According To Huffman.





Symbol  Frequency   Code   Code Length   Total Length
   A       24        0          1             24
   B       12       100         3             36
   C       10       101         3             30
   D        8       110         3             24
   E        8       111         3             24
------------------------------------------------------
    fixed 3-bit code: 186 bit      Huffman code: 138 bit
  
    To decode a string of bits we just traverse the tree for each bit: if the bit is 0 we take a left step and if the bit is 1 we take a right step, until we hit a leaf (which is the symbol we are looking for). For example, given the string "100 111 110" and our tree, decoding it yields the string "BED".
    It is very important to observe that no code is a prefix of another symbol's code. In our example, if 0 is the code for 'A', then 00 cannot be the code for any other symbol, because there would be a conflict: after following the first bit we already reach the leaf for 'A', so we would never reach a symbol coded 00.
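A decoding sketch following this description (assuming a tree node layout like the one in the buildHuffmanTree sketch above, with left = 0 and right = 1):

    // Walk the tree bit by bit: '0' goes left, '1' goes right; every time a
    // leaf is reached, emit its symbol and restart from the root.
    function decode(bits, root) {
        var out = '';
        var node = root;
        for (var i = 0; i < bits.length; i += 1) {
            if (bits[i] !== '0' && bits[i] !== '1') { continue; }   // ignore spaces
            node = bits[i] === '0' ? node.left : node.right;
            if (node.symbol !== undefined) {
                out += node.symbol;
                node = root;
            }
        }
        return out;
    }

    // For a tree that assigns the codes from the table above
    // (B = 100, E = 111, D = 110), decode("100 111 110", root) returns "BED".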
Here is the Python code for Huffman coding.