Sunday, 16 December 2012
Paint Application Using Javascript
[An interactive paint canvas is embedded here, with controls for stroke thickness and eraser thickness.]
Monday, 10 December 2012
Python Webapp2 framework
App Engine includes a simple web application framework called webapp2. webapp2 is already installed in the Google App Engine environment and can be used in Python by importing the webapp2 library.
Webapp2 includes two features:
Request Handler class that processes requests and builds responses
WSGIApplication instance which routes incoming requests to handlers.
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.write('hello')

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
This code defines one request handler, MainPage, mapped to the root URL (/). When webapp2 receives an HTTP GET request to the URL /, it instantiates the MainPage class and calls the instance's get method. Inside the method, information about the request is available using self.request. webapp2 sends a response based on the final state of the MainPage instance.
The application itself is represented by a webapp2.WSGIApplication instance. The parameter debug=True passed to its constructor tells webapp2 to print stack traces to the browser output if a handler encounters an error or raises an uncaught exception.
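As a small illustration of reading request data through self.request, here is a hedged sketch; the handler name GreetPage and the query parameter name are hypothetical examples, not part of the post above.
import webapp2

class GreetPage(webapp2.RequestHandler):
    def get(self):
        # self.request exposes the incoming request; get() reads a
        # query parameter, e.g. a request to /?name=World
        name = self.request.get('name', 'stranger')
        self.response.write('hello ' + name)

app = webapp2.WSGIApplication([('/', GreetPage)], debug=True)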
Google App Engine is equipped with features that provide an interactive session with users:
- User Service: the users.get_current_user() function checks whether the user is signed in. If the user is signed in it returns the user object, otherwise it returns None; users.create_login_url() returns the URL of the Google Accounts sign-in page, so the user can be sent there when the check for the current user fails.
- High Replication Datastore: uses the Paxos algorithm to replicate data across multiple data centres. The Datastore is extremely resilient in the face of catastrophic failure, but its consistency guarantees may differ from what you're familiar with. It includes a data modelling API which can be used by importing the google.appengine.ext.db module. The Datastore also has a sophisticated query engine for data models: GQL provides access to the App Engine Datastore query engine's features using a familiar, SQL-like syntax.
- Templates: there are many templating systems for Python; two of them are Jinja2 and Django, and App Engine includes both the Django and Jinja2 templating engines. To configure the jinja2 library, add the following to the app.yaml file:
libraries:
- name: jinja2
  version: latest
For a template, one implements an HTML file that specifies the layout of the app; using that template we can render pages as follows.
jinja_environment.get_template(name) takes the name of a template file and returns a template object. template.render(template_values) takes a dictionary of values and returns the rendered text. The template uses Jinja2 templating syntax to access and iterate over the values, and can refer to properties of those values. In many cases, you can pass datastore model objects directly as values and access their properties from templates.
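Here is a minimal sketch of how these two calls fit together inside a handler; the template file name index.html and the values passed to it are hypothetical examples.
import os
import jinja2
import webapp2

# Look for template files in the directory containing this script.
jinja_environment = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.dirname(__file__)))

class MainPage(webapp2.RequestHandler):
    def get(self):
        template_values = {'items': ['a', 'b', 'c']}  # example values
        template = jinja_environment.get_template('index.html')
        self.response.write(template.render(template_values))

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)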
Friday, 7 December 2012
Google App Engine Uploading Static Html
Google App Engine is a platform-as-a-service (PaaS) cloud computing platform for developing and hosting web applications in Google-managed data centers. Applications are sandboxed and run across multiple servers.
App Engine offers automatic scaling for web applications—as the number
of requests increases for an application, App Engine automatically
allocates more resources for the web application to handle the
additional demand.
You can serve your app from your own domain, such as "example.xyz.com", or you can host it for free on appspot.com; the first 1 GB of usage is free, and further usage is charged according to the subscribed quota. Google App Engine supports three runtime environments: Python, Java and Go.
App Engine includes the following features:
- dynamic web serving, with full support for common web technologies
- persistent storage with queries, sorting and transactions
- automatic scaling and load balancing
- APIs for authenticating users and sending email using Google Accounts
- a fully featured local development environment that simulates Google App Engine on your computer
- task queues for performing work outside of the scope of a web request
- scheduled tasks for triggering events at specified times and regular intervals
Each SDK also includes a tool to upload your application to App Engine. Once you have created your application's code, static files and configuration files, you run the tool to upload the data. The tool prompts you for your Google account email address and password.
See GOOGLE APP SDK for the SDK download.
You will use two commands from the SDK:
dev_appserver.py - the Python development server
It includes a web server application you can run on your computer that simulates your application running in the App Engine Python runtime environment.
appcfg.py
You can use this command to upload new versions of the code, configuration and static files for your app to App Engine.
For uploading your HTML application, create a folder named after your application. The folder includes three files: app.yaml, a Python file, and the static file which you need to host on Google App Engine.
app.yaml is a configuration file which describes which handler should be used for different URLs.
The file includes
application: helloworld
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: helloworld.app
The application is helloworld in this case, which is the unique application id assigned when you register your web app on appspot.com for free. Version is the application version; your later updates can be assigned to further new versions.
Runtime is the version of Python you are running. This application is threadsafe, so the same instance can handle several simultaneous requests. Threadsafe is an advanced feature and may result in erratic behavior if your application is not specifically designed to be threadsafe. Every request to a URL whose path matches the regular expression /.* (all URLs) is handled by the app object in the helloworld module.
For testing you can use the development web server command, which runs a web server listening on port 8080; you can check the status by visiting http://localhost:8080/
The Python file uses the webapp2 framework, which has two parts:
- one or more RequestHandler classes that process requests and build responses
- a WSGIApplication instance that routes incoming requests to handlers based on the URL
The static file which you would like to upload can be specified as a filename in double quotes. An HTML file with JavaScript will also work for this upload.
import wsgiref.handlers
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template

class mainh(webapp.RequestHandler):
    def get(self):
        # Render the static HTML file and write it to the response.
        self.response.out.write(template.render("prasanthtimer.html", {}))

def main():
    app = webapp.WSGIApplication([(r'.*', mainh)], debug=True)
    wsgiref.handlers.CGIHandler().run(app)

if __name__ == "__main__":
    main()
Now that all the configuration is done, you can upload the app with the following command:
appcfg.py update helloworld/
You can then find your application running at http://your_app_id.appspot.com
Have fun.....!!!!!!!!!!!!
Wednesday, 5 December 2012
CountDown Timer
Here is a simple countdown timer written in JavaScript. The idea behind it is to refresh the canvas on every call of setInterval(), which is set to 1000 ms.
There are three buttons: "SET", "START" and "STOP". The SET button calls the function that draws the value from the text field onto the canvas. The START button calls the timer function, in which setInterval() is used to decrement the value and display it on the canvas. The STOP button calls clearInterval() to stop the running timer.
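For illustration only, here is the decrement-once-per-second idea sketched in Python; the post's actual timer runs in the browser, where the setInterval() callback redraws the canvas.
import time

def countdown(seconds):
    # Decrement once per second, like the setInterval(..., 1000) callback.
    while seconds > 0:
        print(seconds)
        time.sleep(1)
        seconds -= 1
    print('Time up!')

countdown(10)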
Here is the Timer
STOP => Stop Timer
START => To start the timer
SET => To set the initial value
Conway's Game Of Life
Conway's Game of Life is a cellular automaton. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off (in contrast to a coupled map lattice). The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighborhood (usually including the cell itself) is defined relative to the specified cell. An initial state (time t=0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1), according to some fixed rule
(generally, a mathematical function) that determines the new state of
each cell in terms of the current state of the cell and the states of
the cells in its neighborhood. Typically, the rule for updating the
state of cells is the same for each cell and does not change over time,
and is applied to the whole grid simultaneously, though exceptions are
known, such as the Probabilistic Cellular Automata and asynchronous cellular automaton.
The game is a zero-player game, i.e. its evolution is determined by its initial state, which here is provided by a random number generator. The game has the following rules (a small sketch of one update step follows the list).
- Any live cell with fewer than two live neighbours dies, as if caused by under-population.
- Any live cell with two or three live neighbours lives on to the next generation.
- Any live cell with more than three live neighbours dies, as if by overcrowding.
- Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
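A minimal sketch of one update step implementing these four rules, assuming the grid is stored as a set of (x, y) coordinates of live cells; the function name step is illustrative.
def step(live):
    # Count, for every cell, how many live neighbours it has.
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
    new_live = set()
    for cell, n in counts.items():
        # A dead cell with exactly three live neighbours becomes live;
        # a live cell survives only with two or three live neighbours.
        if n == 3 or (n == 2 and cell in live):
            new_live.add(cell)
    return new_live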
The different patterns are:
Oscillators Still Lifes Spaceships
By including all the patterns here is the whole Game!!!!!
A Tiny Lisp Interpreter
Most computer languages have a lot of syntactic notation; for the Lisp family of languages the syntax is based on lists in parenthesized prefix notation, like passing arguments to a function inside parentheses. In this tiny Lisp interpreter we include only a small subset of the keywords that make up Lisp.
The code has been written in JavaScript so that it can be incorporated into an HTML page for a better visual experience.
- Parsing: The parsing component takes an input program in the form of a sequence of characters, verifies it according to the syntactic rules of the language, and translates the program into an internal representation. In a simple interpreter the internal representation is a tree structure that closely mirrors the nested structure of statements or expressions in the program. In a language translator called a compiler the internal representation is a sequence of instructions that can be directly executed by the computer.
- Execution: The internal representation is then processed according to the semantic rules of the language, thereby carrying out the computation. Execution is implemented with the function eval.
The program takes the prefix input and, via the parse function, converts it into lists of lists. From this internal representation the execution function takes each list element and evaluates each expression. The evaluation is done by the eval function; operators are mapped with an associative array that maps each operator to its implementation.
lisp]=> (define square (lambda (r) (* r r)))
lisp]=> (square 12)
144
When the interpreter sees "define", it checks the associative array for the presence of the function name; if it is not there, the name is added and the rest of the expression is mapped to it.
Instead of a plain associative array we have used an Environment class, a thin extension of an associative array, in order to incorporate a find function that looks up keys.
Now for the parse function. Parsing is traditionally separated into two parts: lexical analysis, in which the input character string is broken up into a sequence of tokens, and syntactic analysis, in which the tokens are assembled into an internal representation. For the lexical analysis we have used the tokenize function.
function tokenize(s)
{
    // Pad parentheses with spaces, then split on spaces.
    s = s.replace(/\(/g, " ( ").replace(/\)/g, " ) ").split(" ");
    var p = [];
    for (var i in s)
    {
        // Drop the empty strings left over by consecutive spaces.
        if (s[i] != "")
        {
            p.push(s[i]);
        }
    }
    return p;
}
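To illustrate the syntactic-analysis half (assembling the tokens into nested lists), here is a sketch in Python; the blog's interpreter does the equivalent in JavaScript, and the function name parse and the token-list argument are assumptions for illustration.
def parse(tokens):
    # tokens is a list of strings such as tokenize() produces.
    token = tokens.pop(0)
    if token == '(':
        lst = []
        while tokens[0] != ')':
            lst.append(parse(tokens))
        tokens.pop(0)              # drop the closing ')'
        return lst
    elif token == ')':
        raise SyntaxError('unexpected )')
    else:
        try:
            return int(token)      # a number...
        except ValueError:
            try:
                return float(token)
            except ValueError:
                return token       # ...or a symbol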
To keep up a good interactive session we have used a custom REPL which waits for user input.
process.stdin.resume();
process.stdout.write('lisp]=> ');
process.stdin.on('data', function(input) {
    input = input.toString();
    var val = eval(parse(tokenize(input)));
    if (val != undefined) {
        process.stdout.write('Result:' + val);
    } else {
        process.stdout.write('lisp]=> ');
    }
});
Click here for the whole code.
Friday, 16 November 2012
Huffman Coding in Python
Here is a simple explanation of code that encodes and decodes a string you enter, using Huffman data compression. Huffman coding is a lossless data compression technique based on a variable-length code
table for encoding a source symbol
where the variable-length code table has been derived in a particular
way based on the estimated probability of occurrence for each possible
value of the source symbol.
Huffman coding uses a specific method for choosing the representation for each symbol, resulting in a prefix code that expresses the most common source
symbols using shorter strings of bits than are used for less common
source symbols.
The main part of the code is the tree, where each instance creates either a leaf node or an internal node, depending on the parameters you pass.
Here I have created a tree data structure which has a value named "cargo", then a "left" and a "right", which are None when the node is a leaf node.
class Tree:
    def __init__(self, cargo, left=None, right=None):
        self.cargo = cargo
        self.left = left
        self.right = right
The creation of the tree is a bottom-up procedure, starting with leaf nodes:
left=Tree(1)
right=Tree(2)
Then we can create an intermediate node by passing left and right as the second and third arguments to the Tree class:
Tree(1,left,right)
This Tree structure can be called recursively to get a multilevel binary tree
Tree(1,Tree(1),Tree(2))
Here is a step-by-step description of the Huffman coding (a small sketch of these steps follows the list):
Step 1: First create a Tree data structure as explained earlier.
Step 2: Read the string character by character and create a list of tuples containing each character and its number of occurrences, which is its weight.
Step 3: The tuples are made into leaf nodes by creating an instance for each tuple without specifying the left and right parameters.
Step 4: After creating the leaf nodes, sort the list of nodes with the weights as the key. This can be done with the sorted function, passing the second element of the tuple as the key.
Step 5: Pop the two elements (trees) with the smallest weights, add their weights, and concatenate their characters.
Step 6: Sort again to get the two smallest elements.
Step 7: Repeat steps 4 to 6 until a single tree containing all the characters remains.
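A sketch of these steps using the Tree class above; the helper name build_tree, and the choice to store a (characters, weight) tuple in cargo, are assumptions made for illustration.
def build_tree(text):
    # Step 2: count occurrences of each character (its weight).
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    # Step 3: one leaf node per (character, weight) tuple.
    nodes = [Tree((ch, w)) for ch, w in counts.items()]
    while len(nodes) > 1:
        # Steps 4 and 6: sort by weight (the second element of the tuple).
        nodes = sorted(nodes, key=lambda t: t.cargo[1])
        # Step 5: pop the two lightest trees and merge them.
        left, right = nodes.pop(0), nodes.pop(0)
        merged = Tree((left.cargo[0] + right.cargo[0],
                       left.cargo[1] + right.cargo[1]), left, right)
        nodes.append(merged)
    return nodes[0]                # Step 7: a single tree remains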
By the previous seven steps we have created the tree for the given string. Now, using the tree, we can traverse it to encode the string.
Algorithm for the function encode (a sketch follows the steps):
Step 1: Pass the tree, an empty string and each character to the function.
Step 2: Check whether the character is under the current node; if yes, check the left or right node and traverse the tree in whichever direction the condition holds, until a leaf node is reached.
Step 3: For a left traversal concatenate the string with "0", and for a right traversal concatenate "1".
Step 4: The function returns the code when it reaches the leaf node.
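A sketch of the encode step under the same assumptions, where cargo[0] holds the characters covered by a subtree.
def encode_char(tree, code, ch):
    # Step 4: at a leaf node the accumulated code is returned.
    if tree.left is None and tree.right is None:
        return code
    # Steps 2 and 3: go left appending '0', or right appending '1',
    # following the subtree whose characters contain ch.
    if ch in tree.left.cargo[0]:
        return encode_char(tree.left, code + '0', ch)
    return encode_char(tree.right, code + '1', ch)

def encode(tree, text):
    return ''.join(encode_char(tree, '', ch) for ch in text)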
Algorithm for the function decode (a sketch follows the steps):
Step 1: Pass the tree, the code and an empty string to the function.
Step 2: A copy of the tree root is kept, so that after each character is found the traversal can restart from the root for the next symbol.
Step 3: The statements are similar to those of encoding, but the condition checks each 1 and 0 in the code string.
Step 4: The character found for each code is joined onto the output string.
Step 5: The function returns the string for the corresponding code.
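And a sketch of the decode step, walking the same tree bit by bit; again the names are illustrative.
def decode(tree, bits):
    out = []
    node = tree                        # Step 2: start from the root
    for bit in bits:                   # Step 3: '0' goes left, '1' goes right
        node = node.left if bit == '0' else node.right
        if node.left is None and node.right is None:
            out.append(node.cargo[0])  # Step 4: leaf reached, emit its character
            node = tree                # restart at the root for the next symbol
    return ''.join(out)                # Step 5: the decoded string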
This simple code can be tested by Huffman-coding a string or a sentence. Decoding can be checked against the same tree that was used for encoding: each character is assigned a unique code, and that code can be used to decode back to the corresponding character.
Thank you
Wednesday, 7 November 2012
Usb Accessory
Android supports a variety of USB peripherals and Android USB accessories through two modes: USB accessory and USB host. In USB
accessory mode, the external USB hardware acts as the USB host. The different accessories might
include robotics controllers; docking stations; diagnostic and musical equipment; kiosks; card
readers; and much more. This gives Android-powered devices that do not have host capabilities the
ability to interact with USB hardware. Android USB accessories must be designed to work with
Android-powered devices and must adhere to the Android accessory communication protocol. In USB
host mode, the Android-powered device acts as the host. Examples of devices include digital
cameras, keyboards, mice, and game controllers. USB devices that are designed for a wide range of
applications and environments can still interact with Android applications that can correctly
communicate with the device.
USB accessory mode is supported on devices running Android 2.3.4 or later. When an Android-powered device is in USB accessory mode, the attached Android USB accessory acts as the host, provides power to the USB bus, and enumerates connected devices. Although the USB accessory APIs were introduced to the platform in Android 3.1, they are also available in Android 2.3.4 using the Google APIs add-on library.
- com.android.future.usb: To support USB accessory mode in Android 2.3.4, the Google APIs add-on library includes the backported USB accessory APIs, and they are contained in this namespace. This add-on library is a thin wrapper around the android.hardware.usb accessory APIs and does not support USB host mode. If you want to support the widest range of devices that support USB accessory mode, use the add-on library and import this package. It is important to note that not all Android 2.3.4 devices are required to support the USB accessory feature. Each individual device manufacturer decides whether or not to support this capability, which is why you must declare it in your manifest file.
- android.hardware.usb: This namespace contains the classes that support USB accessory mode in Android 3.1.
There are two usage differences between using the Google APIs add-on library and the platform APIs.
If you are using the add-on library, you must obtain the UsbManager object in the following manner:
UsbManager manager = UsbManager.getInstance(this);
If you are not using the add-on library, you must obtain the UsbManager object in the following manner:
UsbManager manager = (UsbManager) getSystemService(Context.USB_SERVICE);
When you filter for a connected accessory with an intent filter, the UsbAccessory object is contained inside the intent that is passed to your application. If you are using the add-on library, you must obtain the UsbAccessory object in the following manner:
UsbAccessory accessory = UsbManager.getAccessory(intent);
If you are not using the add-on library, you must obtain the UsbAccessory object in the following manner:
UsbAccessory accessory = (UsbAccessory) intent.getParcelableExtra(UsbManager.EXTRA_ACCESSORY);
The following changes have to be made in the manifest file in order to support a USB accessory:
- Because not all Android-powered devices are guaranteed to support the USB accessory APIs, include a <uses-feature> element that declares that your application uses the android.hardware.usb.accessory feature.
- If you are using the add-on library, add the <uses-library> element specifying com.android.future.usb.accessory for the library.
- Set the minimum SDK of the application to API Level 10 if you are using the add-on library, or 12 if you are using the android.hardware.usb package.
- If you want your application to be notified of an attached USB accessory, specify an <intent-filter> and <meta-data> element pair for the android.hardware.usb.action.USB_ACCESSORY_ATTACHED intent in your main activity. The <meta-data> element points to an external XML resource file that declares identifying information about the accessory that you want to detect.
- In the XML resource file, declare <usb-accessory> elements for the accessories that you want to filter. Each <usb-accessory> can have the following attributes: manufacturer, model, version. Save the resource file in the res/xml/ directory; the resource file name (without the .xml extension) must be the same as the one you specified in the <meta-data> element.
Tuesday, 6 November 2012
Arduino ADK
The ADK is a microcontroller board based on the Mega 2560. As part of this revision of the microcontroller board, the ADK is equipped with a USB host interface.
Because the ADK is a USB Host, the phone will attempt to draw power
from it when it needs to charge. When the ADK is powered over USB, 500mA
total is available for the phone and board. The external power regulator
can supply up to 1500mA. 750mA is available for the phone and ADK
board. An additional 750mA is allocated for any actuators and sensors
attached to the board. A power supply must be capable of providing 1.5A
to use this much current.
The Mega ADK board is a derivative of the Arduino Mega 2560.
The modified Mega 2560 board includes a USB host chip. This host chip
allows any USB device to connect to the Arduino.
The USB host is not part of the original core of
Arduino. To use the new features on this board you will need to include
some libraries in your sketches.
There are three libraries needed to make the system work:
- MAX3421e: handles the USB host chip
- Usb: handles the USB communication
- Android Accessory: checks if the device connecting is one of the available accessory-enabled phones
Android Accessories
An Android accessory is a physical accessory that can
be attached to your Android device. These particular devices perform
specific actions. With an Android phone and the Mega ADK, you can use whatever sensors and actuators you require to create your own accessories.
The USB accessory and the device check to make sure
they are connected by passing back and forth product and vendor IDs.
Google offers two accessory codes for people to try out: product IDs
0x2D00 and 0x2D01. Google has the USB vendor ID 0x18D1.
Arduino IDE 1.0
Arduino’s software is based on Processing’s IDE. The current release is version 0022. The Mega ADK has been developed as part of The 1.0 beta release. The libraries needed to make an Android accessory using the USB host chip are not included as part of version 0022, so you’ll need the 1.0 beta to work with the ADK and Android.
There are some fundamental changes in the way version
1.0 works, including a new extension for sketches. The suffix will
change from *.pde to *.ino, and any previous sketches will need to be re-saved with the new extension.
Android SDK
The Android OS is based on Linux. Android Apps are made in a Java-like language running on a virtual machine called Dalvik.
Android offers a single download location to get the
development software used by the different hardware manufacturers. This
helps streamline development for different devices. You can get the
Android SDK from the Android development website. You can easily upgrade
to newer versions of the OS.
Google controls the main branch of the Android
development system. They produce the core and the libraries that link
the virtual machine with different peripherals.
If you’re going to make a commercially-sold accessory, it is your responsibility to:
- port their drivers to each new version of the OS
- create a ROM (functional image memory of a phone) that is compatible with that version of the OS
- provide the developer’s community with a port of their drivers via the SDK upgrade system
The manufacturers are not always ready with ports at the same time Google introduces a new revision of the OS. This has
created an interesting parallel ROM development community dedicated to
the creation of ROMs that include all the latest features yet capable
of running on older devices. One of the most successful mods is Cyanogen.
For USB accessories to be supported on a particular
device, there must be support for the accessory-mode, a special means of
connecting over the USB port. This allows data transfer between devices
and external peripherals.
Accessory mode is a feature of Android OS since version 2.3.4 Gingerbread and 3.1 Honeycomb.
Google suggests programming with the ADK using Eclipse and the Android SDK, together with Arduino’s IDE.
Eclipse is a multi-platform development environment. It performs operations
like code prediction, error correction, project storage, and multiple
workspace management.
To develop Android applications, the ADT (Android
Development Tools) plugin is needed on top of Eclipse.
Artificial Neural Networks
An artificial neural network (ANN) is a mathematical or computational model that is inspired by the structure and functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation.
The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in union to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.
The motivation for artificial neural network (ANN) research is the belief that human capabilities, particularly in real-time visual perception, speech understanding and sensory information processing, and in adaptivity as well as intelligent decision making in general, come from organisational and computational principles exhibited in the highly complex neural network of the brain. Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques.
A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse. This expert can then be used to provide projections given new situations of interest and answer "what if" questions.
Other advantages include:
Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning time.
Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation.
Friday, 5 October 2012
Why lambda?
In functional programming it is often not convenient to define and name a small, trivial function for every low-level operation; instead, such functions can be created in place and passed as arguments. The most convenient way to associate a variable with a particular operation, without binding it to a particular identifier, is to use lambda.
A lambda expression evaluates to a procedure. The environment in effect when the lambda expression is evaluated is remembered as part of the procedure; it is called the closing environment. When the procedure is later called with some arguments, the closing environment is extended by binding the variables in the formal parameter list to fresh locations, and the locations are filled with the arguments according to rules about to be given. The new environment created by this process is referred to as the invocation environment.
For example, suppose we want to compute a function of x and y using the intermediate values a = 1 + xy and b = 1 - y. One way to accomplish this is to use an auxiliary procedure to bind the local variables:
(define (f x y)
(define (f-helper a b)
(+ (* x (square a))
(* y b)
(* a b)))
(f-helper (+ 1 (* x y))
(- 1 y)))
Using lambda, the body of f becomes a single call to an anonymous procedure:
(define (f x y)
((lambda (a b)
(+ (* x (square a))
(* y b)
(* a b)))
(+ 1 (* x y))
(- 1 y)))
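For comparison, the same idea of binding a and b without naming a helper procedure can be sketched in Python; here square(a) is written directly as a ** 2, and the function name f simply mirrors the Scheme example.
def f(x, y):
    # An anonymous function binds a and b and is applied immediately.
    return (lambda a, b: x * a ** 2 + y * b + a * b)(1 + x * y, 1 - y)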
A lambda is also known as an anonymous function, since it avoids cluttering the namespace with unnecessary definitions of named functions.
Functional Programming
Functional programming (FP) is a programming paradigm; a paradigm describes a set of concepts and ideas. Functional programming offers a new way to abstract and compose structure and to avoid mutation. Functions are the basic structures that lead to higher procedural abstractions. Functions in FP are first-class citizens that can be
:: defined anywhere, inside or outside other functions
:: passed as arguments
Popular functional programming languages are
:: Lisp
:: Scala
:: Scheme
:: Racket
:: Clojure
:: Smalltalk
:: Ruby
Functions are a powerful way of abstracting operations and lower-order formulas in order to modularize each part of a program and combine those parts into higher-order constructs built from basic functions. Functions that can manipulate functions are known as higher-order functions. This powerful form of abstraction increases the expressive power of our language.
Glueing functions together
Glue enables simple functions to be glued together to make more complex ones. It can be illustrated with a simple list-processing problem: adding up the elements of a list. We define lists by
listof X ::= nil | cons X (listof X)
which means that a list of Xs (whatever X is) is either nil, representing a list
with no elements, or it is a cons of an X and another list of Xs. A cons represents
a list whose first element is the X and whose second and subsequent elements
are the elements of the other list of Xs.
[] means nil
[1] means cons 1 nil
[1,2,3] means cons 1 (cons 2 (cons 3 nil))
Our ability to decompose a problem into parts depends directly on our ability to glue solutions together. To assist modular programming, a language must provide good glue.
Functional programming languages provide two new kinds of glue: higher-order functions and lazy evaluation. Using these glues one can modularise programs in new and exciting ways, as the small snippets earlier showed; a minimal sketch follows below.
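To make the glue concrete, here is a sketch in Python of the cons-list definition above together with a higher-order fold that adds up the elements; the names cons, nil and foldr are illustrative, not from the original text.
nil = None                      # the empty list

def cons(x, rest):
    return (x, rest)            # a pair of head and tail

def foldr(f, acc, lst):
    # Replace every cons with f and the final nil with acc.
    if lst is nil:
        return acc
    head, tail = lst
    return f(head, foldr(f, acc, tail))

numbers = cons(1, cons(2, cons(3, nil)))        # [1,2,3]
total = foldr(lambda x, y: x + y, 0, numbers)   # 6, the sum of the elements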
Difference between if and a customized if (new-if)
Suppose we wanted to define a function new-if which acts just like if; for instance,
=> (if (< 2 3) 4 5)
4
and likewise
=> (new-if (< 2 3) 4 5)
4
This is not too hard to write:
(define new-if
  (lambda (test then-exp else-exp)
    (cond [test then-exp]
          [else else-exp])))
This even seems to work. But there is something deeply wrong with this definition. Try stepping through (fact 4) for the following definition of fact:
(define fact
  (lambda (n)
    (new-if (< n 3) n (* n (fact (- n 1))))))
No application of fact ever halts; the function keeps calling itself recursively forever.
The problem is that functions (such as new-if) always evaluate all their arguments, while special forms (such as if and cond) may only evaluate some of their arguments, leaving others as expressions rather than values. We can't use the ordinary part of Scheme to write new special forms, only new functions.
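The same pitfall can be sketched in Python: a user-defined new_if evaluates both branches before choosing between them, so a recursive definition written with it never stops. The names below are illustrative.
def new_if(test, then_exp, else_exp):
    # Both then_exp and else_exp have already been evaluated by the caller.
    return then_exp if test else else_exp

def fact(n):
    # Calling fact() recurses forever: new_if forces fact(n - 1) to be
    # evaluated even when n == 0, unlike a built-in if/else.
    return new_if(n == 0, 1, n * fact(n - 1))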
The theory behind new-if ties into the discussion of tail recursion in the previous blog post.
......Thank You......
Tail recursions
Tail recursion is the act of making a tail recursive call.
We can understand that term in parts.
A call is just application of a function.
A recursive call occurs when a function invokes itself.
A tail call occurs when a function's result is just the value of another function call. In other words, to determine the value of function f, we call g and return whatever g did, without modification.
Why use tail calls?
For a non-tail call, Scheme must remember that after the current function application, it is responsible for doing something else (in this case, adding 1 to the result); and there may be many such things to do. For a tail call, Scheme doesn't have to remember anything, because there is nothing more to do: after it is done with the current function call, it is really done.
Non-tail calls force Scheme to remember future actions (that couldn't be performed before), but tail calls don't. Non-tail calls require more space (because there is more to remember), so they are not as efficient as tail calls. Tail calls also permit us to forget the values of function parameters and local variables, because we never do any more work in the calling function, but a non-tail call forces us to remember such values, which might be used when we return to the calling function.
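The same contrast can be sketched in Python; note that Python, unlike Scheme, does not eliminate tail calls, so this only illustrates the shape of the two kinds of call. The factorial variants are illustrative, not from the post.
def fact_non_tail(n):
    if n == 0:
        return 1
    # Non-tail call: after the recursive call returns we still have to
    # multiply by n, so the caller must be remembered.
    return n * fact_non_tail(n - 1)

def fact_tail(n, acc=1):
    if n == 0:
        return acc
    # Tail call: the result is just whatever the recursive call returns.
    return fact_tail(n - 1, acc * n)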
(define foo
  (lambda (x)
    (if (even? x)
        (+ 1 (foo (/ x 2)))
        (bar x))))

(define bar
  (lambda (y)
    (* y 3)))
The definition of foo contains a recursive call to foo (it's recursive because foo is the procedure in which the call to foo appears) and a tail call to bar. The call to bar is a tail call because, if the parameter x is odd, then the value of the call to foo is just whatever bar returns.
Friday, 10 August 2012
Python Code Design Discipline
We can go through the rules to be followed while coding in Python step by step, starting from the top of a file, that is, from the imports.
Imports should be separated onto individual lines, for example:
import os
import sys
They should be put at the top of the file, above all global module code and constants.
They should also be grouped in the following order:
** Standard library imports.
** Related third-party imports.
** Local application/library specific imports.
Do not use more than one space around an assignment (or other) operator just to align it with another statement, such as:
x = 2
not
x     = 2
Always surround binary operators with a single space on either side: assignment (=), augmented assignment (+=, -=, etc.), comparisons (==, <, >, !=, <>, <=, >=, in, not in, is, is not), Booleans (and, or, not).
If operators with different priorities are used, consider adding
whitespace around the operators with the lowest priority. However, never use more than one space, and
always have the same amount of whitespace on both sides of a binary
operator.
Use 4 spaces per indentation level. For really old code that you don't want to mess up, you can continue to use 8-space tabs.
Continuation lines should align wrapped elements either vertically using Python's implicit line joining inside parentheses, brackets and braces, or using a hanging indent. When using a hanging indent the following considerations should be applied; there should be no arguments on the first line and further indentation should be used to clearly distinguish itself as a continuation line.
foo = long_function_name(var_one, var_two, var_three,
                         var_four)
When invoking the Python command line interpreter with the -t option, it issues warnings about code that illegally mixes tabs and spaces. When using -tt these warnings become errors.
Comments should be complete sentences. If a comment is a phrase or sentence, its first word should be capitalized, unless it is an identifier that begins with a lower case letter.
Block Comments generally apply to some (or all) code that follows
them, and are indented to the same level as that code. Each line of a
block comment starts with a # and a single space.
Inline Comments is a comment on the same line as a statement. Inline comments should be separated by at least two spaces from the
statement. They should start with a # and a single space.
Naming Conventions of Python's library are a bit of a mess, so
we'll never get this completely consistent -- nevertheless, here are
the currently recommended naming standards. New modules and packages
(including third party frameworks) should be written to these
standards.
joined_lower for functions, methods, attributes
joined_lower or ALL_CAPS for constants
StudlyCaps for classes
camelCase only to conform to pre-existing conventions
Attributes: interface, _internal, __private
Try not to use explicit loops where a built-in function or method will do; for example, join() can join all the elements of a list, as in the snippet below.
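For example, joining the elements of a list with join() instead of an explicit loop:
words = ['python', 'code', 'design']
sentence = ' '.join(words)   # 'python code design', no explicit loop needed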
Using TRUE in assignments and conditions is not encouraged.
For example, you should write
if x:
    print 'yes'
not
if x == True:
    print 'yes'
You can use the built-in split method to extract words from a paragraph instead of nesting for or while loops, which would decrease program readability.
>>> a = "This is prasanth's program".split()
>>> a
['This', 'is', "prasanth's", 'program']
""""simple""""
All these conventions have to be followed thoroughly so as to improve readability and to make reading your own programs easier.............
.................THANK YOU...................