Monday, 20 July 2015

Some basic concepts of Theoretical Neurosciences.

I think concepts like convolution, RC circuits, the cable equation, and the Poisson distribution are very basic things one should know before starting Theoretical Neurosciences.

Convolution is found everywhere!! RC circuits are used when we model a neuron as a combination of a resistor (R) and a capacitor (C) (precisely in Hodgkin-Huxley models). The cable equation is used in Rall's cable theory, which describes how a pulse traverses axons modeled as cables. Neuron spiking is often modeled as a Poisson process, and the Poisson distribution is the limit of the binomial distribution. Let's look at each of them separately. I found some videos on neuron modelling; watch them before going through the concepts.
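As a quick taste of the Poisson point, here is a minimal sketch in Python; the firing rate, duration, and bin width are my own assumed values, not anything standard. It uses exactly the binomial-to-Poisson limit: chop time into tiny bins, let each bin spike independently with probability r·dt, and the spike counts in a window approach a Poisson distribution as dt shrinks.

```python
import numpy as np

# Minimal sketch: a homogeneous Poisson spike train via the binomial limit.
# Split time into tiny bins of width dt; each bin spikes independently
# with probability r*dt (a Bernoulli trial). As dt -> 0, the count of
# spikes in any window converges to a Poisson distribution.
rate = 20.0   # firing rate r in Hz (illustrative assumption)
T = 1.0       # total duration in seconds
dt = 1e-3     # bin width in seconds

rng = np.random.default_rng(0)
spikes = rng.random(int(T / dt)) < rate * dt   # one Bernoulli trial per bin
spike_times = np.nonzero(spikes)[0] * dt

print(f"{spikes.sum()} spikes in {T} s (expected ~{rate * T:.0f})")
print("first spike times (s):", spike_times[:3])
```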

CONVOLUTION:
coming soon!!

Sunday, 19 July 2015

Some Basic Notes/Discussions on Jeff Hawkins's book On Intelligence:
Jeff says that present artificial intelligence technologies are fundamentally different from how the mammalian brain works. He says that we should at least get to know some basic working principles of the brain before implementing them on machines, but people have been doing the work without studying the brain. All AI technologies have failed miserably at tasks which humans can do without any effort. In his book he sets out to define the term 'INTELLIGENCE'.

The academic world has been saying that our brains are too complicated and that it's almost impossible to understand them. He is not convinced by that, and he gives a few examples from the past, such as when people didn't accept the idea that the Earth is spherical in shape, and a few sections of society took it to heart and started persecuting people!! He says complexity is a symptom of confusion, not a cause. Yes, I agree with that!! It's obvious that when something as complex as the brain is what you are supposed to decipher, confusion inherently exists.

He also points out what is wrong with the Chinese Room thought experiment. There the person inside doesn't actually understand Chinese, yet he is still able to do the work. But our brains are not like that: they recognize patterns, and we understand what things mean. Inherently, the way AI and brains work is different. The ability to make predictions about the future is the crux of intelligence. What he says is that AI proponents saw a parallel between computation and thinking.

A human need not do anything to understand a story. One can read a story, and it can be clear and understood. But others cannot tell from one's quiet behavior whether he/she understood it, or even whether he/she knows the language. Others can ask the person who read the story whether he/she understood it or not. But that person's understanding occurred when he read the story, not just when he answered the questions.
He says the brain and AI differ on a few things. They are:
--> The concept of time in the brain
--> The importance of feedback (note that back propagation in neural networks isn't feedback in this sense: it happens during training, on the input side, not when an output is required, whereas in the brain feedback also happens while producing output)


Neural networks have no sense of time or history. They give a standard output for a standard input.

The AI and neural net communities concentrated only on the behavior/output of the system. Jeff argues that behavior doesn't measure intelligence: one can be intelligent and at the same time sit idle, thinking about and forming solutions in one's head; this doesn't mean one is not intelligent. So our concept of intelligence was not complete, and AI was unsuccessful for this reason.

Intelligence is not just a matter of acting or behaving intelligently. Behavior is a manifestation of intelligence, but not the central characteristic or primary definition of being intelligent.

He argues we can create mindful machines, and he brings in functionalism to explain that. He says: "According to functionalism, being intelligent or having a mind is purely a property of organization and has nothing inherently to do with what you are organized out of. A mind exists in any system whose constituent parts have the right causal relationship with each other, but those parts can just as validly be neurons, silicon chips, or something else."

The neocortex, which is the seat of intelligence, has 6 layers. It's the new brain, which means it evolved recently, and that's why mammals have different cognitive abilities than reptiles, which do not have a neocortex. The medulla oblongata, cerebellum, and thalamus are all parts of the old brain, and mostly they are involved in motor, behavioral, and emotional matters. The neocortex is involved with memory, intelligence, and assisting the old brain. That cells in our brain create the mind is a fact, not a hypothesis.

Sight, hearing, and touch seem very different, but the way the cortex processes signals from the ear is the same as the way it processes signals from the eyes. The cortex does something universal that can be applied to any type of sensory or motor system.
Consider the fact that we have special visual areas that seem to be specifically devoted to representing written letters and digits. This doesn't mean that we are ready to process letters and digits as soon as we are born. It all depends on the environment and how the brain trains itself. Any brain, if put in the right environment, can learn any number of languages, be they spoken, signed, musical, or mathematical.


He says that there is a single powerful algorithm being implemented in every part of the cortex, arranged in a suitable hierarchy which is provided with input from the environment. So he argues that there is no reason for the intelligent machines of the future to have the same senses or capabilities as we do. The cortical algorithm can be implemented in novel ways so that genuine, flexible intelligence emerges outside of biological brains.

He says Turing went wrong in saying behavior is proof of intelligence.

The four attributes of neocortical memory that are fundamentally different from computer memory are:
--> The neocortex stores sequences of patterns.
--> The neocortex recalls patterns auto-associatively.
--> The neocortex stores patterns in an invariant form.
--> The neocortex stores patterns in a hierarchy.
Let me explain some terms here.
Auto-associativity: It's a kind of memory which can be retrieved fully when only a small portion of it is given as input. A perfect example is a camouflaged soldier in a photograph. At first sight you won't notice him, but when told to look for a soldier, your brain starts looking for the associated patterns, and then you recognize there is a soldier in the picture.
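To make this concrete, here is a toy sketch of auto-associative recall using a Hopfield-style network. This is my own illustration, not something from the book, and the pattern size and noise level are arbitrary assumptions:

```python
import numpy as np

# Toy auto-associative memory (Hopfield-style): store one pattern, then
# recover it from a corrupted cue, like spotting the camouflaged soldier.
rng = np.random.default_rng(1)
pattern = rng.choice([-1, 1], size=100)   # stored +/-1 pattern

# Hebbian weights: outer product of the pattern with itself, zero diagonal
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# Corrupt 20% of the bits to make a partial cue
cue = pattern.copy()
flip = rng.choice(100, size=20, replace=False)
cue[flip] *= -1

# Synchronous sign updates until the state stops changing
state = cue
for _ in range(10):
    new_state = np.sign(W @ state)
    if np.array_equal(new_state, state):
        break
    state = new_state

print("bits recovered:", int((state == pattern).sum()), "/ 100")
```

The corrupted cue plays the role of the camouflaged soldier: a partial pattern that pulls the full stored memory back out.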

Invariant form: When you think of a human face, what you get is a certain picture of what a face is. You won't get any particular face, not yours or your girlfriend's!! You get a picture which has two eyes, a nose, and a pair of ears. If you notice a face with something missing, something extra, or even something out of place, then that is an exception. You have an invariant picturisation of objects in your brain.

Hierarchy: As already discussed, the neocortex has 6 layers, and these are organized in a hierarchy. If you go down the hierarchy, there are particular neurons trained to detect particular patterns; if the input from the lower levels matches the trained condition of the neurons, they fire, passing the signal further up to the upper levels. So at the lower levels you will basically see some seemingly random spikes which make no sense on their own. At this level all information, whether touch, olfactory, sight, or hearing, is in the same form!! Spikes!! All processed similarly for patterns. In the upper layers it is recognized whether the picture you are seeing belongs to a tiger or a lion!!
 
Earlier the power of a computer was compared using CLOCK SPEED; nowadays we do it by comparing RAM and NUMBER OF CORES. I hope someday in the not-very-far future we will do the same using the term EXPERIENCE. Say, you could boast: hey! my computer has got XXX years of experience (data), so it knows the surroundings very well and performs tasks quicker!!
To know more about Jeff's work, go through Numenta, the company he started. They have developed some algorithms which have a concept of time.


Sunday, 31 May 2015


Some Introductory Concepts for Neuromorphic Engineering:
coming soon.....!!!!!!
Before going to the actual stuff, let's watch some interesting videos.
Here are some good short notes on neuromorphic chips.
And here is a good research paper.
Lecture notes on the Silicon Retina.
Watch some videos about Neurosciences on Redwood's page.

Friday, 15 May 2015

Some other Sources

Hi, welcome back after so many days. I just came back from a visit to my undergraduate college, National Institute of Technology Calicut, and I have finished my internship at the Robotics Research Center, International Institute of Information Technology, Hyderabad. I am getting ready to join the ECE department at Boise State University, where I will be working under Dr. Elisa Barney Smith. Boise has 2 neuromorphic groups: Here. After my search regarding neuromorphic engineering research groups at various universities, I have found a few more other than ETH Zurich's group. They are as below:
There is one Stanford group called Brains in Silicon, Cornell's group called Asynchronous VLSI and Architecture, and one more group called the Institute of Neuromorphic Engineering, associated with the University of Maryland, College Park, I think. I found one more group from UCSD as well. There is also the Computational Sensorimotor Systems Laboratory at UMD.

There is one more professor at RMIT, Australia, working on memristor-like technologies: Dr. Shanta Sriram.

There is one group in Singapore, SINAPSE. There is also an institute, the INCF, working in the same area.

There is one researcher at Princeton working on neuromorphic engineering: Princeton University.
One more researcher at the University of Waterloo is also working on the same: UWaterloo.

A physicist at Heidelberg is working on brain-inspired computing.
A professor at Dayton is also working on the topic.
A Johns Hopkins professor is also working on it.
Neural Engineering
A Swedish institute is also working on mimicking a neuron.
A Hong Kong University of Technology professor also works in the same area.
Another professor at HKUT works on the same.
A professor in Oslo is also working on the same.
A professor from IMSE Seville is also working on it.
A professor at the University of Western Sydney.
A professor at the University of Sydney.
A lab at the University of Sydney working on neuromorphic hearing devices.
MIT professor Rahul Sarpeshkar is also working on it!!
A WWU prof.
A UTK prof.
An Italiano prof.
The Rama Lab at Washington University in St. Louis.


I started doing the Introduction to Analog Electronics and Digital IC Design courses from Dr. R. Jacob Baker's website, cmosedu.com. I am also doing Introduction to Linear Dynamical Systems by Stephen Boyd from Stanford and Computational Neurosciences by UWash on Coursera. This is a hell of a lot, but I should do them thoroughly to get up to pace. Actually, three of the courses above cover material I mostly know already.

Tuesday, 3 March 2015

Intro to Neuromorphic Engineering.

This blog is to keep a record of my work towards a brain-inspired machine. It will help keep things tracked. I will post my work and the resources I find regarding the same topic here.

These are the courses/areas one should have studied before delving deep into neuromorphic engineering. Once more I would like to say that it's not compulsory, but it's better if you understand these topics. Nowadays everything is interdisciplinary. These courses will give you an introduction to Computational Neurosciences and Analog/Digital VLSI.




1. Computational Neurosciences (Coursera)
2. Fundamentals of Digital Image and Video Processing (Coursera) (Extra)
3. Image & Video Processing (||) (Extra)
4. Machine Learning (||) (Extra)
5. Neural Networks for ML (||)
6. DSP (MIT OCW) and revisit Circuits, Signals and Systems
7. Exploring Neural Data (||)
8. Understanding the Brain (||)
9. Microelectronic Circuits (UC Berkeley)
10. Intro to Analog IC Design (course)
11. CMOS Analog IC Design (||)
12. Physical IC Design (||)
13. CMOS Mixed Signal Circuit Design (||)
14. Advanced Analog IC Design (||)
15. Cadence/VLSI (||)
16. Synapse, Neuron and Brain (Coursera)
There are some sources for learning neuromorphic engineering, like ETH Zurich: ETHZ.
And for some other resources, visit:
neuromorphicengineering
NOTE: Courses 2, 3 and 4 are not necessary, but when you have a proper idea about them you can think about an application in multiple ways. It helps in correlating many theories.

The first course covers many aspects for starters. In the first few videos, concepts related to the brain and neurons are taught. The next few weeks are more about information coding and theory. The following weeks cover the RC circuit equivalent of neurons and dynamic analysis. The last couple of weeks deal with machine learning concepts. In total the course is a good package for starters. It will be better to understand the concepts of RC, RLC, and RL circuits and systems analysis to develop intuition and a proper understanding of the RC equivalent of neurons. A good book written by the professor who taught us Circuits and Networks, Signals and Systems, and Analog MOS Circuits at NIT Calicut for the B.Tech 2010-2014 batch can be referred to for dynamic system and network analysis. I will post some problems from his tutorials which can give a good understanding of dynamic system analysis.
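As a taste of what that RC equivalent looks like in practice, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The membrane is modeled as a resistor and capacitor in parallel; all parameter values below are illustrative assumptions, not numbers from the course:

```python
import numpy as np

# Minimal RC membrane (leaky integrate-and-fire) sketch.
# Resistor R and capacitor C in parallel give the membrane equation
#   tau * dV/dt = -(V - V_rest) + R * I_in,   with tau = R * C.
# Integrated here with the Euler method; a spike fires at threshold.
tau = 20e-3       # membrane time constant (s), illustrative
R = 10e6          # membrane resistance (ohm), illustrative
V_rest = -65e-3   # resting potential (V)
V_th = -50e-3     # spike threshold (V)
V_reset = -65e-3  # reset potential after a spike (V)
I_in = 2e-9       # constant input current (A)
dt = 1e-4         # time step (s)

V = V_rest
spike_times = []
for step in range(int(0.5 / dt)):          # simulate 0.5 s
    V += dt / tau * (-(V - V_rest) + R * I_in)
    if V >= V_th:                          # threshold crossing -> spike
        spike_times.append(step * dt)
        V = V_reset                        # reset membrane potential

print(f"{len(spike_times)} spikes in 0.5 s")
```

With these numbers the steady-state voltage V_rest + R*I_in sits above threshold, so the neuron fires regularly; drop I_in below 1.5 nA and it stays silent, which is the basic intuition the course builds on.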



With the clock ticking very fast for Moore's Law, a single chip's capability to compute will be limited by the size of the transistor. I think it's time to look for alternative forms of computing. Von Neumann machines are completely opposite to the human brain in many aspects: the human brain is slow while present machines are fast, and we are good with patterns while it is inherently hard for a machine to recognize patterns. With the invention of the memristor, the fourth fundamental circuit element, neuromorphic engineering has gained importance. Corporate giants like IBM, Qualcomm, and HRL Laboratories have been working in this area, investing billions of dollars.
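For reference, Chua's definition of the memristor, the relation it adds among the four circuit variables, can be written as:

```latex
% Resistor ties v to i, capacitor ties q to v, inductor ties \varphi to i;
% the memristor supplies the missing charge-flux relation:
M(q) = \frac{d\varphi}{dq}, \qquad v(t) = M\big(q(t)\big)\, i(t)
```

So a memristor behaves like a resistor whose resistance depends on the history of the charge that has flowed through it, which is exactly the memory-like property that makes it attractive for synapse-like devices.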

To know more about the area, please see these: TrueNorth, Neuromorphic Engineering, Memristor, Dharmendra Modha.
I recently started the book On Intelligence by Jeff Hawkins. I will discuss some of the points from it as soon as I complete it.