Leonardo

Take a look at Leonardo, a fancy sociable robot at MIT. I may write about sociable robots later, but until then you may want to take a look at it.

Evolution and Learning

I have been doing some interesting research that combines evolution and learning in a new way. I may discuss it later, as it is still in its early stages.

Language and Thought

This is a very interesting article giving an example of the effect of language on thought. It shows that different languages can shape the way people think and act in the world. This idea seems very plausible to me, and I have believed in it for a while.

Considering Controlling Probabilities in Behavior Learning

Yesterday, I got stuck on a problem posed by Dr. Nili. The problem was quite simple: how should I update values in my subsumption architecture learning? What I had been doing seemed reasonable but was not compatible with my theory: I was updating each layer whenever it was in control or it output NoAction, without considering the "controlling probability" of each layer, which was inconsistent with my theory, in which those probabilities are very important. I changed the code to take those probabilities into account too: if a state-action pair in a behavior does not receive a reinforcement signal for a while, its value decays toward zero. This is natural, as its controlling probability is decreasing. Anyway, I implemented this code and it works. That is not very fascinating in itself, since the previous code worked too (maybe due to its intrinsic robustness). The interesting fact is that each behavior now also predicts its structural value, i.e., the sum of each behavior's values is equal to its value in the structure. This is the first time I have obtained this equality.
What remains to be done is to apply this algorithm to the object-lifting problem (I have already done it on the abstract one) and to check the other updating method, the standard one (not this averaging).
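To make the idea above more concrete, here is a minimal sketch (not my actual project code) of what "considering controlling probabilities" amounts to: each behavior's state-action value is updated in proportion to the probability that the behavior was actually in control, and values that receive no reinforcement for a while decay toward zero. The names (Behavior, ALPHA, DECAY) are made up for illustration.

    ALPHA = 0.1   # learning rate
    DECAY = 0.99  # decay applied to pairs that receive no reinforcement

    class Behavior:
        def __init__(self):
            self.q = {}  # (state, action) -> estimated value

        def update(self, state, action, reward, control_prob):
            """Move the value toward the received reward, weighted by the
            probability that this behavior was the one in control."""
            key = (state, action)
            old = self.q.get(key, 0.0)
            self.q[key] = old + ALPHA * control_prob * (reward - old)

        def decay_unvisited(self, visited):
            """Shrink values of pairs that got no reinforcement this period,
            reflecting their decreasing controlling probability."""
            for key in self.q:
                if key not in visited:
                    self.q[key] *= DECAY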

TOEFL days

Thesilog has lost one of its posts during the host transfer. Anyway, it was not very important. Emmm … I am rather busy studying for the TOEFL exam, which will be held tomorrow. I am a little stressed! Let's see what happens.

Credit Assignment Report

Today, I was working on my subsumption architecture credit assignment report. It is nearly complete now, which is very good news for me. The report is contaminated (!) with a lot of formulas, and it seems to be one of the most difficult reports one may encounter in a lifetime (I am kidding!). Emmm … what should I do now? I don't know, I cannot think anymore …

Neural Network presentation making

Dr. Yazdanpanah wants to present his Neural Network course using PowerPoint slides instead of writing on those old lovely whiteboards. He asked me to prepare the necessary PowerPoint files for the whole course, and I have been busy doing so for the past several days (the multi-layer feedforward NN part). It is not an easy task, but it is fun. I don't know whether you know it or not, but I love NNs and believe in the connectionist approach to intelligence. Therefore, preparing course material is a nice job: I get to re-read a lot of related things that I don't encounter in my daily project work, and I may learn some new stuff too. Anyway, I am not sure whether he will accept my work, but let's try!
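Just to illustrate the kind of material those slides cover, here is a bare-bones forward pass of a two-layer feedforward network with sigmoid hidden units. The sizes and names are my own choices for this sketch, not anything taken from the course.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x, W1, b1, W2, b2):
        """x: input vector; W1, b1: hidden layer; W2, b2: output layer."""
        h = sigmoid(W1 @ x + b1)   # hidden activations
        y = W2 @ h + b2            # linear output layer
        return y

    # Example: 3 inputs, 4 hidden units, 1 output
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
    print(forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2))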

Complexity Papers Online

And now, introducing Complexity Papers Online. There you can find many papers and dissertations, as well as links to other paper collections related to complexity theory. Evidently, there is no closed definition of complexity, and it ranges from learning and evolution to chaos theory. Anyway, it looks worthwhile.

AI Links

It is somewhat disappointing that there is so much useful stuff on the Internet that you cannot even read all the titles, let alone the content. Unfortunately, there is no simple way to read it all. Emmm … let's link to this site:

AI Links, which is maintained by Mark Humphrys, who has recently caught my attention due to his work on action selection and especially that interesting W-Learning idea.
Let me bring in its titles so that it is easier for you (and especially myself) to remember what you (I) can find in it.
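Since I mentioned W-Learning: as I understand it (this is my own rough sketch, not code or notation from Humphrys), each agent i keeps a value W_i(x) estimating how much it loses when some other agent wins control in state x. After the winner acts and agent i observes its own reward r_i and the next state y, the non-winning agents update roughly like this:

    def w_update(W_i, Q_i, x, a_i, r_i, y, actions, alpha=0.1, gamma=0.9):
        """One W-value update for a non-winning agent i.
        Q_i: dict (state, action) -> value; W_i: dict state -> W;
        a_i: the action agent i wanted; actions: the action set."""
        best_next = max(Q_i.get((y, a), 0.0) for a in actions)
        # How much worse off agent i is for not having been obeyed.
        loss = Q_i.get((x, a_i), 0.0) - (r_i + gamma * best_next)
        W_i[x] = (1 - alpha) * W_i.get(x, 0.0) + alpha * loss
        return W_i

The agent with the highest W in the current state then gets to choose the next action.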

Fuzzy SSA, Old Notes, and Predictive NN

Today, I was at the control laboratory doing some kind of research! I was supposed to write my technical report about value decomposition, credit assignment, and … in SSA, but I didn't write a single word (except this post that you are reading and a few more in the comment sections of some weblogs). In spite of that, today was not useless, as I had a chance to discuss a fuzzy generalization of SSA with Mohammad and also to re-read some previously devised theories, methods, and hypotheses of my project. In addition, I used MATLAB's Neural Network Toolbox for the first time and implemented a predictive network. I made a predictive model of an LTI system, and that was successful. Afterwards, I tried to predict my Anti Memoirs' daily hits, and that did not go well! It wasn't as easy as I had supposed. I will work on it later.
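For the curious, here is a rough Python analogue (not the MATLAB toolbox code I actually used) of the LTI experiment: simulate a stable second-order system, then train a tiny one-hidden-layer network to predict the next output from the two previous outputs and the previous input. The system coefficients and network sizes are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate a stable second-order LTI system driven by white noise.
    a1, a2 = 1.5, -0.7
    u = rng.normal(size=500)
    y = np.zeros(500)
    for t in range(2, 500):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + u[t - 1]

    # One-step-ahead pairs: predict y[t] from (y[t-1], y[t-2], u[t-1]).
    X = np.stack([y[1:-1], y[:-2], u[1:-1]], axis=1)   # (498, 3)
    T = y[2:]                                          # (498,)

    # Tiny one-hidden-layer network trained with plain gradient descent.
    W1 = rng.normal(scale=0.1, size=(8, 3)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.1, size=8);      b2 = 0.0
    lr = 1e-3
    for epoch in range(200):
        H = np.tanh(X @ W1.T + b1)     # hidden activations
        P = H @ W2 + b2                # predictions
        err = P - T
        # Gradients of the mean-squared error.
        gW2 = H.T @ err / len(T); gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H**2)
        gW1 = dH.T @ X / len(T); gb1 = dH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    print("final MSE:", np.mean(err**2))

The LTI case is easy because the next output really is a simple function of the recent outputs and inputs; daily blog hits presumably have no such clean structure, which is probably why that attempt failed.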