Lotfi Zadeh: A New Frontier in Computation – Computation with Information Described in Natural Language

I have one more set of notes from the WCCI conference. It is about an invited talk by Lotfi Zadeh, the father of fuzzy logic. I do not know whether this post will be useful to anyone, because I did not try to turn my notes into a readable text, e.g. sometimes I just wrote a word or phrase as a reminder for myself. However, it may work the same way for you and give you hints toward something useful!

Monday (17 July 2006) – 5:25PM
Now, I am going to attend Lotfi Zadeh’s invited talk. He is the father of fuzzy logic. I have never seen him before, so I am a bit excited. This encounter is especially important for me because I highly respect his ideas (he is some kind of idol for me!).
Let’s see what he will talk about. The title of the talk is “A New Frontier in Computation – Computation with Information Described in Natural Language”.

-He mentioned that one big problem in current AI research is the lack of attention to the problem of perception. He thinks that without solving this problem, we cannot reach human-level intelligent machines.

Natural language as a means of perception: it is not addressed that much in the AI community.

(What are generalized constraint-based systems?! [My question to myself. He addresses it later.])

Well …
Fundamental Thesis:
-Information = generalized constraint
-A proposition is a carrier of information

Meaning Postulate:
-Proposition = generalized constraint

Computation with Words!
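To make the “proposition = generalized constraint” idea concrete for myself: a proposition such as “X is young” constrains the variable Age(X) by the fuzzy set young. A minimal sketch, where the membership function and its parameters are entirely my own illustration, not from the talk:

```python
# Toy sketch: the proposition "X is young" acts as a generalized
# (here, possibilistic) constraint on the variable Age(X). The
# trapezoidal membership function and its parameters are entirely my
# own illustration, not from the talk.

def trapezoid(x, a, b, c, d):
    """Membership that is 1 on [b, c], ramping down to 0 at a and d."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def young(age):
    # "young": fully true up to 25, fades out completely by 40
    return trapezoid(age, 0, 0, 25, 40)
```

The constraint is graded: young(20) = 1.0 while young(30) ≈ 0.67 — there is no 0/1 cutoff.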

Quotation:
“Fuzzy logic” is not fuzzy logic.
Fuzzy logic is a precise logic of imprecision.

Granulation is a key concept!
There is no concept of granulation in probability theory.

What is the maximum value of a curve drawn with a spray pen?!
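One toy way I can read this question (my own interpretation, not Zadeh’s): a curve drawn with a spray pen has thickness, so each x gives a band of plausible values rather than a point, and the “maximum” of the curve is then only known up to an interval.

```python
# My own toy reading of the spray-pen question: each x gives a band
# [f(x) - eps, f(x) + eps] rather than a single value, so the maximum
# of the curve is itself an interval, not a crisp number.

def interval_max(xs, f, eps):
    lo = max(f(x) - eps for x in xs)   # every candidate maximum is >= lo
    hi = max(f(x) + eps for x in xs)   # ... and <= hi
    return lo, hi

xs = [i / 100 for i in range(101)]                    # grid on [0, 1]
lo, hi = interval_max(xs, lambda x: x * (1 - x), 0.05)
# the crisp max of x(1-x) on [0, 1] is 0.25; the sprayed version is
# only pinned down to the interval [0.20, 0.30]
```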

Precisiation!

(Is there anything like Fuzzy SVM or Fuzzy Margin Maximization?! Fuzzy Margins?! [My question to myself])

Precisiation and Imprecisiation
Humans have a tolerance for imprecisiation

Fuzzy logic deals with summarization.

Cointensive:
There are many definitions (e.g. of stability, causality, relevance, mountain, risk, linearity, cluster, stationarity) that are not cointensive. There is no 0/1 concept of stability; these are fuzzy concepts. [Well! I like this idea!]

“Generalized Constraint Language”

Melanie Mitchell, Coevolutionary Learning

WCCI (CEC) 2006 Invited Talk:
Coevolutionary Learning by Melanie Mitchell

This invited talk is about making better learning systems. The idea is to evolve both the “classifier” and the “training set” together (co-evolving them in a competitive setting). What makes this work well is spatial co-evolution: this “spatial” property is very important, and the other cases (non-spatial co-evolution or spatial evolution alone) did not work well. However, I should note that none of the experiments involved a real classifier. One of them was designing a cellular automaton to do a specific task (majority classification) and the other was fitting a tree-like structure (actually a GP tree) to some function.

She thinks that spatial co-evolution improves performance because:
-it maintains diversity in the population
-it produces “arms races”, with ICs targeting weaknesses in the CAs and CAs adapting to overcome those weaknesses.
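Mitchell’s actual experiments (CAs and GP trees) are too heavy to reproduce here, but the spatial scheme itself can be sketched in a few lines. The toy domain below is entirely my own invention, not hers: a “solver” passes a “test” if its value is at least the test’s value, selection is purely local on a grid, and the two populations push each other upward in a miniature arms race.

```python
import random

# Toy sketch of spatial co-evolution -- my own invented domain, NOT
# Mitchell's CA experiments. Each cell of a toroidal grid holds a
# "solver" and a "test" (both just real numbers); a solver passes a
# test if solver >= test. Selection is purely local, which is the
# "spatial" property the talk emphasized.

random.seed(0)
N = 10
solvers = [[random.random() for _ in range(N)] for _ in range(N)]
tests = [[random.random() for _ in range(N)] for _ in range(N)]

def neighbors(i, j):
    # 3x3 neighborhood (including the cell itself) on a torus
    return [((i + di) % N, (j + dj) % N)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)]

def step():
    global solvers, tests
    new_s = [[0.0] * N for _ in range(N)]
    new_t = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            hood = neighbors(i, j)
            # copy the neighboring solver that passes the most local
            # tests (ties broken by value), with a small mutation
            bi, bj = max(hood, key=lambda c: (
                sum(solvers[c[0]][c[1]] >= tests[a][b] for a, b in hood),
                solvers[c[0]][c[1]]))
            new_s[i][j] = solvers[bi][bj] + random.gauss(0, 0.05)
            # copy the neighboring test that defeats the most local
            # solvers (ties broken by value), with a small mutation
            bi, bj = max(hood, key=lambda c: (
                sum(solvers[a][b] < tests[c[0]][c[1]] for a, b in hood),
                tests[c[0]][c[1]]))
            new_t[i][j] = tests[bi][bj] + random.gauss(0, 0.05)
    solvers, tests = new_s, new_t

mean_before = sum(map(sum, solvers)) / N ** 2
for _ in range(30):
    step()
mean_after = sum(map(sum, solvers)) / N ** 2
# the two populations ratchet each other upward: a miniature arms race
```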

Also, you may like a quotation. Here is one for you: “Co-evolution is all about information propagation!”

Finally, I asked her whether it is reasonable to evolve the training set for a classification task. By evolving the training samples too, the pdf of the training set changes, which means the evolved samples may come from a distribution completely different from that of the test set; i.e., the classifier evolves to optimize a loss function for a distribution other than the “population” (or test, or true) distribution.
She answered, but I was not completely satisfied by her answer.
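A small numerical version of my worry (all numbers invented for illustration): with a restricted hypothesis class, the threshold that minimizes risk under the evolved training distribution is not the one that minimizes risk under the test distribution.

```python
# Toy illustration of the distribution-shift worry (numbers invented).
# With a restricted hypothesis class (thresholds on x), the threshold
# that is optimal under the evolved training distribution is not the
# one that is optimal under the test distribution.

p1 = [0.8, 0.1, 0.9]            # P(y = 1 | x) for x = 0, 1, 2

def risk(t, pdf):
    """Expected error of the rule 'predict 1 iff x >= t' under pdf."""
    err = [(1 - p1[x]) if x >= t else p1[x] for x in range(3)]
    return sum(w * e for w, e in zip(pdf, err))

test_pdf = [0.6, 0.2, 0.2]      # the "true" input distribution
evolved_pdf = [0.1, 0.8, 0.1]   # training inputs after co-evolution

best_for_test = min(range(4), key=lambda t: risk(t, test_pdf))        # 0
best_for_evolved = min(range(4), key=lambda t: risk(t, evolved_pdf))  # 2
# on the test distribution, the evolved-data optimum is much worse:
# risk(2, test_pdf) = 0.52 vs risk(0, test_pdf) = 0.32
```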

Self-Assembling in Swarm Robotics: The Swarm-bot Experiment

Now, I am sitting in this big Pavilion Ballroom waiting for Marco Dorigo to start his presentation. He is going to talk about “Self-Assembling in Swarm Robotics: The Swarm-bot Experiment”.

His group has built swarm-bots (s-bots). S-bots are small robots that can connect to each other and do things that a single robot cannot.

The technological motivations for this work are robustness, flexibility, and scalability.

S-bots are 12 cm.
Also I found out that Khepera is pronounced as Kepera.

The simulator they developed has different levels of abstraction as the model of the robot. One important problem for them was selecting the right level of complexity for the simulation of their robot.

To design the robots’ controllers, they either hand-coded a behavior-based system or evolved a neural network to do so. They did all the design in the simulator and then downloaded the result to the real robots.

They evolved a NN to do collective movement of a swarm of s-bots.
They used perceptrons and evolved their parameters using a simple ES method.
I missed (or maybe it was not mentioned) the observation space of the NN.
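The talk did not give the details of the network or the ES, so the following is only a generic sketch of the approach (the toy task and all the parameters are my own): a perceptron whose weights are evolved by a simple (1+1)-ES instead of being trained by gradient descent.

```python
import random

# Generic sketch (my own toy task and parameters, not Dorigo's setup):
# evolve the weights of a perceptron with a simple (1+1)-ES rather
# than training them by gradient descent.

random.seed(1)
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND

def output(w, x):
    # perceptron: threshold on w . (x1, x2, 1); w[2] is the bias
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] >= 0 else 0

def fitness(w):
    return sum(output(w, x) == y for x, y in DATA)

w = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(500):
    child = [wi + random.gauss(0, 0.3) for wi in w]  # mutate
    if fitness(child) >= fitness(w):                 # (1+1)-ES selection
        w = child
# fitness(w) is non-decreasing over the run and typically reaches 4/4
```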

It seems that the resulting controller works in different situations: for instance, it is scalable and can do coordinated movement from different initial formations of robots, etc.

Then they tried to evolve a controller for more complex tasks such as hole avoidance. They used a ground-hole detector on each robot and also a microphone to detect sounds. The action space is motion plus an emitted sound, which can be used to communicate with the other robots.

Path Formation:

Ongoing work:
-Functional self-assembling
-Functional shape formation
-Adaptive task allocation
-Evolution of recurrent neural nets for memory …

Finally, I asked about using ideas from learning (RL). He said that the main reason they did not test it is the lack of resources (as each experiment takes a lot of time).
After the talk, I asked about comparing reinforcement learning and the evolutionary approach for designing autonomous agents. He said that the main problem is that nobody really compares different methods on the same robotic task because it is time-consuming (and people in the field are lazy (these are my words!)).

Well … I liked the talk. It was fun. More importantly, I was really impressed by the results.

You may find more information on the project’s website: Swarm-bots

Memoirs of the World Congress on Computational Intelligence 2006

This week, I am in Vancouver attending the World Congress on Computational Intelligence 2006 (WCCI). This congress consists of three important computational intelligence-oriented conferences: the International Joint Conference on Neural Networks (IJCNN), the IEEE Congress on Evolutionary Computation (CEC), and the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).

Here, I will try to write about the congress. I guess I cannot devote enough time to writing very organized and well-written reports from the conference, but I will try to note my impressions of some aspects of it. This means I will give pointers to the papers I find interesting, write a few key sentences about the content of the invited talks, and even mention whether the coffee was good or not.

One important thing that I must emphasize is that I certainly cannot attend all the sessions (there are usually 15 parallel sessions). This means that I may miss many good papers. Anyway, let’s see what happens.

AI in the right way?

The question is this: are we, as AI-interested people, going in the right direction in our research? Haven’t we gotten stuck in a local minimum of research and lost the correct direction?
Let’s be more precise. I assume that the goal of AI research is making a machine as intelligent and as capable as a human (or even more so). In this regard, I am a strong-AI believer. Although I consider applied AI (the AI that is used to help humans feel better and solve their daily needs) worthy, I do not see it as the ultimate goal of the field (though this is just a subjective viewpoint).

Today we have many different and powerful tools that are all considered part of AI, e.g. sophisticated margin maximizers used as classifiers for our pattern recognition problems, a mathematically sound statistical theory of learning, various search methods, several evolution-inspired methods, and many others. Now, if I ask you whether we can make an intelligent machine or not, what will your answer be? My answer is no! We cannot build it with our current level of knowledge. And I suspect we cannot build it even if we find better and tighter generalization bounds in the future! (;

I cannot prove it for sure (nobody can!), but I feel so. I like this mathematical sophistication, but I think research on these topics has distracted our minds from seeing the big picture. Where is an SVM supposed to be in our intelligent robot? In its visual cortex?! 😀
What is the right way? I do not know. However, I have a guess. I will write more about it later.