Lotfi Zadeh: A New Frontier in Computation – Computation with Information Described in Natural Language

I have one more set of notes from the WCCI conference. It is about an invited talk by Lotfi Zadeh, the father of fuzzy logic. I do not know whether this post will be useful to anyone, because I did not try to turn my notes into readable text; sometimes I just wrote a word or phrase as a reminder for myself. Still, it may work the same way for you and give you hints toward something useful!

Monday (17 July 2006) – 5:25PM
Now, I am going to attend Lotfi Zadeh’s invited talk. He is the father of fuzzy logic. I haven’t seen him before, so I am a bit excited. This encounter is especially important for me because I highly respect his ideas (he is something of an idol for me!).
Let’s see what he will talk about. The title of the talk is “A New Frontier in Computation – Computation with Information Described in Natural Language”.

-He mentioned that one big problem in current AI research is the lack of attention to the problem of perception. He thinks that without solving this problem, we cannot reach human-level intelligent machines.

Natural language as a way of perception. It is not addressed much in the AI community.

(What are generalized constraint-based systems?! [My question to myself. He addresses it later.])

Well …
Fundamental Thesis:
-Information = generalized constraint
-A proposition is a carrier of information

Meaning Postulate:
-Proposition = generalized constraint

Computation with Words!

Quotation:
“Fuzzy logic” is not fuzzy logic.
Fuzzy logic is a precise logic of imprecision.

Granulation is a key concept!
There is no concept of granulation in probability theory.

What is the maximum value of a curve drawn with a spray pen?!

Precisiation!

(Is there anything like Fuzzy SVM or Fuzzy Margin Maximization?! Fuzzy Margins?! [My question to myself])

Precisiation and Imprecisiation
Humans have tolerance to imprecisiation

Fuzzy logic deals with summarization.

Cointensive:
There are many definitions (e.g. stability, causality, relevance, mountain, risk, linearity, cluster, stationarity) that are not cointensive. There is no 0/1 concept of stability. These are fuzzy concepts. [Well! I like this idea!]
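To make the idea concrete, here is a minimal sketch of what a fuzzy (rather than 0/1) concept of stability might look like. The behavior, breakpoints (0.5 and 1.0), and the linear transition are all made-up illustration values, not anything Zadeh proposed:

```python
def stability_membership(max_pole_magnitude):
    """Degree of membership in the fuzzy set 'stable' for a
    discrete-time linear system, judged by its largest pole magnitude.
    Hypothetical breakpoints: fully stable below 0.5, fully unstable
    at or above 1.0, with a linear transition in between."""
    if max_pole_magnitude <= 0.5:
        return 1.0          # clearly stable
    if max_pole_magnitude >= 1.0:
        return 0.0          # clearly unstable
    # linear transition between the two crisp regions
    return (1.0 - max_pole_magnitude) / 0.5

print(stability_membership(0.3))   # 1.0
print(stability_membership(0.75))  # 0.5
print(stability_membership(1.2))   # 0.0
```

The point is only that "stable" becomes a matter of degree instead of a crisp yes/no predicate.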

“Generalized Constraint Language”

Duality of Inversion and Gradient Descent?

Suppose there is a linear problem defined as X = inv(A)b. You solve it using an iterative gradient-descent method on some error function like ||X − inv(A)b||^2 (or ||AX − b||^2). Moreover, suppose you know that X is a member of class_X, which simplifies the gradient-descent method in some way (you know that you should not move in certain directions). Can we incorporate this knowledge into traditional linear matrix equation solvers (not those based on calculating the gradient of an error function), or use it somehow to find the inverse of A more efficiently? I know I have not defined this problem clearly, but my description may hint at what the problem is: is there any duality relation between calculating the inverse of A and solving the minimization problem I mentioned?!
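The gradient-descent half of the question can be sketched in a few lines. This is a minimal illustration, assuming the class_X knowledge takes the form of a projection (here, the made-up assumption that X is nonnegative); the matrix, vector, and step size are arbitrary example values:

```python
import numpy as np

def solve_gd(A, b, project=None, lr=0.01, iters=5000):
    """Minimize ||A x - b||^2 by gradient descent.
    `project` optionally maps each iterate back into a known class of
    solutions -- a stand-in for the class_X knowledge in the text."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b)   # gradient of the squared error
        x = x - lr * grad
        if project is not None:
            x = project(x)               # e.g. clip to the known class
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = solve_gd(A, b)                                        # unconstrained
x_pos = solve_gd(A, b, project=lambda v: np.maximum(v, 0.0))
print(x, x_pos)   # both converge to the solution of A x = b
```

Direct solvers like Gaussian elimination have no obvious hook for such a projection step, which is one way of restating the duality question above.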

Seeking a Good Testbed for Testing Expertness-based Fuzzy Q-Learning

Yesterday, I had another session at IPM (Institute of Theoretical Physics and Mathematics) after a long period of not having any. This session was rather odd, as I didn’t understand exactly what was going on – or, more precisely, my expectation of what it should have been was different from what it was. It was a kind of brainstorming session, which I would call a constrained brainstorming session: it wasn’t cost-free to discuss everything (the atmosphere becomes stressful), even though that was its aim. Anyway, we are looking for a good practical example for Fuzzy Q-Learning in which expertness is well-defined. Different applications were suggested, such as 1) money exchange, 2) face recognition, 3) modular robots, 4) MIMO control, 5) site traffic control, and 6) a medical diagnosis expert. Do you have any ideas about it?

Fuzzy SSA, Old Notes, and Predictive NN

Today, I was at the control laboratory doing some kind of research! I was supposed to write my technical report about value decomposition, credit assignment, and so on in SSA, but I didn’t write a single word (except this post that you are reading and a few more in the comment sections of some weblogs). In spite of that, today was not useless, as I had a chance to discuss a fuzzy generalization of SSA with Mohammad and also re-read some previously devised theories, methods, and hypotheses of my project. In addition, I used MATLAB’s Neural Network Toolbox for the first time and implemented a predictive network. I made a predictive model of an LTI system, and that was successful. Thereafter, I tried to predict my Anti Memoirs’ daily hits, and that was not OK! It wasn’t as easy as I supposed. I will work on it later.
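Why is an LTI system so easy to predict? Because its output is an exact linear function of past outputs and inputs, so even plain least squares recovers the dynamics perfectly. This is a minimal sketch (not the Neural Network Toolbox model I used); the system coefficients 0.8 and −0.15 are arbitrary illustration values:

```python
import numpy as np

# Simulate a simple stable LTI system:
#   y[t] = 0.8*y[t-1] - 0.15*y[t-2] + u[t]
rng = np.random.default_rng(0)
u = rng.normal(size=500)          # known input signal
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.8 * y[t-1] - 0.15 * y[t-2] + u[t]

# One-step-ahead predictor: regress y[t] on (y[t-1], y[t-2], u[t])
X = np.column_stack([y[1:-1], y[:-2], u[2:]])
target = y[2:]
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
print(coeffs)   # recovers the true parameters (0.8, -0.15, 1.0)
```

Daily blog hits, by contrast, are driven by unobserved and nonstationary causes, so no fixed linear (or small neural) model of past values fits them this cleanly.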

Approximate Reward report writing

Today, I came to the Control Lab in order to write a technical report about approximate reward in RL. I wrote something, but my efficiency was not very good – e.g. you may get involved in a long conversation and cannot escape! 😀 Anyway …
During my writing, I found out that there might be some fallacy in agnostic learning: the policy would change after the agnostic reinforcement signal changes. I am not sure whether my result is correct or not.
If I can prove that the policy does not change the value function, everything would be OK! It is not correct in general, but it may be correct in some situations, e.g. if I can be sure that every state-action pair is visited infinitely often, then V → V* and the policy is irrelevant. Hmm … this must be thought through!
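One small sanity check along these lines: if the "approximate" reinforcement signal is only a positive rescaling of the true reward, the optimal value function changes but the greedy policy does not. A toy tabular Q-learning experiment (the two-state MDP below is entirely made up) illustrates this, and also the "visit every state-action pair infinitely often" condition via uniform exploration:

```python
import numpy as np

# Tiny deterministic MDP: 2 states, 2 actions (made-up illustration).
# P[s, a] = next state, R[s, a] = reward.
P = np.array([[0, 1], [0, 1]])
R = np.array([[0.0, 1.0], [2.0, 0.0]])
gamma = 0.9

def q_learning(R, steps=20000, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((2, 2))
    s = 0
    for _ in range(steps):
        a = rng.integers(2)            # uniform exploration: every (s, a)
        s2 = P[s, a]                   # is visited again and again
        Q[s, a] += alpha * (R[s, a] + gamma * Q[s2].max() - Q[s, a])
        s = s2
    return Q

Q1 = q_learning(R)
Q2 = q_learning(2.0 * R)   # positively scaled ("approximate") reward
# Value functions differ, but the greedy policies agree:
print(Q1.argmax(axis=1), Q2.argmax(axis=1))
```

Of course this only covers one benign class of changed reinforcement signals; an arbitrary agnostic change to the reward can certainly change the policy.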

Behavior learning in SSA: a mid-work report

I am working on SSA again. Behavior learning is possible but is not consistent in the object-lifting task, i.e. I cannot be sure whether it works in every trial or not. I changed that abstract problem to include “NoAction” actions with different behaviors (in both state and action space), and it seems fine. I must work on it more, but I believe the difficulty of the object-lifting task is inherent to it: 1) it is not a Markov problem, and 2) the reward function is not well-defined in it. Anyway, I am going to investigate my methods on it.

Post those-busy-days era: Chaos control and Co-evolution

At last, I finished the bulk of reporting work that I was engaged in during the last week. I had to write a technical report about chaos control and a paper on evolutionary robotics. This heavy work – with approaching deadlines and too little time – was quite stressful for me. Fortunately, I got it all done!

The first one, which is written in Persian (Farsi), is a literature survey on different methods of chaos control. I have been fascinated by chaos for a long time (perhaps since I was 12. Yes?! What is the problem?!), but I could not find any opportunity to do some real scientific research, or at least reading, on it. Apart from a short, not-too-academic research project that I did in the first year of my BSEE, I found a chance to do a real one when I entered graduate school and began my MS studies (the first one was about using a chaotic signal to solve some optimization problem; after that, I did two chaos control projects too).
Thus, this rather good literature survey was a very pleasant experience for me. In spite of those readings, I am not a chaos specialist anyway! 😀

The second one, which is entitled Behavior Evolution/Hierarchy Learning in a Behavior-based System using Reinforcement Learning and Co-evolutionary Mechanism, was the result of some experiments in evolutionary robotics. You may know that I believe in the evolutionary mechanism (be it natural or artificial), though many think that it is just an idiot (with IQ = 0.0001) given enough time to try every case. Nevertheless, I got some good results mixing co-evolution and learning, which was fascinating. I mainly did this research in order to satisfy the requirement of getting a mark for Dr. Lucas’ Biocomputing course, but that was only the ignition. Anyway, Dr. Nili and Dr. Araabi told me not to submit this paper anywhere before submitting some other papers first.

What is my thesis about?!

I have not written anything here directly related to my project. You may wonder whether this guy is a machine learning student or a philosophy student. (; Anyway, I may change my high-security-with-copyrighted-material situation if everything goes this way. However, I will try to write something about my project – I hope it will be fun and encouraging!
Let’s briefly discuss what I have done up to now:
As you know, I am working on learning in behavior-based systems. I have chosen the Subsumption architecture as a base architecture due to its success in the design of a lot of behavior-based systems. I decomposed the learning process into two different situations: 1) structure learning and 2) behavior learning.
In the former case, I have supposed that the designer knows how each behavior works, and s/he wants the learning mechanism to place each behavior in its correct place. S/he guides this process by giving the agent a reinforcement signal that rewards or punishes its actions. In the latter case, the designer knows the correct structure of the architecture, but s/he is not aware of the way each behavior must act. For instance, s/he knows that there must be an obstacle-avoidance behavior superior to all other behaviors, but s/he does not know what the appropriate action in each case is.
To learn a behavior-based system, one must solve both of these problems. What I have done so far is try to solve them in a special case. I have some partial results, but the problem is not solved completely.
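The split between the two learning problems can be sketched with a minimal subsumption-style arbiter. The behavior names and the sensor format below are hypothetical illustrations, not my actual setup:

```python
# A minimal sketch of subsumption-style arbitration (hypothetical behaviors).

def obstacle_avoidance(sensors):
    if sensors["obstacle_close"]:
        return "turn_away"
    return None                      # not applicable: defer to lower layers

def move_to_goal(sensors):
    return "go_forward"              # always has an opinion

def arbitrate(layers, sensors):
    """The highest-priority behavior that proposes an action subsumes
    the rest. 'Structure learning' would learn the ordering of `layers`;
    'behavior learning' would tune the mapping inside each behavior."""
    for behavior in layers:          # ordered from highest to lowest priority
        action = behavior(sensors)
        if action is not None:
            return action
    return "idle"

layers = [obstacle_avoidance, move_to_goal]
print(arbitrate(layers, {"obstacle_close": True}))   # turn_away
print(arbitrate(layers, {"obstacle_close": False}))  # go_forward
```

In structure learning, the reinforcement signal judges the ordering of the list; in behavior learning, the ordering is fixed and the signal judges the sensor-to-action mapping inside each layer.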