The question is this: Are we, as AI-interested people, going in the right direction in our research? Or have we gotten stuck in a local minimum of the research landscape and lost the correct direction?
Let’s be more precise. I assume that the goal of AI research is to make a machine as intelligent and as capable as a human (or even more so). In this regard, I am a strong-AI believer. Although I consider applied AI (AI used to help humans feel better and solve their daily needs) worthy, I do not believe it is the ultimate goal of the field (though that is just a subjective viewpoint).
Today we have many different and powerful tools, all of which are considered part of AI: sophisticated margin maximizers used as classifiers in our pattern recognition problems, a mathematically sound statistical theory of learning, various search methods, several evolution-inspired methods, and many others. Now, if I ask you whether we can build an intelligent machine or not, what will your answer be? My answer is no! We cannot build it with our current level of knowledge. And I suspect we cannot build it even if we find better and tighter generalization bounds in the future! (;
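To make the "margin maximizer" concrete: the idea behind a linear SVM is to find a separating hyperplane while keeping the margin around it as wide as possible. Here is a minimal sketch, using hinge loss plus L2 regularization trained by subgradient descent; the toy data, hyperparameters, and function names are all made up for illustration, not taken from any particular library.

```python
def train_linear_svm(data, labels, lr=0.01, lam=0.01, epochs=200):
    """Fit weights w and bias b on 2-D points with labels in {-1, +1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:
                # Point is inside the margin (or misclassified): push it out.
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:
                # Correctly classified with room to spare: only shrink w,
                # which is what widens the margin.
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def predict(w, b, point):
    x1, x2 = point
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

# Two linearly separable toy clusters.
data = [(1.0, 1.0), (1.5, 1.2), (2.0, 0.8),
        (-1.0, -1.0), (-1.5, -0.8), (-2.0, -1.2)]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(data, labels)
print(predict(w, b, (1.2, 1.0)))    # → 1
print(predict(w, b, (-1.2, -1.0)))  # → -1
```

A useful tool, certainly; but the point of this post is exactly that such a component, however well-tuned, does not obviously add up to a mind.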
I cannot prove it for sure (nobody can!), but that is how it feels to me. I like this mathematical sophistication, but I think research on these topics has distracted our minds from seeing the big picture. Where is an SVM supposed to sit in our intelligent robot? In its visual cortex?! 😀
What is the correct way? I do not know. However, I have a guess, and I will write more about it later.
The problem is not theory or the direction of research. The problem in AI, and throughout the academic world, is that there are no big projects to guide research. Looking at autonomous cars, we can see that even contemporary AI is capable of doing some cool things. AI researchers need to start working on AI applications of increasing sophistication rather than more theory.