I'm not an AI expert, but I have had some direct experience with some of the people developing AI systems. Back in the 1980s, I worked at a company where the Chief Technical Officer had direct connections with Marvin Minsky and the MIT AI Lab. The CTO declared that "fifth generation computing" was going to be a core technology that the company would pour research money into. The company spent millions of dollars on AI. They gave people "knowledge engineer" titles. They had flashy marketing videos and pamphlets featuring the CTO and Minsky promoting the future of computing. The company developed an AI workstation with a special custom AI processor chip. In the end, no significant revolution happened, millions of dollars were burned, and the "knowledge engineers" disappeared. The CTO left, and today his biography makes no reference to the embarrassing AI mess he dragged the company through for years.
Back then, I was a junior engineer and worked with some senior engineers who were very much against the company's AI strategy. Those senior engineers told me about all the flaws they saw in the AI developments, and how it was mostly marketing with little substance. Fortunately, my group only shared office space with the AI engineers; we were not working on AI directly. All the negative predictions came true as the whole AI effort fell apart when it came time to deliver tangible results. It left me with a skeptical view of any lofty claims about AI.
Now, more than 20 years later, I have yet to see an AI system that has impressed me. Today's computers can store much larger databases and have much faster processors than 20 years ago, but what I have seen in AI systems has been variants on database queries and conditional structures that have been around almost as long as computer science has been studied. I have seen nothing revolutionary, only scaled-up versions of old concepts.
Sure, it is fun to fantasize about thinking machines, but it is important to look at the problem realistically. I see too many cases where AI advocates jump to wild conclusions without even solving some of the basic problems of intelligence. It's as if the basic problems are too mundane for them, and they want thinking computers now. I have been very skeptical of the recent writings of long-time AI advocate Ray Kurzweil about "the singularity," since they blur reality with science fiction, jumping to big conclusions without convincing me that the intermediate steps to "the singularity" are realistic. To me, "the singularity" smells like "The Rapture" or Heaven. They all have evidence to support wishful thinking, but the evidence is flawed.
I do believe that some of the huge knowledge repositories being developed will eventually be useful if an artificial intelligence is developed. Even Google would be useful to an AI entity, just as Google is useful to humans. Show me an artificial intelligence that is similar to the natural intelligence of an insect, and I will be impressed and feel that we are headed toward a higher AI. Making a database and a bunch of if-then statements that act somewhat like an insect doesn't impress me; that's more like a video game character.
I do hope that an impressive AI will be developed, but I feel that there will have to be some major revolutions in computer architecture and programming first. With today's architectures, we can barely make software that keeps running without crashing.