The Race to Build AI that Can Think, Not Just Predict
A contemporary AI can write a poem or summarize a document, and do it very well. Yet when asked to solve a task that demands actual logical thinking, the kind a detective or a scientist relies on, it tends to collapse. This disconnect between prediction and reasoning is among the largest problems in AI today, and researchers around the world are racing to bridge it.
What Is the Difference?
Today's AI models, even the best language models, are highly effective pattern recognizers. They have been trained on billions of examples of human writing, learning to guess the next word or sentence, and the result can sound very clever. But anticipating the next word is one thing; understanding what that word means is another.
Real reasoning means drawing logical conclusions from the available facts, asking questions when the facts are incomplete, spotting inconsistencies, and revising a conclusion when new evidence arrives. Humans do this naturally. Most modern AI systems cannot.
To see the difference, tell an AI that all birds fly, then ask whether a penguin can fly. Most systems will answer yes: a penguin is a bird, and the AI mindlessly applies the rule. A reasoning system would recognize that the rule has exceptions, and that penguins are one of them.
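The penguin case is a classic example of default reasoning with exceptions. The sketch below is purely illustrative (the rule tables and the `can_fly` helper are assumptions, not any real system's API), but it shows the behavior a reasoning system should exhibit: the general rule holds by default, and a known exception overrides it.

```python
# Default rules and exceptions, both illustrative.
DEFAULTS = {"bird": {"can_fly": True}}        # "all birds fly" (by default)
EXCEPTIONS = {"penguin": {"can_fly": False}}  # unless we know otherwise

def can_fly(animal: str, kind: str) -> bool:
    """A known exception overrides the default rule for the kind."""
    if animal in EXCEPTIONS:
        return EXCEPTIONS[animal]["can_fly"]
    return DEFAULTS.get(kind, {}).get("can_fly", False)

print(can_fly("sparrow", "bird"))  # True: the default applies
print(can_fly("penguin", "bird"))  # False: the exception wins
```

A system that only pattern-matches would stop at "penguin is a bird, birds fly"; the exception check is the extra reasoning step.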
Why Does It Matter?
Reasoning ability is what separates a useful AI from a genuinely intelligent one. In medicine, an AI that can reason would not only offer a possible diagnosis but also explain its rationale, indicate which further tests are needed, and flag cases where symptoms contradict one another. In law, a reasoning AI could assess the logical soundness of an argument and find relevant precedents.
In business, reasoning matters for planning, strategizing, and managing unforeseen circumstances. Companies want AI that can think through a problem, not merely pattern-match it to a previous solution.
Current Approaches
Various research groups are pursuing different methods. One is chain-of-thought prompting, in which the AI is asked to "think aloud", breaking a problem into steps before giving a final answer. This has yielded good results on mathematics and logic problems.
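In practice, chain-of-thought prompting often amounts to rephrasing the question so the model is instructed to show its work. A minimal sketch of such a prompt wrapper follows; `ask_model` is a hypothetical stand-in for a real language-model API call, not an actual library function.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model is asked to reason step by step
    before committing to a final answer."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, and then state the final answer "
        "on a line beginning with 'Answer:'."
    )

prompt = build_cot_prompt(
    "A train leaves at 3pm and the trip takes 2.5 hours. When does it arrive?"
)
print(prompt)
# response = ask_model(prompt)  # hypothetical model call
```

The instruction to reason in steps is the entire technique; no change to the model itself is required, which is why the method caught on so quickly.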
Another method is to train AI on formal logic and formal reasoning tasks, in effect teaching the model the rules of rational thought. Still others are integrating AI with the decades-old rule-based systems that have long served computer science.
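The rule-based systems mentioned above typically work by forward chaining: firing any rule whose premises are already known, and repeating until nothing new can be derived. Here is a toy version of that mechanism; the facts and rules are illustrative, not drawn from any real system.

```python
def forward_chain(facts: set, rules: list) -> set:
    """Repeatedly fire any rule whose premises are all known,
    until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),  # premises -> conclusion
    ({"is_bird"}, "is_animal"),
]
print(forward_chain(facts, rules))  # derives is_bird, then is_animal
```

The appeal of such systems is that every conclusion can be traced back through the rules that produced it, exactly the kind of transparency that pure neural networks lack.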
Neurosymbolic AI, which attempts to unite the pattern-recognition power of neural networks with the precision of symbolic reasoning, is also receiving growing attention. It is a complex strategy, yet some researchers believe it is the most promising path forward.
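The neurosymbolic idea can be sketched as a two-stage pipeline: a neural module turns raw input into symbols, and a symbolic layer applies logical rules to those symbols. In the toy version below, `classify_image` is a hypothetical stub standing in for a real neural network; the rule in the symbolic layer is likewise an invented example.

```python
def classify_image(image) -> set:
    """Stub for a neural perception module: maps raw input to symbols.
    A real network would be trained; here we pretend it saw a car."""
    return {"has_wheels", "has_engine"}

def symbolic_layer(symbols: set) -> set:
    """Apply hand-written logical rules on top of the network's output."""
    conclusions = set(symbols)
    if {"has_wheels", "has_engine"} <= conclusions:
        conclusions.add("is_vehicle")
    return conclusions

print(symbolic_layer(classify_image(None)))
# the perception step finds the parts; the logic step names the whole
```

The division of labor is the point: the network handles messy perception, and the symbolic layer contributes rules that are explicit, auditable, and easy to correct.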
The Challenges
These are steps in the right direction, but the obstacles remain great. Reasoning requires grasping the context of a situation: not simply which words to say, but why something matters in a given circumstance. It also requires common sense, which is notoriously difficult to impart to a machine.
A reasoning AI must also know what it does not know. Existing models tend to answer with great confidence even when they are wrong. A genuine reasoning system would recognize its uncertainty and declare it.
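One simple way to make a system declare uncertainty is confidence-based abstention: answer only when the best candidate clears a confidence threshold, and otherwise say so. The sketch below assumes the system already produces scored candidates; the scores and the 0.8 threshold are illustrative choices, not a standard.

```python
def answer_or_abstain(candidates: dict, threshold: float = 0.8) -> str:
    """Return the best-scored answer only if its confidence clears
    the threshold; otherwise admit uncertainty instead of guessing."""
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return best
    return "I'm not sure"

print(answer_or_abstain({"Paris": 0.95, "Lyon": 0.05}))  # answers confidently
print(answer_or_abstain({"Paris": 0.55, "Lyon": 0.45}))  # abstains
```

This is crude (model confidence scores are often poorly calibrated), but it captures the behavior the paragraph calls for: an honest "I'm not sure" instead of a confident wrong answer.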
The Future
The teams that solve this problem will radically transform what AI can do. A true reasoner would be an AI co-discoverer, one that helps humans solve problems we have not yet figured out how to solve on our own. The competition is fierce, and the stakes are enormous.