IBM’s Watson defeated the reigning Jeopardy! champions in 2011, and DeepMind’s AlphaGo defeated the world Go champion in 2016. If algorithms have beaten the best humans at games as intuitive and complex as Go and Jeopardy!, you may think nothing can stop them from developing other cognitive abilities, even superhuman ones.
However, consider this: did Watson or AlphaGo understand the games they were playing? What meaning, if any, do the games have for these programs?
The Problem of Meaning
Is a program aware of what it is computing? Does it understand? Can a coherent program produce understanding by itself, with intention and interpretation emerging from it without anything additional?
A contentious issue for a long time now, this problem of meaning was most famously elucidated by the philosopher John Searle in his Chinese Room thought experiment.

This is how he makes the case: suppose you were enclosed in a room with a set of symbols and a rule book for processing them. Queries are fed in through an input window and you return answers through an output window. You work through the symbols correctly, but you are never told that the symbols are Chinese, a language totally unfamiliar to you. So then, do you truly understand Chinese just because you gave correct answers to the queries asked of you?
An observer outside the room might take you for a Chinese speaker, but does that make you one?
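The room’s procedure can be sketched as a toy program. Everything here is invented for illustration (the rule book and the sample queries are hypothetical): the point is that the program maps symbols to symbols while representing nothing about what they mean.

```python
# A toy "Chinese room": the rule book is a lookup table mapping input
# symbol strings to output symbol strings. The program applies the rules
# mechanically; nothing in it encodes what the symbols mean.
RULE_BOOK = {
    "你好吗": "我很好",        # hypothetical rule: query -> reply
    "你会说中文吗": "会",      # the operator never learns these are Chinese
}

def chinese_room(query: str) -> str:
    """Return whatever reply the rule book dictates, or a default symbol."""
    return RULE_BOOK.get(query, "不知道")

print(chinese_room("你好吗"))  # a fluent-looking reply, produced with zero understanding
```

To the outside observer the replies look competent; inside, there is only table lookup.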
Broadly, Dr. Searle’s argument asserts that human minds do not produce meaning from information processing alone. The mind arises from biology, and meaning is thus an epiphenomenon of biology. Computers can at best simulate the human mind, not duplicate it.
The proponents of strong Artificial Intelligence, who believe that with sufficient advancement AI will attain human-level cognitive abilities and soon surpass us toward super-intelligence, have raised certain objections to the Chinese Room argument.
a) Systems objection
This objection says that while the person inside the room might not understand Chinese, the room as a whole produces understanding. The person inside the room is likened to the CPU of a computer, or to an individual neuron firing in the brain: neither produces meaning on its own, but the brain as a whole does. Dr. Searle countered the objection by extending the thought experiment further. In a simplified version of his counter-argument, suppose you put half the non-Chinese-speaking population of India inside the room and asked them to manipulate and process the symbols, so that the room represents half of India just as the collection of neurons represents the brain. If they do it successfully, does that mean half of India is fluent in Chinese? It does not. His point is that the system does not acquire properties its components lack: if no individual component understands, the system does not either.
b) Robot objection
Here the AI camp concedes that a symbol manipulator does not necessarily understand Chinese. They recognise that meaning also arises from sensory knowledge of things in the outside world: a person who has never seen a hamburger will never know what the word refers to. To have sensory interaction and engagement with the outside world, the computer must be ’embodied’, i.e. it must have senses. So then, they say, a human-like robot can achieve understanding. For the strong AI proponents, robotics is the answer.
To this, Dr. Searle replies that a computer inside a robot is still manipulating symbols, even if it is getting inputs from ‘senses’. The new inputs simply add to the information to be processed; they do not confer understanding or meaning.
c) Brain simulator objection
Another objection by strong AI proponents is that simulating the brain will naturally produce understanding. Dr. Searle counters that however close a simulation of the brain you make, it will still not automatically produce understanding. To elucidate, he tweaks the original thought experiment: instead of manipulating symbols, the person inside the room operates an elaborate system of water pipes in which each connection corresponds to a synapse of the brain. By turning valves on and off, complex connections of pipes can be made to produce a result equivalent to that of the symbol manipulation. Despite the equivalent result, the pipes do not understand Chinese either. Dr. Searle’s point is that reproducing the formal structure of the sequence of neuron firings is not enough to produce understanding.
d) Other minds objection
Proponents of this objection note that the only way we know other people understand something is by their behaviour, i.e. by how they respond externally. Thus, if you attribute cognitive ability to a person because of his responses to you in Chinese, you must attribute the same cognition to a computer responding in Chinese.
To this, Dr. Searle says that attributing conscious understanding on the basis of computational process and output alone is a mistake, because such processing and output can happen without cognition. A calculator, for example, has no cognition; it simply performs a set of computations.
Searle (1984) presents a three-premise argument: because syntax is not sufficient for semantics, programs cannot produce minds.
A) Programs are purely formal (syntactic).
B) Human minds have mental contents (semantics).
C) Syntax by itself is neither constitutive of, nor sufficient for, semantic content.
Therefore, programs by themselves are neither constitutive of nor sufficient for minds.
In other words, Dr. Searle holds that strong AI will not be possible because programs do not truly understand, even if they can process.
Zenon Pylyshyn, a Canadian cognitive scientist and philosopher, offers another thought experiment bearing on strong AI. He asks you to imagine that the cells in your brain were being replaced, one by one, with integrated-circuit chips, each programmed so that its input-output function is identical to that of the unit being replaced. You would then keep speaking exactly as you do now, except that it would stop meaning anything to you. What an outside observer took to be words would, for you, be just noises that the circuits caused you to make.
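The “identical input-output function” requirement can be illustrated with a toy replacement. The neuron model, threshold, and input set below are all invented for illustration: a unit is swapped for a lookup table that reproduces its behaviour exactly while sharing none of its internal nature.

```python
# Hypothetical "neuron": fires (returns 1) when the weighted input crosses
# a threshold. Inputs are restricted to a small finite set so that a
# complete replacement table can be built.
INPUTS = [0.0, 0.25, 0.5, 0.75, 1.0]

def neuron(x: float) -> int:
    return 1 if 0.8 * x >= 0.4 else 0

# "Chip": a lookup table recording the neuron's entire input-output function.
chip = {x: neuron(x) for x in INPUTS}

# Behaviourally, the chip is indistinguishable from the unit it replaced.
assert all(chip[x] == neuron(x) for x in INPUTS)
```

The chip answers exactly as the neuron would at every input, which is all an outside observer can test; whether anything is thereby preserved besides behaviour is precisely what the thought experiment asks.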
Roger Penrose, the esteemed physicist, makes an argument that implicates strong AI in positing a ‘consciousness’-like entity. If strong AI is to be independent of the physical substrate that produces it, the suggestion is that the mind is independent of the brain. If the algorithms themselves are what matter and what they arise from is not, then it seems the brain does not give rise to the mind but simply holds it. This is ironic, as it argues for exactly the kind of entity, a free-standing consciousness, that the AI camp so badly wants to shun. Dr. Penrose himself believes that strong AI will not be possible with current science, because the brain cannot be duplicated at the classical, Newtonian level. He argues that mind arises from micro-structures at the quantum level, and that until science has progressed enough to arrive at a theory of quantum gravity, true duplication will be impossible. This elusive quantum gravity theory, ‘the correct one’ as he puts it, should be able to unify physical laws at all scales, from the quantum to the cosmological. Thus, without a ‘theory of everything’, he suggests, the human mind cannot be duplicated.
The Human Mind Might Not Be Just Algorithmic
Now, if the mind is to be simulated with algorithms, it will serve us to look at the implications of the mathematician Kurt Gödel’s incompleteness theorems. Gödel, a contemporary and good friend of Einstein’s, proved that for any consistent set of mathematical axioms R rich enough to express arithmetic, there is a statement G(R) that is true but not provable from the rules of R. In other words, what is true might not be provable, and what is false might not be refutable, within the system.
Thus there can be more truths in the world than those that are programmable.
So if a human can see that G(R) is true even though no formal program built on R can derive it, how will a machine, whose behaviour just is such a formal program, ever have G(R) in its system? If it cannot, such a system will be an incomplete representation of the human mind. How, then, will strong AI, at human level or beyond, happen?
With his second theorem, Gödel also proved that no such formal system, and hence no program or algorithm, can prove its own consistency; for all we can show from within it, it might turn out to be inconsistent, i.e. wrong.
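Stated in their standard textbook form (these are the usual formulations, not the author’s notation):

```latex
\textbf{First incompleteness theorem.} If $R$ is a consistent, effectively
axiomatized formal system capable of expressing elementary arithmetic, then
there is a sentence $G(R)$ such that
\[
  R \nvdash G(R) \quad\text{and}\quad R \nvdash \neg G(R),
\]
yet $G(R)$ is true in the standard model of arithmetic.

\textbf{Second incompleteness theorem.} Under the same hypotheses,
\[
  R \nvdash \mathrm{Con}(R),
\]
i.e.\ $R$ cannot prove the formal statement of its own consistency.
```

The argument in the text turns on the first theorem: $G(R)$ is recognisably true to us, yet underivable by any machine whose outputs are exactly the theorems of $R$.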

The implication drawn is that the human mind might not be algorithmic, i.e. we can understand certain truths without being able to give comprehensive, detailed explanations of why we do. There can be gaps in our explanations even where our understanding is sound.
Roger Penrose, in his book ‘The Emperor’s New Mind’, suggests that all physical laws known to us so far are algorithmic in nature, and therefore insufficient to program the human mind. He contends that non-algorithmic laws, yet to be discovered, are needed to explain the comprehensive functioning of the human mind. As stated earlier, he has often said that the mind can be simulated only at the quantum level, once a ‘correct’ quantum gravity theory is discovered.
Conclusion
The idea of a coming AI panacea that will make us eternal, suffering-free trans-humans is being spread with religious fervour and conviction. A deeper look, however, tells us that the promises of this new religion are just what religions have always promised: false fantasies of deliverance. AI proponents believe that AI systems will evolve to become human-like and then, eventually, superhuman; but if they are not working from the deepest fundamentals of human existence, how can they expect an understanding comprehensive enough for simulation, duplication or transcendence?
Hence technology led by AI is not the future of humans; newer, radically different fundamentals are. What we may need are deeper fundamentals, newer paradigms, different starting points, to deliver science and us humans.
Fresh fundamentals that can be built up to explain all levels and facets of reality, in totality: our universe, us humans, and our place in it.