by Jerry Richardson 9/30/14
Will mankind ever be able to build a thinking machine that is, in every way, the equal of a human being? Google is working to develop a quantum computer chip that, it is said, may one day enable thinking machines. And Connecticut’s Westport Library has recently acquired two humanoid robots, “Vincent” and “Nancy,” from Aldebaran Robotics:
[September 2, 2014] San Francisco (AFP) – Google said it is working on a super-fast ‘quantum’ computer chip as part [of] a vision to one day have machines think like humans.
[September 29, 2014] WESTPORT, Conn.—They have blinking eyes and an unnerving way of looking quizzically in the direction of whoever is speaking. They walk, dance and can talk in 19 different languages. About the height of a toddler, they look like bigger, better-dressed cousins of Buzz Lightyear.
And soon, “Vincent” and “Nancy” will be buzzing around the Westport Library, where officials next week will announce the recent acquisition of the pair of humanoid “NAO Evolution” robots. Their primary purpose: to teach the kind of coding and computer-programming skills required to animate such machines.
“Robotics is the next disruptive technology coming into our lives…”
Is there an appropriate name for a thinking-machine, a machine-man?
I’m not asking about “humanoid robots”; I’m not asking about a name for such hypothetical creatures as Star Trek’s Borg (a hive of cybernetic organisms). I am asking about a useful name for a hypothetical creature that would resemble, in function and somewhat in looks, Commander Data from Star Trek.
The term android is a well-known term for an automaton that “resembles” a human. But I am not looking for a word that just indicates a resemblance; I want a word that indicates a deep inherent likeness.
A likeness that replicates essence and function but is not a clone; it has to be an original creation, engineered and built by humans without the use of any genetic material, human DNA or otherwise.
Since the Old Testament word for mankind (Genesis 1:27, male and female) is the transliterated Hebrew word ‘âdâm, I am going to use the name ADAM2 for the concept of a sentient machine-human creature that is a partial facsimile of ADAM1. ADAM2 is, for this discussion, a man-designed and man-built machine.
The primary difference from ADAM1 would be that ADAM2 would not consist of carbon-based, protoplasmic flesh and bones; some other material would be used; however, ADAM2 would possess the look and “feel” of a fleshly creature—not like the Tin Man in The Wizard of Oz. Since he is not composed of protoplasm, he would not “eat food” as we know it. Some other source of energy would be necessary. He would “live” a long time, but would not be “immortal,” since he, being a machine, would eventually fall victim to the second law of thermodynamics and wear out. (Note: If we play the ‘replace his parts’ game, when, and on what basis, does he cease to be who he was originally? Let’s not go there in this discussion.)
All this elaborate specification, for our discussion, is to ensure that, in our minds, ADAM2 is conceptually, without question, a machine and not an artificially grown embryo or some sort of advanced human clone. In other words, ADAM2 would possess real artificial life and real artificial intelligence.
Here was the result of the “building” of the first Adam, ADAM1:
God spoke: “Let us make human beings in our image, make them reflecting our nature So they can be responsible for the fish in the sea, the birds in the air, the cattle, And, yes, Earth itself, and every animal that moves on the face of Earth.” God created human beings; he created them godlike, Reflecting God’s nature. He created them male and female.
—Genesis 1:26-27 MSG
Will there ever be an ADAM2?
ADAM2 has to possess sentience, consciousness, and self-awareness. Can he be designed and built by mankind? Or, stated differently, how human-like can a man-built machine ever be?
With that in mind, here are my fundamental questions about the functioning of a hypothetical ADAM2:
1: Could ADAM2 be able to “think” (remember, reason, plan, decide, imagine, dream, arrive at unexpected genius conclusions, etc.)?
2: Could ADAM2 be self-conscious (know that he exists and is finite)?
3: Could ADAM2 have a conscience (sense of right and wrong)?
4: Could ADAM2 exercise free will (make non-predetermined, non-programmatic choices based only upon his own independent reasoning)?
5: Could ADAM2 be able to sin (intentionally violate moral standards)?
If philosophical naturalism, aka scientific materialism, aka material monism (the default assumption of modern science) is true, then the answer to questions 1–5 would have to be yes, for the simple reason that human beings are, according to scientific materialism, nothing other than meat-machines that have been assembled randomly by evolutionary processes. And whatever a meat-machine can do should, in theory, be accomplishable by some other properly constructed machine.
Scientific materialists will perhaps resent the description of a human as a meat-machine. But given their philosophy, what else could a human be? Materialism provides no realm outside of the natural from which to draw ingredients; accordingly there must be some sort of natural explanation for everything.
Of course, nothing yet designed or constructed by humanity can provide a yes answer to any of the five questions. Currently, the productive direction seems to be toward humanoid robots. At this time, my 5 questions all require speculation: there are no machines that demonstrate yes answers to any of them. However, speculation strongly continues that some, if not all, of the answers could one day be yes. This speculation has been generated and studied primarily in the research areas of AI (artificial intelligence) and brain research.
Recently, some physicists who specialize in quantum physics have weighed in with reasoning intended to challenge the ruling paradigm of naturalism that overshadows most AI and brain research.
Amit Goswami holds a Ph.D. in theoretical quantum physics; in his latest book, God Is Not Dead, he argues against material monism (scientific naturalism) with his own notion of monistic idealism, in which his name for “God” is quantum consciousness (in essence a type of pantheism). I don’t favor his philosophical pantheism, but I do enjoy Goswami’s take-down of materialists:
“But behold, please. Materialists make the ontological assertion that matter is the reductionistic ground of all being: everything, even consciousness, can be reduced to material building blocks, the elementary particles and their interactions. They hold that consciousness is an epiphenomenon, a secondary phenomenon of matter that is the primary reality. What I demonstrate is the necessity of turning the materialist science upside down. Quantum physics demands that science be based on the primacy of consciousness. Consciousness is the ground of all being, a being that mystics call Godhead. Let materialists realize that it is matter that is the epiphenomenon, not consciousness.”
—Goswami, Amit (2012-04-01). God Is Not Dead (p. 7). Kindle Edition.
Most attempts at building a model for a “thinking” ADAM2 have been produced in AI research labs using some adjusted version of the paradigm, brain = computer. This has been popular, and has resulted in some useful applications such as expert-systems.
Expert systems function with if-then branching trees. The user selects an if, the system offers then options, the user chooses another if, and so on. Eventually a sensible then is filtered out from the preceding selection of multiple related ifs.
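This if-then branching can be sketched in a few lines of code. The following is a minimal illustration only; the domain (a toy troubleshooting guide) and all of its rules are invented for the example, not drawn from any real expert system.

```python
# A minimal sketch of an expert-system "if-then" branching tree.
# The rules and domain (a toy hardware-troubleshooting guide) are
# hypothetical, invented purely for illustration.

# Each node is either a question with answer branches, or a final "then".
TREE = {
    "question": "Does the computer power on?",
    "branches": {
        "no": "Check the power cable and supply.",
        "yes": {
            "question": "Does the screen display anything?",
            "branches": {
                "no": "Reseat the video cable or graphics card.",
                "yes": "Boot problem: check the disk and OS.",
            },
        },
    },
}

def consult(tree, answers):
    """Walk the branching tree using the user's sequence of "if"
    selections, returning the "then" conclusion that is filtered out."""
    node = tree
    for answer in answers:
        node = node["branches"][answer]
        if isinstance(node, str):  # reached a conclusion
            return node
    return None

print(consult(TREE, ["yes", "no"]))
# -> Reseat the video cable or graphics card.
```

Each call to `consult` mirrors one consultation session: the user’s answers are the ifs, and the string finally returned is the then the system has filtered out.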
A human expert can think this way, but he usually doesn’t. He doesn’t cross-examine himself in this fashion. In fact, a human expert often cannot explain exactly how he knows what he knows. As Michael Polanyi, a Hungarian-British polymath, explained in The Tacit Dimension, “We can know more than we can tell.” Polanyi termed this pre-logical phase of knowing ‘tacit knowledge’. Human experts actually rely on tacit knowledge; they don’t cross-examine themselves as AI expert systems do.
So what does this suggest?
This suggests that the human mind does not primarily operate in an algorithmic mode. Computers operate with algorithms; the algorithms are encoded in the software and some in the firmware. No one has ever proven that the human mind operates primarily with algorithms; even though this has been the implied model since computers became so available, important, and prominent.
Modern end-user computers employ operating systems such as Windows (from Microsoft) that operate in an event-driven mode. This simply means that the user does something to create an event—press a key, click a mouse, point the cursor at an object, or touch the screen with a finger—and the operating system senses the event and responds with an action.
The computer’s action is then controlled by one or more algorithms, and it presents the user with a choice or choices of some kind. This process continues until the user quits: he either gets what he wants, tries something else, or gives up in frustration. We’ve all been there.
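The event-driven mode described above can be sketched as a simple loop. This is a toy illustration, not how Windows actually works internally: the event types, handlers, and queue here are all hypothetical stand-ins for the hardware interrupts and message queues a real operating system uses.

```python
# A minimal sketch of an event-driven loop. The events and handlers
# are hypothetical; a real OS senses events via hardware interrupts
# and dispatches them through message queues, not a Python list.

from collections import deque

def on_key(event):
    return f"typed '{event['key']}'"

def on_click(event):
    return f"clicked at {event['pos']}"

# Each handler is an algorithm the system runs in response to an event.
HANDLERS = {"keypress": on_key, "click": on_click}

def event_loop(events):
    """Sense each user-created event and respond with an action,
    continuing until the user quits."""
    queue = deque(events)
    actions = []
    while queue:
        event = queue.popleft()
        if event["type"] == "quit":   # the user gives up or is done
            break
        handler = HANDLERS.get(event["type"])
        if handler:
            actions.append(handler(event))
    return actions

print(event_loop([
    {"type": "keypress", "key": "a"},
    {"type": "click", "pos": (10, 20)},
    {"type": "quit"},
]))
# -> ["typed 'a'", "clicked at (10, 20)"]
```

The key point for the argument that follows: nothing happens in this loop until an external event arrives. The loop itself never originates an event.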
It could be argued that the human brain functions in an event-driven fashion. Events are presented and the brain accesses algorithms that are “processed” by the collective action of brain neurons that have been “programmed” over time for just such a purpose.
That sounds good. But there is a major problem with the scenario.
What triggers events?
If we are moving about in the external world, events happen and we respond. No problem with that scenario. But what about when we are sitting quietly in the solitude of our own room, no one around, and we are thinking. We think, and in the process we ask ourselves a question.
Who/what triggered that event? Who/what asked the question? Of course it is me; it is myself; it is the ever-present “I”.
But, what and who is the “I” who asked me a question?
No one, to my knowledge, has an answer to this question that isn’t an intellectual dodge. The most common dodge is that the “self,” the “I,” is an illusion. Here’s how Daniel C. Dennett, probably the foremost proponent of that verbal-hand-waving dodge, stated it:
“In our brains there is a cobbled-together collection of specialist brain circuits, which thanks to a family of habits inculcated partly by culture and partly by individual self-exploration, conspire together to produce a more or less orderly, more or less effective, more or less well-designed virtual machine […] this virtual machine, this software of the brain, performs a sort of internal political miracle: it creates a virtual captain of the crew.”
—Daniel C. Dennett, Consciousness Explained (1991), p. 228, Little Brown and company, Boston
In other words, according to Dennett, the “self,” the “I,” is just an illusion, a “virtual captain of the crew” created by our brain circuits. The real “sort of…miracle” (Dennett’s terminology) is that anyone would be persuaded by Dennett’s verbal hand-waving.
The Nobel laureate neurophysiologist John Eccles had this to say about Dennett’s (and others’) scientific materialism:
“There has been a regrettable tendency of many scientists to claim that science is so powerful and all pervasive that in the not too distant future it will provide an explanation in principle for all phenomena in the world of nature, including man, even of human consciousness in all of its manifestations. [Karl] Popper has labeled this claim as promissory materialism, which is extravagant and unfulfillable.
“I regard this theory as being without foundation. The more we discover scientifically about the brain, the more clearly do we distinguish between the brain events and the mental phenomena, and the more wonderful do the mental phenomena become. Promissory materialism is simply a superstition held by dogmatic materialists. It has all the features of a Messianic prophecy, with the promise of a future freed of all problems—a kind of Nirvana for our unfortunate successors.”
—John Eccles, Indictment of Scientific Materialism
Thinking and consciousness are two undeniable phenomena that are conceptually inseparable in our introspective world of reality. Scientific materialism has another major problem with this inseparable pair: Does thought require consciousness? If so, why; if not, why not?
If we can trace the computer’s input-output performance to the activities of its internal circuits without any ambiguity, without losing the trail (and this, at least in principle, should always be possible for a classical computer), then what is the necessity for consciousness? It would seem to have no function. I think it is an evasion of the issue for artificial intelligence protagonists to say that consciousness is only an epiphenomenon, or an illusion. The Nobel laureate neurophysiologist John Eccles seems to agree with me. Asks Eccles: “Why do we have to be conscious at all? We can, in principle, explain all our input-output performances in terms of the activity of the neuronal circuits; and consequently consciousness seems to be absolutely unnecessary.”
— Goswami, Amit (1995-03-21). The Self-Aware Universe (pp. 21–22). Kindle Edition.
It certainly seems questionable whether a mankind-built ADAM2 is even possible. But if it ever does become possible, then the question becomes, “Should it be done?”
In Frank Herbert’s Dune books, humanity has long banned the creation of “thinking machines.” Ten thousand years earlier, their ancestors destroyed all such computers in a movement called the Butlerian Jihad, because they felt the machines controlled them. Human computers called Mentats serve as a substitute for the outlawed technology. The penalty for violating the Orange Catholic Bible’s commandment “Thou shalt not make a machine in the likeness of a human mind” was immediate death.
Should humanity sanction the creation of intelligent machines? That’s the pressing issue at the heart of the Oxford philosopher Nick Bostrom’s fascinating new book, Superintelligence. Bostrom cogently argues that the prospect of superintelligent machines is “the most important and most daunting challenge humanity has ever faced.” If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (AI) will likely destroy us all.
There is no scientific principle that dictates that everything that can be done should be done. Deciding that requires a value judgment. The value-judgment principle of trying out everything that can be done seems to have originated with modernism; and philosophically it is, today, a “sacred” value of Progressivism: anything newer is better; it’s progress.
Those of us who have been reared with biblical principles keep in the back of our minds the thought that just perhaps God does not intend for mankind to pursue everything that it is technologically possible to achieve. Perhaps some things are just too dangerous for mankind to experiment with.
The biblical story that reminds us of this is the Old Testament story of the tower of Babel:
At one time, the whole Earth spoke the same language. It so happened that as they moved out of the east, they came upon a plain in the land of Shinar and settled down. They said to one another, “Come, let’s make bricks and fire them well.” They used brick for stone and tar for mortar. Then they said, “Come, let’s build ourselves a city and a tower that reaches Heaven. Let’s make ourselves famous so we won’t be scattered here and there across the Earth.” GOD came down to look over the city and the tower those people had built. GOD took one look and said, “One people, one language; why, this is only a first step. No telling what they’ll come up with next–they’ll stop at nothing! Come, we’ll go down and garble their speech so they won’t understand each other.” Then GOD scattered them from there all over the world. And they had to quit building the city.
That’s how it came to be called Babel, because there GOD turned their language into “babble.” From there GOD scattered them all over the world.
—Genesis 11:1-9 MSG
The take-away from the story of the settlers of the land of Shinar (the Old Testament name for Babylonia), for our purposes, consists of three concepts. 1) Their efforts were a collective application of a technological advance: bricks and tar used instead of stones and mortar. 2) Their collective motive was ego-enhancement (“make ourselves famous”). 3) Their efforts failed and ended in exactly what they were trying to prevent: “so we won’t be scattered.”
Let’s assume that the two basic motives for anyone designing and building ADAM2 would be 1) a desire to achieve fame, and 2) a desire to sell the ADAM2 machines for profit.
There is little doubt that fame would accrue to the designer(s) and builder(s) of ADAM2. But as the number of ADAM2 people increases in society, what would be their fate?
In a society such as ours, would ADAM2 people be free citizens? If not, why not? In what sense would their “owners” actually own them? Would ADAM2 people be simply considered property? Would they be legally a type of slave?
Perhaps the most important question revolves around the analogue of the outcome of the Babel experience. Anyone who has or owns a machine intends to achieve something with that machine, or, in the words of the Babel story, intends to avoid or prevent something from happening: “so we won’t be scattered.”
If we assume that people who would purchase and own an ADAM2 would want to avoid having to perform some specific labor (physical or mental), and if we assume that ADAM2 people were used in such a substitute fashion, how would a society prevent them from revolting (since they have intelligence and free will) and enslaving, or simply going into competition with, their owners, thereby ensuring the occurrence of exactly what the owners wished to prevent in the first place?
Now is the time to re-read the Babel story (above or in Genesis 11:1-9).
If mankind could ever develop the science and technology to build an ADAM2, should we build it?
To state this in biblical terms: Should mankind ever play God?
© 2014, Jerry Richardson