Nikole writes: Two recent headlines:



What could those Facebook robots be talking about? And a computer that paints like Rembrandt? The programmers and their computer studied Rembrandt's paintings for months and were able to produce a completely 'new' Rembrandt painting. It has been called "a fascinating merging of creativity and technology." This is a game, of course. The computer could never know what Rembrandt's next painting would be; it is stuck in the past. The computer will not experience the chaos of daily life that is part of the next, new creation. William wrote:

It's in the twilight zone of intuition that the difference between us and the intelligent machine manifests itself.


Even if everything can be expressed in numbers, as the Ultimate Theory predicts, thought in numbers will never produce a thought, as the number of a taste will not produce a taste.

I came to the conclusion that there is no competition between human and robot because there is no motivation. All human motivations are psychosomatic; eyes, ears, taste, touch, and brain would not act without psychosomatic appetites. What could a robot, even the most intelligent one, desire? It would remain only a tool without goals.

Can an Intelligent Machine be Dangerous?

By William Markiewicz

In a recent interview, Stephen Hawking warned that if humans do not genetically re-engineer their intelligence level, computers would take over the earth. According to Hawking, a system must be found that allows the human brain to be directly connected to a computer so that the artificial brain contributes to human intelligence rather than opposing it.

Here are my objections: There will never be a spontaneous opposition or threat engendered by a machine because, a priori, the machine has no motivation. Intelligence not linked to life remains a tool, and tools, like all non-living objects, are non-competitive. Only in life do we see motivation coming from within as a fruit of consciousness, while even the most sophisticated tool remains a tool. A dangerous computer will be dangerous due to a human mistake, like a burning match in a child's hand. Only if we create life will we be able to ask ourselves whether this creation is dangerous in itself. And we are far from creating even the most basic form of life. We do not even have a satisfying definition of life, only an intuitive grasp.

Neither animals with inferior intelligence nor machines with superior intelligence will present any danger to humans, the former because they lack qualifications, the latter because they lack motivation.

I have no sympathy for any kind of genetic engineering, for the simple reason that we are tampering with life without even knowing what life is. Tackling something you don't understand is like putting your hand into a hole in the ground without knowing whether a piece of gold, a scorpion, or a snake is inside. You don't run blindfolded down a road that nobody has ever walked before you.

By genetically manipulating intelligence we will impoverish humanity rather than enrich it, because it will become impossible to develop, or perhaps even to preserve, personality. We will start to produce clones on the psychic level.

From 'September's Glance'

© Copyright 2007 Vagabond Pages