Can a supercomputer be confused with a human?
Amusingly, in a recent experiment a supercomputer passed the Turing test: the judges were led to believe it was a 13-year-old boy.
Making humans think something is true when it is not lies at the foundation of this experiment.
Look at the world around us: using the latest digital media tools, people, for a price, do exactly that, making innocent multitudes accept things as truth when they are not so.
Also, without our being aware of it, our own cognitive and thinking skills are being fundamentally modified across generations because of intensive interaction with thinking machines.
The word "thinking" is gaining a different connotation. The word "feeling" is the one we need to watch out for. Can a machine fool us into believing that it feels? I believe it is possible to simulate this behaviour too, for an unsuspecting tester.
Memory, databases, retrieval and cogent logical checks certainly let a machine compete with a fallible, forgetful human, who tends to cloud his judgement with emotional filters.
Come to think of it, I do not find it difficult to build different sets of emotional filters, of prejudices, likes and dislikes, into algorithms, including schizophrenic behaviour!
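As a toy illustration of this idea (a sketch only; all names and biases here are hypothetical), such "emotional filters" could be encoded as simple per-topic biases applied when an algorithm scores candidate replies:

```python
# Hypothetical "emotional filter": per-topic biases standing in for
# prejudices, likes and dislikes baked into a response algorithm.
FILTERS = {
    "cricket": +0.5,   # a "like": rate related replies higher
    "homework": -0.4,  # a "dislike": rate related replies lower
}

def score_reply(reply: str) -> float:
    """Base score plus any emotional bias triggered by topic words."""
    score = 0.5
    for topic, bias in FILTERS.items():
        if topic in reply.lower():
            score += bias
    return score

def choose_reply(candidates: list) -> str:
    """Pick the candidate reply this 'personality' prefers."""
    return max(candidates, key=score_reply)

print(choose_reply(["Let's talk about cricket!", "Time for homework."]))
```

The prejudice lives entirely in the bias table, so swapping in a different table yields a different "personality" from the same code.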
But to my mind, since feelings too are states of mind and thought, which control behaviour and response, critical words or a series of responses can trigger anger, love or joy, and a resultant response from the supercomputer!
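A minimal sketch of this trigger mechanism, assuming a hand-picked list of "critical words" (all trigger words and canned replies below are invented for illustration), might keep a mood state that critical words flip, which then colours the reply:

```python
# Hypothetical trigger words that flip the machine's "feeling" state.
TRIGGERS = {
    "anger": {"stupid", "liar", "shut up"},
    "joy": {"brilliant", "well done"},
}

def update_mood(message: str, mood: str = "neutral") -> str:
    """Return a new mood if the message contains a critical word."""
    text = message.lower()
    for feeling, cues in TRIGGERS.items():
        if any(cue in text for cue in cues):
            return feeling
    return mood

def respond(message: str, mood: str = "neutral") -> str:
    """The current 'feeling' colours the canned response."""
    mood = update_mood(message, mood)
    replies = {
        "anger": "I don't appreciate that.",
        "joy": "Thank you, that's kind!",
        "neutral": "Tell me more.",
    }
    return replies[mood]

print(respond("You are a liar"))
```

The point is only that a "feeling" can be a piece of state like any other, updated by inputs and consulted on output.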
By introducing human-like fallibility in a randomly occurring manner, and by having the machine, when asked whether it is sure of its response, either correct itself or fight over it, the supercomputer can be made to sound more and more like a human! Less-than-perfect machine behaviour can be easily simulated, in my opinion!