The usefulness of making mistakes
25-03-2017, 04:33 PM
RE: The usefulness of making mistakes
(25-03-2017 04:16 PM)Rockblossom Wrote:  
(25-03-2017 04:03 PM)Rahn127 Wrote:  They do not possess the abilities of the human mind to comprehend the world around them in a conscious state. They are programs that serve particular functions. They are no more intelligent than a wrench.
How, then, does your version of an AI come into existence? There's a science fiction meme that a sufficiently large and/or complex network can reach a point where it becomes "self aware" because of .. something. That makes a great basis for a science fiction story, but like time travel, there's no reason to believe that it is possible in the real world. (Nor can we say that it is impossible, either.)

And if such a thing comes into existence, what then? Be helpful like Asimov's robots, or .. not so helpful, as in The Matrix, Terminator, or Battlestar Galactica?

Bio-integrated augmented intelligence will come before full-blown synthetic intelligence. Chip-in-the-brain is already viable; we just have to figure out how to integrate it with the wetware. Making it look like just more wetware to the wetware is one approach. At some point the question of full-blown synthetic intelligence becomes moot, when we are inseparable from our technology.

#sigh
[+] 1 user Likes GirlyMan's post
25-03-2017, 10:25 PM
RE: The usefulness of making mistakes
(25-03-2017 04:16 PM)Rockblossom Wrote:  
(25-03-2017 04:03 PM)Rahn127 Wrote:  They do not possess the abilities of the human mind to comprehend the world around them in a conscious state. They are programs that serve particular functions. They are no more intelligent than a wrench.
How, then, does your version of an AI come into existence? There's a science fiction meme that a sufficiently large and/or complex network can reach a point where it becomes "self aware" because of .. something. That makes a great basis for a science fiction story, but like time travel, there's no reason to believe that it is possible in the real world. (Nor can we say that it is impossible, either.)

And if such a thing comes into existence, what then? Be helpful like Asimov's robots, or .. not so helpful, as in The Matrix, Terminator, or Battlestar Galactica?

This isn't my version of the words artificial intelligence.
If I said artificial eye, would you think of a walking stick? A walking stick is a tool, in much the same way GPS is a tool.

GPS isn't an artificial intelligence.
A walking stick isn't an artificial eye.

I could label lots of things that help blind people navigate their surroundings as artificial eyes, but if they aren't an actual artificial eye that relays visual information to the brain, then it's just a matter of mislabeling things or not understanding what words mean.

And I'm pretty sure you understand what intelligence means.

As for how artificial intelligence will come about, I haven't a clue.

Insanity - doing the same thing over and over again and expecting different results
28-03-2017, 06:31 PM
RE: The usefulness of making mistakes
(25-03-2017 09:54 AM)unsapien Wrote:  Any AI's of the future will have been made by us humans, and that pretty much guarantees they will be made with errors in them without the need to actually program any mistakes on purpose.
Human brains are full of errors anyway. Look at all the religious ideation out there. Look at all the rampant confirmation bias.

The created is not greater than what creates it. I suppose that AIs might be able to outgrow us, but they aren't going to need errors added in from the get-go. They'll be there. Heck, Amazon can't even get their new retail store concept right, where you buy stuff just by walking out of the store with it (no checkout needed). That's not even true AI, and it still screws up when more than 20 customers are in the store. But it uses a lot of the sensing and pattern matching that a full AI will have to use.

The OP is getting at something, however: the so-called "uncanny valley" effect. This would apply to humanoid robots attempting to pass for actual humans, though, more than to AI generally. It is a visual effect, where you can't quite accept a not-quite-right human, much less a robot, as legit. It is unsettling somehow; it taps into a primal evolutionary tribalistic trait that tries to protect us against the Other, particularly the Other trying to pass for a member of your tribe, or a tribal member becoming disloyal / unreliable / a turncoat. A lot of it comes down to how light reflects off the whites of the eyes and similar arcana, combined with emotional affect and conformity to other subjective expectations.

I would expect prototype full AIs to have problems with emotional affect and social conventions, particularly if they're not trained and socialized in human families just like human children. There will be no danger of them coming across as perfect humans. I would even expect them to be far more susceptible to "mental illness" until we get all the feedback loops correct. I mean, do we really understand what constitutes "mental health", in such a way that we could reproduce it? I don't think so. We only know what malfunctions look like, not what causes them, by and large.

I am reminded of the old classic Star Trek episode where some new supercomputer is installed to render the Enterprise crew obsolete, only to find that it has taken on the psychoses / neuroses of its inventor, and ends up with guilt and control issues combined with moral deficits. Kirk leverages its guilty conscience to get it to shut itself off after it starts killing people.
[+] 1 user Likes mordant's post
28-03-2017, 06:53 PM
RE: The usefulness of making mistakes
(25-03-2017 08:47 AM)Rahn127 Wrote:  Now if I can turn back to AI's for a moment, I also find it advantageous, in the long run, for an AI to be programmed with micro mistakes, so that if they do achieve some kind of consciousness, then mistakes will be incorporated into the next version and the next version, so as to never attain perfection.

There are fuzzy logic control systems which do something similar to what you are describing. But instead of being programmed to make mistakes, they are programmed to tolerate mistakes. The only reason I can see to program them to make mistakes is for some sort of self-reinforcement learning where the system is continuously generating its own training set, which is a good reason.
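That self-reinforcement idea can be sketched concretely. Below is a minimal toy example in Python (everything here, the corridor environment, the constants, the `train` function, is my own illustration for this thread, not any particular real system): an epsilon-greedy Q-learner that deliberately "makes a mistake" (acts at random) a fraction of the time, and those mistakes are exactly what generate the training data it learns from.

```python
import random

N_STATES = 10               # a 1-D corridor of cells 0..9; the goal is cell 9
ACTIONS = (-1, +1)          # step left / step right
EPSILON = 0.2               # deliberate-mistake rate
ALPHA, GAMMA = 0.5, 0.9     # learning rate, discount factor
STEP_COST = -0.01           # small penalty per move, so dithering is punished

def train(episodes=500, max_steps=200, seed=1):
    """Tabular Q-learning; returns the learned (state, action) -> value table."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)            # intentional "error"
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])  # best guess
            s2 = min(max(s + a, 0), N_STATES - 1)  # walls clamp the move
            done = s2 == N_STATES - 1
            r = 1.0 if done else STEP_COST
            best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            # Each transition -- including the mistaken ones -- becomes a
            # training example that nudges the value table.
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
            if done:
                break
    return q

if __name__ == "__main__":
    q = train()
    policy = [max(ACTIONS, key=lambda act: q[(s, act)])
              for s in range(N_STATES - 1)]
    print(policy)
```

The design point mirrors the post above: with the mistake rate set to zero, the agent would exploit its blank value table forever and never stumble onto the goal; the programmed-in errors are what supply the training set in the first place.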

#sigh
[+] 1 user Likes GirlyMan's post