'Building sentient beings, rather than breeding sentient beings.' Why?
08-02-2014, 09:35 PM
RE: 'Building sentient beings, rather than breeding sentient beings.' Why?
A robot is an object that can be programmed, much like a human can be, but one with far greater physical abilities.

Just think of a super-advanced, intelligent, sentient robot that can fly; it would be fucking awesome!

This insanity and impossibility are the driving force behind innovation in robotics.

However, once we meet this goal, driven by that same insanity and awesomeness, a problem arises.

What happens when these created beings want rights?
08-02-2014, 11:38 PM
RE: 'Building sentient beings, rather than breeding sentient beings.' Why?
(08-02-2014 09:35 PM)UndercoverAtheist Wrote:  What happens when these created beings want rights?

Let's not program them to want rights then. In sci-fi it's common for machines that attain sentience to suddenly take on human emotions, but that doesn't actually make any sense. We have our emotions because they are instinctual - they are part of our programming.

Softly, softly, catchee monkey.
09-02-2014, 05:19 AM
RE: 'Building sentient beings, rather than breeding sentient beings.' Why?
(08-02-2014 11:38 PM)toadaly Wrote:  
(08-02-2014 09:35 PM)UndercoverAtheist Wrote:  What happens when these created beings want rights?

Let's not program them to want rights then. In sci-fi it's common for machines that attain sentience to suddenly take on human emotions, but that doesn't actually make any sense. We have our emotions because they are instinctual - they are part of our programming.

There is a school of thought in the field of Artificial Intelligence which postulates that you cannot have artificial intelligence without emotions.

Emotions have an instinctual basis for a reason. They evolved for a reason.

The theory is that an agent (whether human, animal or robot) has many competing needs but only one body. The agent needs to eat, drink, breed, stay safe, explore and exploit, yet it has only one body with which to satisfy those needs at any one time. You also do not want the agent to dither between, say, finding food and finding water and never really satisfying either need sufficiently well.

Emotions are one way in which the agent can select one need from many. Neurochemicals in the brain can act like gain signals. For example, a vacuum-cleaning robot whose battery drops from 20% to 19% while the carpet is still really dirty should carry on cleaning. But if that battery level is dropping from 2% to 1%, the most pressing need is to get back to its charging station.

The same neuromodulators can help an agent explore its current environment or exploit its current best strategy.

Cognition is understood to open up the range of choices available to an agent, whereas emotions narrow them. If you're terrified, for example, your primary concern is to change your circumstances so that you are no longer scared. Some emotions, though, are instinctual because they only make sense on an evolutionary scale. A paranoid, jealous wife may go into an irrational rage at the merest hint of cheating, but by doing so she stops the husband from even considering cheating in the first place. Jealousy here is an exploitative strategy learned on an evolutionary time scale rather than during the wife's lifetime.

How do you design an artificially intelligent agent to adapt to unknown situations when you yourself do not know what those situations are going to be? You concentrate on providing constant needs, or instincts, that the agent must satisfy (e.g. the ability to recognise a clean carpet, check its battery level, etc.). You can then use the same mechanisms as the emotions found in natural agents to allow your artificial agent to choose between these needs.
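As a rough sketch of what that might look like (a toy vacuum robot with made-up numbers and a hypothetical urgency curve, not anything resembling a real product's code), each need can report a non-linear urgency and the single body gets committed to whichever one currently shouts loudest:

Code:
# Toy sketch: a hypothetical vacuum robot picking one need at a time using
# non-linear "urgency" signals, loosely analogous to the gain-like role
# described for neuromodulators above. Functions and numbers are made up.

def charge_urgency(battery_pct: float) -> float:
    """Urgency of recharging grows sharply as the battery approaches empty."""
    return (1.0 - battery_pct / 100.0) ** 4   # 20% -> ~0.41, 2% -> ~0.92

def clean_urgency(dirt_level: float) -> float:
    """Urgency of cleaning scales with how dirty the carpet is (0..1)."""
    return dirt_level

def select_need(battery_pct: float, dirt_level: float) -> str:
    """Commit the single body to whichever need is currently most urgent."""
    urgencies = {
        "recharge": charge_urgency(battery_pct),
        "clean": clean_urgency(dirt_level),
    }
    return max(urgencies, key=urgencies.get)

# At 20% battery with a very dirty carpet, cleaning still wins;
# at 2% battery, recharging dominates regardless of the dirt.
print(select_need(battery_pct=20, dirt_level=0.9))  # clean
print(select_need(battery_pct=2, dirt_level=0.9))   # recharge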

The artificial intelligence won't ever have human emotions, in the same way that we do not have dog or cat emotions. But it may very well need its own robot emotions.
09-02-2014, 11:13 AM
RE: 'Building sentient beings, rather than breeding sentient beings.' Why?
(09-02-2014 05:19 AM)Mathilda Wrote:  There is a school of thought in the field of Artificial Intelligence which postulates that you cannot have artificial intelligence without emotions.

...overriding objectives, yes, intelligent machines would need those, or else they'd just sit there doing nothing. But we don't need those objectives to be the same as ours. A Roomba could be given human-level intelligence and still have as its primary objective cleaning the floor. But since it's now intelligent, we might add some more to get an even better Roomba:

'avoid harm to anything of value'
'if faced with a situation where harm is imminent, take action to cause the least harm'
'your owner has highest value'
'other humans have second highest value'
'pets have third highest value'
'objects have value based on their replacement cost'

...or similar objectives

It's hard to see how, from this list of objectives, the Roomba would then start demanding rights, as it has not been programmed to deem itself more valuable than anything else of similar or higher replacement cost.
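As a toy illustration of that (my own made-up values and scoring rule, not a claim about how a real Roomba is programmed), the value ordering could be a fixed table and the decision rule could simply pick the action that harms the least total value, with the robot itself just another object priced at its replacement cost:

Code:
# Rough sketch of a fixed value ordering like the list above, used to pick
# the least-harm action. Values and the scoring rule are assumptions made
# purely for illustration.

VALUE = {
    "owner": 1_000_000,
    "other_human": 100_000,
    "pet": 10_000,
    # plain objects (including the robot itself) are valued at replacement cost
}

def value_of(thing: dict) -> float:
    """Look up a thing's value; plain objects fall back to replacement cost."""
    return VALUE.get(thing["kind"], thing.get("replacement_cost", 0.0))

def least_harm_action(actions: dict) -> str:
    """actions maps an action name to the list of things it would harm.
    Choose the action whose total harmed value is smallest."""
    return min(actions, key=lambda a: sum(value_of(t) for t in actions[a]))

# The Roomba is just another object priced at its replacement cost, so it has
# no basis for ranking its own continued existence above anything of similar
# or higher value.
roomba = {"kind": "object", "replacement_cost": 300.0}
vase = {"kind": "object", "replacement_cost": 50.0}
cat = {"kind": "pet"}

print(least_harm_action({
    "swerve_into_vase": [vase],
    "swerve_into_self": [roomba],
    "keep_course": [cat],
}))  # swerve_into_vase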

Softly, softly, catchee monkey.
09-02-2014, 07:52 PM
RE: 'Building sentient beings, rather than breeding sentient beings.' Why?
(09-02-2014 11:13 AM)toadaly Wrote:  It's hard to see how, from this list of objectives, the Roomba would then start demanding rights, as it has not been programmed to deem itself more valuable than anything else of similar or higher replacement cost.

The being might not have the ability to deem itself valuable, but you said that it values other things - like its master or certain concepts or chores. It would want "rights" in order to best fulfill its ability to keep what it values safe, or free, or happy, or what have you. The right to vote, to change society in a way that best serves its master. Or the right to exist, in order to best serve its master.

Can something with cold, pure logic best serve society? If so, then why not murder a million healthy people to harvest their organs and save 10 million dying or suffering people?

Emotions would be a good counter to "cold pure logic". It's not reasonable to let 10 million suffer for the sake of 1 million, but it "feels" right. Or something like that.

So, could emotions be specifically put into artificial beings? Sure. Could they arise on their own - the "ghost in the machine", natural mutations in some code? I like that idea, which relates to the next portion of the post.


As for the rest of the thread, I saw talk of ways to artificially create intelligent, sentient beings. One is to build an exact replica of a human brain and see what happens. It should work; it's the exact same thing, after all. Another is to build something that mimics the end result. Does it recoil in horror at something bad? Laugh at something funny? But my favorite is the evolution route.

Create a simple model, allow it to replicate and mutate, and put in parameters that determine whether a specific generation succeeds or fails - the natural selection part. The more intelligent it is, the better it "survives". The more human it appears, the better it "survives".
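In code, that evolution route is basically a toy genetic algorithm. The genome, fitness test and parameters below are placeholders I made up; a real fitness test for intelligence or human-likeness is the hard part.

Code:
import random

# Toy sketch of the "evolution route": replicate, mutate, select.
# Everything here is a placeholder chosen for illustration only.

POP_SIZE, GENOME_LEN, GENERATIONS, MUTATION_RATE = 30, 20, 50, 0.05
TARGET = [1] * GENOME_LEN   # stand-in for "behaves intelligently / humanly"

def fitness(genome):
    """Placeholder: how closely the genome matches the target behaviour."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Natural-selection step: only the fitter half gets to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", fitness(max(population, key=fitness)))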
09-02-2014, 11:19 PM
RE: 'Building sentient beings, rather than breeding sentient beings.' Why?
(09-02-2014 07:52 PM)PoolBoyG Wrote:  The being might not have the ability to deem itself valuable, but you said that it values other things - like its master or certain concepts or chores. It would want "rights" in order to best fulfill its ability to keep what it values safe, or free, or happy, or what have you. The right to vote, to change society in a way that best serves its master. Or the right to exist, in order to best serve its master.

Maybe. I'm still not convinced that it's impossible, or even impractical, to construct a set of motives that results in a subservient but conscious Roomba that just does its job extremely well, with no political aspirations. Proper application of game theory may be needed, instead of just me winging it.

Quote:Can something with cold, pure logic best serve society? If so, then why not murder a million healthy people to harvest their organs and save 10 million dying or suffering people?

Humans make these kinds of calculations too. We will in fact kill a small number of our own, or allow them to die, to save the rest if the alternative is that almost everyone will be harmed. We will also kill large numbers of others to save a few of our own. That said, I'm not sure we would want machines doing it; they may do exactly what you outlined and make decisions that are optimal according to their programming but morally repugnant to us.

They would need motives designed not around maximizing human good, but only around doing what they are instructed to do, while remaining as harmless as possible in the process and never causing any harm with intent. If we gave them a moral calculus beyond that, it would probably need to kick in only if they inadvertently caused a situation of harm, so as to minimize that harm.

Quote:Emotions would be a good counter to "cold pure logic". It's not reasonable to let 10 million suffer for the sake of 1 million, but it "feels" right. Or something like that.

I'm not convinced that emotions are anything more than cold, hard Bayesian logic based on instinctive instructions, combined with the release of biochemicals and signals to aid the appropriate response and memory.
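As a toy reading of that (the numbers and threshold are entirely made up), think of an instinctive prior plus Bayesian updating on evidence, with a high posterior "releasing" a fear signal that biases the response:

Code:
# Toy illustration: instinctive prior + Bayesian update on evidence, where a
# high posterior threat estimate releases a fear "gain" signal. All numbers
# are invented for the sake of the example.

def posterior(prior: float, p_evidence_given_threat: float,
              p_evidence_given_safe: float) -> float:
    """Bayes' rule for P(threat | evidence)."""
    num = p_evidence_given_threat * prior
    return num / (num + p_evidence_given_safe * (1.0 - prior))

prior_threat = 0.02                              # the "instinctive instruction"
p_threat = posterior(prior_threat, 0.9, 0.05)    # e.g. a rustle in the grass

if p_threat > 0.2:
    fear_gain = p_threat                         # stand-in for the chemical signal
    print(f"fear response engaged, gain={fear_gain:.2f}: flee and remember")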

Softly, softly, catchee monkey.
09-02-2014, 11:23 PM
RE: 'Building sentient beings, rather than breeding sentient beings.' Why?
I wonder, in addition to machines, what if we could change our *own* instincts? Would anyone do it?

Softly, softly, catchee monkey.
09-02-2014, 11:55 PM
RE: 'Building sentient beings, rather than breeding sentient beings.' Why?
(09-02-2014 11:19 PM)toadaly Wrote:  I'm not convinced that emotions are anything more than cold, hard Bayesian logic based on instinctive instructions, combined with the release of biochemicals and signals to aid the appropriate response and memory.

The way I defined emotion in the scenario was as an instinctual reaction - a desire not thought out or reasoned consciously. Even though the cold logic says it's wrong to let 10 million die for the sake of 1 million (kill one to harvest the organs for ten people), the emotion - the moral compass, the drive - says that it's somehow fine. And to carry out the harvesting fills one with immediate horror and revulsion.

Whether it was developed via nature or nurture, who knows.

I saw emotion as that instinctual compass. Not an overwhelming one, but enough to make the being question. And that's why it would consciously be built into an intelligent being: to question motives and actions even when "the greater good" is being met.