A.I. on my mind lately
08-10-2016, 06:23 PM
RE: A.I. on my mind lately
(08-10-2016 06:04 PM)SYZ Wrote:  Based on our current technical knowledge, I don't believe we'll ever create a true AI device simply because it couldn't be empowered with the trait of human logic.

There is no theoretical bar to AI.
Anyone who says it is not possible had better be able to explain the ghost in the machine.
The only limiting factors are technology and the complexity of the problem.

Quote:How could it "solve" this scenario for example?

A donkey is standing between two piles of hay, and has to make a life-saving decision. One pile is one metre high, and is 10 metres from the donkey. The other is 10 metres high, and is 100 metres from the donkey. Which does he walk towards?

The same way a natural intelligence does. Drinking Beverage

Skepticism is not a position; it is an approach to claims.
Science is not a subject, but a method.
[+] 2 users Like Chas's post
08-10-2016, 07:12 PM (This post was last modified: 08-10-2016 08:18 PM by kim.)
RE: A.I. on my mind lately
(14-09-2016 10:11 AM)Rahn127 Wrote:  I'm in the process of listening to a 2½-hour podcast-like video from Sam Harris on AI, and he mentions a recursive self-improvement function of an AI, in which it creates better, more intelligent versions of itself, provided it's capable of altering its own programming.

First off, I can imagine the millions of errors that will begin to pop up as an AI begins to alter its own code.
It would need to make a billion copies of itself and then analyze whether each change is an improvement or not.

And, before that, decide what constitutes an improvement.

This will be pure trial and error and may lead to different versions of AI that are better at some tasks but don't function as well at other tasks.

Which of the billion-plus copies makes the choice of which versions get deleted? If these AIs are super-intelligent and super-caring, will they have the capacity to delete other AIs that have errors in their programming?

Even so, all of this copy-and-delete, trial-and-error evolution of intelligence is only happening inside a computer. It's a self-contained system that doesn't have a physical body.

It can't repair any hardware malfunction that may happen to its "brain", for lack of a better word.

Imagine, if you will, that all humans were only heads inside glass jars. Would you have any fear of these heads reproducing? No, not at all.

But for the fun of it, let's advance our culture another 100 years, and a human-like body is constructed that houses a wireless connection to an AI mainframe.

Now we have a body that can gather the physical materials to construct another body. It has analyzed the weaknesses of its current body and made improvements on design and mobility.

First off: why would it need to make another body, unless its main goal is continuous self-improvement and duplication?

What values do we as a society want to instill into an AI that can reproduce and delete faulty versions of itself?

Will it view humanity as something that stands in the way of it procuring resources so it can reproduce?

Will it view humanity as a resource to be used (slavery) to procure those resources, or use human security to protect itself from being shut down?

Can you imagine an AI instilled with religious programming? Would an intelligent AI eventually disregard and delete any religious programming, or would it use it as a way to control humanity?

It's a fascinating topic, but I still have trouble envisioning something that can survive its own surgery when it begins cutting code in an effort to improve itself.

Define "improve". Shy



Alan Turing would love your thoughts. Heart As do I.

Freedom. Beauty. Love.
I'm a bit tipsy right now ... so ... is there such a word as uniqueness?
How about enigmatic?
Will we not feel the pleasure of these words within this "improve"? Because... I want this pleasure. I want uniqueness ... I want enigmatic ... I want ... ...
These words.
They define that which is often undefinable.
Indefinable?
Korbel.
Brut Rosé.
Blink

A new type of thinking is essential if mankind is to survive and move to higher levels. ~ Albert Einstein
09-10-2016, 02:18 AM
RE: A.I. on my mind lately
(08-10-2016 07:12 PM)kim Wrote:  
(14-09-2016 10:11 AM)Rahn127 Wrote:  [snipped; quoted in full above]

Define "improve". Shy



Alan Turing would love your thoughts. Heart As do I.

Freedom. Beauty. Love.
I'm a bit tipsy right now ... so ... is there such a word as uniqueness?
How about enigmatic?
Will we not feel the pleasure of these words within this "improve"? Because... I want this pleasure. I want uniqueness ... I want enigmatic ... I want ... ...
These words.
They define that which is often undefinable.
Indefinable?
Korbel.
Brut Rosé.
Blink

Thank you, Kim.

Guinness
Very Dark Smile

Insanity - doing the same thing over and over again and expecting different results
[+] 1 user Likes Rahn127's post
09-10-2016, 07:01 AM
RE: A.I. on my mind lately
(08-10-2016 06:04 PM)SYZ Wrote:  How could it "solve" this scenario for example?

A donkey is standing between two piles of hay, and has to make a life-saving decision. One pile is one metre high, and is 10 metres from the donkey. The other is 10 metres high, and is 100 metres from the donkey. Which does he walk towards?

(08-10-2016 06:23 PM)Chas Wrote:  The same way a natural intelligence does.

Obviously a being with the power of logic chooses the bigger pile of hay; the donkey may well go for the closer pile, which is illogical.

(I note that you're using the term "intelligence", which is a different thing to "logic".)

I'm a creationist... I believe that man created God.
09-10-2016, 09:10 AM (This post was last modified: 09-10-2016 09:37 AM by Mathilda.)
RE: A.I. on my mind lately
(08-10-2016 06:20 PM)Chas Wrote:  
(14-09-2016 03:09 PM)Mathilda Wrote:  The very concept is flawed. It's like poking yourself in the stomach and saying that it's a self-poking stomach.

You can't have an AI try and rewrite itself totally because there always has to be a part of it which does the re-writing.

Of course it can completely re-write itself.
Think of a bootstrap loader. Its presence is necessary to start your computer, but once it has done its job it can be replaced with a different bootstrap loader.

But that's very different to the self-improving algorithm I was referring to, the one Sam Harris is talking about. Just to remind you:

(14-09-2016 10:11 AM)Rahn127 Wrote:  I'm in the process of listening to a 2½-hour podcast-like video from Sam Harris on AI, and he mentions a recursive self-improvement function of an AI, in which it creates better, more intelligent versions of itself, provided it's capable of altering its own programming.

So what happens if your bootloader, trying to improve itself, writes over the entirety of its programming with something that doesn't work? It can't revert to a previous state, because the bit of code that would do that has been written over. And how does it know in advance what is an improvement until it tries it?
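
To put it concretely: the only sane way to do live self-rewriting is something like an A/B-slot update, and that pattern rather proves the point, because the slot-switching machinery is exactly the part that must never be rewritten. A minimal Python sketch, with invented names (no real bootloader works at this level):

Code:
# Sketch of an A/B-slot self-update: the updater never overwrites the
# code that is currently running, so a broken rewrite always leaves a
# working fallback. All names here are illustrative.
from pathlib import Path

SLOT_A, SLOT_B = Path("slot_a.py"), Path("slot_b.py")
ACTIVE = Path("active")  # tiny pointer file containing "a" or "b"

def inactive_slot():
    return SLOT_B if ACTIVE.read_text() == "a" else SLOT_A

def validate(code):
    """Stand-in for a real test suite or sandboxed trial run."""
    try:
        compile(code, "<candidate>", "exec")  # at minimum: does it parse?
        return True
    except SyntaxError:
        return False

def attempt_self_rewrite(new_code):
    """Install a rewritten program without losing the working version."""
    if not ACTIVE.exists():
        ACTIVE.write_text("a")
    slot = inactive_slot()
    slot.write_text(new_code)
    if not validate(new_code):
        return False  # active slot untouched; nothing is lost
    ACTIVE.write_text("a" if slot is SLOT_A else "b")  # switch over
    return True

And even here, the validator only catches code that breaks outright; it says nothing about whether the change is an improvement.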

And why even do it this way when we already have artificial evolution?



(08-10-2016 06:20 PM)Chas Wrote:  
(14-09-2016 03:09 PM)Mathilda Wrote:  And nor do you need that when you have artificial evolution instead. There's nothing magical about artificial evolution; we understand the process very well. It's merely a way of traversing a search space of potential solutions. Artificial evolution has been used extensively since the '90s.

And that is isomorphic to the AI rewriting itself.

Well, that is my point, except that the fundamental difference is this: either you have a bit of code that doesn't get written over, as with a conventional genetic algorithm, in which case it is no different to a standard computational search; or you have endogenous evolution, in which case you are prepared to have many different agents that die prematurely or are suboptimal.

This is a far cry from a self improving algorithm.

Now you can have on-line evolution with a bit of code that doesn't get overwritten, but this doesn't buy you anything over evolving a plastic agent controller off-line.
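
To make the distinction concrete, here's a minimal conventional genetic algorithm (a toy sketch with an invented fitness function, not any particular system). Notice that the outer loop is fixed machinery which is never itself a target of the search; that is why this is a standard computational search rather than a program rewriting itself:

Code:
import random

def fitness(genome):
    """Toy objective (closeness to 0.5 in every gene); a real system
    would evaluate an agent controller in its environment instead."""
    return -sum((x - 0.5) ** 2 for x in genome)

def mutate(genome, rate=0.1):
    return [x + random.gauss(0, 0.1) if random.random() < rate else x
            for x in genome]

# This loop is the bit of code that doesn't get written over.
def evolve(pop_size=50, length=8, generations=100):
    population = [[random.random() for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]  # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

print(evolve())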

Either way, Sam Harris here is displaying his complete and utter ignorance of the subject while pretending to be an expert.


(08-10-2016 06:20 PM)Chas Wrote:  
(14-09-2016 03:09 PM)Mathilda Wrote:  This is another fundamental problem with Artificial Intelligence. You can even argue that this is the whole point of Artificial Intelligence. Intelligence is the ability to adapt to unknown environments. If it isn't, then we might as well use a look-up table. But how can we as designers decide what is a good way to adapt to an environment if we ourselves don't know anything about it?

We don't. We build tools into the AI that figure it out. That is, in fact, how our minds work.

Tools that figure it out? This is just sweeping the problem under the carpet. How do they figure it out? How can we make sure that the AIs do what we want them to do, as opposed to being individual sentient beings? After all, we want the AIs to be tools, not to create life for the sake of it.

The answer I came up with is that, in the same way that humans and animals have instincts, we have to give robots the same. Again, how do you encode an instinct? The architecture that I have proposed in the past is to have evolved modules that feed the correct signals to the agent controller. These modules wouldn't adapt. Adaptation would be left to the agent controller, which learns how to co-ordinate its actions to obtain the best signal. But this is a very basic architecture that wouldn't scale well beyond a simple intelligent tool.
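
A very loose sketch of that architecture (toy Python, all names mine): the instinct modules are fixed functions from sensors to scalar signals, and the only thing that adapts is the controller's estimate of which action earns the best combined signal.

Code:
import random

# Fixed "instinct" modules: hand-wired here, evolved in the real
# proposal. They map raw sensor readings to scalars and never adapt.
INSTINCTS = [
    lambda s: 1.0 if s["ate_food"] else 0.0,          # feeding feels good
    lambda s: -1.0 if s["predator_nearby"] else 0.0,  # danger feels bad
]

class AgentController:
    """The only plastic part: learns which action yields the best
    combined instinct signal (a crude value-estimate update rule)."""

    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}

    def act(self, explore=0.1):
        if random.random() < explore:
            return random.choice(list(self.values))   # try something new
        return max(self.values, key=self.values.get)  # best found so far

    def learn(self, action, sensors, lr=0.1):
        signal = sum(instinct(sensors) for instinct in INSTINCTS)
        self.values[action] += lr * (signal - self.values[action])

# Usage: the controller is "trained like a dog" by the fixed signals.
ctrl = AgentController(["eat", "hide", "wander"])
action = ctrl.act()
ctrl.learn(action, {"ate_food": action == "eat", "predator_nearby": False})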



(08-10-2016 06:20 PM)Chas Wrote:  
(14-09-2016 03:09 PM)Mathilda Wrote:  Add to that trying to instill something as nebulous as values and ethics. I'm not saying it can't be done, but it's extremely difficult. This is because the difficulty of creating intelligence comes down to scalability. Whatever we try to think of in advance, there will always be situations that we haven't considered. This is why we try to create AI in the first place rather than write everything as an explicit computer program. Therefore the AI has its own independent values in the same way that every human being has their own individual values.

If it is actual AI, it will figure all of that out just like people do.

Which is what I meant by "Therefore the AI has its own independent values in the same way that every human being has their own individual values."

But thanks for making it sound so easy by just dismissing every major challenge. Sure, just like people do. While you're at it, can you shove some atoms together to produce cheap energy for us please? After all, fusion happens all the time in nature.


(08-10-2016 06:20 PM)Chas Wrote:  
(14-09-2016 03:09 PM)Mathilda Wrote:  In all likelihood, strong AI of the future will be just another animal. It will have a body, senses, evolved instincts, drives and needs. So you could ask the same questions of naturally developed animals that exist today.

Just another animal? Yes, but so are we.

And did I say otherwise? You're just arguing for the sake of it now.
09-10-2016, 09:24 AM
RE: A.I. on my mind lately
(08-10-2016 06:04 PM)SYZ Wrote:  How could it "solve" this scenario for example?

A donkey is standing between two piles of hay, and has to make a life-saving decision. One pile is one metre high, and is 10 metres from the donkey. The other is 10 metres high, and is 100 metres from the donkey. Which does he walk towards?

This is the exploration/exploitation problem. When do you stop exploring and start exploiting what you have found? Especially when you can't know in advance that there is something better out there. And despite Chas hand-waving it away as a simple thing, it has actually been a major challenge in AI for many decades. The larger issue is basically how you arbitrate between different needs, considering that you only have one body. Some people like Minsky have proposed that this is the functional role of emotions, and my own research has been consistent with this.
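
The textbook toy version of the trade-off is the multi-armed bandit, and even the simplest rule for it, epsilon-greedy (sketched below in Python; the numbers are invented), shows where the difficulty lives: the exploration rate is a free parameter that nothing in an unknown environment tells you how to set.

Code:
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """With probability epsilon, keep exploring; otherwise exploit the
    best option found so far. Choosing epsilon *is* the hard part."""
    if random.random() < epsilon:
        return random.choice(list(estimates))   # explore
    return max(estimates, key=estimates.get)    # exploit

# The donkey's running payoff estimates: the far pile sits at zero
# simply because it has never been sampled.
piles = {"near small pile": 1.0, "far big pile": 0.0}
print(epsilon_greedy(piles))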

Say you're hungry and thirsty but the food and water are some distance from one another. A naive solution means that you can have a situation where the agent jitters between solving two competing needs. As it starts satisfying one need, it stops because the other need is now more important. This is a real issue if switching between needs is costly.

Emotions actually solve this problem in natural agents by using neuromodulators to act as a gain control. For example, a rabbit shouldn't be distracted by a tasty-looking leaf if it's just sensed a predator nearby. It should be scared until it runs to its burrow. In the same way, a robot whose battery level drops from 20% to 19% shouldn't be as concerned as one whose level drops from 2% to 1%.

Emotions narrow the range of available actions while cognition broadens it.
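
As a cartoon of that gain-control idea (a Python sketch with numbers and names invented for illustration, not any published model): a fear signal multiplies the appetitive drives down, and a small hysteresis bonus stops the agent jittering between food and water.

Code:
def choose_drive(needs, fear, current, stickiness=0.2):
    """Pick which drive controls the body this tick.

    needs:      dict of drive name -> urgency in [0, 1]
    fear:       neuromodulator level in [0, 1]; damps appetitive drives
    current:    the drive already being pursued (hysteresis target)
    stickiness: bonus that prevents jittering between near-equal needs
    """
    scored = {}
    for name, urgency in needs.items():
        gain = (1.0 + fear) if name == "flee" else (1.0 - fear)
        scored[name] = urgency * gain + (stickiness if name == current else 0.0)
    return max(scored, key=scored.get)

# Mild thirst doesn't interrupt feeding...
print(choose_drive({"hunger": 0.6, "thirst": 0.55, "flee": 0.1},
                   fear=0.0, current="hunger"))   # -> "hunger"
# ...but a whiff of predator swamps both appetites.
print(choose_drive({"hunger": 0.6, "thirst": 0.55, "flee": 0.4},
                   fear=0.9, current="hunger"))   # -> "flee"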
[+] 1 user Likes Mathilda's post
09-10-2016, 11:32 AM
RE: A.I. on my mind lately
When you present the AI with the donkey-and-hay scenario and its answer is "Who gives a fuck?", then you know it has truly reached consciousness. Smile

I also wonder: if an AI becomes conscious as we are, will it be a limited consciousness?

We don't know every aspect of every organ and what it's doing every moment of the day. Maybe it won't be aware of how much RAM it has or how large its hard drive is. Maybe it will just be aware of light and sound.

I often wonder why our consciousness is so limited.
The information from every neuron firing does go to our brains, but only certain things are felt.

And then there is music.

I can almost imagine an AI being like a heroin addict.
Every sensation replayed for enjoyment and it does nothing else but attempt to keep pleasing itself.

Lost in self-gratification.

Insanity - doing the same thing over and over again and expecting different results
[+] 1 user Likes Rahn127's post
09-10-2016, 12:04 PM
RE: A.I. on my mind lately
Quite, Rahn. We're not conscious of the edge detection that our visual cortex performs, for example; we just know that we see an apple. We don't need anything more than the conclusion. My model for AI has been to see them like dogs, or heroin addicts, which is why I researched the functional role of emotions. It's extremely difficult to encode specific situations and behaviours into an agent controller, and if you did so it would be more brittle and less adaptable anyway. It would be easier to create modules that look for specific stimuli.

Take, for example, the robin that lives in your garden. Leave out a bean bag coloured the same red as its chest and it will attack it viciously, because robins are territorial birds. It doesn't realise its mistake; it just continues expecting the bean bag to fly off. We might think how stupid the bird is because it doesn't recognise it as a bean bag, but an alien might think the same thing about a man masturbating to a porno mag and ask why he doesn't realise that it's not a genuine pair of breasts.

Something as simple as a small area of the colour red is all that's required for an evolved instinct to be useful, in the same way that certain lipstick and blusher make a woman more attractive by mimicking how she looks when she is pregnant. If we can have a self-organising agent controller that can adapt to certain signals, then you can create a robot which you can train like a dog. Or give it modules which do the job for you using simple pattern matching.
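
That kind of trigger is what ethologists call a releaser, and the point is how cheap it is compared with genuine object recognition. A toy sketch (thresholds invented for illustration):

Code:
def red_patch_releaser(pixels, min_red_pixels=30):
    """Fire the 'attack rival' instinct if enough roughly-red pixels are
    in view. There is no concept of 'robin' or 'bean bag' here; the
    module is cheap precisely because it never recognises objects."""
    reds = sum(1 for (r, g, b) in pixels if r > 180 and g < 90 and b < 90)
    return reds >= min_red_pixels

# A rival's red breast and a red bean bag are indistinguishable to it.
beanbag_view = [(220, 40, 50)] * 40 + [(90, 120, 60)] * 200
print(red_patch_releaser(beanbag_view))   # -> True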
[+] 1 user Likes Mathilda's post
09-10-2016, 02:58 PM
RE: A.I. on my mind lately
(09-10-2016 09:24 AM)Mathilda Wrote:  
(08-10-2016 06:04 PM)SYZ Wrote:  How could it "solve" this scenario for example?

A donkey is standing between two piles of hay, and has to make a life-saving decision. One pile is one metre high, and is 10 metres from the donkey. The other is 10 metres high, and is 100 metres from the donkey. Which does he walk towards?

This is the exploration/exploitation problem. When do you stop exploring and start exploiting what you have found? Especially when you can't know in advance that there is something better out there. And despite Chas hand-waving it away as a simple thing,

If you think so, you misunderstood me. As I pointed out, the problems that need solving are the technology and the complexity of the problem.

Quote:it has actually been a major challenge in AI for many decades. The larger issue is basically how you arbitrate between different needs, considering that you only have one body. Some people like Minsky have proposed that this is the functional role of emotions, and my own research has been consistent with this.

And nowhere have I disagreed with that.

Quote:Say you're hungry and thirsty but the food and water are some distance from one another. A naive solution means that you can have a situation where the agent jitters between solving two competing needs. As it starts satisfying one need, it stops because the other need is now more important. This is a real issue if switching between needs is costly.

Emotions actually solve this problem in natural agents by using neuromodulators to act as a gain control. For example, a rabbit shouldn't be distracted by a tasty-looking leaf if it's just sensed a predator nearby. It should be scared until it runs to its burrow. In the same way, a robot whose battery level drops from 20% to 19% shouldn't be as concerned as one whose level drops from 2% to 1%.

Emotions narrow the range of available actions while cognition broadens it.

Those are important points. My only issue here has been with some posters implying we can't solve it, and with what people are considering AI.

Is the problem to create a human-like intelligence? Or is it to create a creative general problem solver?

My main point is that there is no theoretical reason that AI can't be created.

Skepticism is not a position; it is an approach to claims.
Science is not a subject, but a method.
09-10-2016, 04:03 PM
RE: A.I. on my mind lately
Apologies, Chas; it looks like I misunderstood what you were saying. I quite agree that there is no theoretical limitation on creating strong AI, or even human-like intelligence, given enough time and resources. The issue is one of practicality. People do not appreciate just how much time and how many resources will be required. Basically, I don't think we have enough resources available, whether because Moore's law is coming to an end or because the world is running out of resources faster than we can make progress in this field.

In all likelihood we'd need something like biological or quantum computing to overcome these practical limitations, and those are technologies that don't even properly exist yet, so we can only speculate about how they could be used. For example, a biological computer made up of engineered cells would allow us to embody the agent, but would be harder to engineer. Quantum computers are likely to be very large and cooled to just above zero kelvin, so they could only be used via some kind of wireless network, which means latency for a real-time system. Or maybe there will be some form of processing that will take us by surprise, like chemical or DNA computing.