Please can Sam Harris shut the fuck up about Artificial Intelligence
08-10-2016, 06:15 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 06:14 AM)Chas Wrote:  
(08-10-2016 06:00 AM)TheInquisition Wrote:  Just wait until the Microsoft bots meet the Apple bots.

The Microsoft bots will shit their pants and reboot, while the Apple bots will panic and drain their irreplaceable batteries.
The Samsung bots will be so horrified, they'll catch fire and explode.

Skepticism is not a position; it is an approach to claims.
Science is not a subject, but a method.
[+] 2 users Like Chas's post
08-10-2016, 06:47 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 06:06 AM)TheInquisition Wrote:  Don't you think AI research has reached a critical point though? It's starting to yield things like self-driving cars, which is more of a commercial application than pure research funded by government. I think this is a very important change: it's already producing usable benefits, which brings it out of the government research sphere into the commercial sphere.

Harris running his mouth might impact government funding, but I think there is some serious commercial interest in this field now and there isn't anything anyone can do to slow it down at this point.


I absolutely do not think that AI research has reached a critical point, but I wouldn't blame anyone for thinking that it has. The problem is one of scalability, and many companies and research projects have failed because people do not appreciate this pitfall. They create a prototype or some smart program that works in a very specialised case and assume that, because it works there, they can build something useful from it. Then they find that their AI doesn't scale.

It's called the curse of dimensionality, and it doesn't just affect AI but any computer program that needs to take into account multiple real-world variables. Say, for example, you have 10 different sensors on a fighter aircraft and want to predict when a component will fail. You could plot all this on a graph and try to analyse the hyperdimensional space. Add one more sensor, though, and the space you need to work with grows exponentially. The travelling salesman problem is a classic example of this kind of blow-up: given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city? Add enough cities and you soon need more computing time than is available in the history of the universe.
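To make the blow-up concrete, here's a toy brute-force search in Python (the city coordinates are made up purely for illustration):

```python
# Brute-force travelling salesman: try every route, keep the shortest.
from itertools import permutations
from math import dist, factorial

cities = {"A": (0, 0), "B": (3, 4), "C": (6, 0), "D": (3, -2)}

def route_length(route):
    # Total distance visiting each city once and returning to the start.
    stops = list(route) + [route[0]]
    return sum(dist(cities[a], cities[b]) for a, b in zip(stops, stops[1:]))

names = list(cities)
best = min(permutations(names), key=route_length)
print(best, route_length(best))

# The search space grows factorially with the number of cities:
# fixing the start, 20 cities already means 19! routes to check.
print(factorial(19))
```

Brute force is fine for a handful of cities; the factorial at the end is why it stops being fine.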

There is a reason why the human brain has so many neurons and such a high connectivity between each neuron.

So the problem is that people expect progress to follow an exponential curve: it's taken this long to achieve self-driving cars, so what will happen in another five years? What they aren't taking into account is that we reached this point because of the exponential doubling of processing power from Moore's law over many decades. Our understanding of intelligence hasn't progressed anywhere near as fast; in fact nothing else has. And Moore's law is coming to an end.

Another aspect of the issue of scalability is that each small advance in AI has large consequences because it affects many people. For example, whole call centres full of people can now be replaced by an automated system. The automated system is extremely constrained, but that doesn't matter much to the thousands of people looking for new jobs. Sure, talk about that; it's an issue of socio-economics rather than AI. What Sam Harris is talking about is science fantasy.
[+] 2 users Like Mathilda's post
08-10-2016, 06:50 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 06:07 AM)EvolutionKills Wrote:  Fair enough. But it unfortunately wasn't the driving force behind the technology. Plus if we didn't have a valuable use for the technology, like generating electricity, we probably wouldn't be having this conversation.

AI development is a potentially powerful and valuable tool, and it's that very value and power that gives me pause. Given its world-ending capability, it's frightening how incredibly blasé we are towards nuclear armaments. We cannot pretend that such power won't in some fashion be turned towards warfare, and that is not something to be taken lightly, let alone after AI is on the battlefield.


Drones are fucking awesome. I can't wait until they start delivering pizza. But of course, they had to attach missiles to them and fuck it up. We don't need to be giving them AI too. We can't get a handle on weaponized drones, so no, I wouldn't trust us with AI just yet.

So why aren't we talking about other subjects so far in the future that they are completely irrelevant to the most pressing needs of today? Like for example the means to edit people's personalities using neuroscience, or interstellar travel allowing us to meet aliens and to cause an alien invasion? Or that Physics research is dangerous because it will allow us to create a quantum bomb that will blow up the entire planet? These are just as ludicrous as the idea that AI will cause a digital apocalypse.
08-10-2016, 06:52 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 06:14 AM)Chas Wrote:  
(08-10-2016 06:00 AM)TheInquisition Wrote:  Just wait until the Microsoft bots meet the Apple bots.

The Microsoft bots will shit their pants and reboot, while the Apple bots will panic and drain their irreplaceable batteries.

The Linux bots will work perfectly but will be completely ignored because they are largely unintelligible to the public.
[+] 1 user Likes Mathilda's post
08-10-2016, 07:00 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 06:50 AM)Mathilda Wrote:  
(08-10-2016 06:07 AM)EvolutionKills Wrote:  Fair enough. But it unfortunately wasn't the driving force behind the technology. Plus if we didn't have a valuable use for the technology, like generating electricity, we probably wouldn't be having this conversation.

AI development is a potentially powerful and valuable tool, and it's that very value and power that gives me pause. Given its world-ending capability, it's frightening how incredibly blasé we are towards nuclear armaments. We cannot pretend that such power won't in some fashion be turned towards warfare, and that is not something to be taken lightly, let alone after AI is on the battlefield.


Drones are fucking awesome. I can't wait until they start delivering pizza. But of course, they had to attach missiles to them and fuck it up. We don't need to be giving them AI too. We can't get a handle on weaponized drones, so no, I wouldn't trust us with AI just yet.

So why aren't we talking about other subjects so far in the future that they are completely irrelevant to the most pressing needs of today? Like for example the means to edit people's personalities using neuroscience, or interstellar travel allowing us to meet aliens and to cause an alien invasion? Or that Physics research is dangerous because it will allow us to create a quantum bomb that will blow up the entire planet? These are just as ludicrous as the idea that AI will cause a digital apocalypse.


Tu quoque fallacy much?

I wouldn't trust us with memory-rewriting capabilities or quantum bombs either. As already stated, our handling of the splitting of the atom has been rather juvenile. So should we just go forward with no thought for the morrow? Certainly not. Should we stop moving forward? That's simply not possible, because if not us, then who? Should we have a dialogue so that hopefully we have the details well hashed out before we reach the point of no return? Of course, and I fail to see how anyone can reasonably argue against that.

That said, I'm more worried about the implications of elective human engineering explored in works like Gattaca, than I am in apocalyptic artificial intelligence seen in the Terminator franchise.

08-10-2016, 07:16 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 07:00 AM)EvolutionKills Wrote:  
(08-10-2016 06:50 AM)Mathilda Wrote:  So why aren't we talking about other subjects so far in the future that they are completely irrelevant to the most pressing needs of today? Like for example the means to edit people's personalities using neuroscience, or interstellar travel allowing us to meet aliens and to cause an alien invasion? Or that Physics research is dangerous because it will allow us to create a quantum bomb that will blow up the entire planet? These are just as ludicrous as the idea that AI will cause a digital apocalypse.


Tu quoque fallacy much?

I wouldn't trust us with memory-rewriting capabilities or quantum bombs either. As already stated, our handling of the splitting of the atom has been rather juvenile. So should we just go forward with no thought for the morrow? Certainly not. Should we stop moving forward? That's simply not possible, because if not us, then who? Should we have a dialogue so that hopefully we have the details well hashed out before we reach the point of no return? Of course, and I fail to see how anyone can reasonably argue against that.

Except we're not having a proper conversation about AI. The likes of Sam Harris aren't reasonably discussing the implications of AI; they're killing off the field before it has barely started, to feed their own careers. They're parasites. And they can do this with the suggestion of a digital apocalypse because people are largely ignorant of the subject, whereas most people realise that talking about an alien invasion is quite irrelevant to our futures. And because so-called 'experts' jump on the bandwagon, people think it's more likely than it is.

This is why I brought up the other science-fantasy arguments that they could be making: not to make a tu quoque fallacy, but to get across just how far-fetched this talk of a digital apocalypse is.

Let's have a conversation about AI and digital apocalypses when it's feasible within our lifetimes.
08-10-2016, 07:47 AM (This post was last modified: 08-10-2016 07:54 AM by Gloucester.)
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
TheInquisition wrote:

Quote:Don't you think AI research has reached a critical point though? It's starting to yield things like self-driving cars . . .

I don't see self-driving cars as anything like AI. They are super-sophisticated versions of the early-1970s "mouse", or "turtle", that could follow a track and avoid obstacles. OK, these can spot people, but that is no great shakes; there are phone apps that can recognise faces and put names to them, I believe. Great with something like Google Glass at a conference: preload the names and faces and be able to recognise everyone!

More worrisome are ideas like the "swarm bots" that recognise each other and gang up! Now, intelligence plus something a bit more aggressive . . .




AI implies the ability to be, to some degree, sentient - the rest are just preprogrammed appliances.

Actually, as I posted, I remembered the DARPA autonomous vehicle challenge.




Tomorrow is precious, don't ruin it by fouling up today.
[+] 3 users Like Gloucester's post
08-10-2016, 09:05 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
Two fiction books sort of relevant to this thread, which take computers, gaming, the Google Glass concept etc. and stretch them into a malignant AI, and are damn good stories, are Daniel Suarez's "Daemon" and its sequel "Freedom".

In some bits the stretch is not that far . . .

Tomorrow is precious, don't ruin it by fouling up today.
08-10-2016, 09:42 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 07:16 AM)Mathilda Wrote:  
(08-10-2016 07:00 AM)EvolutionKills Wrote:  Tu quoque fallacy much?

I wouldn't trust us with memory-rewriting capabilities or quantum bombs either. As already stated, our handling of the splitting of the atom has been rather juvenile. So should we just go forward with no thought for the morrow? Certainly not. Should we stop moving forward? That's simply not possible, because if not us, then who? Should we have a dialogue so that hopefully we have the details well hashed out before we reach the point of no return? Of course, and I fail to see how anyone can reasonably argue against that.

Except we're not having a proper conversation about AI. The likes of Sam Harris aren't reasonably discussing the implications of AI; they're killing off the field before it has barely started, to feed their own careers. They're parasites. And they can do this with the suggestion of a digital apocalypse because people are largely ignorant of the subject, whereas most people realise that talking about an alien invasion is quite irrelevant to our futures. And because so-called 'experts' jump on the bandwagon, people think it's more likely than it is.

This is why I brought up the other science-fantasy arguments that they could be making: not to make a tu quoque fallacy, but to get across just how far-fetched this talk of a digital apocalypse is.

Let's have a conversation about AI and digital apocalypses when it's feasible within our lifetimes.

I don't see how a generalized AI could be feasible in the immediate future. It might be useful for researchers in the field to start talking of a clear distinction between a robust, generalized AI and the specialized and limited kinds that we see being developed now.

The pop culture version of AI is quite different from the reality and there is a significant gulf between these kinds of AI.

Gods derive their power from post-hoc rationalizations. -The Inquisition

Using the supernatural to explain events in your life is a failure of the intellect to comprehend the world around you. -The Inquisition
08-10-2016, 11:12 AM (This post was last modified: 08-10-2016 11:20 AM by Mathilda.)
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 09:42 AM)TheInquisition Wrote:  I don't see how a generalized AI could be feasible in the immediate future. It might be useful for researchers in the field to start talking of a clear distinction between a robust, generalized AI and the specialized and limited kinds that we see being developed now.

The pop culture version of AI is quite different from the reality and there is a significant gulf between these kinds of AI.

Exactly. And it's not just a matter of processing power, we really do not have a clue how to even go about it.

A naive way to implement AI (i.e. good old-fashioned classical AI) in a housekeeping robot would be to write a clause like

If dog is hungry then feed dog.

But how do you recognise what a dog is? How do you know that a dog is hungry? How do you feed it? Feed it what? What about a cat? A hamster? A rabbit that wants to eat leaves? A python that wants to eat a rat? How do you even pick up the pet food? What if it's been moved? Again, this is about scalability. The real world is noisy and there are a myriad of ways in which it can change. This is the very reason why we want AI.
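To make that concrete, here's a toy sketch of the classical, symbolic approach in Python (all the species, foods and situations are hand-invented for illustration):

```python
# A toy "good old-fashioned AI" housekeeping rule, in the spirit of
# "if dog is hungry then feed dog". Every symbol below is hand-coded:
# the program has no way to recognise a dog, only to match labels.
def feed_pet(pet, pantry):
    # Hand-written mapping from pet species to food -- one rule per case.
    diets = {"dog": "dog food", "cat": "cat food",
             "rabbit": "leaves", "python": "rat"}
    food = diets.get(pet["species"])
    if food is None:
        return f"no rule for a {pet['species']} -- do nothing"
    if not pet["hungry"]:
        return "pet not hungry"
    if food not in pantry:
        return f"{food} not found -- the rules don't say where to look"
    return f"gave {food} to the {pet['species']}"

print(feed_pet({"species": "dog", "hungry": True}, {"dog food"}))
print(feed_pet({"species": "hamster", "hungry": True}, {"dog food"}))
```

Every new pet, food or situation needs another hand-written rule, and the robot still has no idea what a dog actually is.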

Computers are very fast idiots. Everything has to be explicitly stated. The whole point of AI is to make our computer programs and robots more flexible so we don't have to micro manage them and to make them more robust to operating in a noisy world.

So if you have any kind of symbolic commands then there will be a mapping problem from the non-symbolic real world to what the computer can adapt to.

Add to that, real intelligence self organises. We don't have other people directly training the neurons inside our heads. We're part of a loop of Environment -> Senses -> Action selection -> Action -> Environment.

This means that as a starting point for generalised strong intelligence that can adapt to some arbitrary environment we need a non symbolic system that self organises to incoming signals that change over time. This could be a system of interconnected neurons or dynamical systems etc.
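As an illustrative sketch of what "self-organises to incoming signals" means at the smallest possible scale, here's a single neuron adapting on-line to an unlabelled input stream using Oja's rule, a stabilised form of Hebbian learning (the input stream here is made up):

```python
# A single self-organising unit: weights adapt on-line to whatever
# signals arrive, with no labels and no external trainer.
import random

def oja_step(w, x, lr=0.05):
    # y is the neuron's response; weights move toward the input
    # direction, with a decay term that keeps their norm bounded.
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [random.random() for _ in range(2)]
for _ in range(2000):
    # Unlabelled input stream: two noisy sensors driven by one
    # underlying signal, so inputs mostly vary along (1, 1).
    s = random.gauss(0, 1)
    x = [s + random.gauss(0, 0.1), s + random.gauss(0, 0.1)]
    w = oja_step(w, x)

print(w)  # weights align with the dominant direction of the input
```

The neuron ends up tuned to the structure of its input without anyone telling it what that structure was, which is the loop described above in miniature.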

Not only this, but the system needs to adapt on-line and act in the real world in real time. It can't be some neural network that is trained off-line, tested, and then rolled out to hundreds of thousands of robots that then don't have to learn. Not if it's to be the kind of strong generalisable AI that Sam Harris is talking about.

So using that as a starting point for strong generalisable AI, we're faced with some fundamentally difficult questions. Say that intelligence is the ability to adapt to an unknown environment, how do we as AI designers create robots that adapt to an environment that we do not know about ourselves? How do we tell a house keeping robot not to feed the pet cat to the pet python without using any symbols? How can it learn on-line what a pet is and then to recognise it afterwards? How do we even encode the need to feed a pet?

For a long time now we've been able to create stimulus/response systems, but those only work at the current time step. How do we get planning in a non-symbolic, self-organising, real-time system? How do we learn sequences? Or learn which actions are better in the long run? How do we create memories? How do we learn and apply utility (value) to some neutral sensory input? How do we arbitrate between exploration and exploitation? For example, if we have two urgent needs and we perform some action or actions which are sub-optimal in satisfying those needs, how and when should the robot decide between continuing to perform them and trying something different?
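The exploration/exploitation question does have a standard answer in one very narrow setting: epsilon-greedy action selection in bandit problems. This toy sketch (the payoff numbers are invented) illustrates the trade-off, not a solution to the general robot-arbitration problem above:

```python
# Epsilon-greedy: a standard explore/exploit rule for bandit problems.
import random

def epsilon_greedy(estimates, epsilon=0.1):
    # With probability epsilon try something different (explore);
    # otherwise perform the action that has worked best so far (exploit).
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda a: estimates[a])

random.seed(1)
true_payoffs = [0.3, 0.7, 0.5]           # hidden from the agent
estimates, counts = [0.0, 0.0, 0.0], [0, 0, 0]
for _ in range(5000):
    a = epsilon_greedy(estimates)
    reward = 1.0 if random.random() < true_payoffs[a] else 0.0
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]  # running mean

print(estimates)  # estimates approach the hidden payoff probabilities
```

Even here the agent only balances a handful of fixed, pre-labelled actions with a scalar reward; none of that structure comes for free in a robot embedded in a noisy world.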

Most of our brain isn't even devoted to what we'd call intelligence. Most of it is devoted to analysing sensory input and controlling our bodies. Take vision, for example. If I hold an apple in front of you, your brain will still recognise it as the same object if I move it up and down, towards you and away from you, partially hide it, or change the lighting so it's a different colour. And just picking up a kettle, for example, involves a massive amount of intelligence: timely co-ordination of different muscle movements based on different sensory inputs, and so on. A generalisable strong AI would need to learn to do all this by itself, just like an animal does, and then feed the signals from this into the rest of its brain devoted to memory, action selection etc.

Self-driving cars and Go-playing computers have none of these issues because they are not self-organising. They have simple networks which have data presented to them and are trained off-line. The neural networks they use basically act as statistical functions and are used by symbolic computer programs. There are practical reasons for this as well: a company needs to test, understand and know how any system it produces will work. It wouldn't want to be sued, for example. Clients get very nervous even with simple three-layer artificial neural networks because they seem like unexplainable black boxes.
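The "statistical function used by a symbolic program" pattern looks roughly like this sketch: a tiny model fitted once off-line, then frozen and called from ordinary symbolic code (the braking data and the scenario are made up for illustration):

```python
# A tiny model trained off-line, then used as a frozen statistical
# function by a symbolic controller. Nothing self-organises at run time.
import math

def train_offline(samples, epochs=2000, lr=0.5):
    # Logistic regression by gradient descent: y ~ sigmoid(w*x + b).
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = 1 / (1 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

# Made-up training data: distance to obstacle -> brake (1) or not (0).
data = [(0.5, 1), (1.0, 1), (1.5, 1), (2.5, 0), (3.0, 0), (4.0, 0)]
w, b = train_offline(data)            # training happens once, off-line

def controller(distance):
    # Ordinary symbolic program consuming the frozen model's output.
    p_brake = 1 / (1 + math.exp(-(w * distance + b)))
    return "BRAKE" if p_brake > 0.5 else "cruise"

print(controller(0.8), controller(3.5))
```

Once deployed, the model is just a fixed curve queried by if/else logic, which is exactly why it is testable and explainable in a way a continually learning system wouldn't be.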
[+] 4 users Like Mathilda's post