Please can Sam Harris shut the fuck up about Artificial Intelligence
08-10-2016, 11:15 AM
Please can Sam Harris shut the fuck up about Artificial Intelligence
I wasn't aware that thinkers who are determinists about free will would be so fearful of AI.

I know he has a fondness for the whole inner wonder and strangeness of the brain, though.

"Allow there to be a spectrum in all that you see" - Neil Degrasse Tyson
[+] 1 user Likes ClydeLee's post
08-10-2016, 12:04 PM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
I wonder how many "components" of more complex systems have yet to be tied together? I am way behind even my casual interests.

Thinking about "expert systems": ECG machines that read the wave shape and offer a diagnosis. In essence their programming is analogous to human learning. A teacher/programmer effectively says, "This shape indicates that the Q-T time is short, suggesting . . ." I know it always gets it right for me!

Given enough cash and space, I wonder just how big and "pseudo-intelligent" an expert medical system could be built? I have had three different machines attached to me with two doctors trying to integrate the results into a sensible picture. Lots of "Hmm, but...", "Maybe, but only if...".

But patient data would have to be gathered and interpreted in a different way to "train" such machines. That could take a while.
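A toy sketch of how such an expert system encodes a teacher's rules as code; the thresholds and findings below are illustrative placeholders, not real clinical values:

```python
# Minimal rule-based "expert system" sketch for ECG-style interpretation.
# Thresholds and diagnoses are made-up placeholders, not clinical guidance.

def diagnose(qt_interval_ms: float, heart_rate_bpm: float) -> list[str]:
    """Apply hand-written rules, the way a teacher/programmer encodes them."""
    findings = []
    if qt_interval_ms < 350:
        findings.append("short QT interval")
    elif qt_interval_ms > 450:
        findings.append("prolonged QT interval")
    if heart_rate_bpm > 100:
        findings.append("tachycardia")
    elif heart_rate_bpm < 60:
        findings.append("bradycardia")
    return findings or ["no findings"]

print(diagnose(qt_interval_ms=330, heart_rate_bpm=110))
# ['short QT interval', 'tachycardia']
```

The knowledge lives entirely in the hand-written rules, which is exactly why such systems are brittle: every new wave shape needs a human expert to write another rule.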

Tomorrow is precious, don't ruin it by fouling up today.
09-10-2016, 12:34 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence



"IN THRUST WE TRUST"

"We were conservative Jews and that meant we obeyed God's Commandments until His rules became a royal pain in the ass."

- Joel Chastnoff, The 188th Crybaby Brigade
09-10-2016, 02:01 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(08-10-2016 02:50 AM)Mathilda Wrote:  I'm getting sick of Sam Harris and his ilk posting bloody TED talks or videos about the dangers of Artificial Intelligence running amok and making humans extinct. He talks about having to avoid a digital apocalypse.

It literally is the equivalent of arguing that we should be concerned about space travel because one day we'll invent interstellar travel and meet aliens which will then want to destroy us. In other words, complete science fiction.

If he wants to talk about neuroscience, then fine, go ahead and I'll listen. Sam Harris is a neuroscientist who has hands-on experience in the field. But he does not have hands-on experience of Artificial Intelligence. He doesn't appreciate quite how little progress the field has made. He doesn't understand that most of what is called Artificial Intelligence isn't actually AI but just smart programming. So why the fuck should anyone listen to him on the subject?

There is so little funding or dedicated research in this field compared to any other scientific field. Yet you have so-called 'experts' (in other fields, or in self-promotion) warning against how it can develop. They are promoting themselves with their titillating subjects at the cost of a genuine scientific field that has such potential to benefit humankind.

It is on a par with a Christian church warning against developing Physics two hundred years ago because it could allow us to do terrible things, even though no one at the time would know how.

I could have had a successful scientific career if I had been in any other field; that much is clear to me from working in bioinformatics recently. But I dedicated myself to the field of Artificial Intelligence precisely because of the challenges. Yet the challenges are so great and the progress has been so slow that the ones giving out the funding don't have any idea of how it could be useful. Add to that self-proclaimed experts, neither qualified nor experienced in the field, warning about the dangers of AI, and it's no wonder there hasn't been any progress.

The field of AI has been plagued throughout its history by people talking up what it can do. This is why we have already had multiple AI winters, where funding completely dries up for an entire generation. Yes, there are genuine concerns about algorithms running our lives, but this isn't Artificial Intelligence; this is the field of statistics. This is the result of our monetary system and the need for the economy to constantly expand.

Sam Harris is a neuroscientist so he should at least appreciate the scale of the brain. There are about 100 billion neurons in the brain. Each neuron is more than a mere adding unit, each one has the computational complexity of an artificial neural network. Each neuron will connect to on average 7,000 other neurons. Whereas only now are we getting genuine 8-core processors in computers. Yes we could create a supercomputer made up of a massive cluster of thousands of computers connected together, but this would require 10 megawatts of electricity rather than 100 watts. Which means that you can't exactly embody it inside a single body and have a whole population of them evolving over hundreds of generations. Fact is that Moore's law is nearing the end of its reign and won't last long enough.
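As an aside, the scale figures in that paragraph work out as follows; this is a rough back-of-the-envelope calculation using only the numbers given above:

```python
# Back-of-the-envelope scale figures quoted in the paragraph above.
neurons = 100e9            # ~100 billion neurons in a human brain
synapses_per_neuron = 7e3  # each connects to ~7,000 others on average
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e} synapses")  # prints "7e+14 synapses"

brain_watts = 100    # rough power budget of a biological brain
cluster_watts = 10e6  # the 10 MW supercomputer cluster mentioned above
print(f"cluster uses {cluster_watts / brain_watts:,.0f}x the power")
```

Seven hundred trillion connections at a 100,000-fold power disadvantage is the gap the post is pointing at.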

Sam Harris is spouting pseudo-science which is why I posted in this section.

If Sam Harris wants to be useful, maybe he should use his celebrity status to talk up the good that AI can do and how it could save mankind. There are many, many real and more immediate dangers facing human society right now: antibiotic resistance, climate change, soil depletion meaning there are only 100 harvests left, an economy that needs to expand exponentially in order to function and therefore uses up the Earth's resources at an exponential rate. These are real dangers that need to be addressed far more urgently than some science fiction fantasy.

For me, only space exploitation or fusion power will allow human progress to continue. And for this, AI will be critical because space is a hostile environment for us.

Most of our brain isn't needed if we just wanted to have self-awareness and higher cognitive functions. An AGI could be created much more efficiently than by emulating one of our brains.

A lot of our hardware runs other functions in the body: regulating hormones, heart rate, threat response, all the motor control, etc.

Large brains don't make you smarter. Elephants are smart, but not as smart as us. Many neurons are used up amplifying nerve signals to control their large bodies; large bodies require more neurons and overhead.

Many birds with brains smaller than a walnut display incredible intelligence (learning and problem-solving ability), demonstrating tool use and applying things they have learned to new situations. From that I would say that a human+ AGI doesn't need anywhere near the total compute power of our brain to function.

What I do see is incredible progress being made by Google and others. Computers are getting pretty amazing at categorising images and beating people at most strategy games.

We live in interesting times.

“Forget Jesus, the stars died so you could be born.” - Lawrence M. Krauss
09-10-2016, 09:56 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
Well, here are the words from Sam Harris himself, from his first podcast after his AI TED talk.

"My TED talk on AI risk finally saw the light of day. It's available on my blog and the TED site, and needless to say 15 minutes didn't allow me to say everything I think on the topic. Some people have come away believing that I'm a pure doomsayer, advocating that we pull the brakes on AI. But as I say in the talk, I don't think there are any brakes to pull. Intelligence is our most precious resource; there's no question we want more of it. So I'm not advocating we pull the brakes, and I think we will eventually build general artificial intelligences if we don't destroy ourselves first.

So my concern is that many people who are doing the work seem to assume that we'll figure out the safety issues as we just muddle along, and that there are no special concerns here beyond those that come with any powerful technology: 'don't give your superintelligent AI to the next Hitler'. That doesn't suggest an understanding of the problem, and worse, many seem to imagine that there is some magical harmony between the advances we make in AI and our understanding of how to build it safely. It's almost as if it can't go wrong in principle, as if superhuman intelligence on the part of a machine will of necessity produce superhuman ethics, which is to say a morality that is better than our own. Now that is a very strong assumption that, it seems to me, could easily prove mistaken.

Now it's true that there are smart people who do not share my concerns at all, and I've had at least two of them on the podcast, Neil deGrasse Tyson and David Deutsch, and for reasons that I think I expressed in those conversations I'm not persuaded by their view. I'm actually in good company; there are many smart people who take concerns about AI as seriously as I do, probably the most famous being Stephen Hawking, Bill Gates and Elon Musk. And you could add Max Tegmark, another one of my podcast guests, to that list.

Anyway, I'll talk more about this in the months ahead no doubt, because these issues aren't going anywhere, and I'll be going to at least two conferences on AI in the near term. I'll try to bring you someone on the podcast whose credentials in computer science are impeccable, because mine are nonexistent, and we'll see how that conversation goes."

[+] 1 user Likes EvolutionKills's post
09-10-2016, 10:04 AM (This post was last modified: 09-10-2016 10:08 AM by Mathilda.)
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(09-10-2016 02:01 AM)DeepThought Wrote:  Most of our brain isn't needed if we just wanted to have self-awareness and higher cognitive functions. An AGI could be created much more efficiently than by emulating one of our brains.

A lot of our hardware runs other functions in the body: regulating hormones, heart rate, threat response, all the motor control, etc.

Large brains don't make you smarter. Elephants are smart, but not as smart as us. Many neurons are used up amplifying nerve signals to control their large bodies; large bodies require more neurons and overhead.

But an Artificial General Intelligence needs a body in order for it to be intelligent. This is essentially Searle's Chinese room argument.

Or, to use the analogy that I like: say you took a human baby and stuck it in a sensory deprivation chamber, strapped it down, with pipes feeding food and water into its stomach and pipes taking away waste material, then let it grow for the next 20 years. You would not have an intelligent human being at the end of it. But this is exactly what we hope for from a disembodied artificial general intelligence. Strong AI needs to be embodied; otherwise it's like trying to describe a colour to someone who has been blind from birth. Intelligence is limited by its environment.

And if an agent has a body, then it also needs to be able to sense with it and control it to interact with the environment. How else can we hope for anything to mean anything to an AI?

You're right that the actual action selection part is very small, but you can't have that all by itself. As I said earlier, the visual cortex devotes a huge amount of brain power to just seeing, so if I hold up an apple in front of you then you recognise it regardless of the angle, distance, colour, whether it is partially hidden or moving. This is just as important and difficult as deciding how to act based on the information that we have just seen an apple.

But in the same way that you can do a lot with very little, the field of AI is also one where things that you think should work just don't. Or quite often when you do have some success you find that it's working and you don't even know how. We're talking about extremely complex systems here. You can't just plonk bits together and assume that it will work like a silicon circuit.

I once spent the best part of a year trying to get one simple neural network to arbitrate between two other neural networks that worked exactly the same way. There was no reason to think it wouldn't work; in the end I gave up. Then there was the neural network that I spent two weeks disabling bit by bit, wondering how it was still adapting. If you evolve something, be prepared to devote several months just to figuring out how it works. Experience has taught me to expect my assumptions to be proved wrong and to let the results point me in the right direction.


(09-10-2016 02:01 AM)DeepThought Wrote:  Many birds with brains smaller than a walnut display incredible intelligence (learning and problem-solving ability), demonstrating tool use and applying things they have learned to new situations. From that I would say that a human+ AGI doesn't need anywhere near the total compute power of our brain to function.

True, you can do really good stuff with very little, but as I said before, the problem is one of scalability. There is a very real limit to what a bird can learn or solve. As I said, this is both the challenge and the pitfall of artificial intelligence: initial success doesn't scale, because as with the travelling salesman problem, the problem domain grows exponentially due to the curse of dimensionality.
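For concreteness, the travelling salesman blow-up can be counted directly. This sketch assumes the symmetric case, where a tour and its reverse count as one, giving (n-1)!/2 distinct tours:

```python
# How fast the travelling-salesman search space grows with problem size.
# Assumes the symmetric case: for n cities there are (n-1)!/2 distinct tours.
from math import factorial

def tour_count(n_cities: int) -> int:
    return factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20):
    print(n, "cities:", tour_count(n), "tours")
# 20 cities already give ~6e16 tours
```

Doubling the instance from 10 to 20 cities multiplies the search space by hundreds of billions, which is why brute-force success on small problems says nothing about larger ones.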


(09-10-2016 02:01 AM)DeepThought Wrote:  What I do see is incredible progress being made by Google and others. Computers are getting pretty amazing at categorising images and beating people at most strategy games.

I'll quote what I said earlier in the thread.

(08-10-2016 11:12 AM)Mathilda Wrote:  Self driving cars and Go-playing computers have none of these issues because they are not self organising. They have simple networks which have data presented to them and are trained off-line. The neural networks they use basically act as statistical functions and are used by symbolic computer programs. There are practical requirements to this as well. A company needs to test, understand and know how any system it produces will work. It wouldn't want to be sued for example. Clients get very nervous even with simple three layer artificial neural networks because it seems like an unexplainable black box.

Again it's scalability. You have some initial success and think it will scale up to solve larger problems but it doesn't.
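Mathilda's point that a simple three-layer network is "basically a statistical function" can be shown with a toy sketch. The weights here are hand-picked rather than trained, purely to expose the mechanics; this is the classic XOR construction, not any particular product's network:

```python
# A minimal three-layer feed-forward network. Once its weights are set
# (normally by offline training; here hand-picked to compute XOR), it is
# just a fixed statistical function mapping inputs to an output.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1: float, x2: float) -> float:
    # hidden layer: two units
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)   # behaves like OR
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)  # behaves like NAND
    # output layer: AND of the hidden units -> XOR overall
    return sigmoid(20 * h1 + 20 * h2 - 30)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", round(forward(a, b)))
```

Nothing in the numbers explains *why* it works, which is exactly the "unexplainable black box" feeling clients get even from networks this small.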
[+] 1 user Likes Mathilda's post
09-10-2016, 10:25 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(09-10-2016 09:56 AM)EvolutionKills Wrote:  Well, here are the words from Sam Harris himself, from his first podcast after his AI TED talk.

[full podcast transcript snipped; quoted in EvolutionKills' post above]

Oh great. I can hardly wait to hear more from Sam Harris on this titillating subject.

Name-dropping two cosmologists / theoretical physicists, an entrepreneur and a software engineer doesn't help his argument. Even Bill Gates doesn't have experience of strong Artificial Intelligence. AI is not the same as computer science. The field stagnated for its first 40 years because computer scientists thought that it was; this is what we refer to as classical AI. The majority of what is called AI now is basically statistics.

But warning of the statistical apocalypse just doesn't sound as exciting.
[+] 2 users Like Mathilda's post
09-10-2016, 11:25 AM
RE: Please can Sam Harris shut the fuck up about Artificial Intelligence
(09-10-2016 10:25 AM)Mathilda Wrote:  
(09-10-2016 09:56 AM)EvolutionKills Wrote:  [Sam Harris podcast transcript snipped; see the post above]

Oh great. I can hardly wait to hear more from Sam Harris on this titillating subject.

Name-dropping two cosmologists / theoretical physicists, an entrepreneur and a software engineer doesn't help his argument. [rest quoted in full above]

Well, if only experts were allowed to have opinions, Art Majors would be crammed farther up their own asses than they already are. Tongue

Still, I get the distinct feeling that you two are talking past each other, and that's no basis for a good conversation.
