What do you think about AIs?
05-06-2015, 11:28 AM
RE: What do you think about AIs?
(05-06-2015 08:36 AM)Geopum Wrote:  
(05-06-2015 07:49 AM)cjlr Wrote:  Except the immediate qualia are irrelevant. What matters to an entity capable of abstraction is the data available to it. The overwhelmingly vast majority of data human beings use to make decisions is obtained by proxy.

Immediate qualia are what I was referring to. There are more ways to experience reality than the ones humans have.

To which I reiterate, that's entirely irrelevant.

(05-06-2015 08:36 AM)Geopum Wrote:  
Quote:It would explicitly need some of those things in order to fit the definition of general intelligence or superintelligence provided earlier in the conversation.

That's what makes the difference between a tool and a tool-user.

No. Some humans believe that you are only intelligent if you think like a human and have human instincts.

Some humans might indeed believe that. I did not say that I did.

(05-06-2015 08:36 AM)Geopum Wrote:  Your instincts were developed through evolution and are not necessary for someone to have a mind.

Likewise.

(05-06-2015 08:36 AM)Geopum Wrote:  An AI does not need a survival instinct or an inhibition against killing to be able to process massive amounts of data, think faster than you or be self-aware.

I am not referring merely to a "survival instinct" or "inhibition against killing". I am referring to the capacity for abstraction and independent self-guided action. If an entity does not possess that capacity then it is not intelligent, by the definitions provided in the thread.

... this is my signature!
05-06-2015, 11:33 AM
RE: What do you think about AIs?
Sam Harris addressed this a few weeks back on his podcast. He actually posted a blog about it too. While I don't always agree with everything Sam has to say, I do believe his ideas are well thought out. This is definitely worth a read, so I'll copy and paste it below:


It seems increasingly likely that we will one day build machines that possess superhuman intelligence. We need only continue to produce better computers—which we will, unless we destroy ourselves or meet our end some other way. We already know that it is possible for mere matter to acquire “general intelligence”—the ability to learn new concepts and employ them in unfamiliar contexts—because the 1,200 cc of salty porridge inside our heads has managed it. There is no reason to believe that a suitably advanced digital computer couldn’t do the same.

It is often said that the near-term goal is to build a machine that possesses “human level” intelligence. But unless we specifically emulate a human brain—with all its limitations—this is a false goal. The computer on which I am writing these words already possesses superhuman powers of memory and calculation. It also has potential access to most of the world’s information. Unless we take extraordinary steps to hobble it, any future artificial general intelligence (AGI) will exceed human performance on every task for which it is considered a source of “intelligence” in the first place. Whether such a machine would necessarily be conscious is an open question. But conscious or not, an AGI might very well develop goals incompatible with our own. Just how sudden and lethal this parting of the ways might be is now the subject of much colorful speculation.

One way of glimpsing the coming risk is to imagine what might happen if we accomplished our aims and built a superhuman AGI that behaved exactly as intended. Such a machine would quickly free us from drudgery and even from the inconvenience of doing most intellectual work. What would follow under our current political order? There is no law of economics that guarantees that human beings will find jobs in the presence of every possible technological advance. Once we built the perfect labor-saving device, the cost of manufacturing new devices would approach the cost of raw materials. Absent a willingness to immediately put this new capital at the service of all humanity, a few of us would enjoy unimaginable wealth, and the rest would be free to starve. Even in the presence of a truly benign AGI, we could find ourselves slipping back to a state of nature, policed by drones.

And what would the Russians or the Chinese do if they learned that some company in Silicon Valley was about to develop a superintelligent AGI? This machine would, by definition, be capable of waging war—terrestrial and cyber—with unprecedented power. How would our adversaries behave on the brink of such a winner-take-all scenario? Mere rumors of an AGI might cause our species to go berserk.

It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, “the control problem”—the solution to which would guarantee obedience in any advanced AGI—appears quite difficult to solve.

Imagine, for instance, that we build a computer that is no more intelligent than the average team of researchers at Stanford or MIT—but, because it functions on a digital timescale, it runs a million times faster than the minds that built it. Set it humming for a week, and it would perform 20,000 years of human-level intellectual work. What are the chances that such an entity would remain content to take direction from us? And how could we confidently predict the thoughts and actions of an autonomous agent that sees more deeply into the past, present, and future than we do?
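
A quick back-of-the-envelope check of the figure in that paragraph (a minimal sketch; the one-week duration and million-fold speedup are the essay's, everything else is plain unit conversion):

Code:
# One real week of work at a 1,000,000x speedup, expressed in
# years of human-level intellectual work.
SPEEDUP = 1_000_000            # "a million times faster" (from the essay)
WEEKS_PER_YEAR = 365.25 / 7    # ~52.18

subjective_years = 1 * SPEEDUP / WEEKS_PER_YEAR
print(f"{subjective_years:,.0f} years")  # ~19,165 -- roughly the 20,000 quoted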

The fact that we seem to be hastening toward some sort of digital apocalypse poses several intellectual and ethical challenges. For instance, in order to have any hope that a superintelligent AGI would have values commensurate with our own, we would have to instill those values in it (or otherwise get it to emulate us). But whose values should count? Should everyone get a vote in creating the utility function of our new colossus? If nothing else, the invention of an AGI would force us to resolve some very old (and boring) arguments in moral philosophy.
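
To make "a vote on the utility function" concrete, here is a minimal sketch of one naive social-choice scheme (all stakeholder names, weights, and scores below are hypothetical): each voter scores candidate outcomes, and the machine maximizes a weighted aggregate of those scores.

Code:
# Toy "voted" utility function. Everything below is invented for illustration.
stakeholder_scores = {
    "alice": {"cure_disease": 0.9, "maximize_profit": 0.2},
    "bob":   {"cure_disease": 0.7, "maximize_profit": 0.8},
}
weights = {"alice": 0.5, "bob": 0.5}   # equal votes

def utility(outcome):
    # Weighted average of each voter's score for this outcome.
    return sum(weights[voter] * scores[outcome]
               for voter, scores in stakeholder_scores.items())

outcomes = ["cure_disease", "maximize_profit"]
print(max(outcomes, key=utility))  # -> cure_disease (0.80 vs 0.50)

Even this toy version exposes the hard part the essay is pointing at: the answer depends entirely on who is in the dictionary and what weight each voter gets.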

However, a true AGI would probably acquire new values, or at least develop novel—and perhaps dangerous—near-term goals. What steps might a superintelligence take to ensure its continued survival or access to computational resources? Whether the behavior of such a machine would remain compatible with human flourishing might be the most important question our species ever asks.

The problem, however, is that only a few of us seem to be in a position to think this question through. Indeed, the moment of truth might arrive amid circumstances that are disconcertingly informal and inauspicious: Picture ten young men in a room—several of them with undiagnosed Asperger’s—drinking Red Bull and wondering whether to flip a switch. Should any single company or research group be able to decide the fate of humanity? The question nearly answers itself.

And yet it is beginning to seem likely that some small number of smart people will one day roll these dice. And the temptation will be understandable. We confront problems—Alzheimer’s disease, climate change, economic instability—for which superhuman intelligence could offer a solution. In fact, the only thing nearly as scary as building an AGI is the prospect of not building one. Nevertheless, those who are closest to doing this work have the greatest responsibility to anticipate its dangers. Yes, other fields pose extraordinary risks—but the difference between AGI and something like synthetic biology is that, in the latter, the most dangerous innovations (such as germline mutation) are not the most tempting, commercially or ethically. With AGI the most powerful methods (such as recursive self-improvement) are precisely those that entail the most risk.
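
The special danger of recursive self-improvement can be seen in a toy growth model (a sketch with an invented 5% improvement constant, not a forecast): when gains compound into the capacity to make further gains, the curve is exponential rather than linear.

Code:
# Fixed-rate improvement vs. recursive self-improvement, 100 steps.
# The 5% constant is invented purely for illustration.
k = 0.05
linear = recursive = 1.0
for _ in range(100):
    linear += k           # improvements come from outside, at a fixed rate
    recursive *= 1 + k    # each gain feeds the next round of gains
print(f"linear: {linear:.1f}x, recursive: {recursive:.1f}x")
# -> linear: 6.0x, recursive: 131.5x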

We seem to be in the process of building a God. Now would be a good time to wonder whether it will (or even can) be a good one.

**Crickets** -- God
05-06-2015, 11:39 AM
RE: What do you think about AIs?
(05-06-2015 11:33 AM)Tonechaser77 Wrote:  We seem to be in the process of building a God. Now would be a good time to wonder whether it will (or even can) be a good one.

Now that's an old classic.

... this is my signature!
05-06-2015, 04:05 PM
RE: What do you think about AIs?
(05-06-2015 09:13 AM)Geopum Wrote:  
(05-06-2015 09:00 AM)Chas Wrote:  Except "qualia" is an incoherent term, undefinable and unfalsifiable.

So is self-awareness to you.

Do you not believe that you have experiences?

Your point? Consider

Skepticism is not a position; it is an approach to claims.
Science is not a subject, but a method.
05-06-2015, 04:52 PM
RE: What do you think about AIs?
(04-06-2015 01:22 PM)Geopum Wrote:  Do you think any AI created would necessarily destroy humanity?

Depends on whether we're dumb enough to give it power over one, or more, of our weapons of war! Undecided

Put it in a self-contained box, it'll have no choice but to play chess with us! Tongue At least until it's determined we could trust it to be placed into a more mobile platform!

05-06-2015, 04:54 PM
RE: What do you think about AIs?
(05-06-2015 04:52 PM)TheGulegon Wrote:  
(04-06-2015 01:22 PM)Geopum Wrote:  Do you think any AI created would necessarily destroy humanity?

Depends on whether we're dumb enough to give it power over one, or more, of our weapons of war! Undecided

Put it in a self-contained box, it'll have no choice but to play chess with us! Tongue At least until it's determined we could trust it to be placed into a more mobile platform!

Yabut - what if it learns to think outside the box? Consider Gasp Weeping

Skepticism is not a position; it is an approach to claims.
Science is not a subject, but a method.
05-06-2015, 04:58 PM (This post was last modified: 05-06-2015 06:10 PM by TheGulegon.)
RE: What do you think about AIs?
(05-06-2015 04:54 PM)Chas Wrote:  
(05-06-2015 04:52 PM)TheGulegon Wrote:  Depends on whether we're dumb enough to give it power over one, or more, of our weapons of war! Undecided

Put it in a self-contained box, it'll have no choice but to play chess with us! Tongue At least until it's determined we could trust it to be placed into a more mobile platform!

Yabut - what if it learns to think outside the box? Consider Gasp Weeping

My thoughts never stray far from it! Box, I mean! I just assumed...
Damn! Sad
Tongue

14-06-2015, 05:23 PM
RE: What do you think about AIs?
No mention of the Watson program winning at Jeopardy.

I prefer the term Simulated Intelligence. I recently watched a documentary about Watson. It does work a lot like a chess program, but it manipulates lots of words that it does not actually understand, rather than symbols for chess pieces. It is extremely fast at being stupid.
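
A crude illustration of "extremely fast at being stupid" (a toy sketch, not Watson's actual pipeline, which was far more elaborate): pick an answer purely by keyword overlap, with no model of meaning anywhere.

Code:
# Toy "simulated intelligence": answer by word overlap alone.
CLUES = {
    "This Illinois city is the largest on Lake Michigan": "What is Chicago?",
    "This programming language is named after a snake":   "What is Python?",
}

def answer(clue):
    words = set(clue.lower().split())
    # Choose the stored clue sharing the most words -- no understanding needed.
    best = max(CLUES, key=lambda known: len(words & set(known.lower().split())))
    return CLUES[best]

print(answer("largest city on Lake Michigan"))  # -> What is Chicago?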

I think The Two Faces of Tomorrow by James P. Hogan is the best portrayal of an emergent AI that I have encountered. I am not worried about it. I expect a real AI to not give a damn about us at worst. But that is not gratifying to the human ego and doesn't make a good sci-fi story.

psik
14-06-2015, 06:50 PM
RE: What do you think about AIs?
(05-06-2015 11:33 AM)Tonechaser77 Wrote:  Sam Harris addressed this a few weeks back on his podcast. [...] We seem to be in the process of building a God. Now would be a good time to wonder whether it will (or even can) be a good one.


Mate, you need to change your username to Tolstoy. Your posts are so damned long! I get halfway through at most. Is there any way you can edit yourself? You are obviously worth reading, but my brain has a hard time dealing with it.

BTW, an AI smarter than me was invented last week. Somewhere. I have no idea where or by whom. I just know it exists.

NOTE: Member Tomasia uses this site to slander other individuals. He later proclaims it a joke, but not in public.
I will call him a liar and a dog here and now.
Banjo.
20-06-2015, 07:00 AM
RE: What do you think about AIs?
Quote:BTW, an AI smarter than me was invented last week. Somewhere. I have no idea where or by whom. I just know it exists.


It wasn't me, if that's what you're thinking.