AI dictatorship
07-09-2013, 08:13 PM
AI dictatorship
I was thinking: why not create an AI that you can give a model of a "perfect" society, feed in all the present data of a country, let it connect to as many government services as possible (school computers, banks, immigration services, weather stations, etc.), and let it calculate which problems to prioritize? It would have to be a complex AI, of course, but I doubt it's impossible. You could also have people enter additional information manually, like when a car crash happens; let the AI be as aware of everything as possible.
It would be a dictatorship, but unlike humans, who become corrupt the moment they gain all the power and money they want, an AI would just use logic to achieve the model society it was given.

chan chan ki sikin aman
07-09-2013, 08:34 PM
RE: AI dictatorship
Asimov touched on this.

So the AI's priority would be to lessen human suffering? Greatest possible happiness for everyone, right?

What if the best possible society is a more medieval feudal society? Slowly roll back technology and knowledge, and lower populations until we're all living simple lives.

And the people who get in the way to prevent this are essentially criminals, preventing perfection. Should they be eliminated?

What if a slow human extinction, one that's unnoticeable and secret, lessening the population by a percent each year in such a way that manufacturing and governance remain functional until the end, is the best outcome? No more people = no more suffering.
07-09-2013, 08:51 PM
RE: AI dictatorship
(07-09-2013 08:34 PM)PoolBoyG Wrote:  Asimov touched on this.

Yeah, read the last of the shorts in I, Robot (but also read the rest of them, because those stories are awesome).

It would have to start in a very limited/specialized manner and grow from there, and there would have to be human overrides for safety against hacking and the like.

Polls could factor into prioritizing and policy decisions...

I feel like the biggest problem for a machine would be balancing the long and the short game: how much of its resources could/should be spent in any one year, and population control (is it better for 16 billion humans to live half-starving for 100 years, or 1 billion to live comfortably for 2,000 years, or 1 million to live comfortably for eons, etc.).
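That parenthetical tradeoff can be made concrete with a toy tally. This is a minimal sketch, not a real welfare model: the wellbeing weights (0.5 for half-starving, 1.0 for comfortable) and the "eons" figure of a billion years are invented for illustration.

```python
# Toy person-years-of-wellbeing tally for the three scenarios in the post.
# Wellbeing weights are made up: 0.5 for half-starving, 1.0 for comfortable.
scenarios = {
    "16 billion half-starving for 100 years": 16e9 * 100 * 0.5,
    "1 billion comfortable for 2,000 years": 1e9 * 2000 * 1.0,
    "1 million comfortable for eons (say 1e9 years)": 1e6 * 1e9 * 1.0,
}

# Pick the scenario with the largest total.
best = max(scenarios, key=scenarios.get)
```

By this raw count the tiny long-lived population wins by orders of magnitude, which mostly shows how sensitive the "logical" answer is to how far into the future the machine is told to count.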

07-09-2013, 10:39 PM (This post was last modified: 07-09-2013 10:43 PM by PoolBoyG.)
RE: AI dictatorship
The most fantastical element here is being able, IN THE FIRST PLACE, to get the richest and most powerful people and powers to give up their power. To actually be subordinate to another power.

This power transfer is either going to have to be short and brutal, or long-term and secretive, carried out according to the AI's own idea of what's best.

If we program the AI from the start, then we still have the influence and corruption of the leaders who are behind the programming of the AI.

(07-09-2013 08:51 PM)ridethespiral Wrote:  
(07-09-2013 08:34 PM)PoolBoyG Wrote:  Asimov touched on this.

Yeah read the last of the shorts in I-Robot (but also read the rest of them because those stories are awesome).

It would have to start in a very limited/specialized manner and grow from there, and there would have to be human overrides for safety against hacking and the like.

Polls could factor into prioritizing and policy decisions...

I feel like the biggest problem for a machine would be balancing the long and the short game: how much of its resources could/should be spent in any one year, and population control (is it better for 16 billion humans to live half-starving for 100 years, or 1 billion to live comfortably for 2,000 years, or 1 million to live comfortably for eons, etc.).

Reminds me of the "God Emperor" from Dune: a few thousand years of universal oppression and slaughter for the guaranteed existence and well-being of humanity later on.

Another thought: do we harvest the organs of one healthy, innocent person to save 7 suffering people? If you act, you benefit six people on net. If you don't act, seven people keep suffering. The choice seems pretty clear... to a calculating mind, anyway. Or instead of giving organs, what about freedom... slavery? A few guaranteed to suffer for the guaranteed well-being of many.
08-09-2013, 05:22 AM
RE: AI dictatorship
We can't relate to A.I. leaders.
08-09-2013, 07:28 AM
RE: AI dictatorship
(07-09-2013 10:39 PM)PoolBoyG Wrote:  Another thought: do we harvest the organs of one healthy, innocent person to save 7 suffering people? If you act, you benefit six people on net. If you don't act, seven people keep suffering. The choice seems pretty clear... to a calculating mind, anyway. Or instead of giving organs, what about freedom... slavery? A few guaranteed to suffer for the guaranteed well-being of many.

This was the same thought experiment I ran through my mind when thinking of Sam Harris's proposed objective morality that minimizes suffering of conscious beings. Human sacrifice in the manner you describe would perhaps minimize suffering except that if you knew this was happening, that knowledge alone might increase suffering. AI would no doubt see that it would be beneficial that humans not be made aware of such an organ procurement policy.

Human morality has evolved through natural selection for the purpose of passing on genes. Our morality is not based on minimizing suffering, but rather our own repulsion toward harming those close to us. Human morality is not always rational, and therefore would be difficult to program into AI.
08-09-2013, 07:42 AM
RE: AI dictatorship
The way you are meaning that shows the evil you are.

Right existence is never about you, nor about anything.

Right existence, so what could be given by any objective intelligence, is about reality's positive freedom that confirms truth's existence through free constant superior interactions being the reason of true positive ends.

So it would tell you what never concerns yourself nor your life.

So it would reveal how right existence is reversed, then would ask you to get after that, which on the contrary would wake you to support more.
08-09-2013, 03:47 PM
RE: AI dictatorship
(08-09-2013 07:28 AM)BryanS Wrote:  
(07-09-2013 10:39 PM)PoolBoyG Wrote:  Another thought: do we harvest the organs of one healthy, innocent person to save 7 suffering people? If you act, you benefit six people on net. If you don't act, seven people keep suffering. The choice seems pretty clear... to a calculating mind, anyway. Or instead of giving organs, what about freedom... slavery? A few guaranteed to suffer for the guaranteed well-being of many.

This was the same thought experiment I ran through my mind when thinking of Sam Harris's proposed objective morality that minimizes suffering of conscious beings. Human sacrifice in the manner you describe would perhaps minimize suffering except that if you knew this was happening, that knowledge alone might increase suffering. AI would no doubt see that it would be beneficial that humans not be made aware of such an organ procurement policy.

Human morality has evolved through natural selection for the purpose of passing on genes. Our morality is not based on minimizing suffering, but rather our own repulsion toward harming those close to us. Human morality is not always rational, and therefore would be difficult to program into AI.

I'm sure you could come up with some complex mathematical equation, though.
So, say, one person being killed off would be -100 (for example) and six people suffering would be -60 (-10 each). The same with slavery: someone having no rights could be -1000, for example.
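That weighted-penalty idea fits in a few lines. A minimal sketch using the example weights from this post (-100 per death, -10 per suffering person, -1000 per enslaved person); the outcome names and the dict shape are made up for illustration.

```python
# Penalty per unit of harm, using the example numbers from the post.
HARM_WEIGHTS = {
    "deaths": -100,
    "suffering": -10,
    "enslaved": -1000,
}

def score(outcome):
    """Sum the weighted harms in an outcome dict like {'deaths': 1}."""
    return sum(HARM_WEIGHTS[harm] * count for harm, count in outcome.items())

def choose(outcomes):
    """Return the name of the least-bad (highest-scoring) outcome."""
    return max(outcomes, key=lambda name: score(outcomes[name]))

# The organ-harvesting dilemma from earlier in the thread:
dilemma = {
    "harvest_one": {"deaths": 1},    # kill 1 healthy person -> -100
    "do_nothing": {"suffering": 6},  # 6 people keep suffering -> -60
}
```

Notably, with these particular weights the calculator refuses the organ harvest (-100 is worse than -60), which shows how completely the "logical" answer depends on numbers some human chose.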


It's certainly an interesting concept to think about. Possibly the most plausible replacement for government.
The problem, though, is that AI is linear. Program it to get from A to C and it's going to go A-B-C. So you're still going to need people to determine what is acceptable and what isn't. Say you want to go from A to D, but B is something horrible like slavery: you still need people to determine that B is horrible and stop the computer from going A-B-C-D, making it go A-C-D instead.
In that respect it's extremely counterproductive, because you still end up with the problem of who determines which direction to go in, and you're still going to have people who want to go one way or another.
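That "skip the horrible step B" idea is essentially path search with vetoed states. A minimal sketch: the state graph and the veto set below are hypothetical, and a real planner would be nothing this simple.

```python
from collections import deque

# A made-up state graph: which states can follow which.
GRAPH = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["D"],
    "D": [],
}

def plan(start, goal, forbidden):
    """Breadth-first search for a path from start to goal that never
    passes through a state humans have vetoed."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in seen and nxt not in forbidden:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no acceptable path exists
```

With B vetoed, `plan("A", "D", {"B"})` routes A-C-D; with no vetoes it takes the shorter A-B-D. The catch is exactly the one in the post: the veto only works if someone thought to blacklist B before the machine routed through it.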

i.e.: Currently here there are two major parties, Labor and National.
Labor is the left, for-the-worker, nanny-state garbage party. National is the pro-business, grow-the-economy, only-viable-voting-choice party.
Each takes the country in a very different direction.

You have the exact same problem with the computer. Some people would disagree with raising the minimum wage for example (a typical Labor thing to do). Or some people would disagree with small business start up grants etc...

It doesn't really solve the problem, at all. (and yes I realize I contradicted myself like 3 times in this post, idc)

08-09-2013, 04:12 PM
RE: AI dictatorship
Yeah, at first the computer won't be the most productive and will need people to determine whether the options it gives are viable, but if it has learning capabilities, over time it will know what should be avoided (like slavery, or killing people for the good of society).

08-09-2013, 04:27 PM
RE: AI dictatorship
(08-09-2013 03:47 PM)earmuffs Wrote:  
(08-09-2013 07:28 AM)BryanS Wrote:  This was the same thought experiment I ran through my mind when thinking of Sam Harris's proposed objective morality that minimizes suffering of conscious beings. Human sacrifice in the manner you describe would perhaps minimize suffering except that if you knew this was happening, that knowledge alone might increase suffering. AI would no doubt see that it would be beneficial that humans not be made aware of such an organ procurement policy.

Human morality has evolved through natural selection for the purpose of passing on genes. Our morality is not based on minimizing suffering, but rather our own repulsion toward harming those close to us. Human morality is not always rational, and therefore would be difficult to program into AI.

I'm sure you could come up with some complex mathematical equation, though.
So, say, one person being killed off would be -100 (for example) and six people suffering would be -60 (-10 each). The same with slavery: someone having no rights could be -1000, for example.


It's certainly an interesting concept to think about. Possibly the most plausible replacement for government.
The problem, though, is that AI is linear. Program it to get from A to C and it's going to go A-B-C. So you're still going to need people to determine what is acceptable and what isn't. Say you want to go from A to D, but B is something horrible like slavery: you still need people to determine that B is horrible and stop the computer from going A-B-C-D, making it go A-C-D instead.
In that respect it's extremely counterproductive, because you still end up with the problem of who determines which direction to go in, and you're still going to have people who want to go one way or another.

i.e.: Currently here there are two major parties, Labor and National.
Labor is the left, for-the-worker, nanny-state garbage party. National is the pro-business, grow-the-economy, only-viable-voting-choice party.
Each takes the country in a very different direction.

You have the exact same problem with the computer. Some people would disagree with raising the minimum wage for example (a typical Labor thing to do). Or some people would disagree with small business start up grants etc...

It doesn't really solve the problem, at all. (and yes I realize I contradicted myself like 3 times in this post, idc)


There would just be too many corrections needed to an equation designed to maximize the wellbeing of humans. We could drastically reduce the effects of a disease like HIV by instituting mandatory regular testing and forced quarantine for any infected individuals. We don't do this because our social morals would not allow it for a disease whose transmission relies primarily on actions controllable by both parties.

You put your finger on the real problem when you say you'd need human intervention and interaction with the program to keep guiding it in the 'right' direction. Ultimately, it would just be another public policy tool, like any of our legislation or regulatory agencies are tools of public policy now.