Ethics as central processing?

Should you think about your duty, or about the consequences of your actions? Or should you concentrate on becoming a good person?


The Voice of Time
Posts: 2234
Joined: Tue Feb 28, 2012 5:18 pm
Location: Norway

Ethics as central processing?

Post by The Voice of Time »

I was having an idea here. Today, ethics speaks through consensus, the "combined wisdom of the masses". I was thinking: instead of treating ethics as we do today, with scattered voices all trying to penetrate the consensus, why not institutionalize what is already there? Why not create a central processing system for defining the paths of goodness in society? It would base itself on the same "combined wisdom of the masses", except that we would create a logical ruleset for a core, a ruleset which everyone can dispute. From that ruleset we would then expand an institution for information judgement.

Unlike ordinary ethics and the judiciary, things would be predefined rather than made up on the go, and "re-inventing" the system would be done for the sole reason of finding inconsistencies within it. The system is disputable at any level, though since everything is "okayed" at the core logical ruleset, disputes that don't involve the core can only be about inconsistencies. Beyond that, everything would have to be argued about the core itself.

By creating such a core, one would see it toughly bombarded until, at some point, an undefeatable axiom rose. While it too might change, it would stand firm and last long, and, statistically, each time one axiom died, the next would last longer, until at some point the core was so hard it could last for thousands of years at a time. What do you think? It would automate a lot of decisions in society; however, it couldn't run society into a deadlock, or any form of unchangeability, since any system needs to evolve constantly, and it would be an integrated part of the core that society evolves, always becoming better and tougher in its will to be whatever it wants to be.
chaz wyman
Posts: 5304
Joined: Fri Mar 12, 2010 7:31 pm

Re: Ethics as central processing?

Post by chaz wyman »

I think your dream is the nightmare of many sci-fi stories.

Ethics is not about a hard and fast set of rules; it is a discussion about how people feel they ought to act in a given situation.
Computers don't feel; they are therefore not capable of making ethical decisions.
The Voice of Time
Posts: 2234
Joined: Tue Feb 28, 2012 5:18 pm
Location: Norway

Re: Ethics as central processing?

Post by The Voice of Time »

Well, a lot of people don't feel, and yet they form part of the ethical consensus; and sometimes, I would reckon, we ourselves do not feel, for whatever reason. What does that say about us?

For instance, sometimes instead of feeling sorry for a person, we might feel the person is pathetic.

Sometimes, instead of shame we might feel self-righteous.

Sometimes instead of love we might just feel that another person is useful.

Sometimes instead of liking something we might simply appreciate its existence.

And sometimes, instead of loyalty, we might just side with someone out of fear of choosing the wrong side, for whatever reason: our own doubt, our fear of losing, or our inability to find a middle way in the choice between evils.

Computers don't feel, but their actions don't have to be any more inhumane than we can be ourselves. In fact, if we made a solid core, they might be much better, because we would have hardened its judgement in the direction we want for so long. I'm talking mostly about major actions here: a central processing unit with a ruleset and a reach into derived pre-conclusions (it bases itself on conclusions it has made previously, so that it doesn't contradict itself) that tells us whether something is right to do. It wouldn't have to be mandatory to follow its choice, and it would have to be something that can express its reasoning, so that people can spot mistakes in it.
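As a toy sketch of what I mean by a ruleset with derived pre-conclusions (the rule and all names here are invented purely for illustration): the core records every conclusion it reaches and refuses to adopt a new one that contradicts an earlier one.

```python
# Minimal sketch: a rule core that accumulates its conclusions and
# flags any new verdict that contradicts an earlier one for dispute.

class EthicsCore:
    def __init__(self, ruleset):
        # ruleset maps a question to a verdict: True (right) / False (wrong)
        self.ruleset = ruleset
        self.conclusions = {}  # previously derived verdicts

    def judge(self, question):
        verdict = self.ruleset(question)
        prior = self.conclusions.get(question)
        if prior is not None and prior != verdict:
            # The core may never contradict itself; raise for human dispute.
            raise ValueError(f"inconsistency on {question!r}: {prior} vs {verdict}")
        self.conclusions[question] = verdict
        return verdict

# A deliberately crude stand-in ruleset, just to show the mechanism.
core = EthicsCore(lambda q: "harm" not in q)
print(core.judge("share surplus food"))  # True
print(core.judge("harm a bystander"))    # False
```

The point of the sketch is the `conclusions` store: the core's output stays consistent with its own history, which is what makes its reasoning open to being counter-argued.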
chaz wyman
Posts: 5304
Joined: Fri Mar 12, 2010 7:31 pm

Re: Ethics as central processing?

Post by chaz wyman »

The Voice of Time wrote:Well, a lot of people don't feel, and yet they form part of the ethical consensus; and sometimes, I would reckon, we ourselves do not feel, for whatever reason. What does that say about us?

All humans 'feel'.


For instance, sometimes instead of feeling sorry for a person, we might feel the person is pathetic.

Exactly: we feel they are pathetic. We have an emotional response. Passion is at the heart of all moralising.

Sometimes, instead of shame we might feel self-righteous.

Sometimes instead of love we might just feel that another person is useful.

Sometimes instead of liking something we might simply appreciate its existence.

And sometimes, instead of loyalty, we might just side with someone out of fear of choosing the wrong side, for whatever reason: our own doubt, our fear of losing, or our inability to find a middle way in the choice between evils.

I fail to see what you are trying to do. All you seem to be doing is shooting yourself in the foot by telling me that people's characters are multifarious and hard to predict or satisfy.


Computers don't feel, but their actions don't have to be any more inhumane than we can be ourselves. In fact, if we made a solid core, they might be much better, because we would have hardened its judgement in the direction we want for so long. I'm talking mostly about major actions here: a central processing unit with a ruleset and a reach into derived pre-conclusions (it bases itself on conclusions it has made previously, so that it doesn't contradict itself) that tells us whether something is right to do. It wouldn't have to be mandatory to follow its choice, and it would have to be something that can express its reasoning, so that people can spot mistakes in it.
Give me an example of your brave new world!
Let's say you had to collect a range of criteria that a computer could quantify in order to judge what sentence to give a person who had killed, a sentence that would satisfy the public and the victim's family.
What criteria would a computer use to distribute scarce food resources? Whom would it choose to starve?
lennartack
Posts: 84
Joined: Sun May 20, 2012 12:07 pm
Location: Amsterdam

Re: Ethics as central processing?

Post by lennartack »

A very smart future computer may be able to help with ethical questions, but what concerns me is that you want it to be central. There exists no best theory of ethics. Ethics differ from culture to culture and from person to person. A central ethical unit sounds like a dangerous thing.

Why not build a personal ethical assistant, to which you can give your own ruleset? A political party might have one as well. But a whole society? No.
chaz wyman
Posts: 5304
Joined: Fri Mar 12, 2010 7:31 pm

Re: Ethics as central processing?

Post by chaz wyman »

lennartack wrote:A very smart future computer may be able to help with ethical questions, but what concerns me is that you want it to be central. There exists no best theory of ethics. Ethics differ from culture to culture and from person to person. A central ethical unit sounds like a dangerous thing.

Why not build a personal ethical assistant, to which you can give your own ruleset? A political party might have one as well. But a whole society? No.
It is my view that it is utterly absurd to think a computer could significantly help with any ethical issue.
Maybe you could give an example of your thinking here?
The Voice of Time
Posts: 2234
Joined: Tue Feb 28, 2012 5:18 pm
Location: Norway

Re: Ethics as central processing?

Post by The Voice of Time »

chaz wyman wrote:
Exactly: we feel they are pathetic. We have an emotional response. Passion is at the heart of all moralising.


I fail to see what you are trying to do. All you seem to be doing is shooting yourself in the foot by telling me that people's characters are multifarious and hard to predict or satisfy.


Give me an example of your brave new world!
Let's say you had to collect a range of criteria that a computer could quantify in order to judge what sentence to give a person who had killed, a sentence that would satisfy the public and the victim's family.
What criteria would a computer use to distribute scarce food resources? Whom would it choose to starve?
I thought you used "feeling" as just another word for harvesting information in social contexts and responding to it in empathic and compassionate ways. If we extend feeling to encompass all that it can mean, there are a lot of things a computer can indeed do. Pain should be easy to simulate; its behaviour is just sensor-loading coupled with various results, like mind-blindness (the way we can't think and experience great deals of pain simultaneously, though this depends on the person at the time), nervous exhaustion (the way pain can be followed by feeling tired or worn), reflexes (easiest to simulate) and rearrangement (the way one learns from the experience and so seeks to avoid it further). But that is not what I'm talking about anyway; the computer is not supposed to reason as a human being does, because human beings are sometimes awfully greedy. No, a computer would be the perfection of thought: a kind of unassailable logic, not capable of being aroused, not able to cheat others.

All the computer does is make what is already there more efficient, easier to counter-argue (thanks to both its centrality and its openness), and it organizes society and the world as information (not as action: the computer doesn't "control" things, it only reasons). It's simply easier, more efficient, and more capable. I have no solution to your given problem, because my idea is for such solutions in general, not for solving your specific problem. In a world with scarce food resources, of course, it would probably distribute them evenly if that was sufficient to keep people alive; and if it wasn't, it would try to keep as many people alive as possible, while reasoning that those who serve the functions society has designed the computer to hold as its parameters of success would be prioritized.

If society decides that any particular thing is worth more to it than any other, then society will work that out in the parameters.
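As a toy illustration of the food case (the people, the numbers, and the priority weights below are all invented, and the priorities are meant to stand for parameters society sets, not the machine): distribute evenly when there is enough for everyone, otherwise ration in order of society-chosen priority.

```python
def distribute(food, people, need_each):
    """Even split if everyone can survive; otherwise feed as many as
    possible, highest society-assigned priority first."""
    if food >= need_each * len(people):
        share = food / len(people)
        return {name: share for name, _prio in people}
    rations = {name: 0.0 for name, _prio in people}
    # The priority weights are society's parameters, not the machine's choice.
    for name, _prio in sorted(people, key=lambda p: -p[1]):
        if food >= need_each:
            rations[name] = need_each
            food -= need_each
    return rations

people = [("farmer", 3), ("doctor", 5), ("idler", 1)]
print(distribute(90, people, 10))  # plenty: everyone gets an even 30.0
print(distribute(25, people, 10))  # scarce: doctor and farmer fed, idler not
```

The sketch shows why the hard part is not the calculation but the weights: change the priority numbers and the same procedure starves someone else.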
lennartack
Posts: 84
Joined: Sun May 20, 2012 12:07 pm
Location: Amsterdam

Re: Ethics as central processing?

Post by lennartack »

chaz wyman wrote:
lennartack wrote:A very smart future computer may be able to help with ethical questions, but what concerns me is that you want it to be central. There exists no best theory of ethics. Ethics differ from culture to culture and from person to person. A central ethical unit sounds like a dangerous thing.

Why not build a personal ethical assistant, to which you can give your own ruleset? A political party might have one as well. But a whole society? No.
It is my view that it is utterly absurd to think a computer could significantly help with any ethical issue.
Maybe you could give an example of your thinking here?
What is your objection? That computers are not able to reason? Or that they won't be able to reason as well as humans when ethics are involved?

An example: you are given several options in a real-life ethical problem. Each of them results in hundreds of good things and bad things. You feed that into your machine, which, according to your ruleset, assigns plus points and minus points and calculates the best option.
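Sketched as a toy program (the scenario and the point values here are made up, purely to show the mechanism): each option lists its predicted consequences, the ruleset scores each consequence with plus or minus points, and the machine picks the highest total.

```python
# The ruleset encodes the user's own values as plus/minus points.
ruleset = {"saves a life": +100, "tells a lie": -10, "breaks a promise": -25}

# Each option maps to its predicted consequences.
options = {
    "lie to the attacker": ["saves a life", "tells a lie"],
    "keep silent": ["breaks a promise"],
}

def score(consequences):
    # Sum the points for each consequence; unknown consequences score 0.
    return sum(ruleset.get(c, 0) for c in consequences)

best = max(options, key=lambda o: score(options[o]))
print(best)                  # lie to the attacker
print(score(options[best]))  # 90
```

With hundreds of consequences per option the arithmetic stays trivial for the machine, which is the whole appeal; the contested part is only the point values in the ruleset.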
chaz wyman
Posts: 5304
Joined: Fri Mar 12, 2010 7:31 pm

Re: Ethics as central processing?

Post by chaz wyman »

lennartack wrote:
chaz wyman wrote:
lennartack wrote:A very smart future computer may be able to help with ethical questions, but what concerns me is that you want it to be central. There exists no best theory of ethics. Ethics differ from culture to culture and from person to person. A central ethical unit sounds like a dangerous thing.

Why not build a personal ethical assistant, to which you can give your own ruleset? A political party might have one as well. But a whole society? No.
It is my view that it is utterly absurd to think a computer could significantly help with any ethical issue.
Maybe you could give an example of your thinking here?
What is your objection? That computers are not able to reason? Or that they won't be able to reason as well as humans when ethics are involved?

An example: you are given several options in a real-life ethical problem. Each of them results in hundreds of good things and bad things. You feed that into your machine, which, according to your ruleset, assigns plus points and minus points and calculates the best option.
A computer's reasoning is limited by the ability of humans to program the algorithm.
Please answer: "Maybe you could give an example of your thinking here?"
And what are your parameters for the example - that a computer would do better than a person?
chaz wyman
Posts: 5304
Joined: Fri Mar 12, 2010 7:31 pm

Re: Ethics as central processing?

Post by chaz wyman »

The Voice of Time wrote:
chaz wyman wrote:
Exactly: we feel they are pathetic. We have an emotional response. Passion is at the heart of all moralising.


I fail to see what you are trying to do. All you seem to be doing is shooting yourself in the foot by telling me that people's characters are multifarious and hard to predict or satisfy.


Give me an example of your brave new world!
Let's say you had to collect a range of criteria that a computer could quantify in order to judge what sentence to give a person who had killed, a sentence that would satisfy the public and the victim's family.
What criteria would a computer use to distribute scarce food resources? Whom would it choose to starve?
I thought you used "feeling" as just another word for harvesting information in social contexts and responding to it in empathic and compassionate ways. If we extend feeling to encompass all that it can mean, there are a lot of things a computer can indeed do. Pain should be easy to simulate; its behaviour is just sensor-loading coupled with various results, like mind-blindness (the way we can't think and experience great deals of pain simultaneously, though this depends on the person at the time), nervous exhaustion (the way pain can be followed by feeling tired or worn), reflexes (easiest to simulate) and rearrangement (the way one learns from the experience and so seeks to avoid it further). But that is not what I'm talking about anyway; the computer is not supposed to reason as a human being does, because human beings are sometimes awfully greedy. No, a computer would be the perfection of thought: a kind of unassailable logic, not capable of being aroused, not able to cheat others.

All the computer does is make what is already there more efficient, easier to counter-argue (thanks to both its centrality and its openness), and it organizes society and the world as information (not as action: the computer doesn't "control" things, it only reasons). It's simply easier, more efficient, and more capable. I have no solution to your given problem, because my idea is for such solutions in general, not for solving your specific problem. In a world with scarce food resources, of course, it would probably distribute them evenly if that was sufficient to keep people alive; and if it wasn't, it would try to keep as many people alive as possible, while reasoning that those who serve the functions society has designed the computer to hold as its parameters of success would be prioritized.

If society decides that any particular thing is worth more to it than any other, then society will work that out in the parameters.
Maybe you could give an example of what you are trying to do here.
Computers are only as good as the programmer. Would you really allow a computer program to decide on moral issues?
A computer cannot reason. It can only follow an algorithm in a linear fashion. What would be your moral parameters? How would you judge moral values, and how would you input moral data?

Welcome to 1984
lennartack
Posts: 84
Joined: Sun May 20, 2012 12:07 pm
Location: Amsterdam

Re: Ethics as central processing?

Post by lennartack »

chaz wyman wrote:
lennartack wrote:
chaz wyman wrote:
It is my view that it is utterly absurd to think a computer could significantly help with any ethical issue.
Maybe you could give an example of your thinking here?
What is your objection? That computers are not able to reason? Or that they won't be able to reason as well as humans when ethics are involved?

An example: you are given several options in a real-life ethical problem. Each of them results in hundreds of good things and bad things. You feed that into your machine, which, according to your ruleset, assigns plus points and minus points and calculates the best option.
A computer's reasoning is limited by the ability of humans to program the algorithm.
Please answer: "Maybe you could give an example of your thinking here?"
And what are your parameters for the example - that a computer would do better than a person?
I'm not saying the computer would do better than a person (though The Voice seems to have a valid argument that a person may have a hard time reasoning rationally about ethical questions without letting his emotions take over). It would be equivalent to letting a human follow a decision procedure. But as moral problems become more complex, a program will certainly be able to solve them faster and without errors.
chaz wyman wrote:Please answer:"Maybe you could give an example of your thinking here?"
I don't understand what you mean.
chaz wyman wrote: What would be your moral parameters? How would you judge moral values and how would you input mortal data?
We're only philosophizing, not designing the program! It would certainly be difficult, but that makes it even more interesting.
Impenitent
Posts: 4417
Joined: Wed Feb 10, 2010 2:04 pm

Re: Ethics as central processing?

Post by Impenitent »

I, robot

-Imp
The Voice of Time
Posts: 2234
Joined: Tue Feb 28, 2012 5:18 pm
Location: Norway

Re: Ethics as central processing?

Post by The Voice of Time »

chaz wyman wrote: Maybe you could give an example of what you are trying to do here.
Computers are only as good as the programmer. Would you really allow a computer program to decide on moral issues?
A computer cannot reason. It can only follow an algorithm in a linear fashion. What would be your moral parameters? How would you judge moral values, and how would you input moral data?

Welcome to 1984
Well, first of all, there is no such thing as "moral data"; it's just ordinary data. People would create a framework, as part of the core, deciding what kind of data is necessary to uphold the functions society has decided it must succeed in. A computer does indeed follow algorithms, but when it "reasons", the algorithms are just there to make reasoning possible; if you are willing to work with algorithms and have no great need for them to use as little memory as possible, then there is no limit to what you can achieve in terms of "reasoning".

I'm not the one who would build the core, so I wouldn't be the one to judge. The people who made the core of the computer would decide such things. And what you are forgetting is that a computer is not only as good as the programmer, but as good as the SUM of all programmers working on it under good direction, plus all the other people who assist in its making. Otherwise we would never have such things as Google or Windows or Mac or Ubuntu, if only one man could work on each all the time.

I'm imagining a headquarters, in some random town or city, that keeps a monstrously big and competent computer, constantly evolving and remade to fit society's need for it to reason well. This computer has people working around it to constantly improve its analyzing and reasoning capacities. It is all a futuristic idea, really, building on technology not yet present. But what would be interesting for now is if somebody started a research project to try to achieve some sort of pre-capable device that could be built on further: it might initially perform some small tasks, and as time went by and it evolved, it could start taking on bigger tasks, maybe in a couple of hundred years' time.
chaz wyman
Posts: 5304
Joined: Fri Mar 12, 2010 7:31 pm

Re: Ethics as central processing?

Post by chaz wyman »

The Voice of Time wrote:
chaz wyman wrote: Maybe you could give an example of what you are trying to do here.
Computers are only as good as the programmer. Would you really allow a computer program to decide on moral issues?
A computer cannot reason. It can only follow an algorithm in a linear fashion. What would be your moral parameters? How would you judge moral values, and how would you input moral data?

Welcome to 1984
Well, first of all, there is no such thing as "moral data"; it's just ordinary data. People would create a framework, as part of the core, deciding what kind of data is necessary to uphold the functions society has decided it must succeed in. A computer does indeed follow algorithms, but when it "reasons", the algorithms are just there to make reasoning possible; if you are willing to work with algorithms and have no great need for them to use as little memory as possible, then there is no limit to what you can achieve in terms of "reasoning".

I'm not the one who would build the core, so I wouldn't be the one to judge. The people who made the core of the computer would decide such things. And what you are forgetting is that a computer is not only as good as the programmer, but as good as the SUM of all programmers working on it under good direction, plus all the other people who assist in its making. Otherwise we would never have such things as Google or Windows or Mac or Ubuntu, if only one man could work on each all the time.

I'm imagining a headquarters, in some random town or city, that keeps a monstrously big and competent computer, constantly evolving and remade to fit society's need for it to reason well. This computer has people working around it to constantly improve its analyzing and reasoning capacities. It is all a futuristic idea, really, building on technology not yet present. But what would be interesting for now is if somebody started a research project to try to achieve some sort of pre-capable device that could be built on further: it might initially perform some small tasks, and as time went by and it evolved, it could start taking on bigger tasks, maybe in a couple of hundred years' time.
Most of your posts and ideas are sound. This one is not.
Indeed, there is no such thing as moral data, and neither can a computer do moral reasoning. That's the whole point.

The idea of a " a monstrously big and competent computer" is indeed the basis for many a science fiction horror story.

Ridiculous.
chaz wyman
Posts: 5304
Joined: Fri Mar 12, 2010 7:31 pm

Re: Ethics as central processing?

Post by chaz wyman »

lennartack wrote: We're only philosophizing, not designing the program! It would certainly be difficult, but that makes it even more interesting.
Hey? We are philosophising about the possibility that a computer could be used to decide on moral issues.
That would have to include asking whether such a program is even possible.
..... obviously