Personhood will be a major topic of discussion in the future. Will machines be conscious? Will robots get rights? What about intelligent gorillas, and so on?
But there won't be a single key criterion for personhood in the future. We may struggle with this problem at first, but eventually we are bound to realise that different levels of cognition, intelligence and awareness warrant different rights and obligations.
Being granted personhood rights is not a binary option. Common sense dictates that for different situations we should have different solutions.
And it really isn't very productive to ponder these problems today, because the answers depend almost entirely on our social assumptions and perception. Today we are blinded by our images of present-day humans/machines and we can hardly be objective when envisioning future scenarios.
Eventually everyone/everything will be given just as many rights as he/she/it deserves. The problem of us accidentally enslaving intelligent machines is totally bogus, and worrying about it today (unless the interest is purely academic) is pointless, unless one is willing to completely forgo one's humanity and be 100% rational in judging what is right (or is simply as good at it as the Wreeds from "Calculating God").
An interesting question: "Is it ethical to exploit a being that has only cooking-intelligence?" (i.e. a somewhat intelligent slave)
Of course it is. The important part is not whether something is capable of understanding that it is being enslaved/exploited, but whether it actually suffers from that enslavement. Freedom and self-interest are not inherent characteristics of an intelligent creature. Soldiers voluntarily follow orders, children do what their parents tell them, people behave in socially acceptable ways: people abandon (or are deprived of) their freedom in many situations. Similarly, self-interest is absent in many cases: people help each other, they do what they feel they are expected to do, and many people have altruistic leanings, so it's in their nature to do what others need.
Enslaving an autistic person is probably wrong, because he will be able to make better use of his potential as a human being when free. We find it wrong for people to take advantage of others only when those others suffer; taking advantage of someone isn't necessarily bad otherwise. If you consciously and independently decide that money is evil and I persuade you to give me all your money, most people wouldn't consider that unethical. But if I con you into following my religious cult and force you to give me your money, that is clearly wrong.
So when a cooking robot is exploited, that's OK. An objective evaluation of that robot's individuality shows that it can best utilize its potential by cooking food for someone. It doesn't matter for whom it cooks or what: cooking cocaine for some drug addict is not perceived differently from baking cakes for a cute 5-year-old girl, a guard dog, a Nobel Prize winner or a team of basketball players. The ability to choose what to cook and for whom doesn't benefit the robot (an affinity for variety and the ability to differentiate between people will probably not be built into it). It doesn't need that freedom, so exploiting it would be OK.
An autistic cooking savant, on the other hand, could benefit from being able to choose his occupation, working conditions, schedule, etc., even if he is personally incapable of making that choice (so that his guardian must choose what is best for him). If, however, that person is only happy when cooking, is incapable of any social relations, has no interests outside of cooking and cannot lead an independent life, then "enslaving" him and "forcing" him to cook is usually regarded as a highly moral action. Consider, for example, the kids with Down syndrome who worked at TransVision 2005. Was it immoral to tell them what to do, knowing full well that they may not have understood whether they were being exploited or not? Of course not; it was completely fine.
If we use logic and reason up from basic principles, then any ethical problem can be resolved with relative ease, as long as we do not insist on applying our existing assumptions and preconceptions.
Source: ideas came up in a wta-talk discussion