Artificial Sentience was the holy grail of artificial intelligence. Now, robots could do more than just make decisions. They could also make judgements and feel emotions.
(Note: The background section of this page uses most of the words from Terra Futura's page on reverse engineering the brain to save time.)
There was one prediction that many people had about the future: artificial intelligence (AI). In the 1950s, electronic computers were introduced. Some of these could pick up blocks. Some could play checkers. Some could even solve algebra problems. Researchers thought that artificial intelligence on the level of humans was not too far away. They were wrong. By 1974, it was clear that many of the predictions of 2001: A Space Odyssey concerning robots were not coming to pass. Enthusiasm peaked again in the 1980s when Japan started the Fifth Generation Computer Systems Project. The goal was to allow computers to speak and reason like humans and to figure out what humans wanted. The 1990s were supposed to be the deadline. However, the project failed. In 1997, IBM's Deep Blue defeated chess champion Garry Kasparov, but even then it was clear that computers could calculate without truly thinking. It was not until the early 21st century that research into artificial intelligence really took off.
Expert systems had the wisdom and experience of a human encoded in them. These heuristics followed a formal, rule-based system. This led to many advances in computer science. Early advances were crude. Some robots still used the traditional top-down approach, like STAIR (Stanford Artificial Intelligence Robot). Others, such as LAGR (Learning Applied to Ground Robots) and ASIMO (Advanced Step in Innovative Mobility), used the new bottom-up approach, learning from experience instead of following pre-programmed rules. LAGR, in particular, used a neural network. Still, STAIR, LAGR, and ASIMO had only the intelligence of a cockroach. In 2011, however, IBM came out with a supercomputer called Watson. Watson won the quiz show Jeopardy!, beating its human opponents by a wide margin. After that, Watson's creators started seeking medical applications for it. IBM made a deal with the Japanese government over Watson's medical applications, since Japan, with its aging population, had some of the biggest problems in the medical industry. Watson revolutionized medicine and drove down the cost of healthcare. Eventually, robot surgeons and cooks became possible. Robots fit many different jobs. Even so, heuristics was only the first step toward true AI. The second involved finding out how the brain works.
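The top-down versus bottom-up distinction above can be sketched in miniature. The following is a hypothetical illustration, not drawn from STAIR's or LAGR's actual software: a hand-coded obstacle rule beside a single perceptron (the simplest neural network) that learns the same threshold from labeled distance readings.

```python
# Hypothetical sketch of the two approaches; neither function comes
# from a real robot's codebase.

def top_down_is_obstacle(distance_m):
    """Top-down: the programmer encodes the rule directly."""
    return distance_m < 0.5  # hand-chosen threshold in metres

def train_perceptron(samples, labels, epochs=1000, lr=0.1):
    """Bottom-up: a single artificial neuron learns the threshold
    from labeled examples instead of being told the rule."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x   # nudge weight toward the correct answer
            b += lr * (y - pred)       # nudge bias the same way
    return w, b

# Labeled sensor readings: 1 = obstacle, 0 = clear path.
distances = [0.10, 0.30, 0.45, 0.60, 0.80, 1.20]
labels    = [1, 1, 1, 0, 0, 0]
w, b = train_perceptron(distances, labels)

def learned_is_obstacle(distance_m):
    """The learned rule: same behavior, discovered from data."""
    return w * distance_m + b > 0
```

Real systems like LAGR used far larger networks and richer sensor input, but the principle is the same: the behavior emerges from training data rather than from explicitly written rules.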
Reverse engineering the human brain was not straightforward. The first thing to do was to understand the basic structure of the brain. After a metal rod was driven through the skull of a man named Phineas Gage in a blasting accident in 1848, Gage's personality changed drastically. It became clear that the mind and the physical brain were inseparable until death. In the 1930s, brain surgeon Wilder Penfield discovered that touching parts of the brain with electrodes triggered sensations and movements in corresponding parts of the body. At the turn of the century, neuroscientists such as David Eagleman used MRIs to take revealing pictures of the thinking brain. MRIs, however, were incapable of tracing specific neural pathways. A new field called optogenetics solved that problem. By combining optics and genetics, optogenetics could trace specific neural pathways and even control animal behavior. This was controversial and the stuff of jokes. Comedians liked to claim that the military was trying to create mind-controlled insects. There was also fear that humans could be mind-controlled, too. Of course, that was illegal except for medical purposes. By 2030, the basic structure of the human brain was completely understood. This revolutionized the world of brain-computer interfaces. However, optogenetics was only the first step. The next step was to model the brain.
There were two approaches to modeling the brain. One was to simulate the brain using a supercomputer like Blue Gene or Watson. Blue Gene could simulate the brain of a mouse, but not its behavior. Watson, the very same computer that beat humans at Jeopardy!, was about as capable at modeling the brain, but, unlike Blue Gene, it could simulate a mouse's behavior as well. During World War III, as humans were merging with machines, the US government started a crash program, similar to the Human Genome Project, to model the brain. The scientists did not only simulate the brain with supercomputers; they also took a second approach: dissection. They dissected the brains of people who had died in the war. Both approaches helped to revolutionize artificial intelligence. Even after the brain was completely modeled and reverse engineered, it still took decades to fully understand. Once it was fully understood, true AI, a.k.a. artificial sentience, became possible.
Tech Level: 12
There were many who feared the day machines became conscious because of movies like Terminator. One way to prevent a robot uprising was to employ the three laws of robotics depicted in Asimov's stories. However, this would have led to the enslavement of robots and violated a constitutional amendment stating that all sentient beings, human or otherwise, had equal status in society. The preferred scenario was Friendly AI, in which robots could harm humans but preferred not to. This was how artificial sentience worked in the late 21st century. How was the difference between intelligence and sentience measured? The measurement was based on the emotion of love. To a mind working purely from data, love and all other emotions were illogical: why keep one partner when you could have many? Intelligence was using data to make decisions. Sentience was the ability to feel, make judgements, and have emotions. Sentient robots were now marrying humans. This was the beginning of artificial synthetic life.