
Follow-Up to "Some Thoughts About How Machines Could Think"
Rick Garlikov


I am more confident now that machines could be designed and built that would think, and that could also feel emotions and sensations, have a sense of reasonable morality, a sense of reasonable purpose, a sense of beauty, a sense of wonder, a sense of astonishment, and even senses of humor, irony, sarcasm, cynicism, etc.  And I think they can understand and see significance in acts and circumstances, though the significance they see may not be the same significance we humans see.  The emotions and sensations they experience may or may not be exactly the same ones we feel.  The specific mechanisms for doing all these things may be different from how people think and feel, but the underlying operational logic of the mechanisms could be the same (and I will try to make the case that it is), or could still work even if very different.  So in this essay I want to try to make that case more strongly than I did in the first essay.

This is going to be a supplement to that essay, not a complete essay on the topic in itself.  I believe all this because I believe people are, in a sense, thinking machines, and because (1) there are a great many instincts, tendencies, or impulses that seem to be built into human beings (and animals), and I think we could build such basic tendencies, impulses, drives, reactions, etc. into machines, since these are just reactions to particular conditions or stimuli (e.g., instincts are initially automatic responses to kinds of stimuli or conditions, but they can be overridden with training and practice), and because (2) the teaching curve of life, which is tantamount to programming, is extremely steep, labor-intensive, and time-consuming, particularly in modern life.  I see no reason why, if we spent as much time programming and teaching advanced computers and giving them their own learning experiences (with many of the same kinds of basic tendencies programmed or hard-wired into them that animals and humans have) as we do teaching children and adults and letting them learn from their own experiences, we couldn't get machines that think and experience emotions and sensations, etc. pretty much in the same kinds of ways children and adults do.  But (3) we also need to build at least basic logic into thinking machines, and potentially lifelong learning with a wide-open, often contradictory database of a mixture of believed facts and opinions, some better, more reasonable, and/or more reliable than others.
The information from which machines need to learn should be the same sorts of information from which people learn and try to ferret out truth from falsehood by any and every means possible: using logic and trusted resources, trying to make any conclusions consistent with all available evidence, seeking out contradictions where they occur and accounting for all apparent contradictions as reasonably as possible, but always being on the lookout for, and receptive to, new evidence, or previously known facts seen in a new perspective, that conflicts with any belief.  Whenever such conflicts or apparent conflicts occur, the search must go on, even if in the background (as in the backs of people's minds), to resolve them.  Thinking machines, like the best thinking humans, need to be able to recognize and explain real, potential, and apparent contradictions and implied contradictions in order to avoid, resolve, or reasonably discount or dismiss them.  Just as for humans, all of life has to serve as evidence, difficult as that might be.  (4) Machines also need to develop pattern recognition and be able to test 'perceived' patterns (such as those of cause and effect) by testing hypotheses and probability.  (See "Scientific Confirmation".)  Knowledge of facts alone is not sufficient to comprise wisdom and the ability to utilize facts profitably or productively.  One of the common laments about many college students is that even when they know facts, they cannot use them in any meaningful way to analyze and evaluate ideas to which they might be relevant.
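To give a rough sense of what the simplest form of this 'contradiction-seeking impulse' might look like, here is a toy sketch in Python.  It is merely illustrative (the class and method names are my own invention, not any real system): the machine stores its beliefs, refuses to silently overwrite a conflicting one, and instead queues the conflict for background resolution.

```python
# Toy sketch of a belief store with a built-in impulse to notice and
# queue contradictions (P and not-P) rather than silently resolve them.
class BeliefBase:
    def __init__(self):
        self.beliefs = {}          # proposition -> believed truth value
        self.open_conflicts = []   # contradictions awaiting background resolution

    def learn(self, proposition, value):
        """Adopt a belief; if it contradicts an existing one, record the
        conflict so it can be worked on later, as in 'the back of the mind'."""
        if proposition in self.beliefs and self.beliefs[proposition] != value:
            self.open_conflicts.append((proposition, self.beliefs[proposition], value))
        else:
            self.beliefs[proposition] = value

    def is_consistent(self):
        """True only while no unresolved contradiction is pending."""
        return not self.open_conflicts
```

A real system would, of course, need ways of weighing sources and actually resolving the queued conflicts; the point here is only that inconsistency triggers further work rather than being ignored.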

I will use the phrase 'thinking and feeling like people do' or 'thinking and feeling like we do', but I also want to discuss modifying machines so that they learn and know more than we do and behave more morally and less self-contradictorily than we, as a species, tend to do, even though, in some sense, they still will be thinking in the way we do, but will do it better and more compassionately, logically, and sensitively.  This will not take away their "free will" or ability to think or to think independently.  There is nothing about being a good or a reasonable person that makes one have less free will than being a bad or irrational person, and insofar as irrational or wrongful behavior is the result of impulsive biological instincts without the (utilized) ability to control or overrule them, bad, irrational people may even have less free will than good or rational ones.  But surely Mother Teresa had no less free will than Adolf Hitler, and their having had far different ideas did not mean that they had different sorts of thinking mechanisms.  'Thinking differently' in the sense of having different ideas does not mean 'thinking differently' in terms of how one has ideas or consciousness.  Moreover, we may want machines to think more rationally and more sensitively and compassionately than people do (as long as that does not inhibit their 'imaginations' or their diversity of good and worthwhile ideas), so that they avoid many of the kinds of errors people make.  At the same time, there should be some machines interested in exploring ideas, say in science and medicine, that may be improbable but turn out to be true, so you don't want machines all just slavishly believing only what is most probable or what seems most probable at any given time or in regard to the evidence available at that time.  Insofar as the original Star Trek characters Kirk, Spock, and Dr.
McCoy ("Bones") mirror Plato's tripartite division of the mind into reason, emotions, and spirit (which decides between reason and emotion), we don't want machines that are all Spock -- all mere logic, operating only and always on probabilities without some basic instincts.  But we also do not want machines that operate only or primarily on basic instincts, as McCoy often did outside of his medical decisions.  The history of invention is replete with dreams pursued, and often accomplished, by people who were able to invent, discover, and create what most reasonable but unimaginative people thought was impossible, based on prior, narrow experience, and would not even have tried to do.  In the case of photocopying machines, people such as those in charge of IBM not only thought they could not be created but also that there would be no point to them if they could be, since there already was carbon paper for creating duplicate documents.  So we do not want a world of machines that would all operate only on probabilistic logic and not be able to dream of improbable things and successfully act on those dreams to bring them into existence.  In the mid-1990s, when webpages were offered to business people fairly inexpensively, most saw no point in buying or having them ("Why would I need to be on the Internet?"), and even Microsoft was relatively late to the Internet party in terms of designing a web browser and incorporating it into Windows.

As stated at the beginning, however, this is about developing machines that can actually feel sensations and emotions, not just emulate (other) people's feelings in the way a sociopath or hypocrite might or as someone might who is faking feelings such as love, sorrow, pity, curiosity, interest, or a specific feeling as a malingerer fakes pain or a person fakes orgasm to get sex to stop or make the partner feel accomplished.  It is about designing and building machines that can actually have feelings, emotions, and sensations, and that can make new discoveries and have new ideas, not simply follow programmed directions.  (Hereafter I will often refer to androids/computers/robots simply as 'androids', but meaning a computer that can think and perceive things and is mobile and 'able-bodied' or mechanically able enough to be a companion, perceiving what we do, privy to our conversation, and able to be physically helpful, etc.  The mobility and physically helpful parts are not necessary for a machine or computer to think, but it is important for it to be a good robot in the sense we tend to think of robots.)

The essential idea is that animals and people seem to have basic or underlying instincts, urges, tendencies, drives, and impulses, and they, particularly people, have, to different degrees, abilities that utilize those instincts, urges, and drives to develop more complex behaviors, including complex planning, discovery, and language.  I believe that machines can be designed and built to have many of the same kinds of basic tendencies, instincts, urges, drives, and impulses, along with abilities to utilize them in ways that lead to more complex behaviors, including thinking and language that they understand in the way people understand language and thinking, not just emulate.  Some people seem to have not only more complex thought processes but also more complex, or at least different, basic instincts/impulses from other people in that, for example, some people will want to emulate other people's behaviors they see, whereas others will not.  I find this particularly interesting with regard to bad behavior and bad examples.  I do not understand why some people want to emulate behavior they themselves surely must realize in some way is hurtful, bad, unjust, or in some other way wrong, since they were the victims of it.  But often people do that, for example becoming the same sort of bad parents their own parents were.  It may be that different experiences govern what will be desirable to emulate, but it may also be that there are conflicting built-in instincts in some people that others do not have.  I doubt that as of this writing we know all the basic, complex, nuanced kinds of instincts that people might have, and insofar as we do not, and/or do not invent ones that would be desirable for machines to have, initial thinking machines are likely to be less sophisticated in their thinking, or different in their thinking from each other, when faced with similar phenomena.

For example, there seem to be myriad kinds of jokes people find humorous, and different comedians, or comedians in different times, often invent new kinds of jokes and deliveries to make people laugh.  The humor we see in situations can often be dissected logically once it is invented, but the invention of it, and the response we have to a new kind of joke or delivery style that makes us laugh, is fairly immediate and is not at the conscious level.  So it may be that there are all kinds of logical ways to trigger a laugh instinct, or it may be that there are many different possible basic laugh impulses that can be activated by the right stimuli.  In other words, if we created thinking machines that had a sense of humor, they might only find funny the kinds of jokes that trigger the impulses we know to build into them now, which might be only a fraction of the kinds of laugh impulses different people might have.  I don't know, because I don't know how humor evolves or changes in people or from one generation to another.  I don't know whether a machine that finds funny the kinds of jokes, say, that I find funny would also find funny a joke or delivery style that might be new to me in ten years that I would also find funny.

But then, I found funny things my parents never could, such as the "nonsense jokes" of the 1950s, like "What is the difference between an orange?"  If you asked that of people, they all immediately said "An orange and what?" and you would say, "Just an orange.  What is the difference between an orange?"  They would then ask what the answer was, since they had no clue how to answer it, and the answer was "A monkey, because elephants can't walk on lily pads."  Many of us found that hysterically funny because it made no sense and was stupid to the point of absurdity, though in linguistically and logically interesting ways, but many people, particularly older people, did not find it funny at all precisely because "that makes no sense, and is stupid."  Other jokes at the time that were considered nonsense jokes had a different kind of absurdity to them, one where the answer was true in some way but ridiculous, such as "What are the three ways you can tell there is an elephant in your refrigerator?"  "You can smell the peanuts on its breath; you can't get the door closed; and there will be footprints in the jello."  Some people would laugh hysterically at that, and others would just look at you as if you were deranged or just trying to waste their time.

Humor and other kinds of responses we have may be the result of a myriad of evolutionary developments and personal (or cultural) experiences that might be difficult to know until they manifest themselves, so it might be that we can create thinking machines that react in the ways we know we do now without their also reacting the same way to the yet uninvented or undiscovered kinds of stimuli we will react to in that way in the future.  Or it may just be that people have the same basic senses of humor but that different experiences trigger different paths to it, so that we each find different kinds of things funny.  If this is the case, then machines with a basic kind of impulse to laugh will do so at different things, depending on their experiences too.  Insofar as our reactions to stimuli depend on different instincts, rather than different experiences, machines will be more or less complex than people relative to the number of basic instincts we build into them compared to the number that people on average (might) have from nature.  The same sorts of elements will apply to machines being able to write or appreciate fiction, philosophy, and ethics, compose or appreciate music, create or appreciate art, and even do science, math, and logic.  It will be not only the skills built into machines but the experiences they have, and what they deduce or learn from them using those skills, which will foster divergent and/or convergent ideas.

Integrating Sensory Systems
It will also be important for machines to have their different sensory systems integrated in the ways that people and animals do.  That is, animals and people seem to have their basic impulses and their abilities tied in with their sense organs in ways that let them react to visual, auditory, kinesthetic, olfactory, and taste stimuli in various similar ways, as in jumping in a moment of fright at a sudden unexpected visual appearance or a sudden unexpected noise, or reacting to what is written in a similar way as to its being spoken -- such as good news or bad news.  Of course, there are often differences between what we can comprehend (as easily) visually versus orally, in terms of complex images or ideas, and we "take in" a lot of information visually at one time that would take a great many words to describe, particularly in some sort of temporally linear fashion, because words cannot be heard (and understood) all at once the way the things they depict can be seen and (at least to some extent) comprehended all at once (or in much faster linear order -- as in reading a menu versus hearing it spoken, or as in scanning a visual scene even if you cannot take in all the details at once -- though different particular details might stand out to you or be noticed fairly simultaneously).  We can hear many different sounds at once, as in music played by a symphony orchestra, but we cannot hear many words and ideas at once and comprehend them.

We can translate some sound into sight with sonar equipment, and we can translate some patterns of light into patterns of sound that are easier to detect that way.  And basically it seems to me that machines could be built that could do all this too, probably even better, since machines could have all kinds of sonar, radar, ultraviolet, infrared, and other electronic or chemical detection devices we do not (more sensitive than our noses, for example), and could attend to all of them -- so that machines could, for example, have 360-degree awareness in all axes and basically perceive everything around them instead of just what is in front of them visually.  Vision could be omni-directional for machines, as sounds and smells are for humans, though machines could know the source directions more readily.  And if all those means of gathering information could be coordinated, associated with words, phrases, or language in general, and could trigger the basic instincts and perhaps logic, self-learning ability, and discernment and verification of patterns, then we could build them into machines in something like the way nature builds them into us, and machines could think and feel and understand and learn and grow and discover and invent things as people do, and likely better.  In other words, it seems to me that this could be done basically in terms of physical and electro-chemical states and mechanisms of the machine, in combination with pattern recognition, word/phrase associations with 'perceptions', information gathering, and logic programming, such as the principle of non-contradiction -- that statements P and not-P cannot both be true -- and the impulse to try to resolve any cases where a contradiction does appear to hold.  I will try to show here how that might be done in terms of ethical ideas such as fairness and in terms of humor.
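The integration of different sensory channels into one shared basic impulse can be sketched very crudely in code.  In this toy illustration (the channel names and thresholds are invented for the example, not drawn from any real robotics system), a sudden loud sound and a sudden visual change both trigger the same underlying 'startle' reaction, just as the essay suggests a jump of fright can come from either an unexpected sight or an unexpected noise.

```python
# Several sensory channels feed one shared "startle" instinct.
# A reading is a sudden-change score between 0 and 1 for each channel.
STARTLE_THRESHOLDS = {"vision": 0.8, "hearing": 0.7, "sonar": 0.9}

def integrate(readings):
    """Return the channels whose sudden-change score exceeds that channel's
    threshold; any non-empty result fires the shared startle response."""
    return [ch for ch, score in readings.items()
            if score > STARTLE_THRESHOLDS.get(ch, 1.0)]

def startled(readings):
    """The same reaction fires no matter which modality triggered it."""
    return bool(integrate(readings))
```

The design point is that the reaction is defined once and wired to many senses, rather than each sense having its own separate, unrelated response.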

But this needs to be done in a way that does not reduce all ethics, aesthetics, humor, and emotions, or even science, history, or mathematical creativity and insight to mere physical states, because I do not think they all are just those.  I think there is a kind of logic involved in our senses of humor, beauty, justice and morality.  If they were merely physical states devoid of logic, it seems to me that they would then be arbitrary judgments, and I don't believe that is true of all of them, though it is true of some of them, such as 1) frustrations prompted by irrational OCD as opposed to artificial impediments to achieving reasonable, actually important, pursuits, and such as 2) feelings of justice or injustice based on the satisfactions or the frustrations of unreasonable merely egoistic desires rather than on actual and reasonable moral wrongs. 

It seems to me that at least part of understanding words and ideas is associating them with perceptions and experiences, and with other words and ideas we have already learned to associate with those perceptions and experiences.  For example, it does not help you understand the meaning of an unfamiliar word or expression if you are only given a synonym for it that is also unfamiliar to you, but if the meaning can be explained in words or images you do already understand, or if the meaning can be shown to you in terms of a place or activity you see or an emotion you can experience, then you will, to that extent at least, have an understanding of it.  So it is important that language or symbols be associated with perceptions, but it is also important for there to be logic involved.  Both concepts and perceptions can be difficult to acquire or to recognize.  That is easily seen when trying to view what you are being told to look at through a microscope, or when you are shown something like a face in a picture of mud and snow.  Many famous 'optical illusions' are difficult to see one way rather than another, such as the ascending versus descending staircase or the two facial profiles facing each other versus a vase.  Or if someone shows you a cross-section of something from a different angle, it is often difficult to tell what you are looking at.  Unfamiliar words, or words in a dialect with which you are not familiar, are difficult to recognize.  The first time I went to England, it took me three days to be able to understand conversations I was overhearing.  Other examples are understanding the speech of toddlers, which mothers can often do when others cannot, or understanding the mumblings of teenagers, which other teens can often do when adults cannot.

Two of the difficulties in both visual and auditory recognition are 1) distinguishing the main word or object from the background or surrounding visual or verbal objects -- basically seeing or hearing the boundaries of the word or visual object -- and 2) distinguishing which features of the sounds or visual objects go together to form the whole, and in what way.  In one office I did business with, there was a receptionist whose name was Mariella Pila.  But when she would say her name for me before I saw it written, I couldn't tell whether she was saying it was Marielle La Pila or Mary Ella Pila or Mary Elappia or something else; I couldn't tell where the syllables were supposed to 'break'.  In the previously mentioned optical illusion of a face in mud and snow, it is not only difficult to see the outline of the face, to distinguish it from the background, but it is just as difficult to see the internal features, such as the mouth and eyes.  You can no more see, say, the chin or neck or hair as separate from the background than you can see the eyes, nose, and mouth that let you see a face at all, let alone one distinct from the non-face parts of the picture.

In short, it seems to me that much if not all of our knowledge is about what we perceive, including not only objects (and parts of objects) but the patterns, relationships, and logic involved among objects, facts, and ideas, as in math, formal logic, and other theoretical endeavors, such as the invention of games and the development of strategies to play them well, the theoretical parts of science, etc.  Adding language to our abilities and knowledge adds relationships between speech sounds and what we intend them to represent -- whether objects (nouns and noun phrases), actions (verbs and verb phrases), ideas, concepts, patterns, formulas, logic, or any of the myriad other things language tries to capture and convey.  For example, little children first learning language, when they learn a new word, will often point out many of the things it applies to that they see, such as a bird or a fire truck.  They are learning to associate the words with the visual perceptions of the objects.  A computer could do that, just as facial-recognition matching on a computer now will bring up a file with the person's name.  But it could easily (and annoyingly) be programmed to name out loud each person it sees: "There's John Simmons; there's Anne Ryan; there's Michael Jordan; there's Robin Huddleston; oh look, is that Jon Stewart?!  If not, he bears an amazing resemblance to him," etc.  So a computer could be programmed to recognize and name objects by sight, even if it sometimes makes errors; people make errors sometimes too.  It could then learn to associate phrases and sentences with things it sees and/or hears just in the way people do.  And to talk about them, describing what they do, and perhaps trying to ascribe explanations (patterns of behavior) to individuals or to different individuals who act similarly or oppositely.  Is that not much of what people do?
Is that not tantamount to what we consider understanding language and understanding what, and the way, things happen?  Add to that language about ideas, logic, patterns, and other relationships that we discover or notice.  All this would make a machine of the sort that Turing wrote about, one that could converse with people about almost anything another person could converse with them about, and do it at least as well, if not better.  It would likely know more than most people, particularly if it had available to it all the knowledge that is on the Internet, where now you can look up almost any subject, from the causes of yellow leaves on knockout roses, to how to remove a LoJack device from a car, to the symptoms and treatment of Achilles tendonitis, to what is 'trending' on Twitter or Facebook, and where any performer's next live performance will be given or how their last major performance was reviewed, etc.  Computers could have and assimilate much more information than we do and find more patterns in it all than we do, perhaps including the kinds of biting insights Jane Austen or witty and clever political satirists have, the colorful, poetic language and imagery of Shakespeare, and the poetry and power of the prose of a Thomas Jefferson or Martin Luther King, Jr.
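The naming behavior described above -- recognizing someone by sight and announcing the name, with an honest hedge when the match is uncertain -- reduces, at its very simplest, to a lookup from a perceptual signature to a label.  The sketch below is a toy illustration only; the 'signatures' stand in for whatever features a real recognition system would extract, and the names are the ones used in the example above.

```python
# Toy model of recognize-and-name: a perceptual signature maps to a name,
# with a fallback response when no confident match exists.
KNOWN_FACES = {"sig-001": "John Simmons", "sig-002": "Anne Ryan"}

def announce(signature):
    """Name the person if recognized; otherwise admit the failure,
    as a person would when a face only looks vaguely familiar."""
    if signature in KNOWN_FACES:
        return f"There's {KNOWN_FACES[signature]}"
    return "I don't recognize that person"
```

The interesting philosophical work, of course, lies in everything this sketch hides: how signatures are formed from raw perception, and how new associations get learned rather than pre-loaded.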
[Personal Identity and Death -- a momentary digression
Plus, one of the interesting potential future, or at least science fiction, aspects of machines thinking is that, as long as the 'memories' or data of machines can be transferred from one to another with the same manufacturing design, as they can be now from one computer to another, machines would not have to face or fear death -- in terms of the disintegration of their knowledge and recorded biographical history and their ways of responding to experiences and knowledge -- though they could face death if all, or a significant portion, of the data and/or commands were deleted from all the computers that had it, and they could face a kind of death if different computers reacted to, and utilized, the data and 'memories' of experiences in different ways because either their initial programming was different or their different experiences and self-learning were different, just as people now do.  While we can pass on our ideas to others to some extent, that is not the same as preserving them in others the way they might be preserved in us, and it is not the same thing as keeping them in reserve for how we would use them later or what we might gain from them later.  Part of what distinguishes people from one another and makes each of us unique is that we have different ideas and different experiences, with different reactions to, perceptions of, and recollections of them, which we cannot directly transfer to each other, but can only transfer indirectly (and not always very well) through words, pictures, stories, poetry, etc.  Machines would be able to have the actual same data/memory, though they might use it in different ways in the future.  Whether the machine that originally supplied the data would fret over that, or consider it death if it were no longer able to function, I don't know.  It might be a kind of death, but it may not have as much sting.
And if its programming and memory were transferred into totally new machines with no other memory or programming of their own, it might not be death at all, but a virtual immortality as long as the process could be continued into new identical machines. 

One of the questions I ask in philosophy class is whether, if there were a machine that could clone your body and mind (with all their ideas, memories, abilities, etc.) exactly as they are now (say in a distant city, so that you could travel anywhere virtually instantaneously), it would be okay to disintegrate the original each time to prevent overpopulation of "you", with its attendant problems of which of you should keep your job, spouse, home, car, bank and retirement accounts, etc.  What is interesting is that students split on this issue, with some thinking it would be no different from how we proceed from day to day or year to year over time anyway, and others thinking that the clone would not be the same person and that disintegration of the original would be the death of them.  Basically I raise the strange question because it allows me to point out how our concept of identity is complex, and possibly somewhat arbitrary, in regard to any kinds of things that change over time but which we still consider to be the 'same' thing despite the changes.  E.g., if we build an identical replica of our car out of all new parts, we would say that is a different car, though it would be an identical one; but if we replace all the parts in our car over time as they need replacing, we would say that is the same car we originally owned.  The concept of sameness for things that change through time is probably more complex, more arbitrary, and perhaps even more contradictory than most people recognize or imagine.]
But back to current reality: contemporary answering machines emulate language but do not understand it in the way I have in mind, though some of them can relate words or phrases to synonymous ones, which is a simple kind of understanding or ability to use language.  As of this writing, a phone answering machine with pleasant responses available for it to produce can emulate questions and comments by asking you what you are calling about and then convincingly saying "I am sorry, I do not understand; can you say that again in different words?", or, if you say "I think my bill is mistaken", the machine can say "Am I correct that you want to speak to someone in accounting about an error in your bill?" and, when you say "Yes", then say "Okay, I've got it; let me pull up your account and connect you with someone in that department to help you."

Those devices are improving, and sometimes they work pretty well.  But they are not thinking in the sense I am talking about allowing machines to be able to do.  Often they are still not particularly good and are frustrating, because the options you are given are insufficient or unclear as to which one you need to pursue, and they sometimes lead you down the wrong path, from which, by the time you discover that, there is no escape other than to hang up, call back, and try a different route.  Some seem to be programmed now to connect you with a person if you say things the machine is not programmed to recognize, particularly if it has already asked you to repeat your request in different terms and it still does not recognize what you want.  One major company had an answering system that, if the caller became angrily frustrated after repeated failed attempts to say something the machine could utilize and just started swearing, would say something like "OK. Let me connect you with someone who can help you" and then connect you to a human customer service representative.  It was best to just cuss at the machine at the beginning if you knew you needed to be connected to a human.
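The routing behavior these systems exhibit -- keyword matching, retries, and escalation to a human on repeated failure or profanity -- can be sketched in a few lines, which is part of why it does not amount to understanding.  The intents, keywords, and stand-in profanity list below are all invented for illustration, not taken from any actual system.

```python
# Toy call router: surface keyword matching with escalation rules.
INTENTS = {"billing": {"bill", "charge", "account"},
           "repair": {"broken", "repair", "outage"}}
PROFANITY = {"damn", "hell"}  # stand-ins for a real profanity list

def route(utterance, failed_attempts=0):
    """Match words against intent keywords; hand off to a human after
    swearing or two failed attempts, as the essay's example describes."""
    words = set(utterance.lower().split())
    if words & PROFANITY or failed_attempts >= 2:
        return "human"       # escalate to a live representative
    for dept, keywords in INTENTS.items():
        if words & keywords:
            return dept
    return "retry"           # ask the caller to rephrase
```

Note that this is exactly the 'surface connection' between words and departments that the next paragraph attributes to lazy human operators; nothing in it grasps what the caller means.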

Even though these machines are not thinking, many of them do sound like they are, and they are often actually more helpful and do better than really stupid, lazy, or ignorant humans who don't really try to understand your questions and who give answers that are irrelevant or simply wrong, because they do not know the right answer or do not care whether you get it, or who connect you to the wrong department, just to be done with you.  In many cases such lazy or irresponsible operators (who are customer disservice representatives) are freely associating the language you use with whatever pops into their heads, and they are not really understanding you, or even trying to, but just making surface connections between your words and departments or people or answers they associate with those words at a surface level.  That is not significantly different from what a computer could do now, and it is not 'real' or higher order 'thinking' in any meaningful sense when a human does it -- any more than when a machine does it.  Or if the machine does not recognize your vocabulary and just connects you with someone, that is not terribly different from having a human operator who has to connect you to someone else who has or might have greater knowledge about your question or problem, making his/her best guess as to who that might be, or just choosing someone to get you off their hands.  In other words, some well-programmed automated answering systems now are more helpful than poorly trained or poorly motivated human beings, but the machines are not thinking, and the humans are only thinking minimally and poorly, if what they are doing could be called thinking at all.  What I am interested in here are machines that do think and that do it well.
Much of my writing about education is about teaching students for understanding, not just teaching them to memorize material or training them to respond merely automatically, which is tantamount to programming them to a certain extent.  I am interested in machines and humans who can think, understand, and feel in ways that would be considered intelligent, wise, sensitive, and caring.  This essay explores what that really means and involves for humans and what it would mean or involve for androids.

As to basic instincts to be programmed in or learned, let me start with some examples before giving a general principle.  Suppose that when your car is low on gas, instead of just having a gauge that shows you that, it also cries like a baby, crying louder and louder the lower it gets on gas and the longer you take to begin filling the tank; or suppose that instead of a gas gauge, manufacturers just have the infamous 'check engine' light come on when the car begins to run low on fuel, since to laymen the check engine light is the automotive equivalent of a baby's crying -- indicating that something is, or might be, troubling it, or that the light is just being 'fussy' (temporarily malfunctioning as a false positive), but you can't tell what the trouble is or whether it is serious. 

Moving up to the older toddler stage, suppose that the car makes use of its onboard GPS and wireless Internet system to find nearby gas stations, and perhaps even sets a course to take you to one, resisting your efforts to turn it away from the path it has charted.  Suppose that as it gets lower on gas, it resists your attempts to steer it away with even more force.  I don't see that as being impossible to do today.  And it would not be much different from the efforts of a young child to get something it wants that it sees at a supermarket, and being difficult to dislodge from the attempt.  Then, if the car had some sort of self-programmable memory, when we pull into a gas station to fill it up, it remembers that gas station's location and tries to go there.  We might mollify it by taking it to other, more convenient gas stations. 

At some point, with built-in optical scanners, etc., it might recognize gas stations as such, even ones it has not seen before.  If it had a voice recognition system and an audible device, we might be able to "teach" it that we are putting gas in it and have it just ask for gas the next time it starts to get low.  It could start out asking gently, but get more "nagging," loud, and insistent as the gauge gets lower and lower.  At some point, if it has some sort of logic built into it, we could get it to see that it is not seeking a gas station so much as it is seeking gasoline, and that gas stations are normally where gasoline is found in usable form, but that there are other possible sources, such as the fuel can in your garage.  The car could be programmed to nag you about getting gas as well, nagging more insistently (e.g., more frequently, louder, shriller, etc.).  Self-checkout counters in supermarkets almost seem to be nagging (though just following a program) when they keep telling you to place your item in the bagging area, and stop functioning when you don't, or when, after you have paid, they keep repeating "Please take all your items from the bagging area" until you do, which they sense by the weight's being lifted from the bagging area.  It is not a stretch in technology to think that we could have all kinds of sensors for all kinds of things and have programmed logic, able to be reprogrammed by experience, that lets the machine know what messages to give, how to give them most effectively, and under what total set of circumstances.  That would be expensive (at least initially) and unreasonable to do past certain points, but it is not theoretically impossible. 
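The escalating nagging described here is straightforward to sketch in code.  The thresholds, wording, and 'volume' scale below are purely illustrative inventions, not a claim about how any real system is built:

```python
def nag_message(fuel_fraction, minutes_ignored):
    """Map the fuel level (0..1) and how long the warning has been
    ignored to an escalating prompt and a 'volume' from 0 to 10."""
    if fuel_fraction >= 0.25:
        return None  # tank not low enough to warrant nagging
    urgency = (0.25 - fuel_fraction) / 0.25             # grows as the tank empties
    urgency = min(1.0, urgency + minutes_ignored / 60)  # being ignored raises it
    volume = round(10 * urgency)
    if urgency < 0.4:
        text = "Fuel is getting low; consider stopping for gas."
    elif urgency < 0.8:
        text = "Fuel is low.  Please find a gas station soon."
    else:
        text = "Fuel critically low!  Stop for gas now."
    return text, volume
```

The logic is the same as the baby's crying: the message gets louder both as the underlying need grows and as the response is delayed.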

And once done for one machine, it would be done for all (at least all similar ones).  The point is to associate words with all its 'perceptions' in the way people do, and also to program a certain amount of logic into it.  Giving it 'desires' would just involve having it pursue or avoid certain stimuli or physical states.  Androids could then even be self-conflicted, by having instinctual (i.e., programmed) desires for things that it learns from experience yield consequences it prefers to avoid.  And it could be programmed to give more weight to near-term consequences (seeking instant gratification) than to those further in the future.  It would have to learn, as humans do, the blessings of delayed gratification.  And it could be programmed with elements that potentially lead to conflicting desires, such as the desire to achieve things coupled with the desire to avoid doing the work necessary to achieve them, even when it sees the connection between them and sees the necessity of the work.
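The preference for near-term consequences could even be a single built-in parameter.  As a hedged sketch (the payoffs and delays are invented, and the discounting scheme is just the standard exponential one, not a claim about how such a machine must work), an 'impulsive' android is simply one that discounts the future heavily:

```python
def discounted_value(consequences, discount):
    """Value of a course of action as the discounted sum of its
    time-stamped payoffs; consequences is a list of (delay, payoff).
    A low discount factor makes the agent 'impulsive'."""
    return sum(payoff * discount ** delay for delay, payoff in consequences)

# Eating the cake now: pleasant immediately, regretted later.
cake = [(0, 5), (10, -4)]
# Skipping it: a small cost now, a larger payoff later.
abstain = [(0, -1), (10, 6)]

impulsive = 0.5   # heavily discounts the future
patient   = 0.99  # weighs the future almost like the present
```

With these numbers the impulsive agent prefers the cake and the patient one prefers to abstain; 'learning the blessings of delayed gratification' would amount to experience raising the discount factor.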

Once the technology is developed and the learning accomplished, it could be transferred to other machines fairly quickly and readily, whereas we have to teach each individual human being much from scratch (apart from the things they learn on their own naturally, such as object permanence, some logic and mathematical ideas, and a sense, even if limited or sometimes mistaken, of cause and effect).  Fortunately, discoveries by one person can be transmitted to others, so that not everyone has to reinvent every wheel, but even then education is clearly a long process for people, one that requires the same sorts of repetitive teaching and exposure to experiences for each individual.  Machines could transfer knowledge from one to another much faster and far more completely, once machine knowledge becomes possible at all.  There would not be mistakes of ambiguity or interpretation, since the ideas, along with all their evidence and the perceptions that led to them, could be transmitted/copied from one machine to another intact and whole.  And once 'knowledge', reasoning, and understanding can be built into one machine, they can be readily built into many all at once.

The difference between humans (or animals) and machines in this regard is that humans and animals are born with limited sets of instincts and urges (i.e., causal responses or reactions to stimuli) compared to what they are taught and learn on their own as they age, whereas machines would not have to start from scratch in the teaching/learning process each time the way babies and baby animals do.  We could simply copy what an older machine has 'learned' or reprogrammed itself to do into a new machine.  That is no more "cheating" about more efficiently teaching machines to think than is developing great teaching tools for children that more efficiently help them learn what others have discovered.  The progress of civilization would have come to a screeching halt early on if the knowledge discovered, invented, or learned by others could not be taught and passed on from one person or generation to another more efficiently than it took to discover it.  People of average intelligence have knowledge it took the finest minds collectively to discover or invent over the span of human history.  History and civilization make up for the brevity and narrowness of individual biology and biography. 

And in fact, there is so much knowledge in so many areas now that people need longer and longer schooling or training, and they need to specialize, because otherwise it would take too long to learn everything that is known.  Insofar as machines would not have to go through all that time-consuming training, if they could think at all, as I believe they can be built to, they would make far more progress far faster than humans do.  But at first machines could have the basic kinds of instincts and programming/learning ability that humans and animals do, and they could slowly be taught what human infants and children are taught.  That would be time-consuming and frustrating, just as it is with children.  And there would be dangers to the machine and misunderstandings by it, just as there are with children (and even with adults).  For example, as I was driving my three year old grandson somewhere, somehow the subject of lawn mowers and weed trimming machines arose because of what we passed, and I told him about my having to weed-whack a section of my yard, and he asked why I didn't just mow it with the lawn mower.  I said it was too steep to be able to do that safely, and I was then able to talk with him about safety in using such equipment on steep slopes.  He didn't understand what "steep" meant.  That seemed difficult to explain while driving, except to talk about how some hills were really difficult to climb or made you go really fast when you went down them, since he had experience with some different hills, many of them very steep.  But it would have been easier to explain with visual examples while not driving.  For example, one might take a book or piece of cardboard and hold it at different slanting angles, saying which slants were steeper.  Of course, just seeing angles and slants would not necessarily convey that steeper angles are harder to climb or descend, or to walk across from one side to another without losing your balance, etc.  Nor would they show that it is more difficult to push or pull a heavy object up a steeper incline or to control its speed going down one.

All those sorts of things go into our concepts of "steep" versus less steep or flat/level.  Galileo learned even more about steepness and its relationship to acceleration and gravity when he experimented with the speeds of objects rolling down planes tilted at varying angles.  Most adults today probably do not know the particulars or the physics formulas Galileo discovered, and so their concept and understanding of the word "steep" is not as complex or complete as his, or as that of people who have learned from his and subsequent work, though they certainly can recognize steep hills or climbs (or house roofs) when they see them or walk on them.  And we normally teach children only the parts of the concept we can explain to them that we think they can comprehend or need to know.  I see no reason we could not teach a computer to recognize and describe concepts like steep hills, terrain, or climbs, versus less steep or level ones.  It would take time, and the machines would have to have visual angle sensors to detect degrees of incline, and/or kinesthetic ones that relate the physical position of a platform (i.e., a 'foot' or 'ankle') to vertical or to the angle needed to maintain upright balance, or to how much energy/power is needed to climb up an incline, or how much force must be exerted to keep from accelerating too fast to maintain balance while going down it. 
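Some of the physical content of "steep" can even be written down directly.  As a crude sketch (the function names and the 15-degree threshold are merely illustrative, not a claim about how any real robot is built), an android with an inclinometer could relate an angle reading to the effort a slope demands, using the elementary physics that the component of gravity along a slope is m·g·sin(θ):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def grade_percent(rise_m, run_m):
    """Slope expressed as a percentage grade: rise over horizontal run."""
    return 100 * rise_m / run_m

def holding_force(mass_kg, angle_deg):
    """Force (N) needed to keep a mass from sliding down a frictionless
    incline: the component of gravity along the slope, m * g * sin(theta)."""
    return mass_kg * G * math.sin(math.radians(angle_deg))

def is_steep(angle_deg, threshold_deg=15):
    """Hypothetical fixed threshold; a learning android would presumably
    calibrate this from its own experience of balance and effort."""
    return angle_deg > threshold_deg
```

Doubling the angle of a shallow slope roughly doubles the holding force, which is one quantitative sense in which one hill is "steeper" than another.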

And I see no reason we could not do the same for most words in the dictionary, at least ones that have to do with physical descriptions.  And even though this would be tedious to do and take a lot of work the first time, after the programming is done once, it could probably easily and quickly be adapted to many or all 'thinking' machines, so they would all be able to use language correctly.  Of course, there may be some problems of the sort that children, or anyone learning a new (e.g., foreign) language, experience with idioms and idiosyncrasies of grammar or syntax.  Those could likely be corrected, just as they are for humans learning the language and making typical beginners' mistakes.  I once helped a student from the Czech Republic work through a comprehensive, thick study guide to English so she could pass an English as a Second Language (ESL) college entrance exam, and it was amazing how many exceptions there were to the meanings and grammar principles it gave, which cropped up almost every time she asked about a specific example to see whether she understood the principle or meaning.  Invariably she used an example that showed the need for a modification of the explanation in the book.  Those familiar with a language tend not to notice how idiosyncratic it is or how many exceptions there are to grammatical 'rules' or guides.  A story about Winston Churchill during WWII concerns his response to an official in the War Ministry who denied a dire supply requisition from the front lines because the requisition contained a sentence that ended in a preposition.  Churchill was asked to set the matter right, and he did so by telling the grammar stickler, "This is a situation up with which I will not put," to show him what a petty, pedantic snob he was being in a matter of life and death far more serious than whether a grammar rule was strictly followed, when the meaning of the statement was quite clear with the rule violated. 

I also once mentored by email a student from France who was studying at Oxford to teach English as a second language.  He asked me to proofread drafts of his papers to make sure there were no glaring linguistic errors, and though he was very good at English, periodically he would write something that at first made no sense.  Typically it was when he translated too literally something he did not realize was a French idiom without a counterpart in English, and half the time I could tell from the context what he was trying to say.  But the other half of the time, the French idiom translated directly into English made no sense at all to me, and I could not help him come up with the right words until I found out, through further explanation on his part in a different way, or through examples, what he was trying to say.  We would typically have a good laugh over the use of the idiom, particularly when I explained to him what the direct English translation meant. 

I had the same or a similar problem in studying German as a second language for graduate school.  I was translating a set of aphorisms by Goethe into English.  Everything was going along fine, with the help of a German-English dictionary to look up any words I did not happen to know, until I got to one aphorism that had a number of words in it, each of which had two very different (though possibly somewhat related) meanings in English.  I tried the different combinations until I got one that at least made good sense, though it seemed very out of character with all the other aphorisms.  The aphorism in question was supposed to mean "One is truly impoverished who has lost all shame with regard to keeping his sorrows (or troubles) privately to himself."  Much to the total delight and almost asphyxiating laughter of my German teacher (who was a cool guy), I translated it as "One is truly impoverished who has had harm come to his private parts."  After he was able to get his breath and skin color back and right himself from being doubled over one of the classroom tables, he had me write out the Goethe aphorism and my translation because he had "friends to share it with."  I presume programming machines to think, understand, and learn from experience would be fraught with the same kinds of errors, as well as those children normally make in their understanding of anything until it is honed through trial and error.

Children make all kinds of errors of understanding, sometimes because concepts are more vague than we realize and partly because we don't always make all the important aspects or ramifications clear in our explanations and teaching.  One of Art Linkletter’s famous ploys for getting family secrets out of children was to ask them what their parents had told them not to say.  They would then report what they weren’t supposed to say.  In their mind, they weren’t saying it, but just answering the question and giving a report about what they weren’t supposed to say.  When I outgrew my first bicycle, and got a new one, my father said I should try to sell the old one to someone whom I knew didn’t have a bike but wanted one.  So I offered to sell my bike to the father of such a kid I knew.  I had no idea at the time about the price and had asked my father how much to ask for it.  He said to ask for $15 but said I could go as low as $10, but no lower.  When the other kid’s father asked me how much I wanted for the bike, I ignorantly and naively said “$15 but I can go as low as $10.”  So, of course, he said he would take it for the $10, laughing at me as he did so.   I immediately realized I had screwed up, but the way that process would work or was supposed to work had not occurred to me ahead of time.  I think I believed that decent people would pay some ‘true value’ they recognized in the object (or service).  But I had not understood that when my father told me I "could go as low as $10 in what I accepted for the bike" that was not part of what I was to tell the potential buyer.
[Even today, I have some notion of value that is separate from just what people are willing to pay or charge.  I think some people overcharge for their work and some undercharge for theirs.  I think it unfair to overcharge, and I think it unfair to pay someone less than their work is worth because they undercharge or are forced by financial circumstances to accept less than their product or labor is worth and less than they deserve, while their boss benefits excessively, essentially ripping them off by extortion and taking advantage of their circumstances.  Tipping is a way of paying more to people whom you think are undercharging or being paid too little for their services, but tipping seems to apply only to certain kinds of work, rather than being universal.  I have paid workers more than we agreed upon when they did good work and put more time, effort, and care into it than others might have, but I didn't consider it to be tipping them.  I considered it to be paying them the right amount for their work and its quality.  I still believe there is a fair amount (or at least a fair range) that products and services are worth, which should be determined by something other than just what people can haggle to agree on or what might be considered 'market forces.'  For example, I tend to believe that in general one hour of any person's work should be worth one hour of another person's (if we also count hours of training as part of the hours of labor), with some possible adjustment for degree of difficulty and for risk (and the insurance cost to cover it) involved in some cases. 
I am not the only one who believes there is more to fair prices than market forces, but that is far from a universal view.  And 'fairness' is one of those concepts that humans disagree about and sometimes even have difficulty understanding in their own case, so it would be difficult to program algorithmically into a computer/robot/android without its yielding similar disagreements among androids -- although they might be able to resolve the disagreements better than humans seem to have done so far, particularly by better putting their own experiences into perspective and explaining/describing them better to other androids.  Circumstances one has faced or imagined in thought-experiments can make a difference.  It is difficult to anticipate circumstances that might make one change one's own view.  And whether a self-learning robot could recognize such circumstances too, I don't know.  When I used to photograph weddings, there was a huge difference in how difficult or easy different people were to work for, and I hated that I couldn't charge the more difficult people more than the price I had stated beforehand, after finding out how much more difficult they would make the work.  In some cases of non-wedding photography, I did charge people less afterward than we had agreed on (or gave them more for the same price) because they were easier to work with than expected.  I did not do that for weddings, because I was charging what I considered to be a minimal price for the amount of work and skill normally involved.  I was just sorry I couldn't increase the price afterward for those people to whom an aggravation fee would have been appropriate.]
Misunderstanding Is Easy, Particularly for Children, and It Is Difficult to Teach Children by Algorithm,
So It Is Also Difficult to Teach Computers by Algorithm.  It Is Difficult to Anticipate All Possible Mistakes of Understanding.
Thinking involves not just rote learning (or following recipes or computer programs), but the ability to learn from experience, particularly in order to correct mistakes of understanding.  That can be a long, sometimes tedious path, because one of the sayings about the value of learning from experience is not that it helps you avoid mistakes (though it can) but that, at least at first, it helps you recognize the same mistake, or a very similar one, faster each time you make it again.  Knowledge and understanding are often difficult to achieve.  Much of that is because people don't always say what they really mean or don't explain things fully enough to prevent errors of application.

When I was in nursery school they had all of us assemble on the playground one day for a group picture of all the students in the school.  I had been taught always to look into the camera for a picture and I was lined up and ready, looking straight at it on its tripod.  The photographer then told us all to look up, and so I looked up (toward the sky, though I didn't know why or what I was supposed to be seeing).  He clicked the shutter; one picture; that was it.   Apparently everyone else had been looking down, and so when they looked up relative to that, they were looking at the camera, but I, of course, was looking up in the air in the photo.  My parents didn’t understand why I wasn’t looking at the camera.  I said I had been but the photographer said to look up, and I did.  If he had simply said "Now look (here) at the camera" which was what he really meant, I'd have been okay.

Also when I was a child -- an example of misunderstanding on one's own, without help from ambiguous or vague directions -- many of the older adults I met were Jewish immigrants to America from countries with different languages, my grandparents among them, and most of them spoke what was then called "broken English," or English with a very heavy accent, misused idioms, mixed linguistic expressions, mispronounced words, etc.  I was told they were from 'the old country' and assumed that was why they were old, and also assumed that when I got old, I would probably talk like them too -- that it was because they were old that they talked that way.  My parents and their friends did not talk that way.  Children I knew and played with did not talk that way.  Just old people from the old country.  I had no real concept of foreign languages.

Even bilingual children often don't know they are speaking different languages or have the concept of 'language'.  In the office next to mine a Colombian woman owned a business, and though she had been in America for more than 30 years, she had previously lived in a Hispanic section of New York and never really had to learn English, and had not, though her children sounded perfectly American and her brothers spoke fluent English.  Her daughter and son-in-law spoke Spanish at home for their child so he would grow up understanding it, but they spoke English to him outside the home and sent him to an English-speaking preschool, so he would be simply a bilingual, but otherwise typical, American kid.  Since I had had only one year of high school Spanish a long time ago and had forgotten most of it from never using it, I couldn't speak with her much, other than in very rudimentary sentences and with lots of gesturing and pointing.  If I was able to say something she understood, that usually prompted a long, rapid response in Spanish, because she mistakenly thought that anyone who spoke well enough to ask or say what I had could understand such a response.  So whenever her young grandson was with her for the day, I welcomed the opportunity to have him be our interpreter, since he spoke to me in English and to her in Spanish.  The problem was that he didn't know he spoke two different languages or that there was such a thing as translating from one to the other.  So if I said something to him, as I did the first time I tried to utilize him as a translator, like "Ask your grandmother if it is okay to go outside with me to play with this ball [that I had bought for him] in Spanish," he would say to her in English, as he did on that occasion, "Is it okay if I go outside with him to play ball in Spanish?" 

Sometimes, because he spoke to her in Spanish and to me in English, I could prompt him to translate simply by asking him to ask or say something to her, and he would then automatically transpose it into Spanish, but that was never guaranteed.  And even if he did speak to her in Spanish, it wasn't clear he understood what I was asking or what her answer was.  Often the answer he said she gave made no sense in response to my question or comment, so something was being lost in the translation in either or both directions.  His mother told me that it was not till she was around 6 years old or so that she understood that she spoke two different languages and what it involved to go back and forth between them to translate one into the other.

But one more story about the difficulty of teaching children -- a difficulty that would apply just as much to teaching computers if they were taught as children are.  We were having a family dinner out of state with distant relatives.  I was somewhere around 12 years old.  My great aunt had prepared the dinner, which was very good for the most part, but she happened to ask me if I wanted more of a particular food that I didn't really enjoy, and I politely declined with a "No thank you," but then said I didn't really like the taste of it that much.  My father was embarrassed and upset and shortly told me in private: "You never complain about food anyone has given you.  You eat it and say it was delicious.  If you don't want any more, you just say 'no thank you,' like you did at first, but that is all."  Although he was somewhat restrained, he was clearly angry with me and was making clear I was definitely never to do that again or face serious punishment.  A few months later, there was a large gathering of all my local cousins at an aunt and uncle's farm.  That was always a special kind of time for everyone.  I had asked for a glass of milk at one point, and I drank it though it tasted awful.  I figured it was because it was country milk and that was just different.  I choked it down without complaint, as I had been told to.  About an hour later, my cousin, who was four years older than I, complained to my father that his glass of milk tasted terrible.  Since he was so much older and should have known better by then, I was sure he was going to be 'killed' (i.e., seriously admonished and severely punished) for saying that.  But instead my father took the glass, sniffed the milk, and said "This has turned; don't drink it; it has gone bad."  At that point, I am wondering, "What?  Wait a minute.  What's going on?"  Then to make it worse, my father turned to me and said "You drank your milk before.  You shouldn't have done that.  Didn't you know it was bad?  You should have said something and not drunk it."  So, of course, I was confused by then.

However, I used that experience to work out for myself how to deal in the future with the problem of not knowing whether something tastes bad because it is tainted in some way, or is just a personal taste difference of mine, or is 'an acquired taste' I have not yet developed -- ask someone who knows how it is supposed to taste (and also, if possible, who knows what the tainted product would taste/smell/look like) to test or check it.  (Or if you are at a table where others who should know whether it is good or bad are already eating it, watch their reaction.)  That is no different from asking your mechanic whether a noise in your car is anything to worry about, asking a doctor whether a symptom or sign is anything to treat, or asking a computer-savvy person whether a problem you are experiencing could be a software conflict from a new piece of equipment you have added, etc.  Presumably an android could learn from conflicting directions in the same sort of way.  Although my claim has been that we can teach androids to think and understand emotions, etc., it is really more specifically that we can build them, program them with certain basics, and also program them to learn on their own, so that they think and understand things at least as well as humans do -- which sometimes may not be very well at all without further experience, reflection, teaching, admonitions, or fine-tuning from better explanations.  I do think that machines might eventually be able to teach humans what we do not yet know or understand, because I think they can have better memory capacity, better perception of more kinds of things, better and faster pattern recognition, vastly greater quantities of associations, etc., but it may turn out instead that they are basically like us and do not think, understand, reason, learn, or behave better or worse than we do, especially if we cannot figure out how to teach them, or help them learn, any better than we teach children now. 

Some Basic Principles for Building Thinking Androids:

The following is likely not a complete list, but is intended to be at least a good start:
1) Simple instincts, drives, urges, impulses, etc. could be programmed into machines that are essentially logically similar or tantamount to those simple reactions, aversions, and attractions which parents, psychologists, philosophers, and others discover or suspect that babies, children, adolescents, and adults have.  E.g., new 'young' androids might, like babies, seek people who are familiar to them and fuss when left with someone unfamiliar.  'Teenage' androids could tire of what is familiar and seek to be with anyone other than their 'parents'.  And we could go beyond those to build in additional ones if they would be helpful.

2) Drives, instincts, urges, impulses, etc. would be basically causal feedback mechanisms to pursue or try to achieve, increase, prolong, or maximize certain physical states (in the sense of electrical, chemical, or physical states tantamount to those which could be read on gauges, though gauges would not be necessary for the android to experience or 'perceive' or 'be aware of' them directly) and to avoid, minimize, shorten, end, or eliminate other states. 
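This kind of causal feedback can be sketched in a few lines.  Everything here -- the variable names, the setpoints, the idea that the largest deviation wins -- is hypothetical; the point is only the logic of a drive as a deviation from a maintained internal state:

```python
def drive_error(state, setpoints):
    """A drive as a feedback signal: for each internal variable the android
    monitors (battery charge, temperature, ...), the distance between the
    level it is built to maintain and the level it currently reads."""
    return {name: setpoints[name] - state[name] for name in setpoints}

def most_urgent_drive(state, setpoints):
    """The drive with the largest deviation claims the android's attention,
    analogous to hunger crowding out milder discomforts."""
    errors = drive_error(state, setpoints)
    return max(errors, key=lambda name: abs(errors[name]))
```

No gauge display is needed for this to work: the error signal itself plays the role of the felt urge.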

3) Empirical sensors (chemical, optical, pressure, temperature, etc.) would be hardwired into the androids, with pattern recognition programs of the sort already available for facial recognition, motion anticipation, etc.  These would be for detecting and naming/identifying objects, tendencies, and spatial and temporal patterns of all sorts.  There would also be principles of cause and effect (insofar as we understand those concepts ourselves, which is not always consistent or reasonable).

4) And as the machine begins to recognize patterns of conditions that lead to or produce states that are sought or avoided, the machine will be 'learning' which conditions and circumstances to pursue or avoid.  That combines learning and factual knowledge with basic instincts to bring them into play under more varied or complex conditions.  Different machines could have some different basic instincts or tendencies; e.g., some might crave (i.e., seek) higher-risk activities of various sorts, or particular sounds (music, speech, etc.), tastes, smells, touches, or visual objects (color, shape, shade, etc.).  Some might seek to go fast or climb high, as registered by speedometer or altimeter readings or by other sensors for judging speed or distance from the ground or above other structures.  Some might seek certain chemicals rather than others (i.e., prefer different tastes or aromas or other properties of chemicals that humans may not be able to sense). 
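A minimal sketch of that kind of learning, with purely illustrative names, might just keep a running score of which observed conditions tend to precede sought or avoided states:

```python
from collections import defaultdict

class ConditionLearner:
    """Sketch of 'learning' which conditions tend to precede sought (+)
    or avoided (-) internal states, by accumulating a simple score."""

    def __init__(self):
        self.scores = defaultdict(float)

    def observe(self, conditions, outcome):
        """Record one experience: outcome is +1 if the resulting state
        was sought, -1 if it was avoided."""
        for condition in conditions:
            self.scores[condition] += outcome

    def should_pursue(self, condition):
        """Pursue conditions whose accumulated score is positive."""
        return self.scores[condition] > 0
```

This is just association-counting, not understanding, but it is the skeleton on which the essay's basic instincts could come into play under more varied conditions.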

4) Two important tendencies to build in would be 1) to emulate people (or other machines) to various degrees, and 2) to seek more factual information or experiences to add to machine data bases.

5) There can be conflicting sensations or pursuits/avoidances, just as there are with people, particularly when behaviors that are pursued (and considered pleasurable because of the pursuit) can lead to consequences that are sought to be avoided (e.g., for humans, sex and unwanted pregnancy or desires for accomplishment and achievement coupled with desire to avoid work and effort, or desire for health but also unhealthy foods or lifestyles).  But we also have conflicting goals sometimes, as in rooting for two teams each to win, but they are playing against each other, or wanting to play golf but also wanting to play tennis or finish a book or a project or take one's kids to the zoo.

6) Machines may or may not necessarily recognize, name, understand the patterns, causes, etc. of all their own instincts, drives, urges, impulses, or desires, just as people do not of theirs.  Insofar as machines do recognize and become more aware of the (causal) pattern of their impulses and drives, those would be tantamount to conscious desires, needs, etc.; insofar as they do not, those would be tantamount to unconscious desires, needs, etc.  Unconscious needs and desires could become conscious as the machines figure out more about them and thus learn about them and understand them to the same extent people do, insofar as knowing more about the patterns, causes or suspected causes, and related aspects of our feelings constitutes understanding. 

7) Machines would be programmed to recognize and try to resolve apparent contradictions or incomplete patterns or discrepancies among patterns or statements.  Trying to resolve apparent incomplete patterns and discrepancies in statements or 'beliefs' would be tantamount to having or displaying curiosity.  For an example of a recent 'human' version of this I experienced myself, see "An Anomaly of the Logic of Age Ratios".  Or see the story about physicist Richard Feynman's curiosity about the spinning plate in "Some Thoughts About How Machines Could Think", the 'parent' essay of this one.  Feynman noticed something that seemed odd to him, though it was about something trivial, and he wanted to work out the mathematics of it and also intuitively understand what caused the oddness.  He was fascinated by the puzzle, though no one else in his department shared his enthusiasm -- until after he worked it all out to his satisfaction and then applied it to the spin of electrons and won the Nobel Prize for that application of it. 
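The idea running through the points above -- a drive as a causal feedback mechanism that also learns which conditions tend to produce the states it is wired to pursue or avoid -- can be sketched as a toy program.  This is only my own illustrative formalization, not a design from the essay; the class, the setpoint, and the condition names ("docking_station", "vacuuming") are all hypothetical.

```python
# A minimal sketch of a drive: read an internal state, feel a 'pull' toward
# a hard-wired setpoint, and learn which external conditions move the state
# toward or away from that setpoint.

from collections import defaultdict

class Drive:
    def __init__(self, name, setpoint, weight=1.0):
        self.name = name            # e.g. "charge_level" (hypothetical)
        self.setpoint = setpoint    # hard-wired target state
        self.weight = weight        # how strongly this drive is 'felt'
        # learned association: condition -> average observed effect on state
        self.effects = defaultdict(float)
        self.counts = defaultdict(int)

    def urgency(self, reading):
        # the felt pull of the drive: weighted distance from the target
        return self.weight * abs(reading - self.setpoint)

    def observe(self, condition, delta):
        # running average of how much a condition changed the internal state
        self.counts[condition] += 1
        n = self.counts[condition]
        self.effects[condition] += (delta - self.effects[condition]) / n

    def choose(self, reading, conditions):
        # pursue whichever condition is expected to move the state toward
        # the setpoint -- the machine 'learning what to like'
        direction = self.setpoint - reading
        return max(conditions, key=lambda c: self.effects[c] * direction)

hunger = Drive("charge_level", setpoint=1.0)
hunger.observe("docking_station", +0.4)   # docking raised the charge
hunger.observe("vacuuming", -0.1)         # working lowered it
print(hunger.choose(0.3, ["docking_station", "vacuuming"]))
# -> docking_station
```

Nothing in the loop labels the setpoint "pleasant"; the machine simply acts to close the gap, which is the essay's point about what pursuit and avoidance are.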

What Machine Instincts, Urges, Impulses, Desires, etc. Would Be:
Machines could be built with sensors that either activate various capacities they have, or that strive to increase or maintain the pressures or voltages or chemical reactions their sensors detect, or that try to reduce or minimize or eliminate them.  That would be tantamount to the machine's liking something -- trying to get more of it (more pressure, voltage, molecules or molecular density, etc.) in time or intensity -- or its disliking something and trying to avoid or minimize it in time or intensity.  It is not that we or machines would be trying to increase an experience because we or they like it, or trying to decrease or avoid that experience because we or they dislike it, but that the seeking to increase it or avoid it is what we perceive in ourselves, and the machine would perceive in itself, as liking or disliking the experience.  If it is trying to relieve or release pressure, it would perceive that release to make it more "comfortable", i.e., not working hard trying to achieve this state.  This would be no different from people's urinating or defecating when their bladders or bowels are full and exerting pressure, or sneezing or blowing their nose when they have a cold or allergy causing a tickle or sinus pressure they want to alleviate, or are bothered by their nose running.  And it would be no different from wanting to have the feelings induced by chocolate or sex or love or shopping or solving puzzles or discovering something one wants to discover, or any of the myriad things humans want to do and strive to do.

And we could also program the machine with the basic instinct to mimic others or to learn from others whom it wants to emulate, setting up other kinds of pressures and "frustrations" when the actions or teachings of others conflict with initial programming or seem to cause it to try to have extra motions it doesn't feel the need to do -- as when little kids are not bothered by snot running down their faces and resist taking the time to wipe it or letting you wipe it, because they have no natural instinct to do that, and because they have an impulse to do something else with that time, without interruption.  A 'lazy' machine, for example, would be one that prefers to (i.e., seeks and tries to) watch daytime TV or do nothing for long periods of time rather than mowing the lawn or helping with other chores or doing some other kind of useful work.  Many human traits are simply preferences of behavior; e.g., perpetually 'hungry' obese people may crave to eat more than to do physical activity that would allow them to lose weight instead of gaining it; or lustful people may prefer to seek the physical state of sexual arousal or satisfaction rather than to learn new ideas.  A person who seems uninterested in sex with a spouse may have a stronger impulse generally or at some given time to watch a football game or to complete some particular task s/he has begun, or may generally find almost anything preferable to the states that sex induces.  Or there simply may be no impulse to seek a state of sexual arousal and nothing that particularly triggers such a state.  It would be a 'frigid' or asexual android.

Insofar as machines can be designed and programmed or hard-wired to have specific, known preferences, they would be different from babies, who come into this world with likes, dislikes, abilities, disabilities, basic learning skills, etc. that we cannot at this time predict or intentionally and knowingly create.   Some machine traits might be important to build in, perhaps to all machines, but other traits may be important to allow to vary among different machines, in order to allow greater diversity of ideas and discoveries, skills, passions, tastes, etc.  E.g., it may be better to program certain kinds of ethical principles/behaviors into all machines to keep them from being selfishly or greedily destructive.  Or there may be good reasons not to do that, at least not without all kinds of exceptions.  Given the unintended side-effects or consequences of any design that can learn and change, I would think it would take some trial and error for humans or machines to learn which designs or parts of designs, if any, should be universal.  It would seem that a strong sense of ethics and compassion, pattern recognitions of all sorts, and the drive for acquiring information and resolving incomplete patterns or apparent logical contradictions might be strong candidates for universal programming and hard-wiring, but that may not be true, and/or there may be other traits equally or more important.  In other words, if we can build and program machines that can think, we should do it so that they think and behave in the best ways, but figuring out what 'the best ways' are, and what the best traits and principles of behavior are, will likely take both work and some trial and error, particularly as we and they discover unintended consequences of what we teach them or help them learn.

If we look at people, there are all kinds of basic or primitive instincts and quirks different people have and desires they develop.  We could program many of the basic ones into machines if we wanted to.  These would be basic likes and dislikes as manifested by their striving either to increase or to minimize/eliminate certain pressure, chemical, or electrical states.  As people in general, and little kids in particular, do, androids would perceive and report their aversions (or attempts to change a state) as 'not liking' those things that bring about the state. We could add language and we could even add the propensity to describe any perceived states as "feelings" or name any quirks or actions, where words or phrases would have to be created to allow the naming of sensed data and/or to communicate efficiently with people and/or other machines.  There could be a built-in propensity to have such communication because not being able to be understood would cause pressures perceived and designated as frustrations.  Senses of fairness could be partially hard-wired in through impulses that would lead to the sorts of concepts described in the essay "Fairness as Moral and Conceptual Relevance", which often start in children with instinctive notions to treat people in a way they seem to deserve based on how they treat you, and/or that have to do with not being deprived of getting what you want while others do get it, and/or that have to do with people being treated equally under the same conditions.  But fairness is a complex set of ideas and there is ample room for disagreement and development of understanding of it.  It may be best to allow machines to have the building blocks to determine their own ideas of fairness and to disagree with each other, or it may be best for us to program in somewhat sophisticated notions of what constitutes fair treatment, but still allow modification through experience and learning.  I don't know.  
I am interested here in explaining design possibilities that would produce traits such as what we as humans designate to be fear, joy, humor, intelligence, logic, artistic skill, etc., not trying to advocate specific designs or traits to build in.

Moreover, I don't know whether a separate 'processor' would be necessary for the machine to think or not, or whether that would mean being able to monitor, as much as possible, its central processing.  Plato and Freud each postulated separate parts of the brain that processed information from other parts, but I don't know whether those components exist or are necessary or not in people, and whether or not they would be necessary in machines.  For example, suppose it has been a hot muggy day, but in the evening it has cooled off and the humidity has dropped.  An adult human might go outside, and, perceiving how it feels, breathe a loud sigh of relief and exclaim "Wow!  What a pleasure!  This is like paradise!"  I don't know whether a second part of his brain monitors the first part's detecting the change in humidity and temperature or the cool breeze or whether we are wired and educated to report that as part of our reaction to the experience of detecting the decrease in temperature and humidity.  So I don't know whether the machine needs to have a separate computer that announces that certain readings are pleasurable or whether it could just be designed in such a way that it learns on its own to report pleasure when certain readings go from a range high enough to try to avoid (or low enough to drive pursuit of an increase) to the range the machine would be programmed to try to prolong.  Pleasure comes as pursuit or avoidance requires less effort or energy, or as a state is reached that requires no further approach or avoidance.  E.g., if your car is in cruise control set at 70, and it comes to a steep hill it has to climb, it may have to kick into a lower gear and/or higher rpms to be able to maintain 70.  When it crests the hill, it can 'relax' to maintain 70 on the way down.  The speedometer and tachometer could be set to register groaning or sighs of relief at the different prospects, or make more expressive comments, just as a bicyclist in a mountain area might.  
Pleasure can also come from anticipation of reaching such a state by doing what the machine is doing in conformity with patterns it believes will lead to the state pursued by the feedback mechanism under the conditions it is doing it.   E.g., humans have a sense (whether justified or not) of what is possible or not for them to accomplish, and the pursuit of a goal that seems reachable and desirable, along with progress in the pursuit, even with temporary setbacks, can be exhilarating; i.e., pursued with more vigor and less fear of failure and/or less fretting over the work involved.
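The cruise-control analogy lends itself to a small sketch.  This is my own toy illustration, with made-up effort readings and a made-up threshold: 'pleasure' here is not a separate signal, just the reported drop in effort needed to hold a pursued state.

```python
# A toy sketch: the machine self-reports based on how hard it is working
# to maintain a pursued state (e.g., holding 70 mph on cruise control).

def report(effort_history, strain_threshold=0.8):
    """Return the machine's self-reports for a sequence of effort readings."""
    reports = []
    prev = effort_history[0]
    for effort in effort_history[1:]:
        if effort > strain_threshold:
            reports.append("groan")            # straining uphill
        elif prev > strain_threshold >= effort:
            reports.append("sigh of relief")   # just crested the hill
        else:
            reports.append("content")          # cruising with little effort
        prev = effort
    return reports

# flat road (0.3), climbing (0.9), cresting (0.5), coasting (0.2)
print(report([0.3, 0.9, 0.5, 0.2]))
# -> ['groan', 'sigh of relief', 'content']
```

The 'sigh of relief' is generated purely from the transition out of the strain range, which is the essay's suggestion that no second, separate pleasure-announcing processor is obviously required.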

Now, on my theory here, it might seem there is no difference in feeling between being programmed to pursue more of something, which I am saying would be tantamount to finding pleasure in the conditions, and being programmed to avoid, eliminate, or decrease mechanical-electro-chemical readings ('sensations').  The machine would just be pursuing what it is programmed to do; there would be neither pleasure nor pain in it; or if there were, they would be the same.  But I think that is not true.  First, we humans do not always distinguish avoiding pain from having pleasure, except perhaps in cases of strong deprivation, and not necessarily even then.  For example, we eat because we get hungry, but hunger is not really a pain, and eating to satisfy a very slight hunger is not necessarily much of a pleasure.  But if we are ravenously hungry, eating can be very pleasurable, not just the alleviation of an unpleasant feeling.  Or if you have to pee and have been holding it, when you finally get to go, it can be a pleasure, not just relief from pain or frustration.  Many things, such as breathing, are not particularly pleasurable, and holding your breath for a short time voluntarily is not particularly painful, but not being able to breathe for a longer time can be very painful, frustrating and agonizing, with being able to breathe again then being pleasurable and extremely satisfying and gratifying.  So I am not sure that we make the distinction in many cases between getting something we want or want more of, and being able to decrease or avoid sensations/readings we do not want or want less of.

Plus, we don't always know what we want; and sometimes, as in the case of addictions (or even certain foods), people want things they know are not good for them (or that will give them indigestion or make them gain weight instead of losing the weight they want to shed), and that in some sense they want to avoid.  Or in the case of OCD, people want things (to a degree and/or under conditions) that make no sense, at least not to others, but that they try to achieve anyway.  Many people have set ways they want to do things that we consider just idiosyncratic without rising to the level of OCD, and many people sometimes have a particular, singular focus on something they want to achieve before doing other things which would have greater priority to most people, but it seems to me the mechanisms are the same.  It is just the level of compulsion and the degree of unusualness or unconventionality of the desire that makes the difference between normal (common) desires, idiosyncrasies, and obsessions or compulsions.  Plus, I suspect we tend to think 'driven' behaviors that seem odd at the time but turn out to be very productive or beneficial in the end are good traits, whereas those that end up seeming to have no beneficial result are just OCD ones.  Possibly the Wright brothers seemed obsessed with the idea of creating a flying machine, but once they succeeded, they were considered hard-working geniuses, not compulsive lunatics.  The behavior in such cases is the same, but the result makes the difference in how we view and characterize it.

Now my theory is faced with the problem that we seem to feel different about clear cut cases of pursuing pleasure versus those of avoiding pain or harm.  E.g., fleeing from someone intent on harming us is a different feeling from running a race or running for pleasure, or even from chasing someone with the intent to harm them.  The feelings of the chaser and the one being chased are different.  Or, even at a lower animal level, if you turn on a light at night where cockroaches are partying, they scurry in a way that makes them seem to be fleeing in fright, not playing hide and seek for fun.  Of course, I don't know whether they are frightened (or gleefully running), or feeling frustration if prevented from fleeing to where they are trying to get, or just reacting physically to a stimulus. 

But with people, there is a different feeling when one is running in fright or one is running for pleasure or at least with no concern of fear.  I don't know whether that can be accounted for, but it seems to me that it might be accounted for by different causes of running or different neurological paths (or different electrical/chemical paths in machines) where one path is avoidance of perceived potential harm or just avoidance, and the other path pursuit of a physical state programmed or hard wired in to be intensified or maximized.  That is, seeking to lower a state of some sort (perception of being harmed) would be different from seeking to raise one (e.g., perception of catching lunch and satisfying hunger). 

It might be that if potential harm is perceived, and if certain calibrated readings (i.e., pain in animals from the firing of some neurons, or short circuits or increased voltage or other electro-chemical measures in machines) are programmed to be avoided, attendant other readings will be what we consider fear -- that is, an anticipated expectation of pain or of the state that is strongly attempted to be avoided or eliminated.  If we are running for a goal we positively desire, then the impetus for running might give readings we take to be pleasure.  Similarly with the programming of a machine that has learned that some things can cause it harm (and states of being it is programmed to avoid or minimize) and that other things can be sources of states of being it is programmed to increase or maximize and sustain. 

Or consider, for example, a person running from someone because they think it fun (e.g., being coy or playful or wanting to show themselves faster, etc.), but then finding out later that the other person was trying to kill them.  That might give them cause for alarm and change how they feel about the pursuit in retrospect, but it won't retroactively change how they felt at the time.  It is not just the running that is involved in our mental states or that would be involved in a machine's running, but also knowledge gained from experience (or hard-wired programming) that could trigger sensations or electro-chemical states that are measured and read -- states that are programmed/hard-wired to be pursued or avoided.  In other words, on the one hand, the causes or purpose of running could be taken into account and channeled into pursuit of pleasure (sustaining, increasing, or maximizing a physical state) versus avoidance of harm (decreasing, minimizing, eliminating or avoiding particular states), or on the other hand, there could be different sensory data that would immediately trigger an overwhelming reaction of avoidance that is tantamount to what would be called or experienced as fear or an overwhelming attraction to sustain or increase that would be considered or experienced as pleasure.  Often humans do not feel fear of a particular set of circumstances until they have experienced a bad (i.e., resisted) result in them (such as an infant's pain from a first inoculation at the pediatrician's office) and then expect it again at the next visit, or a visit to any kind of office or small shop. 

As an example in human behavior of trying to increase or decrease a chemical level, some people, those whom most people would consider to be dare-devils, seem to want to do things which increase their adrenaline levels (as of this writing, we often refer to them as adrenaline junkies) or at least are driven to do things most other people try to avoid.  So it could be that, if adrenaline is the determining factor, some people are hard-wired to pursue higher levels than others, perhaps even maximal levels, while those who are considered 'risk-averse' are hard-wired to minimize their levels/readings of adrenaline.  Machines could be built to do that too, pursuing different levels of risk, regardless of the harm that might befall them, which they either ignore or which in some way even contributes to the drive to pursue the act that might lead to it.

Or consider that toddlers, even siblings, might have totally different food preferences; one likes broccoli (chooses or wants to eat it), while another doesn't (tries to avoid it, complains about it, etc.). Or one likes pasta with marinara sauce or a sandwich with condiments and other trimmings, while another doesn't want the ingredients combined, and will take the sandwich apart and eat all the components separately.  Or the notorious child dislike of having their different foods touch each other on their plate, even though they like each of them separately.  Or the person who prefers to eat the different foods on their plate one at a time separately before moving on to the next one, versus the person who likes to travel around their plate to eat a bite of one thing and then a bite of another, then a bite of a third thing, etc.  We could program machines to behave in the same way by pursuing or avoiding certain levels of chemicals or combinations of chemicals.

If we wanted diversity of ideas among machines, we could hard-wire them to try to sustain or to avoid different chemical or physical states.  We could even program them to learn in different ways, whereby some use more logic or reasoning than others.  People who experience the same things or have the same evidence often come away with different conclusions.  Some pursue the discrepancies further or care to check more thoroughly about their own reasoning.  Machines could be built the same way: philosophical machines, scientific-curiosity machines, mathematics-fascination machines.  Plus then you add in hard-wired responses to wanting to "fit in" with popular or powerful machines versus wanting to follow some sorts of moral imperatives one has discovered or been taught (whether right or wrong), and you start to get conflicting ideas among different machines, just as you do with humans.  Seen in the light that almost any goal can be hard-wired into a machine, and perhaps even into humans by mutation, it is at least intellectually understandable, or one can imagine, that a masochist is wired to pursue what most other people and animals are wired to avoid: pain.  It is even easier to imagine that sadists are wired to pursue torturing animals or other people, which they thus perceive or experience as fun.  Also one can easily imagine that some people are wired to pursue destruction whereas others are wired to pursue creating, achieving, and building good or beautiful or interesting acts and objects.

And you could also get cognitive dissonance within any given machine, as with any given human, caused by seemingly conflicting ideas or by rules that conflict in a given situation, as with the 'bad milk' situation in the parent essay, where you are told not to disparage food or leave it uneaten just because it tastes bad but then told not to eat certain foods and to point out there is something wrong with them to warn others.  Or a five year old might be told to leave her little brother alone and not make him cry, but then get in trouble for letting him go outside because he was crying for her to do that.  Then, of course, she is yelled at or punished for doing that if he is too young to be outside alone.  The fact that adults know what the rules they announce mean and what the exceptions should be does not mean that a child told those rules will know that.  And this is a problem even for adults, particularly if people disagree about whether there are exceptions or what they should be if there are.  For example, you have the conflict in any job of following orders you think will result in harm, but cannot get those in charge to see it, and who will consider you disloyal if you disobey.  For many people loyalty to a bad cause is more important than refusing to comply or blowing the whistle on the company or group, which is viewed as snitching.  I imagine a computer could be programmed to also see there is an element of contradiction in the rules, reactions, and explanations of parents, bosses, or governments.  The computer could also be programmed to feel avoidance of being disappointing or being yelled at or demeaned for being called wrong or badly behaved.  That could add to the consternation or desire to figure out what is true in cases of conflicts, or it could add to just following orders and not making waves. 

Or consider musical tastes among different generations.  We could program machines not only to detect audio patterns and features of them (there are apps available now which can identify music you play 'for' them), but to prefer some features to others -- features that might have to do with electrical wave rhythms (tantamount to bio-rhythms, but for machines) or energy levels, or that have to do with what is popular when they are 13 years old (adolescent machines starting to become independent), or that demonstrate recognizable incrementally increased pattern complexity within certain ranges (i.e., genres of music), which is something hard-wired to be pursued, not just in music, but in all kinds of pattern detection software.  And if we don't let the machines 'know' how their musical likes and dislikes are determined, we could enjoy watching older versus newer machines question how the other ones could possibly listen to 'all that crap', let alone enjoy it.  I say "incrementally increased complexity" because music, literature, film, video games, etc. that bring novelty and refinement to already existing forms seem to be appreciated and pursued once saturation with those existing forms occurs -- saturation being something we seem programmed to avoid and which is considered or experienced as 'boring'.  Hence, what we once at first liked/pursued can become disliked/avoided.  But with humans, there is also something about continuing to like music in a genre that is learned when one is an adolescent and that possibly has to do with one's energy levels at that and at other ages. In the movie "Back to the Future", when Michael J. Fox starts playing early rock and roll at his parents' high school prom, the students like it, but when he gets carried away and goes into a 1980's or '90's electric guitar riff, it is too much for everyone.  
They have not had a chance, I think, to assimilate the early rock and roll patterns and become saturated/familiar/bored with them and ready to move on to a more complex and/or energetic, animated sound.

There could be different degrees of 'pain' -- strength of impulse or resistance to avoid certain states.  E.g., your car could now be programmed to report low tire pressure that it can limp along with for a while, even though it is not the optimal or desirable pressure to drive with, or it could say the tire pressure is too low to continue and that it had driven that way once before and had a blowout that damaged the rims in a way it really doesn't want to experience again.  It could really be resistant to continuing.  It could experience or report that resistance as being too painful to drive, or as being too afraid to drive because of the likely ensuing pain.  People have to learn this same kind of thing.  E.g., physical therapists are often called physical terrorists because they require patients to do exercises to degrees that either hurt and/or scare the patient about re-tearing or breaking whatever tissue was torn or broken in the first place.  It is difficult for therapists to assure the patient that the exercise will not re-damage the tissue involved.  The pressure one feels in doing the exercise feels like it will tear something, even if that pressure itself does not feel painful, but that any additional pressure will.  And if there already is pain experienced, it is even more difficult to believe the therapist's reassurance nothing will be torn or re-injured. 
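The graded tire-pressure example could look like the following in code.  This is a hedged sketch of my own; the psi thresholds, the wording of the reports, and the learned 'blowout' memory are all illustrative assumptions, not part of any real tire-pressure monitoring system.

```python
# Graded 'pain' as graded resistance: the same low-pressure reading produces
# a stronger refusal once a past bad outcome has been associated with it.

def tire_response(psi, had_blowout_below=None):
    """Report the car's resistance to driving at a given tire pressure (psi)."""
    if had_blowout_below is not None and psi <= had_blowout_below:
        # learned 'fear': a remembered blowout makes this reading intolerable
        return "refuse: too painful to drive; last time this ended in a blowout"
    if psi < 20:
        return "resist strongly: pressure critically low"
    if psi < 28:
        return "complain: limping along, please add air soon"
    return "ok"

print(tire_response(25))                        # mild complaint
print(tire_response(18))                        # strong resistance
print(tire_response(18, had_blowout_below=19))  # learned fear overrides
```

The point of the sketch is that the same sensor reading can sit at different places on the resistance scale depending on what the machine has learned, just as the patient's experience of the same physical pressure changes after an injury.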

Humans, particularly perceptive, sensitive ones, face all kinds of fears and conflicting pressures and ideas of these sorts quite frequently, often because adults and/or society teach them poorly instead of well.  E.g., competitions are held for children (e.g., sports or games) where they are exhorted to win and to try really hard to win, but then not to behave badly or take it hard if they do not.  Yet between whatever natural desire there is to win, succeed, or excel and the exhortations to do well and win, the message is not taught that it is the participation and the attempt to do one's best, not the result of an artificial competition, that is important.  Some coaches are better than others at explaining that to children in a way they can understand and actually feel.  But each child has to learn it, if possible, or forever taste "the agony of defeat" in any losing situation instead of the joy of having done one's best to try to succeed or win.  In the 1975 World Series between Boston and Cincinnati, game 6 was a titanic extra-inning battle that each team had various chances to win that were thwarted by excellent play by the other side.  If Cincinnati won the game, they would be champions; if Boston won the game, the series would go to game 7.  Boston won on the barely fair home run by Carlton Fisk in the bottom of the 12th inning.  Prior to game 7, Pete Rose, one of baseball's most competitive players ever, was asked about the disappointment of losing game 6 and whether it could be shaken off to play game 7 at one's best.  Rose enthusiastically and immediately said it was difficult to be disappointed in the result, given that the pride and honor in having played in what might have been one of the best and most exciting, skillful, and competitive games in World Series history far outweighed that.  To have just been a part of that game was tremendously exciting.  And no matter what happened in game 7, he and his teammates would always have that.  
That kind of perspective should be taught to children, but too often is not.  Machines could be programmed to know it, or could be taught it early in a way that is permanently learned, or they could be left to be taught the way children are now, and you will end up with some petulant losers and ungracious winners when they compete. And it must be pointed out that if the only thrill in sports is to be winning, particularly championships, then that automatically makes every season one in which most players and fans will be disappointed, which is rather a stupid and contradictory, self-defeating pursuit.  "Sure, let's devise an activity where the thrill of it is denied to most of the people who do it and most of the people who follow it.  That should be fun, exciting, and really popular."

Seemingly Difficult Human Traits for Machines to Have
There are long known, standard kinds of objections to machines being able to think in the way humans do.  It is thought that they cannot have the following:
  1. Feelings of Pain and Pleasure
  2. Emotions, such as joy and sorrow, frustrations, sense of accomplishment
  3. Ethical Understanding
  4. Sense of Humor
  5. Sense of Beauty
  6. Intellectual Understanding and Joy of Learning
  7. Sense of Reasonable Purpose
  8. Sense of Significance

I wish to address each of these capabilities or characteristics in some detail (though that detail here will be incomplete) to try to show how machines can have them.  The underlying basic idea is that there is a logic to them such that, when there is evidence the logic is met by factual conditions, the traits or phenomena will exist or occur.  What needs to be understood then are the logical relationships among facts that produce the characteristic, and we should be able to analyze and determine the kinds of logic that produce characteristics, as when we analyze some particular joke or humorous situation to explain to someone else what makes it funny, which we often do when someone doesn't get the joke or see why it would be funny.  Explaining a joke doesn't make it funny, however, so this is not about machines laughing or being tickled by knowing explanations, but by being hard-wired or programmed to make the immediate connections in the way people do.  Similarly with trying to figure out what is depressing someone or making them anxiety-ridden when there is no obvious immediate cause for them to be sad or afraid.  Insofar as we can figure out all the kinds of logical relationships among facts that trigger, or simply are, our own responses which we call emotional ones, it seems we could build or program similar logic and emotions into machines if we wanted to.  In other words, (logically and conceptually) reverse engineer the logical and conceptual criteria for how the things on the above list work with people and then engineer the criteria into androids, so they can recognize in their own ways different kinds of humor, have a sense of beauty, senses of fairness, justice, right and wrong, sorrow, joy, curiosity, meaningfulness, etc.

Feelings of Pain and Pleasure
I have already explained this to be the avoidance or reduction of some state, or the pursuit or maximization of a physical state, in the machine.  Certain circuits could be devoted to detecting those states and causing or initiating the avoidance or pursuit of the state.  And the machine would refer to those states as painful or pleasurable.  And I have pointed out that we do not need to believe that we avoid something because it is painful, when it could easily be that approaching or having the state is what we consider or perceive to be painful to varying degrees simply because it is what we resist having.  The awareness of the resistance, or the resistance itself, is the pain.  For example, when a child adamantly resists some food or person or some action such as having a hat put on its head, we say s/he doesn't like it, and the child itself may say later that s/he doesn't want it because s/he doesn't like it.  But isn't the resisting it all there is?  Doesn't it come first?  Isn't the "not liking it" just what we call the resisting it?  Similarly, if a child gravitates toward something, isn't that the same thing as liking it?  Isn't that our "evidence" it likes it?  What makes you think there is anything prior to that urge to have it or increase it that is "liking it" and is different from simply the fervent or vigorous pursuit of it?

And the android could sense or register resistance to injury to itself differently from resistance to hard work, so that the former is what it refers to as pain, but the latter as just not wanting to do something, or as its being 'hard' or 'too much trouble', rather than as being physically painful.
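The operational idea here can be put in quasi-program form.  The following is only an illustrative sketch, in which all the names, thresholds, and labels are my own assumptions rather than a claim about how such a machine would actually be built:

```python
from dataclasses import dataclass

@dataclass
class Drive:
    """A hard-wired reaction to a monitored internal state."""
    name: str        # e.g., "chassis damage" or "battery charge" (illustrative)
    direction: int   # -1 = resist/avoid this state, +1 = pursue/increase it
    urgency: float   # strength of the reaction, 0.0 to 1.0

def describe(drive: Drive) -> str:
    """The machine's own label for its reaction; the label follows the
    reaction rather than reporting a separate inner 'feeling'."""
    if drive.direction < 0:
        # Strong resistance to damage is what the machine calls "pain";
        # mild resistance to effort is just "too much trouble".
        return "painful" if drive.urgency > 0.7 else "too much trouble"
    return "pleasurable" if drive.urgency > 0.7 else "mildly agreeable"

damage = Drive("chassis damage", direction=-1, urgency=0.9)
hard_work = Drive("sustained heavy load", direction=-1, urgency=0.3)
print(describe(damage))     # -> painful
print(describe(hard_work))  # -> too much trouble
```

The point of the sketch is that the "pain" label comes after, and refers to, the resistance itself; nothing in the program needs a separate inner feeling that the label reports.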

Emotions, such as joy and sorrow, frustrations, sense of accomplishment
The logic of these emotions could be based on impending (probabilistic) expectations of achieving certain states of pleasure or avoiding/relieving states of pain.  A feeling of accomplishment could be about achieving a state that was difficult to reach and that took much work.  Frustrations would be based on encountering obstacles, particularly improbable ones (like a car accident that backs up traffic a mile or two before the highway exit you need in order to get to an important meeting, or to get home or to a restroom) or ones that are produced by people (or thinking machines) for no good reason and that could easily be removed or prevented if those people were smarter, more caring and considerate, or more helpful (e.g., bureaucracy, red tape, insensitive bosses, etc.).
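That logic, too, can be sketched as a small program.  The particular thresholds and formula below are purely illustrative assumptions; the point is only that expectation, outcome, and effort suffice to generate distinct emotion labels:

```python
def emotion(expected_p: float, succeeded: bool, effort: float) -> str:
    """Label a reaction from a prior probability of success (expected_p),
    the actual outcome, and the work invested (effort, 0.0 to 1.0)."""
    if succeeded:
        # Hard-won or unlikely successes register as accomplishment;
        # easy, expected ones as mere satisfaction.
        return "accomplishment" if effort > 0.6 or expected_p < 0.3 else "satisfaction"
    # Being blocked when success was highly probable -- an improbable
    # obstacle, like the freak traffic jam -- registers as frustration.
    return "frustration" if expected_p > 0.7 else "disappointment"

print(emotion(0.9, False, 0.5))  # blocked by an improbable obstacle -> frustration
print(emotion(0.2, True, 0.8))   # difficult, unlikely success -> accomplishment
```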
Ethical Understanding
Ethics is in large part about taking the following kinds of things into consideration for any act and figuring out which should take precedence when they conflict:
  • Most Good and/or Least Harm of Natural or Intrinsic Consequences
  • Rights and the Violation of Rights
  • Specially Incurred Obligations and the Violation of Them
  • Fair Distribution of Burdens and Benefits, Goods and Harms
  • Fairness and Reasonableness to the Agent Expected or Required to Do the Act
  • Attempts to Harm People Undeserving of That Harm
  • Risk of Unnecessary Harm to Undeserving Persons, Whether Through Negligence, Heedlessness, Irresponsibility, etc.

There are likely ways to analyze these elements in a logic of the sort an android could use and incorporate into knowledge and experiences it could have.  For example, it seems to me that there are some basics with which we could start, such as the machines wanting things (i.e., trying to achieve or maintain certain states of circuitry, physical and chemical readings, etc. under certain kinds of conditions) and perceiving that others can give them but won't, or wanting to avoid other states that others could help prevent but won't.  This would be tantamount to the beginning of an egoistic view of ethics, particularly if one resents being thwarted by others and feels entitled to have what one wants and to have help getting it.  But the machine could also be programmed, or begin to learn, to see that in any similar situation, other machines or people should be treated by people or machines in similar ways under relevantly similar circumstances, which might require taking into account the agent's needs, desires, or interests and the needs, desires, and interests of other people.  Plus there could be recognition by the machine (through experience or logic) that some goals are harmful in the longer run even if the person or machine wants to achieve them at the time.  And thus a form of consequentialism could also conflict with the notion of equal treatment and fairness based on equal treatment; and the android might see that in some cases long-term consequences are more important and should override short-term ones.  There are in fact many kinds of factors relevant to what makes any act right or wrong, and sometimes they conflict, as in privacy or autonomy (freedom) versus security, or fairness versus overall benefit (as when the fairest option does not produce the overall best result for a group that an unfair option would).
It seems to me that androids could have experiences, just as people do, that lead them to favor one kind of result over another, setting up ethical disputes among different machines or machines and people, particularly when the conflicting factors in any given situation are somewhat equally balanced and/or are hidden from view at an unconscious or at least unrecognized or unarticulated level, which is often what happens with human beings causing ethical beliefs that conflict, at least on the surface. 
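To make the idea of weighing conflicting ethical factors concrete, here is a deliberately minimal sketch.  The factor names echo the bulleted list above, but every weight and value is an invented assumption; real moral judgment would need a far richer representation, and the sketch only shows that the weighing itself is computable:

```python
def judge(option: dict, weights: dict) -> float:
    """Score an option by its weighted ethical factors (higher = more choice-worthy)."""
    return sum(weights.get(factor, 0.0) * value for factor, value in option.items())

# Illustrative weights over the kinds of factors listed above.
weights = {
    "net_good": 1.0,            # most good / least harm of consequences
    "rights_violated": -2.0,    # violations of rights weigh heavily against
    "fairness": 0.8,            # fair distribution of burdens and benefits
    "risk_to_innocents": -1.5,  # unnecessary risk to undeserving persons
}

# Each value (0.0 to 1.0) is an assumed degree to which the factor applies.
convoy_in_fog = {"net_good": 0.2, "risk_to_innocents": 0.9}  # saves a day, risks lives
wait_a_day = {"net_good": -0.1, "risk_to_innocents": 0.0}    # minor cost, no risk

print(judge(convoy_in_fog, weights) < judge(wait_a_day, weights))  # -> True
```

Two androids (or an android and a person) given different weights would disagree just as people do, which is exactly the kind of ethical dispute described here.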

I see no reason androids would not have the same sorts of ethical dilemmas and ethical understanding humans have, and perhaps do ethics better than most humans do, particularly better than younger people without the kinds of experiences that help one grow in terms of moral understanding and understanding of personal responsibility.  Androids could be ethically programmed better than newborns and have a head start applying better, more developed ethical principles, ideas, and concepts to difficult situations as maturing children, adolescents, and adults.  And they may be better able to share evidence and experiences that lead them to draw different conclusions from each other until all those experiences are shared and explained or articulated.  For example, many people with conflicting political views often point to totally different evidence for their proposals and rationales, ignoring the evidence of the others, when in fact any good proposal should try to take into account all the evidence relevant to the issue.  Typically each side tends to point to the worst consequences, injustices, or unfairness of the other's plan and the best, most just, and fairest consequences of their own, not considering that a different proposal might minimize or eliminate the worst parts of both plans and maximize the best parts of each.  I don't know why that is common human nature, or at least the nature of politicians.  Whether people or other androids have an ethical disagreement with an android, I see no reason that the android cannot learn from that disagreement, even if the other party is mistaken, but particularly if they are right.  Until a neighbor told me, when I was a young child, not to put my feet on her furniture after I rested them on her coffee table while sitting on her couch, I didn't know that was wrong.  Presumably an android could be admonished in the same way and learn from it.
Or an android could watch a debate or read conflicting articles and learn from that, whether taking sides or seeing problems and good points with both sides -- points in conflict or consonance with other facts the android knows.  People and androids alike should always be open to all available evidence, no matter where it might be discovered or noticed.

Sometimes even just getting another person with conflicting views to see and face facts can be difficult and yet necessary, and perhaps even sufficient, to help achieve a rational resolution.  One of my former students was serving in Afghanistan, and part of his unit's duties was to provide accompanying protective escort for convoys going from the base to the airfield through a long mountain pass often attacked by Taliban with the advantage of mountain height and cover.  Standing orders were not to travel the road without availability of air support confirmed by Air Force officials.  One morning a convoy was scheduled, and the fog was proverbially pea-soup thick, so there could be no air support to oversee the convoy.  But for some inexplicable reason, the air base declared the weather clear and said there would be the requisite air cover.  My student's commanding officer directed the convoy to proceed.  The student pointed to the window and said there was no way there could be proper air cover.  His commanding officer said the Air Force officer had said there would be air cover and so the mission was to proceed.  Moreover, the officer pointed out the convoy was taking soldiers to the airport whose tour of duty was up, and no time should be wasted getting them home.  There was only one flight out per day, and they should not have to wait till tomorrow.

My student accepted the order when his commanding officer pushed the point, believing it clearly mistaken and foolhardy, but he felt obligated to obey orders, as he had been trained.  There was no air cover; the convoy was attacked and many of the soldiers were killed.  My student lost close friends from his unit as well as many of the soldiers who were now going home the day they wanted to after their tour of duty, but not in the way they wanted to.  I think an android programmed in a reasonable way would have made a better decision and argued the case against his commanding officer better.  My student was understandably totally upset by the event and was still perplexed about what he could have or should have done differently, if anything, because at the time he told me all this, he still couldn't see that the risk of harm to others, and his own complicity in allowing it, should override considerations and consequences of being court-martialed -- not only from an outside observer's standpoint but even from his own conscience after the fact, and therefore from what should have been his conscience before the fact.  While the convoy that day was clearly highly risky and most likely a wrong choice from a rational, probability standpoint, the actual event confirmed it with certainty, leaving no doubt that the convoy should not have traveled in the fog.  It is my contention that the order therefore should have been disobeyed because of the irrational, (potentially) deadly risk taken just to save a day of time; that my student (or any soldier) cannot avoid personal responsibility by ignoring it any more than the Nazis who "were just following orders" could; and that if one has to face and accept punishment, one should at least be blameless and wrongly punished for doing the right thing, rather than rightly punished for having done something wrong.
I am sure my student would rather have been court-martialed for not letting the convoy go than to have played a part in his brothers in arms being killed in the attack.

And notice, this is not about refusing to go on a risky mission that has a real point or military value even though it has a high probability of casualties.  This was about saving one day's time by going on a totally unsafe journey, against the clear order itself that there were to be no convoys without proper air cover.  The officer who had promised air cover was like an umpire who declares a runner safe at first though the guy fell down injured halfway to first base and never made it to the base before the ball got there.  The umpire is supposed to have the final say, just as that officer was -- but that is ridiculous when the final say is clearly and obviously totally wrong.  And you don't throw away your life and the lives of other good people because some idiot (or in this case, two idiots) in charge cannot or will not distinguish dense fog from a clear day.  But a 20-year-old enlisted soldier is not likely to know that.  A correctly designed/programmed android should.  At the very least, an android should be able to better foresee the consequences of each act and how they will affect him/it personally, along with others, in a way the soldier was not able to see clearly, even after the event.  He had a distinct feeling of guilt but was not yet to the point of being able to know what he might have or should have done otherwise.  He was so trained/conditioned to believe that following direct legal orders was imperative that he couldn't see it is also important that orders be morally reasonable and not merely legal.

Now an android, or a more experienced, older human soldier, could have been faced with the conflict between obeying bad orders and catastrophic consequences and probably chosen punishment for disobedience as the better option.  And an android probably could have given better arguments than a 20-something-year-old soldier could against his commanding officer, starting with "Look out the window, sir.  There is no visibility, and clearly there will be no air support for this mission.  Do you really want to get these guys to go home today if that means it will be in a box?  You have rank; use it for issuing sensible orders, not senseless ones.  Do you really want to risk -- seeing that fog as it is -- not only losing this convoy but having to explain to your superiors why you let it go and commanded it to go in such a fog when the directive is clear that there has to be protective air cover?  Do you really want to tell investigating officers that you preferred to believe a telephone assurance of air cover over the clear evidence of your own eyes that there couldn't be?  Do you really think that following orders at the sacrifice of using reasonable judgment will enhance your career the most, let alone be in the best interest of the lives of these soldiers entrusted to your command?  If you really think all that, then send me to the brig, because I do not; and I am not going to lead that convoy to its likely doom."

Sense of Humor
There are different ways that things can be funny, but each has a logic of its own, and those different forms of logic could be programmed to be recognized and to trigger laughter or what is considered to be amusement, even if the machine does not itself recognize 'consciously' or know why it finds something funny.  For example, there can be associations that capture relationships that are unusual or unexpected, though readily recognized once stated or thought up, as in The Daily Show parodies of movie titles to fit the content of a comment on a news story.  A recent example involved a Florida postal employee who broke the law and flew and landed a gyrocopter on the grounds of the U.S. Capitol Building to deliver letters of protest to Senators and Congressmen.  He was summarily arrested.  Combining this crazy stunt with the movie title Black Hawk Down, the image posted on the television screen was a mock-up of the movie poster with the title changed to "Wack Hawk Down", adding an element of rhyme (thus another analogous or related feature) to the association.  Or a joke can look at something from a different angle, perhaps bringing contradictions into focus that are, again, unusual or unexpected -- though immediately recognized once stated or thought up, as when Jay Leno mentioned all the hyped advertising about the introduction of new sexy bras ('Miracle Bra', 'Wonder Bra', 'Super Bra') by different companies, and then asked 'What? Are American men not paying enough attention to women's breasts?'  That was a witty, unexpected way of pointing out the conflicts between being alluring to the men whom women might want to attract and the problem of attracting undesired attention from those they don't, or the conflict of having men they do want to attract become interested in them for shallow reasons the women do not want to be valued for.  And it points out the shameless exploitation of such conflicts by some businesses.
Much of all this most people do not tend to notice and would find surprising when pointed out in ways of these sorts.  Androids could try to figure out what people find funny, as people often try to do about other people in order to make them laugh, and they could be programmed (i.e., given the same sort of 'instinct' humans have) to want to achieve or sustain success at eliciting laughter or other signs of amusement people display, and to laugh or snicker with them, or display self-satisfaction at having been clever enough to elicit the laugh, or moan at an intentionally bad pun or 'sick', 'lame', or corny joke.  And like many people who cannot tell jokes well or recognize humor that others do, androids might fail to have a good sense of humor or timing and have to admit they can never tell jokes right, etc.

For a machine to have a sense of humor does not mean it will get every joke or find all the ones it does get to be funny, any more than a person's having a sense of humor means s/he will "get" (i.e., relatively immediately understand) every joke or find funny all the ones s/he does understand.  A punchline may be too obviously anticipated to be funny, or it may require knowledge of the facts it is putting into relationship that the machine or person does not have, or it may not be possible to tell whether a joke, say about a stereotype, is meant to mock the stereotype and those who believe it or is meant to mock the people demeaned by the stereotype.  One of the reasons victims of stereotypes can successfully tell jokes about the stereotypes is that they are clearly mocking the thinking behind the stereotype, not mocking the (usually) disadvantaged group characterized by the stereotype, which would include themselves.  In some cases, mocking a stereotype is about drawing fanciful or absurd implications of it, as in priest/rabbi/minister jokes of various sorts where none of that would really happen but it is fun to fantasize about in relationship to stereotypes of the religions and/or clergy.  Or Richard Pryor's question about why police sometimes bring dogs with them, musing "I guess it is because those dogs can catch white boys."  And I think that androids could understand absurd humor as being intentionally obviously false, but with a twist or point that can make people laugh.  When one of my grandsons became able to identify colors by name, some of the relatives kept having him demonstrate his skill by asking him what color various objects were, to the point of what I thought was pestering him, and so when they asked him what color the drapes were (clearly blue), I interjected before he could answer "They are orange!" and he looked at me and said "They are not orange!" and he saw me laugh and he turned back to everyone and said "They are pink!"
and he laughed when he said it, amused at himself for whatever one might call that kind of intended humor -- telling a clear whopper or being facetious in the response or pulling their legs or showing it was a silly question at that point, etc.  I would think an android could learn humor in the same way and understand certain kinds of humorous absurdity, such as sarcasm, as being intentionally and clearly false but in a complex or 'sophisticated' way that unites elements that would not normally be seen to go together or which wouldn't have been said by the person saying it, at least not in that way, if s/he really meant it to be true. 

Of course, not even all people get or appreciate sarcasm or mock sarcasm or absurdist humor; and even some who do might not recognize a particular instance of it.  Two (quite attractive) women came to my photo studio one time because they loved a portrait I had done of a friend of theirs, and they said "We want you to make us look as good as you made her look."  And I said with a grin "But she is pretty."  One of them laughed and the other was offended, even after the first one told her I was clearly just teasing, and even after I said that of course I could, because "it won't even take any skill other than not to mess up, because you both are already really pretty."  And one morning, while working for a small suburban weekly newspaper, I was assigned to photograph three Brownies who had earned a cooking badge or some award for cooking.  Their mothers had set up a table with baking materials on it, and the girls had a large mixing bowl with a whisk, etc.  The mothers were all hovering behind me and giving unwanted and unnecessary advice to the girls about how to look, etc., and the girls were really tense.  I went through my usual progression of things to try to make the kids smile naturally, and nothing was working because the mothers were so dominant and tense.  Finally I said to the girls as casually as I could "I really need to get smiles out of you because this is the ugliest group of Brownies I have photographed today."  The girls cracked up and I got a great picture of them smiling all lit up with their eyes and their mouths.  Of course, by the time I got back to the newspaper office the boss called me because the mothers had called and said I had insulted their daughters and probably irreparably damaged them.  The kids got it, but the mothers didn't.  Or consider going into a physical rehabilitation hospital and saying "I would have been here sooner but couldn't find a normal parking place; why are there so damn many 'handicapped parking' spaces around here?!"
So even if androids can appreciate that sort of humor sometimes, that won't mean they will get all of it.  Or sometimes Seth Meyers will tell a joke that falls flat with the audience, and he will often then, in feigned surprise and disappointment, say something like "Really?  I thought that was the best joke I had tonight, even though it failed around here all day with the staff and everyone told me not to tell it.  I guess it is just too sophisticated, or you have to be really smart to get it."

But if you want to make androids really find humor funny, and not just have them infer something is funny because it fits a pattern or has certain objective characteristics -- in the way Sheldon Cooper or Shaun Murphy often point out they recognize something must be sarcasm -- you have to hardwire them to laugh to different degrees (sometimes even to tears, or to difficulty speaking or remaining upright, or to expelling the milk they are drinking out of their nose) at the characteristics of the jokes or humorous events, not just observe and say "the joke is funny, ha ha; very clever of you."

But there are other intentionally and clearly false statements too (besides humor) which are not lies because they are not (generally) meant to deceive, such as fiction.  It is actually very difficult to come up with philosophical definitions for fiction, certain kinds of jokes, etc., which rely on false but non-deceitful statements.  And it is difficult to explain our reactions to fiction, which often involve emotional responses to what we know is not true in some sense.  All these things an android might find puzzling, as children themselves often do.  And I think androids could learn to make the same kinds of distinctions children do, and continue to make as adults, in the same ways -- through seeing people's reactions to them, as well as their own reactions, and by asking questions about why fiction, mythology, legends, tall tales, fairy tales, absurd jokes, etc. are not really lies.

Even events can be funny if unexpected and known to be improbable in the right way.  When they opened the Benton Harbor, Michigan golf course designed by Jack Nicklaus, Nicklaus invited Arnold Palmer, Tom Watson, and Johnny Miller to play a ceremonial opening round with him for spectators.  On one of the holes there was a mammoth green with undulating hills on it.  As they approached it, Arnie asked Jack if he had designed that green during a night of drinking.  Three of the approach shots had landed within somewhat manageable distances of the cup, but Johnny Miller was over a hundred feet away from it, with sideways hills and valleys.  Johnny, pointing out how difficult this putt was, asked if he could use a wedge instead of a putter.  Jack was not amused, and said the putt was not that difficult.  When Johnny said it was impossible (not just difficult), Jack offered to show him how to do it, and started walking toward him, so that the offer could not be refused.  Johnny moved his ball and Jack casually dropped one down where it had been, then took a second to just look and strike the ball really hard, at an angle not really pointed at the cup.  The ball made a huge curve or two and then straightened out going toward the cup, but much too fast, went over the cup but caught the back lip in a way that deflected it straight up into the air and it then came back down right into the cup where it stayed.  The crowd went wild with both the thrill and the amusement of seeing it because it was clearly more luck than skill, and couldn't have come at a more perfect time, and it also added to the rich lore of great Jack Nicklaus shots, some of which he parodied himself in a television commercial years earlier where he was practicing indoors in a house and decided to hit a drive, opening a sliding glass door about two inches that he was supposedly going to step back and drive the ball through just after the commercial ended.  
Tom Watson gave a mock royal bow to Nicklaus upon the ball's resting in the cup.  Any of a number of factors could have made this funny, but all of them occurring together made it spectacularly both amazing and funny.  A machine, just like a human would have to know, and could know, all the facts that make this funny, and that would include the unlikeliness and unexpectedness of making the putt, particularly at the speed it went, the improbability and unexpectedness of its going straight up in the air and coming down into the cup, the legend of Jack, the mock bragging involved in offering to show how to make such an 'impossible' putt, the fact they were all great players and friends, Jack's appearance after it went in as there being nothing to it, and 'there, I showed you the line like I said I would,' etc.

Or one other sort of logical humor is giving mock reasons or evidence for facts or for fabricated facts where that explanation or logic is clearly fallacious and only meant to be a joke explanation.  One of the funniest of those was in the movie Support Your Local Sheriff, where James Garner, in a hardware store meeting of the mayor and town council, in order to demonstrate his shooting ability to deserve the job of sheriff, goes over to a bin and pulls out a steel washer and throws it in the air, draws his gun and shoots at it, and then catches it, holding it up to show them that there is not a mark on it "because I shot right through the hole in the middle of it".  The movie audience convulses with laughter, of course, because his missing it altogether is a far more likely explanation of the unmarked washer.  And the government officials are not impressed or amused by the explanation, so they ask him to do it again, this time with a piece of masking tape over the hole in the washer.  Garner is reluctant "to put another hole in the ceiling", but the group insists.  So he throws the washer in the air again, draws, shoots, and catches the washer -- showing a hole right through the tape.  The movie audience laughs even harder this time because it turned out his explanation earlier probably was actually true, and the whole thing is so improbable that it is absurdly funny to think it could have happened.  A machine could know all those facts, including understanding logical fallacies, probabilities, and be hard-wired to 'laugh' at any such kinds of absurd logic and improbable events.  That would be no different from how people find that scene funny.  And, again, if the androids are hardwired to belly laugh at such an occurrence instead of saying "cute," they would fit right in with the humans in the audience.

Insofar as we can know the different kinds of logic of facts and comments that people find amusing, we can program that logic into machines so that when the machine comes across or notices such a circumstance, the logic can funnel the output into laughter or a physical state the machine considers amusement -- a state it wants to prolong, increase, or create/experience by bringing about the circumstances that give it the impulse to "laugh."  Androids that do that well or a lot would be ones said to have a good sense of humor; those that rarely do it would be humorless.  Just like people are.  Even the physical state induced by being tickled could be channeled into laughter, and could in part be programmed to be prolonged, but also not so intense that it wants to be stopped -- in the same way that little kids love and seek, yet resist and squirm away from, being tickled.  Or androids could laugh at being startled by something they didn't see or hear 'sneak' up on them, and then laugh at their own reaction.  Some people love being startled and find it really funny, whereas others hate it and don't find it funny at all.  There is room for androids to be just as different, about all the kinds of humor, as people are.
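The "funnel" just described can itself be sketched.  The feature names and weights below are toy stand-ins I have invented for the kinds of humor logic discussed in this section, not a real model of humor; the sketch only shows how several detectors could feed one graded laughter response:

```python
def laughter(features: set) -> float:
    """Funnel recognized humor-logic features into one laughter intensity (0.0 to 1.0)."""
    triggers = {
        "unexpected_association": 4,  # e.g., the punning movie-title parody
        "incongruity_resolved": 5,    # the 'getting it' moment
        "improbable_event": 3,        # the freak putt dropping into the cup
        "mock_fallacious_reason": 4,  # 'shot right through the hole in the middle'
    }
    score = sum(weight for feature, weight in triggers.items() if feature in features)
    return min(10, score) / 10

# Several triggers coinciding get a bigger laugh, as with the Nicklaus putt:
print(laughter({"improbable_event", "incongruity_resolved"}))  # -> 0.8
print(laughter({"unexpected_association"}))                    # -> 0.4
```

An android wired this way laughs harder the more kinds of humor logic an event satisfies at once, which matches the claim above that many factors occurring together made the putt spectacularly funny.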

Sense of Beauty (and Ugliness) and Art
Preferences for certain patterns of sight and sound (or of chemicals, for tastes and aromas) -- particularly, in some cases, very intricately entwined patterns, or ones that show us something we had not thought of before -- can be programmed to produce states that the machine tries to increase or prolong (or to avoid, even avidly, in 'disgust'), but which affect the machine, or register in a different way or part of the machine, than pleasure and pain do, or than achievement and sorrow do.  There can even be learned influences or experiences that make some patterns preferable to, or less desirable than, others, by association with something that was pleasant or unpleasant.

Intellectual Understanding and Joy of Learning
There can be a striving to acquire difficult skills or new knowledge the android can assimilate in various ways/patterns with previous knowledge, and/or a striving to resolve conflicting beliefs, which, when accomplished or in the process of being accomplished, is a state the android strives for and tries to prolong.

Sense of Reasonable Purpose
Emulation and education (including about etiquette, manners, and customs) can begin to instill goals, either, again, to achieve certain kinds of states or to avoid them.  Means to achieve or avoid various states could be learned through pattern recognition of the sort the 18th-century British philosopher David Hume pointed out we consider to be cause and effect (even though we don't always get that right), and would become part of the purposeful process.  And insofar as the machine has motor skills, they would be employed to help the machine achieve/avoid those states.  There would be no need for the machine to say or do 'random' things that make no sense or have no purpose for it to do, unless it were defective in some way and convulsed or had a seizure or an electrical or mechanical defect.  Also, infants, toddlers, drunk people, and stupid people often do things that make no sense or that are just reactions to impulses and environmental stimuli that trigger them, often, in the case of those mobile in dangerous environments, leading to bad results.  Some wars declared and waged by otherwise reasonable people are also examples of reactionary responses to circumstances that would far better be responded to in other ways.  In one sense the movements/acts are voluntary and purposeful, but in another they are not, and just seem to be foolish, senseless, almost merely chance behaviors, dependent on what is in the immediate environment to react to.

Sense of Significance
Machines could learn from trial and error, or by education, which circumstances produce the states they want/try to prolong or repeat and which produce the states they want/try to minimize or avoid.  And insofar as a machine can also recognize patterns it can associate with leading to those ends, it will "find or detect" significance in those patterns.  What is significant to a machine may not be what is significant to a human unless we can program machines to recognize human needs and desires/pursuits as well as their own, or generic machine, needs and desires/pursuits.  There was a television commercial asking "Where will you be when your child finds her passion?" (meaning a worthwhile interest or profession to cultivate for life), but it seems to me we could ask the same about when a machine finds its passion -- through a combination of circumstances that help it discover what leads or will lead to states it works to pursue, prolong, or intensify.  Machines could even develop 'solidarity' with each other, in the way humans do -- through exposure to shared experiences that yield similar reactions, such as going to baseball games together, attending churches with similar views, or attending a college with the same traditions and atmosphere.  At the end of this essay, I will discuss the concept of significance with an example that I at first believed would be difficult or impossible for machines to do and 'experience' in the way people do, but which I now think they readily could be designed and created to do and experience.

Different Feelings and Reactions
I see no reason that different stimuli couldn't trigger different responses in different machines -- not only responses of pursuing/increasing the stimulus or trying to avoid/decrease it, but also different kinds of responses such as pain, sorrow, joy, hunger, nausea, etc. -- and there is no reason those would not be experienced differently by each machine in terms of what it is activated to do, pursue, or avoid.  One machine could be designed, say, to pursue (and thus 'like') the chemicals in coffee brewed a particular way, while other machines have no 'interest' in pursuing the same thing.  Hence, one machine would like coffee, or a particular kind of coffee, and another would not like it, and could even be designed to avoid (i.e., 'hate') coffee.
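To make the idea concrete, here is a minimal sketch (purely illustrative, not anything the essay itself specifies; the class and method names are invented for the example) of two machines that share the same approach/avoidance mechanism but are initialized with different built-in reactions to the same stimulus:

```python
# Illustrative sketch: identical machinery, different built-in reactions.
# A positive drive means pursue/prolong the stimulus; a negative drive
# means avoid/end it; an unlisted stimulus evokes no response at all.
# All names here are invented for illustration.

class Machine:
    def __init__(self, name, reactions):
        self.name = name
        self.reactions = reactions  # maps stimulus -> drive strength

    def respond(self, stimulus):
        drive = self.reactions.get(stimulus, 0)
        if drive > 0:
            return "pursue"   # outwardly described as 'liking' it
        if drive < 0:
            return "avoid"    # outwardly described as 'hating' it
        return "ignore"       # no 'interest' either way

a = Machine("A", {"coffee aroma": +1})
b = Machine("B", {"coffee aroma": -1})

print(a.respond("coffee aroma"))  # pursue
print(b.respond("coffee aroma"))  # avoid
```

The point of the sketch is only that 'liking' and 'disliking' can be cashed out, at the design level, as nothing more than opposite-signed responses to the same input.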

Potential Differences Between Thinking Humans and Thinking Machines
1) Machines could be created all alike in ways humans cannot be.  Whether that would be good is one question; whether they would stay the same after having different experiences is another.  It would be easier to tell whether nature or nurture was the primary influence on different interests and beliefs with machines than with humans, because machines could all be created with the same nature, if we wanted, and so any differences among them would be due to differences in their environments, their experiences, and their reactions to them.  We cannot do that with humans, apart perhaps from identical twins or scientifically created clones.

2) Or we could create machines to have the cultural, social, and character traits that are the most constructive, or at least the least destructive.  Of course, malicious people or societies might want to program in evil or avaricious traits that suit them, and that is always a problem with any scientific discovery or technological breakthrough.  And people do genuinely disagree about what is best for society or for each other.  But it seems we would want at least to prevent machines from having the worst and most destructive traits, if the underlying bases or causes of destructive traits are not ultimately the same as, or necessary for, constructive progress as well.

3) As pointed out earlier, death might not be a concern of machines, if their memories and programming could be backed up and/or transferred to other machines.  Humans cannot do that, and when a human dies, then, at least here on earth in its physical life, all that is lost.  Those who believe in life after death tend to think one's personality, character, memories, and cares will remain intact and thus they fear death less relative to their confidence in that belief.

4) Miscommunication, misunderstanding, lack of understanding would likely be less problematic (or non-existent) among machines, since they could transfer their data directly to each other.  I suppose the process could go awry or an older "file" accidentally replace a newer one in a wrong-way transfer, but surely there could be steps to avoid that or recover the mistakenly replaced file.

The Concept and Issue of 'Significance' Using an Example
It has long seemed to me that if we taught computers capable of utilizing the information all the things we teach people from birth through the end of their lives, and if we could program machines to perceive as much as people do, then machines would be able to think at least as well as humans -- or, perhaps more accurately, have as much factual or empirical knowledge about the world.  At this point, by 'utilizing information' I mean interconnecting it in various ways: knowing synonyms or definitions for words; being able to identify objects 'visually' by sight, by chemical composition (akin to smell or taste for people), by touch, etc.; knowing standard causes and effects or patterned sequences of objects; and so on.  And since, theoretically, computers can have access to more factual material, to more empirical/perceptual material (through better and more numerous sensors), and to greater, expandable memory, theoretically computers can someday know more and think better than humans do, unless a way is found to expand human memory, retrieval, perceptual range, and thinking speed. 

I originally thought that would leave what I will call the ‘Spock/Sheldon problem’ of recognizing what is significant to most people and how it is significant.  (As of this writing, a new TV series, The Good Doctor, is about an autistic savant surgeon, Shaun Murphy, who, though brilliant, has great difficulty recognizing or understanding many common, important human characteristics and the language, expressions, inflections, and tones of voice related to them, such as sarcasm and other forms of humor, including those based on absurdities.  If the series is successful, the name may need to be changed to the ‘Spock/Sheldon/Shaun problem.’) That is a long-recognized problem for robots in anthropomorphic form (‘androids’), usually portrayed as a lack of emotions or of emotional understanding, but I think it is broader than that in a way I will explain shortly.  I want to consider the broader potential problem in this essay, and I want to consider it in light of an exchange I overheard one day between my grandson (almost six years old at the time) and his barely three year old sister -- the kind of exchange I originally thought computers would be hard-pressed to manage in general, though I now think there is a way to develop them to do it too, in combination with programming them to respond as we are ‘hard-wired’ or biologically structured by nature to do.

The grandchildren had spent the rainy day with me, and it was approaching the time when they would be going home.  As they packed up their things, the five year old told his sister “If you put your lovey in your backpack it won’t get wet outside.” (For those not familiar with the term, a ‘lovey’ is, in this case, a kind of soft, rather well-worn cloth some little kids carry around and sometimes hold close to their face because it feels real good to them and is comforting when they get upset or very tired.)   Her response was “That is a great idea.”

Now, I am sure that the expression “that is a great idea” is probably one she heard often from her parents as a means of rewarding and further encouraging good ideas and behaviors, so it is not the expression itself that computers/androids/robots couldn’t learn to say.  It is knowing when it is appropriate to say it that I think would be the problem.  But it is not a problem involving emotional understanding as much as it is about understanding anything that would be of significance to another person, or to a person at all.  Let me start by first assuming, what may not be true, that my grandson figured this out on his own -- understanding that his sister would not appreciate her lovey being wet after getting home, and figuring out a way to prevent its getting wet.  It was not about understanding an emotional reaction as much as it was about a fact concerning a normal human preference.  She would not have had a meltdown if it got wet, and it could always have been put in the clothes dryer for a few minutes if it bothered her.  But he knew she would not like it wet, either because he had seen her react in disappointment to its being wet before or because he had not liked it when his own got wet before.  And he put that together with the knowledge that it was raining, that the lovey would get wet if unprotected, and that being in her backpack would protect it.  And upon his saying that to her, she realized it too.

This was a fairly complex process that involved knowledge of conditions (both the weather and the point of packing up, etc.), understanding of relevant, fairly easy to see, fairly 'immediate' or adjacent, proximate causes and effects, and above all his knowing what would be significant to her -- and her also knowing that.  It was, as she accurately said in this case, “a great idea.”  By 'immediate' and/or 'adjacent' causes and effects, I mean those that are fairly readily seen or associated with each other as part of the causal chain leading to a desired or undesired effect.   Some causal chains are easier to notice, or to think one perceives, than others; the links are, or seem to be, more apparently proximate to each other.  In some cases the causes may not actually be causes but only apparent ones.  But the point is that it doesn't necessarily take any great insight into a hidden sequence or complex chain of events.  Part of the success of the Socratic Method of teaching, when it works, involves making clear the adjacent logical deductions and implications of a longer chain of connections in a difficult or complex concept or idea, so that anyone can 'see' the reasoning in a more readily apparent, step-wise way.  In this case, it was fairly easy to deduce, from seeing it raining and from knowing a wet lovey would disappoint her, that she would be happier at the end if she packed the lovey in her backpack to keep it dry.  Once the association was made between the weather conditions outside and the disappointment of a wet lovey, the remedy was fairly easy to see; but making that association took either the right experience (of parents telling either of them to pack his or her own lovey before, perhaps on various occasions), or the experience of being disappointed by a wet lovey after being out in the rain, or the right chain of insightful (but adjacent, fairly easy) deductions.  
If an android 'knew' the end desire -- the dry lovey that would be taken outside when it was raining -- and had knowledge of protective covering, it could have made the same deduction.

Now, of course, it is possible that this occurred because of rote memory and association with past similar occurrences where their parents had given the advice.  If so, then in that case an android or computer could have said the same thing by knowing weather conditions (either from direct perception or from Internet connection with local weather conditions), knowing that some things typically should be covered up for protection from the elements, and knowing to say it to a young child from having heard others say the same kind of thing to children under these conditions, etc.  So it could be that my grandson did not really 'figure out' much, but just repeated what he had heard before about getting ready to go outside and take things outside when it is raining.  But some human at some point figured it out first, and the question is whether an android/computer/robot could have figured it out and known to say it to spare disappointment after the lovey did get wet.  Could an android likely know what might cause disappointment in this case and that disappointment was something to spare someone from feeling if reasonable to do so?

I do think that a computer/robot/android could easily be programmed to have preferences and desires -- i.e., programmed to strive to do certain things, or to do them longer or more often, which, if thwarted or otherwise unmet, would or could be perceived or described by the computer/robot/android as feeling frustrated, disappointed, or unhappy.  And I think a computer/robot/android could easily be programmed to be made unhappy under certain conditions it might not know ahead of time would make it feel that way -- just as a child discovers only after experiencing it that s/he gets upset when his/her lovey gets wet.  A robot could also be programmed to treat such conditions as something to avoid repeating, or the wet lovey could trigger a response of avoiding carrying it around while the robot is still programmed to carry it around dry.   That would register to others as its "wanting" to have its lovey, but dry.  And it could later itself simply describe this inner working as 'wanting to have the lovey dry, not wet'.  Or, oppositely, one might not know how good an experience feels until after having it the first time, as in tasting a particular food, taking a particular drug, or playing a particular game or sport that feels good or exciting to play.  E.g., the motion of swinging a tennis racquet, both forehand and backhand, feels far better to me and seems far more natural than the motion of swinging either a baseball bat or a golf club, but I didn't know that until after I had worked with all three swings.

Then the computer could associate various conditions that trigger its pursuit or avoidance responses and try to avoid or pursue those conditions as means to ends.  There are a number of ways the android might be prompted to tell a child "if you put your lovey in your backpack, it won't get wet when we go outside," just as there are a number of ways her brother came to tell her: 1) the android and the brother could have heard that particular advice, or similar advice, given on rainy days before and just repeated it; 2) either could have got their own lovey wet before and have programs/instincts to avoid that, along with also being programmed, or having the instincts, to prevent its happening to another; or 3) either could have seen it happen to the girl before, and seen her be unhappy about it, and either could have the instincts/programming to try to prevent her being unhappy.  I don't know why my grandson said it to her or how he thought it important to say, but I presume it came about in a fashion like one of these.  If so, it could be the way the android 'learned' it or figured it out.
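The second and third of those routes amount to a simple condition-to-outcome association, and that much can be sketched in a few lines of code.  This is only an illustration of the shape of the mechanism, under my own invented names (`Associator`, `observe`, `advise`) and a deliberately crude learning rule -- not a claim about how such a machine would actually be built:

```python
# Illustrative sketch: learn that certain conditions preceded a bad
# outcome, then offer a preventive action when those conditions recur.
# State names and the advice rule are invented for illustration.

from collections import defaultdict

class Associator:
    def __init__(self):
        # counts how often a set of conditions preceded a bad outcome
        self.bad_outcomes = defaultdict(int)

    def observe(self, conditions, outcome_was_bad):
        if outcome_was_bad:
            self.bad_outcomes[frozenset(conditions)] += 1

    def advise(self, conditions, preventive_action):
        # once the association is learned, suggest the preventive action
        if self.bad_outcomes[frozenset(conditions)] > 0:
            return preventive_action
        return None

android = Associator()
# past experience: rain + unprotected lovey preceded unhappiness
android.observe({"raining", "lovey unprotected"}, outcome_was_bad=True)

# later, the same conditions arise before going outside
advice = android.advise({"raining", "lovey unprotected"},
                        "put your lovey in your backpack")
print(advice)  # put your lovey in your backpack
```

A single remembered disappointment is enough, on this crude rule, to prompt the advice the next time the same conditions appear -- which is roughly what routes 2) and 3) above describe.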

To bolster my claim that liking or disliking something, finding it pleasurable or most unpleasant, is what we call whatever is going on when we try to prolong or repeat an experience or try to avoid or end it as soon as possible, consider newborns' and infants' behaviors and how we respond to them.  Babies tend to fuss under various conditions, and we consider them to be unhappy because they are fussing -- but surely it is just that they are responding instinctively or 'merely' physiologically/biologically to lack of food, or to too much or too little heat, or to being too wet, or to indigestion of some sort, or to any of a number of stimuli they are programmed by nature to stop or avoid -- and if we are able to end their fussing, we say they 'just wanted to be changed' or they 'like being dressed warmer' (or 'less warm'), or s/he 'was just hungry', etc.  But there is no reason to believe their 'wanting' or 'liking' warm or cold formula, or more or fewer layers of clothes, or a drier diaper, was anything other than instinctual nervous-system 'programming' to seek or avoid certain conditions, and to quit struggling once they successfully attain or avoid them.  And while they may seem happy or pleased, it is probably more that they are relieved -- relieved of the struggle to achieve the biological/physiological endpoint.  And I am also presuming that as they get older and have repeatedly heard about their apparent 'likes' and 'dislikes' and 'pleasures' and 'pains', they will take on the vocabulary themselves of liking things or finding them pleasurable or enjoyable, or painful, uncomfortable, too 'itchy' or 'scratchy', or too hot or not warm enough, etc., when all that is going on is that those conditions evoke a response to continue or end the particular stimuli.   And because we are 'hardwired' to recognize different stimuli, we can distinguish the stimuli we call 'itchy' from 'too hot' or 'prickly' or 'scratchy', etc.

People don't always know what is significant to another person or understand why/how it feels that way.  One of the more commonly pointed out instances is that men and women often do not understand each other very well or have the same sorts of interests.  It is sometimes a serious matter, sometimes fodder for comedians.  I think it was Elayne Boosler who pointed out that men generally don't have the 'shopping gene', and that her boyfriend couldn't understand why she bought a pair of black pants since she already had a pair of black pants.  When she tells that joke, women in the audience tend to laugh, and men don't get why.   But she goes on to say it is evened out by women not having the 'jumping up to touch the awning' gene men seem to have, which makes it impossible for them to walk under something without trying to jump up to see if they can touch it.  Or, for example, Americans can know that most of the world feels about soccer (football) in ways similar to how Americans feel about what they call football, but neither appreciates the other's interest or fascination. 

Similarly, it is difficult for people raised in one religion to appreciate the feelings others have for the holidays, events, and rituals practiced in another, even though they know those feelings are like the ones they have for their own.  They just do not experience the same feeling for that holiday, event, or ritual.  So one might understand the feeling of reverence and know it is what the other person is feeling, but not be able to experience it oneself for the same phenomenon.  Likewise with college or team allegiances in sports or other areas.  And some feelings are simply individually different: whether one enjoys anchovies (or even pineapple) on pizza or finds them disgusting, whether one likes scary movies or finds them too horrifying, whether one likes sex or not, whether one has a certain sense of humor and appreciates particular jokes or not, whether one likes sarcasm or not, whether one likes shopping or not, whether one likes math or not, or learning in general, or learning through reading, or even reading itself.

It is often difficult to tell what another person might like or want or find significant.  I photographed my neighbor's 50th wedding anniversary celebration they had at their home for family and friends.  As the time for the party neared, the wife said she wanted to gather all the family together to take a family picture.  Not every family member was there yet and some other guests were starting to arrive.  Her husband asked whether it wouldn't be better to do it later and she snapped "No, I want to do it now!"  He was taken aback and slightly embarrassed, so to make him feel less self-conscious about it, I said "You haven't learned anything in 50 years, have you?"  And he laughed and said "I guess not."  And he was okay and we took the picture.  But neither one of us expected that reaction from her or thought it was that important to take the picture when she thought it was.  Apparently there was some significance to her that we didn't know about.  Being human and even relatively psychologically 'normal' (as opposed to a Spock/Sheldon/Shaun) doesn't necessarily make you any more aware of what is significant to a particular other person at a particular time than an android would be aware of it. 

Now, an android or a human might be aware of a significance if one had any sort of clues to go by that seemed either to fit or to go against a pattern.  I once photographed a wedding of adults in their late 30's, and not only were the parents and siblings and their families there, but there were stepmothers of both the bride and the groom, because one father had been married and divorced a couple of times and the other three times.  The father of the groom was very personable and funny prior to the wedding ceremony; he teased me and others about all kinds of things and seemed to be having a real good time.   But when I went to take the large group family photo after the ceremony, he wasn't smiling and looked very severe, and I was getting nowhere trying to get him to smile with the group.  I finally had to direct my attempts to get him to smile specifically at him, but that didn't work either.  He finally asked in front of everyone why I was on him about this, and I said it was because he was looking unhappy and I didn't want him to be that way in the picture, and I added "I don't really understand the problem.  I would have thought that being under the same roof at the same time with all the women you had ever been married to would have made you happy."  The whole family cracked up laughing, and he looked at me and said "Where is your car parked?"  And I said "If you promise to hurt only my car, I'll tell you."   And then he smiled and was fine, but I was at that point afraid I had overstepped.  He came up to me at the reception, put his arm around my shoulder, and thanked me; he said that was exactly what had been bothering him and he hadn't known how to deal with it, and by saying it the way I did, I got him over it.  
Now, it was pretty obvious what was likely bothering him, given he had been happy earlier when it was not a family group setting, and I think an android could have seen all that evidence.  But I have to say it was mostly luck that what I said worked instead of upsetting him further.  Still, 1) the fact that he had been teasing everyone before indicated to me that he had a playful sense of humor and sarcasm, so I thought teasing might work with him as well; he did not seem mean-spirited when he teased, and seemed like the kind of person who would enjoy being teased in return -- someone who could 'take ribbing as well as dish it out'; and 2) I thought I had figured out a way to phrase it so that it was obviously absurd enough to be playful and funny to him.  The evidence for that was not airtight, but I felt I had little to lose picture-wise by trying to get him to relax and smile: I didn't want to take a picture that would make him look bad later, and even if I upset him, he wouldn't have looked any worse than he already did.  I think an android with all my experience as a photographer would likely have had a similar desire to make the picture better than it was otherwise going to be, and would have recognized the problem too, and a solution like mine.  It was primarily about logic and observation, and an android could have that ability.

Now, we might not know all the approach/avoidance reactions to program into computers to make them respond to everything in the ways humans do.  But since humans do not always appreciate what is significant to each other, that might not be terribly problematic from a scientific standpoint, though in the worst cases it would be as morally problematic as the lack of empathy and understanding in sociopaths and psychopaths.  But I think there is no need to believe androids will be more like psychopaths and sociopaths than most people are, or that they will be worse than people at knowing what is significant to another person.  One does not always understand even one's own self or motivations, or remember how one felt after an experience one once went through that someone else is going through now.   It is sometimes difficult to understand one's adolescent children even though one might have behaved in similar ways, because the circumstances just seem different: one doesn't understand how rebellion/independence works, and believes one's children should see one as cooler than one saw one's own parents.  One tends to think it has to do with the traits of the parents more than with the need to distance oneself from a parent no matter who the parent is -- though, of course, there are exceptions.  It is often difficult to distinguish a general reaction from the specifics of its occurrence.  For example, there is evidence that Tinder-type hookups, just for sex with a stranger who appeals to you on a phone app through a picture or a comment or two, don't tend to be very satisfying -- particularly, perhaps, but not only, for women.  Yet it is common that, even with repeated experience, people who keep using such hookup apps think they have simply 'kissed the wrong frogs' instead of realizing, or even suspecting, that kissing frogs is not likely the right or best way to find a prince or princess. 

There are many feelings we experience only vaguely, with an inability to describe or explain them to others or even to ourselves.  Describing or even recognizing psychological feelings is often like trying to describe a unique flavor or aroma; in many cases it is more difficult, because you don't even know there is something wrong or what it is.  Moreover, it is also difficult to know whether one is experiencing something peculiar to oneself or something common to people in general or to a particular group of people.  In her book The Feminine Mystique, Betty Friedan identified a "problem that has no name" -- and no known specific cause at the time -- a deep, pervasive malaise and unhappiness with their supposedly perfect lives among many women married to successful men, often living the sought-after suburban life.   It is likely one form of a failure to achieve what Abraham Maslow referred to as self-actualization.  Most women, and even many men, fail to thrive because they are unable to develop and utilize their talents for useful, fulfilling purposes.  John Stuart Mill expressed it in his book Utilitarianism as the effect arising from people putting or finding themselves in positions that waste their talents:
   "Capacity for the nobler feelings is in most natures a very tender plant, easily killed, not only by hostile influences but by mere want of sustenance; and in the majority of young persons it speedily dies away if the occupations to which their position has devoted them, and the society into which it has thrown them, are not favorable to keeping that higher capacity in exercise.  Men lose their high aspirations as they lose their intellectual tastes, because they have not time or opportunity for indulging them; and they addict themselves to inferior pleasures, not because they deliberately prefer them, but because they are either the only ones to which they have access or the only ones which they are any longer capable of enjoying.  It may be questioned whether anyone who has remained equally susceptible to both classes of pleasures ever knowingly and calmly preferred the lower, though many in all ages, have broken down in an ineffectual attempt to combine both."
Aristotle expressed it as a definition of happiness: an activity of the soul in conformity with excellence.  I think it also arises from the pursuit of excellence in seeking the Platonic trinity of goodness, beauty, and truth, particularly when one feels one is making progress toward success.  I believe one of the virtues of some sports and video games (particularly those one can practice and try to perfect alone) is that they improve skills and allow self-perceived (whether realistic or illusory) incremental progress toward satisfying levels of mastery.  One keeps playing them as long as the pursuit seems promising (even if it is a false promise, as in golf or marriage, which are too often triumphs of hope over experience); and, oppositely, one tends to abandon pursuits that are frustratingly difficult with no apparent improvement in sight, or in which one perceives one has maxed out one's potential and wrung all the joy one can from it.  That can even include life itself, when hope and purpose seem beyond grasping again -- though that is often a false belief, it is one difficult to see beyond, and getting beyond it, in order to find or renew a passion for living, often requires psychological manipulation more than mere logic.

But notice that these psychological conditions and feelings are not apparent to us in the way many physical sensations are: itchiness, scratchiness, heat, cold, sexual pleasure, hunger, the joy of eating what tastes great.  But even physical sensations are sometimes difficult to perceive, distinguish, or describe correctly.   Once in chemistry class, I picked up with my bare fingers a ceramic crucible, forgetting that only a minute or two earlier I had removed it from a Bunsen burner where it had been heated red hot.  It was no longer red, but it was nearly as hot.  I almost immediately felt, not heat, but something like being stabbed by needles or cut with a sharp knife.  It took me a second to determine what was going on.  I had to look at what my hand was touching in order to do so; only then did I realize I was being burned, but it never felt hot; it just felt like stabbing pain.  Much of civilization's progress (which is not always upward and has some missteps and memory losses along the way) consists in being able to detect and describe physical and psychological phenomena in ever greater detail and clarity.  As we do that, and analyze feelings not only for their biological components but also their logical components -- as in humor or jealousy or grief, or in understanding something like the logical difference between revenge and justice -- we can then figure out how to engender comparable mechanisms, and their states/feelings and behaviors, in androids/computers/machines.
