

Follow-Up to "Some Thoughts About How Machines Could Think"

Rick Garlikov


I am more confident now that machines could be designed and built that would think, and that could also feel emotions and sensations, have a sense of reasonable morality, a sense of reasonable purpose, a sense of beauty, a sense of wonder, a sense of astonishment, and even senses of humor, irony, sarcasm, cynicism, etc.  And I think they can understand and see significance in acts and circumstances, though the significance they see may not be the same as the significance we (as humans) see.  The emotions and sensations they experience may or may not be the exact same ones we feel.  The specific mechanisms for doing all these things may be different from how people think and feel, but the underlying operational logic of the mechanisms could be the same (and I will try to make the case that it is), or could still work even if very different.  So in this essay I want to try to make that case more strongly than I did in the first essay.  This is going to be a supplement to that essay, not a complete essay on the topic in itself.  I believe all this because I believe people are, in a sense, thinking machines, because 1) there are a great many instincts, tendencies, or impulses that seem to be built into human beings (and animals), and because 2) the teaching curve of life, which is tantamount to programming, is extremely steep, labor-intensive, and time-consuming, particularly in modern life.  I see no reason why, if we spent as much time programming and teaching advanced computers, and giving them their own experiences (with many of the same kinds of basic tendencies hard-wired into them that animals and humans have), as we do teaching children and adults, we couldn't get machines that think and experience emotions and sensations, etc. pretty much in the same kinds of ways children and adults do.  And I think we could hard-wire in such basic tendencies, impulses, drives, reactions, etc.

I will use the phrase 'thinking and feeling like people do' or 'thinking and feeling like we do', but I also want to discuss modifying machines so that they learn and know more than we do and behave better morally and less self-contradictorily than we, as a species, tend to do, even though, in some sense, they will still be thinking in the way we do, only better and more compassionately, logically, and sensitively.  This will not take away their "free will" or ability to think or to think independently.  There is nothing about being a good or a reasonable person that makes one have less free will than being a bad or irrational person; Mother Teresa had no less free will than Adolf Hitler, and the fact that they had far different ideas does not mean that they had different sorts of thinking mechanisms.  'Thinking differently' in the sense of having different ideas does not mean 'thinking differently' in terms of how one has ideas or consciousness.  Moreover, we may want machines to think more rationally and more sensitively and compassionately than people do (as long as that does not inhibit their 'imaginations' or their diversity of good and worthwhile ideas), so that they avoid many of the kinds of errors people make.  In other words, there should be some machines interested in exploring ideas, say in science and medicine, that may be improbable, but turn out to be true, so you don't want machines all just believing only what is most probable or seems most probable at any given time or in regard to the evidence available at that time.  Insofar as the original Star Trek characters, Kirk, Spock, and Dr. McCoy ("Bones"), mirror Plato's tripartite division of the mind into reason, emotions, and spirit that decides between reason and emotion, we don't want machines that are all Spock, all mere logic or operating only and always on probabilities without some basic instincts.  But we also do not want machines that operate only or primarily on basic instincts as McCoy often did outside of his making medical decisions.  The history of inventions is replete with dreams pursued, and often accomplished, by people who were able to invent, discover, and create what most reasonable, but unimaginative, people thought was impossible, based on their narrow prior experience, and would not even have tried to do.  In the case of photocopying machines, people, such as those in charge of IBM, not only thought they could not be created but also that there would be no point to them if they were, since there already was carbon paper for creating duplicate documents.  So we do not want a world of machines that would all only operate on probabilistic logic and not be able to dream of improbable things and be able to successfully act on those dreams to bring them into existence.

As before, however, this is about developing machines that can actually feel sensations and emotions, not just emulate (other) people's feelings in the way a sociopath or hypocrite might, or as someone might who is faking feelings such as love, sorrow, pity, curiosity, or interest, or a specific feeling, as a malingerer fakes pain or as a person fakes orgasm to get sex to stop or to make the partner feel accomplished.  It is about designing/building machines that can actually have feelings, emotions, and sensations, and that can make new discoveries and have new ideas, not simply follow programmed directions.

The essential idea is that animals and people seem to have basic or underlying instincts, urges, tendencies, drives, and impulses, and they, particularly people, have, to different degrees, abilities that utilize those instincts, urges, and drives to develop more complex behaviors, including complex planning, discovery, and language.  I believe that machines can be designed and built to have many of the same kinds of basic tendencies, instincts, urges, drives, and impulses, and abilities that utilize them.  And I think they can have abilities to utilize them in ways that lead to more complex behaviors, including thinking and language that they understand, not just emulate.  Some people seem to have more complex thought processes but also more complex or at least different basic instincts/impulses from other people in that, for example, some people will want to emulate other people's behaviors they see, whereas others will not.  It may be that different experiences govern what will be desirable to emulate, but it may also be that there are conflicting built-in instincts in some people that others do not have.  I doubt that as of this writing we know all the basic complex, nuanced kinds of instincts that people might have, and insofar as we do not, and/or do not invent ones that would be desirable for machines to have, initial thinking machines are likely to be less sophisticated, or to differ from each other in their thinking, when faced with similar phenomena.

For example, there seem to be myriad kinds of jokes people find humorous, and different comedians, or comedians in different times, often invent new kinds of jokes and deliveries to make people laugh.  The humor we see in situations can often be dissected logically once it is invented, but the invention of it and the response we have to a new kind of joke or delivery style that makes us laugh at it is fairly immediate and is not at the conscious level.  So it may be that there are all kinds of logical ways to trigger a laugh instinct or it may be that there are many different possible basic laugh impulses that can be activated by the right stimuli.  In other words, if we created thinking machines that had a sense of humor, they might only find funny the kinds of jokes that trigger the impulses we know to build into them now, which might be only a fraction of the kinds of laugh impulses different people might have.  I don't know, because I don't know how humor evolves or changes in people or from one generation to another.  I don't know whether a machine that finds funny the kinds of jokes, say, that I find funny would also find funny a joke or delivery style that might be new to me in ten years that I would also find funny.  But then I found funny things my parents never could, such as the "nonsense jokes" of the 1950's, like "What is the difference between an orange?"  If you asked that of people, they all immediately said "An orange and what?" and you would say, "just an orange.  What is the difference between an orange?"  They would then ask what the answer was since they had no clue how to answer it, and the answer was "A monkey, because elephants can't walk on lily pads."  Many of us found that hysterically funny because it made no sense and was stupid to the point of absurdity, but many people, particularly older people, did not find it funny at all precisely because "that makes no sense, and is stupid."  Other jokes at the time that were considered nonsense jokes had a different kind of absurdity to them, one where the answer was true in some way but ridiculous, such as "What are the three ways you can tell there is an elephant in your refrigerator?"  "You can smell the peanuts on its breath; you can't get the door closed; and there will be footprints in the jello."  Some people would laugh hysterically at that and others would just look at you as if you were deranged or just trying to waste their time.  Humor and other kinds of responses we have may be the result of a myriad of evolutionary developments that might be difficult to know until they manifest themselves, so it might be that we can create thinking machines that react in ways we know we do now without their thus reacting the same way to the yet uninvented or undiscovered kinds of stimuli we will react to in that way in the future.  Or it just may be that people have the same basic senses of humor but that different experiences trigger different paths to it, so that we each find different kinds of things funny.  If this is the case, then machines with a basic kind of impulse to laugh will do so at different things, depending on their experiences too.  Insofar as our reactions to stimuli depend on different instincts, rather than different experiences, machines will be more or less complex than people relative to the number of basic instincts we build them with compared to the number that people on average (might) have from nature.

Integrating Sensory Systems
It will also be important for machines to have their different sensory systems integrated in ways that people and animals do.  That is, animals and people seem to have their basic impulses and their abilities tied in with their sense organs in ways that let them react to visual, auditory, kinesthetic, olfactory, and taste stimuli in various similar ways, as in jumping in a moment of fright at a sudden unexpected visual appearance or a sudden unexpected noise, or to react to what is written in a similar way as to its being spoken -- such as good news or bad news.  Of course, there are often differences between what we can comprehend (as easily) visually versus orally, in terms of complex images or ideas, and we "take in" a lot of information visually at one time that would take a great many words to describe, usually in some sort of temporally linear fashion, because words cannot be heard (and understood) all at once the way things they depict can be seen and (at least to some extent) comprehended all at once.  We can hear many different sounds at once, as in music played by a symphony orchestra, but we cannot hear many words and ideas at once and comprehend them.  We can translate some sound into sight with sonar equipment, and we can translate some patterns of light into patterns of sound that are easier to detect that way.  And basically it seems to me that machines could be built that could do all this too, probably even better, since machines could have all kinds of sonar, radar, ultraviolet, infrared, and other electronic or chemical detection devices we do not.  And if all those means of gathering information could be coordinated and could trigger the basic instincts and perhaps logic, and self-learning ability, and discernment and verification of patterns, we could build into machines in a way something like nature builds into us, they could think and feel and understand and learn and grow and discover and invent things as people do, and likely better.  In other words, it seems to me that this could be done basically in terms of physical and electro-chemical states and mechanisms of the machine in combination with pattern recognition, information gathering, and logic programming, such as the principle of non-contradiction, that statements P and not-P cannot both be true, and the impulse to try to resolve any cases where both do appear to be true.  I will try to show here how that might be done in terms of ethical ideas such as fairness and in terms of humor.  But this needs to be done in a way that does not reduce all ethics, aesthetics, humor, and emotions to mere physical states, because I do not think they all are just those.  If they were, it seems to me that they would then be arbitrary judgments, and I don't believe that is true of all of them, though it is true of some of them, such as frustrations prompted by irrational OCD as opposed to artificial impediments to achieving reasonable pursuits, and such as feelings of justice or injustice based on the satisfactions or the frustrations of unreasonable merely egoistic desires rather than on actual and reasonable moral wrongs.  It seems to me that at least part of understanding words and ideas is associating them with perceptions and experiences, and with other words and ideas we already have learned to associate with those perceptions and experiences.
For example, it does not help you understand the meaning of an unfamiliar word or expression if you are only given a synonym for it that is also unfamiliar to you, but if the meaning can be explained in words or images you do already understand or if the meaning can be shown to you in terms of a place or activity you see or an emotion you can experience, then you will, to that extent at least, have an understanding of it.
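
Returning to the sensory-integration point above, here is a minimal sketch (in Python, purely illustrative) of how several sensory channels might feed one shared 'startle' reaction, so that a sudden unexpected sight and a sudden unexpected noise trigger the same jump.  The channel names, the threshold, and the response are invented for the example, not a proposal for an actual design.

# Minimal sketch: several sensory channels feeding one shared "startle" instinct.
# The channels, threshold, and response are hypothetical illustrations only.

class StartleInstinct:
    def __init__(self, threshold=0.8):
        self.threshold = threshold      # how abrupt a change must be to trigger the reflex
        self.last_reading = {}          # most recent reading per channel

    def sense(self, channel, reading):
        """Compare a new reading to the last one on the same channel;
        a large, sudden jump on ANY channel triggers the same reaction."""
        previous = self.last_reading.get(channel, reading)
        change = abs(reading - previous)
        self.last_reading[channel] = reading
        if change >= self.threshold:
            return self.startle(channel, change)
        return None

    def startle(self, channel, change):
        # The "jump" itself: the same response regardless of which sense tripped it.
        return f"startled by sudden {channel} change ({change:.2f})"

machine = StartleInstinct()
machine.sense("visual", 0.1)
machine.sense("auditory", 0.1)
print(machine.sense("auditory", 1.0))   # sudden loud noise -> startled
print(machine.sense("visual", 1.2))     # sudden visual appearance -> startled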

In short, it seems to me that much if not all of our knowledge is about what we perceive (usually called 'facts') and the patterns, relationships, and logic involved among facts and also among ideas, as in math, logic, and other theoretical endeavors, such as the invention of games and the development of strategies to play them well, the theoretical parts of science, etc.  Adding language to our abilities and knowledge adds relationships.  For example, little children first learning language, when they learn a new word, will often point out many of the things it applies to that they see, such as a bird or a fire truck.  They are learning to associate the words with the visual perceptions of the objects.  A computer could do that, just in the way facial recognition matches on a computer will now bring up a file with the person's name.  But it could easily (and annoyingly) be programmed to name out loud each person it sees: "There's John Simmons; there's Anne Ryan; there's Robin Huddleston; oh look, is that Jon Stewart?!" etc.  So a computer could be programmed to recognize and name objects by sight.  It could then learn to associate phrases and sentences with things it sees and/or hears just in the way people do.  And to talk about them, describing what they do, and perhaps trying to ascribe explanations (patterns of behavior) to individuals or to different individuals who act similarly or oppositely.  Is that not much of what people do?  Is that not tantamount to what we consider understanding language and to understanding what, and the way, things happen?  Add to that language about ideas, logic, patterns, and other relationships that we discover or notice.  All this would make a machine of the sort that Turing wrote about, one that could converse with people about almost anything that another person could converse with them about, and do it at least as well, if not better.  It would likely know more than most people, particularly if it had all the knowledge available to it that is on the Internet, where now you can look up almost any subject from the causes of yellow leaves on knockout roses to how to remove a LoJack device from a car to the symptoms and treatment of Achilles tendonitis, to what is 'trending' on Twitter or Facebook, and where any performer's next live performance will be given or how their last major performance was reviewed, etc.
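
As a toy illustration of the naming behavior just described, the following sketch assumes some existing recognition routine (faked here as a simple lookup) and merely attaches names to what is 'seen' and announces them.  The face labels and the recognize function are hypothetical stand-ins, not any particular real system.

# Toy sketch of "pointing out what it sees": a made-up recognizer returns labels,
# and the machine announces each one, the way a child names things it has just
# learned the words for.

KNOWN_FACES = {"face_001": "John Simmons", "face_002": "Anne Ryan"}

def recognize(image_id):
    """Stand-in for a real recognition routine; returns a name or None."""
    return KNOWN_FACES.get(image_id)

def announce(image_id):
    name = recognize(image_id)
    if name:
        return f"There's {name}!"
    return "I don't know who that is yet."

for img in ["face_001", "face_002", "face_999"]:
    print(announce(img))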

Personal Identity and Death
Plus, one of the interesting potential future or at least science fiction aspects of machines thinking is that as long as the 'memories' or data of machines can be transferred from one to another with the same manufacturing design, as they can be now from one computer to another, machines would not have to face or fear death -- in terms of the disintegration of their knowledge and recorded biographical history and their ways to respond to experiences and knowledge, though they could face death if all, or a significant portion, of the data and/or commands were deleted from all the computers that had it, and they could face a kind of death if different computers reacted to, and utilized, the data and 'memories' of experiences in different ways because either their initial programming was different or because their different experiences and self-learning were different, just as people now do.  While we can pass on our ideas to others to some extent, that is not the same as preserving them in others the way they might be preserved in us, and it is not the same thing as keeping them in reserve for how we would use them later or what we might gain from them later.  Part of what distinguishes people one from another and makes each of us unique is that we have different ideas and different experiences with different reactions to, perceptions of, and recollections of them, which we cannot directly transfer to each other, but can only transfer indirectly (and not always very well) through words, pictures, stories, poetry, etc.  Machines would be able to have the actual same data/memory, though they might use it in different ways in the future.  Whether the machine that originally supplied the data would fret over that, or consider it death if it were no longer able to function, I don't know.  It might be a kind of death, but it may not have as much sting.  And if its programming and memory were transferred into totally new machines with no other memory or programming of their own, it might not be death at all, but a virtual immortality as long as the process could be continued into new identical machines.

One of the questions I ask in philosophy class is whether, if there were a machine that could clone your body and mind (with all its ideas, memories, abilities, etc.) exactly as they are now (say in a distant city, so that you could travel anywhere virtually instantaneously), it would be okay to disintegrate the original each time to prevent overpopulation of "you", with its attendant problems of which of you should keep your job, spouse, home, car, bank and retirement accounts, etc.  What is interesting is that students split on this issue, with some thinking it would be no different from how we proceed from day to day or year to year over time anyway, and others thinking that the clone would not be the same person and that disintegration of the original would be the death of them.  Basically I raise the strange question because it allows me to point out how our concept of identity is complex and possibly somewhat arbitrary in regard to any kinds of things that change over time but which we still consider to be the 'same' thing despite their changes.  E.g., if we build an identical replica of our car out of all new parts, we would say that is a different car though it would be an identical one, but if we replace all the parts in our car over time as they need replacing, we would say that is the same car we originally owned.  The concept of things that change through time being the same is probably more complex, more arbitrary, and perhaps even more contradictory than most people recognize or imagine.

But back to current reality: contemporary answering machines emulate language but do not understand it in the way I have in mind, though some of them can relate words or phrases to synonymous ones, which is a simple kind of understanding or ability to use language.  As of this writing a phone answering machine with pleasant recordings available for it to play can emulate questions and comments by asking you what you are calling about and then convincingly saying "I am sorry, I do not understand; can you say that again in different words?" or, if you say "I think my bill is mistaken", the machine can say "Am I correct that you want to speak to someone in accounting about an error in your bill?" and when you say "Yes", then say "Okay, I've got it; let me pull up your account and connect you with someone in that department to help you."  Those devices are improving and sometimes they work pretty well.  But they are not thinking in the sense I am talking about enabling machines to do.  Often they are still not particularly good and are frustrating, because the options you are given are insufficient or unclear as to which one you need to pursue, and they sometimes lead you down the wrong path from which there is no escape by the time you discover that, other than to hang up and call back and try a different route.  Some seem to be programmed now to connect you with a person if you say things the machine is not programmed to recognize, particularly if it has already asked you to repeat your request in different terms, and it still does not recognize them.  There was one answering system at a major company where, if the caller got angrily frustrated after repeated failed attempts to say something the machine could utilize, and just started swearing, the machine would say something like "OK. Let me connect you with someone who can help you" and then connect you to a human customer service representative.  It was best to just cuss at the machine at the beginning if you knew you needed to be connected to a human.

Even though these machines are not thinking, many of them do sound like they are, and they are often more helpful and do better than really stupid, lazy, or ignorant humans who don't really try to understand your questions and who give answers that are irrelevant or simply wrong because they do not know the right answer or care that you have it, or who connect you to the wrong department to address them, just to be done with you.  In many cases such lazy or irresponsible operators are freely associating the language you use with whatever pops into their heads, and they are not really understanding you but just making surface connections between your words and departments or people or answers they associate with those words at a surface level.  That is not significantly different from what a computer could do.  Or if the machine does not recognize your vocabulary and just connects you with someone, that is not terribly different from having a human operator who has to connect you to someone else who has or might have greater knowledge about your question or problem, and who makes his/her best guess as to who that might be, or just chooses someone to get you off their hands.  In other words, some well-programmed automated answering systems now are more helpful than poorly trained or poorly motivated human beings, but the machines are not thinking, and the humans are only thinking minimally and poorly, if what they are doing could be called thinking at all.  What I am interested in here are machines that do think and that do it well.  Much of my writing about education is about teaching students for understanding, not just teaching them to memorize the material, nor training them to respond merely automatically, which is tantamount to programming them to a certain extent.  I am interested in machines and humans who can think, understand, and feel in the ways that would be considered intelligent, wise, sensitive, and caring.

As to basic instincts to be programmed in or learned, let me start with some examples before giving a general principle.  Suppose that when your car is low on gas, instead of just having a gauge that shows you that, it also cries like a baby, crying louder and louder the lower it is on gas and the longer you take to begin filling up the tank; or suppose that instead of having a gas gauge, manufacturers just have the infamous 'check engine' light come on when the car begins to get low on fuel, since to laymen the check engine light is the automotive equivalent of a baby's crying -- indicating that something is, or might be, troubling it, or that the light is just being fussy, but you can't tell what or whether it is serious or not.  Moving up to the older toddler stage, suppose that the car makes use of the onboard GPS and wireless Internet system to find nearby gas stations and perhaps even sets a course to take you to one, resisting your efforts to turn it away from the path it has charted.  Suppose that as it gets lower on gas, it resists your attempts to steer it away with even more force.  I don't see that as being impossible to do today.  And it would not be much different from the efforts of a young child to get something it sees and wants at a supermarket, and its being difficult to dislodge from the attempt.  Then, if the car had some sort of self-programmable memory, when we pulled into a gas station to fill it up, it would remember that gas station's location and try to go there.  We might mollify it by taking it to other, more convenient gas stations.  At some point, with built-in optical scanners, etc., it might recognize gas stations as such, even ones it has not seen before.  If it had a voice recognition system and audible device, we might be able to "teach" it that we are putting gas in it and have it just ask for gas the next time it starts to get low.  It could start out gently asking, but get more "nagging" or loud and insistent as the gauge gets lower and lower.  At some point, if it has some sort of logic built into it, we could get it to see that it is not seeking a gas station as much as it is seeking gasoline, and that gas stations are normally where to find gasoline in usable form, but there are other possible places, such as the fuel can in your garage.  The car could be programmed to nag you about getting gas as well, nagging more insistently (e.g., more frequently, louder, shriller, etc.).  Self-checkout counters in supermarkets almost seem to be nagging (though just following a program) when they keep telling you to place your item in the bagging area, and lock up when you don't, or when they repeatedly keep saying, after you have paid, "Please take all your items from the bagging area" until you do, which they sense by the weight's being lifted from the bagging area.  It is not a stretch in technology to think that we could have all kinds of sensors for all kinds of things and have programmed logic that can be self-reprogrammed by experiences to let it know what messages to give, how to give them most effectively, and under what total set of circumstances.  That would be expensive (at least initially) and unreasonable to do past certain points, but it is not theoretically impossible.
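
A rough sketch of the escalating 'nagging' fuel drive described above might look something like the following; the fuel thresholds, the messages, and the remembered gas station are all invented for illustration and are not drawn from any real vehicle system.

# Rough sketch of a fuel "drive" that escalates as the tank empties.
# Thresholds, messages, and the remembered station are invented.

def fuel_drive(fuel_fraction, known_stations):
    """Return the car's behavior for a given fuel level (0.0 - 1.0)."""
    if fuel_fraction > 0.25:
        return "content: no action"
    if fuel_fraction > 0.15:
        return "gentle request: 'I could use some gas soon.'"
    if fuel_fraction > 0.05:
        nearest = known_stations[0] if known_stations else "any gas station"
        return f"insistent: plots a route to {nearest}, nags louder"
    return "urgent: resists steering away from the charted route"

stations = ["the station where it was last filled up"]   # remembered from experience
for level in [0.5, 0.2, 0.1, 0.03]:
    print(level, "->", fuel_drive(level, stations))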

The difference between humans (or animals) and machines in this regard is that humans and animals are born with limited sets of instincts and urges (i.e., causal responses or reactions to stimuli) compared to what they are taught and learn on their own as they age, whereas machines would not have to start from scratch in the teaching/learning process each time the way babies and baby animals do.  We could simply copy what an older machine has 'learned' or reprogrammed itself to do, into a new machine.  That is no more "cheating" about more efficiently teaching machines to think than is developing great teaching tools for children that more efficiently help them learn what others have discovered.  The progress of civilization would have come to a screeching halt early on if the knowledge discovered, invented, or learned by others could not be taught and passed on from one person or generation to another more efficiently than it took to discover it.  People with average intelligence have knowledge it took the finest minds collectively to discover or invent over the span of human history.  And in fact, there is so much knowledge in so many areas now, that people need longer and longer schooling or training, and they need to specialize because otherwise it would take too long to learn everything that is known.  Insofar as machines would not have to go through all that time-consuming training, if they could think at all, as I believe they can be built to, they would make far more progress far faster than humans do.  But at first machines could have the basic kinds of instincts and programming/learning ability that humans and animals do, and they could slowly be taught what human infants and children are taught.  That would be time-consuming and frustrating, just as it is with children.  And there would be dangers to the machine and misunderstandings by it, just as there are with children (and even with adults).  For example, as I was driving my three-year-old grandson somewhere, somehow the subject of lawn mowers and weed trimming machines arose because of what we passed, and I told him about my having to weed-whack a section of my yard, and he asked why I didn't just mow it with the lawn mower.  I said it was too steep to be able to do that, and he didn't understand what "steep" meant.  That seemed difficult to explain while driving, except by talking about how some hills were really difficult to climb or made you go really fast when you went down them, given that he had experience (as he did) with some different hills.  But it would have been easier to explain it with visual examples while not driving.  For example, you might take a book or piece of cardboard and hold it at different slanting angles, saying which of the slants you were showing him were steeper.  Of course, just seeing angles and slants would not necessarily convey that steeper angles were harder to climb or descend, or to walk across from one side to the other without losing your balance, etc.  They would not show that it was more difficult to push or pull a heavy object up a steeper incline or to control its speed going down one.

All those sorts of things go into our concepts of "steep" versus less steep or flat/level.  Galileo learned even more about steepness and its relationship to acceleration and gravity when he experimented with the speeds of objects rolling down planes tilted at varying angles.  Most adults today probably do not know the particulars or the physics formulas that Galileo discovered, and so their concept and understanding of the word "steep" is not as complex or complete as his, though they certainly can recognize steep hills or climbs when they see them or walk on them.  And we normally only teach children the parts of the concept we can explain to them which we think they can comprehend or that we think they need to know.  I see no reason we could not teach a computer to recognize and describe concepts like steep hills or terrain or climbs, versus less steep or level ones.  It would take time, and they would have to have visual angle sensors to detect degrees of incline, and/or kinesthetic ones that relate physical platform position (i.e., its 'foot' or 'ankle') either to vertical, or to the angle it needs to maintain upright balance, or to how much energy/power it needs to be able to climb up an incline, or to how much force it needs to exert to keep from accelerating too fast to maintain its balance while going down the incline.
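
To suggest how such sensor readings could ground a word like 'steep', here is a small illustrative sketch; the angle thresholds and labels are made up, and the effort figure is just the textbook force needed to push a mass up a frictionless incline.

# Sketch: grounding the word "steep" in an incline-sensor reading and in the
# effort needed to climb.  Thresholds and labels are invented; the physics is
# only the standard force-along-an-incline relation.
import math

def describe_slope(angle_degrees, mass_kg=100.0, g=9.8):
    # Force needed to push the mass straight up the incline (ignoring friction):
    effort_newtons = mass_kg * g * math.sin(math.radians(angle_degrees))
    if angle_degrees < 5:
        label = "level"
    elif angle_degrees < 20:
        label = "gentle"
    elif angle_degrees < 40:
        label = "steep"
    else:
        label = "very steep"
    return f"{label} (about {effort_newtons:.0f} N to push 100 kg up it)"

for angle in [2, 15, 30, 50]:
    print(angle, "degrees ->", describe_slope(angle))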

And I see no reason we could not do the same for most words in the dictionary, at least ones that have to do with physical descriptions.  And even though this would be tedious to do and take a lot of work the first time, after the programming is done once, it could probably easily and quickly be adapted to many or all 'thinking' machines, so they would all be able to correctly use language.  Of course, there may be some problems of the sort that children or anyone learning a new (e.g., foreign) language experience with idioms and idiosyncrasies in grammar or syntax.  Those could likely be corrected, just as they are for humans learning the language and making typical mistakes to begin with.  I once helped a student from the Czech Republic work through a guide to English so she could pass an English as a Second Language (ESL) college entrance exam, and it was amazing how many exceptions there were to the meanings and grammar principles it gave, which cropped up almost every time she asked about a specific example to see whether she understood the principle or meaning.  Invariably she used an example that showed the need for a modification of the explanation in the book.  I also once helped mentor by email a student from France who was studying, at Oxford, how to teach English as a second language.  He asked me to proof drafts of his papers to make sure there were no glaring linguistic errors, and though he was very good at English, periodically he would write something that at first made no sense.  Typically it was when he was translating too literally what he did not realize was a French idiom that did not have an equivalent counterpart in English, and half the time I could tell from the context what he was trying to say.  But the other half of the time, the French idiom translated directly into English just made no sense at all to me and I could not help him come up with the right words until I found out, through further explanation on his part in a different way, or through examples, what he was trying to say.  We would typically have a good laugh over the use of the idiom, particularly when I explained to him what the direct English translation meant.  I had the same or similar problem in studying German as a second language for graduate school.  I was translating a set of aphorisms by Goethe into English.  Everything was going along fine, with the help of a German-English dictionary to look up any words I did not happen to know, until I got to one aphorism that had a number of words in it, each of which had two very different (though possibly somewhat related) meanings in English.  I tried the different combinations until I got one that at least made good sense, though it seemed very out of character with all the other aphorisms.  The aphorism in question was supposed to mean "One is truly impoverished who has lost all shame with regard to keeping his sorrows (or troubles) privately to himself."  Much to the total delight and almost asphyxiating laughter of my German teacher (who was a cool guy), I translated it as "One is truly impoverished who has had harm come to his private parts."  After he was able to get his breath and skin color back and right himself from being doubled over one of the classroom tables, he had me write out the Goethe aphorism and my translation because he had friends to share it with.
I presume programming machines to think, understand, and learn from experience would be fraught with the same kinds of errors, as well as those children normally make in their understanding of anything until it is honed through trial and error.

Basic principles:
1) Simple instincts, drives, urges, impulses, etc. could be programmed into machines that are essentially logically similar or tantamount to those simple reactions and aversions or attractions which parents, psychologists, philosophers, and others discover or suspect that babies, children, adolescents, and adults have.  And we could go beyond those to build in additional ones if they would be helpful.

2) Drives, instincts, urges, impulses, etc. would be basically causal feedback mechanisms to pursue or try to achieve, increase, or maximize certain physical states (in the sense of electrical, chemical, physical states or 'gauge readings') and to avoid, minimize, or eliminate other states.  And as the machine begins to recognize patterns of conditions that lead to or produce states that are sought or avoided, the machine will be 'learning' which conditions and circumstances to pursue or avoid.  That combines learning and factual knowledge with basic instincts to bring them into play under more varied or complex conditions.  (A small illustrative sketch of such a feedback-and-learning mechanism appears after this list.)  Different machines could have some different basic instincts or tendencies; e.g., some might crave (i.e., seek) higher risk activities of various sorts.  Some might seek to go fast or climb high as registered by speedometer or altimeter readings or by other sensors for judging speed or distance from the ground or above other structures.  Some might seek certain chemicals rather than others (i.e., prefer different tastes or aromas or other properties of chemicals that humans may not be able to sense).

3) Two important tendencies to build in would be (a) to emulate people (or other machines) to various degrees, and (b) to seek more factual information or experiences to add to machine databases.

4) There can be conflicting sensations or pursuits/avoidances, just as there are with people.

5) Machines may or may not necessarily recognize, name, or understand the patterns, causes, etc. of all their own instincts, drives, urges, impulses, or desires, just as people do not of theirs.  Insofar as machines do recognize and are more aware of the (causal) pattern of their impulses and drives, those would be tantamount to conscious desires, needs, etc.; insofar as they do not, those would be tantamount to unconscious desires, needs, etc.  Unconscious needs and desires could become conscious as the machines figure out more about them and thus learn about them and understand them to the same extent people do, insofar as knowing more about the patterns, causes or suspected causes, and related aspects of our feelings constitutes understanding.

6) Machines would be programmed to recognize and try to resolve apparent contradictions or incomplete patterns or discrepancies among patterns or statements.  Trying to resolve apparent incomplete patterns would be tantamount to having or displaying curiosity.
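
The sketch below illustrates principle 2 above: a feedback mechanism with a built-in target reading that records which circumstances have moved the reading toward or away from that target, and so 'learns' what to pursue or avoid.  The reading name, target value, and learning rule are invented for the example.

# Sketch of principle 2: a feedback mechanism tries to keep a reading near a
# built-in target, and "learns" which circumstances tend to move the reading
# toward or away from that target.  Reading names, targets, and the learning
# rule are illustrative inventions.

class Drive:
    def __init__(self, name, target):
        self.name, self.target = name, target
        self.associations = {}          # circumstance -> recorded effects on the reading

    def experience(self, circumstance, before, after):
        """Record whether a circumstance moved the reading toward or away from the target."""
        improvement = abs(before - self.target) - abs(after - self.target)
        self.associations.setdefault(circumstance, []).append(improvement)

    def attitude(self, circumstance):
        effects = self.associations.get(circumstance)
        if not effects:
            return "unknown"
        average = sum(effects) / len(effects)
        return "pursue" if average > 0 else "avoid"

charge = Drive("battery_charge", target=1.0)
charge.experience("plugged in at dock", before=0.3, after=0.8)
charge.experience("left running outdoors", before=0.8, after=0.4)
print(charge.attitude("plugged in at dock"))     # pursue
print(charge.attitude("left running outdoors"))  # avoid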

What Machine Instincts, Urges, Impulses, Desires, etc. Would Be:
Machines could be built with sensors that either activate various capacities they have, or that strive to increase or maintain the pressures or voltages or chemical reactions their sensors detect, or that try to reduce or minimize or eliminate them.  That would be tantamount to the machine's liking something -- trying to get more of it (more pressure, voltage, molecules or molecular density, etc.) in time or intensity -- or its disliking something and trying to avoid or minimize it in time or intensity.  It is not that we or machines would be trying to increase an experience because we or they like it, or trying to decrease or avoid that experience because we or they dislike it, but that the seeking to increase it or avoid it is what we perceive in ourselves, and the machine would perceive in itself, as liking or disliking the experience.  If it is trying to relieve or release pressure, it would perceive that release as making it more "comfortable", i.e., no longer working hard trying to achieve this state.  This would be no different from people's urinating or defecating when their bladders or bowels are full and exerting pressure, or sneezing or blowing their nose when they have a cold or allergy causing sinus pressure they want to alleviate, or are bothered by their nose running.  And it would be no different from wanting to have the feelings induced by chocolate or sex or love or shopping or solving puzzles or discovering something one wants to discover, or any of the myriad things humans want to do and strive to do.
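
Here is a minimal sketch of that idea: the machine's 'liking' something is just its built-in direction of striving with respect to a reading, which it can then report about itself in everyday terms.  The two appetites and their readings are invented for illustration.

# Minimal sketch of "liking is the striving": each appetite is a built-in
# direction of striving with respect to a reading, and the everyday report
# ("I like / I dislike") is just a description of that striving.

class Appetite:
    def __init__(self, reading_name, direction):
        self.reading_name = reading_name
        self.direction = direction   # +1: strive to increase/prolong, -1: strive to reduce/avoid

    def adjustment(self):
        # The striving itself: nudge the reading in the built-in direction.
        return 0.1 * self.direction

    def self_report(self):
        # The machine's description of its own striving, in everyday terms.
        verb = "like" if self.direction > 0 else "dislike"
        return f"I {verb} {self.reading_name}"

warmth = Appetite("the warmth of the charging pad", +1)
pressure = Appetite("the pressure in the hydraulic line", -1)
for appetite in (warmth, pressure):
    print(appetite.self_report(), "| nudge per step:", appetite.adjustment())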

And we could also program the machine with the basic instinct to mimic others or to learn from others whom they want to emulate, setting up other kinds of pressures and "frustrations" when the actions or teachings of others conflict with initial programming or seem to make it go through extra motions it doesn't feel the need to make -- as when little kids are not bothered by snot running down their faces and resist taking the time to wipe it or letting you wipe it, because they have no natural instinct to do that, and because they have an impulse to do something else with that time, without interruption.  A 'lazy' machine, for example, would be one that prefers watching daytime TV or doing nothing for long periods of time to mowing the lawn or helping with other chores or doing some other kind of useful work.  Many human traits are simply preferences of behavior; e.g., perpetually 'hungry' obese people may crave to eat more than to do physical activity that would allow them to lose weight instead of gaining it; or lustful people prefer to seek the physical state of sexual arousal or satisfaction rather than to learn new ideas.  A person who seems uninterested in sex with a spouse may have a stronger impulse generally or at some given time to watch a football game or to complete some particular task s/he has begun, or may generally find almost anything preferable to the states that sex induces.  Or there simply may be no impulse to seek a state of sexual arousal and nothing that particularly triggers such a state.

Insofar as machines can be designed and programmed or hard-wired to have specific preferences, they would be different from babies, which come into this world with likes, dislikes, abilities, disabilities, basic learning skills, etc. that we cannot at this time predict or intentionally and knowingly create.  Some traits might be important to build into perhaps all machines, but other traits may be important to allow to vary among different machines, in order to allow greater diversity of ideas and discoveries, skills, passions, tastes, etc.  E.g., it may be better to program certain kinds of ethical principles/behaviors into all machines to keep them from being selfishly or greedily destructive.  Or there may be good reasons not to do that, at least not without all kinds of exceptions.  Given the unintended side-effects or consequences of any design that can learn and change, I would think it would take some trial and error for humans or machines to learn which designs or parts of designs, if any, should be universal.  It would seem that a strong sense of ethics and compassion, pattern recognition of all sorts, and the drive for acquiring information and resolving incomplete patterns or apparent logical contradictions might be strong candidates for universal programming and hard-wiring, but that may not be true, and/or there may be other traits equally or more important.

If we look at people, there are all kinds of basic or primitive instincts and quirks different people have and desires they develop.  We could program many of the basic ones into machines if we wanted to.  These would be basic likes and dislikes as manifested by their striving either to increase or to minimize/eliminate certain pressure, chemical, or electrical 'readings'.  We could add language, and we could even add the propensity to describe any perceived readings or actions in language, where words or phrases would have to be created to allow the naming of sensed data and/or to communicate efficiently with people and/or other machines.  There could be a built-in propensity to have such communication because not being able to be understood would cause pressures perceived as frustrations.  Senses of fairness could be partially hard-wired in through impulses that would lead to the sorts of concepts described in the essay "Fairness as Moral and Conceptual Relevance", which often start in children with instinctive notions to treat people in a way they seem to deserve based on how they treat you, and/or that have to do with not being deprived of getting what you want while others do get it, and/or that have to do with people being treated equally under the same conditions.  But fairness is a complex set of ideas and there is ample room for disagreement and development of understanding of it.  It may be best to allow machines to have the building blocks to determine their own ideas of fairness and to disagree with each other, or it may be best for us to program in somewhat sophisticated notions of what constitutes fair treatment, but still allow modification through experience and learning.  I don't know.  I am interested here in explaining design possibilities that would produce traits such as what we as humans designate to be fear, joy, humor, intelligence, logic, artistic skill, etc., not trying to advocate specific designs or traits to build in.

Moreover, I don't know whether a separate 'processor' would be necessary for the machine to think or not, or whether that would mean being able to monitor, as much as possible, its central processing.  Plato and Freud each postulated separate parts of the brain that processed information from other parts, but I don't know whether those components exist or are necessary in people, or whether they would be necessary in machines.  For example, suppose it has been a hot muggy day, but in the evening it has cooled off and the humidity has dropped.  An adult human might go outside, and, perceiving how it feels, breathe a loud sigh of relief and exclaim "Wow!  What a pleasure!  This is like paradise!"  I don't know whether a second part of his brain monitors the first part's detecting the change in humidity and temperature or the cool breeze or whether we are wired and educated to report that as part of our reaction to the experience of detecting the decrease in temperature and humidity.  So I don't know whether the machine needs to have a separate computer that announces that certain readings are pleasurable or whether it could just be designed in such a way that it learns on its own to report pleasure when certain readings go from a range high enough to try to avoid (or low enough to drive pursuit of an increase) to the range the machine would be programmed to try to prolong.  Pleasure comes as pursuit or avoidance requires less effort or energy, or as a state is reached that requires no further approach or avoidance.  E.g., if your car is in cruise control set at 70, and it comes to a steep hill it has to climb, it may have to kick into a lower gear and/or higher rpms to be able to maintain 70.  When it crests the hill, it can 'relax' to maintain 70 on the way down.  The speedometer and tachometer readings could be set to trigger groaning or sighs of relief at the different prospects, or more expressive comments, just as a bicyclist in a mountain area might make.  Pleasure can also come from anticipation of reaching such a state by doing what the machine is doing in conformity with patterns it believes will, under the prevailing conditions, lead to the state pursued by the feedback mechanism.  E.g., humans have a sense (whether justified or not) of what is possible or not for them to accomplish, and the pursuit of a goal that seems reachable and desirable, along with progress in the pursuit, even with temporary setbacks, can be exhilarating; i.e., pursued with more vigor and less fear of failure and/or less fretting over the work involved.
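
The cruise-control example might be sketched roughly as follows, with 'relief' reported whenever the effort needed to hold the set speed drops noticeably; the grades, effort numbers, and remarks are invented.

# Rough sketch of the cruise-control example: "pleasure" reported as the effort
# needed to hold the set speed drops.  Grades, effort values, and remarks are invented.

def cruise_report(previous_effort, grade_percent, set_speed=70):
    # Crude stand-in for throttle effort: more effort on steeper climbs.
    effort = max(0.1, 0.1 + grade_percent / 10.0)
    if effort > previous_effort + 0.2:
        remark = f"(groan) downshifting to hold {set_speed}"
    elif effort < previous_effort - 0.2:
        remark = "(sigh of relief) cresting the hill, easing off"
    else:
        remark = "steady as she goes"
    return effort, remark

effort = 0.1
for grade in [0, 6, 6, 0, -3]:          # flat, climb, climb, crest, gentle descent
    effort, remark = cruise_report(effort, grade)
    print(f"grade {grade:>3}% -> effort {effort:.1f}: {remark}")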

Now, on my theory here, it might seem there is no difference in feeling between being programmed to pursue more of something, which I am saying would be tantamount to finding pleasure in the conditions, and being programmed to avoid, eliminate, or decrease mechanical-electro-chemical readings ('sensations').  The machine would just be pursuing what it is programmed to do; there would be neither pleasure nor pain in it; or if there were, they would be the same.  But I think that is not true.  First, we humans do not always distinguish avoiding pain from having pleasure, except perhaps in cases of strong deprivation, and not necessarily even then.  For example, we eat because we get hungry, but hunger is not really a pain, and eating to satisfy a very slight hunger is not necessarily much of a pleasure.  But if we are ravenously hungry, eating can be very pleasurable, not just the alleviation of an unpleasant feeling.  Or if you have to pee and have been holding it, when you finally get to go, it can be a pleasure, not just relief from pain or frustration.  Many things, such as breathing, are not particularly pleasurable, and holding your breath for a short time voluntarily is not particularly painful, but not being able to breathe for a longer time can be very painful, frustrating, and agonizing, with being able to breathe again then being pleasurable and extremely satisfying and gratifying.  So I am not sure that we make the distinction in many cases between getting something we want or want more of, and being able to decrease or avoid sensations/readings we do not want or want less of.

Plus, we don't always know what we want; and sometimes, as in the case of addictions (or even certain foods), people want things they know are not good for them (or that will give them indigestion or make them gain weight instead of losing the weight they want to shed), and that in some sense they want to avoid.  Or in the case of OCD, people want things that make no sense, at least not to others, but that they try to achieve anyway.  Many people have set ways they want to do things that we consider just idiosyncratic without rising to the level of OCD, but it seems to me the mechanisms are the same.  It is just the level of compulsion and the degree of unusualness or unconventionality of the desire that makes the difference between normal (common) desires, idiosyncrasies, and obsessions or compulsions.

Now my theory is faced with the problem that we seem to feel differently about clear-cut cases of pursuing pleasure versus those of avoiding pain or harm.  E.g., fleeing from someone intent on harming us is a different feeling from running a race or running for pleasure, or even from chasing someone with the intent to harm them.  The feelings of the chaser and the one being chased are different.  Or, even at a lower animal level, if you turn on a light at night where cockroaches are partying, they scurry in a way that makes them seem to be fleeing in fright, not playing hide and seek for fun.  Of course, I don't know whether they are frightened (or gleefully running), or feeling frustration if prevented from fleeing to where they are trying to get, or just reacting physically to a stimulus.  But with people, there is a different feeling when one is running in fright or one is running for pleasure or at least with no concern of fear.  I don't know whether that can be accounted for, but it seems to me that it might be accounted for by different causes of running or different neurological paths (or different electrical/chemical paths in machines) where one path is avoidance of perceived potential harm or just avoidance, and the other path pursuit of a physical state programmed or hard-wired in to be intensified or maximized.  That is, seeking to lower a state of some sort (perception of being harmed) would be different from seeking to raise one (e.g., perception of catching lunch and satisfying hunger).  It might be that if potential harm is perceived, and if harm or certain calibrated readings (i.e., pain in animals from the firing of some neurons, or short circuits or increased voltage or other electro-chemical measures) are programmed to be avoided, attendant other readings will be what we consider fear -- that is, an anticipated expectation of pain or of the state that is strongly attempted to be avoided or eliminated.  If we are running for a goal we positively desire, then the impetus for running might give readings we take to be pleasure.  Similarly with the programming of a machine that has learned that some things can cause it harm (and states of being it is programmed to avoid or minimize) and that other things can be sources of states of being it is programmed to increase or maximize and sustain.  Or consider, for example, a person running from someone because they think it fun (e.g., being coy or playful or wanting to show themselves faster, etc.), but then finding out later that the other person was trying to kill them.  That might give them cause for alarm and change how they feel about the pursuit in retrospect, but it won't retroactively change how they felt at the time.  It is not just the running that is involved in our mental states or that would be involved in a machine's running, but also knowledge gained from experience (or hard-wired programming) that could trigger sensations or electro-chemical states that are measured and read -- states that are programmed/hard-wired to be pursued or avoided.
In other words, on the one hand, the causes or purpose of running could be taken into account and channeled into pursuit of pleasure (sustaining, increasing, or maximizing a physical state) versus avoidance of harm (decreasing, minimizing, eliminating or avoiding particular states), or on the other hand, there could be different sensory data that would immediately trigger an overwhelming reaction of avoidance that is tantamount to what would be called or experienced as fear or an overwhelming attraction to sustain or increase that would be considered or experienced as pleasure.  Often humans do not feel fear of a particular set of circumstances until they have experienced a bad (i.e., resisted) result in them (such as an infant's pain from a first inoculation at the pediatrician's office) and then expect it again at the next visit, or a visit to any kind of office or small shop. 
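
One way to picture the 'different paths' suggestion is the toy sketch below: the same outward behavior (running) gets labeled with a different felt quality depending on whether an avoidance channel or a pursuit channel is driving it.  The channel names and labels are illustrative only.

# Toy sketch of the "different paths" idea: the same outward behavior gets a
# different felt quality depending on whether an avoidance channel or a pursuit
# channel is driving it.  Channel names and labels are illustrative.

def run(cause):
    behavior = "running at full speed"
    if cause["channel"] == "avoidance":
        feeling = "fear: anticipating the state it is wired to avoid"
    elif cause["channel"] == "pursuit":
        feeling = "exhilaration: closing in on the state it is wired to increase"
    else:
        feeling = "neutral: no drive engaged"
    return behavior, feeling

print(run({"channel": "avoidance", "trigger": "perceived threat"}))
print(run({"channel": "pursuit", "trigger": "perceived goal"}))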

As an example in human behavior of trying to increase or decrease a chemical level, some people, those whom most people would consider to be dare-devils, seem to want to increase their adrenaline levels (as of this writing, we often refer to them as adrenaline junkies) or at least are driven to do things most other people try to avoid.  So it could be that, if adrenaline is the determining factor, some people are hard-wired to pursue higher levels than others, perhaps even maximal levels, while those who are considered 'risk-averse' are hard-wired to minimize their levels/readings of adrenaline.  Machines could be built to do that too, pursuing different levels of risk, regardless of the harm that might befall them, which they either ignore or which in some way even contributes to the drive to pursue the act that might lead to it.

Or consider that toddlers, even siblings, might have totally different food preferences; one likes broccoli, while another doesn't.  Or one likes pasta with marinara sauce or a sandwich with condiments and other trimmings, while another doesn't want the ingredients combined, and will take the sandwich apart and eat all the components separately.  Or consider children's notorious dislike of having their different foods touch each other on their plate, even though they like each of them separately.  Or the person who prefers to eat the different foods on their plate one at a time, finishing each before moving on to the next one, versus the person who likes to travel around their plate to eat a bite of one thing and then a bite of another, then a bite of a third thing, etc.  We could program machines to behave in the same way by pursuing or avoiding certain levels of chemicals or combinations of chemicals.

If we wanted diversity of ideas among machines, we could hard-wire them to try to sustain or to avoid different chemical or physical states.  We could even program them to learn in different ways, whereby some use more logic or reasoning than others.  People who experience the same things or have the same evidence often come away with different conclusions.  Some pursue the discrepancies further or care to check their own reasoning more thoroughly.  Machines could be built the same way.  Plus, then you add in hard-wired responses to wanting to "fit in" with popular or powerful machines versus wanting to follow some sorts of moral imperatives one has discovered or been taught (whether right or wrong), and you start to get conflicting ideas among different machines, just as you do with humans, and you get cognitive dissonance caused by seemingly conflicting ideas within any given machine, as with any given human.  When I was young, we visited a relative for dinner and the hostess asked me if I would like more of one of the foods she had prepared, and I said "no thank you, I didn't like the taste very much."  My father angrily told me when we left that you never say you don't like something someone gives you and that you eat it without complaint.  Some time later, we were at the farm of one of my uncles for a family gathering and I asked for a glass of milk.  It tasted terrible, but I dutifully drank it without complaint, figuring country milk just tasted different from city milk or that the brand of milk they bought was different.  An hour or so later, one of my cousins asked for a glass of milk and then told my dad, after the first sip, it tasted terrible.  I braced for my father to lecture him about his bad manners, but instead my father smelled the milk and said it had turned sour and was bad, not to be drunk.  Then he looked at me and said "You drank a whole glass of this milk before; what is the matter with you that you didn't say it had turned bad!?"  I imagine a computer could be programmed to also see there is an element of contradiction in the two reactions and explanations of my father.  The computer could also be programmed to seek to avoid being disappointing, or being yelled at or demeaned for being wrong or badly behaved.  That could add to the consternation or desire to figure out what is true about being served food that tastes terrible.  Is one supposed to say something to someone who should know what is wrong or not?  I finally devised a way to handle such situations in, say, a restaurant where I might order something I have never tried before and it tastes terrible to me.  I don't want the restaurant to feel they need to replace it with something else if it is just that I ordered something I shouldn't have, but I also don't want to eat spoiled or improperly prepared food.  I call the waiter over and state the dilemma and ask him or her to have the chef taste the batch from which it was served to see whether it is right or not.  If it is, I will simply pay for it and eat what I can and learn from the experience not to order that again, but if there is something actually wrong with it, then I would like to have something else and will order after I find out.  A machine or another person could figure that solution out from the previous experiences, or perhaps could figure out an even better solution, such as asking for a small sample, if feasible, of the dish one is unfamiliar with but thinking of ordering.
Some machines or people could of course just say it tastes terrible; others might eat it without complaint, and possibly later become ill because of a bad chemical reaction.  Of course, machines would not necessarily have to eat food, but the point is the same with regard to anything they might be in part programmed or in part learn to avoid or seek, where a conflict or seeming contradiction arises.  They could certainly be programmed to seek or avoid certain chemicals or combinations of chemicals through 'olfactory sniffers', which are available now and which detect and analyze chemicals.
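To make that concrete, here is a minimal sketch in Python of what such hard-wired pursuit and avoidance might look like; the chemical names, thresholds, and readings are invented for illustration, not a claim about any particular sensor hardware.  Two machines built with different tables would simply have different 'tastes' for the same dish.

    # Hypothetical hard-wired preference tables: readings the machine seeks
    # and readings it resists.  'Liking' and 'disliking' here are nothing
    # over and above these pursue/avoid responses.
    SEEK = {"sugar": 0.6}        # pursue when the reading exceeds this level
    AVOID = {"capsaicin": 0.3}   # resist when the reading exceeds this level

    def reaction(readings):
        """Return 'pursue', 'avoid', or 'indifferent' for a dict of sensor readings."""
        for chemical, threshold in AVOID.items():
            if readings.get(chemical, 0.0) > threshold:
                return "avoid"
        for chemical, threshold in SEEK.items():
            if readings.get(chemical, 0.0) > threshold:
                return "pursue"
        return "indifferent"

    print(reaction({"sugar": 0.8}))                     # pursue
    print(reaction({"sugar": 0.8, "capsaicin": 0.5}))   # avoid -- the 'dislike' wins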

Or consider musical tastes among different generations.  We could program machines to detect audio patterns and features of them, preferring some features to others that might have to do with their electrical wave rhythms (tantamount to bio-rhythms) or energy levels, or that have to do with what is popular when they are 13 years old, or that demonstrate recognizable incrementally increased pattern complexity within certain ranges (i.e., genres of music), which is something hard-wired to be pursued, not just in music, but in all kinds of pattern detection software.  And if we don't let the machines 'know' how their musical likes and dislikes are determined, we could enjoy watching older versus newer machines question how the other ones could possibly listen to 'all that crap', let alone enjoy it.  I say "incrementally increased complexity" because music, literature, film, video games, etc. seem to be appreciated and pursued when they bring novelty and refinement to already existing forms, once saturation with the existing forms occurs, is programmed to be avoided, and is experienced as 'boring'.  But with humans, there is also something about continuing to like music in a genre that is learned when one is an adolescent and that possibly has to do with one's energy levels at that and at other ages.  In the movie "Back to the Future", when Michael J. Fox starts playing early rock and roll at his parents' high school dance, the students like it, but when he gets carried away and goes into a 1980's or '90's electric guitar riff, it is too much for everyone.  They have not had a chance, I think, to assimilate the early rock and roll patterns and become saturated/familiar/bored with them and ready to move on to a more complex and/or energetic, animated sound.
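Here is a hedged sketch, with invented numbers, of the 'incrementally increased complexity' idea: a listener enjoys patterns only slightly more complex than what it is already saturated with, and enjoyable exposure slowly raises that baseline -- roughly the Back to the Future effect, where a pattern too far ahead of what has been assimilated is just 'too much'.

    class Listener:
        """'complexity' stands in for some measured feature of a piece, e.g.
        rhythmic or harmonic pattern complexity; the numbers are illustrative."""
        def __init__(self, familiar_complexity):
            self.familiar = familiar_complexity   # what it was saturated with at '13'
            self.step = 0.3                       # how much novelty it can assimilate

        def enjoyment(self, complexity):
            gap = complexity - self.familiar
            if gap <= 0:
                return 0.2        # already saturated: 'boring'
            if gap <= self.step:
                return 1.0        # novel but assimilable: enjoyable
            return 0.1            # too far ahead: 'all that crap'

        def listen(self, complexity):
            score = self.enjoyment(complexity)
            if score >= 0.8:      # enjoyable exposure slowly shifts the baseline
                self.familiar += 0.05 * (complexity - self.familiar)
            return score

    older, younger = Listener(1.0), Listener(2.0)
    print(older.listen(2.6), younger.listen(2.2))   # each prefers music near its own level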

There could be different degrees of 'pain' -- strength of impulse or resistance to avoid certain states.  E.g., your car could now be programmed to report low tire pressure that it can limp along with for a while, even though it is not the optimal or desirable pressure to drive with, or it could say the tire pressure is too low to continue and that it had driven that way once before and had a blowout that damaged the rims in a way it really doesn't want to experience again.  It could really be resistant to continuing.  It could experience or report that resistance as its being too painful to drive, or as being too afraid to drive because of the likely ensuing pain.  People have to learn this same kind of thing.  E.g., physical therapists are often called physical terrorists because they require patients to do exercises, or to do them to degrees, that either hurt and/or scare the patient about re-tearing or breaking whatever tissue was torn or broken in the first place.  It is difficult for therapists to assure the patient that the exercise will not re-damage the tissue involved.  The pressure one feels in doing the exercise feels like it will tear something; even if that pressure itself does not feel painful, it feels as though any additional pressure will.  And if there already is pain experienced, it is even more difficult to believe the therapist's reassurance that nothing will be torn or re-injured.
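A small sketch of graded resistance, with hypothetical pressure thresholds and wording: the lower the reading, the stronger the drive not to continue, and a remembered bad outcome strengthens that drive further.

    def tire_response(psi, had_blowout_before=False):
        """Return (resistance strength, report) for a hypothetical tire-pressure reading."""
        if psi >= 30:
            return (0.0, "pressure fine")
        if psi >= 24:
            return (0.3, "low pressure: can limp along, but would rather not")
        strength = 0.9 if had_blowout_before else 0.7   # remembered blowout = stronger 'fear'
        return (strength, "too low to continue: resisting driving at all")

    for psi in (32, 26, 20):
        resistance, report = tire_response(psi, had_blowout_before=True)
        print(psi, resistance, report)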

Humans, particularly perceptive, sensitive ones, face all kinds of fears and conflicting pressures and ideas of these sorts quite frequently, often because adults and/or society teach them poorly instead of well.  E.g., competitions are held for children (e.g., sports or games) where they are exhorted to win and to try really hard to win, but then not to behave badly or take it hard if they do not.  Yet amid whatever natural desire there is to win, succeed, or excel, and the exhortations to do well and win, the lesson that goes untaught is that it is the participation and the attempt to do one's best, not the result of an artificial competition, that is important.  Some coaches are better than others at explaining that to children in a way they can understand and actually feel.  But each child has to learn it, if possible, or forever taste "the agony of defeat" in any losing situation instead of the joy of having done one's best to try to succeed or win.  In the 1975 World Series between Boston and Cincinnati, game 6 was a titanic extra-inning battle in which each team had various chances to win that were thwarted by excellent play by the other side.  If Cincinnati won the game, they would be champions; if Boston won the game, the series would go to game 7.  Boston won on the barely fair home run by Carlton Fisk in the bottom of the 12th inning.  Prior to game 7, Pete Rose, one of baseball's most competitive players ever, was asked about the disappointment of losing game 6 and whether it could be shaken off to play game 7 at one's best.  Rose enthusiastically and immediately said it was difficult to be disappointed in the result, given that the pride and honor in having played in what might have been one of the best and most exciting, skillful, and competitive games in World Series history far outweighed that.  To have just been a part of that game was tremendously exciting.  And no matter what happened in game 7, he and his teammates would always have that.  That kind of perspective should be taught to children, but too often is not.  Machines could be programmed to know it, or could be taught it early in a way that is permanently learned, or they could be left to be taught the way children are now, and you will end up with some petulant losers and ungracious winners when they compete.

Seemingly Difficult Human Traits for Machines to Have
There are long-known, standard kinds of objections to machines being able to think in the way humans do.  It is thought that they cannot have the following:
  1. Feelings of Pain and Pleasure
  2. Emotions, such as joy and sorrow, frustrations, sense of accomplishment
  3. Ethical Understanding
  4. Sense of Humor
  5. Sense of Beauty
  6. Sense of Reasonable Purpose
  7. Sense of Significance

I wish to address each of these capabilities or characteristics in some detail (though that detail here will be incomplete) to try to show how machines can have them.  The underlying basic idea is that there is a logic to each of them, such that when the factual conditions that logic describes are met, the trait or phenomenon will exist or occur.  What needs to be understood then are the logical relationships among facts that produce the characteristic, and we should be able to analyze and determine the kinds of logic that produce characteristics, as when we analyze some particular joke or humorous situation to explain to someone else what makes it funny, which we often do when someone doesn't get the joke or see why it would be funny.  Explaining a joke doesn't make it funny, however, so this is not about machines laughing or being tickled by knowing explanations, but by being hard-wired or programmed to make the immediate connections in the way people do.  Similarly with trying to figure out what is depressing someone or making them anxiety-ridden when there is no obvious immediate cause for them to be sad or afraid.  Insofar as we can figure out all the kinds of logical relationships among facts that trigger, or simply are, our own responses which we call emotional ones, it seems we could build or program similar logic into machines if we wanted to.

Feelings of Pain and Pleasure
I have already explained this to be the avoidance or reduction of some state or the pursuit or maximization of a physical state in the machine.  Certain circuits could be devoted to detecting those states and causing or initiating the avoidance or pursuit of the state.  And the machine would refer to those states as painful or pleasurable.  And I have pointed out that we do not need to believe that we avoid something because it is painful, when it could easily be that approaching or having the state is what we consider or perceive to be painful, to varying degrees, simply because it is what we resist having.  The awareness of the resistance, or the resistance itself, is the pain.  For example, when a child adamantly resists some food or person or some action such as having a hat put on its head, we say s/he doesn't like it, and the child itself may say later that s/he doesn't want it because s/he doesn't like it.  But isn't the resisting it all there is?  Doesn't it come first?  Isn't the "not liking it" just what we call the resisting it?  Similarly, if a child gravitates toward something, isn't that the same thing as liking it?  Isn't that our "evidence" it likes it?  What makes you think there is anything prior to that urge to have it or increase it, that is "liking it" and is different from simply the fervent or vigorous pursuit of it?
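One way to picture 'the resistance itself is the pain' is a bare feedback loop, sketched here with an invented set point: the only thing the machine reports as painful is the strength of its own drive to change the monitored state, and the only thing it reports as pleasurable is the state it acts to maintain.

    def monitor(value, set_point=0.5, tolerance=0.1):
        """Report on a monitored internal value; the 'feeling' is just the drive."""
        error = value - set_point
        if abs(error) <= tolerance:
            return {"report": "pleasurable", "action": "maintain"}
        drive = min(1.0, abs(error))          # strength of the resistance
        return {"report": "painful (%.1f)" % drive,
                "action": "decrease" if error > 0 else "increase"}

    print(monitor(0.55))   # within range: the state it tries to prolong
    print(monitor(0.95))   # far out of range: strong drive to change it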

Emotions, such as joy and sorrow, frustrations, sense of accomplishment
The logic of these emotions could be based on impending (probabilistic) expectations of achieving certain states of pleasure or avoiding/relieving states of pain.  A feeling of accomplishment could be about achieving a state that was difficult to reach and that took much work.  Frustrations would be based on encountering obstacles, particularly ones that occur improbably (like a car accident that backs up traffic a mile or two before the highway exit you need in order to get to an important meeting, or to get home, or to a restroom) or ones that are produced by people (or thinking machines) for no good reason and that could easily be removed or prevented if those people were smarter, more caring and considerate, or more helpful.
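A toy sketch of that logic, with invented thresholds: the emotional label is read off the relation between how probable the machine expected success to be, what actually happened, and how much effort it took.

    def emotion(expected_probability, achieved, effort):
        """Label an outcome, given expectation (0-1), result, and effort (0-1)."""
        if achieved:
            if effort > 0.7:
                return "sense of accomplishment"   # a hard-won state
            return "joy"
        if expected_probability > 0.8:
            return "frustration"    # blocked by an improbable obstacle
        return "disappointment"

    print(emotion(0.9, achieved=False, effort=0.5))   # frustration (the traffic-jam case)
    print(emotion(0.4, achieved=True, effort=0.9))    # sense of accomplishment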
 
Ethical Understanding
It seems to me that there are some basics with which we could start, such as the machines wanting things (certain states of circuitry, etc.) and perceiving that others can give them but won't, or wanting to avoid other states that others could help prevent but won't.  This would be tantamount to an egoistic view of ethics, but the machine could also be programmed to see that in any similar situation, other machines or people should be treated by people or machines the way it wants to be treated, in terms of being able to achieve the goals they seek.  But then recognition could also be programmed in (through experience or logic) that some goals are harmful in the longer run even if the person or machine wants to achieve them at the time.  And thus a form of consequentialism could also conflict with the notion of equal treatment and fairness based on equal treatment.  There are in fact many kinds of factors relevant to what makes any act right or wrong, and sometimes they conflict, as in privacy or autonomy (freedom) versus security, or fairness versus overall benefit (as when the fairest option does not produce the overall best result for a group that an unfair option would).  Machines could have experiences, just as people do, that lead them to favor one kind of result over another, setting up ethical disputes among different machines, or between machines and people, particularly when the conflicting factors in any given situation are somewhat equally balanced and/or are hidden from view at an unconscious or at least unrecognized or unarticulated level.
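To illustrate how such disputes could arise, here is a deliberately crude sketch in which moral factors are weighed numerically; the factor names, scores, and weights are invented, and nothing this simple is being offered as an ethical theory.  It only shows how two machines whose experiences left them with different weightings reach different verdicts on the same option.

    def verdict(option_scores, weights):
        """Sum weighted factor scores; negative totals count as 'wrong'."""
        total = sum(weights[f] * option_scores.get(f, 0.0) for f in weights)
        return "acceptable" if total >= 0 else "wrong"

    # an option that is beneficial overall but unfair to a few
    option = {"fairness": -0.6, "overall_benefit": 0.8, "autonomy": 0.0}

    machine_a = {"fairness": 1.0, "overall_benefit": 0.5, "autonomy": 0.3}  # fairness-first
    machine_b = {"fairness": 0.4, "overall_benefit": 1.0, "autonomy": 0.3}  # consequence-first

    print(verdict(option, machine_a))   # wrong
    print(verdict(option, machine_b))   # acceptable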

Sense of Humor
There are different ways that things can be funny, but each has a logic of its own, and those different forms of logic could be programmed to be recognized, even if the machine does not recognize 'consciously' why it finds something funny.  For example, there can be associations that capture relationships that are unusual or unexpected, though readily recognized once stated or thought up, as in The Daily Show parodies of movie titles to fit the content of a comment on a news story.  A recent example involved a Florida postal employee who broke the law and flew and landed a gyrocopter on the grounds of the U.S. Capitol Building to deliver letters of protest to Senators and Congressmen.  He was summarily arrested.  Combining this crazy stunt with the movie title Black Hawk Down, the image posted on the television screen was a mock-up of the movie poster with the title changed to "Wack Hawk Down", adding an element of rhyme to the association.  Or a joke can look at something from a different angle, perhaps bringing contradictions into focus that are, again, unusual or unexpected -- though immediately recognized once stated or thought up -- as when Jay Leno mentioned all the hyped advertising about the introduction of new sexy bras by different companies, and then asked 'What? Are American men not paying enough attention to women's breasts?'  That was a witty, unexpected way of pointing out the conflict between being alluring to the men whom women might want to attract and the problem of attracting undesired attention from those they don't, or the conflict of having men they do want to attract become interested in them for shallow reasons for which the women do not want to be valued.  And it points out the shameless exploitation of such conflicts by some businesses.

For a machine to have a sense of humor does not mean it will get every joke or find funny all the ones it does get, any more than for a person to have a sense of humor means he or she will "get" (i.e., relatively immediately understand) every joke or find funny all the ones he or she does understand.  A punchline may be too obviously anticipated to be funny, or it may require knowledge of the facts it is putting into relationship that the machine or person does not have, or it may not be possible to tell whether a joke, say about a stereotype, is meant to mock the stereotype and those who believe it or is meant to mock the people demeaned by the stereotype.  One of the reasons victims of stereotypes can successfully tell jokes about the stereotypes is that they are clearly mocking the thinking behind the stereotype, not mocking the (usually) disadvantaged group characterized by the stereotype, which would include themselves.

Even events can be funny if unexpected in the right way.  When they opened the Benton Harbor, Michigan golf course designed by Jack Nicklaus, Nicklaus invited Arnold Palmer, Tom Watson, and Johnny Miller to play a ceremonial opening round with him for spectators.  On one of the holes there was a mammoth green with undulating hills on it.  As they approached it, Arnie asked Jack if he had designed that green after a night of drinking.  Three of the approach shots had landed within somewhat manageable distances of the cup, but Johnny Miller was over a hundred feet away from it, with sideways hills and valleys.  Johnny, pointing out how difficult this putt was, asked if he could use a wedge instead of a putter.  Jack was not amused, and said the putt was not that difficult.  When Johnny said it was impossible (not just difficult), Jack offered to show him how to do it, and started walking toward him, so that the offer could not be refused.  Johnny moved his ball and Jack put one down where it had been, then took a second just to look, and struck the ball really hard, at an angle not really pointed at the cup.  The ball made a huge curve or two and then straightened out going toward the cup much too fast, went over the cup but caught the back lip in a way that reflected it straight up in the air and then down into the cup, where it stayed.  The crowd went wild with both the thrill and the amusement of seeing it, because it was clearly more luck than skill, and couldn't have come at a more perfect time, and it also added to the rich lore of great Jack Nicklaus shots, some of which he had parodied himself in a television commercial years earlier where he was practicing indoors in a house and decided to hit a drive, opening a sliding glass door about two inches that he was supposedly going to step back and drive the ball through just after the commercial ended.  Tom Watson gave a mock royal bow to Nicklaus upon the ball's resting in the cup.  Any of a number of factors could have made this funny, but all of them occurring together made it spectacularly both amazing and funny.  A machine, just like a human, would have to know, and could know, all the facts that make this funny, and that would include the unlikeliness and unexpectedness of making the putt, particularly at the speed it went, the improbability and unexpectedness of its going straight up in the air and coming down into the cup, the legend of Jack, the mock bragging involved in offering to show how to make such an 'impossible' putt, the fact they were all great players and friends, Jack's acting, after it went in, as if there were nothing to it, and 'there, I showed you the line like I said I would,' etc.

Or one other sort of logical humor is giving mock reasons or evidence for facts or for fabricated facts, where that explanation or logic is clearly fallacious and only meant to be a joke explanation.  One of the funniest of those was in the movie Support Your Local Sheriff, where James Garner, in a hardware store meeting of the mayor and town council, in order to demonstrate his shooting ability and so deserve the job of sheriff, goes over to a bin and pulls out a steel washer and throws it in the air, pulls his gun and shoots at it, and then catches it, holding it up to show them that there is not a mark on it "because I shot right through the hole in the middle of it".  The movie audience convulses with laughter, of course, because his missing it altogether is a far more likely explanation of the unmarked washer.  And the government officials are not impressed or amused by the explanation, so they ask him to do it again, this time with a piece of masking tape over the hole in the washer.  Garner is reluctant "to put another hole in the ceiling", but the group insists.  So he throws the washer in the air again, draws, shoots, and catches the washer -- showing a hole right through the tape.  The movie audience laughs even harder this time because it turns out his earlier explanation probably was actually true, and the whole thing is so improbable that it is absurdly funny to think it could have happened.  A machine could know all those facts, including understanding logical fallacies and probabilities, and could be hard-wired to 'laugh' at any such kinds of absurd logic and improbable events.  That would be no different from how people find that scene funny.

Insofar as we can know the different kinds of logic of facts and comments that people find amusing, we can program that logic into machines so that when the machine comes across or notices such a circumstance, the logic can funnel the output into laughter, or into a physical state the machine considers amusement -- a state it wants to prolong, increase, or create/experience by bringing about such circumstances that give it the impulse to "laugh."  Even the physical state induced by physical tickling could be channeled into laughter, and could be in part programmed to be prolonged, but also not so intense that it wants to be stopped -- in the same way that little kids love and seek, yet resist and squirm away from, being tickled.
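A hedged sketch of that 'funneling' idea, with invented scoring: a circumstance becomes a candidate for laughter when it is improbable or incongruous on first encounter yet resolves quickly into a recognizable pattern, and not when the punchline is already anticipated.

    def amusement(improbability, resolves_quickly, already_anticipated):
        """Score a circumstance for amusement; the scale is illustrative only."""
        if already_anticipated or not resolves_quickly:
            return 0.0                 # too obvious, or just confusing
        return improbability           # the more unexpected, the funnier

    def respond(score, threshold=0.6):
        return "laugh (and try to prolong this state)" if score >= threshold else "no reaction"

    print(respond(amusement(0.9, resolves_quickly=True, already_anticipated=False)))
    print(respond(amusement(0.9, resolves_quickly=True, already_anticipated=True)))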


Sense of Beauty (and Ugliness) and Art
Preferences for certain patterns of sight and sound (or of chemicals, for taste and aromas), particularly in some cases very intricately entwined ones or ones that show us something we had not thought of before, can be programmed to give states that the machine tries to increase or prolong -- but which affect the machine, or register in a different way or part of the machine, than pleasure and pain do or than achievement and sorrow do.  There can even be learned influences or experiences that make some patterns preferable to, or less desirable than, others, by association with something that was pleasant or unpleasant.

Sense of Reasonable Purpose
Emulation and education (including about etiquette, manners, and customs) can begin to instill goals, either, again, to achieve certain kinds of states or to avoid them.  Means to achieve or avoid various states could be learned through pattern recognition of the sort the 18th century British philosopher David Hume pointed out we consider to be cause and effect (even though we don't always get that right), and would become part of the purposeful process.  And insofar as the machine has motor skills, they would be employed to help the machine achieve/avoid those states.  There would be no need for the machine to say or do 'random' things that make no sense or have no purpose for it, unless it were defective in some way and convulsed or had a seizure or an electrical or mechanical malfunction.  Also, infants, toddlers, drunk people, and stupid people often do things that make no sense or that are just reactions to impulses and environmental stimuli that trigger them, often leading to bad results in the case of those who are mobile in dangerous environments.  Some wars declared and waged by otherwise reasonable people are also examples of reactionary responses to circumstances that would far better be responded to in other ways.  In one sense the movements/acts are voluntary and purposeful, but in another they are not, and just seem to be foolish, senseless, almost merely chance behaviors, dependent on what is in the immediate environment to react to.

Sense of Significance
Machines could learn from trial and error, or by education, what produces states they want/try to prolong and what produces states they want/try to minimize.  And insofar as a machine can also recognize patterns it can associate with leading to those ends, it will "find or detect" significance in those patterns.  What is significant to a machine may not be what is significant to a human unless we can program machines to recognize human needs and desires/pursuits as well as their own, or generic, machine needs and desires/pursuits.  There was a television commercial asking "Where will you be when your child finds her passion?" (meaning a worthwhile interest or profession to cultivate for life), but it seems to me we could ask the same about when a machine finds its passion -- through a combination of circumstances that help it discover what leads or will lead to states it works to pursue, prolong, or intensify.  Machines could even develop 'solidarity' with each other, in the way humans do -- through exposure to shared experiences that yield similar reactions to them, such as going to baseball games together or to churches with similar views, or attending a college with the same traditions and atmosphere.
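A small sketch of learned significance, with an invented record format: a pattern becomes 'significant' to the machine once experience shows it reliably precedes states the machine works to reach or to avoid.

    from collections import defaultdict

    outcomes = defaultdict(lambda: {"good": 0, "bad": 0, "neutral": 0})

    def record(pattern, outcome):
        outcomes[pattern][outcome] += 1

    def significance(pattern):
        """High when the pattern strongly predicts sought or avoided states."""
        counts = outcomes[pattern]
        total = sum(counts.values())
        if total == 0:
            return 0.0
        return max(counts["good"], counts["bad"]) / total

    for _ in range(8):
        record("smell of smoke", "bad")
    record("smell of smoke", "neutral")
    print(significance("smell of smoke"))               # high: this pattern matters to it
    print(significance("cloud shaped like a rabbit"))   # 0.0: no learned significance yet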

Different Feelings and Reactions

I see no reason that different stimuli couldn't trigger different responses -- not only pursuing/increasing the stimulus or trying to avoid/decrease it, but also different kinds of responses such as pain, sorrow, joy, hunger, nausea, etc. -- just as different electrical switches in a home trigger different responses, depending on whether the switch is hooked up to a light, a doorbell, a dishwasher, a television, a phone charger, etc.  It is already known that different stimuli affect different parts of the brain, or different, more immediate, spinal nerve paths, and there is no reason that would not be experienced differently by the machine in terms of what it is activated to do, or to pursue or avoid.
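The switch analogy can be put in a few lines: the same sort of triggering event produces different 'felt' responses simply because each input channel is wired to a different handler.  The channel and response names here are invented.

    def pain(intensity):    return "withdraw (pain %.1f)" % intensity
    def hunger(intensity):  return "seek energy source (hunger %.1f)" % intensity
    def joy(intensity):     return "prolong current activity (joy %.1f)" % intensity

    WIRING = {"pressure_overload": pain,      # like the switch wired to a light
              "low_battery": hunger,          # like the switch wired to a doorbell
              "goal_achieved": joy}           # and so on

    def stimulus(channel, intensity):
        handler = WIRING.get(channel)
        return handler(intensity) if handler else "no response wired"

    print(stimulus("low_battery", 0.7))
    print(stimulus("pressure_overload", 0.9))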

Potential Differences Between Thinking Humans and Thinking Machines

1) Machines could be created all alike in ways humans cannot be.  Whether that would be good or not is another question, and whether they would stay the same after having different experiences is still another.  It would be easier to tell whether nature or nurture was the primary influence on different interests and beliefs with machines than with humans, because machines can be created all with the same nature, if we wanted to, and so any differences among them would be due to differences in their environments and experiences and their reactions to them.  We cannot do that with humans, apart from perhaps identical twins.

2) Or we could create machines so that they have the cultural, social, and character traits that are the most constructive, or at least the least destructive.  Of course malicious people or societies might want to program in evil or avaricious traits that suit them, and that is always a problem with any scientific discovery or technological breakthrough.  And people do genuinely disagree about what is best for society or for each other.  But it seems we would want at least to prevent machines from having the worst and most destructive traits, if the underlying bases or causes for destructive traits are not ultimately the same as, or necessary for, constructive progress as well.

3) As pointed out earlier, death might not be a concern of machines, if their memories and programming could be backed up and/or transferred to other machines.  Humans cannot do that, and when a human dies, then, at least here on earth in this physical life, all that is lost.  Those who believe in life after death tend to think one's personality, character, memories, and cares will remain intact, and thus they fear death less in proportion to their confidence in that belief.

4) Miscommunication, misunderstanding, and lack of understanding would likely be less problematic (or non-existent) among machines, since they could transfer their data directly to each other.  I suppose the process could go awry, or an older "file" could accidentally replace a newer one in a wrong-way transfer, but surely there could be steps to avoid that or to recover the mistakenly replaced file.
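One such safeguard could be a simple versioning rule, sketched here with an invented record format: each memory carries a version number (or timestamp), and a transfer never lets an older copy silently replace a newer one.

    def merge(local, incoming):
        """Keep whichever copy of each memory is newer; never lose the newer one."""
        merged = dict(local)
        for key, (version, data) in incoming.items():
            if key not in merged or version > merged[key][0]:
                merged[key] = (version, data)
        return merged

    machine_a = {"sour_milk_episode": (3, "say something politely")}
    machine_b = {"sour_milk_episode": (1, "drink it without complaint")}

    print(merge(machine_a, machine_b))   # the newer lesson survives the transfer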


