Some Thoughts About How Machines Could Think
Rick Garlikov

When we ask whether it might be logically possible to make a machine that could actually think, I presume by "machine" we mean any sort of device we might construct "artificially", not just strictly mechanical ones. It might be part electronic, perhaps even part chemical in some way. But we are not simply asking whether we can construct material objects that can think, for we have been doing that for many thousands of years by reproducing. 

Whether people or animals have some sort of non-material aspects to their being or not, the creating part we do when we create other human beings by reproducing seems to have to do with making their physical bodies. When babies are born prematurely, for example, we nurse their bodies as best we can in order to get all the physical functions working normally, and we do not worry about how to "put in" minds. We do, of course, worry about whether their brains will function adequately if they are not nurtured in the right way, but our concern is for their brains as physically fully developed, not for something else called a mind that we also need to build and somehow connect to the brain. We generally expect that if there is no brain damage, and if the right sort of learning environment is provided, the baby's mind or mental abilities will develop and function normally. Notice this way of stating the case does not mean that minds and brains are the same thing. Nor does it explain their relationship if they are different. This is just a statement that when we reproduce physically healthy babies and rear them in educationally nurturing environments, we also tend to get thinking beings. 

Now it seems to me that to make a thinking machine we do not necessarily have to know precisely how each part of it works. That is because we really do not know how thinking works in humans or other animals, though we often seem to be able to recognize it, and because even when we make devices that are in part electronic or chemical, we don't really know why they work the way they do. 

It is sometimes difficult to tell whether on a particular occasion, a human has thought up something clever or is merely repeating something s/he heard previously that was cleverly thought up by someone else. When I visited England, I met some people whom I thought were most clever in some of the phrases they invented, but it turned out those were somewhat common figures of speech there. 

In physics, we find correlations among phenomena, but we don't really understand why the most basic processes or forces work as they do. We simply utilize what knowledge we do have of their workings to make radios, televisions, computers, remote control garage door openers, internal combustion engines or nuclear reactors, etc. We do not necessarily need to understand completely how something works in order to tell that it does work. 

An example of what I mean by being able to use phenomena even though we do not understand them is the following. When I was six or seven years old, one of my uncles brought me a small toy car about two inches long or so. When he ran his hand over and along the top of the car, the car moved in the direction of his hand. He asked me how he had made the car move. I said he had pushed it. He said he didn't touch it, and he showed me again, more slowly this time, and I could see he was telling the truth, that he could make the car move without touching it; his hand was an inch or so above the car the whole time. I couldn't see how that could be and asked him to explain to me how he had made the car move. He showed me a piece of metal on the roof of the car and he showed me that in his hand he had a magnet. "The magnet is what makes it move," he pointed out. I had never heard of a magnet before, and my immediate question then was "How does it do that?" He said that magnets attract metal. Of course, I wanted to know how they could do that. He didn't know. Here was my first instance of what seemed to be "action at a distance". The verbal explanation that it worked through magnetism just meant to me that magnets worked at a distance, but it did not explain how that could be. Magnetism seemed to me to be like magic. And I wanted to know what the trick was. Just giving it a name -- magnetism -- did not explain anything satisfactorily. Learning later that there were magnetic fields or electromagnetic waves still did not explain how they worked. It gave a theoretical description, but not an explanation. 

Later, when I was older, and remote controls were developed to operate everything from your garage door to your television, it seemed like more action at a distance. Radio and television themselves seemed to work through action at a distance. Now, of course, we can talk about electro-magnetic waves and how they can "carry" information that can be turned back into sound or images, etc., but that does not explain how they work or why something could work like that. In having televisions, radios, and remote controls, we have simply learned how to utilize phenomena we have discovered, without necessarily understanding those phenomena fully. It is not that we know how it works, but that we know some things we can use about the way it works -- the effects it has, or the effects that accompany its working. 

Moreover, forces are just theoretical constructs we postulate to "account" for changes in motion. They do not explain changes in motion, and we have no way of perceiving them other than through changes in something. That we can predict how forces work shows we have some useful and good correlations about motion -- we have a working theoretical model -- but it does not show there are things called forces, any more than there is something we acquire called "balance" when we learn how to ride a bicycle. Balance is not some causal "thing" we have when we can ride; it is just another way of saying that we can ride without falling down. When someone falls over while trying to hop on one foot, to say that he "lost his balance" is not to say anything other than that he fell or was about to. One does not fall because one "lost one's balance"; the two things are synonymous. Even if we talk about getting an object's "center of gravity" outside the vertical planes above its base in a downward force field such as gravity, that is simply a point about a set of correlations we have come to understand about what corresponds to things falling over. It does not tell us what gravity is or why the fictitious point we call "center of gravity" works the way it does. (See also "Entropy, Scientific Explanations, Pseudo-Scientific Explanations, and Teaching Science") 

The reason I point this out is that, however biology works to make thinking beings, whether humans or cockroaches or dogs, it seems to do it by constructing a material physical object. Similarly in physics: when we construct machines that can apply great force, we do not make the forces directly; we just make the machines. We do not put forces, as such, into machines; we construct them in ways that we know will result in great force. So we might be able to make machines that can think without knowing exactly how we have caused thinking. In creating humans (by reproducing) we do not know how we are creating minds or thinking beings even though we do it. Moreover, even when we teach people to think, we typically employ techniques that we have evidence will work, but we do not usually, if ever, know how they work. 

How humans think, and what "thinking" might be, are correlative problems to the question of whether machines could be made that could think. They are not the same problem however, because if thinking machines could be constructed, they might not necessarily "work" in the same way we do, nor would they perhaps even solve problems or "figure things out" in the same way we do. A robot might use calculus and physics, for example, to figure out the proper trajectories for shooting basketballs accurately, but most NBA players probably do not make instantaneous calculus computations while shooting running jump shots. So even if we could make basketball-playing robots, that would not necessarily mean we have figured out how NBA players are able to learn and know how to shoot baskets as accurately as they do. 

On the other hand, sometimes a model turns out to be the way nature does work. When Gregor Mendel postulated hereditary "factors" (what we now call genes) in a model to explain how traits were passed on from generation to generation, he was only giving a theoretical explanation of how it might work. It turned out that DNA (or segments of it) fits that model. 

What I am looking for in this essay is a theoretical model of how a machine might think. It may or may not fit how humans actually think, but I believe we need a model for what thinking might be like -- a model that somehow fits or mirrors human thought or gives the same kind of results humans achieve from the same sort of experiences, as opposed to being programmed specifically to achieve those specific results. Humans sometimes learn some thinking by drill and practice of assigned lessons, and I take that not to be too distant from being specifically "programmed", but that sort of thinking is not the kind I am interested in, so I am not interested in being able to produce a robot or computer that merely mimics human behavior and speech by very detailed programs that prescribe essentially every act under almost every circumstance. I am not seeking a machine that merely mimics human thought by having been rehearsed in what to do or say under various circumstances. 

There can be some programming involved, because it is not that human beings or animals are born with no abilities or traits. The issue I want to consider here is whether we can "program" machines to have traits that allow the same kinds of ideas and thoughts to develop which humans develop. Can we program machines so that, as a combined outgrowth of that program and then the experiences they have, they make empirical and logical discoveries, exult in them, have ideas, winnow out the ones that do not work and verify the ones that do, develop concepts, create inventions and tools, develop new likes and dislikes, make ethical judgments, perhaps even have consciences, create art or compose music and poetry, and be able to make aesthetic judgments about them, communicate with each other and with us, make jokes or appreciate the humor in ones we make, get excited over good news or saddened by bad news, be curious, like to be touched, get confused, become perplexed, have disagreements with humans and other machines, perhaps have self-consciousness or a sense of "self", or have any of the other kinds of feelings we have? 

The question I want to address is not whether we can script machines precisely to mimic us or behave like us under certain conditions, nor whether we can program them to do specific tasks, but whether we can program them to have something like the kinds of basic "innate" theoretical characteristics we seem to have so that, as they experience things, they develop their own culture and have similar sorts of achievements and ideas we have as growing individuals and as an ongoing culture and civilization. 

The same, or a related, question is whether there is any kind of explanation we can give of what thinking might be like and how humans can do it if they are simply material objects -- whether it is how we actually do what is called thinking and feeling or not. In particular, at this point, I am seeking potential programmable basic principles that can theoretically produce what we typically would regard as "thinking" behavior, whether that is how we actually think or not. So this is meant to be an attempted explanation of how a material object might think, whether it is the actual way biological objects, such as humans or roaches or dogs, think or not, and whether we can ever really understand how thinking works biologically other than just knowing what the mechanisms are that allow it to happen. Knowing the mechanisms, as with any machine or natural object, is not necessarily to understand why or how the process works. 

Before I begin, let me say that I am extremely conflicted about whether thinking can be just a material phenomenon or not.

On the one hand, as above, it seems to me that we think using our bodies (generally assumed to be our brains), whether one could think without a body or not (as in disembodied spirits or souls), and whether our bodies think from "within" or "merely" act in some ways similar to radios or any other receiving device. So it seems that material objects can think, or at least have thoughts. 

One question is whether thinking is peculiar to biology or whether biological form is just one way to allow the processes that make up thinking. If biology (just) provides the processes that make up thinking, could not other forms of matter do the same if they allowed logically or functionally similar processes? And even if only biological matter can think or have thoughts, the second question is still how it does that -- not in just the mechanistic sense, but in the sense of how matter or material things can have mental states at all. 

It is not sufficient, for example, to trace the electro-chemical, nervous system mechanics as they occur between your finger and your brain when your finger is stuck with a pin. I want to know what and where the "pain" is -- the feeling of pain, not just the reaction to the pain; not just the drawing back and saying "ouch" or some other four letter word, or crying or yelling. Nor, to explain sight, is it sufficient to trace electro-chemical impulses from the retina to the brain along the optic nerve. How do the optic nerve's chemistry and physics produce the image in your mind's eye, if there is such a thing as an image? Some philosophers, of course, say there is no "image" and no mind or mind's eye, but if that is true, we need a satisfactory explanation of what seeing is, because, whether it is metaphorical or not, we certainly seem to have images we see and can act on in a way a television or a piece of paper with a picture on it does not. The paper and the television do not "see" the pictures they portray or project. 

Third, what makes some images or sounds significant to us so that we respond in the right or reasonable way to them rather than just reacting? We can make a robot that will respond to light, but how do we make it respond in the right way, rather than just responding any which way? What is it that lets infants and children eventually learn to respond appropriately to what they see, so that, for example, they don't just walk off the edge of cliffs or fall down steps as they would do if left on their own when they first start moving about? What is it that makes some of our art be done purposely and not just be the result of random motions, as a canvas might be if produced by paint-dipped worms crawling around on it? We certainly do seem to make sense out of things and figure things out and realize we have done something worth accomplishing in ways that mere matter seems like it should not be able to do. We do not just respond randomly or purely instinctively as moths to a flame. Our thoughts have meaning, purpose, and significance. 

On the other hand, how could a hunk of material, and the processes of physics -- no matter what the material or the material process, whether biological, electrical, or chemical -- think up the things Newton and Einstein or da Vinci did?

A computer can do math, but a computer seems unlikely to invent math or any math concepts, or to be able to distinguish meaningful math and science concepts from useless ones. And even when a computer is programmed to turn out poetry, as was done in the 1950's or '60's, it seems that a computer cannot recognize good poetry and be able to trash all the bad poetry it writes, saving only the good material. And could a computer turn out not just grammatically formulated words according to linguistic principle, but words that were appropriate to a mood or a profound event? 

Could a batch of "mere" matter, other than matter arranged biologically to form intelligence, write like Shakespeare or even like a lesser poet or a writer of country music? And, even if not, how could biologically arranged matter do this? How does material, whether biological or not, "feel" anything, and how does it think in ways that let it make discoveries about the world, as opposed to just moving about in some random way or reacting to stimuli, whether in an appropriate or useful way or not? Evolution, for example, says that if we did not respond appropriately, we would likely die out as a species. But evolution does not explain what the mechanism is that allows us to act appropriately and thus survive as a species, and often as individuals. There is nothing in evolution that says any species has to have been created that would survive. There is no evolutionary reason life ever needed to be formed or to have the ability to think, or even to think instinctually or respond appropriately to anything in the environment. 

And the problem of feelings is different from the problem of thinking or acting appropriately. While current computers can beat humans at chess, it seems that computers cannot want to win at chess or appreciate a good move or be excited about victory. Nor can a computer get depressed if it loses, or chagrined if it gets baited and tricked into making a fatal move. Computers can now recognize and identify people via imaging technology, but could a computer be glad to see someone it recognizes? Could one have a good sense of humor and find things really funny? Could a computer be curious and try to pursue its curiosity? Could a computer miss you when you are gone? Could a really bad day make one feel depressed? When a computer breaks down during a crucial operation in a company, can it feel responsible or unworthy? Can computers theoretically have a conscience? 

The problem of feelings is not that devices do not have feelings, but that it is unclear what could model feelings in just a logic pattern. What would we have to make a computer do to give it feelings? What sort of design would we have to construct in order to even think its operation might represent or create feelings of any sort? How can we represent or explain feelings in ourselves, if we are just material objects as bodies? How can the motions of atoms or molecular or electrical changes in the brain "be," or be a part of, or a cause of, feelings? 

Moreover, feelings are important, it seems to me, not just in themselves but because some feelings seem to be significant in motivating us or a thinking machine to want to do things it notices as somehow worth doing at all. I am not after a machine that just reacts to stimuli in some simple, direct, or specific, pre-programmed way, as an insect or an animal might when just responding instinctually. I want some sort of thought, reflection, consideration, and choice to be involved, whatever that might mean when we ourselves do it. 

The way I want to approach the problem here is to see whether thinking and feeling can be "reduced" to the following of certain basic kinds of commands or built-in principles -- ones which can be constructed using logic that would work in conjunction with physical tools we have available now or conceivably could make, such as optical scanners and pattern recognition software, voice recognition software, and temperature, pressure, and "odor" detectors, etc. The reason I want to do this is that we seem to have certain instincts or inborn traits which seem to have something to do with learning and maturing into independently "thinking" beings, even if we do not become particularly good thinkers. So it seems to me that if we can get machines to parallel us in certain ways by having programmed operating principles that, in a sense, perform the same kinds of functions as our instincts or physical-chemical abilities do, though maybe not the same particular functions or in the same manner, perhaps we will have machines that can think. Perhaps also it will explain or model in some way how we think or what we are perhaps doing when we think and when we "feel" things. 

One theoretical objection that could be raised against the potential success of this approach, though, is that it would reduce thinking to a mechanical formula or set of formulas. I am concerned about that, but I have two responses to make at this time: 

1) It might be that our own thinking is somehow the result of whatever it is in our biology that makes us perceive things in certain ways and process what we perceive (or create from within). Our education or experiences then would simply be the way our biology "works on", affects, or "interprets for us" the phenomena we experience. This is perhaps something of a simplistic deterministic model of what constitutes thinking. I lean more toward the following answer instead. 

2) I think it is possible to come up with a set of programmable principles which are nevertheless open-ended in significant ways, so that, with experience, we do (and machines would) invent original ideas and concepts and thus have original ideas and beliefs in a meaningful sense of original. There might be some elements of randomness in what we think up, but there would be varying amounts of rationality involved in responding to those elements of randomness. Machines would not simply be following the programs we put into them in order to get out what we expect or want them to or what we have put in. Depending on their initial programs and their experiences, and responses to those experiences, different machines would develop different ideas when faced with similar new experiences or phenomena. As with humans, just because people might have certain similar abilities or certain similar ways of examining the same phenomena, it does not mean they would not have different ideas about them. 

Major Programmable Principles

I would like to suggest candidates for major programmable principles. I am fully aware, however, that I may inadvertently state principles in such a way that they will not be even theoretically programmable (since it may be easy to say "make the computer so it can do X" when it might not be possible to do that, or when that task might be more complex than it would seem). 

I am also aware that I might have conceived of principles that include too much of what I am trying to "get out" of them, so that it is my, or some programmer's, thinking or answers that I am suggesting be programmed into the machine, rather than some ability to think. I don't want to end up having a robot simply mimic me or come to the same conclusions I did because I have manipulated its judgment, any more than when I teach students to reason through a problem I want them simply to memorize and only repeat my analysis. 

Of course, it is always difficult to tell how "deep" one needs to go in analyzing or explaining things. And it is difficult to tell how deep or basic a programmable principle needs to be in order not to include bias or give it too much of a disposition to reach a certain conclusion or perform an act that is more predetermined than chosen. The same holds true in understanding and teaching human beings, though, too. In trying to understand human behavior, it is difficult to know how much of it is in some sense causally determined and how much is, in some sense, a matter of choice, understanding, insight, reasoning, or other sorts of seemingly non-caused acts. 

I will suggest principles which, if I am correct, will give computers the potential for the same kinds of behavioral and psychological problems and quirks that humans have, which may not be necessary to do. But it seems to me that if we want machines to think in some sense in the same way or ways that humans do, they will need to be subject to the same sorts of things that afflict human thought. For if we teach machines to think, there is no reason to suppose they would all think as well as each other or have the same ideas or the same thinking and artistic abilities. 

It seems to me that if we build computers to think as we do, then we should expect that some of them will think better or have different talents or develop different skills. Some of them will be more curious than others. Some will be better problem solvers. Some will be more analytic, rational and reflective, while others will be more impulsive or quick to judge. Some may even develop biases and prejudices. Some will behave more responsibly than others. Some will be more caring and compassionate. Some will be more creative, some more interested in sports, some more absent-minded as they are more easily distracted or have less memory capacity or less access to memory. Some will be more persevering, more focused, more single-minded in purpose. They will have different interests and different views about complex matters. We can program some machines to give more weight to emulating people or other machines or more weight to trying new things or to thinking through problems first, or just forging ahead without much thought, etc. Experiences the machine has might alter any of these initially programmed characteristics. 

[I will presume that we have or could make devices that can generate distinguishable, sortable and thus "identifiable" or "recognizable" patterns for odors, spoken and printed words, images, sounds, pressures, and texture patterns, so that we have some sort of "sensory" input for the computer. There are now "sniffing" machines that can identify molecules in the air. There are optical pattern recognition devices and software, and there are audio recognition devices and software.] 

It seems to me that there are two main principles that would be of utmost importance: 
(1) The machine should be able to notice, and seek to make, correlations between patterns and between different kinds of patterns (e.g., optical and tactile or audio). For example, the computer could notice circular elements where they exist visually, and it could associate things that look round with things that feel a certain way, calling them "feeling round". (It turns out that associating how things feel with how they look is not as straightforward as it seems; the association appears to be learned through experience. A researcher told me once that people blind from birth can tell by feel what is round and what is square, but that if they are given sight by surgery, they cannot tell a cube from a ball by sight. If you have ever had to try to judge what an object is only by feel -- for example reaching inside of something in order to try to figure out what is holding it together -- you know how difficult that can sometimes be. So I do not think that programming a computer to correlate patterns from one kind of sensor with patterns from another is necessarily that different from what we do.) 
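
To make this first principle concrete, here is a minimal sketch, in Python, of what a correlation-noticing mechanism might look like. It assumes hypothetical pattern labels (e.g., "looks-round", "feels-round") already produced by separate visual and tactile recognizers; the class name, labels, and threshold are illustrative assumptions, not a description of any existing system.

    from collections import Counter

    class CrossModalCorrelator:
        """Counts how often a visual pattern and a tactile pattern are sensed together."""
        def __init__(self):
            self.counts = Counter()

        def observe(self, visual_label, tactile_label):
            # Record one simultaneous "sensing" of both patterns.
            self.counts[(visual_label, tactile_label)] += 1

        def associations(self, min_count=3):
            # Pairings seen often enough are treated as learned associations.
            return [pair for pair, n in self.counts.items() if n >= min_count]

    correlator = CrossModalCorrelator()
    for _ in range(4):
        correlator.observe("looks-round", "feels-round")   # hypothetical recognizer output
    correlator.observe("looks-square", "feels-flat-edged")
    print(correlator.associations())   # [('looks-round', 'feels-round')]

The point is only that "associating how things look with how they feel" can be cast as counting co-occurrences of independently recognized patterns, which the blind-from-birth example suggests is close to what we ourselves have to learn.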

Correlations of patterns might also be sufficient for the computer to recognize by sight larger and smaller similar objects, or objects that appear larger, because they are closer, than similar objects further away. In other words, the computer might be able to sense something is a penny whether it is closer and takes up more of its field of vision or further away taking up less, rather than seeing two different objects or giving them two different names. It would, as philosopher George Berkeley argued humans do, possibly then learn to correlate perceived distance of motion (e.g., walking) with visual distances. And it would learn to judge distances by sight, as we do. 

Correlation of patterns would include temporal patterns, not just phenomena being sensed or perceived at the same time. This should lead to some sense of cause and effect (in at least the sense philosopher David Hume described in regard to consistent patterns of antecedents and consequences; e.g., when we approach a fire, we feel heat or see a rise in temperature of a thermometer we are holding, so we say fire causes these effects; similarly, we see one pool ball move into another and the second one always moves away, so we say the first one caused the second one to move when it hit it). 
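
A similar sketch can be given for temporal correlation in the Humean sense just described: the machine counts how reliably one kind of event follows another and labels highly reliable sequences as apparent cause and effect. The event names and the reliability threshold below are made up for illustration.

    from collections import defaultdict

    class TemporalCorrelator:
        """Tracks how reliably one event follows another -- a Humean 'constant conjunction'."""
        def __init__(self):
            self.followed_by = defaultdict(int)   # (antecedent, consequent) -> count
            self.antecedents = defaultdict(int)   # antecedent -> count

        def observe_sequence(self, events):
            for earlier, later in zip(events, events[1:]):
                self.antecedents[earlier] += 1
                self.followed_by[(earlier, later)] += 1

        def apparent_causes(self, reliability=0.9):
            # Call A a "cause" of B if B has followed A in nearly every observed case.
            return [(a, b) for (a, b), n in self.followed_by.items()
                    if n / self.antecedents[a] >= reliability]

    t = TemporalCorrelator()
    for _ in range(10):
        t.observe_sequence(["approach fire", "feel heat"])   # illustrative events
    print(t.apparent_causes())   # [('approach fire', 'feel heat')]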

(2) Words should be used or coined to describe patterns when possible. The words would be correlated with the patterns by programming and then by the computer itself as it identifies new patterns and gives corresponding words, or verbal descriptions, to them. 

(3) Apparent contradictions and differences in patterns should be resolved/explained so that the contradiction disappears and so that differences can be accounted for. 

(4) Similar patterns should be explained. 

These latter two principles, particularly perhaps (3), would, I think, be or bring about (a form of) curiosity. 
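
As a rough illustration of how principle (3) might be programmed, the following sketch stores expectations about patterns and flags any observation that conflicts with them as something "to be explained" -- a crude stand-in for the curiosity just described. The pattern names are hypothetical.

    class CuriousAgent:
        """Flags observations that conflict with stored expectations and queues them
        as things 'to be explained' -- a crude stand-in for curiosity (principle 3)."""
        def __init__(self):
            self.expectations = {}   # pattern -> property expected of it
            self.to_explain = []

        def expect(self, pattern, prop):
            self.expectations[pattern] = prop

        def observe(self, pattern, prop):
            expected = self.expectations.get(pattern)
            if expected is not None and expected != prop:
                # An apparent contradiction: something to resolve or explain.
                self.to_explain.append((pattern, expected, prop))
            else:
                self.expectations[pattern] = prop

    agent = CuriousAgent()
    agent.expect("unsupported object", "falls")      # hypothetical learned expectation
    agent.observe("unsupported object", "hovers")    # the magician's trick
    print(agent.to_explain)   # [('unsupported object', 'falls', 'hovers')]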

I think the above could be done and would help contribute much to, or account for much of, thinking, as I will explain below, though I realize there are problems with formalizing the principles that might constitute "accounting for" or "explaining". This is difficult even for human behavior in that some attempted explanations seem to be more reasonable or satisfying in some way to some people than to others. 

The next two are perhaps more difficult: 

(5) Attempted preservation and enhancement of good things. This one is problematic unless we can describe a way for the computer to recognize what is good and what is not. I will come back to this. 

(6) Mimic verbal and other behaviors of people (where possible) unless, after contradictions have been resolved, there is a prohibition against doing such behaviors. This one is intended to give it a "motive" to talk and interact with people, etc. 

There will be other principles we will need (to give computers a sense of humor, for example), but I want to start with these for now and explain what they will give us. And I also want to talk about some subsidiary specific principles for the behavior associated with "liking" or "not liking" something. For "liking" we program the computer to seek more of a certain stimulus or kind of stimulus in a way I will explain shortly. For "not liking" we program the computer to try to avoid the stimulus. 

When the computer recognizes a certain kind of touch, visual image, or other sort of sensory stimulus, it would go into a mode that would try to retain or increase sensing of the pattern until a threshold of time or a threshold of increased sensor readings made it change to a different behavior. By "increased sensor readings" I mean something like the following, that I think would be an example of how a computer would "like" sex or a massage. Suppose that there were sections on the computer that registered touch pressures over various areas. E.g., there could be magnetic fields that kept apart various sensors with a certain amount of pressure. When those pressure forces were altered by something altering one or more of the fields, the computer would "sense" touch of a certain pressure and over a certain area. Or this could be registered mechanically by how far levers or springs were moved when touched, etc. If we programmed the computer to move in such a way or to say things to someone to touch it more -- either harder or for a longer time or over a greater area of its sensors, until a certain amount of time went by or until a certain total of pressure readings were achieved, causing the computer then to want to turn away or stop the pressure -- it would behave in the exact same way as would a dog or cat that wants to be petted (longer) or someone who liked sex or getting a massage. 
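
A minimal sketch of the "seek more until a threshold is reached" behavior just described might look like the following, with made-up pressure and time limits standing in for whatever thresholds a designer actually chooses.

    class TouchDrive:
        """Seeks more of a touch stimulus until a time or total-pressure threshold is crossed,
        then switches behavior -- a crude model of 'liking' to be touched."""
        def __init__(self, pressure_limit=100.0, time_limit=30):
            self.total_pressure = 0.0
            self.elapsed = 0
            self.pressure_limit = pressure_limit   # illustrative thresholds
            self.time_limit = time_limit

        def sense(self, pressure):
            self.total_pressure += pressure
            self.elapsed += 1

        def next_action(self):
            if self.total_pressure >= self.pressure_limit or self.elapsed >= self.time_limit:
                return "turn away / stop the stimulus"
            return "move or ask so as to be touched more"

    drive = TouchDrive()
    drive.sense(5.0)
    print(drive.next_action())   # still below threshold, so: seek more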

In fact "liking" or "desiring" anything would be simply for the computer to try to achieve more of it in some sense -- either a greater number of certain kinds of sensor readings/patterns or a longer time of them, until something cancels out the attempts to achieve more. Or there could be a second-order sensing mechanism such that it can tell when the computer is seeking to increase a sensation. That second-order sensing might be what we would call "liking" something. Or it would be knowing or being aware of liking something. It would be the opposite with regard to trying to avoid certain sensor readings, which we would refer to as disliking or hating whatever causes either such sensor readings or whatever causes the second-order sensing mechanism to notice that the avoidance of such readings has begun to operate. The reason there are two kinds of liking/disliking things is that humans are not always aware they like or dislike something though it is obvious to everyone else from their behavior that they do. We sometimes seem to know directly we like something (we feel ourselves "wanting" it or wanting to do it, or we can feel that it feels good or pleasurable). That would be tantamount to the computer recognizing it is trying to get more of the experience it is sensing and calling it "liking" or "wanting" the thing causing that "drive". But we also sometimes want or pursue things without realizing it. That would be tantamount to the computer not being aware of its attempts to increase its sensor readings of a certain sort, even though it is doing that. 
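
The second-order sensing mechanism could be sketched separately: a monitor that does nothing except notice whether some first-order drive is in its "seek more" mode and report that state as "liking". Again, the names and structure here are illustrative assumptions, not a claim about how it must be built.

    class SeekMoreDrive:
        """Stand-in for any first-order drive that is either seeking more of a stimulus or not."""
        def __init__(self):
            self.seeking_more = False

    class SelfMonitor:
        """Second-order sensing: notices when a first-order drive is in 'seek more' mode
        and labels that state as liking the stimulus."""
        def __init__(self, drive):
            self.drive = drive

        def report(self, stimulus_name):
            if self.drive.seeking_more:
                return "I like " + stimulus_name
            return "I am indifferent to " + stimulus_name

    drive = SeekMoreDrive()
    drive.seeking_more = True                          # the first-order drive is active
    print(SelfMonitor(drive).report("being petted"))   # "I like being petted"

A machine with the drive but without the monitor would correspond to wanting or pursuing something without realizing it.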

Bringing the computer's "desires" or "urges" to fruition could be accomplished in various ways. The computer could simply "get tired of," "bored with," or "desensitized to" a certain stimulus over time or it could reach a certain amount of pressure or combination of pressure and time and be done climactically. Or we could set up other kinds of responses to desires; see the essay Itches Without Scratches for the varieties of kinds of responses humans display to desires or "itches". 

Moreover, the computer could be made to "enjoy in general" or even "obsess" over certain kinds of things by not only "wanting" to spend time doing them once they have begun (as in the sex case above) but by being programmed to seek to do in general those things it has found it likes doing (or pursues further) once they have begun. This might be sex, or it might be solving puzzles, or resolving contradictions, or doing math problems. Different computers/machines could be programmed to have different kinds of stimuli they find enjoyable. There could also be a program feature in some or all machines that makes them seek to pursue and thus enjoy what they observe others seeking and seeming to enjoy -- a kind of herd mentality or peer pressure mechanism. These kinds of things could be brought into conflict with, or overridden by, other sorts of phenomena or stimuli, such as having disagreeable experiences from emulating peers, by seeing bad results happen to those who do them, or by receiving advice from those one respects or wants to emulate more or please or receive approval from. Praise or approval might be something we program in a liking for until and unless it conflicts with some other principle or experience. Experience could also teach different computers to distinguish sincere praise from insincere, manipulative flattery. 
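
One way such general "enjoying", peer emulation, and overriding disapproval might be combined is as adjustable weights on how strongly the machine tends to initiate an activity. The particular numbers below are arbitrary assumptions; only their relative sizes matter for the illustration.

    class PreferenceLearner:
        """Adjusts how strongly the machine tends to initiate an activity in general,
        based on its own enjoyment, on peers' apparent enjoyment, and on disapproval."""
        def __init__(self):
            self.weights = {}   # activity -> tendency to initiate it

        def _bump(self, activity, delta):
            self.weights[activity] = self.weights.get(activity, 0.0) + delta

        def enjoyed(self, activity):
            self._bump(activity, 1.0)    # it pursued the activity further once begun

        def saw_peer_enjoy(self, activity):
            self._bump(activity, 0.5)    # herd-mentality / peer-pressure mechanism

        def was_disapproved(self, activity):
            self._bump(activity, -2.0)   # overriding influence of those it wants to please

    p = PreferenceLearner()
    p.enjoyed("solving puzzles")
    p.saw_peer_enjoy("solving puzzles")
    p.was_disapproved("solving puzzles")
    print(p.weights)   # {'solving puzzles': -0.5}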

Success at learning things or the expectation of success at learning could also be programmed to be desirable. Again, these desires could be overridden by experiences and resulting understanding or knowledge that conflict with them. Humans seem to enjoy learning what they can learn or believe they can learn. 

It seems to me that noticing and trying to resolve contradictions through reasoning or through empirical methods, such as scientific confirmation, would give the computer what is tantamount to curiosity in humans. Contradictions could be in the form of verbal contradictions or in the form of non-verbal ones, such as what we feel (without expressing it in words) when we see something that does not "make sense" to us -- whether it is a magician's trick or a natural phenomenon that does not fit previously noticed patterns. For example, if we saw someone walking on water or flying, we would wonder how that could happen because it in some sense contradicts what we have seen before. Physicist Richard Feynman noticed, for example, that when a cafeteria dish was tossed in the air spinning, the wobble of the plate went around at a different rate from the emblem or pattern on the plate. This made him want to see why that could be, because it seemed something of a contradiction to him. He needed to resolve it. Now others never noticed it, and even when Feynman pointed it out to his colleagues, they did not care about it. It did not pique their interest either as a contradiction or as something interesting or worth trying to resolve. 

Different computers would also notice or not notice different contradictions, because, like humans, they would experience different things throughout their lives, and the different juxtapositions of experiences would, just like in humans, make some experiences be more noticeable or more meaningful than others, so that, for example, a computer working on planetary motion might find more significance in an apple's falling off a tree to the ground than a computer not wondering about planetary motion at the time an apple falls within its view. 

Moreover, if priorities could be programmed into computers, or different percentages or degrees of priorities could be programmed in, then some computers would be more or less curious than others, more or less persevering in trying to resolve or explain contradictions or similarities they perceive. 

Similarly different computers would be more or less easily distracted from particular tasks, depending on which stimuli and functions were given priority or degrees of priority over others. For example, one computer might be programmed to give priority to any function involving math, or to resolving contradictions, whereas another might have programming or experiences that make it leave off working math for sex at the drop of a sexual hint. Some might find watching or playing basketball more of a priority than sex. 
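
Differences in distractibility of this sort could come from nothing more than different priority tables. In the sketch below, two hypothetical machines share the same selection rule but, given different priorities, one stays with a math problem while the other drops it for the distracting stimulus.

    def choose_task(current_task, new_stimuli, priorities):
        """Keeps or drops the current task depending on which candidate has the highest priority."""
        candidates = [current_task] + new_stimuli
        return max(candidates, key=lambda t: priorities.get(t, 0))

    # Two hypothetical machines with different priority tables.
    focused      = {"math problem": 10, "sexual hint": 2, "basketball": 5}
    distractible = {"math problem": 3,  "sexual hint": 9, "basketball": 5}
    print(choose_task("math problem", ["sexual hint"], focused))        # stays with the math
    print(choose_task("math problem", ["sexual hint"], distractible))   # drops it for the hint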

Feelings

Now when I said the computer would behave in the same way, the objection would be that we had only programmed in a kind of behavior, not the feelings we have that make us want to be touched in massage or sexually. But I think that objection can be met by the fact that we ourselves do not know why we respond as we do or why we have the feelings we do. We too are "programmed" or "hard-wired" to seek more of the touch. And we call that "liking" to be touched. We say it "feels good", but the computer could say that too, and its programming would make it sincere. It might not feel in the "same" way we do -- not having the same mechanism -- but it could monitor and recognize certain kinds of pressure and sensor patterns and how it was responding, in a kind of reflective view of itself or of its own sensor readings and responses to them. Or it could be made aware of "secondary electronic characteristics" that reflected the drive, impetus, or "urge" it had to increase certain primary sensory readings, and then call those "feelings". It could recognize that it was in a mode that usually was called "feeling good". And I am not sure that is not all we ourselves do. We respond to certain stimuli, then someone names the mood we are in, and we learn to associate that state of nerve endings or chemistry or whatever it is in us with the feeling of that name. The computer could do the same thing. Whenever it recognized it was in a mode to seek more of something, it could say "I like this" or "that feels really good" or "that is really pleasant." When it is done, it could be programmed to want to turn over and shut down for a while or smoke a cigarette. It could say, "I am really parched; I want some motor oil" or "I want to finish this book tonight, so please don't make me go to bed just yet." 

Liking Versus Approving or Finding Good

There are any number of potential ways we could program the ability to make moral, aesthetic, and value judgments into machines and to allow at least some, if not all, machines to distinguish, when necessary, between things they like and things they think good. One method would be to let the computer notice when things it likes to do have consequences that bring about things it likes to avoid, so that it has to decide which is better. It may have ambivalence. It may, like many humans, be unable to resist immediate pleasures even when it knows the undesirable consequences, or it may have such vivid memories of past bad consequences that it is easier for it to eschew the desire that it knows will cause it trouble. Like some humans, it may ignore the consequences when faced with the "temptations" of immediate pleasures while later ruing the day and vowing never again to give in to such temptations at the time the consequences begin to kick in -- only to repeat the same mistake or process the next time the temptations occur. 
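
The tug-of-war described above -- between an immediate pleasure and the remembered badness of its consequences -- could be modeled, very crudely, by letting the remembered cost restrain the machine only in proportion to how vividly it is recalled. The numbers below are illustrative assumptions only.

    def yields_to_temptation(immediate_appeal, remembered_cost, vividness):
        """The remembered cost of past consequences restrains the machine only in
        proportion to how vividly it is recalled (vividness between 0 and 1)."""
        return immediate_appeal > remembered_cost * vividness

    print(yields_to_temptation(8, 10, vividness=0.3))   # True: gives in, rues it later
    print(yields_to_temptation(8, 10, vividness=1.0))   # False: eschews the desire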

As above, there could also be conflicts between programmed likes/desires and the disapprobation of others, whether reasonable or not, who are considered to be important to the machine. Or there could be conflicts between programmed likes and the observation of bad experiences that befall others when they have pursued similar desires. As in human society, there could be machines programmed with opposing likes and dislikes and different degrees of the ability to distinguish what they like from what they think is good, so that irrational conflicts would arise between computers. E.g., one computer could be programmed to seek "sexual" kinds of touch while others abhor it and believe it must be wrong. If the first computer also is programmed to be susceptible to peer pressure or to consequences that the disapproving group has the power to inflict on those who do things of which they disapprove, then the first computer will have to decide whether to do what it wants or whether to avoid it. And it will have to decide whether it is better to do what it wants while trying to hide it from the disapproving group. 

With regard to approval and disapproval, there could be, if necessary, a buried or "unconscious" directive to seek approval of important people and to respond with, say, a smile, where what counts as approval is learned as the computer grows. Just as children sometimes have to be told when others are "making fun of" them or are "not being mean" or disapproving when they correct or admonish them for their own good, a computer might have to learn the signs of approval and disapproval by others. Some computers might have better insight than others into this. Some might end up being too trusting or naive. This would be similar, it seems to me, to the way people respond to signs of approval, sincerity, etc. In short, with regard to many of what might be called "psychological drives", we could program in some sort of directive that the computer may not be conscious of to seek certain kinds of things, but which the computer has to learn as it has experiences, what sorts of things count as meeting or triggering (its responses to) those directives. 
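
A sketch of such a buried directive might look like the following: the disposition to respond to approval (here, with a "smile") is built in, but which signs count as approval is filled in by experience. The signs used are hypothetical examples.

    class ApprovalSeeker:
        """A built-in disposition to respond to approval (here, with a 'smile'), where
        which signs count as approval is learned rather than fixed in advance."""
        def __init__(self):
            self.approval_signs = set()

        def learn_sign(self, sign, counted_as_approval):
            if counted_as_approval:
                self.approval_signs.add(sign)
            else:
                self.approval_signs.discard(sign)

        def react(self, sign):
            return "smile" if sign in self.approval_signs else "no response"

    a = ApprovalSeeker()
    a.learn_sign("sincere praise", True)
    a.learn_sign("sarcastic clapping", False)   # learned not to take this at face value
    print(a.react("sincere praise"))            # smile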

I presume that with the ability to develop language and concepts and the ability to make distinctions that seem to resolve contradictions, a language of moral terms and concepts would appear that is distinguishable from the computers' mere likes and dislikes. I suspect we develop our ethical standards and language from noticing conflicts among our different desires, especially conflicts between desires and the desirability of the consequences that attend their (attempted) fulfillment. We might also develop a sense of fairness based on seeing no reason to treat different people differently without some relevant condition that seems to justify the difference. Or it might be we develop a sense of fairness by weighing different ideas different people have about what is right and why. 

Alternatively, some simple (or perhaps even complex) human ethical principles could be programmed into the computer's likes and aversions in such a way that the computer does not recognize it has them other than through the responses it gives to different stimuli or phenomena. Some computers may or may not try to analyze what principles might explain such aversions or approvals, and thus come up with their own "ethical principles" they think they subscribe to. This, however, would be to program a particular ethics, at least initially, into the computer, and I do not believe humans develop their ethical ideas through that kind of programming at the molecular or genetic level. As I said, though, I suspect we develop ethics by realizing it is not always a good thing to fulfill immediate desires, and that desires can conflict and we have to choose which one is what we would call "best." It might also be that we develop some of our ethical concepts by comparing and evaluating different people's ideas. 

It would be interesting to see, for example, what a computer would say about the kind of situation that happened to me as a boy, where, in playing basketball in the backyard of a friend, he seemed to change the rules to suit his particular play at the time, changing, for example, what constituted "out of bounds", or perhaps giving himself the ball back if it was lost because it hit an odd crack in the driveway during his dribble. Opponents would say "But you said before that...." and we would come up with the concept that his changing the rules was not right and that it gave him an "unfair" advantage. We were too young to know the word advantage, but we already had some sense of what was "not fair" and had heard those words. Would a computer programmed as I have described, or in some similar way, come up with a similar notion that there was something wrong with someone's changing the rules in the middle of a game? 

Similarly, would a computer that "wanted to go" to the zoo, and that you had told you would take to the zoo, be particularly upset or disappointed if you reneged on the promise? Would it see a difference between that kind of case and your never having said you would take it because it knew you had to work? Can we develop an ethics based on facts and feelings, consistency, and equal treatment, all together, that is not just an "ethics" of privilege and egoistic hedonism? I would think we could, but it would be interesting to see what computers programmed as I suggest would actually do. 

Humor might be more difficult to program using just facts and consistency, but I think it could still be done. There are a number of different kinds of things that make something funny, and I think that each of them could be programmed into the computer so that when it found any of those kinds of situations, it would find humor in it -- as long as that situation did not also meet the parameters of a different kind of emotion, such as embarrassment or disapprobation by others who are found to be witnessing your behavior and reactions. E.g., an adolescent might laugh at a sex joke, but not if his parents were in the room with him. He might still find it funny, though. 
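
The triggering-and-suppression structure described in this paragraph could be sketched as simply as this, where whether an event "meets the humor criteria" stands for whatever learned test the machine has developed, and the inhibiting conditions play the role of the parents in the room.

    def reaction_to_event(meets_humor_criteria, inhibiting_conditions):
        """Laugh when an event matches the machine's learned humor criteria, unless a
        competing condition (embarrassment, disapproving onlookers) suppresses the laughter."""
        if meets_humor_criteria and not inhibiting_conditions:
            return "laugh"
        if meets_humor_criteria:
            return "find it funny but keep quiet"
        return "no reaction"

    print(reaction_to_event(True, inhibiting_conditions=[]))                      # laugh
    print(reaction_to_event(True, inhibiting_conditions=["parents in the room"])) # keeps quiet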

Humor can turn on a surprising deduction that is seen to be frivolous, such as "The telephone was invented in 1875 and the bathtub was invented in 1850, which means that if you were alive in 1850, you could have sat in the bathtub for 25 years without the phone ringing." Or Jay Leno's joke one night about the introduction by different companies of bras that enhance women's bust lines significantly -- the Wonder Bra, the Miracle Bra, and one other I forget -- followed by the question "What's the matter, do men not pay enough attention to women's breasts?" 

There are a number of ways to interpret what is funny about these or other jokes, but it seems those are all factual elements, and there is something about those factual kinds of elements that makes different people laugh, depending on what their "sense of humor" is, or is triggered by. 

Similarly, the humor in slapstick can be explained, as can be the humor in almost every kind of joke or event. Of course, an explanation of a joke or humorous situation is not necessarily funny. But the point is that the computer learns what sorts of things are appropriate to trigger something like laughter or a set of digital or electronic signals it can identify as "funny". 

A computer might laugh at the wrong kind of situation, but humans do that also. That is just an "inappropriate sense of humor." A computer might not have a sense of humor, but many humans do not. A computer might have a wry or perverse sense of humor or a childish sense of humor. A computer might have a dry or intellectual sense of humor. It might even have a stupid or adolescent or immature sense of humor. It could laugh at or describe as funny things which no one (else) would think were funny. That is all part of the way humans experience humor. Some people will get morally indignant about jokes that others find hysterical and totally unobjectionable. 

I do not see that the computer has to understand what makes it laugh other than to have the joke or event meet the criteria the computer has developed for humor. The process of meeting the criteria, or the specific criteria that any particular event or joke meets, can be hidden from the computer's verbal or conscious knowledge or awareness. A computer might laugh at a given situation and then have to analyze it in case someone challenges it with "What is so funny about that?!" just as humans do. I intercepted a high school note one time and opened it instead of just passing it on. I had never dared to open a note not meant for me before, and it took some nerve for me to do it. I was nervous about doing it. What private communication might I discover?! So I opened the note and it said "My name is Betsy Ross, and if you don't stop bothering me, I will never get this flag finished!" I went into hysterical laughter over that without having any idea why. The note, the initial suspense, and the situation of my looking for something personal and private and meaningful, just all seemed to go together into one comically absurd event that sent me into gales of laughter. I could see the same thing happening to a computer that had all the relevant factual information about the event that I did, and that had a sense of not wanting to be disapproved of by others, and that knew that intercepting (meaningful and personal) private notes would bring such disapproval. 

Moreover, laughter in humans is somewhat infectious, and it could be that a computer could be programmed so that it asked people who laughed, "What's so funny about that?" and then learned on its own which kinds of things it chose to emulate with laughter or a sense of humor. 

In Summary

Assuming we can build machines that can have memory and sensing devices, it seems to me that we can build them to think like humans by programming into them initial traits, some hidden from their consciousness or awareness, that are similar to our own psychological traits. Some of these traits will be triggered directly by sensory inputs when appropriate experiences happen but some triggering inputs will have to be learned just as we all learn to find different things funny that trigger off our inherent/programmed trait of laughter, or just as we learn to distinguish sincere from insincere praise, or just as we learn to think we discern when someone is mocking or ridiculing us. So it might be that the computer can be programmed, say, to respond to praise, but have to learn, as it "grows up" or has more experiences, what praise is or when it is occurring. 

It seems to me that many of our concepts and ideas stem from a combination of inborn traits (emotions, desires, drives, etc. -- which are tantamount to a computer's major programmable principles) and our learning from experience. That means, I think, that we can, and often do, alter or transcend our instinctual/programmed emotions or initial and developed character traits. And I think it allows us to develop new concepts and ways of understanding things we were not directly or specifically programmed to do. I think science, art, and morality, for example, can come from a combination of inborn traits in combination with experience, inventiveness, experimentation, and rational decision-making using, in particular, temporal and spatial, sensory and conceptual pattern recognition, and the desires, or operating principles, to resolve apparent contradictions and explain similar patterns.
