Artificial and Human Unintelligence: Cases of YouTube and Blue Cross

Rick Garlikov


Many times people do not understand something because they don’t grasp the context in which it is said or written.  Some people do this more than others, and some people do it intentionally in order to misrepresent and mischaracterize what others have said or written.  But I want to discuss here the problem it causes for understanding even when it is done unintentionally or when there is not some selfish or otherwise wrongful motivation or intention for doing it.

People may sometimes even read or listen to something with the best of intentions and yet still take it to say or mean the total opposite of what it actually says or means because they don’t appreciate the significance of various kinds of connecting words or phrases in the sentences, such as “although”, “however”, “unless”, “if”, “only if”, “nevertheless”, “except”, “and”, “but”, “notwithstanding”, “apart from”, “along with”, “in spite of”, “despite”, “particularly”, “that being said”, and a myriad of other words or phrases that involve relationships between ideas, some of which are adverbs, some of which are conjunctions or prepositions or other parts of speech.  

In some cases, misunderstandings are themselves understandable because they are caused by unrealized ambiguities or by modifiers whose placement is not clear.  As an example of an unrealized ambiguity: I was once driving on a freeway with a girlfriend who was navigating from a map, and as we were passing an exit she said “take the next exit.”  I took the upcoming exit, and she got upset and said “Not this exit; I said to take the next exit.”  I, of course, said “This is the next exit,” and she said “No, this is the first exit.”  It turned out that what she meant by “next” was “the second one after the one we just passed,” while I took it to mean “the first one after the one we just passed.”  The ambiguity or unclarity of the word “next” is perhaps easier to see when one is closer to the time or place in question: “next Saturday” spoken on a Saturday pretty clearly means “the one a week from today,” but spoken on a Wednesday it is not clear which Saturday is meant.  Saying “next Saturday” on a Wednesday should invite the question “Do you mean this coming Saturday or the one a week after it?”  And if one is a quarter mile from an approaching exit and the navigator says “take the next exit,” it is appropriate to ask whether they mean the one coming up or the one after it.

But I am interested here in cases where the error is not due to an ambiguity or an unclear modifier placement, as in the song about the “flying purple people eater,” which could mean either a purple thing that eats people or, given the silly nature of the song in the first place, a thing that eats purple people.  Or, in a more serious case, something like the common direction to “take this medicine at mealtimes,” which could mean either to take it at the usual mealtimes (normal breakfast, lunch, and dinner times) whether one eats much then or not, or to take it with enough food to fill one’s stomach, at times spaced out fairly evenly three times during the day, regardless of when one takes it (in case, for example, one has an uncommon eating schedule or doesn’t get full meals at some of those times).

For my purposes here, I am interested more in misunderstandings that are neither intentional, motivated by selfishness, nor caused by ambiguity.  I am interested in those caused by what might be characterized simply as insufficient care or attention given while reading or listening.  That includes, but is not limited to, careless, negligent, or irresponsible reading or listening.  So, then, consider the following version of the Garden of Eden story:

Given its outcome, one can imagine the story going something like this:

God to Adam and Eve:  “I’m going out for a bit.  You two should have fun and do whatever you like here except for one thing – don’t eat the fruit from the tree of knowledge of good and evil that is that tree over there [God points to it] in the corner of the Garden.”  God leaves and Adam and Eve explore the Garden and also talk and get to know each other.  After a while, they realize they are both hungry and discuss what to eat.  

One of them says “God said something about eating the fruit of that tree, and something about its being a tree of knowledge of good and evil, whatever those things are.  What do you say about eating that fruit?”

The other one says: “Sure, why not?  Let’s do it.”

So they do, and the rest is history, as they say, including history’s first “Oh, shit” moment the second they realize that they are naked and bare and that, for some reason which they don’t really understand, that is a bad or embarrassing thing, though only for humans, not for the animals.  When God returns, He notices something is wrong, although it is not clear why He has to return to know that, since He is omniscient and shouldn’t have to be at the scene of the crime to know there was a crime.  But anyway, He returns and sees from their attempts to cover up their guilt with fig leaves that they have clearly eaten the fruit of the tree of knowledge of good and evil and have thus disobeyed a direct order from Him.  When He asks why they did it, they give Him their actual reason – that He had said “something about eating the fruit of that tree” and they had gotten hungry, so they ate it.  Plus, they didn’t know it was wrong to eat it (or even to be disobedient if they had meant to be, or thought they might be, disobedient) until after they ate it, since it was eating that fruit that gave them the knowledge that disobedience or being naked was wrong.  They point out that God had given them a Catch-22, expecting them to know it was wrong to learn what was wrong or what ‘wrong’ even means.

God: “But I said ‘Don’t eat it!’  I didn’t say ‘Eat it.’  What’s the matter with you idiots!”

Adam and Eve in unison: “What’s the difference?  What do you mean by adding ‘don’t’ to the sentence, other than to make it unnecessarily longer?  And again, how would we know it was wrong to eat it or to disobey you until we ate it?”

History’s second, third, fourth, and fifth “Oh, shit” moments then followed pretty much immediately:  God realized in a flash (though, again, it seems He, of all people, should have known it before) that a) He shouldn’t have created people, b) He shouldn’t have given them free will without a better understanding of, and sensitivity toward, the difference between good and evil (i.e., without more sense and sensitivity to begin with), c) they just can’t be left alone without screwing up, and d) they don’t understand perfectly good English.

Things are no better today, although God spent the next 6,000 or so years (or 300,000, depending on whom you believe) trying to get people to do better.  But God’s omniscience is no match for people’s ignorance (no matter what fruit they eat), and His omnipotence is impotent against their free will when it is coupled with laziness or inattention to what should be fairly obvious details, or with the lazy disdain for enlightened self-interest, or even obvious prudence, shown by those people who think that an ounce of prevention is a pain in the neck or just too much trouble – let alone with some people’s insensitivity, malevolence, or short-sighted desire for short-term selfish gains that cause a greater loss even just to themselves.

And that brings us to the following question: if even an omniscient God cannot create intelligence in humans, isn’t it even more unlikely that humans, who have far less knowledge than God, can create intelligent machines – particularly if the machines are going to teach themselves from the totality of combined human knowledge and false beliefs, which humans themselves cannot seem to distinguish from each other, and which many humans cannot distinguish or recognize even after the difference is pointed out and explained to them?

This brings me to two particular problems that both machines and humans seem to have trouble with, among the myriad of other problems involving understanding.

The two particular problems I am talking about here are 1) taking things out of context – the simplest case being the Adam and Eve case of leaving off the “don’t” – and 2) choosing the wrong category to put something in, or ignoring other categories it might fit at least as well, if not better.  Consider Blue Cross and YouTube with regard to these problems.  And simply for the sake of argument here, let’s assume a likely false statement to be true – that each company is trying to do the right thing and not simply trying to make the most profit by intentionally doing what they know is wrong but think they can get away with.

In terms of taking things out of context, YouTube rejects videos that its employees or its computers and apps “think” violate its rules against misinformation, which would be an admirable goal if the apps or people recognized misinformation correctly.  But apparently they do not recognize it when a video is pointing out that something is misinformation and therefore false.  That is, if you state someone else’s position in order to show what you are arguing against or showing to be mistaken, YouTube removes your video and issues a warning for stating what it says is false.  It essentially can’t distinguish between your video saying “Don’t eat the fruit of that tree” and its saying “Eat the fruit of that tree.”  You can’t say you shouldn’t believe what Smith is saying, because when you tell or show what Smith is saying, YouTube says you are the one saying the false thing and that your video is saying false, misinforming things.  This is particularly true if you include images or video clips of people proclaiming the thing to be true in order to show you are not arguing against a ‘straw man’ but arguing against what people have really claimed.  YouTube even points out in the warning it gives you where you showed the wrong thing in the video.  But its AI bots or its humans can’t seem to realize how or why you are using it; they only ‘see’ that you are stating or showing something false that goes against their guidelines.  They can’t seem to see that you are stating it in order to show what you are pointing out is false.

Worse still, if you appeal the decision, you cannot explain why they are wrong – you can only register that you are appealing, not say why – and you almost immediately get back a notice that your appeal was considered and rejected because your video breaks the rule against containing, and thus conveying, false information – basically the same thing the original rejection said, for the same reason.

Blue Cross has a similar appeal process: you can file an appeal to a denial of coverage, but you cannot present your evidence to the reviewer or reviewers who consider it.  Your appeal is presented to them by the person who rejected your claim to begin with, and, surprising though it may seem, you then get the same rejection you got the first time.  In many cases the rejection points out that the treatment for which you are filing a claim falls into a category that is not covered, even though 1) it also, and more appropriately, falls into a category that is covered, or even though 2) it is a real stretch on their part to say it fits the category that is not covered.

As an example of 1), they rejected a claim for special vitamins prescribed for pregnancy because they said the vitamins were food supplements, which are not covered, even though prescriptions are covered.  The fact that they were more of a prescription item than a food supplement didn’t matter.  [Years later they did change that and did cover prescription maternity vitamins, but for a long time they didn’t.]

As an example of 2), they rejected a procedure in which parents, once daily at home, operate a device installed by an orthodontist in order to separate the two halves of a child’s upper jaw, relatively easily and painlessly over a few weeks’ time, to correct an underbite – even though they pay ten or twenty times more to have a surgeon split and widen the upper jaw years later, after the patient’s bones have fused during normal growth and need to be surgically separated.  (An underbite is the lower jaw extending outward further than the upper jaw, as shown here:

[image of an underbite]

Underbites can cause serious jaw problems later in life.)  But the reason they give for the denial – both originally and in response to the appeal – is that it is an orthodontic procedure, not because it is done by an orthodontist, but “because teeth are moved”, even though when those jaw bones are moved the teeth move along with the upper jaw as a whole rather than relative to each other within it.  The teeth are not moving within the jaw, as in orthodonture that changes the relative positions of the teeth to the jaw and to each other, but moving with the jaw.  Saying that the teeth are moved when the jaw is split is like saying you are moving your fingers when you raise your arm, and that therefore your fingers are not paralyzed if you can move your arm.  Or it is like saying we are all practicing orthodonture without a license when we speak or turn our heads, because we are moving our teeth.  When you point that out in your appeal, they ignore it in their response as if they didn’t see it, couldn’t understand it, or couldn’t appreciate its meaning.

And it doesn’t matter whether this is the result of artificial unintelligence or human unintelligence, except that it is more understandable when a machine gets it wrong.  Humans who do not have to be institutionalized for mental defects should not be that ignorant or stupid.  That a computer doesn’t understand the difference is understandable; that a human being with a college degree, particularly a medical degree, doesn’t understand it is not acceptable.  And yet neither the people nor the computers at YouTube and Blue Cross, or at many other companies, seem able to understand or remedy these kinds of problems.

Or consider one of the more exasperating things currently about some AI answering systems for businesses, even ones that are otherwise really good – that sound perfectly human and take care of most needs you might have, either by answering your question or resolving your problem (sometimes after clarifying it first, which is impressive) or by directing you to the right person who can address, answer, and explain it properly.  I am talking about the kind of case where the machine “admits” to not understanding your issue and asks you to explain it in other words, which you then do, but the machine again says it doesn’t understand and asks you to explain it in other words.  It becomes clear very soon that the machine is never going to understand the problem or recognize it from its algorithm, but the machine is relentless in asking you to say it in terms that match whatever its algorithm can recognize.  It seems to me that the programmers should have set it up (or the computer itself, if properly self-learning, should have figured out) to “realize” or recognize, after two or three futile attempts to “understand” your issue, that it needs to direct your call to a human.  (It should essentially be like any human, say, new to the job, who has to say something like “I’m sorry; I am new here and don’t yet know the answer to your question.  Let me try to find someone more experienced to help you.”)
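For what it is worth, the fallback behavior being described is simple enough to sketch in a few lines of code.  The following is a purely illustrative sketch in Python, not any vendor’s actual system; the functions for listening, responding, understanding, and transferring are assumed placeholders, and the three-attempt limit is just the “two or three tries” suggested above:

    # Illustrative sketch only: give up gracefully after a few failed attempts
    # instead of asking the caller to rephrase forever.
    MAX_ATTEMPTS = 3  # assumed threshold: two or three tries, then escalate

    def handle_call(listen, respond, understand_intent, transfer_to_human):
        """Try to understand the caller; hand the call to a person if we can't."""
        for attempt in range(MAX_ATTEMPTS):
            utterance = listen()
            intent = understand_intent(utterance)   # returns None when not recognized
            if intent is not None:
                return respond(intent)
            respond("I'm sorry, I didn't understand that.  Could you say it another way?")
        # Repeated failure: admit it and escalate rather than looping again.
        respond("I'm sorry; I'm not understanding your question.  "
                "Let me connect you with someone who can help.")
        return transfer_to_human()

The point of the sketch is only that nothing about the fallback requires deep intelligence; it requires someone deciding that the machine should stop pretending it will eventually understand.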

However, that won’t solve your problem if the human you are then connected to is no more knowledgeable than the computer and also keeps you from getting to the right person who can address it.  You often get a human operator who either doesn’t know where to direct your call or doesn’t care, but just connects you to a random person, or to a person they perhaps want to annoy along with annoying you.  Again, it is somewhat understandable if a machine does that, because machines today are not likely programmed to have empathy for your having a problem that really needs to be resolved correctly, but it is not acceptable when a person does it.  People ought to know better and have empathy for a customer who has a legitimate question or problem.  But they often do not know better.  Essentially, artificial intelligence will, in many cases, just be artificial stupidity, ignorance, and lack of understanding and concern that mirrors human stupidity, ignorance, and lack of understanding and concern.

So, unless computers can have self-learning programs that help them learn and make decisions better than humans do, it is not clear that AI will do that much better than humans in areas that require understanding and judgment, and it may do worse.

However, since computers will not likely be lazy or negligent, and since they can share knowledge with each other once one of them attains it (through either human programming or self-learning) better than humans seem to be able to learn from each other, there is hope that they can all learn to do better by something like the following in the situation above: after your second or third failed attempt to describe your issue in a way the computer recognizes, the AI says something like “I am sorry I am not understanding your question properly, but let me connect you with a (human) person who might, and I will listen in on the conversation and any follow-up to it so that I can learn how to do this correctly next time,” and then it instantly relays what it learns to all the other machines in the system.  Computers have a potential capacity for improvement in both learning and teaching that lazy or inattentive humans do not.  Whether that capacity is developed and utilized remains to be seen.
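The learn-and-share step is just as easy to sketch.  Again, this is only an illustration of the idea in Python, under the assumption of a shared knowledge store that every answering bot can read and write; the class names, the store, and the human agent are all made up for the example rather than taken from any actual system:

    # Illustrative sketch only: once a human resolves a question a bot couldn't,
    # the resolution goes into a store every bot in the system can read.
    class SharedKnowledge:
        """Stand-in for a fleet-wide knowledge store."""
        def __init__(self):
            self.resolutions = {}           # caller's question -> human's resolution

        def record(self, question, resolution):
            self.resolutions[question] = resolution

    class AnsweringBot:
        def __init__(self, shared):
            self.shared = shared            # every bot points at the same store

        def lookup(self, question):
            # If any bot has already learned this question, all of them know it.
            return self.shared.resolutions.get(question)

        def escalate_and_learn(self, question, human_agent):
            resolution = human_agent(question)      # "listen in" on the human's answer
            self.shared.record(question, resolution)
            return resolution

    # One bot learns from a human; a different bot can then handle the same question.
    store = SharedKnowledge()
    bot_a, bot_b = AnsweringBot(store), AnsweringBot(store)
    bot_a.escalate_and_learn("there is a charge on my bill I don't recognize",
                             lambda q: "Connect the caller to the billing department.")
    print(bot_b.lookup("there is a charge on my bill I don't recognize"))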