Inhumane?

August ninth, last week, was the seventieth anniversary of the destruction of the Japanese city of Nagasaki with the second, and last, atomic bomb used in warfare. The two atomic bombs dropped on Japan killed almost 130,000 people immediately, and the eventual death toll was estimated at close to a quarter million people, of whom only about 20,000 were military personnel.

In ceremonies memorializing the event, at least one speaker called for the abolition of atomic weapons as “inhumane.” That got me to thinking. While there’s no doubt that an atomic weapon is “inhumane,” is there any effective weapon used in war that is “humane”?

The Allied firebombing of Dresden in February of 1945 killed 25,000 civilians. The Japanese attack on the Chinese city of Nanking in 1937, popularly known as “the rape of Nanking,” resulted in 200,000 civilian deaths, according to the International Military Tribunal, largely inflicted by rifles, grenades, and bayonets. Arrows, swords, and trampling by horses resulted in the deaths of over 200,000 civilians when the Mongols sacked Baghdad in 1258. If my sources are correct, there have been at least seventy wars in the last 2000 years with death tolls exceeding 100,000 people, and almost thirty with death tolls exceeding a million people.

So why are atomic weapons any more inhumane than any other weapons? Given all these wars and deaths, all of these wars that were waged by people, human beings attacking other human beings, what do we mean when someone talks about atomic weapons being especially inhumane?

The word “inhumane” and its linguistic roots mean “not human” or “not having the qualities of a human being.” Yet, obviously, it’s very human for human beings to slaughter other human beings. As far back as archeologists have been able to find human remains that can be analyzed, they find that a significant percentage of individuals, roughly fifteen percent, died violent deaths from weapons.

Today, we use the word “humane” to denote kindly or civilized conduct toward others and a lack of cruelty toward animals, but isn’t that almost a combination of wishful thinking and hypocrisy?

14 thoughts on “Inhumane?”

  1. Dave A says:

    The difference between “humane” and “inhumane” is quite simple, really. A humane weapon is one which gives the upper/ruling classes a fair chance of escaping its effects, whereas of course an inhumane one gives them the same poor odds as everyone else.
    Tom Lehrer wrote an excellent ditty regarding this (“We’ll all go together when we go”), with the beautiful couplet:
    When the air becomes uraneous
    We will all go simultaneous
    Oh we all will fry together when we fry.

  2. Joe says:

    An inhumane weapon has long-lasting residual effects.

    A humane weapon lets non-combatants rebuild once the insanity is over. It also can spare other species which had nothing to do with the problem.

    Humans all wish for the next generations to live happily in a good environment. Humane weapons satisfy that human desire. Inhumane weapons preclude it.

    Thus, to my mind, Agent Orange is an example of an inhumane weapon. A minefield is an inhumane weapon. Genocide and mass slaughter are also inhumane.

    But to my mind, killer robots with an off switch need not be. Not fearing for their lives, and with faster-than-human reflexes, they may prove to be more humane than human soldiers.

    1. Lourain says:

      Killer robots? How are they fundamentally different from a machine gun, a drone, etc.? Somewhere there is a human responsible for the deaths caused by any of these weapons. Killer robots would be programmed by humans (unless, shades of Fred Saberhagen, we are stupid enough to give them independent volition).

      1. Joe says:

        Unlike a machine gun, which relies on a human operator, or a mine, which applies no judgement at all, robots would apply judgement. To that extent they would be more humane than weapons people already use.

        Having a human in the loop is not the panacea people think: I recall a case where some people tried to surrender to a drone, but the lawyer on duty said one can’t surrender to a drone, so the people attempting to surrender were killed. (The source might have been Jeremy Scahill).

        However, it’s hard to say whether the programmer would be responsible for the robot’s actions. Once one deals with sufficiently complicated problems, one can prove mathematically that one cannot predict a system’s behavior.

        1. Lourain says:

          Do you really believe that we can program judgement into robots? Trying to predict all the possible permutations of behavior (which would be necessary for that superior judgement from robots) has proven impossible for any organism much more complex than an insect.
          Please remember, a computer can only do what we humans tell it to do (the advantage of a computer is the speed at which it can operate and the accuracy with which it can carry out its directions).

          1. Joe says:

            In principle, yes, I do believe software will be able to have judgement. IBM’s Watson already diagnoses medical conditions better than most doctors. That’s a form of judgement. Furthermore, self-driving cars will need judgement: is it better to kill the old lady who just stepped out onto the interstate, or the passengers in the car?

            The technical challenge of adding judgement is difficult but likely solvable. The real question is whether the military, or industry, would be willing to purchase it or develop it.

            Unfortunately the opposite (detect a human, check whether it carries the appropriate feature, such as an RFID tag, and shoot it if not) is now trivial. That makes it cheap, and easy for many countries to replicate. In fact, it already exists and is deployed along the border between the two Koreas.
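
            A minimal sketch of that kind of rule, in Python, shows how little logic it takes. Everything here is invented for illustration: the sensor object, its detect_human/read_rfid/fire methods, and the tag whitelist.

                # Toy "identify-friend-by-tag" sentry rule. The decision itself
                # is only a few lines; all the hard work is in the sensors.
                FRIENDLY_TAGS = {"unit-7", "medic-2"}      # hypothetical whitelist

                def sentry_step(sensor):
                    target = sensor.detect_human()         # None if nothing is seen
                    if target is None:
                        return "idle"
                    tag = sensor.read_rfid(target)         # None if no tag answers
                    if tag in FRIENDLY_TAGS:
                        return "hold"                      # recognised friendly
                    sensor.fire(target)                    # unknown or missing tag
                    return "engaged"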

            The statement “computers only do what we humans tell them to do” isn’t quite right:

            It assumes we are able to predict the full consequences of what we ask. But we know we can’t (see the Halting problem, for instance, or Gödel’s incompleteness theorem).
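
            The Halting-problem argument itself fits in a few lines. The halts() oracle below is hypothetical; Turing’s point is precisely that it cannot be written, which is why full prediction is impossible.

                def halts(program, data):
                    # Hypothetical oracle: True if program(data) eventually stops.
                    # No such total function can exist; the stub is here only so
                    # the sketch below is well-formed.
                    raise NotImplementedError("no general halting oracle exists")

                def contrary(program):
                    # Do the opposite of whatever the oracle predicts.
                    if halts(program, program):
                        while True:                # predicted to halt: loop forever
                            pass
                    return "done"                  # predicted to loop: halt at once

                # Asking whether contrary(contrary) halts contradicts the oracle
                # whichever answer it gives, so halts() cannot exist in general.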

            It also assumes that we tell them what to do, rather than telling them to follow the patterns they discover in data we provide them. Machine learning (as in Watson) looks for patterns in data, such as all published medical literature, and makes decisions based on those patterns. Google Translate does the same. The programmers have no knowledge of medicine, or of translation, but their programs still perform the desired function.
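
            A toy version of that pattern-finding step (nothing like Watson’s actual pipeline, and with made-up example texts and labels) might look like this in Python with scikit-learn; note that the programmer writes no medical rules at all:

                from sklearn.feature_extraction.text import TfidfVectorizer
                from sklearn.naive_bayes import MultinomialNB
                from sklearn.pipeline import make_pipeline

                # Invented training snippets; a real system would use thousands of
                # labelled case notes or journal abstracts.
                texts = [
                    "persistent cough fever night sweats weight loss",
                    "chest pain radiating to left arm shortness of breath",
                    "itchy rash and swelling of the lips after eating peanuts",
                    "crushing chest pressure with sweating and nausea",
                ]
                labels = ["tuberculosis", "cardiac", "allergy", "cardiac"]

                # The pipeline learns word patterns from the labelled data.
                model = make_pipeline(TfidfVectorizer(), MultinomialNB())
                model.fit(texts, labels)

                # Prints the learned label for an unseen description.
                print(model.predict(["sudden chest pain and heavy sweating"]))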

          2. Lourain says:

            Doctors, engineers, scientists, and the average Joe down the street are all looking for patterns. Computers such as Watson use algorithms developed by humans and data provided by humans. The reason that Watson does better than the majority of doctors is its larger database and its ability not to forget. (Yes, this is a simplification!)
            Yes, self-driving cars might have to choose between Granny and the passengers, but the choice will be made according to the dictates of human programmers, not due to some magical ‘judgement’, and the results might not be the choice you or I would make.
            Your statement, “Once one deals with sufficiently complicated problems, one can prove mathematically that one cannot predict a system’s behavior,” puts the choices back on human shoulders. Choosing to use a system so complex that its behavior may be unpredictable is a human choice.
            Gödel’s incompleteness theorems address the problem that it is impossible for any but the most trivial of axiomatic systems to prove all truths about the relations of the natural numbers (Wiki). My statement, “Trying to predict all the possible permutations of behavior (which would be necessary for that superior judgement from robots) has proven impossible for any organism much more complex than an insect,” fits with this problem of uncertainty.
            With these considerations, why would a robot’s judgement be better than a human’s? It might be more consistent, but would its judgement be better in novel situations?

          3. Joe says:

            @Lourain (August 18)

            The point of Gödel’s theorems and Turing’s Halting problem is that any formal system or program expressive enough to be useful (expressive enough to do ordinary arithmetic, say) will have consequences that cannot be predicted in general, while anything weaker is too limited to do much useful work. So any model that is sufficiently complex to be useful will have unpredictable consequences. In other words, “Choosing to use a system so complex that its behavior may be unpredictable is a human choice” is equivalent to “Choosing to use a system that is useful is a human choice”… Since we’re unlikely to use a system that isn’t useful, the argument dissolves.

            Why would a robot be better at judging?

            The choice would not be based on fear, or on limited data, but on a lot of data. Since robots can share data and what they learn, the choice function would improve over time, something that cannot happen with people, especially as many soldiers are fresh young recruits.

            The novelty argument assumes the robot only figures out correlations, and not a causal model. Although a lot of machine learning today only does the former, there are examples of the latter, such as Eureqa. As that model is refined using the shared experience of many situations, it is not unreasonable to assume it will become better than what people can do themselves. Somewhat like the Google car driving millions of miles without causing an accident, because it doesn’t get distracted or tired.

  3. Country Lawyer says:

    I’m with L.E. on this: all weapons are inhumane as we use the term today. Truman believed that the atomic bombs were more humane than a house-by-house fight across Japan, both for the Japanese and for the invading Allied forces. Did it shorten the war? Probably. There were no good choices; just as in most wars, there are only less evil choices. Which, incidentally, is one of the themes I love L.E. for writing about.

  4. hola says:

    Why limit the discussion to weapons? Is allowing children in one of the wealthiest countries to go to bed hungry almost every night humane? How about making poor people work several jobs just to pay rent: is that humane? And people living under bridges, risking either frying or freezing to death, depending on the place and the season: does that qualify as humane? What about the number of people who can’t afford bail and thus are confined in conditions that can be deadly: is it humane that we in the good ole’ US of A have implemented debtors’ prisons straight out of Dickens? L.E. mentions animals, and, thanks to human activity, the number of species that are either extinct or threatened with extinction is growing rapidly. There are too many people in a position of power whose definition of ‘humane’ equates to their ability to maintain that power, and that’s plain wrong.

  5. Ryan Jackson says:

    I’d argue it’s just another version of the “Honorable combat” nonsense. Except instead of it being “Cowardly” to “ambush/trick/sneak attack” your enemy, it’s “Inhumane” to have that many lives go out in an eyeblink instead of in a prolonged battle with a lot of actual soldiers doing the killing.

    To paraphrase and quote numerous people, from the fictional to the real, including your own works on occasion: there is no “Good guy” in a war; once the fighting starts both sides are “evil,” and it’s just a matter of doing as much devastation to your enemy as possible while minimizing the devastation to your own people.

    People talk at length after the fact about what could have been better, or less damaging, or more “humane” or “honorable,” but ultimately they’re speaking either from the side of the loser in a war, or from the distance of the future, looking back at a situation they weren’t part of.

  6. Sam says:

    I’m thinking of a line from the first Captain America movie where Doctor Erskine told Steve Rogers to remember that the first country the Nazis conquered was Germany.

    The point was to draw a distinction between the Nazi regime and the general populace who would have had differing degrees of complicity in the atrocities committed by the Nazis.

    I’ll pose a hypothetical: suppose that in some alternate reality the Japanese had invaded the US and conquered a number of states and major cities such as Chicago. Let’s say the Nazi invader’s leadership and a significant proportion of their military strength was based in Chicago, and that fighting a conventional campaign would be extremely difficult and cost the lives of many US soldiers, whereas dropping an atom bomb on Chicago would solve the problem but kill up to ten times as many US civilians as the US soldiers who would be lost in a conventional campaign. What would be considered the most appropriate strategy?

    I think the notion of war crimes is often problematic, and it’s almost always the victors who label the losers the war criminals in the end. However, I think the basic premise is that there is a distinction between combatants and non-combatants, and that weapons and military tactics that indiscriminately kill combatants and non-combatants alike are considered inhumane and often war crimes. The atom bomb was and is one of the more indiscriminate weapons, and therefore is considered less humane to use than other bombs of the time, which individually could not cause as much damage.

    1. Sam says:

      I made a mistake with my last post.

      I wrote “Let’s say the Nazi invader’s leadership. . . was based in Chicago”

      That should have been the Japanese invader’s leadership.

      1. Ryan Jackson says:

        The only thing I’d point out is that your scenario changes the issue in a very practical way.

        In the real scenario it was the trade of people in another country for ours. In your scenario it’s a question of which of our own people.

        As I was saying earlier, and as many have said: in a war your goal is to protect your own people as much as possible and to devastate your enemy so badly that they must stop.

        From a general ethics standpoint in peacetime, I would not claim the life of a civilian in one country is worth more than in another. From the standpoint of a military leader, especially in wartime, the lives of our people outweigh the lives of others.

        Doesn’t make it “right” per se, but there it is.
