Pressing the Limits

As both individuals and as a species, human beings have always had a tendency to press the limits, both of their societies and of their technologies.  This tendency has good points and bad points… good because without it we wouldn’t have developed as a species, and life would still be in the “natural state” – “nasty, brutish, and short,” in Thomas Hobbes’s pithy phrase from Leviathan.  The “bad” side of pressing the limits has tended to be minimized, because over time the advantages have so greatly outweighed the drawbacks.

Except… the costs and the consequences of pushing technology to the limit may now in some cases be reaching the point where they outweigh the overall benefits, and not just in military areas.

The latest and most dramatic evidence of this change is, of course, the current Gulf of Mexico oil rig explosion and the subsequent oil blowout.  Deep-sea drilling and production platforms are required to have redundant blowout preventers in place… as did the BP rig.  But the blowout preventer failed.  Such failures are exceedingly rare.  Repeated tests show that the devices work over 99% of the time, but something like 60 have failed in tests of the equipment.  The Gulf oil disaster just happens to be one of the few times a failure has occurred in actual operation, and it represents the largest such failure in terms of crude oil released.

What’s being overlooked, except by the environmentalists, who, so far as I can tell, are operating more on a dislike of offshore drilling than on a reasoned technical analysis, is the fact that there are around 6,000 offshore drilling platforms in service world-wide in some form or another, and the number is increasing.  That number will increase whether the U.S. bans more offshore drilling or not.  From 1992 to 2006, the Interior Department reported 39 blow-outs at platforms in the Gulf of Mexico, and although none were as serious as the latest, that’s more than two a year, yet that represents a safety record of 99.93%.  In short, there’s not a lot of margin for error.  What makes the issue more pressing is that drilling technology is able to drill deeper and deeper – and the pressures involved at ever greater depths put increasing stress on the equipment, to the point where, as is apparent with the BP disaster, stopping the flow of oil after a failure becomes extraordinarily difficult and exceedingly expensive, as well as time-consuming.  Because crude oil is devastating to the environment, the follow-on damage to the ecosystems and the economy of the surrounding area will create far greater costs than capping the well itself.
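Reading those two figures together – a back-of-the-envelope exercise, and the assumption that the 99.93% record and the 39 blow-outs describe the same population of Gulf wells over 1992-2006 is mine, not the Interior Department’s – the arithmetic works out roughly as follows:

% Illustrative arithmetic only, based on the figures quoted above;
% the shared-population assumption is an inference, not a reported statistic.
\[
  \frac{39\ \text{blow-outs}}{15\ \text{years (1992--2006)}} \approx 2.6\ \text{blow-outs per year}
\]
\[
  1 - 0.9993 = 0.0007
  \qquad\Longrightarrow\qquad
  \text{implied number of wells} \approx \frac{39}{0.0007} \approx 56{,}000
\]

Even at 99.93%, in other words, a large enough number of wells yields a steady trickle of failures – which is exactly why there is so little margin for error.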

Pushing technology beyond safe limits is nothing new to human beings.  When steam engines were first introduced, the desire for power and speed led to scores, if not hundreds, of boiler explosions.  Occasionally, disasters led to changes, such as the phasing out of hydrogen dirigibles after the Hindenburg fire and crash, but that change was also made easier by improvements in aircraft, which were far faster than dirigibles.  The costs of other disasters are still with us – and we tend to overlook them.  The town of Centralia, Pennsylvania, has largely been abandoned because the coal seams in the mostly worked-out mines beneath the town caught fire and have been smoldering away for more than forty years, causing the ground above to collapse and continually releasing toxic gases.  In Pennsylvania alone, there are more than 30 such subterranean fires.  World-wide there are more than 3,000, some of which release more greenhouse gases and other toxic fumes than some coal-fired power plants.  Yet few of these fires are more than watched, because no technology exists that can extinguish them in any fashion close to cost-efficient – and in some cases they cannot be extinguished at all, because the fires burn so deep.

Pushing electronic technology to the limits, without regard for the implications, costs, and other downsides, has resulted in a world linked together in such a haphazard fashion that a massive solar flare – or a determined set of professional hackers – could conceivably bring down an entire nation’s communications and power distribution networks.  And that doesn’t even take into account the vast increase in the types and amounts of exceedingly toxic wastes created on a world-wide scale, most of which are still not handled as they should be.  Another area where technology is being pressed to the limits is bio-tech, where scientists have reported creating the first synthetic cell.  While they engineered in considerable safeguards, once that technology becomes more widely available, will everyone who uses it do the same?

As illustrated by the BP disaster, when we, as a society, push technology to its limits on a large scale, for whatever reason, the consequences of a technological or systems failure are reaching the point where we effectively require absolute safety in the operation of those systems – and obtaining such assurance is never inexpensive… and sometimes not even possible.

But then again… if we tweaked existing technology just a bit more so that we could get even more out of it…. get more oil, more bandwidth, make more profit…

When to Stop Writing… [With Some “Spoilers”]

The other day I ran across two comments on blogs about my books.  One said that he wished I’d “finish” more books about characters, that he just got into the characters and then the books ended.  The other said that I dragged out my series too long.  While the comments weren’t about quite the same thing, they did get me to thinking.  How much should I write about a given character?  How long should a series be?

The simple and easy answer is that I should write as long as the story and the series remain interesting.  The problem with that answer, however, is… interesting to whom?

Almost every protagonist I’ve created has resulted in a greater or lesser number of readers asking for more stories about that particular character, and every week I get requests or inquiries asking if I’ll write another story about one character or another.  That’s clearly because those readers identified with and/or greatly enjoyed that character… and that’s what every author likes to hear.  Unfortunately, just because a character is memorable to readers doesn’t mean that there’s another good story there… or that another story about that character will be as memorable to all readers.

Take Lerris, from The Magic of Recluce.  By the end of the second book about him, he’s prematurely middle-aged as a result of his use of order and chaos to save Recluce from destruction by Hamor… and his actions have resulted in death and destruction all around him, not to mention that he’s effectively made the use of order/chaos magic impossible on a large or even moderate scale for generations to come.  What is left for him in the way of great or striking deeds?  Good and rewarding work as a skilled crafter, a happy family life?  Absolutely… but there can’t be any more of the deeds, magic, and action of the first two books.  That’s why there won’t be any more books about Lerris.  If I wrote another book about Lorn… another popular character… for it to be a good book, it would have to be a tragedy, because the only force that could really thwart or even test him is Lorn himself.  After a book in which a favorite character died, albeit of old age after forty years of magic-working – and after all the flak I took from readers who loved her – I’m understandably reluctant to go the tragic route again.  So… for me, at least, I try to stop when the best story’s been told, and when creating an even greater peril or trial for the hero would be totally improbable for the world in which he or she lives.

For the same reason, I’ve never written more than three books about a given main character, so my “series” aren’t series in the sense of eight or ten books about the same characters, but groupings of novels set in the same “world.”  Even so, I hear from readers who want more in that world, and I read about readers who think I’ve done enough [or too much] in that world.  Interestingly enough, very few of the complainers ever write me; they just complain to the rest of the world, and for me that’s just as well.  No matter what they say publicly, I don’t know a writer who wants to get letters or emails or tweets telling them to stop doing what they like to do… and I’m no different.

But those who complain about series being too long usually aren’t dealing with the characters or the stories.  From what I’ve seen and read, they’re the readers who’ve “exhausted” the magic and the gimmicks.  They’re not there for characters and insights, but for the quicker “what’s new and nifty?”  And there’s nothing wrong with that, but it’s not necessarily a reason for an author to stop writing in that world; it’s a reason for readers who always want the “new” to move on.

There’s still “new” in the Recluce Saga; it’s just not new magic.  Sometimes, it’s stylistic.  I’ve written books in the first person, in the third person past tense, and in the third person present tense.  I’ve connected two books with an embedded book of poetry.  I’ve told the novels from both the side of order and the side of chaos, and from male and female points of view.  Despite comments to the contrary, I’ve written Recluce books with teenaged characters, and with those in their twenties, thirties, forties, and older.  That’s a fair amount of difference, but it only registers if the reader is reading for what happens to the characters… and virtually all the critics and reviewers have noted that each book expands the world of Recluce.  I won’t write another Recluce book unless I can do that, and that’s why there’s often a gap of several years between books.  The same is true of books set in my other worlds.

So… I guess, for me, the answer is that I stop writing about a character or a world when I can’t show something new and different – even if the “new” is quiet, or lies in the characters rather than the magic.

Technology, Society, and Civilization

In today’s industrial states, most people tend to accept the proposition that the degree of “civilization” is fairly directly related to the level of technology employed by a society.  Whether as a result of that proposition or simply as a belief, each new technological gadget or invention is hailed as an advance.  But… how valid is that correlation?

In my very first blog [no longer available in the archives, for reasons we won’t discuss], I made a number of observations about the Antikythera Device, essentially a clockwork-like mechanical computer dating to 100 B.C. that tracked and predicted the movements of the five known planets and the moon, lunar and solar eclipses, as well as the future dates of the Greek Olympics.  Nothing this sophisticated was developed by the Roman Empire, or anywhere else in the world, until more than 1,500 years later.  Other remarkably sophisticated devices were developed in Ptolemaic Egypt, including remote-controlled steam engines that opened temple doors and magnetically levitated statues in those temples.  Yet both Greece and Egypt fell to the more “practical” Roman Empire, whose most “advanced” technologies were likely the invention of concrete, particularly concrete that hardened under water, and military organization.

The Chinese had ceramics, the iron blast furnace, gunpowder, and rockets a millennium before Europe, yet they failed to combine their metal-working skill with gunpowder to develop – and keep developing – firearms and cannon.  They had the largest and most advanced naval technology in the world at one point… and burned their fleet.  Effectively, they turned their backs on developing and implementing higher technology, yet for centuries, without doubt, they were the most “civilized” society on earth.

Hindsight is always so much more accurate than foresight, but often it can reveal and illuminate the possible paths to the future, particularly the ones best avoided. The highest level of technology used in Ptolemaic Egypt was employed in support of religion, most likely to reinforce the existing social structure, and was never developed in ways that could be used by any sizable fraction of the society for societally productive goals.  The highest levels of Greek technology and thought were occasionally used in warfare, but were generally reserved for the use of a comparatively small elite.  For example, records suggest that only a handful of Antikythera devices were ever created.  The widest-scale use of gunpowder by the early Chinese was for fireworks – not weapons or blasting powder.

Today, particularly in western industrial cultures, more and more technology is concentrated on entertainment, often marketed as communications, but when one considers the time spent and the number of applications on such devices, the majority are effectively entertainment-related.  In real terms, the amount spent on basic research and immediate follow-up in the United States has declined gradually, but significantly, over the past 30 years.  As an example, NASA’s budget is less than half of what it was in 1965, and in 2010, its expenditures will constitute the smallest fraction of the U.S. budget in more than 50 years.  For the past few years, the budget of NASA has been running around $20 billion annually.  By comparison, sales of Apple’s iPhone over nine months exceeded the annual NASA budget, and Apple is just one producer of such devices.  U.S. video game software sales alone exceed $10 billion annually.

By comparison, the early Roman Empire concentrated on using less “advanced” technology for economic and military purposes.  Interestingly enough, when technology began to be employed primarily for such purposes as building the Colosseum, flooding it with water, and staging naval battles with gladiators, all subsidized by the government, Roman power, culture, and civilization began to decline.

More high-tech entertainment, anyone?

Sacred? To Whom?

I’ll admit right off the top that I have a problem with the concept that “life is sacred” – not that I don’t feel that my life, and those of my wife and children and grandchildren, are sacred to me.  But various religions justify various positions on social issues on the grounds that human life is “sacred.”  I have to ask why human life, as opposed to other kinds of life, is particularly special – except to us.

Once upon a time, scientists and others claimed that Homo sapiens was qualitatively different from, and superior to, other forms of life.  No other form of life made tools, it was said.  No other form of life could plan logically, or think rationally.  No other form of life could communicate.  And, based on these assertions, most people agreed that humans were special and their life was “sacred.”

The only problem is that, the more we learn about life on our planet, the more every one of these assertions has proved to be wrong.  Certain primates use tools; even New Caledonian crows do.  A number of species do think and plan ahead, if not with the depth and variety that human beings do.  And research has shown, and is continuing to show, that other species do communicate, from primates to gray parrots.  Research also shows that some species have a “theory of mind,” again a capability once thought to be restricted to human beings.  But even if one considers just Homo sapiens, the most recent genetic research shows that a small but significant fraction of our DNA actually comes from Neandertal ancestors, and that research also indicates that Neandertals had the capability for abstract thought and speech.  That same research shows that, on average, both Neandertals and earlier Homo sapiens had slightly larger brains than people do today.  Does that make us less “sacred”?

One of the basic economic principles is that goods that are scarce are more valuable, and we as human beings follow that principle, one might say, religiously – except in the case of religion.  Human beings are the most common large species on the planet earth, six billion plus and growing.  Tigers and pandas number in the thousands, if that.  By the very principles we follow every day, shouldn’t a tiger or a panda be more valuable than a human?  Yet most people put their convenience above the survival of an endangered species, even while they value scarce goods, such as gems and gold, more than common goods.

Is there somehow a dividing line between species – between those that might be considered “sacred” and those that are not?  Perhaps… but where might one draw that line?  A human infant possesses none of the characteristics of a mature adult.  Does that make the infant less sacred?  A two-year-old chimpanzee has more cognitive ability than a human child of the same age, and far more than a human infant.  Does that make the chimp more sacred?  Even if we limit the assessment to fully functioning adults, is an impaired adult less sacred than one who is not?  And why is a primate who can think, feel, and plan less sacred than a human being?  Just because we have power… and say so?

Then, there’s another small problem.  Nothing living on earth can survive without eating, in some form or another, something else that is or was living.  Human beings do have a singular distinction there – we’re the species that has managed to get eaten by other species less than any other.  Yes… that’s our primary distinction… but is it adequate grounds for claiming that our lives, compared to the lives of other thinking and feeling species, are particularly special and “sacred”?

Or is a theological dictum that human life is sacred a convenient way of avoiding the questions raised above, and elsewhere?

Making the Wrong Assumption

There are many reasons why people, projects, initiatives, military campaigns, political campaigns, legislation, friendships, and marriages – as well as a host of other endeavors – fail, but I’m convinced that the largest and least recognized reason for such failures is that those involved make incorrect assumptions.

One incorrect assumption that has bedeviled U.S. foreign policy for generations is that other societies share our fundamental values about liberty and democracy.  Most don’t.  They may want the same degree of power and material success, but they don’t endorse the values that make our kind of success possible.  Among other things, democracy is based on sharing power and on compromise – a fact, unfortunately, that all too many U.S. ideologues fail to recognize, which may in fact destroy the U.S. political system as envisioned by the Founding Fathers and as developed by their successors… at least until the last generation.  Theocratically based societies neither accept nor recognize either compromise or power-sharing – except as a last resort, to be abandoned as soon as possible.  A related assumption is that peoples can act and vote in terms of the greater good.  While this is dubious even in the United States, it’s an insane assumption in a land where allegiance to the family or clan is paramount and where children are taught to distrust anyone outside the clan.

On a smaller scale, year after year, educational “reformers” in the United States assume, if tacitly and by their actions, that the decline in student achievement can be reversed solely by testing and by improving the quality of teachers.  This assumption is fatally flawed because student learning requires two key factors – teachers who can and are willing to do the work of teaching, and students who can learn and are willing to learn.  Placing all the emphasis on teachers and testing assumes that a single teacher in a classroom can and must overcome all the pressures of society, the media, the peer pressure to do anything but learn, the idea that learning should be fun, and all the other societal pressures that are antithetical to the work required to learn.  There is a comparative handful of teachers who can work such miracles, but basing educational policy and reforms on those who are truly exceptional is both poor policy and doomed to failure.  Those who endorse more testing as a way to ensure that teachers teach the “right stuff” assume that the testing itself will support the standards – which it won’t, if the students aren’t motivated – not to mention the fact that more testing leaves less time for teaching and learning.  So, by a de facto assumption, not only does the burden of teaching fall upon educators, but so does the burden of motivating the unmotivated and disciplining the undisciplined, at a time when society has effectively removed the traditional forms of discipline without providing any effective replacements.  Yet the complaints mount, and American education keeps failing, even as the “reformers” keep assuming that teachers and testing alone can stem the tide.

For years, economists used what can loosely be termed “the rational person” model for analyzing the way various markets operate.  That assumption has proved to be horribly wrong, as recent studies – and economic developments – have shown, because in all too many key areas, individuals do not behave rationally.  Most people refuse to cut their losses, even at the risk of losing everything, and most continue uneconomic behaviors not in their own interests, even when they perceive such behaviors in others as irrational and unsound.  Those who distrust the market system assume that regulation, if only applied correctly, can solve the problems, and those who believe that markets are self-correcting assume that deregulation will solve everything.  History and experience suggest both assumptions are wrong.

In more than a few military conflicts over recent centuries, military leaders have assumed that superior forces and weapons would always prevail.  And… if the military command in question does indeed have such superiority and is willing to employ it efficiently to destroy everything that might possibly stand in its way, then “superiority” usually wins.  This assumption fails, however, in all too many cases where one is unable or unwilling to carry out the requisite slaughter of the so-called civilian population, or when military objectives cannot be quickly obtained – because, in virtually every war of any length, a larger and larger fraction of the civilian population becomes involved on one side or another, and “superiority” shifts.  In this regard, people usually think of Vietnam or Afghanistan, but, in fact, the same sort of shift occurred in World War II.  At the outbreak of WWII in 1939, the British armed forces had about 1 million men under arms, the U.S. 175,000, and the Russians 1.5 million.  Together, the Germans and Japanese had over 5 million trained troops and far more advanced tanks, aircraft, and ships.  By the end of the war, those ratios had changed markedly.

While failure can be ascribed to many causes, I find it both disturbing and amazing that the basic assumptions behind bad decisions are seldom brought forward as causal factors… and I have to ask, “Why not?”  Is it because, even after abject failure, or a costly success that didn’t have to be so costly, no one wants to admit that their assumptions were at fault?