Archive for June, 2010

Marketing F&SF

Recently, Brad Torgersen made a lengthy comment about why he believed that F&SF, and particularly science fiction, needs to “popularize” itself, because the older “target market” is… well… old and getting older, and the younger readers tend to come to SF through such venues as media tie-in novels, graphic novels, and “popular” fiction.  While he’s absolutely correct in the sense that any vital genre has to attract new readers in order to continue, he unfortunately is under one major misapprehension – that publishers can “market” fiction the way Harley-Davidson marketed motorcycles.  Like it or not, publishing – and readers – don’t work quite that way.

Don’t get me wrong.  There’s quite a bit of successful marketing in the field, but one reason why there are always opportunities for new authors is that it’s very rare that a publisher can actually “create” a successful book or author.  I know of one such case, and it was enabled by a smart publisher and a fluke set of circumstances that occurred exactly once in the last two decades.  Historically, and practically, what happens far more often is that, of all the new authors published each year, one or two, if that, appeal widely or, if you will, popularly.  Once that happens, a savvy publisher immediately brings all possible marketing tools and expertise to bear to publicize and expand that reading base and highlight what makes that author’s work popular.

In short, there has to be a larger than “usual” reader base to begin with, and the work in question has to be “popularizable.”  I do have, I think it’s fair to say, such a reader base, but, barring some strange circumstances, that reader base isn’t likely to expand wildly to the millions, because what and how I write require a certain amount of thought for fullest appreciation, and the readers who flock to each new multi-million-selling novelistic sensation are looking primarily for either (1) entertainment, (2) a world with the same characters that they can identify with for years, or (3) a “fast” read – preferably all three, but certainly two out of three.

All this doesn’t mean that publishers can’t do more to expand their readership, but it does mean that such expansion has to begin by considering and publishing books that are likely to appeal to readers beyond those of the traditional audience, without alienating the majority of those traditional readers.  And in fact, one way that publishers have been trying to reach beyond the existing audience is by putting out more and more “supernatural” fantasy dealing with vampires and werewolves, as well as books with more explicit sexual content.  The problem with this approach is that such books tend not to appeal to those who like science fiction and/or tech-oriented publications, while also tending to alienate a significant percentage of older readers – as opposed to, as Brad pointed out, media tie-in novels, which appeal across a wider range of ages and backgrounds.  Another problem is that writing science fiction, as opposed to fantasy, takes more and more technical experience and education, and fewer and fewer writers have that background.  That’s one reason why SF media tie-in novels are easier to write – most of the technical trappings have been worked out, one way or another.

I don’t have an easy answer, except to say that trying to expand readership by extending series from authors with “popular” appeal, or by copying or trying to latch on to the current fads, has limited effectiveness.  Personally, I tend to believe that just looking for good books, whether or not they fit into current popularity fads, is the best remedy, but that may just be a reflection of my views and mark me as “dated.”

In any case, Brad has pointed out a real problem facing science fiction, in particular, and one that needs more insight and investigation by editors and publishers in the field.

The Vanishing/Vanished Midlist?

Several weeks ago, I attended a science fiction convention where the guest of honor was a writer who spent some 20 years as what one might call a “high mid-list author,” someone able to work full-time as a writer and pay the bills.  Except… several years ago, this came to an end for the writer.  Oh… the writer in question still publishes two books a year, but they aren’t selling as well as earlier books, although those who read them claim they’re as good as, if not better than, the earlier work, and making ends meet now requires additional outside work as a consultant and educator.  To make matters worse, at least from my point of view, this writer produces work that is more than mere entertainment and mental cotton-candy.

Interestingly enough, more and more of the books cited by “critical” reviewers in the F&SF field [with whom I have, as most know, certain “concerns”] seem to come from smaller presses.  This is creating, I believe, an almost vicious cycle in F&SF publishing.  The more the books praised by reviewers come from small presses, the more larger publishers get the message that “good” or “edgy” or “thoughtful” books don’t sell as well, and the greater the almost subconscious pressure to opt for “fiction-fun” or “fiction-light.”  To their credit, certain publishers, including mine, thankfully, are resisting this trend, but I’m still seeing more of those novels that are gaming and media tie-ins or endless series.  And yes, the Recluce Saga is long, but… as I keep pointing out, no character has more than two books.  I don’t have eight or ten or fifteen books endlessly spinning improbable stories and extensions about the same character or characters.

With the drastic changes in wholesale distribution over the past decade or so, virtually no mid-list books receive such distribution, except perhaps lower-selling titles of big-name authors.  As a result of these trends, the midlists of at least some large publishers that were once the home of “thoughtful” books are shrinking.  Some such midlist writers have found homes with the smaller presses, but small press distribution systems often are not as extensive.  That has resulted in lower sales for the authors who wrote those books, and lower sales mean lower incomes, and either cutting back on writing or holding down other jobs… or… trying to re-invent oneself with another form of “fiction-light.”

I’ve heard from many who believe that e-book sales can help here, but the sales figures I’ve seen suggest that e-books do more for those books that have high sales levels and wide distribution in hardcover and paperback – and those aren’t the midlist books.

It almost appears that midlist F&SF titles are going to become a ghetto within the genre… and that concerns me.  The trend is certainly affecting all authors, but particularly those who once wrote good midlist books and made a living at it… and now can’t.

Electronic Free-Loading… and Worse

Even with spam “protection,” the amount of junk email that my wife and I receive is astronomical – less than one in fifty emails is legitimate.  The rest are spam and solicitations.  Now I’m getting close to a hundred attempted “spam” comments on the website daily, all of them with embedded links to sell or promote something. That’s just one facet of the problem.  Another facet is the continual proliferation of attempts at phishing and identity theft.  It makes one want to ask – have there always been so many people trying to make a buck, rupee, ruble, Euro, or whatever by freeloading or preying on others?

I know that con artists have been around since the beginning of history, but never have such numbers been so obvious and so intrusive to so many.  Is this the inevitable result of an electronic technology that makes theft, fraud, and blatant self-promotion at the expense and effort of others a matter of keyboarding at a distance?  At one time, these types of offenses had to be carried out in person and embodied a certain amount of risk and a probability of detection and usually criminal punishment.  Now that they can be accomplished via virtually untraceable [for practical purposes] computer/internet access, they’ve proliferated to the point where virtually every computer connected to the net runs the risk of some sort of loss or damage – a form of computer Russian roulette.

But what I find the most disheartening about this is the fact that so many people, once the risk and criminal penalty factors were so dramatically reduced by technology, set out to exploit and fleece others.  Even those of us not yet fleeced or exploited have to spend time and effort, and buy additional software, to deal with these intrusions.  I have to sort through the potential comments quarantined by the system several times a day, because a few are legitimate and deserve to be posted, and I still have to take time to delete all the unwanted email.  I have to pay for protective software, and so forth.  In effect, every computer user is being taxed in terms of time, money, and risk by this radical expansion of the unscrupulous.

Now… those who are extreme technophiles will claim that the downsides of our technologically based communications/computing systems are negligible… or at least that the benefits far outweigh the downsides.  But the problem here is that most of the benefits, especially in terms of costs, go to large institutions and the unscrupulous, while the downsides fall on the rest of us.  I don’t see, for example, that the internet enables more good writers; it enables writers who are better self-promoters – and some good writers are, and a great many aren’t.  In trying to evaluate honestly what I do on the net, I suspect that my internet presence is similar to treading water.  I’m not losing much ground to the blatant self-promoters, but for all the effort it requires, I’m not gaining either, and it’s time spent when I can’t be writing.  Yet if I don’t do it, my sales will suffer, especially given the recent spurt in the growth of e-books, which, I have to admit after looking at recent sales figures, is real [and yes, some of you were right].

I don’t see that the internet is that useful in enabling small businesses, because there are so many, and the effort and ingenuity required to attract customers are considerable, but it certainly allows large ones to contact everyone.  And it certainly allows every variety of cyber-criminal potential access to a huge variety of victims with almost no chance of being detected, let alone prosecuted and punished.  The idea of privacy has become almost laughable, even for those of us who don’t patronize social networking sites.

Cynical as I may be, my hopes have always been that technology would be employed to enable the best to be better, and the rest to improve who and what they are.  Yet… I have this nagging feeling that, more and more, technology, particularly communications technology, is dragging down far more people than it is improving, especially ethically… and, even if it isn’t, it’s creating a tremendous diversion of time from actual productive work.  That diversion may pay for itself in manufacturing-based industries, but it’s a definite negative force in areas such as writing and other creative efforts.  In a society that is becoming ever more dependent on technology, unless matters change, this foreshadows a future in which marketing and hype become ever more present and dominant, even as the technophiles are claiming communications technology makes life better and better.

Better and better for whom?  And what?

Fantasy… Should be Fun?

The other day, when reading a blogger’s review of The Soprano Sorceress, I came across an interesting question, clearly meant to be rhetorical – what point was there to reading a fantasy if the reader didn’t like the fantasy world created by the author?  It’s a good question, but not necessarily in the way that the reviewer meant, because his attitude was more one of wanting to avoid reading about worlds he didn’t like, particularly since he also asked what fun there was in reading about such a world.

Yet… I have to confess that there are authors I probably won’t read again because I don’t care that much for their worlds, just as there are authors I won’t read again because I don’t care for their characters.  In particular, I don’t care for characters who make mistakes and errors that would prove fatal in any “realistic” world situation, yet who survive for book after book [I presume, because the series continues, even if I’m no longer reading them].  Obviously, those kinds of books have great appeal, because millions upon millions of them sell, and maybe that’s the “fun” in reading them.

But there’s a distinction between “good” and “fun,” and often one between “entertaining” and “thought-provoking,” and there are readers who prefer each type, although sales figures suggest that “fun” and “entertaining” are the categories that tend to outsell others significantly, often by orders of magnitude.

The question the blogger reviewer asked, however, holds within it an assumption that all too many of us have – that “our” view is the only reasonable way of looking at a particular book… and that, I think, is why I tend to be reluctant in reading reviews, either those considered “professional” or those less so, because the vast majority of reviewers start from the unconscious presupposition that theirs is the only “reasonable” way of looking at a given book.  The more “professional” the reviewer is, the less likely this presupposition is to occur, but there are still well-known reviewers and review publications that fall regularly into this mind-set.  The problem lies not only in the expectations of the reviewer, but also in the knowledge base – or the lack of knowledge – that the reviewer possesses.  A novel that uses allusions heavily to disclose character will seem shallow to the reader or reviewer who does not understand those referents.  A reader unfamiliar with various “sub-cultures,” such as the corporate or legal worlds, politics, the military, or academia, is likely to miss many subtleties of the type where explanation would destroy the effect.  Because of this “sub-culture” blindness, certain books, or parts of certain books, tend to be less entertaining – or even boring – to those unfamiliar with the subculture, whereas a reader who understands those subcultures may be smiling or even howling with laughter.

As a side note, despite the impression that some bloggers have apparently gained from this site, I do read blog reviews of my work and that of other authors on a continuing basis, if sometimes reluctantly.  Why reluctantly?  Because it’s more often painful than not.  For me as a writer, such blogs often raise the question of why the reader didn’t understand certain matters that appear so obvious to me.  Could I have done something better, or was the matter presented well and the reader didn’t get it?  Half and half?  Such questions and second-guessing, I feel, are necessary if any writer wants to improve, no matter how long he or she has been writing… but I suspect any author who claims the process is enjoyable or entertaining is either lying or a closet masochist.  As part of being a professional, an author should know, I personally believe, the range of reactions to his or her work, as well as the reasons behind those reactions.  But, please, let’s not have commentators suggest that we’re somehow outdated, out of touch, or unreasonable when we suggest that the process isn’t always as pleasurable to us as it apparently is to those who take great delight in complaining about what they perceive as deficiencies in what we write.  Sometimes, indeed, the deficiencies are the writer’s, but many times they lie in the reviewer, and where the deficiencies may lie, or even whether there are any, isn’t always obvious to most readers of either blog or professional reviews… or even of professional blog reviews.

Sometimes… Just Sometimes… We Get It Right

Way back in 1958, in the so-called “Golden Age” of science fiction, Jack Vance wrote a book called The Languages of Pao, in which he postulated that language drastically affects human thought patterns and, thus, the entire structure of a culture or civilization.  A more scholarly statement of this is the linguistic relativity principle, otherwise known as the Sapir-Whorf hypothesis, of which there are two versions.  The strong version states that language limits and determines cognitive categories; the weaker version merely suggests that language influences thought and certain non-linguistic behaviors.  The Sapir-Whorf hypothesis was thought to be discredited by color-related experiments in the 1960s, because researchers found that language differences did not seem to affect color perception or usage.

Recent studies of human brain patterns and linguistic development, reported in the June 1st edition of New Scientist, strongly suggest that there is not, as previously thought, a genetically determined “universal” human instinct or hard-wired pattern for language common to all human beings, but that languages are in fact learned and used in often totally different ways by speakers of different tongues.  Thus, as speculated by Vance, languages do in fact shape not only the way we think, but the very way in which we see the world.  And, as occasionally happens, though not so often as we science fiction writers would like to think or claim, one of us has actually anticipated a fundamental discovery – one that has profound implications for human civilization, implications that I don’t think most people have fully considered.

If this research is accurate, then, for example, intractable cultural differences may well lie in the linguistic patterns of a culture.  A language that offers many ways in which to accurately express the same concept or thought would likely promote more openness of thought than a language in which there is literally only one correct way in which that thought can be expressed.  A language/culture that allows rapid linguistic innovation may promote change and development… but it might well have the downside of undermining standards, because standards, as represented by language, are not seen as fixed or immutable.  We already know that words expressing concepts such as “freedom” or “equality” do not “translate” into exactly the same meanings in different cultures, and this research offers insights into why the differences go beyond mere semantics.

These possibilities have certainly been considered in human history, if only instinctively or subconsciously.  For centuries, the Roman Catholic Church resisted the translation of the Bible into any other language, insisting it be read and taught only in Latin.  Since 1635, with a few years in abeyance during the French Revolution, L’Academie Francaise has policed usage and linguistic development in France, attempting to restrict or eliminate the use of Frenchified Anglicisms.  And languages do affect other aspects of human behavior.  Recent studies have shown that speakers of tonally-inflected languages have far, far higher rates of perfect pitch than do speakers of languages that are not tonally inflected.  Not entirely coincidentally, it seems to me, cultures speaking such languages also appear to produce more successful classical musicians.

A more disturbing aspect of the research is the possibility that linguistic differences may well create cultural “understanding” divides that are difficult, if not impossible, to bridge, simply because the languages create antithetical patterns of thought, so that a speaker of one language literally cannot comprehend, on an emotional level, the concepts and values behind the words of a speaker of another language.  The initial research suggests that the magnitude of variances in linguistic learning patterns ranges from very slight to quite significant… and it will be interesting to see if such differences can ever be quantified.  But it does appear that speaking another language goes far beyond the words.

And a science fiction writer pointed out the cultural implications and ramifications for societies first.

Pressing the Limits

As both individuals and as a species, human beings have always had a tendency to press the limits, both of their societies and their technologies.  This tendency has good points and bad points… good because, without it, we wouldn’t have developed as a species, and life would still be in the “natural state” – “nasty, brutish, and short,” in Thomas Hobbes’s pithy observation from Leviathan.  The “bad” side of pressing the limits has been minimized, because the advantages have been so much greater over time than the drawbacks.

Except… the costs and the consequences of pushing technology to the limit may now in some cases be reaching the point where they outweigh the overall benefits, and not just in military areas.

The latest and most dramatic evidence of this change is, of course, the current Gulf of Mexico oil rig explosion and the subsequent oil blowout.  Deep-sea drilling and production platforms are required to have redundant blow-out preventers in place… as did the BP rig.  But the blow-out preventer failed.  Such failures are exceedingly rare.  Repeated tests show that these devices work over 99% of the time, although something like 60 have failed in equipment tests.  The Gulf oil disaster just happens to be one of the few times a failure has occurred in actual operation, and it represents the largest such failure in terms of crude oil released.  What’s being overlooked, except by the environmentalists, who, so far as I can tell, are operating more on a dislike of off-shore drilling than on a reasoned technical analysis, is the fact that around 6,000 offshore drilling platforms of one sort or another are in service world-wide, and that number is increasing.  That number will increase whether the U.S. bans more offshore drilling or not.  From 1992 to 2006, the Interior Department reported 39 blow-outs at platforms in the Gulf of Mexico, and although none were as serious as the latest, that’s more than two a year; yet that still represents a safety record of 99.93%.  In short, there’s not a lot of margin for error.  What makes the issue more pressing is that drilling technology is able to drill deeper and deeper – and the pressures involved at ever greater depths put increasing stress on the equipment, to the point where, as is apparent with the BP disaster, stopping the flow of oil after a failure becomes extraordinarily difficult and exceedingly expensive, as well as time-consuming.  Because crude oil is devastating to the environment, the follow-on damage to the ecosystems and the economy of the surrounding area will create far greater costs than capping the well.
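For what it’s worth, the 99.93% figure can be roughly back-solved.  Assuming – and this is my assumption, not a figure from the Interior Department report – that the percentage is computed per well drilled, the implied base is:

$$N \approx \frac{39}{1 - 0.9993} = \frac{39}{0.0007} \approx 55{,}700$$

In other words, something on the order of 55,000 wells drilled in the Gulf over those fifteen years – which is precisely why a “mere” 0.07% failure rate still produces more than two blow-outs a year.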

Pushing technology beyond safe limits is nothing new to human beings.  When steam engines were first introduced, the desire for power and speed led to scores, if not hundreds, of boiler explosions.  Occasionally, disasters led to changes, such as the phasing out of hydrogen dirigibles after the Hindenburg fire and crash, but that change was also made easier by the improvements in aircraft, which were far faster than dirigibles.  The costs of other disasters are still with us – and we tend to overlook them.  The town of Centralia, Pennsylvania, has largely been abandoned because the coal seams in the mostly worked-out mines beneath the town caught fire and have been smoldering away for more than forty years, causing the ground above to collapse and continually releasing toxic gases.  In Pennsylvania alone, there are more than 30 such subterranean fires.  World-wide there are more than 3,000, some of which release more greenhouse gases and other toxic fumes than some coal-fired power plants.  Yet few of these fires are more than watched, because no technology exists that can extinguish them in any fashion close to cost-efficient – and in some cases they cannot be extinguished at all, because the fires burn so deep.

Pushing electronic technology to the limits, without regard for the implications, costs, and other downsides, has resulted in a world linked together in such a haphazard fashion that a massive solar flare – or a determined set of professional hackers – could conceivably bring down an entire nation’s communications and power distribution network.  And that doesn’t even take into account the vast increase in the types and amounts of exceedingly toxic wastes created on a world-wide scale, most of which are still not handled as they should be.  Another area where technology is being pressed to the limits is bio-tech, where scientists have reported creating the first synthetic cell.  While its creators engineered in considerable safeguards, once that technology is more widely available, will everyone who uses it be so careful?

As illustrated by the BP disaster, when we, as a society, push technology to its limits on a large scale, for whatever reason, the implications of a technological or systems failure are getting to the point where we require absolute safety in the operation of those systems – and obtaining such assurance is never inexpensive… and sometimes not even possible.

But then again… if we tweaked existing technology just a bit more so that we could get even more out of it… get more oil, more bandwidth, make more profit…

When to Stop Writing… [With Some “Spoilers”]

The other day I ran across two comments on blogs about my books.  One said that he wished I’d “finish” more books about characters, that he just got into the characters and then the books ended.  The other said that I dragged out my series too long.  While the comments weren’t about quite the same thing, they did get me to thinking.  How much should I write about a given character?  How long should a series be?

The simple and easy answer is that I should write as long as the story and the series remain interesting.  The problem with that answer, however, is… interesting to whom?

Almost every protagonist I’ve created has resulted in a greater number of readers asking for more stories about that particular character, and every week I get requests or inquiries asking if I’ll write another story about a particular character.  That’s clearly because those readers identified with and/or greatly enjoyed that character… and that’s what every author likes to hear.  Unfortunately, just because a character is memorable to readers doesn’t mean that there’s another good story there… or that another story about that character will be as memorable to all readers.

Take Lerris, from The Magic of Recluce.  By the end of the second book about him, he’s prematurely middle-aged as a result of his use of order and chaos to save Recluce from destruction by Hamor… and his actions have resulted in death and destruction all around him, not to mention that he’s effectively made the use of order/chaos magic impossible on a large or even moderate scale for generations to come.  What is left for him in the way of great or striking deeds?  Good and rewarding work as a skilled crafter, a happy family life?  Absolutely… but there can’t be any more of the deeds, magic, and action of the first two books.  That’s why there won’t be any more books about Lerris.  If I wrote another book about Lorn… another popular character… for it to be a good book, it would have to be a tragedy, because the only force that could really thwart or even test him is Lorn himself.  After a book in which a favorite character died – albeit of old age, after forty years of magic-working – and all the flak I took from readers who loved her, I’m understandably reluctant to go the tragic route again.  So… for me, at least, I try to stop when the best story’s been told, and when creating an even greater peril or trial for the hero would be totally improbable for the world in which he or she lives.

For the same reason, because I’ve never written more than three books about a given main character, my “series” aren’t series in the sense of eight or ten books about the same characters, but groupings of novels in the same “world.”  Even so, I hear from readers who want more in that world, and I read about readers who think I’ve done enough [or too much] in that world.  Interestingly enough, very few of the complainers ever write me; they just complain to the rest of the world, and for me that’s just as well.  No matter what they say publicly, I don’t know a writer who wants to get letters or emails or tweets telling him or her to stop doing what he or she likes to do… and I’m no different.

But those who complain about series being too long usually aren’t dealing with the characters or the stories. From what I’ve seen and read, they’re the readers who’ve “exhausted” the magic and the gimmicks.  They’re not there for characters and insights, but for the quicker “what’s new and nifty?”  And there’s nothing wrong with that, but it’s not necessarily a reason for an author to stop writing in that world; it’s a reason for readers who always want the “new” to move on.  There’s still “new” in the Recluce Saga; it’s just not new magic.  Sometimes, it’s stylistic.  I’ve written books in the first person, the third person past tense, the third person present tense.  I’ve connected two books with an embedded book of poetry.  I’ve told the novels from both the side of order and the side of chaos, and from male and female points of view.  Despite comments to the contrary, I’ve written Recluce books with teenaged characters, and those in their twenties, thirties, forties, and older. That’s a fair amount of difference, but only if the reader is reading for what happens to the characters… and virtually all the critics and reviewers have noted that each book expands the world of Recluce.  I won’t write another Recluce book unless I can do that, and that’s why there’s often a gap of several years between books.  The same is true of books set in my other worlds.

So… I guess, for me, the answer is that I stop writing about a character or a world when I can’t show something new and different, although that newness may be a quiet newness, or a newness of character.

Technology, Society, and Civilization

In today’s modern industrial states, most people tend to accept the proposition that the degree of “civilization” is fairly directly related to the level of technology employed by a society.  Whether as a result of that proposition or as an article of belief, each new technological gadget or invention is then hailed as an advance.  But… how valid is that correlation?

In my very first blog [no longer available in the archives, for reasons we won’t discuss], I made a number of observations about the Antikythera Device, essentially a clockwork-like mechanical computer dating to 100 B.C. that tracked and predicted the movements of the five known planets and the moon, lunar and solar eclipses, and the future dates of the Greek Olympics.  Nothing this sophisticated was developed by the Roman Empire, or anywhere else in the world, until more than 1500 years later.  Other extremely sophisticated devices were developed in Ptolemaic Egypt, including remote-controlled steam engines that opened temple doors and magnetically levitated statues in those temples.  Yet both Greece and Egypt fell to the more “practical” Roman Empire, whose most “advanced” technologies were likely the invention of concrete, particularly concrete that hardened under water, and military organization.

The Chinese had ceramics, the iron blast furnace, gunpowder, and rockets a millennium before Europe, yet they failed to combine their metal-working skill with gunpowder to develop and continue developing firearms and cannon.  They had the largest and most advanced naval technology in the world at one point… and burned their fleet.  Effectively, they turned their backs on developing and implementing higher technology, but for centuries, without doubt, they were the most “civilized” society on earth.

Hindsight is always so much more accurate than foresight, but often it can reveal and illuminate the possible paths to the future, particularly the ones best avoided. The highest level of technology used in Ptolemaic Egypt was employed in support of religion, most likely to reinforce the existing social structure, and was never developed in ways that could be used by any sizable fraction of the society for societally productive goals.  The highest levels of Greek technology and thought were occasionally used in warfare, but were generally reserved for the use of a comparatively small elite.  For example, records suggest that only a handful of Antikythera devices were ever created.  The widest-scale use of gunpowder by the early Chinese was for fireworks – not weapons or blasting powder.

Today, particularly in western industrial cultures, more and more technology is concentrated on entertainment, often marketed as communications, but when one considers the time spent on such devices and the number of applications loaded onto them, the majority are effectively entertainment-related.  In real terms, the amount spent on basic research and immediate follow-up in the United States has declined gradually, but significantly, over the past 30 years.  As an example, NASA’s budget is less than half of what it was in 1965, and in 2010, its expenditures will constitute the smallest fraction of the U.S. budget in more than 50 years.  For the past few years, NASA’s budget has been running around $20 billion annually.  By comparison, sales of Apple’s iPhone over 9 months exceeded the annual NASA budget, and Apple is just one producer of such devices.  U.S. video game software sales alone exceed $10 billion annually.
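To make the scale of that decline concrete, here is a rough budget-share comparison, using round figures that are my own approximations rather than numbers from this column: NASA received roughly $5.25 billion of about $118 billion in federal outlays in FY1965, versus roughly $18.7 billion of about $3.46 trillion in FY2010:

$$\frac{5.25}{118} \approx 4.4\% \qquad \text{versus} \qquad \frac{18.7}{3{,}456} \approx 0.5\%$$

By that measure, NASA has fallen from well over four percent of federal spending to about half of one percent.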

By comparison, the early Roman Empire concentrated on using less “advanced” technology for economic and military purposes.  Interestingly enough, when technology began to be employed primarily for such purposes as building the Colosseum, flooding it with water, and staging naval battles with gladiators, all subsidized by the government, Roman power, culture, and civilization began to decline.

More high-tech entertainment, anyone?

Sacred? To Whom?

I’ll admit right off the top that I have a problem with the concept that “life is sacred” – not that I don’t feel that my life, and those of my wife and children and grandchildren, are sacred to me.  But various religions justify various positions on social issues on the grounds that human life is “sacred.”  I have to ask the question why human life, as opposed to other kinds of life, is particularly special – except to us.

Once upon a time, scientists and others claimed that Homo sapiens were qualitatively different and superior to other forms of life.  No other form of life made tools, it was said.  No other form of life could plan logically, or think rationally.  No other form of life could communicate.  And, based on these assertions, most people agreed that humans were special and their life was “sacred.”

The only problem is that, the more we learn about life on our planet, the more every one of these assertions has proved to be wrong.  Certain primates use tools; even New Caledonian crows do.  A number of species do think and plan ahead, if not in the depth and variety that human beings do.  And research has shown and is continuing to show that other species do communicate, from primates to gray parrots.  Research also shows that some species have a “theory of mind,” again a capability once thought to be restricted to human beings.  But even if one considers just Homo sapiens, the most recent genetic research shows that a small but significant fraction of our DNA actually comes from Neandertal ancestors, and that genetic research also indicates that Neandertals had the capability for abstract thought and speech.  That same research shows that, on average, both Neandertals and earlier Homo sapiens had slightly larger brains than do people today.  Does that make us less “sacred”?

One of the basic economic principles is that goods that are scarce are more valuable, and we as human beings follow that principle, one might say, religiously – except in the case of religion.  Human beings are the most common large species on the planet earth, six billion plus and growing.  Tigers and pandas number in the thousands, if that.  By the very principles we follow every day, shouldn’t a tiger or a panda be more valuable than a human?  Yet most people put their convenience above the survival of an endangered species, even while they value scarce goods, such as gems and gold, more than common goods.

Is there somehow a dividing line between species – between those that might be considered “sacred” and those that are not?  Perhaps… but where might one draw that line?  A human infant possesses none of the characteristics of a mature adult.  Does that make the infant less sacred?  A two-year-old chimpanzee has more cognitive ability than a human child of the same age, and far more than a human infant.  Does that make the chimp more sacred?  Even if we limit the assessment of species to fully functioning adults, is an impaired adult less sacred than one who is not?  And why is a primate who can think, feel, and plan less sacred than a human being?  Just because we have power… and say so?

Then, there’s another small problem.  Nothing on earth that is living can survive without eating, in some form or another, something else that is or was living.  Human beings do have a singular distinction there – we’re the species that has managed to get eaten by other species less than any other.  Yes… that’s our primary distinction… but is that adequate grounds for claiming that our lives, compared to the lives of other thinking and feeling species, are particularly special and “sacred”?

Or is a theological dictum that human life is sacred a convenient way of avoiding the questions raised above, and elsewhere?