Archive for the ‘General’ Category

Corruption [Part I]

Corruption is, in some form or another, endemic to human societies and has been throughout history. The only question seems to be in what forms it exists and to what degree it impacts societies and individuals.

At present, the United States is facing a heated political issue over immigration, but what I find disheartening about the debate is that it is centered almost entirely on the symptoms of a larger set of problems, rather than on the problems themselves.  The estimated eleven million illegal immigrants who have flooded into the United States, especially into and through the American Southwest, are a problem, yes, but they’re symptoms of a far larger set of problems that the majority of individuals and politicians are ignoring with various phrases along the lines of, “We have to stop the illegal immigration and deal with it first before we can address the other problems.”

Duh!  Given that we share a border of over 2,000 miles with Mexico, there is no cost-effective and practical way to seal that border.  Doing so would require spending tens of billions of dollars erecting and manning guard towers and shooting people – or doing the equivalent with RPVs and technology.  Among other things, I really don’t like the idea of the United States, the land of the free, being reduced to creating the western equivalent of the Berlin Wall, while instituting a police state within those walls to determine who’s here “legally” and who’s not.

The second problem is that it’s still not likely to work, because the pressures that have created that massive flow of immigrants still remain and are increasing. One of those pressures, like it or not, is that a significant percentage of the Mexican government, especially on the local level, is so corrupt that the drug cartels are often considered more honest and reliable than the government. The associated problem is that the drug cartels operate one of the most profitable lines of business in the world – and the most affluent customers in the largest single national market happen to be Americans.  Because corruption in Latin America has often rendered government powerless, the various cartels are fighting for share of the drug market there – and in parts of the American Southwest – and unlike American commercial enterprises, they’re fighting for that market share with guns and bullets.

One of the other aspects of governmental corruption is a proliferation of paperwork, regulations, etc., that cannot be surmounted except through some sort of bribery.  This makes any sort of business growth extremely difficult, and often dangerous, and without business growth the economy and people suffer.  While the United States has its share of regulations and paperwork, our form of “bribery” is a “legal” combination of bureaucrats, lawyers, and politicians [it’s more complex than this, but the extended principles still hold in the more complex reality of U.S. commerce and law].  We have more bureaucrats than we ought to have because, as we’ve discovered over our history, without them the business and moneyed interests tend to work people into an early grave under unsafe conditions.  To combat the excessive zeal of the bureaucrats, we have attorneys.  And we have politicians, who respond to both campaign contributions and voter ire.  It’s, frankly, a form of legalized bribery and interest pandering, but it does get the job done without having every petty official demanding a bribe under the threat of shutting down a business or sending someone to jail for violating this or that minor rule.  It also tends to keep the competition for consumer dollars and market share confined to the economic and political arenas, rather than being fought out with guns.

The problem is that, for whatever reason, very few Latin American governments have been able to institutionalize within a legal framework the power struggles of competing interests or to control “corruption,” and as the economic stakes get higher and higher, so does the level of violence.  Thus, given the increasing lack of safety in Mexico, the ever-increasing number of deaths and kidnappings, not to mention the lack of economic opportunity, is it any wonder that people want to leave?  And since the problems exist to some degree or another in all too many Latin American countries, what destination is the logical choice?

“Merely” building a wall won’t solve the problems.  Nor will ignoring the fact that one of the driving factors behind all this is the apparently insatiable appetite of Americans for illegal drugs.  The United States imprisons a greater percentage of its population than any other industrialized nation in the world, the vast majority these days for drug-related offenses, and all that imprisonment doesn’t seem to have put more than a small dent in the drug trade.

So… in a very real sense, our own “drug corruption” is fueling the chaos and fighting over drug market share in Mexico and the American Southwest… which in turn fuels the pressures for immigration to the United States.  [To be continued]

The E-Book Revolution

For several years now, various prophets have predicted that e-books would be the wave of the future, and… lo and behold, Amazon.com has just announced that, for the first time ever, e-books outsold hardcovers over a recent period.  It’s to be expected that Amazon would be the first outlet to report such news, given Amazon’s emphasis on e-books and its own Kindle, and given Amazon’s appeal to tech-savvy readers. But what exactly does this mean?

Is it the great revolution in publishing… or a sign of the end of culture in the United States and the rest of the western world?  Of course, the obvious reply to such an absurd question would be neither… but I’m not so sure that the rise of e-books doesn’t contain some elements of each.

The rise in e-book sales, especially given the marketing models and patterns in the publishing industry, is going to have a very hefty impact on true professional full-time authors, by which I mean those authors who make their living solely by writing.  That impact is already being felt, and it’s anything but positive.  Moreover, the e-book impact is being exacerbated by other social trends, most notably the marked decrease in paperback book sales.  According to my sources in the publishing industry, initial paperback print runs in F&SF are averaging 40-60% fewer copies than those for comparable books ten years ago.  Even noted “mainstream authors” who sell millions of paperback books are seeing significant drops in paperback sales numbers.

Now that e-books are being made available, at least in my case and that of other authors, on the same day as hardcovers, any e-book sale that replaces a hardcover sale results in a direct drop in income for the author.  Depending on the author’s royalty rates and sales numbers, that drop in income could be as little as 10 cents per copy or as high as $2.60 per copy.  As for paperback books, the impact varies by when the e-book is sold, because the agency model has a declining price for the e-book over time.  In general, however, authors will theoretically make more money by selling e-books than paperback books.  That’s because for the first year or so, when paperback sales are generally the highest, the e-book royalty rate may result in a higher per-copy return to the author than a paperback would.  The problem here, though, lies in three unanswered questions.  First, how much will piracy reduce paying hardcover, paperback, and e-book sales?  Second, will all retailers accurately report “straight” download sales?  In the case of paperbacks, there is inventory control, because the retailer either has to pay for the book or return the stripped cover for a refund.  Physical items provide a check against intentional undercounting.  What checks exist for an electronic item with no physical presence?  Third, what happens after several years when the e-book price drops to essentially nothing?  At that point, the author’s backlist sales revenues plummet, and the so-called “long tail” provides far less revenue than a paperback would.
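
Since actual terms vary by author and publisher, here is a minimal sketch of the per-copy arithmetic involved; the list price, agency price, and royalty rates below are illustrative assumptions, not any publisher’s actual figures.

    # Rough per-copy comparison of hardcover vs. e-book author income.
    # All prices and rates here are assumed for illustration only.

    def hardcover_royalty(list_price, rate=0.15):
        # Hardcover royalties are customarily a percentage of list price.
        return list_price * rate

    def ebook_royalty(agency_price, rate=0.25, publisher_share=0.70):
        # Under the agency model, the publisher receives a share of the
        # consumer price, and the author's royalty is figured on that net.
        return agency_price * publisher_share * rate

    hc = hardcover_royalty(27.95)   # ~$4.19 per hardcover copy
    eb = ebook_royalty(12.99)       # ~$2.27 per early e-book copy
    print(f"hardcover ${hc:.2f}, e-book ${eb:.2f}, drop ${hc - eb:.2f}")

With these assumed numbers, the per-copy drop is about $1.92 – within the 10-cent-to-$2.60 range noted above.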

The other problem is the proliferation of “reader” platforms.  Until or unless this situation is rectified and standardized formats compatible across readers are instituted, there will be very few independent electronic “small presses.”

Based on what I’ve seen so far, although it’s likely to take several years to sort itself out, the combination of e-books and existing reading/publishing trends is going to result in a steepening decline in the number of midlist authors who are able to support themselves by writing, as well as a decline in the income of A-list writers.

As for the impact on reading and cultural trends… that’s an area where there are far fewer hard facts, but I speculate, and it’s purely speculation at this point, that the results will be mixed.  The screen readers, such as the Kindle and the Nook and all the others, are already a boon to older readers because they can enlarge the type, and more and more older readers are finding this greatly increases what is available for them to read.  Since these readers are more interested, in general, in reading than in whipping through stripped-down action novels and the like, they will support to some degree the continuation of more traditional books.  On the other hand, a considerable number of the younger generations, who are more likely to be involved in screen multi-tasking, have already manifested a certain impatience with novelistic complexity that isn’t reflected in “action” magic or technology.  Whether this will result in even greater pressure for action-oriented simplicity in the e-book market remains to be seen, but the vampire/supernatural crazes in bookselling suggest strongly that this may well be the case.

As with most revolutions, a lot of innocents are going to be affected, and not necessarily positively, from readers to writers to small publishers… and I’ve probably only touched the surface here.

Administrative Overkill

Years ago, there was a story in ANALOG about a “political engineer” who, despite his engineering degree, knew little about engineering and who had reached a position of power in his organization because of his “political” and “administrative” expertise – and who died when his undersea dome imploded on him, because he didn’t understand that there are indeed times when subject matter expertise is vital.  I was reminded of this when reading Sunday’s New York Times education section, which documented the growth of professional administrative staff members in U.S. colleges and universities.  From 1976 to 2008, the number of professional administrative employees doubled – from 42 such employees for every 1,000 students to 84 – while the number of full-time faculty dropped from 65 to 55 professors for every thousand students.  Put another way, more than 60% of college employees are not involved in actually teaching students, and the numbers often exceed 70% at private colleges and universities, whereas thirty years ago, those percentages were reversed.
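
As a quick check of that arithmetic, using only the staffing ratios cited from the Times article [and counting only professional administrators and full-time faculty]:

    # Staffing ratios per 1,000 students, as cited above.
    admins_2008, faculty_2008 = 84, 55
    admins_1976, faculty_1976 = 42, 65

    # Share of these employees who are administrators rather than teachers.
    print(f"1976: {admins_1976 / (admins_1976 + faculty_1976):.1%}")  # ~39.3%
    print(f"2008: {admins_2008 / (admins_2008 + faculty_2008):.1%}")  # ~60.4%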

Now… I’m probably very old-school, but I do believe that higher education ought to focus on educating students, imparting both knowledge and understanding, and for all the lengthy and considerable rationalizations for the need for more administrative personnel, I think such rationalizations are largely just that – a way of justifying positions and excessive administrative salaries.  At the colleges and universities with which I’m somewhat familiar, the majority of “administrative” personnel above the clerical level – and that number is considerable – make salaries well in excess of those of actual professors of similar age and experience [except for business department professors, who apparently live in a la-la land of their own, despite the rather dubious record of this discipline in the real world in recent years].

One critical point seems to be continually overlooked – all that administration isn’t what teaches students.  In fact, all those administrators create more non-teaching workload for faculty rather than easing faculty workloads.  The number of reports, assessments, committee assignments, etc., placed on college and university faculty has possibly quintupled over the past generation.  Those reports and assessments not only haven’t improved the quality of teaching, but have decreased it, because they reward faculty who are politically and administratively adept over those who are most adept at teaching, and because they take time away from actually pursuing greater scholarship and improving teaching skills by requiring more and more forms and assessments for the administrators.

So… while recent reports have surfaced showing that, despite all the advertising, British Petroleum has collected something like 97% of all the “severe” violations for shortcomings in offshore drilling, their political and administrative experts have been busy trying to convince the world that their engineering shortcomings are merely “unavoidable risks” of drilling.  All hail the political engineers!

Likewise… despite study after study that shows the single key factor in effective education is the level of subject matter expertise and the capability of the individual professor, colleges and universities have consistently short-changed the teaching faculties to support an ever-increasing administrative structure.  All hail the administrative educators!

And… when, exactly, if ever, will we stop rewarding excessive administrative growth and get back to rewarding actual skill and accomplishment in doing rather than administrating?

The Big Shift

The other day I happened to catch a few minutes of the disaster mega-epic 2012.  A few minutes were all it took to remind me why I don’t, and shouldn’t, watch such cinematic giant-buttered-popcorn features.  I may not have all the details precisely correct, but that shouldn’t matter much, because those details are so hugely and absurdly wrong in the first place – and, yes, there will be a point to all this, but only after I first present those absurdities.

From what the section of the movie I did watch showed, Earth is doomed to disaster in the year 2012 because the Earth’s crust will shift, with China as a pivot point [no, I don’t know why China was used, except that it seems to further the plot], so that great arks can be built for select humans in China, in great secrecy – and underground as well.  These two points alone are beyond merely dubious.

Taking the second one first… we can’t even spend enough to restart the space program or rebuild our highway bridges and infrastructure… and we’re going to be able to build something that no one outside of China knows about, costing hundreds of billions of dollars?  And the Chinese will cooperate when all they have to do is nothing to end up, literally, on top of the world?  I won’t mention, except in passing, the scenes where helicopters ferry elephants and giraffes dangling beneath them over frozen mountains in the last hour before disaster hits China, or where Bentleys are driven out of the cargo hatches of aircraft landing in icy mountain valleys.

The first point is the one that truly frightens me, because it reveals how little either Hollywood or most people understand about the world, and plate tectonics is just one example.  There are continuing references to the Earth’s crust shifting something like 23 degrees and thousands of miles, and I suspect this part of the movie had its genesis in a pseudo-scientific thriller of more than 20 years ago entitled The HAB Theory.  Such a gigantic shift in hours is not only technically impossible, but if it did occur, there wouldn’t be much life left anywhere above the microscopic or very small cellular level.  There certainly wouldn’t be mere huge fissures running alongside McCarran Airport in Las Vegas, and the earthquakes wouldn’t be a “mere” 9.4 on the Richter scale.

A “mere” tectonic plate shift of a few yards in the right place can generate an earthquake of over 7.0.  It’s estimated that the earthquake that dropped the land around Seattle some twenty-plus yards some 800 years ago [as I recall reading] might have been over 8.0, and if a similar quake occurred today, there would likely be nothing of size or significance left standing within fifty miles of Microsoft headquarters.  Comparatively TINY shifts in the earth’s crust and continental plates, built up over years, if not centuries, result in massive damage.  You certainly wouldn’t need even a single degree of shifting of the Earth’s crust to level everything and destroy any vestige of culture and civilization.
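
To put rough numbers on why even small magnitude differences matter, here is a small sketch using the standard Gutenberg-Richter energy relation, under which radiated seismic energy grows by a factor of roughly 32 per whole magnitude step:

    # Radiated seismic energy scales roughly as 10^(1.5 * magnitude)
    # [the standard Gutenberg-Richter energy relation].
    def energy_ratio(m_big, m_small):
        return 10 ** (1.5 * (m_big - m_small))

    # The movie's "mere" 9.4 versus a devastating real-world 7.0:
    print(f"{energy_ratio(9.4, 7.0):,.0f} times the energy")  # ~3,981x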

But, of course, a shift of a single degree just doesn’t sound cataclysmic enough for Hollywood or the consumers of giant-hot-buttered-popcorn cinema.  Is it any wonder that no one gets upset over the prospect of a few degrees of global warming… or that they can’t understand that those mere few degrees of increased temperature would result in inundating every major port city in the world?

Or… put another way… little things do mean a lot, something that’s so hard to get across in a world obsessed with the titanic… or the apparently titanic.

Image, “Sacred Poets”, and Substance

This past weekend, my wife and I watched Local Color, a movie presented as a true-to-life story of a summer in the early life of artist John Talia, when he was mentored by the Russian-born impressionist artist Nikoli Seroff – except that it’s not… exactly.  It took a while to track down the story behind the story, and it turns out that “John Talia” is actually George Gallo, the director of the movie, who did begin as an art student, though not of “Seroff” but of the Lithuanian-born impressionist George Cherepov.  The use of the name Seroff was also confusing, because there was also a Viktor Seroff, a scholar of the relationship between impressionism in art and in music.  Like “Talia,” director Gallo believes in representational art, and like the fictionalized “Talia,” after stints in Hollywood as a director, he was recognized as good enough to have his artwork featured in well-known New York City galleries.

The movie was shot on a proverbial shoestring, with most of the actors doing it for love and little else.  It never got wide distribution and received very mixed reviews, ranging from five stars downward.  While I enjoyed and appreciated it, in some ways the discovery that it was “fictionalized” bothered me far more than any shortcomings it may have had, although I didn’t find many.  On the one hand, I can see why Gallo may have wanted to fictionalize the names, particularly his own, but by doing so, in essence, what could have been, and should have been, a tribute to Cherepov was lost in the process of creating an “image” of sorts.

I tend to be disturbed by the entire “image-making” process anyway, because the process of image-making obscures, if not totally distorts, the facts behind the “image.”  Certainly, such image-making is hardly new to human society and culture, although the power of modern technology makes it far, far easier.  Still, even in American culture, the images have run rampant over the truth, in the process often making a hero out of one man while ignoring the greater accomplishments of another in the same situation.  In “A Sacred Poet,” an article published more than thirty years ago in The Magazine of Fantasy and Science Fiction, Isaac Asimov noted that, because of the popular poem written in 1863 by Henry Wadsworth Longfellow, most people believe that Paul Revere was the hero who warned the American colonists of the imminent British attack on Concord.  While that warning did indeed result in a colonial victory, it wasn’t delivered by Revere at all, because he was caught by a British patrol, but by Dr. Samuel Prescott.  Yet Longfellow’s poem about the “ride of Paul Revere” created a lasting image of Revere as the heroic rider who warned the Americans, and that image has effectively trumped history for more than a century.

Every American presidential campaign is an exercise in image-making, and generally, the more successful the campaign, the more distorted the image… and the greater the potential for loss of popular and political support when facts to the contrary eventually leak out and become widely-known.

Perhaps George Cherepov was even less likeable than “Nikoli Seroff,” and George Gallo didn’t want to misrepresent the real artist. Or perhaps… who knows?  But it still bothers me, I have to say.

The Illusion of Knowledge

Recently, I’ve read more and more on both sides of the “debate” about whether the internet/world-wide-web is a “good” thing.  One ardent advocate dragged out the old “Greek” argument that even writing was “bad” because memory would atrophy… and, of course, look how far we’ve come from the time of the Greeks, how much knowledge we’ve amassed since then.

And… in a cultural and societal sense, that accumulation of knowledge has, in fact, occurred, but I’m not so certain that we now don’t stand at the edge of a precipice, where, if we choose incorrectly as a society, we will slide down the slippery slope into ignorance and anarchy, if not worse. Some people already believe we’ve started to slide so much that we’ll never recover.  While I’m not that pessimistic, not yet, at least, I would like to point out a fatal flaw in the idea that technology results in a more knowledgeable society.

To begin with, let us consider the very meaning of “knowledge.” Various dictionary definitions begin with:  (1) a product of understanding acquired through experience, practical ability or skill and (2) deep and extensive learning.  The key terms here are understanding and learning.  The problem with the web and electronic technology in general is that most users fail to understand that access to information or facts is not at all the same as understanding those facts, their use, or, especially, their significance.  True understanding is impossible without a personally learned internal database.  Being able to net-search things is not the same as knowing them, and very few individuals can retain facts looked up unless they have a personal internal knowledge base to which they can relate such facts.

All too many educational “reformers” either tend to equate the learning of specific, often unrelated facts, processes, and discrete skills with education or knowledge, or, at the other extreme, they emphasize “process” and inter-relations without ever requiring students to learn basic structures and facts.  Put another way, information access is not knowing or knowledge, nor is learning processes and systems ungrounded in hard facts. Both the understanding of process and systems and a personal integrated factual “database” are necessary for an individual to be educated and knowledgeable, and far too few graduates today possess both.

The often unfairly maligned educational system of the early and mid-twentieth century had a laudable objective:  to give students the basic knowledge of their society and the basic skills needed to survive and prosper in that society.  Did it often fail?  It did, and in many places, and far too frequently.  But that didn’t mean that the objective was wrong; it meant that all too often the techniques and means used were not suited to various types of students.

What followed that system is certainly no better, and possibly much worse. When something like 40% of high school graduates cannot explain against whom the American Revolution was fought and why it was important, those students cannot be classed as knowledgeable.  Nor can the 60% who cannot write coherent complex sentences or understand them be considered educated.

A culture that exalts the ability to use technology over the ability to understand it, and over the ability to explain even what society is, why it exists, and what forms of government benefit whom and why, is in deep trouble.  So is one where the process of accessing information is elevated over understanding what that information means and how to use it. That, by the way, is also known as thinking.

And yet, every day, and in every way, our society is encouraging an ever-increasing percentage of our young people to communicate, communicate, communicate with less and less real knowledge… and without even truly understanding how little they know about the basis and structure of the world in which they live.

And concerning knowledge… that is the greatest illusion of all.

They Did It All by Themselves [Part II]

Several weeks ago, an article appeared in the local newspaper, an interview with the new artistic director of the Utah Shakespeare Festival.  He’s a product of the local university, where he learned his craft from, among others, Fred Adams, the legendary professor who established the Festival and ran it for decades [the Festival has won, among other honors, a Tony for being one of the best regional theatres in the United States].  The new director is an accomplished and effective actor, and there’s no doubt about that.  But what bothered me about the interview was that not a single word appeared about those who mentored, taught, inspired, and hired him, including Fred Adams.  Everything was about the new director, his talents, and his aspirations.  I can’t honestly say whether this was because he never mentioned those who had helped him every step of the way or because the interviewer left any such remarks out of the final story.

In some ways, it doesn’t matter, because, as the story ran, it’s all too symbolic of American culture today.  No one owes anything to anyone.  In fact, it’s even worse than that. Part of this change lies in an attitude that everything important exists only in the here and now, a change in what was once a core American value.  Southern Utah University, for example, exists only because, more than a century ago, a handful of local citizens mortgaged everything they had to come up with the funds to build the first building of the school – the building being required by the state legislature.  They did so because they felt it would offer a better future to their children and their community.  None of them ever received any financial reward, and their act is largely buried in history… remembered only by a few older residents of the town and some university faculty.

Another symptom of this change in public attitudes is reflected in the content of those largely useless student evaluations.  As a senior faculty member, my wife serves on the committee that reviews tenure and promotion applications for faculty. Since faculty members are now required to include all student evaluations and comments, she sees the comments from students across all disciplines in the university, and what is so incredibly disheartening is that there is virtually no real appreciation for professors at any level. The overwhelming majority of the comments – even about professors who have demonstrated incredible teaching effectiveness and who have gone out of their way to help students for years – deal with complaints, often insanely petty.

Part of this trend may be because all too many students don’t seem to know what’s important.  One student praised a professor because he once brought in soft drinks for the class!  Another faculty member was praised for bringing donuts. Exactly what does this have to do with education? Over the years, my wife and other members of her department have done such quiet deeds as paying student medical bills out of their own pockets, creating student scholarships with their own funds, personally helping students financially, and offering hundreds of unpaid hours of additional instruction – the list is endless.  Once, say fifteen years ago, students seemed to appreciate such efforts.  Today, they complain if faculty members don’t smile when the students perform [yes… this actually happened.  Twice!].

In the interests of full disclosure, as the saying goes, I probably haven’t offered enough gratitude to those who helped me – but I have offered it, in speeches, in book dedications, and in interviews… and I didn’t forget them, visiting and writing them over the years.  And certainly there are notable exceptions, some very public.  One noted Broadway singer and actress, in giving a concert last week, paid clearly heartfelt tribute on several occasions to her undergraduate singing teacher.  The problem is that these are exceptions… and they are becoming more and more infrequent every year.

Isaac Newton famously said that he had accomplished so much because he stood “on the shoulders of Giants,” and all of us owe debts to those who preceded us.  We didn’t do it alone, and far too many people who should know this fail, time and time again, to appreciate it and to acknowledge it, both privately and publicly.

No… I’m Not Theologically Challenged… Just Directionally Impaired

A little over a week ago, I did something – unintentionally – that I truly wish I could undo, and for which I’m very sorry. I was taking my morning walk with the over-energetic Aussie-Saluki when a car pulled over, and a stranger with a delightful and precise English accent asked me for directions to the Catholic Church.  I was glad to oblige, and promptly stated, “Just go to the end of the road; turn right, and it’s two blocks up.”

Simple enough.

Except… I’m one of those people who have no innate sense of left and right.  I’m not directionally impaired in the sense of getting lost; I almost always know where I am, and have enough sense [acquired painfully from my wife] to ask directions when I don’t.  I’m also very good at providing written directions to others.  But when I’m caught off-guard as I was on that morning, with my thoughts more on other matters, from the plotting of the newest book to what a beautiful morning it was, I often speak before full consideration of my words.

The road used to end where I meant for him to turn, but it hasn’t for more than a year, since the city extended it a half-mile downhill through a winding canyon to meet up with the Cross Hollows Parkway.  Instead, there’s only a stop sign there now.  And… as a result of my left-right confusion, I told the English gentleman to turn right, rather than the correct direction, which was left.  I should have caught the error immediately.

But I didn’t.  About twenty seconds after he drove off, I realized what I had done and started waving and running after the car, the Aussie-Saluki delighted that we were running.  Alas, he never looked back… and by the time I got home and took my own car out to look for him… he was nowhere to be found.  I just hope he found the right church.

Now… the other side of the story is that, equally inadvertently, the directions I had given him were precisely correct for taking him to the nearest LDS Stake [church], albeit almost three-quarters of a mile away, rather than the four blocks to the Catholic Church.

So… either way… I’m regarded either as a directional idiot, as theologically challenged in not even knowing which church was which, or as determined to steer the poor man away from his church of choice to another faith [even though I’m not a member of either faith].

As so many people have probably said at one time or another, I just wish I’d thought through what I said a little more carefully…. And because I never knew who he was, this is the only apology I can offer.

You Don’t Get What You Don’t Pay For

The other day I came across a series of articles, seemingly unrelated – except they weren’t.  The first was about why Vietnam is now producing perhaps the majority of great young chess players in the world.  The second was a news report on the Gina Bachauer International Artists Piano Competition in Salt Lake City, and the third was a table of the average salaries of U.S. university professors by area of specialty.

The Vietnamese are producing chess champions and prodigies, it seems, because [gasp!] they pay them.  Gifted young players are paid from $300 to $500 a month to learn and play chess, and the best get all expenses paid to play in tournaments world-wide.  These are substantial incentives in a country where the average monthly family earnings are around $100.  Of course, American teenagers spend more than that monthly on what the Vietnamese would likely consider luxuries, and in the United States young chess players must count on the support of family or charitable organizations… and despite being one of the largest and most prosperous nations in the world, we have comparatively very few international class chess masters.

The finals of the Gina Bachauer Piano Competition were held in Salt Lake City last week, and of the eight finalists, one was Russian, one was Ukrainian, and the other six were Asian. This pattern has been ongoing for close to a decade, if not longer.  We haven’t produced a true giant in piano performance in decades, but then, the top prize is a mere $30,000, hardly worth it for Americans, apparently, not when it takes 15-plus years of study and hours upon hours of daily practice – all for a career in which the top-flight pianists generally make less money than whoever is 150th on the PGA money list.

All this might just tie in to the salaries of university professors.  The three areas in which university professors’ salaries are the lowest are, respectively, from the bottom: theology/religion; performing and visual arts; and English.

I’m cynical, I know, but I don’t think that this is coincidental.  In the United States, mainstream religions [which generally require some intensive theological training] are losing members left and right.  The highest-paid performing and visual artists are those who can provide the most spectacular show, not the most technically sound performance, and most “professional” pop singers could not even match the training or technical ability of the average graduate student in voice, but technical ability doesn’t matter, just popularity, as witness American Idol.  As for English, when 60% of all college graduates aren’t fully technically competent in their own language, this does suggest a lack of interest.

The other factor these areas have in common is that the average semi-educated American believes that he or she knows as much about religion, singing, dancing, acting, and English as anyone.  And that’s reflected in both what professors are paid and in what experts in those fields are paid. The problem is that popular perceptions aren’t always right, regardless of all the mantras about the “wisdom of the crowd.”  The highest-paid professors – and professionals – in the United States today are in the field of business and finance.  That’s right – those quant geniuses who brought you the greatest financial meltdown since the Great Depression, not to mention the “Flash Crash” of a month or so ago, when technical glitches resulted in the largest and fastest one-day decline in the market ever.  Oh… and just as a matter of national pride, if you will, why do professors of foreign languages get paid 8-10% more than professors of English? Especially when the mastery of English is at a decades-low point?

More to the point, it’s not just about singers, writers, and English professors, but about all of society.  We may complain about the financiers and their excesses, but we still allow those excesses.  We may talk about the importance of teachers, police, firefighters, and others who hold society together, but we don’t truly support them where it counts.

As a society, we may not always get what we pay for, but you can bet we won’t get what we don’t pay for.

Everyone’s Wonderful! [Part II and Counting]

I noted some time back that the scholar Jacques Barzun had documented in his book From Dawn to Decadence what he believed was the decline of western culture and civilization and predicted its eventual fall.  One of his key indicators was the elevation of credentials and the devaluation of achievement. Along these lines, the June 27th edition of The New York Times [brought to my attention by an alert reader] carried an article noting the emergence and recognition of multiple high school valedictorians. One high school had 94, and another even had 100!

While many factors have contributed to this kind of absurdity, two factors stand out: (1) rampant grade inflation based on an unwillingness of educators and parents to apply stringent standards that measure true achievement and (2) a society-wide unwillingness to recognize that true excellence is rare – except perhaps in professional sports.

So many problems arise from this tendency to over-praise and over-reward the younger generation that I can’t possibly go into all of them in a blog.  But I do want to address some of those of greater import, not necessarily in order of societal impact, but as I see them.  The first is that, beyond high school and certainly beyond college, there can’t be multiple “winners.”  There will only be one position at the hospital for a new surgeon, and only one or two vacancies for new teachers each year at the local school, or a handful at most.  Graduate schools only take a limited number of applicants from the overall pool, and they do make choices.  Sometimes, the choices or the grounds on which they’re made may not be fair, just as a bad grade in freshman PE may keep a high school student from becoming valedictorian [if only one is chosen, the way it used to be], but the plain fact is that, in life, economics and need limit what is available, and students need to learn that not everyone gets to be top dog, even if the differences between the contenders seem minuscule.

Second, by recognizing multiple students as “valedictorians,” schools and parents are both devaluing the honor and simultaneously over-emphasizing it as a credential.  As a result, more and more colleges are ignoring whether students are “valedictorians” and relying on other factors, such as, perhaps regrettably, standardized test scores.

Third, like it or not, as former President Jimmy Carter once stated [and for which he was roundly criticized], “Life isn’t fair.”  It may not be “fair” that one teacher somewhere in the past didn’t like this or that student’s performance and gave them an A- rather than an A, and that kept them from being valedictorian.  It’s not “fair” that Ivy League schools now require better grades from their female applicants than from their male applicants because more female students work harder and the schools don’t want to overbalance their student bodies with women.  Unfortunately, what society can do in “legislating” fairness is limited; it is impossible to produce anything close to absolute fairness in real terms.  All society can do is set legal parameters to prohibit the worst cases.  We, as individuals, then have to do our best to act fairly and learn to work around or live with the instances where “life isn’t fair,” because it isn’t and never will be.

Fourth, frankly, in cases of similar or identical grades, other factors should be weighed.  They certainly are in all other occupational situations in life, because they have to be. When there are limited spaces, decisions will be made to determine who gets the position.  Not observing this practical factor in high school is just another aspect of giving students an inflated view of their own “specialness,” or, if you will, the continuation of the “trophies for everyone” philosophy.

But… is anyone listening?  Apparently not, because there’s more and more grade inflation, more and more valedictorians, and more and more emphasis on how “wonderful” every student is.

Marketing F&SF

Recently, Brad Torgersen made a lengthy comment about why he believed that F&SF, and particularly science fiction, needs to “popularize” itself, because the older “target market” is… well… old and getting older, and the younger readers tend to come to SF through such venues as media tie-in novels, graphic novels, and “popular” fiction.  While he’s absolutely correct in the sense that any vital genre has to attract new readers in order to continue, he unfortunately is under one major misapprehension – that publishers can “market” fiction the way Harley-Davidson marketed motorcycles.  Like it or not, publishing – and readers – don’t work quite that way.

Don’t get me wrong.  There’s quite a bit of successful marketing in the field, but one reason why there are always opportunities for new authors is that it’s very rare that a publisher can actually “create” a successful book or author.  I know of one such case, and it was enabled by a smart publisher and a fluke set of circumstances that occurred exactly once in the last two decades.  Historically, and practically, what happens far more often is that, of all the new authors published, one or two, if that, each year appeal widely or, if you will, popularly.  Once that happens, a savvy publisher immediately brings all possible marketing tools and expertise to publicize and expand that reading base and highlight what makes that author’s work popular.

In short, there has to be a larger-than-“usual” reader base to begin with, and the work in question has to be “popularizable.”  I do have, I think it’s fair to say, such a reader base, but, barring some strange circumstances, that reader base isn’t likely to expand wildly into the millions, because what and how I write require a certain amount of thought for readers’ fullest appreciation, and the readers who flock to each multi-million-selling new novelistic sensation are looking primarily for either (1) entertainment, (2) a world with the same characters that promises they can identify with it for years, or (3) a “fast” read – preferably all three, but certainly two out of three.

All this doesn’t mean that publishers can’t do more to expand their readership, but it does mean that such expansion has to begin by considering and publishing books that are likely to appeal to readers beyond those of the traditional audience, without alienating the majority of those traditional readers. And in fact, one way that publishers have been trying to reach beyond the existing audience is by putting out more and more “supernatural” fantasy dealing with vampires and werewolves, and books with more explicit sexual content.  The problem with this approach is that, first, such books tend not to appeal to those who like science fiction and/or tech-oriented publications, while also tending to alienate a significant percentage of older readers, as opposed to, as Brad pointed out, media tie-in novels, which appeal across a wider range of ages and backgrounds. Another problem is that writing science fiction, as opposed to fantasy, takes more technical experience and education, and fewer and fewer writers have that background.  That’s one reason why SF media tie-in novels are easier to write – most of the technical trappings have been worked out, one way or another.

I don’t have an easy answer, except to say that trying to expand readership by extending the series of authors with “popular” appeal or by copying or trying to latch on to the current fads has limited effectiveness. Personally, I tend to believe that just looking for good books, whether or not they fit into current popularity fads, is the best remedy, but that may just be a reflection of my views and mark me as “dated.”

In any case, Brad has pointed out a real problem facing science fiction, in particular, and one that needs more insight and investigation by editors and publishers in the field.

The Vanishing/Vanished Midlist?

Several weeks ago, I attended a science fiction convention where the guest of honor was a writer who spent some 20 years as what one might call a “high midlist author,” someone able to work full-time as a writer and pay the bills.  Except… several years ago, this came to an end for the writer.  Oh… the writer in question still publishes two books a year, but they aren’t selling as well as earlier books, although those who read the books claim they’re as good, if not better, than the earlier work, and now making ends meet requires additional outside work as a consultant and educator.  To make matters worse, at least from my point of view, this writer produces work that is more than mere entertainment and mental cotton candy.

Interestingly enough, more and more of the books cited by “critical” reviewers in the F&SF field [with whom I have, as most know, certain “concerns”] seem to come from smaller presses.  This is creating, I believe, an almost vicious cycle in F&SF publishing. The more the books praised by reviewers come from small presses, the more the larger publishers get the message that “good” or “edgy” or “thoughtful” books don’t sell as well, and the greater the almost subconscious pressure to opt for “fiction-fun” or “fiction-light.”  To their credit, certain publishers, including mine, thankfully, are resisting this trend, but I’m still seeing more of those novels that are gaming and media tie-ins or endless series.  And yes, the Recluce Saga is long, but… as I keep pointing out, no character has more than two books.  I don’t have eight or ten or fifteen books endlessly spinning improbable stories and extensions about the same character or characters.

With the drastic changes in wholesale distribution over the past decade or so, virtually no midlist books receive such distribution, except perhaps lower-selling titles of big-name authors.  As a result of these trends, the midlists of at least some large publishers that were once the home of “thoughtful” books are shrinking. Some such midlist writers have found homes with the smaller presses, but small-press distribution systems often are not as extensive. That has resulted in lower sales for the authors who wrote those books, and lower sales mean lower incomes, and either cutting back on writing, or holding down other jobs… or… trying to re-invent one’s self with another form of “fiction-light.”

I’ve heard many who believe that e-book sales can help here, but the sales figures I’ve seen suggest that e-books do more for those books that have high sales levels and wide distribution in hardcover and paperback – and those aren’t the midlist books.

It almost appears that the midlist F&SF titles are going to become a ghetto within a genre… and that concerns me.  It’s certainly affecting all authors, but particularly those who once wrote good midlist books and made a living at it… and now can’t.

Electronic Free-Loading… and Worse

Even with spam “protection,” the amount of junk email that my wife and I receive is astronomical – less than one in fifty emails is legitimate.  The rest are spam and solicitations.  Now I’m getting close to a hundred attempted “spam” comments on the website daily, all of them with embedded links to sell or promote something. That’s just one facet of the problem.  Another facet is the continual proliferation of attempts at phishing and identity theft.  It makes one want to ask – have there always been so many people trying to make a buck, rupee, ruble, Euro, or whatever by freeloading or preying on others?

I know that con artists have been around since the beginning of history, but never have such numbers been so obvious and so intrusive to so many.  Is this the inevitable result of an electronic technology that makes theft, fraud, and blatant self-promotion at the expense and effort of others a matter of keyboarding at a distance?  At one time, these types of offenses had to be carried out in person and embodied a certain amount of risk and a probability of detection and usually criminal punishment.  Now that they can be accomplished via virtually untraceable [for practical purposes] computer/internet access, they’ve proliferated to the point where virtually every computer connected to the net runs the risk of some sort of loss or damage – a form of computer Russian roulette.

But what I find the most disheartening about this is the fact that so many people, once the risk and criminal penalty factors were so dramatically reduced by technology, set out to exploit and fleece others.  Even those of us not yet fleeced or exploited have to take time, effort, and additional software to deal with these intrusions.  I have to sort through the potential comments quarantined by the system several times a day, because a few are legitimate, and deserve to be posted, and I still have to take time to delete all the unwanted email.  I have to pay for protective software, and so forth.  In effect, every computer user is being taxed in terms of time, money, and risk by this radical expansion of the unscrupulous.
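
For what it’s worth, the quarantine logic such systems apply can be as simple as counting embedded links.  The sketch below is purely illustrative – it is not the actual filter used on this site.

    import re

    # Embedded links are the hallmark of the spam comments described above.
    LINK_PATTERN = re.compile(r"https?://|\[url=|<a\s", re.IGNORECASE)

    def should_quarantine(comment, max_links=1):
        # Hold for review any comment with more links than the threshold.
        return len(LINK_PATTERN.findall(comment)) > max_links

    print(should_quarantine("Enjoyed the post; the piracy point rings true."))    # False
    print(should_quarantine("Buy now http://spam.example http://spam.example/2"))  # True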

Now… those who are extreme technophiles will claim that the downsides of our technologically based communications/computing systems are negligible… or at least that the benefits far outweigh the downsides.  But the problem here is that most of the benefits, especially in terms of costs, go to large institutions and the unscrupulous, while the downsides fall on the rest of us.  I don’t see, for example, that the internet enables more good writers; it enables writers who are better self-promoters, and some good writers are, and a great many aren’t.  In trying to evaluate honestly what I do on the net, I suspect that my internet presence is similar to treading water.  I’m not losing much ground to the blatant self-promoters, but for all the effort it requires, I’m not gaining either, and it’s time spent when I can’t be writing.  Yet if I don’t do it – especially given, I have to admit after looking at recent sales figures [and yes, some of you were right], the recent spurt in the growth of e-books – my sales will suffer.

I don’t see that the internet is that useful in enabling small businesses, because there are so many, and the effort and ingenuity required to attract customers is considerable, but it certainly allows large ones to contact everyone.  And it certainly allows every variety of cyber-criminal potential access to a huge variety of victims with almost no chance of being detected, let alone prosecuted and punished.  The idea of privacy has become almost laughable, even for those of us who don’t patronize social networking sites.

Cynical as I may be, my hopes have always been that technology would be employed to enable the best to be better, and the rest to improve who and what they are.  Yet… I have this nagging feeling that, more and more, technology, particularly communications technology, is dragging down far more people than it is improving, especially ethically… and, even if it isn’t, it’s creating a tremendous diversion of time from actual productive work.  That diversion may be worthwhile in manufacturing-based industries, but it’s a definite negative force in areas such as writing and other creative efforts.  In a society that is becoming ever more dependent on technology, unless matters change, this foreshadows a future in which marketing and hype become ever more present and dominant, even as the technophiles are claiming communications technology makes life better and better.

Better and better for whom?  And what?

Fantasy… Should be Fun?

The other day, when reading a blogger’s review of The Soprano Sorceress, I came across an interesting question, clearly meant to be rhetorical – what point was there to reading a fantasy if the reader didn’t like the fantasy world created by the author?  It’s a good question, but not necessarily in the way that the reviewer meant, because his attitude was more one of wanting to avoid reading about worlds he didn’t like, particularly since he asked another question along the lines of what fun there was in reading about such a world.

Yet… I have to confess that there are authors I probably won’t read again because I don’t care that much for their worlds, just as there are authors I won’t read again because I don’t care for their characters.  In particular, I don’t care for characters who make mistakes and errors that would prove fatal in any “realistic” world situation, yet who survive for book after book [I presume, because the series continues, even if I’m no longer reading them].  Obviously, those kinds of books have great appeal, because millions upon millions of them sell, and maybe that’s the “fun” in reading them.

But there’s a distinction between “good” and “fun,” and often one between “entertaining” and “thought-provoking,” and there are readers who prefer each type, although sales figures suggest that “fun” and “entertaining” are the categories that tend to outsell others significantly, often by orders of magnitude.

The question the blogger reviewer asked, however, holds within it an assumption that all too many of us have – that “our” view is the only reasonable way of looking at a particular book… and that, I think, is why I tend to be reluctant in reading reviews, either those considered “professional” or those less so, because the vast majority of reviewers start from the unconscious presupposition that theirs is the only “reasonable” way of looking at a given book.  The more “professional” the reviewer is, the less likely this presupposition occurs, but there are still well-known reviewers and review publications that fall regularly into this mind-set.  The problem lies not only in the expectations of the reviewer, but also in the knowledge base – or the lack of knowledge – that the reviewer possesses.  A novel that relies heavily on allusion to disclose character will seem shallow to the reader or reviewer who does not understand those referents.  A reader unfamiliar with various “sub-cultures,” such as the corporate or legal worlds, politics, the military, or academia, is likely to miss many subtleties of the type where explanation would destroy the effect.  Because of this “sub-culture” blindness, certain books, or parts of certain books, tend to be less entertaining – or even boring – to those unfamiliar with the subculture, whereas a reader who understands those subcultures may be smiling or even howling with laughter.

As a side note, despite the impression that some bloggers have apparently gained from this site, I do read blog reviews of my work and that of other authors on a continuing basis, if sometimes reluctantly.  Why reluctantly?  Because it’s more often painful than not.  As a writer, for me such blogs often raise the question of why the reader didn’t understand certain matters that appear so obvious to me.  Could I have done something better, or was the matter presented well and the reader didn’t get it?  Half and half?  Such questions and second-guessing, I feel, are necessary if any writer wants to improve, no matter how long he or she has been writing… but I suspect any author who claims the process is enjoyable or entertaining is either lying or a closet masochist.  As part of being a professional, an author should know, I personally believe, the range of reactions to his or her work, as well as the reasons behind those reactions, but, please, let’s not have commentators suggest that we’re somehow outdated, out of touch, or unreasonable when we suggest that the process isn’t always as pleasurable to us as it apparently is to those who take great delight in complaining about what they perceive as deficiencies in what we write.  Sometimes, indeed, the deficiencies are the writer’s, but many times the deficiencies lie in the reviewer, and where the deficiencies may lie, or even if there are such deficiencies, isn’t always obvious to most readers of either blog or professional reviews… or even of professional blog reviews.

Sometimes… Just Sometimes… We Get It Right

Way back in 1958, in the so-called “Golden Age” of science fiction, Jack Vance wrote a book called The Languages of Pao, in which he postulated that language drastically affects human thought patterns and, thus, the entire structure of a culture or civilization.  A more scholarly statement of this is the linguistic relativity principle, otherwise known as the Sapir-Whorf hypothesis, of which there are two versions.  The strong version states that language limits and determines cognitive categories; the weaker version merely suggests that language influences thought and certain non-linguistic behaviors.  The Sapir-Whorf hypothesis was thought to be discredited by color-related experiments in the 1960s, because researchers found that language differentials did not seem to affect color perception or usage.

Recent studies of human brain patterns and linguistic development, reported in the June 1st edition of New Scientist, strongly suggest that there is not, as previously thought, a genetically-determined “universal” human instinct/hard-wired pattern for language common to all human beings, but that languages are in fact learned and used in often totally different ways by those speaking different tongues.  Thus, as speculated by Vance, languages do in fact shape not only the way we think, but the very way in which we see the world.  And, as occasionally happens, though not so often as we science fiction writers would like to think or claim, one of us has actually anticipated a fundamental discovery, and one that has profound implications for human civilization, implications that I don’t think most people have fully considered.

If this research is accurate, then, for example, intractable cultural differences may well lie in the linguistic patterns of a culture.  A language that offers many ways in which to accurately express the same concept or thought would likely promote more openness of thought than a language in which there is literally only one correct way that thought can be expressed.  A language/culture that allows rapid linguistic innovation may promote change and development… but it might well have the downside of undermining standards, because standards, as represented by language, are not seen as fixed or immutable.  We already know that words expressing concepts such as “freedom” or “equality” do not “translate” into exactly the same meanings in different cultures, and this research offers insights into why the differences go beyond mere semantics.

These possibilities have certainly been considered in human history, if only instinctively or subconsciously.  For centuries, the Roman Catholic Church resisted the translation of the Bible into any other language, insisting it be read and taught only in Latin.  Since 1635, with a few years in abeyance during the French Revolution, L’Academie Francaise has policed usage and linguistic development in France, attempting to restrict or eliminate the use of Frenchified Anglicisms.  And languages do affect other aspects of human behavior.  Recent studies have shown that speakers of tonally-inflected languages have far, far higher rates of perfect pitch than do speakers of languages that are not tonally inflected.  Not entirely coincidentally, it seems to me, cultures speaking such languages also appear to produce more successful classical musicians.

A more disturbing aspect of the research is the possibility that linguistic differences may well create cultural “understanding” divides that are difficult, if not impossible, to bridge, simply because the languages create antithetical patterns of thought, so that a speaker of one language literally cannot comprehend emotionally the concepts and values behind the words of a speaker of another language.  The initial research suggests that the magnitude of variance in linguistic learning patterns ranges from very slight to quite significant… and it will be interesting to see if such differences can ever be quantified.  But it does appear that speaking another language goes far beyond the words.

And a science fiction writer pointed out the cultural implications and ramifications for societies first.

Pressing the Limits

As both individuals and as a species, human beings have always had a tendency to press the limits, both of their societies and of their technologies.  This tendency has good points and bad points… good because without it we as a species wouldn’t have developed, and life would still be in the “natural state” – “nasty, brutish, and short,” in Thomas Hobbes’s pithy observation from Leviathan.  The “bad” side of pressing the limits has been minimized, because the advantages have so far been so much greater over time than the drawbacks.

Except… the costs and the consequences of pushing technology to the limit may now in some cases be reaching the point where they outweigh the overall benefits, and not just in military areas.

The latest and most dramatic evidence of this change is, of course, the Gulf of Mexico oil rig explosion and the subsequent blowout.  Deep-sea drilling and production platforms are required to have redundant blow-out preventers in place… as did the BP rig.  But the blow-out preventer failed.  Such failures are exceedingly rare.  Repeated tests show that the equipment works over 99% of the time, although some 60 units have failed in testing.  The Gulf oil disaster just happens to be one of the few times a failure has occurred in actual operation, and it represents the largest such failure in terms of crude oil released.  What’s being overlooked, except by the environmentalists, who, so far as I can tell, are operating more on a dislike of off-shore drilling than on a reasoned technical analysis, is the fact that around 6,000 offshore drilling platforms of one sort or another are in service world-wide, and the number is increasing.  That number will increase whether the U.S. bans more offshore drilling or not.  From 1992 to 2006, the Interior Department reported 39 blow-outs at platforms in the Gulf of Mexico, and although none were as serious as the latest, that’s more than two a year – yet it represents a safety record of 99.93%.  In short, there’s not a lot of margin for error.  What makes the issue more pressing is that drilling technology can now drill deeper and deeper – and the pressures involved at ever greater depths put increasing stress on the equipment, to the point where, as is apparent with the BP disaster, stopping the flow of oil after a failure becomes extraordinarily difficult, exceedingly expensive, and time-consuming.  Because crude oil is devastating to the environment, the follow-on damage to the ecosystems and the economy of the surrounding area will create far greater costs than capping the well.
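
To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python.  The blow-out count and the 99.93% figure come from the paragraph above; the implied well count is my own derivation from those two numbers, not a separately reported statistic:

```python
# Back-of-the-envelope arithmetic on the Gulf blow-out figures cited above.
# The 39 blow-outs (1992-2006) and the 99.93% safety record come from the
# text; the implied well count is derived from them, not reported.

blowouts = 39                # reported Gulf of Mexico blow-outs, 1992-2006
years = 15                   # 1992 through 2006, inclusive
safety_record = 0.9993       # cited safety record

failure_rate = 1.0 - safety_record        # 0.0007, i.e., 0.07%
implied_wells = blowouts / failure_rate   # wells consistent with both figures

print(f"Blow-outs per year: {blowouts / years:.1f}")    # ~2.6 per year
print(f"Implied wells drilled: {implied_wells:,.0f}")   # ~55,700

# The margin-for-error point: even at a 99.93% success rate, with roughly
# 6,000 platforms in service world-wide and each drilling multiple wells,
# a few catastrophic failures per decade are close to inevitable.
```

The arithmetic shows why “exceedingly rare” and “more than two a year” can both be true at once: a 0.07% failure rate applied against tens of thousands of wells still yields a steady trickle of failures.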

Pushing technology beyond safe limits is nothing new for human beings.  When steam engines were first introduced, the desire for power and speed led to scores, if not hundreds, of boiler explosions.  Occasionally, disasters led to changes, such as the phasing out of hydrogen dirigibles after the Hindenburg fire and crash, but that change was also made easier by improvements in aircraft, which were far faster than dirigibles.  The costs of other disasters are still with us – and we tend to overlook them.  The town of Centralia, Pennsylvania, has largely been abandoned because the coal seams in the mostly worked-out mines beneath the town caught fire and have been smoldering away for more than forty years, causing the ground above to collapse and continually releasing toxic gases.  In Pennsylvania alone, there are more than 30 such subterranean fires.  World-wide, there are more than 3,000, some of which release more greenhouse gases and other toxic fumes than some coal-fired power plants.  Yet few of these fires are more than watched, because no technology exists that can extinguish them in any fashion close to cost-effective – and in some cases they cannot be extinguished at all, because the fires burn so deep.

Pushing electronic technology to the limits, without regard for the implications, costs, and other downsides, has resulted in a world linked together in such a haphazard fashion that a massive solar flare – or a determined set of professional hackers – could conceivably bring down an entire nation’s communications and power distribution networks… and that doesn’t even take into account the vast increase in the types and amounts of exceedingly toxic wastes created on a world-wide scale, most of which are still not handled as they should be.  Another area where technology is being pressed to the limits is bio-tech, where scientists have reported creating the first synthetic cell.  While they engineered in considerable safeguards, once that technology is more widely available, will everyone who uses it do the same?

As illustrated by the BP disaster, when we, as a society, push technology to its limits on a large scale, for whatever reason, the implications of a technological or systems failure are reaching the point where we require virtually absolute safety in the operation of those systems – and obtaining such assurance is never inexpensive… and sometimes not even possible.

But then again… if we tweaked existing technology just a bit more so that we could get even more out of it… get more oil, more bandwidth, make more profit…

When to Stop Writing… [With Some “Spoilers”]

The other day I ran across two comments on blogs about my books.  One said that he wished I’d “finish” more books about characters, that he just got into the characters and then the books ended.  The other said that I dragged out my series too long.  While the comments weren’t about quite the same thing, they did get me to thinking.  How much should I write about a given character?  How long should a series be?

The simple and easy answer is that I should write as long as the story and the series remain interesting.  The problem with that answer, however, is… interesting to whom?

Almost every protagonist I’ve created has resulted in a greater or lesser number of readers asking for more stories about that particular character, and every week I get requests or inquiries asking if I’ll write another story about someone in particular.  That’s clearly because those readers identified with and/or greatly enjoyed that character… and that’s what every author likes to hear.  Unfortunately, just because a character is memorable to readers doesn’t mean that there’s another good story there… or that another story about that character will be as memorable to all readers.

Take Lerris, from The Magic of Recluce.  By the end of the second book about him, he’s prematurely middle-aged as a result of his use of order and chaos to save Recluce from destruction by Hamor… and his actions have resulted in death and destruction all around him, not to mention that he’s effectively made the use of order/chaos magic impossible on a large or even moderate scale for generations to come.  What is left for him in the way of great or striking deeds?  Good and rewarding work as a skilled crafter, a happy family life?  Absolutely… but there can’t be any more of the deeds, magic, and action of the first two books.  That’s why there won’t be any more books about Lerris.  If I wrote another book about Lorn… another popular character… for it to be a good book, it would have to be a tragedy, because the only force that could really thwart or even test him is Lorn himself.  After one book in which a favorite character died – if of old age, after forty years of magic working – and all the flak I took from readers who loved her, I’m understandably reluctant to go the tragic route again.  So… for me, at least, I try to stop when the best story’s been told, and when creating an even greater peril or trial for the hero would be totally improbable for the world in which he or she lives.

For the same reason, I’ve never written more than three books about a given main character, so my “series” aren’t series in the sense of eight or ten books about the same characters, but groupings of novels in the same “world.”  Even so, I hear from readers who want more in that world, and I read about readers who think I’ve done enough [or too much] in that world.  Interestingly enough, very few of the complainers ever write me; they just complain to the rest of the world, and for me that’s just as well.  No matter what they say publicly, I don’t know a writer who wants to get letters or emails or tweets telling them to stop doing what they like to do… and I’m no different.

But those who complain about series being too long usually aren’t dealing with the characters or the stories.  From what I’ve seen and read, they’re the readers who’ve “exhausted” the magic and the gimmicks.  They’re not there for characters and insights, but for the quicker “what’s new and nifty?”  And there’s nothing wrong with that, but it’s not necessarily a reason for an author to stop writing in that world; it’s a reason for readers who always want the “new” to move on.  There’s still “new” in the Recluce Saga; it’s just not new magic.  Sometimes, it’s stylistic.  I’ve written books in the first person, the third person past tense, and the third person present tense.  I’ve connected two books with an embedded book of poetry.  I’ve told the novels from both the side of order and the side of chaos, and from male and female points of view.  Despite comments to the contrary, I’ve written Recluce books with teenaged characters, and with characters in their twenties, thirties, forties, and older.  That’s a fair amount of difference, but only if the reader is reading for what happens to the characters… and virtually all the critics and reviewers have noted that each book expands the world of Recluce.  I won’t write another Recluce book unless I can do that, and that’s why there’s often a gap of several years between books.  The same is true of books set in my other worlds.

So… I guess, for me, the answer is that I stop writing about a character or a world when I can’t show something new and different, although the “new” may be quiet, or rooted in character.

Technology, Society, and Civilization

In today’s industrial states, most people tend to accept the proposition that the degree of “civilization” is fairly directly related to the level of technology employed by a society.  Either as a result or as an article of belief, then, each new technological gadget or invention is hailed as an advance.  But… how valid is that correlation?

In my very first blog [no longer available in the archives, for reasons we won’t discuss], I made a number of observations about the Antikythera Device, essentially a clockwork-like mechanical computer dating to 100 B.C. that tracked and predicted the movements of the five known planets, lunar and solar eclipses, and the moon, as well as the future dates of the Greek Olympics.  Nothing so sophisticated was developed by the Roman Empire, or anywhere else in the world, until more than 1,500 years later.  Other extremely sophisticated devices were developed in Ptolemaic Egypt, including remote-controlled steam engines that opened temple doors and magnetically levitated statues in those temples.  Yet both Greece and Egypt fell to the more “practical” Roman Empire, whose most “advanced” technologies were likely the invention of concrete, particularly concrete that hardened under water, and military organization.

The Chinese had ceramics, the iron blast furnace, gunpowder, and rockets a millennium before Europe, yet they failed to combine their metal-working skills with gunpowder to develop, and keep developing, firearms and cannon.  They had the largest and most advanced naval technology in the world at one point… and burned their fleet.  Effectively, they turned their backs on developing and implementing higher technology, but for centuries, without doubt, they were the most “civilized” society on earth.

Hindsight is always so much more accurate than foresight, but often it can reveal and illuminate the possible paths to the future, particularly the ones best avoided. The highest level of technology used in Ptolemaic Egypt was employed in support of religion, most likely to reinforce the existing social structure, and was never developed in ways that could be used by any sizable fraction of the society for societally productive goals.  The highest levels of Greek technology and thought were occasionally used in warfare, but were generally reserved for the use of a comparatively small elite.  For example, records suggest that only a handful of Antikythera devices were ever created.  The widest-scale use of gunpowder by the early Chinese was for fireworks – not weapons or blasting powder.

Today, particularly in western industrial cultures, more and more technology is concentrated on entertainment, often marketed as communications, but when one considers the time spent and the number of applications on such devices, the majority are effectively entertainment-related.  In real terms, the amount spent on basic research and immediate follow-up in the United States has declined gradually, but significantly, over the past 30 years.  As an example, NASA’s budget is less than half of what it was in 1965 in real terms, and in 2010, its expenditures will constitute the smallest fraction of the U.S. budget in more than 50 years.  For the past few years, NASA’s budget has been running around $20 billion annually.  By comparison, sales of Apple’s iPhone over nine months exceeded the annual NASA budget, and Apple is just one producer of such devices.  U.S. video game software sales alone exceed $10 billion annually.
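
As a rough cross-check of those comparisons, here is a minimal sketch using the round numbers cited above; the total federal outlay figure is my approximation for FY2010, not a figure from the text:

```python
# Comparing the spending figures cited above, all in billions of dollars.
# The ~$3,500B total for FY2010 federal outlays is an outside approximation.

nasa_annual = 20             # NASA's recent annual budget, per the text
iphone_nine_months = 20      # iPhone sales over nine months exceeded this
video_games_annual = 10      # U.S. video game software sales, per the text

federal_outlays_2010 = 3500  # approximate FY2010 federal outlays (assumption)

print(f"NASA share of federal outlays: {nasa_annual / federal_outlays_2010:.2%}")
print(f"iPhone sales, annualized, vs. NASA: {iphone_nine_months * 12 / 9 / nasa_annual:.1f}x")
print(f"Video games vs. NASA: {video_games_annual / nasa_annual:.0%}")
```

That works out to NASA at roughly 0.6% of federal outlays – against a mid-1960s peak of roughly four percent – which is consistent with the “less than half in real terms” claim, while a single consumer device line outsells the entire agency.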

By comparison, the early Roman Empire concentrated on using less “advanced” technology for economic and military purposes.  Interestingly enough, when technology began to be employed primarily for such purposes as building the Colosseum, flooding it with water, and staging naval battles with gladiators, all subsidized by the government, Roman power, culture, and civilization began to decline.

More high-tech entertainment, anyone?

Sacred? To Whom?

I’ll admit right off the top that I have a problem with the concept that “life is sacred” – not that I don’t feel that my life, and those of my wife, children, and grandchildren, are sacred to me.  But various religions justify various positions on social issues on the grounds that human life is “sacred.”  I have to ask why human life, as opposed to other kinds of life, is particularly special – except to us.

Once upon a time, scientists and others claimed that Homo sapiens was qualitatively different from, and superior to, other forms of life.  No other form of life made tools, it was said.  No other form of life could plan logically, or think rationally.  No other form of life could communicate.  And, based on these assertions, most people agreed that humans were special and their lives were “sacred.”

The only problem is that, the more we learn about life on our planet, the more every one of these assertions has proved to be wrong.  Certain primates use tools; even New Caledonian crows do.  A number of species do think and plan ahead, if not in the depth and variety that human beings do.  And research has shown, and continues to show, that other species do communicate, from primates to gray parrots.  Research also shows that some species have a “theory of mind,” again a capability once thought to be restricted to human beings.  But even if one considers just Homo sapiens, the most recent genetic research shows that a small but significant fraction of our DNA actually comes from Neandertal ancestors, and that research also indicates that Neandertals had the capability for abstract thought and speech.  The same research shows that, on average, both Neandertals and earlier Homo sapiens had slightly larger brains than do people today.  Does that make us less “sacred”?

One of the basic economic principles is that goods that are scarce are more valuable, and we as human beings follow that principle, one might say, religiously – except in the case of religion.  Human beings are the most common large species on the planet – six billion plus and growing.  Tigers and pandas number in the thousands, if that.  By the very principles we follow every day, shouldn’t a tiger or a panda be more valuable than a human?  Yet most people put their convenience above the survival of an endangered species, even while they value scarce goods, such as gems and gold, more than common ones.

Is there somehow a dividing line between species – between those that might be considered “sacred” and those that are not?  Perhaps… but where might one draw that line?  A human infant possesses none of the characteristics of a mature adult.  Does that make the infant less sacred?  A two-year-old chimpanzee has more cognitive ability than a human child of the same age, and far more than a human infant.  Does that make the chimp more sacred?  Even if we limit the assessment to fully functioning adults, is an impaired adult less sacred than one who is not?  And why is a primate that can think, feel, and plan less sacred than a human being?  Just because we have power… and say so?

Then, there’s another small problem.  Nothing living on earth can survive without eating, in some form or another, something else that is or was living.  Human beings do have a singular distinction there – we’re the species that has managed to get eaten by other species less than any other.  Yes… that’s our primary distinction… but is that adequate grounds for claiming that our lives, compared to the lives of other thinking and feeling species, are particularly special and “sacred”?

Or is the theological dictum that human life is sacred a convenient way of avoiding the questions raised above, and elsewhere?

Making the Wrong Assumption

There are many reasons why people, projects, initiatives, military campaigns, political campaigns, legislation, friendships, and marriages – as well as a host of other endeavors – fail, but I’m convinced that the largest and least recognized reason for such failures is that those involved make incorrect assumptions.

One incorrect assumption that has bedeviled U.S. foreign policy for generations is that other societies share our fundamental values about liberty and democracy.  Most don’t.  They may want the same degree of power and material success, but they don’t endorse the values that make our kind of success possible.  Among other things, democracy is based on sharing power and on compromise – a fact, unfortunately, that all too many U.S. ideologues fail to recognize, and one whose neglect may in fact destroy the U.S. political system as envisioned by the Founding Fathers and as developed by their successors… until the last generation.  Theocratically-based societies neither accept nor recognize compromise or power-sharing – except as a last resort, to be abandoned as soon as possible.  A related assumption is that peoples can act and vote in terms of the greater good.  While this is dubious even in the United States, it’s an insane assumption in a land where allegiance to the family or clan is paramount and where children are taught to distrust anyone outside the clan.

On a smaller scale, year after year, educational “reformers” in the United States assume, if tacitly and by their actions, that the decline in student achievement can be reversed solely by testing and by improving the quality of teachers.  This assumption is fatally flawed, because student learning requires two key factors – teachers who can and will teach, and students who can and will learn.  Placing all the emphasis on teachers and testing assumes that a single teacher in a classroom can and must overcome the influence of the media, the peer pressure to do anything but learn, the idea that learning should be fun, and all the other societal pressures that are antithetical to the work required to learn.  There are a comparative handful of teachers who can work such miracles, but basing educational policy and reforms on those who are truly exceptional is both poor policy and doomed to failure.  Those who endorse more testing as a way to ensure that teachers teach the “right stuff” assume that the testing itself will support the standards, which it won’t if the students aren’t motivated – not to mention the fact that more testing leaves less time for teaching and learning.  So, by a de facto assumption, not only does the burden of teaching fall upon educators, but so do the burdens of motivating the unmotivated and disciplining the undisciplined, at a time when society has effectively removed the traditional forms of discipline without providing any effective replacements.  Yet the complaints mount, and American education keeps failing, even as the “reformers” keep assuming that teachers and testing alone can stem the tide.

For years, economists used what can loosely be termed “the rational person” model for analyzing the way various markets operate.  This assumption has proved to be horribly wrong, as recent studies – and economic developments – have shown, because in all too many key areas, individuals do not behave rationally.  Most people refuse to cut their losses, even at the risk of losing everything, and most continue uneconomic behaviors not in their own interests, even when they perceive such behaviors in others as irrational and unsound.  Those who distrust the market system assume that regulation, if only applied correctly, can solve the problems, and those who believe that markets are self-correcting assume that deregulation will solve everything.  History and experience suggest both assumptions are wrong.

In more than a few military conflicts over recent centuries, military leaders have often assumed that superior forces and weapons would always prevail.  And… if the military command in question does indeed have such superiority, and is willing to employ it efficiently to destroy everything that might possibly stand in its way, then “superiority” usually wins.  This assumption fails, however, in all too many cases where one is unable or unwilling to carry out the requisite slaughter of the so-called civilian population, or where military objectives cannot be quickly achieved, because in virtually every war of any length a larger and larger fraction of the civilian population becomes involved on one side or the other, and “superiority” shifts.  In this regard, people usually think of Vietnam or Afghanistan, but the same sort of shift occurred in World War II.  At the outbreak of the war in 1939, the British armed forces had about 1 million men under arms, the U.S. 175,000, and the Russians 1.5 million.  Together, the Germans and Japanese had over 5 million trained troops and far more advanced tanks, aircraft, and ships.  By the end of the war, those ratios had changed markedly.
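
Using only the troop figures cited in the paragraph above, here is a minimal sketch of just how lopsided the 1939 starting ratio was; the 1945 U.S. figure in the closing comment is a widely cited approximation, not a number from the text:

```python
# Ratio of trained troops at the outbreak of WWII, using the figures
# cited above (in millions of men under arms).

britain, us, russia = 1.0, 0.175, 1.5
allies_1939 = britain + us + russia   # 2.675 million
axis_1939 = 5.0                       # Germany + Japan, "over 5 million"

print(f"Allies: {allies_1939:.3f}M vs. Axis: {axis_1939:.0f}M")
print(f"Allies fielded about {allies_1939 / axis_1939:.0%} of Axis strength")
# ~54% in 1939.  By 1945 the U.S. alone had roughly 12 million under arms
# (a widely cited approximation), which is how "superiority" shifted.
```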

While failure can be ascribed to many causes, I find it both disturbing and amazing that the basic assumptions behind bad decisions are seldom brought forward as causal factors… and I have to ask, “Why not?”  Is it because, even after abject failure, or a costly success that didn’t have to be so costly, no one wants to admit that their assumptions were at fault?