Archive for the ‘General’ Category

Life Out There?

Scientific findings in two areas seem almost in conflict, at least with regard to the question of whether there’s other intelligent life in the universe and how frequent it might be.  The first set of findings reports that life exists across a far greater spectrum of temperatures and pressures than most biologists dared to hope.  The second set of findings comes from astronomers, who are finding that, at least so far, other solar systems appear far more bizarre than ours, with planets in odd orbits, planets circling their suns in retrograde orbits, massive gas giants in tight solar orbits, all creating conditions that appear less favorable to life, or to complex cellular life, than in our solar system.

At this point, it’s clear that we are far from knowing enough to speculate knowledgeably about the frequency of life in the universe, let alone intelligent life… and yet…

Could it just be that life, of all sorts and kinds, arises under all ranges of conditions and under strange suns and stars?  Given the billions and billions of planets in the universe, and given the range of conditions under which life has evolved on earth, how could there not be life elsewhere?

But… given the immensity of the universe, and the distance between stars, will we ever know for certain?  And does it matter?

I’m afraid it does.  It matters because too many people in too many cultures have come to believe that somehow “we” – Homo sapiens – are special merely because we exist, that some deity created the entire universe and put us at the center of it.

Yet… how special are we?  In almost every decade over the past century, archeologists and paleontologists have discovered yet another variety of human forebear – Homo neanderthalensis, Homo floresiensis, Homo erectus, Australopithecus africanus, etc. – and many of these were not our ancestors, but cousins.  And all of them are extinct, with the exception of the Neanderthals, who live on in the genes of much of the world’s population.

Recently, a team of paleontologists discovered a big-brained dinosaur, one they believe was on the way to what we would call true intelligence – except it ran out of time when the climate changed.  Perhaps it, too, thought it was special, merely through the fact of existing.

Will it take the discovery of alien artifacts and signals to prove that we’re not that unique in the grand scheme of the universe?  Or would that discovery just trigger xenophobia and racial paranoia?

I don’t know, and I doubt anyone does, but what I do find intriguing as a science fiction writer is that the majority of genre novels dealing with such subjects tend to depict humans trying to prove they’re special, or acting as if we are.  All that, of course, raises the question of whether, if there are aliens out there, they’d even want to deal with us at present.

2010 – A Year of Change… Or More of the Same?

Certainly, there were many changes in the world, and in the United States, in 2010, but in many areas things seemed to stay the same.  Yet, which of the changes were “real,” and which of those things that seemed unchanged truly did change?

In the book field, an area obviously of concern to me, it’s fair to say that ebooks “arrived” – not that they haven’t been available to some degree for years, but 2010 marked the first year in which they accounted for a truly significant fraction of total book sales, although analysts will likely be trying to ascertain exactly what that fraction was for months to come.  With ebooks has also come the rise of publishers who are essentially ebook-only and who rely on print-on-demand trade paperbacks if pressed for a physical product.  Whether such publishers will become a larger part of the market or fade away is, at the moment, uncertain.

In science, one of the “biggest” announcements, although it received comparatively little media attention, was that astronomers have determined that the universe contains more than three times as many stars as previously thought, because the number of so-called red dwarf stars had been grossly undercounted – largely because optical telescopes on Earth could not pick many of them up, even in stellar neighborhoods comparatively close to Earth.  This also increases the chances for alien life, because red dwarf stars have a much longer and more stable lifespan than do brighter stars.  Will this change anything here on Earth?  Hardly likely.

In U.S. politics, of course, the balance of power in the legislative branch shifted considerably with the Republican takeover of the House of Representatives and the Democratic loss of a “gridlock-proof” [not that it always was] Senate.  That shift will likely result in very little being accomplished in 2011 or 2012, because the Republicans don’t want to accomplish anything except to roll back what the Democrats did, the Democrats have enough votes – and the President – to stop such efforts, and neither side has the initiative, the intelligence, or the will to work out compromise solutions.  So there really wasn’t much change there, either.

The war in Afghanistan continued in 2010, with escalating U.S. casualties, and is now the longest military conflict in U.S. history.  While the media continues to report various events in small stories and on back pages, the majority of the American people remain content to pay lip service to the military, to allow private contractor profiteering, and in general to complain about the war only in terms of its siphoning off funding for their desired social programs.  In short, no real change – except, of course, for the families and lovers of the increased numbers of dead and wounded.

2010 has been established as one of the three warmest years on record, at least since modern record-keeping began, despite unseasonably cold winters in the northeastern U.S. and in Europe, and that apparent paradox will continue to fuel opposition to dealing with the real issue of global warming, resulting in no real change in actions or positions.

The other real social change heralded in 2010, especially in western Europe and the United States, memorialized in part by the movie – The Social Network [because all momentous social movements need cinematic commemoration] – was the verification that the only forms of social contact that matter are those created and maintained by electronic means.  This is indeed a significant change, marked by the decline and possible demise of:  first meetings with significant others conducted with physical presence; actual conversations without overt and covert electronic interruptions and/or additions; efficient work habits and sustained mental concentration; and, of course, social niceties such as written paper thank-you notes.

In the end, did much really change?

When “Faster” Isn’t

I just returned from visiting family over Christmas, and, as a result of twelve hours spent in transit (and that was with NO delays), I got to thinking about “speed” in our modern society. We’re always told that technology is better and faster, but I have my doubts about such speed in the real world. It doesn’t matter how potentially or theoretically “fast” something is.  What matters is how fast it does what it does in the real world.

Because airports are ever more crowded and overscheduled, and because commercial aircraft don’t fly any faster than they did thirty years ago, flight times are longer than they were thirty years ago – and that doesn’t count all the extra minutes, and occasionally hours, spent in security lines and screening.  Train travel isn’t any better, either.  The Acela is supposedly capable of traveling between Boston and New York at 150 mph.  It doesn’t even approach 60% of that capability, of course, because the tracks it travels won’t handle that speed… and because it doesn’t have a dedicated rail system, but must share the rails with much slower commuter and freight trains.  All that may be one reason why, except in bumper-to-bumper rush hours in cities, most drivers exceed the speed limits on freeways and interstates whenever physically possible.  But because freeways everywhere are getting more and more crowded, those drivers aren’t getting to their destinations any faster.

Even spacecraft aren’t flying any faster than they did in the 1960s, not markedly, anyway, and we certainly haven’t been able to get human beings any farther from Earth than we did a generation ago.

But aren’t we in the age of electronic superspeed?  Not from what I can tell.  Because of all the bells and whistles, firewalls, and electronic security, even my brand-new laptop, loaded with one of the fastest processors and more memory of more types than I’ll ever come close to using, takes longer to boot up and load than my ancient 1996 laptop.  Email doesn’t get there any faster, and the whole process effectively takes longer because, even with all those electronic devices and systems, I receive more and more spam, and dealing with it takes more time than it used to… and any way you look at it, that means slower.

My wife reminded me that not only is the mail slower, but deliveries are fewer than when we were children.  It also costs almost 1500% more per ounce than then.  This is progress?

As far as I can figure, about the only thing that, in practice, goes faster than it did a generation ago is the money, because, regardless of the “official” statistics, everything that most people need costs more every year.  Now… if we could just get everything else moving that fast…

The “Other”

In fiction, a great deal has been written on the theme of the “other,” the outsider, the stranger, the one who doesn’t fit, and what has been written ranges from horror to the romantic, from the impossible to the trite, from Camus’s L’Etranger, the man who looks and acts normal, but isn’t, to Alien, a creature so different that it screams of otherness, even to the vampires of Twilight, who apparently seek sameness and try to conceal their otherness… and the list and examples go on and on.

But to me, there’s another “other” that is far more socially, politically, and economically horrifying.  In political terms, it’s the “other” of the late Senator Russell Long’s quip: “Don’t tax you, don’t tax me, tax that fellow behind the tree.”  Unhappily, this practice of singling out the “other” for responsibility, whether it be for taxes, political change, educational blame, immigration problems, etc., has gotten so far out of hand that no one seems even to recognize what’s happening.

Take education.  This morning I read an article about the problems a local, open-enrollment university has in getting students actually to complete their degree programs and graduate, and, once again, the “other” singled out for responsibility was essentially the faculty – as if the faculty had the sole responsibility for inspiring these students, for making sure they’re “interested” enough to attend classes, to choose their curriculum responsibly, to study, to learn the material.  On top of that, the state is pushing the idea that raising the percentage of college graduates will effectively solve an assortment of problems, from high unemployment to creating “better” jobs.  The target is something like 50% of all high school graduates graduating from college.  Duh… has anyone looked at the jobs required to maintain a civilization, including highly skilled ones that don’t require a college degree?  Electricians, plumbers, heating and air conditioning contractors, computer technicians, sheet metal workers, machinists – the list goes on for pages.  People need skills, but thinking that 50% of them should come through college degrees is insanity.  And, as I’ve noted before, rather than deal with the problems of lack of student initiative and responsibility, lack of resources, lack of a work ethic, and failing parental responsibilities, it’s so much easier to focus on the teachers.

Then there’s the responsibility for paying for federal government services.  While I’ll concede that those who make more should pay more – the exact formula being far more questionable – why exactly should close to 40% of the population bear no responsibility for those services at all, and insist that a smaller and smaller minority of the population bear a greater and greater share, so that the so-called rich become the “other”?

Immigration falls into the same category.  Massive numbers of Hispanics have flooded and are flooding into the United States, if at a lesser rate in the last year or so, and most of them are looking for a better life – why are they to blame for that, when ALL of our forebears did exactly the same thing?  Why are they to blame for fleeing the drug-trade-induced violence that permeates Latin America when the high demand for those illegal drugs in the United States is what has caused that violence?  Especially when we seem powerless to stop the trade through criminalization and by imprisoning millions of users… and unwilling to control it by legalizing it?  Rather than looking at the root causes of the immigration problem, it’s so much easier to single out the stranger, the immigrant, as the cause, when immigrants are only the symptom.

The problem of teenage pregnancies follows a similar pattern.  Because of the “benefits” of modern civilization, young people are becoming sexually mature at younger and younger ages, yet the complexity of a technological society is such that economic maturity comes later and later.  Human beings are not built biologically to abstain from sex for the ten-to-fifteen-year gap between physical maturity and social-economic maturity – and the vast majority can’t and don’t.  Yet religious fundamentalists of all stripes and varieties preach “abstinence” and “morality” – and blame sexual “immorality” on everything from culture to the media [not that they both don’t contribute], while pumping billions into purchasing the offerings of that same media and ignoring the root causes rather than addressing them in any meaningful way.

Whatever the problem, there’s always an “other,” whom all too many of us find it convenient to blame… and I find that “other-seeking” mentality far more horrifying than the “others” of cinema and fiction.  Some forty years ago, the cartoonist Walt Kelly, in his Pogo strip, made the observation, “We have met the enemy, and he is us.”

The problem is that it’s so much easier to blame the “other.”

What Ever Happened to Gratitude?

That’s the question my wife asked me the other day as she reflected on the semester she’d just completed.  As director of the university opera theatre program, she produces and directs at least one student production every semester, and she has done so for more than twenty years.  What she noted was that even ten years ago, students would offer cards or notes, or even small tokens of gratitude, for the efforts she made in producing and directing these programs – a gratitude, if you will, for the funds she expended that were not reimbursed by the university, and for the hours and hours of extra time spent in rehearsals and in providing additional personal instruction to performers who needed it.

This year, for the first time ever, she received not a single card, even though she is teaching more students than ever before.  Paradoxically, this was also a year in which her student evaluations were among the best ever, so the lack of cards or tokens of appreciation wasn’t likely due to student unhappiness.  Nor is it something that happened all at once this year.  Fifteen years ago, it wasn’t unusual for her to receive thank-you notes from students who successfully completed senior recitals, or from those she helped into graduate programs.  Over the last few years, those notes have dwindled away to nothing as well, again even though she is even more successful in getting more and more students to perform at a higher level.  And this is not something limited to my wife, but a change in social climate that her colleagues, both in her university and elsewhere, have noted.

There’s also an increasing interest in grades and less interest in mastering the techniques of singing and performing. Along with this increased emphasis on grades and “credentials” and the decline in expressed gratitude, or perhaps because of it, she and others have noted a growing attitude among students – and among younger faculty and professionals in the field – that these younger people have “done it all by themselves.”

There’s little or no awareness or recognition that no one “does it by himself or herself.”  Virtually all of us have had mentors, teachers, or benefactors somewhere along the way who made a difference, whether or not we wish to recognize them.  Along with this, I’ve also overheard more and more young professionals ask, when requested to do something professional, “What’s in it for me?”

To me, this growing focus on self, both in academia and in business, is a disturbing trend, and one that is mirrored by the trends in the financial community, where the focus seems to remain on how much compensation individuals can build up, rather than upon what they are accomplishing.  In the political area, the focus is on getting re-elected, no matter what the cost to the community or nation.  And in all areas, there’s less and less gratitude for what we’ve received and more and more complaints about what we haven’t… and yet, at the same time, more and more people are less and less willing to go out of their way for others.

Might it just be… just perhaps… that so much of the polarization in society is fueled by anger that others don’t appreciate what we’ve done, even as we fail to appreciate what others have done?

The Unmentioned Costs of Freedom

A federal judge in Virginia has declared a section of the recently passed healthcare law unconstitutional.  That section, which would not have taken effect until 2014, is the one that requires individuals to purchase health insurance or to pay various tax penalties.  While there’s little doubt that the question will go all the way to the U.S. Supreme Court, I would not be surprised to see this ruling upheld, because there’s a vast difference in law, and in practice, between requiring individuals NOT to do something “bad” and requiring them to take an affirmative action at their own expense – or pay a penalty for not spending money as required by the federal government.

This distinction can be a very fine line, but it still exists.  For example, car manufacturers are required to incorporate costly safety features in their new cars, but no consumer is required to buy a new car.  Manufacturing companies are required to meet standards for safety and emissions, but no corporation is mandated to build a new plant to produce widgets, etc.

The problem with a democratic system is that our freedoms often exact costs, and those costs often fall on others in one way or another.  One example of this in health care hits fairly close to home – my home.  My wife is a professor of voice, and she teaches college students.  In the last eighteen years, she has never had a year when she has not been confronted with sick and contagious students, as well as some with severe throat, sinus, or lung problems, who have no health insurance because neither they nor their parents have such insurance.  Not only do these students often infect others, but at times their lack of insurance threatens their own permanent well-being.  More times than I want to count, my wife has paid for visits to doctors, and she has negotiated special rates with specialists for certain tests and procedures for her students.  In the vast majority of cases, these students or their parents – and it is usually their parents’ doing – have made the choice of not paying for health insurance.  But there is a cost, either in time lost from work or studies [and lower grades, possibly even resulting in the loss of a scholarship], in overall health, or in the costs imposed on or assumed by others.  One of the largest such costs is that of uncompensated emergency room care.

The health care area is just one example. Another is our presumption of innocence.  Under our system of justice, someone arrested or charged with a crime is presumed innocent.  That presumption results in literally billions of dollars being spent to prove guilt and convict presumed criminals, many of whom escape punishment not because they are innocent, but because they had better lawyers.

Another area is child welfare.  Because we choose not to send poor families to poorhouses and debtors’ prisons, but to allow them freedom, we end up paying significant sums to welfare mothers and others, and subsidizing food – largely in the hope that at least some of it will get to children – rather than taking the children and letting their parents fend for themselves.

Yet another is the freedom of movement.  We allow people great freedom to own vehicles, to operate them at high rates of speed, and even presume that they will do so sensibly.  We also combine that freedom with that of drinking.  This “combination” of freedoms costs more than 40,000 lives annually, and billions in damages and destruction.

The last freedom we provide in the United States is simple.  In allowing people the freedom to innovate and to be successful, we also allow most people the freedom to make enough bad choices to hurt, if not destroy, themselves.

Am I suggesting a more totalitarian approach and an adoption of the Napoleonic Legal Code?  Heavens, no!  But what I am suggesting is that there are real and measurable costs to “freedom,” restricted as some think it now is, and that it is anything but “free” – and that part of those costs is included in our taxes, in our insurance bills, and in the prices of the goods we buy… as well as in human suffering and tragedy.

The “Value” Problem in Taxation

More than a few commentators on the left and elsewhere – as well as a host of Democratic legislators – are deploring the idea that families who earn more than $250,000 will be allowed to share in the continued lower tax rates of the so-called Bush tax cuts, and more than a few letters have graced the pages of various publications declaring that the rich and super-rich shouldn’t get such benefits.

Were the people who made $25,000 in the mid-1950s “rich”?  Certainly, no one I knew thought they were rich.  Well-off perhaps, even affluent, but certainly not anywhere close to rich.  Yet an income of $250,000 today is worth about what $25,000 was then, perhaps even less, adjusted for inflation.  In real terms, even gasoline prices aren’t that much higher than they were then, and they’re certainly far lower, in terms of the purchasing power of the dollar, than they were during the “gas crisis” of the early 1970s.

To be fair about this, I’m just as appalled by those on the right who declare that increasing the taxes on those making more than $250,000 will bankrupt small businesses.  If a business making, say, $500,000 annually can’t afford an additional $10,000-$30,000 in taxes, then that business is in trouble already.  And if a business is making enough to worry about increased taxes in the hundreds of thousands of dollars, it’s not a small business; besides which, at that level it should be incorporated and passing those taxes on to its customers, the way all the other U.S. corporations do – which is part of the reason why the whole idea of taxing corporate profits doesn’t make economic sense in a world economy… not that most politically motivated tax policies make economic sense.

I’d be among the first to admit that the United States government faces a fiscal crisis, but the reason isn’t that the rich are greedy, or that too many of the “poor” are undeserving, or that immigrants are “milking” the system – all of which are overblown stereotypes based on true anecdotes that account for a statistically small proportion of what has caused our unbalanced budgets and deficits.  The reason is that the American people, as a group, really have no understanding of numbers or what those numbers mean, and, unfortunately, resist anyone or any institution that wants to enlighten them.  Nor is there any serious questioning of the basis of the whole idea of taxation as now practiced.  Admittedly, if we want government services, they have to be paid for.  But why do we continue with a system that isn’t raising the revenue necessary to cover the services we demand, and yet reject either reducing demand or changing the basis of taxation?

If I walk into a McDonald’s and order a Big Mac, the cashier doesn’t ask me how much I make and price the sandwich accordingly.  The same is true at every retailer in the U.S. and for the majority of commercial services.  In fact, larger commercial customers usually get discounts for their larger orders. Yet all the services that the U.S. government supplies are essentially based on income and “cost” more the more someone makes.  Someone in the upper fifth of income in the U.S. pays a great deal more for his or her share of national defense, national parks, etc., than does someone in the lowest fifth, who often pays nothing at all.

Now… the rationale for higher individual costs of government [i.e., taxes] rests on the assumptions that wealthier people benefit more from government and that poorer people cannot afford to pay taxes.  Moreover, there is a feeling that it is somehow “unfair” to tax a well-off person and a poor person the same percentage of their income.  Yet, if one taxes someone who makes $100,000 at a ten percent rate, and someone who makes $20,000 at that same ten percent rate, the person who is better off is paying $8,000 more than the poorer person for the same government services.  That’s 400% more.  If one then adds in the “progressive” tax structure, the person who is well-off may be paying a tax rate of more than 20%, which works out to 900% more.  Does that person get nine times more government services?  No.
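For anyone who wants to verify the arithmetic above, here is a minimal sketch; the function name and the incomes and rates are the illustrative figures from the paragraph, not actual tax law:

```python
def tax_gap(income_hi, income_lo, rate_hi, rate_lo):
    """Compare what two earners pay for the same government services.

    Returns the absolute dollar difference and the percentage by which
    the higher earner's bill exceeds the lower earner's.
    Illustrative figures only, not actual tax law.
    """
    tax_hi = income_hi * rate_hi
    tax_lo = income_lo * rate_lo
    return tax_hi - tax_lo, (tax_hi - tax_lo) / tax_lo * 100

# Flat 10% rate on $100,000 vs. $20,000:
print(tax_gap(100_000, 20_000, 0.10, 0.10))  # (8000.0, 400.0)

# Progressive case: 20% on the higher income, 10% on the lower:
print(tax_gap(100_000, 20_000, 0.20, 0.10))  # (18000.0, 900.0)
```

The same flat rate already yields a bill five times larger ($10,000 versus $2,000, i.e., 400% more); the progressive rate makes it ten times larger (900% more), which is the "nine times more services" question posed above.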

In fact, it’s likely that poorer individuals get more government benefits than wealthier individuals.  I emphasize the word individuals because, once one factors in corporations, that picture changes.  Various organizations, from foundations to industries, do in fact get large benefits, and because, in effect, as I’ve discussed many times before, corporations essentially pay no taxes – they pass those costs on to their customers and consumers – they often have no real costs, or reduced costs, for the government services they receive.

Larger homes on larger lots pay more property taxes just about everywhere.  But do the owners of such homes actually require more municipal services?  The odds are that they don’t.  In fact, they may require fewer.

All of this brings up a larger question: Why do we tax people and their property on their income and the value of their property for government services?  Why don’t we just tax them on the basis of services?

The simple answer is that it’s politically unwise.  A recent example occurred in a Utah county that imposed a fee for police services for those areas of the county not belonging to the municipalities that paid for police services.  More than a few people simply refused to pay – and that could lead to the situation that occurred in another state, where firefighters refused to fight a fire where the homeowner had not paid a $75 firefighting fee, and the homeowner watched his home burn to the ground.

The second answer is that a significant fraction of the population cannot pay such fees, and failing to provide such government and municipal services would endanger those who can pay even more than those who cannot.  Allowing crime to go unchecked in neighborhoods that cannot pay for police services would only result in crime spreading, and in the end, those who can pay would pay even more to protect themselves.  A similar practicality applies to a number of services, from roads to sanitation, to regulation of food and highway safety, and so on.

Any community requires a baseline of services to survive.  So do nations, although that baseline varies by culture and the times.  The problem the United States faces today is that, as a nation, we’re asking for more in government programs and services than the majority of people wish to pay for.  It’s no secret that 10% of the population pays more than 70% of federal income taxes… and that, essentially, they pay for the privilege of being successful.  The plain fact is that those who are well-off pay more in taxes, both percentage-wise and in absolute terms, because they’re a minority and because society as a whole insists on it, not because it’s fair.

Today, the majority of Americans don’t and won’t pay for the bulk of services that they think government should provide. That same majority thinks that it’s wrong for the richer minority to object to paying the bulk of those costs.  Why exactly is it wrong for the “rich” to object to paying a disproportionate share, and why is it right for the majority to demand services it won’t support through taxes, especially when 30% of the population pays no federal income tax at all?  McDonald’s doesn’t give free food to thirty percent of its customers, and no one thinks that’s unfair, but government certainly gives free or reduced price services to more than thirty percent of its citizens.

Another Single-Focus Education “Fix”

Apparently, the latest “fad” to enter the education reform arena is an intense focus on “subject mastery,” unfortunately to the exclusion of other skills necessary for student success.  There’s nothing wrong with the idea that students need to master the subject matter they’re supposed to be studying.  Such mastery is absolutely necessary, but once again the reformers, at least all those mentioned in The New York Times article of November 28th, are throwing the baby out with the bath water.

They have observed, wonder of wonders, that many students with terrible grades actually know the material, and that many other students with good grades don’t.  They have rightly identified a real problem in many American schools – that appearance, behavior, and apparent attitude often result in inflated grades for students who really don’t learn what they should.  Unfortunately, from there a number have leapt to the assumption – and even implemented revamped curricula and standards on that basis – that very little besides subject mastery matters.  Homework is downgraded until it counts for nothing at all, as are attendance and behavior.

This idiocy – and it is idiocy – ignores so many factors that I almost don’t know where to begin.  First, homework.  If homework is designed properly, it should both require learning and build skills mastery.  It should also teach students research skills and get the point across that you can’t always find answers in one place.  Admittedly, all too much homework is indeed busywork, but that’s not a problem with the idea of homework; it’s a problem with how teachers use homework.  Moreover, if homework isn’t graded, in our society, unfortunately, it doesn’t get done, because we’ve taught children, by example, that the only things that are important are those that “count,” either in dollars or grades.  If homework doesn’t get done, then skills mastery suffers for most students.  In addition, both higher education and the jobs requiring that higher education also require “homework” – doing projects and presentations, research, etc. – and removing that facet of education, or downplaying it into insignificance, does students a great disservice.

Second, attendance.  Like it or not, most jobs require attendance.  It doesn’t matter how smart you are, because, if you’re not at work, sooner or later you’re going to get fired.  Discounting attendance because there are a few students bright enough to learn the material without being there – and those students are indeed a minority – sends a societal message that encourages a self-centered and eventually self-destructive attitude.  The same is true of behavior.  Employees who continually misbehave get fired.  College graduates who do the same seldom make it in either professional or executive positions.

Students not only have to master skills, but also have to learn how to learn and how to apply that learning in society – and to put all three together.  Yes, skills mastery is vital… but without the other factors, it’s also useless.

When will we as a society ever get away from the “one-big-simple-fix” attitude?

Cultural Isolation… and Reading

The kind folks at Goodreads featured two of my books, one fantasy and one science fiction, as their November choices for the Science Fiction and Fantasy Club members to read and comment on, if they wished.  The books were The Magic of Recluce and Haze.  As I suspected, I took a certain amount of flak on one aspect of The Magic of Recluce: my “creative” use of textual sound effects.  That was something I’d known about for years, especially since Dave Langford’s “poem” created solely from the sound effects in the first few Recluce books.  Needless to say, the later Recluce books have far, far fewer sound effects.  Some Goodreads readers also noted that I was a bit too elliptical in places, a tendency I think I’ve largely corrected in later fantasy books [after all, The Magic of Recluce was my very first fantasy book, written over twenty years ago, and I have learned a few more things about writing in the years since].

The negative comments about Haze, however, bothered me more – not because a number of readers didn’t like the book, since that’s to be expected.  Any book by any author will find some readers who don’t like it.  What bothered me was why these readers didn’t like the book.  Almost all of those who posted negative comments observed that they couldn’t connect with Keir Roget, the main character, because he showed no emotion.  In point of fact, that is not true.  He shows no overt emotion beyond politeness and tactfulness, or a quiet reserve, even when his life is threatened. It’s not that he has no emotions; it’s that they’re kept under tight rein, because in both his culture and his profession [security agent] revealing emotions can be dangerous, if not fatal, particularly when you’re already under suspicion, as Roget is.  The safest way not to reveal emotions is to repress them so that you don’t feel them strongly yourself, and that is exactly what Roget does.  There are numerous clues to what Roget feels in his small actions, but those clues are subtle.

From a reader’s point of view, this clearly presented a challenge, and that difficulty was magnified because the “culture” is future Earth – future southwestern Utah, in one series of events.  That’s a future where at least some U.S. readers “expect” a certain emotional pattern from the character, and Roget didn’t deliver.  Of course, if he had, he wouldn’t have survived even to the point where the book actually begins. I suspect that, had I made the entire culture more Sinese and identified the main character as being of Chinese heritage and genetics, readers would have had less difficulty, but perhaps not.

But what all the comments underline is that at least a certain percentage of readers are so isolated in their own culture that they have great difficulty in getting “outside” their own cultural and personal expectations, particularly when the setting “looks” familiar.  Yet that was actually one of the basic points of the book, shown in many ways – that what looks familiar may not be at all, and that our own future may be far more alien to us than many could possibly imagine.  The problem, of course, was that, for some readers, I succeeded in making that seemingly familiar future so alien that they could neither accept nor identify with it… and that doesn’t help sales a great deal.

What I’ve experienced with Haze may also reflect why comparatively few SF books, especially those with high sales levels, depict heroes or heroines with emotional complexions more than slightly different from those in current western society.  Emotional differences are far more alien than physical differences, it would seem, at least in current SF, and that’s why so many aliens are really just humans in disguise.

Data, Knowledge, and Wisdom

The November 20th edition of The Economist features an observation on the growth of data surrounding purchases of bonds, stocks, derivatives, etc. The article notes that since the founding of the Centre for Research in Security Prices at the University of Chicago in 1960, initially funded by Merrill Lynch, the number of academic economic journals dealing with, analyzing, or providing such data has grown from 80 then to over 800 today.

Yet some economists, such as Robert Shiller of Yale University, according to The Economist, dispute the value of such information, noting that even with all the proliferation of data, no one can explain the market meltdown of two years ago.  Others dispute Shiller, pointing out that the market demand for such information proves its value.

In my view, they’re both right, because each is talking about a different aspect of the information.  Shiller is talking about understanding how the securities markets actually work, especially in times when markets perform “abnormally,” while all those who want more and more data are talking about how valuable they find it in making money through trading.

Combine all that data with sophisticated trend analysis and you get knowledge that can make a great deal of money – almost always in short-run situations.  What all that data won’t tell you is when something basic is going to change, and change abruptly.  And those who mine the data are more than happy to use it 99% of the time to make piles of money. As for the one percent of the time that they’re wrong… well… everything they’ve made the rest of the time covers that – for them.  What their profits don’t do is remedy the vast economic damage that ripples through the economy when one of those unforeseen market meltdowns occurs.

The problem with the computerized use of all this securities market data is that, because it works so well so much of the time for those with the resources to exploit it, there’s little incentive to fund or look into basic research in the field.  In addition, the economists who do all the short-term analysis are, according to Professor Shiller, “idiot savants, who get a sense of authority from work that contains lots of data.”   Again, the problem is that the focus on daily market economics stresses immediate returns to the detriment of long-term understanding… or wisdom, if you will.

And what else is new?

Black Friday

Today is “Black Friday,” the day after Thanksgiving when supposedly the Christmas shopping madness seizes the American people and drives them into a frenzy of buying for themselves and others.  While the term “Black Friday” was used at least as far back as 1869, it originally referred to financial crises, but at some point in the mid-1970s, newspapers and other media began referring to the day after Thanksgiving as an indicator of whether merchants were likely to be “in the black,” or profitable, because of the Christmas buying season. After a time, and particularly in the last decade, this meaning of “Black Friday” has usurped all others.

Unfortunately, there are a number of problems with this usage, and especially with the implications behind it.  Because everyone seems to want to concentrate on the economic side, I’ll begin there.  First, the idea of “Black Friday” emphasizes consumer buying and consumption as the primary measure of U.S. economic power and success, at a time when the majority of the goods consumed come from other countries.  Second, it ties expectations to a single day of the year. In addition, it creates a mindset that suggests that, if a retail business’s holiday revenues are not substantial, then something is wrong.  While that may be true for seasonal goods, not all consumer products are seasonal.

Beyond those reasons are the ethical ones.  Do we really want to continue to push the idea of consumer spending as the only – or even the principal – way to keep the economy going, particularly at a time when our entire societal infrastructure, from roads and bridges to financial structures to the electrical grid, needs rebuilding or total restructuring?  At a time when Americans, with something like five percent of the world’s population, already consume 25% of its annual output?  Do we want to create a mindset that emphasizes consumption at a time when so many people are struggling to make ends meet?

Then there’s the purely practical question of whether it’s a good idea to emphasize consumption – most of it temporary in nature – when those goods are largely produced overseas, while neglecting building and using capital goods that will generate jobs in the United States.

Black Friday – an economic success or a societal disaster?

More on Poetry

As some readers may know, my wife is a professional singer who is also a professor of voice and opera.  Among her many duties is that of teaching aspiring classical singers diction and literature.  One notable type of song literature required of these students is that of “art song,” and a significant percentage of art song consists of poetry set to music by composers.  Various forms of art song, if called by different names, have been composed in many languages, although classical singers usually begin by learning art songs in English, French, German, and Italian.

Earlier this semester, my wife was beginning the section on American and English art song, and out of a class of fifteen students, she found that what they had read in high school appeared to be limited to a bit of Chaucer and Shakespeare, along with Emily Dickinson, and perhaps T.S. Eliot.

None of the students had learned any poetry by such greats as John Milton, William Blake, Shelley, Byron, Keats, Yeats, Robert and Elizabeth Barrett Browning, Christina Rossetti, Robert Louis Stevenson, Thomas Hardy, W.H. Auden, A.E. Housman, Edna St. Vincent Millay, and Amy Lowell. In fact, none of them even appeared to have read Robert Frost. Moreover, none of them could actually read verse, except in a halting monotone.  This lack of background in poetry puts them at a severe disadvantage, because these are the poets whose words have been set to music in art song and even in choral works.

These were not disadvantaged students. They came out of high school with good grades and good standardized test scores.  Yet they know very little about the historical written arts of their own native language.  In turn, this lack shows up in their narrow word usage, limited command of metaphor, and general weakness in both oral and written expression. Whether it’s related or not, there does appear to be a correlation between the loss of solid English instruction and the growth of such phrases as “you know,” “I mean,” and “like… dude” – and scores of other meaningless fillers used to cover a lack of even semi-precise expressiveness.

Bring back the great old poets… all of them.

Bookstores, Literacy… and Economics

Although I was surrounded by books growing up, I can’t recall ever going to a bookstore to obtain a book until I was in college.  I was a frequent visitor to the local library, and there were the paperback SF novels my mother picked up at the local drugstore, but bookstores weren’t really a part of my orbit, and their absence didn’t seem to affect my voracious reading habit.  As an author, however, I’ve become very aware of bookstores, and over the past twenty years, I’ve entered over a thousand different bookstores, in forty-two of the fifty states – over 120 in the space of three weeks on one tour.  And because I was once an economist, I kept track of the numbers and various other economics-related aspects of those bookstores.

The conclusion?  Well… there are many, but the one that concerns me most is the change in bookselling – in where books can be obtained – and what that change means for the future functional literacy of the United States.

When I first became a published novelist thirty years ago, for example, the vast majority of malls had small bookstores, usually a Waldenbooks or a B. Dalton, often two of them, one at each end of the mall, or perhaps a Brentano’s or another chain. And I was very much aware of them, because I spent more time in malls than I really wanted to, which is something that occurs when one has pre-teen and teenaged daughters.  According to the statistics, at that time, there were over 1,500 Waldenbooks in malls nationwide, and hundreds of B. Daltons, not to mention all the other smaller bookstores. Today, the number of Waldenbooks stores totals fewer than 200, and most of the closures came because Borders Books, the present parent company of Waldenbooks, did not wish to continue them once it acquired the chain, preferring to replace many small stores with larger Borders stores.  Even so, Borders has something less than five hundred superstores.  The same pattern holds true for Barnes and Noble, the parent of the now-defunct or nearly defunct B. Dalton stores.  The actual number of bookstores operated by these two giant chains is roughly half what they operated twenty-five years ago.  At the same time, the growth of the chain superstores has squeezed out hundreds of smaller independent bookstores.

Prior to 1990, there were somewhere in the neighborhood of 400 book wholesalers in the United States, and there were paperback book racks in all manner of small retail establishments.  Today there are only a handful of wholesalers, and the neighborhood book rack is truly a thing of the past.

Add to this pattern the location of the book superstores.  Virtually all of these stores are located in the most affluent sections of the areas they serve.  In virtually every city I’ve visited in the last fifteen years, there are huge sections of the city, sometimes as much as 60 percent of the area, if not more, where there is no bookstore within miles, and often no convenient public transport. There are fewer and fewer small local bookstores, and most large bookstores are located in or near upscale super malls.  Very few, if any, malls serving the un-affluent have bookstores.  From a short-term economic standpoint, this makes sense for the mega-store chains.  From a cultural standpoint, and from a long-term customer development standpoint, it’s a disaster because it limits easy access to one of the principal sources of books largely to the most affluent segments of society.

What about the book sections in Wal-Marts?  The racks and carrels in the average super Wal-Mart number roughly a third of those in the smallest of the Waldenbooks stores I used to visit, and the range of books is severely limited – effectively to the best-sellers of each genre.

Then, because of recent economic pressures, the local libraries are seeing their budgets cut and cut, as are school libraries – if the school even has a library.

Research done for publishing firms has shown that so-called impulse book purchasing – the kind once made possible by neighborhood book racks and ubiquitous small mall bookstores – accounted for a significant percentage of new readers… and the comic book racks next to the book racks provided a transition from the graphic format to books.

Some have claimed that books will be replaced by the screen, the iPhone, and other screen “apps,” and that may well be… for those who can already read… but the statistics show that while fewer Americans are totally illiterate, an ever-increasing percentage is effectively functionally illiterate.

Is that functional illiteracy any wonder… when it really does take a book to start learning to read and when books are becoming harder and harder to come by for those who need them the most?

Voting Influence

Decades ago, the late science fiction writer Mack Reynolds wrote a novel depicting a future United States in which citizens received one “basic” vote, and then could “earn” additional votes for various accomplishments, such as earning advanced degrees, completing a period of military and/or public service, etc.  At the time of the book, Reynolds received a great deal of flak for that concept, and I suspect, were anyone to advance such an idea today, the outcry would likely be even greater.

But why?  In point of fact, those with great sums of money already exert a disproportionate amount of influence over the electoral process, especially in the United States, now that the U.S. Supreme Court has granted corporations and wealthy individuals access to the media limited only by the amount of their resources – in effect granting such entities the impact of millions of votes. The rationale for the court decision is that restriction on the use of money for advertising one’s political views and goals is in effect a restriction on First Amendment freedom-of-speech rights.  The practical problem with this decision is that, in a culture dominated by pervasive mass media, it multiplies manifold the effective exercise of freedom-of-speech rights by those who have large amounts of wealth.  Since, given the costs of effectively using mass media, only the top one or two tenths of one percent of the population can exercise such media-enhanced rights, the result of the decision is to give disproportionate influence to a tiny fraction of the population.  Moreover, as a result of the decision, in most cases, donors to groups and corporations availing themselves of this “right” do not even have to disclose their donations or spending.

The Court’s decision essentially grants greater weight in determining who governs us strictly on the basis of income and wealth.  Are not other qualities and accomplishments also of equal or greater value to civilization?  And if so, why should they not be granted greater weight as well? That was really the question Reynolds was addressing in postulating such a change in American society, and it’s a good question.

Before you dismiss the idea out of hand, consider that the way our current system operates grants greater governmental influence to a small group of people whose principal talent is making money.  It does not grant such influence to those who teach, who create, or who perform unheralded and often dangerous military and public service.  And, as the revelations about Iraq have shown, at times such money-making operations have in fact been based on taking advantage of American soldiers deployed abroad – so that those with great sums of money not only gained electoral influence, but did so at the expense of those who served their country… many of whom died doing so.

Then… tell me again why we don’t need an electoral or regulatory counterbalance to unbridled use of wealth in trying to influence elections.

Boring?

The other day, someone commented on the blog that, unfortunately, Imager’s Intrigue and Haze were boring and major disappointments.  I replied directly, something I usually avoid doing, at least immediately, because the comment punched several of my buttons.  As many of my readers well know, my first fantasy, The Magic of Recluce, features Lerris, a young man who, at the beginning of the novel, finds virtually everything in his life boring – and who, by the end, finds everything he railed against far less so… yet the world in which he lives has changed very little.

I have no problem with readers saying that they personally found a book of mine – or anyone else’s – boring… or whatever.  I have great problems when they claim the book is boring, without qualifications.  A book, in itself, is neither exciting nor boring.  It simply is.  When a reader picks up a book and reads it, there is an interaction between what the reader reads and what the writer wrote.  What a reader finds interesting depends at least as much on the reader as on the writer.  There are some books that have been widely and greatly acclaimed that I do not find interesting or enjoyable, and the same is true of every reader.  In general, however, books that are well-written, well-thought-out, and well-plotted tend to last and to draw in a greater percentage of readers than those that are not.  The fact that books with overwhelmingly positive reader and critical reviews, books that also sell in large numbers, still receive comments like “dull,” “boring,” and “slow” suggests that no book can please everyone.  That’s not a problem.

The problem, as I see it, is that there are more and more such unthinking comments, and those comments reflect an underlying attitude that the writer must write to please that particular reader – and that the author has somehow failed if he or she has not done so.  This even goes beyond the content of the books.  A number of my books – and those of many other authors – are now receiving “one-star” or negative reviews, not because of faults in the book, but because the book was not available in a cheaper e-book version at the time the hardcover was published.  Exactly how many people, in any job, would think it fair to receive an unsatisfactory performance review because they didn’t offer their services at a lower rate?  Yet that’s exactly what the “one-star” reviewers are essentially saying – that they have the right to demand when, at what price, and in what version a book should be released.

It took poor Lerris exile and years to understand that Wandernaught was not boring, but that he was bored because he didn’t want to understand.  But that sort of insight seems lacking in those whose motto appears to be: Extremism in the pursuit of entertainment (preferably cheap) is no vice, and moderation in the criticism of those who provide it is no virtue.

The Failure of Imagination

On my way to and back from the World Fantasy Convention, I managed to squeeze in reading several books – and a bit of writing.  One of the books I read, some three-hundred-plus pages long, takes place in one evening.  While I may be a bit off in my page count, perhaps fifty of those pages – the prologue plus interspersed recollections and flashbacks – provided the background for the incredibly detailed action, consisting of sorcery, battles, fights, and more fights, resulting in… what?  An ending that promised yet another book. To me, at least, it read more like a novelized computer game [and no, it’s not, at least not yet].  If I hadn’t been on an airplane, and if the book hadn’t come highly recommended, I doubt I would have finished it.

The more I’ve thought about this, the more it bothered me, until I realized that what the book presented, in essence, was violence in the same format as pornography, with detailed descriptions of mayhem in realms of both the physical and the ghostly, with just enough background to “justify” the violence.  While I haven’t done as much reading of the genre recently as I once did – I read 30-40 books in the field annually, as opposed to the 300 plus I once read – to offer a valid statistical analysis, it seems to me that this is a trend that is increasing… possibly because publishers and writers are trying to draw in more of the violence-oriented gaming crowd.  Then again, perhaps I’ve just picked the wrong books, based on the recommendations of reviewers who like that sort of thing.

And certainly, this trend isn’t limited to books. In movies, we’re being treated to – or assaulted with, depending on one’s viewpoint – more and more detailed depictions of everything, but especially of mayhem, murder, and sexually explicit scenes. The same is true across a great percentage of what is classified as entertainment, and I’m definitely not the first commentator to notice it.

Yet… all this explicitness, at least to me, comes off as false.  Older books, movies, and the like that merely hint at sex, violence, and terror – leaving the reader or viewer in the shadows, so to speak, imagining the details – achieve a “reality” far more convincing than entertainment that leaves nothing to the imagination.

This lack of reader/viewer imagination and mental exploration also results in another problem: lack of reader understanding. On books such as Haze, in particular, I’m getting two classes of reader reviews – those from readers who appear truly baffled and those from readers who find the book masterful. The “baffled” comments appear to come largely from readers who cannot imagine, let alone understand, the implications and pressures of a society different from their own experience and preconceptions… and they blame their failure to understand on the writer.  The fact that many readers do understand suggests that the failure is not the writer’s.

All this brings up another set of questions.  Between the detailed computer graphics of games, the growth of anime, manga, and graphic novels, and the CGI effects in cinema, whatever happened to books, movies, and games that rely on the imagination? A generation ago, children and young adults used their imagination in entertainment and reading to a far greater extent. The immediate question is to what degree the proliferation of graphic everything minimizes the development of imagination. And what are the ramifications for the future of both society and culture?

The Technology Trap

Recently, I read some reader book reviews of a science fiction novel and came across a thread that surfaced in several of the reviews, usually in a critical context.  I realized, if belatedly, that what I had read was an underlying assumption behind much science fiction and something that many SF readers really want.  The only problem, I also realized, is that what they want is something that, in historical and practical contexts, is as often missing as present.

What am I talking about?  The impact of technology, of course.

Because we in the United States live in a largely technology-driven, or at least highly technologically supported, society, there is an underlying assumption that technology will have a tremendous impact on society, and that every new gadget somehow offers an improvement to society.  I have grave doubts about the second, but that’s not the assumption I’m going to address, but rather the first, the idea that in any society, technology will triumph.  I’d be the first to agree that one can define, to some degree, a culture or society by the way in which it develops and uses technology, but I’d have to disagree on the point that developing technology is always a societal priority.

Imperial China used technology, but there certainly wasn’t a priority on developing it past a certain point, and in fact, one Chinese emperor burned the most technologically advanced fleet in the world at that time.  The Chinese developed gunpowder and rockets, but never developed them to anywhere close to their potential.  As I’ve noted in a far earlier blog, the Greeks developed geared astronomical computers well over a millennium in advance of anyone else… and never applied the technology to anything else.  Even the British Empire wasn’t interested in Babbage’s mechanical computer.  And, for the present, at least, western civilization has turned its back on supersonic passenger air transport, even though it’s proved to be technically feasible.

Yet, perhaps because many SF readers are enamored of technology, there seems to be an assumption among a significant fraction of readers that when an author does not explore or exploit the technology of a society and give it a significant role, at least as societal background, he or she has somehow failed to maximize the potential of the world depicted in the novel in question.

Technology is only part of any society, and, at times, and in some places, it’s a very tiny part.  Even when it underpins a society, as in the case of western European-derived societies in our world, it often doesn’t change the societal structure, but amplifies the impact of already existing trends.  Transportation technology improves and expands the existing trade networks, but doesn’t create a new function in society.  When technology does change things, it usually does so by changing the importance of an existing structure, as in the case of instant communications.  And at times, as I noted above, a society may turn its back on better technology, for various reasons… and this is a facet of human societies seldom explored in F&SF and especially in science fiction, perhaps because of the myth — or the wish — that technology always triumphs, despite the historical suggestions that it doesn’t.

Just because a writer doesn’t carry technology as far as it might go theoretically doesn’t mean the writer failed.  It could be that the writer has seen that, in that society, technology won’t triumph to that degree.

Election Day… and the Polarization of Everything?

The vast majority of political observers and “experts” – if pressed, and sometimes even when not – will generally admit that the American political climate is becoming ever more polarized, with the far right and the far left refusing to compromise on much of anything.  For months now, the Republican party in the U.S. Senate has said “No!” to anything of substance proposed by the Democratic leadership, and in the health care legislation, for example, the Democrats effectively avoided dealing with any of the issues of interest to the Republicans, some of which, such as medical malpractice claims reform, have considerable merit.

Yet, if one looks at public opinion polls, most Americans aren’t nearly so radical as the parties that supposedly represent them, although recently that has begun to change, not surprisingly, given the continual public pressure created by the tendency of media news outlets to simplify all issues to black and white… and then to generate conflict, presumably to increase ratings.

Add to this the extreme media pressure placed on any politician who seeks a compromise or another approach outside of either party positions or his or her own past pronouncements, and we have a predictable outcome – polarization and stalemate.

There are times when stalemate may be preferable to ill-considered political action, but at present, there are a number of areas affecting the United States where some sort of action is and has been necessary.  A relative of mine just got her latest health insurance bill – over $1,000 a month for single-party coverage – and this wasn’t a gold-plated health plan by any means.  For two people, the premium would have been over $1,600 monthly, or over $19,000 a year.  Now… the median family income in the United States runs around $50,000 at present, and a $1,600-a-month health insurance bill comes to over 35% of that – and that doesn’t include deductibles and co-payments.  Single-parent households have a median family income of roughly $35,000, and $1,000 a month is more than a third of before-tax income.  These figures do tend to suggest that some sort of action on health care insurance was necessary, but the vast majority of one party effectively declared that they weren’t interested in anything proposed by the majority party, and the majority party effectively refused to consider any major issues brought to the table by the minority.  By parliamentary maneuvers, the majority slid through legislation thoroughly opposed by the overwhelming majority of the minority – and further increased the political polarization in Washington.
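Since the argument above rests on a bit of arithmetic, here is a quick check of those percentages (a sketch only; the premium and income figures are the ones quoted in the text, not independently sourced):

```python
# Verify the health-insurance percentages quoted in the text.
# All dollar figures are the ones cited above, not independent data.
couple_premium_monthly = 1600          # two-person coverage, $/month
single_premium_monthly = 1000          # single coverage, $/month
median_family_income = 50_000          # $/year, per the text
median_single_parent_income = 35_000   # $/year, per the text

couple_annual = couple_premium_monthly * 12            # annual premium, two people
couple_share = couple_annual / median_family_income    # share of median family income

single_annual = single_premium_monthly * 12                 # annual premium, one person
single_share = single_annual / median_single_parent_income  # share of single-parent income

print(f"Two-person premium: ${couple_annual:,}/yr = {couple_share:.1%} of median family income")
print(f"Single premium: ${single_annual:,}/yr = {single_share:.1%} of single-parent income")
```

The two-person premium works out to $19,200 a year, or about 38% of a $50,000 income – consistent with the “over $19,000” and “over 35%” in the text – and the single premium to roughly 34% of a $35,000 income, i.e., more than a third.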

Similar polarization can be seen on other major issues, from immigration to energy policy and climate change legislation, and, of course, taxation.   One party wants to soak those who have any income of substance, and the other wants to reduce taxes so much that we’ll never dig our way out of the deficit.  Those who would suffer the greatest taxation don’t have enough to cover the deficit, and cutting or eliminating taxes, as some have proposed, would destroy us as a nation.

Tell me… exactly how does this polarization resolve anything?

Transformational… Reflective…?

In response to one comment on a recent blog, I noted that vocal music had changed over the last forty years, and another commenter made the point that languages evolve… both of which raised in my mind the question of the role art plays in societal evolution. Put bluntly, does art lead such transformations, or does it merely reflect them?  Or is it the usual mix of a little leading, and a great deal of reflection?

While I’m no art historian, it does appear to me that changes in the predominant or critically acclaimed styles of painting do not come gradually but irregularly, and at times, at least, they have preceded significant societal changes, as in the case of the rise of the impressionists or of the modern art movement of the 1950s.

Music historians have divided classical music into periods, but how does one explain the changes from one period to the next?  Were giants such as Bach and Mozart so dominant in their mastery that they forced the composers who followed them to innovate?  Beethoven’s great Ninth Symphony, which is unlike any other work of its time and, for that matter, unlike anything of comparable quality composed soon thereafter, was written at a time when the “old order” had been restored.  Was he reacting to the currents of past revolution, or anticipating the changes to come?  It’s easy enough to say that such questions were irrelevant to Beethoven, except that it’s unlikely that any creative soul is impervious to his environment, especially in Beethoven’s case, since the currents of politics swirled around Vienna throughout the period, particularly after 1800, when his most daring works were composed.

Popular music, especially in the United States, underwent radical changes in the 1960s, and significant societal changes also occurred.  Did they occur in tandem, or did the music reinforce the impetus for change?  Can anyone truly say?

Science fiction aficionados often like to claim that SF leads the way into the future, but does it?  Isaac Asimov did foresee the pocket calculator, but the genre’s success record, whether in predicting or in inspiring social and technological change, is pretty weak.  Almost 40 years ago, in my very first story, I predicted computer analysis and economic modeling, somewhat accurately, as it turned out, and cybercrime as well, and while cybercrime has indeed become a feature of current society, I never predicted its most prevalent form.  I did predict institutional cybercrime of the general type that caused the last economic meltdown, and, so far as I can tell, that story was one of the first, if not the first, to suggest that type of crime, but… somehow… I don’t think my little story inspired it.  I just saw where technology and trends might lead.

But, of course, that leaves open the question… how much do the arts influence the future?

The Resurgence of Rampant Tribalism

Several pieces of historical and archeological “trivia” clicked together for me the other day.   First was an event in the early history of what became the United States, during the period when the Indians had had enough and decided to push the English out of New England – a conflict known as King Philip’s War, named for the young chief of the Wampanoag tribe.  Despite differing religious beliefs, the English colonists were united, while the Indians were fragmented into more than half a dozen local tribes, two of which, the Pequot and the Mohegan, supported the English.  On top of that, at a point when the English colonists were in great difficulty, the neighboring Mohawk tribe, rather than support King Philip, attacked the Wampanoag.

The second piece was the recollection that one of the contributing reasons for the Spanish success against the Aztecs was that tribes conquered by the Aztecs united with the Spaniards.  The third was an article in Archeology revealing recent discoveries about the ancient Etruscans, one of which was that, despite their initial control of the central Italian peninsula and a higher level of technology than the Romans, in the end Rome triumphed, largely because the Etruscan cities could never form a truly unified nation.  Greece is another example: the ancient Greek city-states never managed to form a unified nation, except briefly in short-lived alliances and then under the iron fist of Alexander, and, despite their comparatively advanced technology and civilization, they ended up dominated by the Romans.

The largest single difference between a nation and a collection of tribes is that a nation is held together by an overriding set of common beliefs.  The United States began as a “tribal” confederation, but succeeded in unifying what amounted to regional tribes through the idea and principles of a federal republic… for a period of little more than seventy years before the beliefs of the southern “tribes” resulted in rebellion.  One of the contributing factors to the defeat of the South was the lack of cohesion among the “tribal states” of the southern confederacy, a lack exemplified by the fact that some southern railways used different track gauges from others – and it does get hard to move supplies when you have fewer railways and they don’t interconnect.

While history does not repeat itself in any exact fashion, patterns and “echoes” do recur, and one of the patterns of history is that large, unified countries almost always triumph over smaller nations and over those that are, or resemble, tribal confederations.  Another pattern is that confederations or unions seldom endure.  They either merge into a nation of shared values, as did the United States, or they fragment, as did the former USSR.

The problem facing the United States, and the world, today is that tribalism is again becoming rampant, though now more in the form of value systems, largely religious, that are increasingly intolerant of those with differing beliefs.  This tribalism, instead of seeking common ethical and practical ground, manifests itself in demanding that those with other beliefs be repudiated, if not exiled or exterminated, and it often demonizes even those with comparatively minor differences in belief.

More than a few political scientists have theorized that this trend could conceivably, if unchecked, result in the political fragmentation of the United States into several nations.  While I’m not that pessimistic, I do see that this tribalization has produced a growing failure of society and government and an increasing inability to deal with critical national problems, ranging from failing infrastructure to financial overcommitment and endless wars around the globe.

And… as another symptom… is it that surprising that one of the top-rated media shows is the “tribally-based” Survivor series?  More tribalism, anyone?