Archive for the ‘General’ Category

Ends or Means

By the time they reach their twenties, at least a few people have been confronted, in some form or another, with the question of whether the ends justify the means.  For students, that usually takes the form of cheating – does getting a high grade in order to get into a better college [hopefully] justify the lack of ethics involved in cheating for it?  In business, it’s often more along the lines of whether focusing on short-term success, which may result in a promotion or bonus [or merely keeping your job in some corporations], is justified if it creates long-term problems or injuries to others.

On the other hand, I’ve seldom seen the question raised in a slightly different context.  That is, are there situations where the emphasis should be on the means? For example, on vacation, shouldn’t the emphasis be on the vacation, not on getting to the end of it?  Likewise, in listening to your favorite music, shouldn’t the emphasis be on the listening and not on getting to the end?

I suppose there must be some few situations where the end is so vital that the means don’t matter, but the older I get, the fewer examples of that I’ve been able to cite because I’ve discovered that the means so affect the ends that you can seldom accomplish the ends without a disproportionate cost in collateral damage.

This leads to those situations where one needs to concentrate on perfection in accomplishing the means, because, if you don’t, you won’t get to the end.  Such instances include piloting, downhill ski racing, Grand Prix driving [or driving in Los Angeles or Washington, D.C., rush-hour traffic], or undertaking all manner of professional tasks, such as brain or heart surgery, law enforcement, or firefighting.

The problem that many people, particularly students, have is a failure to understand that, in the vast majority of cases, learning the process is as critical as the result [if not more so].  Education, for example, despite all the hype about tests and evaluations, is not about tests, grades, and credentials [degrees/certification].  Even if you get the degree or certification or other credential, unless you’ve learned enough in the process, you’re going to fail sooner or later – or you’ll have to learn all over again what you should have learned the first time.  Unfortunately, because many entry-level jobs don’t require the full skill set that educators were attempting to instill, that failure may not come for years… and when it does, the results will be far more catastrophic.  And, of course, some people will escape those results, because there are always those who do… and, unfortunately, for some reason, those “evaders” are almost invariably the ones cited as examples by those who don’t want to do the work – as reasons why they shouldn’t bother learning the processes behind the skills.

Studies done on college graduates two generations ago “discovered” that such graduates made far more income over their lifetimes than did those without a college degree.  Unfortunately, the message became that the degree was what mattered, not the skills it represented, and ever since then people have focused on the credential rather than on the skills – a fact emphasized by rampant grade and degree inflation and documented by the noted scholar Jacques Barzun in his book From Dawn to Decadence: 500 Years of Western Cultural Life, 1500 to the Present, where he observed that one of the reasons for the present and continuing decline of Western civilization is that our culture now exalts credentials over skills and real accomplishments.

One of the most notable examples of this is the emphasis on monetary gain, as exemplified by developments in the stock and securities markets over the past two years.  The “credential” of the highest profit at any cost has so distorted the process of underwriting housing and business investment that the profit levels reaped by various sectors of the economy bear no relationship to their contribution to either the economy or the culture.  People whose decisions in pursuit of ever higher and unrealistic profit levels destroyed millions of jobs are rewarded with the “credential” of high incomes, while those who police our streets, fight our fires, protect our nation, and educate our children face salary freezes and layoffs – all because the ends are presumed to justify any means.

Hypocrisy… Thy Name Is “Higher” Education

The semester is over, or about over, in colleges and universities across the United States, and in the majority of those universities another set of rituals will be acted out.  No… I’m not talking about graduation.  I’m talking about the return of “student evaluations” to professors and instructors.  The entire idea of student evaluations is a largely American phenomenon that caught hold sometime in the late 1970s, and it is now a monster that not only threatens the very concept of improving education but also serves as a poster child for the hypocrisy of most college and university administrations.

Now… before we go further, let me emphasize that I am not opposing the evaluation of faculty in higher education.  Far from it.  Such evaluation is necessary and a vital part of assuring the quality of faculty and teaching.  What I am opposed to is the use of student evaluations in any part of that process.

Take my wife’s music department.  In addition to holding advanced degrees, the vast majority of its faculty have professional experience outside academia.  My wife has sung professionally on three continents, played lead roles in regional operas, and has directed operas for over twenty years.  The other voice professor left a banking career to become a successful tenor in national and regional opera before returning to school and obtaining a doctorate in voice.  The orchestra conductor is a violinist who has conducted in both the United States and China.  The band director spends his summers working with the Newport Jazz Festival.  The piano professor won the noted Tchaikovsky Award and continues to concertize worldwide.  The percussion professor performs professionally on the side and has several times been part of a group nominated for a Grammy.  This sort of expertise in a music department is not unusual but typical of many universities, and I could come up with similar kinds of expertise in other university departments as well.

Yet… on student evaluations, the students rate their professors on how effective the professors are at teaching, whether the curricula and content are relevant, whether the amount of work required in the course is excessive, etc.  My question/point is simple:  Exactly how can 18-to-24-year-old students have any real idea of any of the above?  They have no relevant experience or knowledge, and obtaining both is presumably why they’re in college.

Studies have shown that the strongest predictor of high student evaluations is ease: the professors with the easiest courses and the highest percentage of As get the best evaluations.  And, since evaluations have become near-universal, college-level grades have experienced massive inflation.  In short, student evaluations are merely student Happiness Indices – HI!, for short.

So why have the vast majority of colleges and universities come to rely on HI! in evaluating professors for tenure, promotion, and retention?  It has little to do with teaching effectiveness or the quality of education provided by a given professor and everything to do with popularity.  In the elite schools, student happiness is necessary in order to keep student retention rates up, because that’s one of the key factors used by U.S. News and World Report and other rating groups, and the higher the rating, the more attractive the college or university is to the most talented students – and those students are the most likely to be successful and eventually boost alumni contributions and the school’s reputation.  For state universities, it’s a more direct numbers game.  Drop-outs and transfers represent lost funds and inquiries from the state legislatures that provide some of the funding.  And departments that are too rigorous in their attempts to maintain or [heaven forbid] upgrade the quality of education often either lose students or fail to grow as fast as other departments, which results in fewer resources for those departments.  Just as Amazon’s reader reviews greatly boosted Amazon’s book sales, HI! boost the economics of colleges and universities.  Professors who try to uphold or raise standards face an uphill and usually unsuccessful battle – as evidenced by the growing percentage of college graduates who lack basic skills in writing and logical understanding.

Yet, all the while, the administrations talk about the necessity of HI! [sanctimoniously disguised as thoughtful student evaluations] in improving education, when it’s really about economics and their bottom line… and, by the way, in virtually every university and college across the country over the past twenty years, the percentage growth in administration size has dwarfed the growth in full-time, tenure-track, and tenured faculty.  But then, why would any administration really want to point out that perceived student happiness trumps academic excellence, every day and in every way, or that all those resources are going more and more to administrators, while faculties, especially at state universities, have fewer and fewer professors and more and more adjuncts and teaching assistants?

Newer… Not Always Better

Somehow people, especially students, don’t get it.  As the title above suggests, just because something is newer, it isn’t necessarily better – even in computers.  I have yet to find a commercial graphing program today that comes anywhere close to the Boeing Graph program of some twenty-five years ago.  And as techno-historians know, the Beta videotape system was far superior to the VHS system.

What’s interesting now, though, is that for some applications – such as viewing and critiquing student voice teachers – VHS tapes are far superior to DVDs.  Why?  Because the tapes can be paused at any given second, or rewound to a precise point.  Commercial DVDs and DVD players can’t manage that.  When a voice professor is studying vocal dynamics, that’s important.  Having to play through sections, even at high speed, takes time and often overshoots or undershoots the point in question.  Yet my wife’s pedagogy students complain that she uses “antiquated equipment” and makes them use old-fashioned tapes instead of new, hip digital disks.  What they don’t seem to understand is that “new” isn’t better if it doesn’t do what you want it to, especially when the “old” technology does.

This isn’t confined to the sometimes arcane area of vocal pedagogy, but applies across our techno-society.  Typewriters do a far better job of filling in forms – at least those not available on one’s own computer – than do computers.  Word Seven is a much faster word processor for plain text than the current version of Word [which I do have for the other applications], and the search capabilities of fifteen-year-old WordPerfect 6.0 still exceed those of any current version of Word.  As I noted in an earlier post, a keyed ignition is far more effective at turning off a runaway engine than a new high-tech keyless system, not to mention safer.  My “old” color ink-jet printer delivers a far cleaner and clearer image than does the new and improved laser printer, even if the laser is faster.  And in terms of overall medical effectiveness, there’s no solid proof that the newer NSAIDs offer any more benefit than does good old aspirin; although aspirin does have a slightly higher propensity to cause gastrointestinal bleeding, it also has many other benefits, such as reducing the risk of heart attacks and colon cancer – and it’s one of the oldest drugs around.  Certainly, the now-retired Concorde was far superior to any commercial aircraft now in service at getting passengers across the ocean quickly, and more than a few pilots still claim that the retired F-14 exceeds anything now flying for total air superiority.  Photographic film still provides a better image than does comparable digital photography.

Going back to recording equipment, if you happen to have a phonograph with a working needle, you can still play vinyl and other old records nearly a century old.  You certainly can’t do that with tapes even half that old, and a single light scratch effectively destroys the usefulness of a CD.  That’s fine for entertainment products that aren’t meant to outlast the current fad, but is it acceptable for recording data or information with a longer lifespan?

So why aren’t newer products always better?  The plain fact is that superiority is often far down the list of product qualities, usually behind cost of production/operation, novelty appeal, style, ease of operation, and profitability.  Another factor is that, especially in computer and communications products, manufacturers try to cram in as many applications as possible so as to appeal to the largest possible number of consumers.  The multiplicity of applications generally results in the overall degradation of the capability of all functions, but that degradation usually isn’t perceptible, or relevant, to most users.

This often results in cheaper products, but the downside is that those products often don’t suit the needs of professionals in specialized fields… and because it’s getting harder and harder to develop or produce products for users with particular needs – such as my professorial wife – those users have to make do with either improvised or older equipment… and risk being termed dinosaurs and out of date.

In the end… newer isn’t always better; it’s always only newer.

Complete Piracy at Last

It’s now official.  According to my editor and Macmillan, the parent company of Tor Books, every single one of my titles has now appeared somewhere as a pirated edition, in some form or another.  I’d almost like to claim this as a singular distinction.  I can’t.  Macmillan also believes that every single book it has published in recent years – something like the last three decades – has appeared in pirated editions of some sort.

I can’t say I’m surprised.  Every time I attempt to check up on how my books are doing, I discover website after website offering free downloads of everything I’ve ever written, including versions of titles that were never issued in electronic format and even titles whose offered editions haven’t been in print in more than twenty years.  I could spend every minute of every day trying to chase them down… without much success.  So I grit my teeth and bear it.

Ah… the wonders of the electronic age.

Coincidentally, and unsurprisingly, the sales of mass-market paperback fiction have also begun to decline.  This is likely due in part to the collapse of a section of the wholesale distribution system, but that shrinkage doesn’t account for most of it, because the decline is also occurring for titles and authors that were never distributed widely on a wholesale basis and whose books were largely sold only through bookstores.  This hasn’t been so obvious in the F&SF field, because, while the average paperback print run has decreased, the number of paperback titles has increased slightly; but according to knowledgeable editors, the decrease is happening pretty much across the board, and some very big-name authors – far bigger names than mine – have seen significant decreases in paperback book sales… and that’s without a corresponding increase in e-book sales.  Obviously, this isn’t true for every single author, and it’s impossible to determine for newly published authors because, if they haven’t published a book before, there are no previous sales for the new book to fall off from.

Despite all the talk, it appears that the popular mantra that information and entertainment need to be free remains in force for a small but significant fraction of former book buyers – even if such “free editions” reduce authors’ incomes and result in publishers eliminating yet more mid-list authors because declining sales have made them unprofitable, or even money-losing.

The other day I came across an outraged comment about the price of an e-book version of my own Imager’s Challenge.  The would-be reader complained that the electronic version was “only” a few dollars less than the hard-cover edition, especially since the paperback edition won’t be out for four months or so.  Somehow, it doesn’t seem to penetrate that while paper may be the single largest component of “physical” publishing costs, it still only amounts to something like 10-15% of the publisher’s cost of producing a book, i.e., a few dollars.  Even without paper, the other costs remain, and they’re substantial – and publishing remains, as I have written time and time again, a very low-margin business.  That’s why publishers really don’t want to cannibalize their hardcover revenues by undercutting the hardcover prices before the paperback version is on the shelves, especially given the decline in paperback sales.
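
To put rough numbers on that point – the figures below are purely illustrative assumptions, not actual publishing data – if paper makes up a fraction f of a publisher’s per-copy cost C, then eliminating the paper only lowers the cost floor to about (1 − f)C:

```latex
% Illustrative arithmetic only: f is the 10-15% figure cited above; C is an assumed cost.
\[
  C_{\text{e-book}} \approx (1 - f)\,C, \qquad f \approx 0.10\text{--}0.15
\]
\[
  \text{e.g., } C \approx \$20 \;\Longrightarrow\; \text{savings} \approx f\,C \approx \$2\text{--}\$3
\]
```

On those assumptions, an electronic edition priced only “a few dollars” below the hardcover is entirely consistent with the cost structure described above.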

There are many problems with piracy, including the fact that authors essentially get screwed, but the biggest one for readers seems to be overlooked.  The more piracy exists and the more widespread it becomes, the less choice readers will have in finding well-written, well-edited books, especially books that are not popular best-sellers.  The multi-million-selling popular books – and the “popcorn books,” as my wife calls them – will survive piracy.  The well-written books for smaller audiences won’t.  So readers could very well be left with dwindling choices… and scrambling through thousands of self-published e-volumes, most of which are and will be poorly written and unedited, in search of that rare “gem” – a good and different book that doesn’t appeal to everyone.

But… after all, information and entertainment want to be free.

The Instant Disaster Society?

Last Thursday, the stock market took its single biggest one-day drop in its history, somewhere slightly over a thousand points, as measured by the Dow Jones Industrial Average.  While the market recovered sixty to seventy percent of that drop before Thursday’s close, the financial damage across the world was not inconsiderable.  Did this happen because Greece is still close to a financial meltdown, or because economic indicators were weak?   No… the precipitating factor may have been a typographical error – a trader reportedly entered a sell order for $16 BILLION of exchange futures instead of a mere $16 million – and there are a number of other possibilities, but the bottom line [literally] was that, whatever the cause, all the automated and computerized trading engines immediately reacted – and the market plummeted.  Later, the NASDAQ canceled a number of trades, but that was long after the damage had been done.

From the Terminator movies onward, there have been horror stories about computers unleashing doomsday, but the vast majority of these have concerned nuclear and military scenarios – not world economic collapse.  While I don’t fall into the “watch out for those evil computers” camp, I have always been, and remain, greatly concerned about the growth and uses of so-called “expert systems” in all areas of society, largely because computers are the perfect servants – they do exactly what their programming tells them to do, even if the result will be disastrous.

For example, Toyota is now having all sorts of problems with runaway acceleration.  When this first occurred, my question was simple enough:  Why didn’t the drivers either shift into neutral or turn off the ignition?  It turns out that at least some of them may not have been able to, not quickly, because they had keyless ignition systems.  Yet the automakers are talking about cars that will be not only keyless but also totally electronic – that is, even the shifting will be electronic rather than physical/manual.  And if the electronics malfunction, exactly how will a driver be able to quickly “kill” the system?  Let’s think that one over for a bit.

President Obama and the health care reformers want all medical records to be electronically available, both for cost-saving purposes and for ease of access.  The problem with that kind of ease of access is that it also offers greater ease of hacking and tampering, and, I’m sorry, no system that offers the kind of ease the “reformers” are proposing can be made hacker-proof.  The access and security requirements are mutually antithetical.  Years ago, Sandra Bullock starred in a movie called “The Net,” and while many of the computer references are outdated and almost laughable, one aspect of the movie was not – and it remains all too plausible.  At least two characters die because their medical records are hacked and changed.  In addition, national databases are manipulated and identities switched.  Now… the computer experts will say that these sorts of things can be guarded against… and they can be, but will they be?  Security costs money, and good security costs a lot of money, and people use computers to cut costs, not to increase them.

As far as economics go, now that an “accident” has shown just how vulnerable securities markets are to inadvertent manipulation, how long before some terrorist or other extremist group figures out how to duplicate the effect?  And then all the programmed trading computers will blindly execute their trades… and we’ll get an even bigger disaster.

Why?

Because we’ve become an instant-reaction society, and electronic systems magnify the effect of both system glitches and human error.  Those programmed securities-trading computers were designed to take advantage of market fluctuations on a microsecond if not a nanosecond basis.  For better or worse, they make decisions faster than any human trader possibly could – and they do so based on data that may or may not be accurate.
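
As a rough illustration of that dynamic – this is a toy sketch with made-up thresholds and price impacts, not a description of any actual exchange or trading system – a layer of automated stop orders can turn one oversized sell order into a cascade, while a correctly sized order barely registers:

```python
# Toy model of a "fat-finger" cascade.  All parameters are hypothetical and
# exist only to show how rule-following programs can amplify a bad input.

def run_cascade(start_price, shock_fraction, stop_levels, sale_impact=0.006):
    """Apply an erroneous-sell shock, then let automated stop orders fire.

    Each stop order sells the moment the price touches its level, and every
    forced sale pushes the price down a little more, which can trip the next
    stop below it.
    """
    price = start_price * (1 - shock_fraction)       # the initial erroneous order
    for level in sorted(stop_levels, reverse=True):  # the highest stops react first
        if price <= level:
            price *= (1 - sale_impact)               # each triggered sale adds pressure
    return price


# Hypothetical market: automated stops clustered every half percent below 10,000.
stops = [10_000 * (1 - 0.005 * k) for k in range(1, 81)]

small = run_cascade(10_000, 0.00002, stops)  # a correctly sized order: no stop is tripped
large = run_cascade(10_000, 0.02, stops)     # a thousand-fold larger order: stops cascade
print(f"small order: {small:,.1f}   oversized order: {large:,.1f}")
```

The point isn’t the particular numbers; it’s that every program in the chain behaves exactly as instructed, and the instructions never ask whether the triggering order made any sense.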

We’re seeing the same thing across society.  Today’s young people are being trained to react rather than to think.  Instead of letters or even email, they use Twitter.  Instead of bridge or old-fashioned board games like Risk or Diplomacy, they prefer fast-acting, instant-reaction videogames with a premium on speed.  More and more of the younger generation cannot form or express complex concepts, even as technology is taking us into an ever more complex world.  Business places a greater and greater emphasis on short-term gain and profits.  People want instant satisfaction.

The societal response to the increase in speed across society is to use computers and electronic systems to a greater and greater extent – but, as happened last Thursday, what happens when one’s faithful and obedient electronic servants do exactly what their inputs dictate that they’re supposed to do – and the result is disaster?

Do we really want – and can our society survive – a world where a few high-speed mistakes can destroy more than a trillion dollars worth of assets in seconds… or do even worse damage than that?  Not to mention one where thinking is passé… or for the old fogies of an earlier generation… and where all that matters is instant [and shallow] communications and short-term results that may well result in long-term disaster.

Stupid Questions/Bureaucratic Catch-22s

A few weeks ago, the Canadian science fiction writer Peter Watts was convicted of “assaulting” U.S. border guards because he failed to listen/heed instructions to remain in his car when he was pulled over for a search at a border crossing.  Although the guards’ testimony that Watts had physically assaulted them was refuted, Watts was found guilty because, under the law, failure to follow instructions constituted “assault,” although the only action he took was to be stupid enough to get out of his car when he was told not to.  While he was fined and given a suspended sentence, as a now-convicted felon, Watts will henceforth be denied entry to the United States, and, if he were careless enough to sneak in and were discovered, he’d be in much more serious trouble.  While more than a few readers and supporters were outraged at Watts’s treatment, Watts and others were even more outraged at a law that classes “failure to obey” the same as assault.

Unfortunately, this sort of legal trickery and legerdemain has a long and less than honorable history in the United States, and probably elsewhere in the world.  The American justice establishment has found a number of indirect ways to place people in custody and otherwise convict and sentence them.  Perhaps the most well-known was the conviction of the gangster Al Capone, not for the murders, fraud, and mayhem he perpetrated, but for, of all things, income tax evasion.

In 1940 Congress passed, and the president signed, the Alien Registration Act, otherwise known as the Smith Act, which made it illegal, among other things, to belong to any organization that advocated the violent overthrow of the U.S. government, or even to help anyone who belonged to such an organization.  In effect, that meant the government could legally prosecute anyone who had ever been a member of the Communist Party, or anyone who had ever helped a member of that party with any party-related activities, no matter how trivial.  Initially, the Act was used only against those who had actually been involved in such activities, but in the late 1940s the FBI, Senator Joe McCarthy, and the House Committee on Un-American Activities charged thousands of Americans with violating the provisions of the Smith Act.  If someone admitted helping another who had belonged to the Communist Party, they could theoretically spend up to twenty years in jail.  If they denied it and proof was found otherwise, they were guilty of perjury and could also go to jail.  Eventually, the Supreme Court declared many of the more far-reaching interpretations and prosecutions under the law unconstitutional, but not before hundreds of people had been sent to jail or had their lives and livelihoods destroyed, either directly or indirectly, for what often amounted to association with friends and business associates.

Flash to the present.  According to the Salt Lake Tribune, the U.S. Customs and Border Protection Form No. 1651-0111 asks the following questions:

Have you ever been or are you now involved in espionage or sabotage, or in terrorist activities, or genocide, or between 1933 and 1945, were involved in persecutions associated with Nazi Germany or its allies?

Are you seeking entry to engage in criminal or immoral activities?

Now… it’s a safe bet that no one will ever check the “yes” box following either one of these questions, and many people will ask why the government bothers with asking such stupid questions.

The government knows no one will ever admit to either set of acts or intentions.  But… if anyone is ever caught even doing something immoral, not necessarily illegal, and the prosecutors can’t come up with as much evidence as they’d like to lock that someone away, they can dig out the handy-dandy form and charge the “entrant” in question with perjury, etc.  It’s effectively a form of after-the-fact bureaucratic insurance.

Personally, I can’t say that it exactly reinforces my confidence in American law enforcement’s ability to find and prosecute the worst offenders when every immigrant who ever shoplifted or visited an escort service could be locked away.  But then, they did lock up Big Al, even if they couldn’t prove a thing against him on the worst crimes he ordered or committed.  So… maybe I shouldn’t complain.  Still… Peter Watts is now a felon for what amounts to stupidity, or at the least a lack of common sense, although he never threatened anyone or lifted a hand against either guard.

Conservative Suicide/Stupidity?

As many of you know, I live in Utah, and as most of you may not know, I was the Legislative Director for William Armstrong, one of the most conservative congressmen and senators of his time, as well as the staff director for Ken Kramer, his successor in the House and also one of the most conservative congressmen – not to mention serving as Director of Legislation and Congressional Relations for the U.S. EPA during the first Reagan administration.  These days, however, even as a registered Republican, I seldom vote for Republicans, and what follows may explain one of the reasons why.

Utah’s two U.S. senators are Bob Bennett and Orrin Hatch, both conservative Republicans, and according to the various political ratings, they’re among the most conservative in the Senate.  BUT… they’re not “perfect,” with Bennett receiving “only” an 84% rating and Hatch only an 88% rating from the ultra-conservative American Conservative Union.  According to recent polls, over 70% of the GOP delegates to the Utah state Republican convention believe that both Hatch and Bennett should be replaced because they’re not conservative enough.  Bennett is up for re-election and probably will not even win his party’s nomination.  He might not even survive this week’s party convention.

Now… although I certainly don’t believe in or support many of their policies and votes, I can see where others might… and might wish for all their votes to follow “conservative” principles – but to throw out a three-term conservative incumbent over such ratings?  Does it really make any sense?

No… it doesn’t, and that’s not because I’m a great fan of either senator.  I’m not.  But here’s why replacing Bennett – or Hatch – is totally against the so-called conservatives’ own best interests.

First, the ratings are based on “political litmus test” votes, often on issues that signal ideology rather than on bills that might actually make a difference.  Second, the “difference” between Bob Bennett’s 84% rating and a perfect 100% rating represents all of four votes taken over the entire year of 2009.  Third, seniority in the Senate represents power.  It determines who chairs or who is the ranking minority member on every committee and subcommittee, and that helps determine not only what legislation is considered, but when it’s considered and what’s actually included in it.  The Senate is an extremely complex body, and it takes years even to truly understand its workings.  To toss out an incumbent who is predominantly conservative, but not “perfectly” conservative, in favor of a challenger who may not even win an election – and who, if he does, has little knowledge of the Senate and less power – is not an act of conscience, but one of stupidity.  Fourth, no matter how conservative [or how liberal] a senator is, each senator is restricted by the rules of the body to voting on what is presented.  In the vast, vast majority of cases, that means that the vote of an “imperfect” conservative can be no different from that of a “perfect” conservative.

I can certainly see, and have no problem with, conservatives targeting a senator who seldom or never votes in what they perceive as their interest, but to remove a sitting senator with power and influence who votes “your way” 80-90% of the time in favor of someone who may not win the election, and who will have little understanding or power if he does… that, I have to say, is less than rational.

In the interests of fairness, I will point out that the left wing of the Democratic Party is also guilty of the same sort of insane quest for ideological purity, and that the majority of Americans are fed up with these sorts of extremist shenanigans.  But in the current political climate, where most Americans are fed up with Congress, they may well vote to throw whoever’s in office right out of office… along with Bob Bennett.  And then, next year, when legislative matters are even worse from their point of view… they’ll be even angrier… even though almost none of the voters will admit that everyone wants more from government, in one way or another, than anyone wants to pay for – except for those on the extreme, extreme right, and they want no government at all… and that’s a recipe for anarchy in a world as technologically and politically complex as ours.

Reality or Perception?

The growth of high technology, particularly in the areas of electronics, entertainment, and communications, is giving new meaning to the question of what is “real.”  As part of that question, there’s also the issue of how on-line/perceptual requirements are both influencing and simultaneously diverging from physical-world requirements.

One of the most obvious impacts of the instant communications capabilities embodied in cell-phones, netbooks, laptops, and desktops is the proliferation of emails and text messages.  As I’ve noted before, there’s a significant downside to this in terms of real-world productivity because, more and more, workers at all levels are being required to provide status reports and replies on an almost continual basis.  This constant diversion encourages so-called “multitasking,” which studies show actually takes more time and creates more errors than handling tasks sequentially – as if anyone in today’s information society is ever allowed to handle tasks sequentially and efficiently.

In addition, anyone who has the nerve or the foolhardiness to point this out, or to refrain from texting and on-line social networking, is considered out of touch, anti-technology, and clearly non-productive because of his or her refusal to “use the latest technology,” even if their physical productivity far exceeds that of the “well-connected.”  No matter that the individual has a cellphone and laptop with full internet interconnectivity and can use them to obtain real physical results, often faster than those who are immersed in social connectivity, such individuals are “dinosaurs.”

In addition, the temptations of the electronic world are such, and have created enough concern, that some companies have tried to take steps to limit what on-line activities are possible on corporate nets.

The real physical dangers of this interconnectivity are minimized, if not overlooked.  There have been a number of fatalities, even here in Utah, when individuals locked into various forms of electronic reality, from iPods to cellphones, have stepped in front of traffic and trains, totally unaware of their physical surroundings.  Given the growing intensity of the “electronic world,” I can’t help but believe such fatalities will increase.

Yet, in another sense, the electronic world is also entering the physical world.  For example, thousands and thousands of young Asian men and women labor at various on-line games to amass virtual goods that they can effectively trade for physical-world currency and goods.  And it works the other way as well: there have already been murders over what happened in “virtual reality” communities.

The allure of electronic worlds and connections is so strong that hundreds of thousands, if not millions, of students and other young people walk past those with whom they take classes and even work, ignoring their physical presence, for an electronic linkage that might have seemed ephemeral to an earlier generation, but whose pull is far stronger than physical reality…

Does this divergence between the physical reality and requirements of society and the perceptual “reality” and perceived requirements of society herald a “new age,” or the singularity, as some have called it, or is it the beginning of the erosion of culture and society?

Important Beyond the Words

Despite all the “emphasis” on improving education and on assessment testing in primary and secondary schools, education is anything but improving in the United States… and there’s a very good reason why.  Politicians, educators, and everyday parents have forgotten one of the most special attributes that makes us human and that lies behind our success as a species – language, in particular written language.

An ever-increasing percentage of younger Americans, well over a majority of those under twenty, cannot write a coherent paragraph, nor can they synthesize complex written information, either verbally or in writing, despite all the testing, all the supposed emphasis on “education.”  So far, this has not proved to be an obvious detriment to U.S. science, business, and culture, but that is because society, any society, has always been controlled by a minority.  The past strength of U.S. society has been that it allowed a far greater percentage of “have-nots” to rise into that minority, and that rise was enabled by an educational system that emphasized reading, writing, and arithmetic – the three “Rs.”   While mastery of more than those three basics is necessary for success in a higher-technology society, ignoring absolute mastery in those subjects for the sake of knowledge in others is a formula for societal collapse, because those who can succeed will be limited to those whose parents can obtain an education for their children that does require mastery of those fundamental basics, particularly of writing.  And because in each generation, there are those who will not or cannot truly master such basics, either through lack of ability or lack of dedication, the number of those able to control society will become ever more limited and a greater and greater percentage of society’s assets will become controlled by fewer and fewer, who, as their numbers dwindle, find their abilities also diminish.  In time, if such a trend is not changed, social unrest builds and usually results in revolution.  We’re already seeing this in the United States, particularly in dramatically increased income inequality, but everyone seems to focus on the symptoms rather than the cause.

Why writing, you might ask.  Is that just because I’m a writer, and I think that mastery of my specialty is paramount, just as those in other occupations might feel the same about their area of expertise?  No… it’s because writing is the very foundation upon which complex technological societies rest.

The most important aspect of written language is not that it records what has been spoken, or what has occurred, or that it documents how to build devices, but that it requires a logical construct to be intelligible, let alone useful.  Good writing requires logic, whether in structuring a sentence, a paragraph, or a book.  It requires the ability to synthesize and to create from other information.  In essence, mastering writing requires organizing one’s thoughts and one’s mind.  All the scattered facts and bits of information required by short-answer educational testing are useless unless they can be understood as part of a coherent whole.  That is why the best educational institutions have always required long essay tests, usually written under time pressure.  In effect, such tests both develop and measure the ability to think.

Yet the societal response to the lack of writing, and thus thinking, ability has been to institute “remedial” writing courses at the college entry level.  This is worse than useless, and a waste of time and resources.  Basic linguistic and writing ability, as I have noted before, is roughly set by puberty.  If someone cannot write and organize his or her thoughts by then, he or she will effectively always be limited.  If we as a society want to reverse the trend of social and economic polarization, as well as improve the abilities of the younger generations, effective writing skills have to be developed at the primary and early secondary school levels.  Later than that is just too late.  Just as you can’t learn to be a concert violinist or pianist – or a professional athlete – beginning at age eighteen, the same is true for developing writing and logic skills.

And because, in a very real sense, a civilization is its written language, our inability to address this issue effectively may assure the fall of our culture.

The Failure to Judge… Wisely

In last Sunday’s education supplement to The New York Times, there was a table showing a sampling of U.S. colleges and universities and the distribution of grades “earned” by students, as well as the change from ten years earlier – and in a number of cases, the change from twenty or forty or fifty years ago.  Not surprisingly to me, at virtually every university over 35% of all grades granted were As.  Most were over 40%, and at a number, over half of all grades were As.  This represents roughly a 10% increase over the past ten years, but even more important, it represents more than a doubling, and in some cases a tripling, of the percentage of As given compared with forty or fifty years ago.  Are the teachers two to three times better?  Are the students?  Let us just say that I have my doubts.

But before anyone goes off and blames the more benighted university professors, let’s look at society as a whole.  Almost a year ago, or perhaps longer, Alex Ross, the music critic for The New Yorker, pointed out that almost every Broadway show now gets a standing ovation, whereas a standing ovation was relatively rare some fifty years ago.  When I was a grade-schooler, there were exactly four college football bowl games, on New Year’s Eve or New Year’s Day, while today there are something like thirty spread over almost four weeks.  Until something like half a century ago, there weren’t any “divisions” in baseball.  The regular-season champion of the American League played the regular-season champion of the National League.  It’s almost as though we, as a society, can’t accept the judgment of continual success over time.

And have you noticed that every competition for children has almost as many prizes as competitors – or so it seems?  Likewise, there’s tremendous pressure to do away with grades and/or test scores in determining who gets into what college.  And once students are in college, they get to judge their professors on how well they’re being taught – as if any 18-to-21-year-old truly has a good and full understanding of what they need to learn [admittedly, some professors don’t, but the students aren’t the ones who should be determining this].  Then we have the global warming debate, where politicians and people with absolutely no knowledge and understanding of the mechanics and physics of climate insist that their views are equal to those of scientists who’ve spent a lifetime studying climate.  And, of course, there are the intelligent design believers and creationists who are using politics to dictate science curricula in schools, based on their beliefs, rather than on what can be proven.

And there’s the economy and business and education, where decisions are made essentially on the basis of short-term profit figures rather than on the longer term… and as a result, as we have seen, the economy, business, and education have all suffered greatly.

I could list page after page of similar examples and instances, but these all point out an inherent flaw in current societies, particularly in western European societies, and especially in U.S. society.  As a society, we’re unwilling or unable, or both, to make intelligent decisions based on facts and experience.

Whether it’s because of political pressure, the threat of litigation, the fear of being declared discriminatory, or the honest but misguided belief that fostering self-esteem before establishing ability creates better students, the fact is that we don’t honestly evaluate our students.  We don’t judge them accurately.  Forty or fifty percent do not deserve As, not when less than thirty percent of college graduates can write a complex paragraph in correct English and follow the logic [or lack of it] in a newspaper editorial.

We clearly don’t judge our economic leaders, or our financial industry leaders, or hold them to effective standards, not when we pay them tens, if not hundreds, of millions of dollars to implement financial instruments that nearly destroyed our economy.  We judge those running for political office equally poorly, electing them on their professed beliefs rather than on either their willingness to solve problems for the good of the entire country or their willingness to compromise to resolve problems – despite the fact that no political system can survive for long without compromise.

Nor are we, again as a society, particularly accurate in assessing and rewarding artistic accomplishments, or the lack of them, when rap music, American Idol, and “reality” shows draw far more in financial reward and audiences than do old-fashioned theatre, musical theatre [where you had to be able to compose and sing real melodies], opera, and classical music, and where hyped-up graphic novels are the fastest-growing form of “print” fiction.  It’s one thing to enjoy entertainment that’s less than excellent in terms of quality; it’s another to proclaim it excellent, and the ability to differentiate between popularity and technical and professional excellence is, again, a matter of good judgment.

In fact, “judgment” is becoming the new “discrimination.”  Once, to discriminate meant to choose wisely;  now it means to be horribly biased.  The latest evolution in our current “newspeak” appears to be that to judge wisely on the basis of facts is a form of bias and oppression.  It’s fine to surrender judgment to the marketplace, where dollars alone decide, or to politics, where those who are most successful in pandering for votes decide… but to decide based on solid accomplishment – or the lack thereof, as in the case of students who can’t read or write or think or in the case of financiers who lose trillions of dollars – that’s somehow old-fashioned, biased, or unfair.

Whatever happened to judging wisely?

Jeremiads

Throughout recorded history runs a thread in which an older and often distinguished figure rants about the failures of the young, how they fail to learn the lessons of their forebears, and how this will lead to the downfall of society.  While many cite Plato and his words about the coming failure of Greek youth – because they fail to learn music and poetry and thus cannot distinguish between the values of the ancient levels of wisdom ascribed to gold, silver, and bronze – such warnings precede the Greeks and follow them through Cicero and others.  They also occur in cultures other than western European-descended societies.

At the time of such warnings, as in the case of Alcibiades and Socrates, there are generally two reactions, one usually from the young and one usually from the older members of society.  One is: “We’re still here; what’s the problem?  You don’t understand that we’re different.”  The other is: “The young never understand until it’s too late.”

I’ve heard my share of speeches and talks that debunk such words of warning, and generally these “debunkers” point out that Socrates and Cicero and all the others issued their warnings, and yet today we live at the peak of human civilization and technology.  And we do… but that’s not the point.

Within a generation of the time of Plato’s reports of Socrates’ warnings, Greece was spiraling down into internecine warfare from which it, as a civilization, never fully recovered.  The same was true of Cicero, but the process was far more prolonged in the case of the Roman Empire, although the Roman Republic, which laid the foundation of the empire, was essentially dead at the time of Cicero’s execution/murder.

The patterns of rise and fall, rise and fall, of cultures and civilizations permeate human history, and so far, no civilization has escaped such a fate, although some have lasted far longer than others.

There’s an American saying that was popular a generation or so ago – “From shirt-sleeves to shirt-sleeves in four generations.”  What it meant was that a man [because it was a society even more male-dominated then] worked hard to build up the foundation for his children, and then the next generation turned that foundation into wealth and success, and the third generation spent the wealth, and those of the fourth generation were impoverished and back in shirt-sleeves.

To build anything requires effort, and concentrated effort requires dedication and expertise in something, which in turn requires concentration and knowledge.  Building also requires saving in some form or another, and that means forgoing consumption and immediate satisfaction.  In societal terms, that requires the “old virtues.”  When consumption and pleasure outweigh those virtues, a society declines, either gradually or precipitously.  Now… some societies, such as that of Great Britain, have at times pulled themselves back from the total loss of those “virtues.”

But, in the end, the lure of pleasure and consumption has felled, directly or indirectly, every civilization.  The only question appears to be not whether this will happen, but when.

So… don’t be cavalier about those doddering old fogies who predict that the excess of pleasure-seeking and self-interest will doom society.  They’ll be right… sooner or later.

The Continued Postal Service Sell-Out

Once, many, many years ago, I was the legislative director for a U.S. Congressman who served on the Appropriations subcommittee overseeing the U.S. Postal Service.  Trying to make sense of the Postal Service budget – and its twisted economic rationalizations for its pricing structure – led to two long and frustrating years, and the only reason I didn’t lose my hair earlier than I eventually did was that the USPS comprised only part of my legislative duties.

The latest cry for cuts and service reductions may look economically reasonable, but it’s not, because the USPS has been employing the wrong costing model for over forty years.  That model structures costs first and primarily around first-class mail, then treats bulk mail and publications as marginal costs and sets their rates, especially for bulk mail, on the basis of those marginal costs.

Why is this the wrong model?

First, because it wasn’t what the founding fathers had in mind, and second, because it’s lousy economics.

Let’s go back to the beginning.  Article I, Section 8, Clause 7 of the U.S. Constitution specifically grants Congress the power “to establish Post Offices and Post roads.”  The idea behind the original Post Office was to further communications and the dissemination of ideas.  There was a debate over whether the Post Office should be allowed to carry newspapers, and a number of later Supreme Court decisions dealt with the limits on the postal power, especially with regard to free expression, with the Court declaring, in effect, that the First Amendment trumped the Post Office’s power to restrict what could be mailed.  For the entire first century after its establishment, and even for decades after that, the idea behind the Post Office was open communication, particularly of ideas.

The idea of bulk mail wasn’t even something the founding fathers considered; it could be said to have begun with the Montgomery Ward catalogue in the 1870s, although the Post Office didn’t establish lower bulk-mail rates until 1928.  As a result, effectively until after World War II, massive direct bulk mailings were comparatively limited, and the majority of Post Office revenues came from first-class mail.  Today, that is no longer true.  Bulk mail makes up the vast majority of the U.S. Postal Service’s deliveries, and yet it’s largely still charged as if it were a marginal cost – and thus the government and first-class mail users are, in effect, subsidizing advertising mail sent to all Americans.  Yet, rather than charging advertisers what it truly costs to ship their products, the USPS is proposing cutting mail deliveries – and the reason it’s talking about cutting Saturday delivery is that – guess what? – Saturday is the lightest delivery day for bulk mail.
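
A toy comparison makes the point – every number below is invented for illustration and is not USPS data – price bulk mail at its marginal handling cost, and the fixed cost of the delivery network lands almost entirely on first-class mail, even when bulk mail is most of the volume:

```python
# Hypothetical figures only, chosen to illustrate the costing argument,
# not to reflect actual USPS budgets, volumes, or rates.

FIXED_NETWORK_COST = 30.0                              # carriers, routes, facilities ($ billions, assumed)
MARGINAL_COST = {"first_class": 0.05, "bulk": 0.03}    # per-piece handling cost ($, assumed)
VOLUME = {"first_class": 80.0, "bulk": 170.0}          # pieces per year (billions, assumed)


def marginal_cost_pricing():
    """Bulk pays only its marginal cost; first class absorbs the whole network."""
    return {
        "bulk": MARGINAL_COST["bulk"],
        "first_class": MARGINAL_COST["first_class"]
        + FIXED_NETWORK_COST / VOLUME["first_class"],
    }


def fully_allocated_pricing():
    """The fixed network cost is spread over every piece that uses the network."""
    shared = FIXED_NETWORK_COST / sum(VOLUME.values())
    return {mail_class: MARGINAL_COST[mail_class] + shared for mail_class in VOLUME}


print("marginal-cost model:  ", marginal_cost_pricing())
print("fully allocated model:", fully_allocated_pricing())
```

Under the first model the bulk rate looks cheap only because the network it rides on is billed to someone else; under the second, every piece that uses the carriers and the routes pays a share of keeping them running.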

I don’t know about any of you, but every day we get catalogues from companies we’ve never patronized and never will.  We must throw away close to twenty pounds of unwanted bulk mail paper every week – and we’re paying higher postage costs and sending tax dollars to the USPS to subsidize even more of what we don’t want.

Wouldn’t it just be better to charge the advertisers what it really costs to maintain an establishment that operates to their benefit?  Or has the direct mail industry so captured the Postal Service and the Congress that the rest of us will suffer rather than let that industry pay the true costs of the bulk mail designed to increase its bottom line at our expense?

Being A Realist

Every so often, I come head-to-head with an unsettling fact – being a “realistic” novelist hurts my sales and sometimes even upsets my editors.  What do I mean?   Well… after nearly twenty years as an economist, analyst, administrator, and political appointee in Washington, I know that all too many novelistic twists and events, such as those portrayed by Dan Brown, are not only absurd but often physically and/or politically impossible.  That’s one of the reasons I don’t write political “thrillers,” my one attempt at such having proved dramatically that the vast majority of readers definitely don’t want their realism close to home.

Unfortunately, an even greater number of readers don’t want realism to get in the way, or not too much in the way, in science fiction or fantasy, and my editors are most sensitive to this.  This can lead to “discussions” in which they want more direct action, while I’m trying to find a way to make the more realistic indirect events more action-oriented without totally compromising what I have learned about human nature, institutions, and human political motivations.  For example, there are reasons why high-tech societies tend to be lower-violence societies, but the principal one is very simple.  High-tech weaponry is very destructive, and societies where it is used widely almost invariably don’t stay high-tech.  In addition, violence is expensive, and successful societies find ways to satisfy majority requirements without extensive violence [selective and targeted violence is another question].

Another factor is that people seeking power and fortune wish to be able to enjoy both after they obtain them – and you can’t enjoy either for long if you’ve destroyed the society in order to be in control.  This does not apply to fanatics, no matter what such people claim; the vast majority of fanatics don’t wish to preserve society, but to destroy – or “simplify” – it, because it represents values antithetical to theirs.

Now… this sort of understanding means that there’s a lot less “action” and destruction in my books than in most other books dealing with roughly similar situations and societies, and that people actually consider factors like costs and how to pay for things.  There are also more meals and meetings – as I’m often reminded, and not always in a positive manner – but meals and meetings are where most policies and actions are decided in human society.  But, I’m reminded by my editors, they slow things down.

Yes… and no…

In my work, there’s almost always plenty of action at the end, and some have even claimed that there’s too much at the end and not enough along the way.  But… that’s life.  World War II, in all its combat phases, lasted slightly less than six years.  The economics, politics, meetings, meals, treaties, elections, usurpations of elections, and all the other factors leading up to the conflict lasted more than twenty years, and the days of actual fighting, for any given soldier, were far fewer than that.  My flight instructors offered a simple observation on being a naval aviator:  “Flying is ninety-nine percent boredom and one percent sheer terror.”  Or maybe it was ninety-eight percent boredom, one percent exhilaration, and one percent terror.

On a smaller, political scale, the final version of Obama’s health care bill was passed in days – after a year of ongoing politicking, meetings, non-meetings, posturing, special elections, etc.   The same is true in athletics – the amount of time spent in training, pre-season, practices, and the like dwarfs the time of the actual contest, and in football, for example, where a theoretical hour of playing time takes closer to three hours, there are fewer than fifteen minutes in which players are actually in contact or potential contact.

Obviously, fiction is to entertain, not to replicate reality directly, because few read to get what amounts to a rehash of what is now a very stressful life for many, but the question every writer faces is how close he or she hews to the underlying realities of how human beings interact with others and with their societies.  For better or worse, I like my books to present at least somewhat plausible situations facing the characters, given, of course, the underlying technical or magical assumptions.

Often my editors press for less realism, or at least a greater minimization of the presentation of that realism.  I press back.  Sometimes, it’s not pretty. So far, at least, we’re still talking to each other.

So far…

Pondering Some “Universals”

When a science fiction writer starts pondering the basics of science, especially outside the confines of a story or novel, the results can be ugly.  But… there’s this question, and a lot of others that arise from it, or cluster around it… or something.

Does light really maintain a constant speed in a vacuum and away from massive gravitational forces?

Most people, I’m afraid, would respond by asking, “Does it matter?”  or “Who cares?”

Physicists generally insist that it does, and most physics discussions deal with the issue by saying that photons have zero rest mass [and if I’m oversimplifying grossly, I’m certain some physicist will correct me], which allows them to travel universally and invariably [again in a vacuum, etc.] at the speed of light – which, if one thinks about it, is a tautology.  Of course, this is also theoretical, because so far as I can determine, no one has ever been able to observe a photon “at rest.”

BUT… here’s the rub, as far as I’m concerned.  Photons are/carry energy.  There’s no doubt about that.  The earth is warmed by the photonic flow we call sunlight.  Lasers produce coherent photonic flow strong enough to cut metal or perform delicate eye surgery.

Second, if current evidence is being interpreted correctly, black holes are massive enough to stop the flow of light.  Now… if photons have no mass, how could that happen?  The current interpretation is that the massive gravitational force stops the emission of light, which suggests that photons do have mass, if only an infinitesimal and currently unmeasurable one.
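
For what it’s worth, the textbook account – and I’m simplifying here, so any physicist is welcome to correct me – rests on the relativistic relation between energy, momentum, and rest mass:

E² = (pc)² + (mc²)², so that with m = 0, a photon still carries energy E = pc = hν  [where p is momentum, h is Planck’s constant, and ν is the photon’s frequency]

In that picture, a photon carries energy and momentum without any rest mass at all, and a black hole traps light by curving spacetime itself rather than by pulling on a mass.  That’s the standard answer, even if it doesn’t make the question feel any less slippery to me.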

These lead to another disturbing [at least for me] question.  Why isn’t the universe “running down”?  Don’t jump on me yet.  A great number of noted astronomers have asserted that such is indeed happening – but they’re talking about that on the macro level, that is, the entropy of energy and matter that will eventually lead to a universe where matter and energy are all at the same level everywhere, without all those nice gradients that make up comparative vacuum and stars and planets and hot and cold.  I’m thinking about winding down on the level of quarks and leptons, so to speak.

Current quantum mechanics seems to indicate that what we think of as “matter” is really a form of structured energy, and those various structures determine the physical and chemical properties of elements and more complex forms of matter.  And that leads to my problem.  Every form of energy that human beings use or harness “runs down” unless it is replenished with more energy from an outside source.

Yet the universe has been in existence for something like fourteen billion years, and current scientific theory tacitly assumes that all these quarks and leptons – and photons – have the same innate internal energy levels today as they did all those billions of years ago.

The scientific quest for a “theory of everything” tacitly assumes, as several noted scientists have already observed, unchanging universal scientific principles, such as an unvarying weak force on the leptonic level and a constant speed of light over time.  On a practical basis, I have to question that.  Nothing seems to stay exactly the same in the small part of the universe which I inhabit, but am I merely generalizing on the basis of my observations and anecdotal experience?

All that leads to the last question.  If those internal energies of quarks and leptons and photons are all declining at the same rate, how would we even know?  Could it be that those “incredible speeds” at which distant galaxies appear to be moving are more an artifact of changes in the speed of light?  Or in the infinitesimal decline of the very energy levels of all quarks, etc., in our universe?

Could our universe be running down from the inside out without our even knowing it?

The Absolute Need for Mastery of the Boring

A few weeks or so ago, I watched two college teams play for the right to go to the NCAA tournament.  One team, down twenty points at halftime, rallied behind the sensational play of a single star and pulled out the victory by one point in the last seconds.  That was the way television commentators and the print media reported it.  I saw it very differently.  One of the starting guards for the losing team missed seven out of twelve free throws, two of them in the last fifteen seconds.  This wasn’t a fluke or a bad day for that player – his free-throw percentage for the entire season was just 40%.  And just how many games in the NCAA tournament have been lost by “bad” free-throw shooting?  Or won by good free-throw shooting?  More than just a handful.

Good free-throw shooting is clearly important to basketball success.  Just look at the NBA.  While the free-throw shooting average for NCAA players is 69%, this year’s NBA average is 77%, and 98% of NBA starters have free-throw percentages above 60%, with 75% of those starters making more than three-quarters of their free throws.

To my mind, this is a good example of what lies behind excellence – the ability to master even the most boring aspect of one’s profession. Another point associated with this is that simply knowing what a free throw is and when it is employed isn’t the same as being able to do it.  It requires practice – lots of practice. Shooting free throws day after day and improving technique is not exciting; it’s boring.  But the fact that there are very, very few poor free-throw shooters in the NBA is a good indication that mastery of the boring pays off.

The same is true in writing.  Learning grammar and even spelling [because spell-checkers don’t catch everything, by any means] is also boring and time-consuming, and there are some writers who are, shall I say, slightly grammatically challenged, but most writers know their grammar.  They have to, because editors usually don’t have the time or the interest to clean up bad writing.  It also gets boring to proofread page after page of what you’ve written, from the original manuscript through the copy-edited manuscript, the hardcover galleys, the paperback galleys, and so on… but it’s necessary.

Learning how to fly, which most people believe is exciting, consists of a great deal of boredom, from learning to follow checklists to the absolute letter, to practicing and practicing landings, take-offs, and emergency procedures hour after hour, day after day until they’re second nature.  All that practice is tedious… and absolutely necessary.

My opera-director wife finds it more difficult each year to get students to memorize their lines and music – because it’s boring – but you can’t sing opera or musical theatre if you don’t know your music and lines.

I could go on and on, detailing the necessary “boring” requirements of occupation after occupation, but the point behind all this is that our media, our educational system, and all too many parents have instilled a message that learning needs to be interesting and fun, and that there’s something wrong with the learning climate if the students lose interest.  Students have always lost interest.  We’re genetically primed to react to the “new” because it was once a survival requirement.  But the problem today is that the skills required to succeed in any even moderately complex society require mastery of the basics, i.e., boring skills, or sub-skills, before one can get into the really interesting aspects of work.  Again, merely being able to look something up isn’t the same as knowing it, understanding what it means, and being able to do it, time after time without thinking about it and without having to look it up repeatedly.

And the emphasis on fun and making it interesting is obscuring the need for fundamental mastery of skills, and shortchanging all too many young people.

Original

Last week, in my semi-masochistic reading of reviews, I came across a review of The Magic of Recluce that really jarred me.  It wasn’t that the review was bad, or even a rave.  The reviewer noted the strengths of the book and some areas she thought weak, or at least that felt rough to her.  What jarred me were the words and the references which compared it to books that had been published years afterward, as if The Magic of Recluce happened to be copying books that actually appeared after it.  Now, this may have been as much my impression as what the reviewer meant, but it struck a chord – off-key – in my mind because I’ve seen more than a few reviews, especially in recent years, that note that The Magic of Recluce was good, decent… or whatever, but not as original as [fill in the blank].

Now… I’m not about to get into comparative “quality” — not in this blog, at least, but I have to admit that the “not so original” comment, when comparing Recluce to books published later, concerns me.  At the time the book was published, almost all the quotes and reviews noted its originality.  That it seems less “original” now to newer and often younger readers is not because it is less original, but because there are far more books out with differing magic systems.  Brandon Sanderson, for example, has developed more than a few such systems, but all of them came well after my systems in Recluce, Erde, and Corus, and Brandon has publicly noted that he read my work well before he was a published author.

The word “original” derives from “origin,” i.e., the beginning, with the secondary definition that it is not a copy or a duplicate of other work.  In that sense, Tolkien’s work and mine are both original, because our systems and the sources from which we drew are substantially different.  Tolkien drew from linguistics and the greater and lesser Eddas, and, probably through his Inkling connections with C.S. Lewis, slightly from Wagner.  I developed my magic system from a basis of physics.  Those are the “origins.”

The other sense of “original” signifies that which precedes what follows, and in that sense, my work is less original than that of Tolkien, but more “original” than that of Sanderson or others who published later, for two reasons.  First, I wrote it earlier than did those who followed me, and second, I developed magic systems unlike any others [although the Spellsong Cycle magic has similarities to Alan Dean Foster’s Spellsinger, it rests on a fundamentally different technical concept].

There’s also a difference between “original” and “unique.”  While it is quite possible for an original work not to be unique, a truly unique work must be original, although it can be derivative.

In any case, my concerns are nothing compared to those raised by the reader review I once read that said that Tolkien’s work was “sort of neat,” even if he did rip off a lot from Terry Brooks.

Anyone Can Do That

The other day I received an email from a faithful reader who noted that he had stopped reading The Soprano Sorceress because the song magic was “too easy.” Over the years I’ve received other comments along the lines that all she had to do was open her mouth and sing.

Right. Except that under the magic system in Erde, the song had to be perfectly on pitch and in key; the words had to specify what had to be accomplished; and the accompaniment had to match. In the opening of that book, a sorcerer destroyed a violinist whose accompaniment was imperfect — because it could have threatened his life. Comparatively few professional singers, except classically trained opera singers, can maintain such perfection in a live performance. And some of those don’t have the best diction — yet clear diction would be vital in song spell-casting. Now… try it in the middle of a battle or when your life is under immediate threat.

I bring this up because there are certain skills in any society, but particularly in ours, that almost everyone thinks they possess. Most people believe they can sing, or write, or paint almost as well as the professionals, and almost all of them are certain they can critique such work with great validity.

I’m sorry. Most people have a far higher opinion of their skills than can be objectively confirmed — and that’s likely an understatement. Even in noted music conservatories, only a minority of graduates are good enough, talented enough, and dedicated enough to sing professionally. The same is true of noted writing programs or established art programs. For that matter, comparatively few graduates of noted business schools ever make it to the top levels of business organizations or corporations.

A similar attitude pervades our view of sports. Tens of millions of American men identify with sports and criticize and second-guess athletic professionals whose skills they could never match under pressures they can only vaguely comprehend. Monday morning quarterbacking used to be a truly derogatory term, enough so that its use tended to stop someone cold. Now it’s almost jocular, and everyone’s an expert in everything.

Is all this because our media make everything look easy, concentrating only on the handful of individuals in the arts, athletics, and the professions who are skilled, dedicated, and talented enough to make it look “easy”? Or is it because our society has decided to tell students that they’re wonderful, or have “special” talents, even when they’re failing?

The bottom line is that doing anything well is not “easy,” no matter how effortless it looks, especially when one of the talents of the best is to make that accomplishment look effortless… and that usually means that only those who truly understand that skill really know what it took to make it look easy or effortless.

The Impact of the Blog/Twitter Revolution

The Pew Research Center recently reported that among 19-29 year-olds, blogging activity dropped from close to thirty percent in December 2007 to around fifteen percent by the end of 2009, while the number of teenagers who blog continues to decline. Those under thirty now focus primarily on Facebook and Twitter. On the other hand, blogging has increased among adults over thirty by close to forty percent in the last three years, although the 11% of those adults who do blog is still below the 15% level of the 19-29 age group. Based on current trends, that gap will close significantly over the next few years.

These data scarcely surprise me. After all, once you’ve blurted out, “Here I am,” and explained who you are, maintaining a blog with any form of regularity takes thought, insight, and dedicated work, none of which are exactly traits encouraged in our young people today, despite the lip service paid to them. And, while it can be done, it’s hard to fully expose one’s lack of insight and shallowness when one is limited to the 140 characters of a Twitter message, and since Facebook is about “connecting” and posturing, massive thought and insight are not required.

There is a deeper problem demonstrated by these trends — that technology is being used once more to exploit the innate tendency of the young to focus on speed and fad — or “hipness” [or whatever term signifies being cool and belonging]. All too many young adults are functionally hampered by their inability to concentrate and to focus on any form of sustained task. Their low boredom threshold, combined with a socially instilled sense that learning should always be interesting and tailored precisely to them, makes workplace learning difficult, if not impossible, for far too many of them, and makes them want to be promoted to the next position as soon as possible.

As Ursula Burns, the President and CEO of Xerox, recently noted, however, this urge for promotion as soon as one has learned the basics of a job is horribly counterproductive for the employer… as well as for the employee. The young employee wants to move on as soon as he or she has learned the job. If businesses acquiesce in this, they’ll always be training people, and they’ll never be able to take advantage of the skills people have learned, because once they’ve learned the job, they’re gone from that position, either promoted or departed to another organization in search of advancement. It also means that those who follow such a path never fully learn, and never truly refine and improve those skills.

This sort of impatience has always existed among the young, and it’s definitely not unique to the current generations. What is unique, unfortunately, is the degree to which society and technology are embracing and enabling what is, over time, effectively an anti-social and societally fragmenting tendency.

Obviously, not all members of the younger generation follow these trends and patterns, but from what I’ve learned from my fairly widespread net of acquaintances in higher education across the nation, a majority of college students, perhaps a sizable majority, are in fact addicted to what I’d call, for lack of a better term, “speed-tech superficiality,” and that’s going to make the next few decades interesting indeed.

Miscellaneous Thoughts on Publishing

Several of the comments in the blogosphere during the Macmillan-Amazon dust-up focused on the point I and others had raised about the fact that, depending on the publisher, from thirty to sixty percent of all books lost money and that those losses were made up by the better-selling books. A number of commenters to various blogs essentially protested that publishers shouldn’t be “subsidizing” books that couldn’t carry their own weight, so to speak. At the time, I didn’t address this misconception, but it nagged at me.

So… almost NO publishers print books that they know will lose money. The plain fact of the matter is that when a publisher prints a book, it is usually with the expectation that it will at least break even, or come close. At times, publishers know a book will be borderline, because the author is new, but they publish the book in the hopes of introducing an author whose later books, they believe, will sell more. While the statistics show that 30%-60% of books lose money, the key point is that the publishers don’t know in advance which books will lose money. Yes, they do know that it’s unlikely that, for example, a Wheel of Time or a Recluce book will lose money, but no publisher has enough guaranteed best-sellers to fill out their printing schedule. Likewise, they really don’t know who will become a guaranteed best-seller. Just look at how many publishers turned down Harry Potter. Certainly, no editors ever thought that the Recluce books would sell as well or for as long as they have. Not to mention the fact that there are authors whose books were at the top of The New York Times bestseller lists whose later books were anything but bestsellers. The bottom line is simple: Publishers do not generally choose to print books that they know will lose money just to subsidize a given book or author. They try to print good-selling books, and they aren’t always successful.

Last week, Bowker released sales figures for the book publishing industry that revealed that only two percent of all book sales in 2009 were of e-books, while 35% were of trade paperbacks, 35% were hardcovers, and 21% were mass market paperbacks. Interestingly enough, though, while chain bookstores sold 27% of all books, e-commerce sites, such as Amazon and BarnesandNoble.com, sold 20% of all titles, including hardcovers, trade paperbacks, and mass market paperbacks. People talk about how fast matters can change, but even “fast” takes time. Jeff Bezos started Amazon in 1994. Today, based on the Bowker figures, Amazon probably accounts for between nine and fourteen percent of all U.S. book sales, but that’s after sixteen years of high growth. A study by Nielsen [the BookScan folks] also revealed that forty percent of all readers would not consider e-books under any circumstances. To me, those figures suggest that, while e-books may indeed be the wave of the future, the industry isn’t going to be doing big-time surfing on it for many years to come.
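
That nine-to-fourteen percent range is nothing more than back-of-the-envelope arithmetic on my part: if e-commerce accounts for roughly 20% of sales, as the Bowker figures suggest, and Amazon’s slice of that online channel runs somewhere between roughly 45% and 70% [an assumption of mine, not a Bowker number], then the arithmetic works out as follows:

0.20 × 0.45 = 0.09, or about nine percent of all U.S. book sales
0.20 × 0.70 = 0.14, or about fourteen percent of all U.S. book sales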

Total book sales were down about three percent last year, but fiction sales were up seven percent. The overall decline was linked to a decrease in sales of adult nonfiction. That indicates there was definitely an increased market for escapism in 2009.

And one last thought… in 1996, Amazon was still struggling, and there was a question as to whether it would really pull through — and then Jeff Bezos introduced the reader reviews, and Amazon never looked back. Because readers could offer their own views… they bought more books from Amazon. Do so many people feel so marginalized that being able to post comments changes their entire buying habits? Another downside to reader reviews is that the increasingly wide usage of the practice — from student evaluations to Amazon reviews — reinforces the idea that all opinions are of equal value… and they’re not, except in the mind of the opinion-giver. Some reader reviews are good, thoughtful, and logical. Most are less than that.

So, in yet another area, good marketing has quietly undermined product excellence.

Thoughts on “The Oscars”

Actually, this blog deals with my reaction to the expressed thoughts of others about the Oscar ceremony. Before beginning, however, I will cheerfully admit that I watch almost all movies either on DVD or satellite, often years after they’re released.

Now… for those thoughts. By Monday morning, in all too many media outlets, so-called columnists and pundits were complaining that the ceremony was too long and that too much time was wasted on “minor” awards that no one cared about, such as make-up, costumes, sound mixing, and the like. I didn’t happen to see a complaint about special effects, but maybe I overlooked it.

There are two BIG things that bother me about all this Monday morning quarterbacking. First, the Oscars were designed to recognize all aspects of film-making, not just the six “biggies.” As a matter of fact, I could make the argument that those who have been nominated for those — best picture, best director, best leading actor and actress, best supporting actor and actress — need the recognition far less than all the others who enabled the “biggies” to shine. Without a good script, the best actor looks stupid, as some of the greatest names in film have proved a few times. With the wrong music, the right mood doesn’t get created, and Richard Nixon certainly proved that make-up makes a difference. How can you have a Jane Austen period piece, or a Young Victoria, without the right costuming? The entire success of Avatar depends not so much on the actors as on all the things that aren’t the actors. The actors and directors are always recognized. Why begrudge all the others a few hours once a year when a few of them actually get noticed?

In addition, the ceremony and the awards were originally developed to provide such recognition — not to provide prime-time, viewer-oriented “entertainment.” But, of course, because many people have become interested, the “Oscar ceremony” is now packaged as entertainment, and the vast majority of the more technical awards are presented at another ceremony — noted at the “real” Oscar ceremony with a quick picture and thirty seconds of explanation [out of three hours] and not even a listing of names, because, after all, why should one be obliged to read a long list at the “official” Oscar ceremony?

My second BIG objection is that movies, especially today, are highly technical enterprises that require great expertise, and yet these commentators seem to want to ignore the very expertise that makes such great films possible in favor of glitz and celebrity. In a way, it reminds me of the Roman Empire, where the great majority of the engineers who designed all those buildings, bridges, and aqueducts were slaves — more privileged slaves, to be sure — but slaves nonetheless. And what happened as even the minimal respect for those slaves vanished in the decadence of glitz and ancient celebrity?

What these commentaries about the dullness of recognizing expertise reveal, unfortunately, is a deplorable cultural shift away from appreciating the technology that underpins everything we do, including even one of the least substantive aspects of our society — cinema — toward even more superficiality. And even that superficiality has to be so current. Last year is so passé. As for more than a year ago… forget it.

The polite, bored, minimal applause that followed the heartfelt tribute to John Hughes was incredibly painful to hear. Here was a man who gave his life to his art and combined humor with insight, and the general reaction was, “We’re bored.” And then the “In Memoriam” section was so abbreviated and rushed through so quickly, with names even eliminated when the camera cut to James Taylor singing, that it was almost a travesty.

Are we so into glitz that we can’t spare an hour or two once a year to allow a little recognition for those who went before and for a comparative handful of experts, who represent tens of thousands of technical specialists that we never otherwise acknowledge, yet whose contributions are absolutely vital to the film industry? Is that really too much to ask?

And, remember, I’m not even a film buff.