Archive for June, 2012

Technology and the Tool-User

Modern technology is a wonder.  There’s really no doubt about that.  We can manipulate images on screens. We can scan the body to determine what might be causing an illness.  We can talk to people anywhere in the world and even see their images as they respond.  We can produce tens of millions of cars and other transport devices so that we aren’t limited by how far our legs or those of an animal can take us.  We can see images of stars billions of light years away.

But… technology has a price.  In fact, it has several different kinds of prices.  Some are upfront and obvious, such as the prices we pay to purchase all the new and varied products of technology, from computers and cell phones to items as mundane as vacuum cleaners and toaster ovens. Others are less direct, such as the various forms of pollution and emissions from the factories that produce those items or the need for disposal and/or recycling of worn-out or discarded items.  Another indirect cost is that, as the demand for various products increases, often the supply of certain ingredients becomes limited, and that limitation increases the prices of other goods using the same ingredients.

But there’s another, far less obvious price to modern technology.  That less obvious price is that not only do people shape technology, but technology shapes and modifies people.  This worry has a long history.  Probably the invention of writing had some pundits saying that it would destroy memory skills, and certainly this issue was raised when the invention of the printing press made mass production of books possible.  In terms of the impact on most human beings, however, books and printing really didn’t change the way most people perceived the world to a significant degree.  Printing did raise the level of knowledge world-wide to one where at least the educated individuals in most countries possessed similar information, and it did result in a massive increase in literacy, which eventually resulted in a certain erosion of the power of theological and ruling elites, particularly in western societies… but the impact internally upon an individual’s perception was far more limited than the doomsayers prophesied.

Now, however, with the invention of the internet, search engines, and all-purpose cellphones providing real-time, instant access to information, I’m already seeing significant differences in the mental attitudes of young people and the potential for what I’d term widespread knowledgeable ignorance.

While generations of students have bemoaned the need to learn and memorize certain facts, formulae, processes, and history, the unfortunate truth is that some such memorization is required for an individual to become a thinking, educated individual.  And in certain professions, that deeply embedded, memorized, and internalized knowledge is absolutely necessary.  A surgeon needs to know anatomy inside and out.  Now, some will say that computerized surgeons will eventually handle most operations.  Perhaps… but who will program them?  Who will monitor them?  Pilots need to know things like the critical stall speeds of their aircraft and the flight characteristics immediately preceding a potential stall, as well as how to recover; there isn’t time to look those up, and trying to follow spoken directions for an unfamiliar procedure is a formula for disaster.

In every skilled profession, to apply additional knowledge and to progress requires a solid internalized knowledge base.  Unfortunately, in this instant-access-to-information society more and more young people no longer have the interest/skills/ability to learn and retain knowledge. One of the ways that people analyze situations is through pattern-recognition, but you can’t recognize how patterns differ if you can’t remember old patterns because you never learned them.

Another variation of this showed up in the recent financial meltdowns: the idea that new technology and ideas always trump the old.  As one veteran of the financial world observed, market melt-downs don’t happen often, perhaps once a generation, and the Wall Street “whiz-kids” were too young to have experienced the last one, and too contemptuous of the older types whose experience and cautions they ignored… and the reaction of all the high-speed computerized trading just made it worse.

A noted scholar at a leading school of music observed privately several months ago that the school was now getting brilliant students who had difficulty memorizing – and in some cases could not learn to memorize – their roles for opera productions.  In this electronic world, they’d never acquired the skill.  And in opera, as well as in live theatre, if you can’t memorize the music and the words… you can’t perform.  It’s that simple.  This university has been in existence over a century… and never has this problem come up before.

And what happens when all knowledge is of the moment, and electronic – and can be rewritten and revised to suit the present?  When memory is less trusted than the electronic here and now?  You think that this is impossible?  When Jeff Bezos has stated, in effect, that Amazon’s goal is to destroy all print publications and replace them with electronic formats?  And when the U.S. Department of Justice is his unwitting dupe?

But then, who will remember that, anyway?

Solutions and Optimism

Believe it or not, I really am a cheerful and optimistic sort, but the reaction to some of my latest blogs brings up several points that bear repeating, although some of my readers clearly don’t need the reminders, because their comments show understanding.  First, a writer is not just what he or she writes. Second, critical assessment, particularly if it’s accurate, of an institution or a societal practice is not always “negative.”  Third, solutions aren’t solutions until and unless they can be implemented.

Readers can be strange creatures, even stranger than authors, at times.  I know an author who writes about the experiences of a white trash zombie.  She’s a very warm person and not at all either white trash or a zombie.  And most readers have no problem understanding that.  Yet, all too often, some readers have great difficulty in understanding that just because a writer accurately portrays a character with whose acts or motivations they disagree it doesn’t necessarily mean the character represents the author.  I’ll admit that some of my characters do embody certain experiences of mine – especially those who are pilots of some sort or involved in government – but that still doesn’t mean that they’re me.  Likewise, just because I point out what I see as problems in society doesn’t mean that I’m a depressed misanthrope.

As I and others have said, often, the first step to solving a problem is recognizing it exists. On a societal level, this is anything but easy. Successful societies are always conservative and slow to change, but societies that don’t change are doomed.  The basic question for any society is how much and how fast to change, and the secondary questions are whether a change is necessary or inevitable… or beneficial, because not all change is for the best.

One of the lasting lessons I learned in my years in Washington, D.C., is that there is usually more than one potential and technically workable solution to most problems.  At times, there are several.  Very, very occasionally, there is only one, and even then there is the possibility of choosing not to address the problem.  And every single solution to a governmental problem has negative ramifications for someone or some group, so that addressing any problem incorporates a decision as to who benefits and who suffers.  Seldom is there ever an easy or simple solution.  And, of course, as voters we don’t get to choose that solution; we only get to vote for those who will, and often our choice isn’t the one who gets elected.

For that reason, my suggested course of action is almost never to vote for any politician who promises a simple or easy solution.  If two candidates promising simple solutions are running, vote for the one who incites less anger or whose solution is “less simple.”

This electoral emphasis on simplicity has always been present in American politics, but in the past, once the campaign was over, politicians weren’t so iron-clad, and didn’t always insist on a single simple answer/solution. I saw the beginning of the change in the late 1970s, and it intensified in the Reagan Administration. For example, when I was at the Environmental Protection Agency, there was a large group of people who were totally opposed to hazardous waste landfills or incinerators – anywhere.  In addition, and along the same lines, to this day, we don’t have a permanent repository for spent nuclear fuel.  I’m sorry, but in a high tech society with nuclear power plants, you need both.  The waste isn’t going away, and the products we use and consume generate those wastes.  Right now there is NO technology that can generate high tech electronics without creating such wastes, and to make matters worse, the cleaner the technology, the more expensive it is, which is why a lot of electronic gear isn’t manufactured in the USA.  Likewise, the immigration problem won’t go away so long as the United States offers the hope of a better life for millions of people.  We can’t effectively seal the borders.  Nor can we deport all illegal aliens, not without becoming a police state along the lines of Nazi Germany or Stalinist Russia. There are no simple solutions that are workable.  Period.

The current legislative gridlock in Washington, D.C., reflects the iron-clad insistence by each party, and especially, I’m sad to say, the Republicans, that their “solution” is the only correct one.  It’s not a solution if roughly half the people in the country, or half the elected representatives [or a minority large enough to block legislation], oppose it, because it’s not going to get adopted, no matter what its backers claim for it.  In practice, in our society, any workable solution requires compromise.  When compromise fails, as it did over the issue of slavery, the result can only be violence in some form. Unhappily, as I’ve said before, the willingness to work out compromise solutions has declined. In fact, politicians willing to compromise are being branded as traitors.  So are politicians who try to forge alliances across party lines.  So… my suggested solution is to vote for officials who are open to compromise and vigorously oppose those who claim that compromise is “evil” or wrong, or un-Democratic, or un-Republican.  No… it’s not a glamorous and world-shaking solution. But it’s the only way we have left to break the logjam in government.  Until lots of people stop looking for absolute and simple solutions and start agitating for the politicians to work together and hammer things out… they won’t.  Because the message given to every politician out there right now has been that compromise kills political careers.

So we can all stick to our hard and fast principles – and guns, if it comes to that – and watch nothing happen until everything falls apart… or we can reject absolutist politics and get on with the messy business of politics in a representative democratic republic.


Older and Depressed?

The other day one of my readers asked, “Is there anything positive you can talk about or have you slid too far down the slope of elder grouchiness and discontent?”  That’s a good question in one respect, because I do believe that there is a definite tendency, if one is intelligent and perceptive, to become more cynical as one gains experience.

Psychological studies have shown, however, that people who suffer depression are far more accurate in their assessments of situations than are optimists, and that may be why optimism evolved – because it would be too damned hard to operate and get things done if we weighed things realistically.  For example, studies also show that entrepreneurs and people who start their own businesses invariably over-estimate the chances of their success and vastly underestimate their chances of failure.  This, of course, makes sense, because why would anyone open a business they thought would fail?

There’s also another factor in play. I spent nearly twenty years in Washington, D.C., as part of the national political scene, and after less than ten years I could clearly see certain patterns repeat themselves time after time, watching as newly elected politicians and their staffs made the same mistakes that their predecessors did and, over the longer term, watching as each political party gained power in response to the abuses of its predecessor, then abused it, and tried to hold on by any means possible, only to fail, and then to see the party newly in power immediately begin to abuse its power… and so on. It’s a bit difficult not to express a certain amount of “grouchiness and discontent,” especially when you offer advice based on experience and have it disregarded because the newcomers “know better”… and then watch them make the same kind of mistakes as others did before them.  My wife has seen the same patterns in academia, with new faculty and new provosts re-inventing what amounts to a square wheel time after time.

It’s been said that human knowledge is as old as written records, but human wisdom is no older than the oldest living human being, and, from what I’ve seen, while a comparative handful of humans can learn from others, most can’t or won’t.  And, if I’m being honest, I have to admit that for the early part of my life I had to make mistakes to learn, and I made plenty. I still make them, but I’d like to think I make fewer, and the ones I make are in areas where I don’t have the experience of others to guide or warn me.

The other aspect of “senior grouchiness,” if you will, is understanding that success in almost all fields is not created by doing something positively spectacular, but by building on the past and avoiding as many mistakes as possible. Even the most world-changing innovations, after the initial spark or idea, require following those steps.

I’m still an optimist at heart, and in personal actions, and in my writing, but, frankly, I do get tired of people who won’t think, won’t learn, and fall back on the simplistic in a culture that has become fantastically complex, both in terms of levels and classes of personal interactions and in terms of its technological and financial systems.  At the same time, the kind of simplicity that such individuals fall back on is the “bad” and dogmatic kind, such as fanatically fundamental religious beliefs and “do it my way or else,” as opposed to open and simple precepts such as “be kind” or “always try to do the right thing.”  I’m not so certain that a great portion of the world’s evils can’t be traced to one group or another trying to force their way – the “right way,” of course – upon others.  The distinction between using government to prohibit truly evil behavior, such as murder, abuse of any individual, theft, embezzlement, fraud, assault, and the like, and forcing adherence to what amounts to theological beliefs was a hard-fought battle that took centuries to work itself out, first in English law, and later in the U.S. Constitution and legal system.  So when I see “reformers” – and they exist on the left and the right – trying to undermine that distinction, the one represented by the idea of separation of church and state [although it goes far beyond that], I do tend to get grouchy and offer what may seem to be depressing comments.

This, too, has historical precedents.  Socrates complained about the youth and their turning away from Athenian values… but within a century or so Athens was prostrate, and the Athenians never did recover a preeminent position in the world.  Cicero and others made the same sort of comments about the Roman Republic, and within decades the republic was gone, replaced by an even more autocratic empire.

So… try not to get too upset over my observations. After all, if more people avoided the mistakes I and others who have learned from experience point out, we’d all have more reasons to be optimistic.


The Republican Party

Has the Republican Party in the United States lost its collective “mind,” or is it a totally new political party clinging to a traditional name – whose traditions and the policies of its past leaders it has continually and consistently repudiated over the past four years?

Why do I ask this question?

Consider first the policies and positions of the Republican leaders of the past.  Theodore Roosevelt pushed anti-trust actions against monopolistic corporations, believed in conservation, and greatly expanded the national park system.  Dwight D. Eisenhower, general of the armies and president, warned against the excessive influence of the military-industrial complex and created the federal interstate highway system.  Barry Goldwater, Mr. Conservative of the 1960s, was pro-choice and felt women should decide their own reproductive future.  Richard Nixon, certainly no bastion of liberalism, espoused universal health insurance, tried to get it considered by Congress, and founded the Environmental Protection Agency.  Ronald Reagan, cited time and time again by conservatives, believed in collective bargaining, was actually a union president, and raised taxes more times than he cut them.  The first President Bush promised not to raise taxes, but had the courage to take back his words when he realized taxes needed to be increased.

Yet every single one of these acts and positions has now been declared anathema by Republicans running for President and for the U.S. House of Representatives and the Senate.  In effect, none of these past Republican leaders would “qualify” as true card-carrying Republicans according to those who now compose or lead the Republican Party.  A few days ago, former Florida governor and Republican Jeb Bush made a statement to the effect that even his father, the first President Bush, wouldn’t be able to get anything passed by the present Congress.

President Obama is being attacked viciously by Republicans for his health care legislation, legislation similar to that signed and implemented by Mitt Romney as governor of Massachusetts and similar in principle to that proposed by Richard Nixon.

Now… I understand that people change their views and beliefs over time, but it’s clear that what the Republican Party has become is an organization endorsing what amounts almost to an American version of fascism, appealing to theocratic fundamentalism, and backed by a corporatist coalition, claiming to free people from excessive government by underfunding or dismantling all the institutions of government that were designed to protect people from the abuses of those with position and power.  Destroy unions so that corporations and governments can pay people less.  Hamstring environmental protection in the name of preserving jobs so that corporations don’t have to spend as much on environmental emissions controls.  Keep taxes low on those making the most.  Allow those with wealth to spend unlimited amounts on electioneering, albeit in the name of “issues education,” while keeping the names of contributors hidden or semi-hidden.  Restrict women’s reproductive freedoms in the name of free exercise of religion.  Keep health care insurance tied to employment, thus restricting the ability of employees to change jobs.  Allow consumers who bought too much housing to walk away from their liabilities through bankruptcy or short sales (including the honorable junior Senator from Utah), but make sure that every last penny of private student loan debt is collected – even if the students are deceased.

The United States is a representative democratic republic, and if those calling themselves Republicans wish to follow the beliefs and practices now being spouted, that’s their choice… and it’s also the choice of those who choose to vote for them.

But for all their appeal to “Republican traditions,” what they espouse and propose is neither Republican nor traditional in the historic sense.  But then, for all their talk of courage and doing the hard jobs to be done, they haven’t done the first of those jobs, which is to be honest and point out that they really aren’t Republicans, and they certainly aren’t traditional conservatives, no matter what they claim.

The Derivative Society?

Once upon a time, banking and investment banking were far less complex than they are today, especially recently.  In ancient times, i.e., when I took basic economics more than fifty years ago, banks used the deposits of their customers to lend to other customers, paying less to their depositors than they charged those to whom they made loans.  Their loans were limited by their deposits, and banks were required to retain a certain percentage of their assets in, if you will, real dollars.  Even investment banks had some fairly fixed rules, and in both cases what was classed as an asset had to be just that: generally either real property, something close to blue-chip securities, municipal, state, or federal notes or bonds, or cash.  With the creeping deregulatory legislation that reached its apex in the 1990s, almost anything could, with appropriate laundering, otherwise known as derivative creation, be classed as someone’s asset.
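The old textbook mechanics described above – deposits limiting loans through a required reserve percentage – can be sketched in a few lines.  This is the classic classroom model with made-up numbers, not a description of any actual bank:

```python
# Classroom fractional-reserve lending sketch (illustrative numbers only):
# each deposit is partly held as reserves and partly lent out; the loan is
# spent, redeposited elsewhere, and the cycle repeats.

def lending_rounds(initial_deposit, reserve_ratio, rounds=1000):
    total_deposits = 0.0
    total_loans = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total_deposits += deposit
        loan = deposit * (1 - reserve_ratio)  # reserves held back
        total_loans += loan
        deposit = loan  # the loan is spent and redeposited elsewhere
    return total_deposits, total_loans

deposits, loans = lending_rounds(100.0, 0.10)
# With a 10% reserve requirement, $100 of new money ends up supporting
# roughly $1,000 of deposits and $900 of loans.
```

The geometric series converges to the initial deposit divided by the reserve ratio, which is why a small required reserve lets a modest deposit base support a much larger loan book – and why loosening what counts as a reserve-worthy “asset” mattered so much.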

And we all know where that led.

And for all the furor about derivatives, and the finger-pointing, something else, it seems to me, has gone largely unnoticed.  The fact is that our entire society, especially in the United States, has become obsessed with derivatives in so many ways.

What are McDonald’s, Wendy’s, Burger King, Applebee’s, Olive Garden, Red Lobster, Chili’s, and endless other restaurant chains, fast-food and otherwise, but derivatives?  What happened to unique local restaurants?  The ones with good inexpensive food often became chains, deriving their success from the original.  The others, except for a few handfuls, failed.  Every year, it seems, another big-name chef starts a restaurant franchise, hoping to derive success and profit from a hopefully original concept [which is becoming less and less the case].

Department stores used to be unique to each city.  I grew up in Denver, and we had Daniels & Fisher, with its special clock tower, the Denver Dry Goods [“The Denver”], and Neustaeder’s.  Then the May Company took over D&F, and before long all the department stores were generic. In Louisville, where my wife was raised, there were Bacon’s, Kaufmann’s, Byck’s, Selman’s, and Stewart’s. Not a single name remains.

Even Broadway, especially in musical theatre, has gone big for remakes and derivatives. Most of the new musicals appear to be remakes of movies, certainly derivative, or re-dos of older musicals. Every time there is a new twist on TV programming the derivatives proliferate.  How many different “Law and Order” versions are there?  Or CSI?  How many spin-offs from the “American Idol” concept?  How many “Reality TV” shows are there?  Derivative after derivative… and that proliferation seems to be increasing. Even “Snow White” has become a derivative property now.

In the field of fantasy and science fiction writing, the derivatives were a bit slower in taking off, although there were more than a few early attempts at derivatives based on Tolkien, but then… somewhere after Fred Saberhagen came up with an original derivative of the Dracula mythos, vampires hit the big-time, followed by werewolves, and more vampires, and then zombies.  Along the way, we’ve had steampunk, a derivative of a time that never was, fantasy derivatives based on Jane Austen, and more names than I could possibly list, and now, after the “Twilight” derivatives, we have a raft of others.

Now… I understand, possibly more than most, that all writing and literature derives from its predecessors, but there’s a huge difference between, say, a work like Mary Robinette Kowal’s Shades of Milk and Honey, which uses the ambiance of a Regency-type culture and setting in introducing a new kind of fantasy [which Kowal does], and a derivative rip-off such as Pride and Prejudice and Zombies or Emma and the Werewolves.  When Roger Zelazny wrote Creatures of Light and Darkness or Lord of Light, he derived something new from the old myths.  In a sense, T.S. Eliot did the same in The Waste Land, or Yeats in “No Second Troy.”  On the other hand, I don’t see that in John Scalzi’s Redshirts, which appears to me as a derivative capitalization on Star Trek nostalgia.

How about a bit more originality and a lot fewer “literary” derivatives?  Or have too many writers succumbed to the lure of fast bucks from cheap derivatives? Or have too many readers become too lazy to sort out the difference between rip-off and robbery whole-cloth derivatives and thoughtful new treatments of eternal human themes?


Coincidences?

We’ve all been there, I think, on the telephone discussing something important to us or with someone important to us… and no one else is home, when the doorbell rings, or another call comes through, with someone equally important, or both at once.  Now, it doesn’t matter that no one has called or rung the doorbell for the previous two hours and no one will for another hour or two.  What is it about the universe that ensures that, in so many cases, too many things occur at the same time?

I’m not talking about those which aren’t random, but can be predicted, like the political calls that occur from five or six in the evening until eight o’clock, or the charitable solicitations that are timed in the same way [both conveniently excepted from the do-not-call listing]. I’m talking about calls and callers and events that should be random, but clearly aren’t.  Sometimes, it’s merely amusing, as when daughters located on different coasts call at the same time.  Sometimes, it’s not, as when you’re trying to explain why you need the heating fixed now, and your editor calls wanting an immediate answer on something… or you’re discussing scheduling long-distance with your wife and you ignore the 800 call that you later find out was an automated call, without ID, informing you that your flight for six A.M. the next morning has been cancelled… and you don’t find out until three A.M. the next morning when you check your email before leaving for the airport… and end up driving an extra 60 miles to the other airport. There’s also the fact that, no matter what time of the afternoon it is, there’s a 10-20% chance that, whenever I’m talking to my editor, either FedEx, UPS, or DHL will appear at the door [upstairs from my office] needing a signature… and we don’t get that many packages [except from my publisher] and I spend less than a half hour a week on the phone with my editor.

I know I’m not alone in this.  Too many people have recounted similar stories, but the logical types explain it all away by saying that we only remember the times these things happen, but not the times that they don’t.  Maybe… but my caller I.D. gives the times for every incoming call, and when I say that there haven’t been any calls for two or three hours, and then I get three in three minutes… it doesn’t lie – not unless there’s a far grander conspiracy out there than I even wish to consider.  And why is it that I almost always get calls in the ten minutes or so a day when I’m using the “facilities”?  No calls at all in the half hour before or after, of course.

This can extend into other areas – like supermarket checkout lines. The most improbable events occur in all too many cases in whatever line I pick.  The juice packet of the shopper in front of me explodes all over the conveyor belt.  The checker I have is the only one not legally able to ring up beer, and the manager is dealing with an irate customer in another line.  The register tape jams.  The credit/debit card machine freezes on the previous customer, just after I’ve put everything on the belt.

Now… to be fair, it sometimes works the other way. There was no possible way I ever could have met my wife.  None [and I won’t go into the details because they’d take twice the words of my longest blog], but it happened, and she’s still, at least occasionally, pointing out that it had to be destiny… or fate.  Well… given how that has turned out, I wouldn’t mind a few more “improbable” favorable coincidences, but… they’re pretty rare.  Then again, if all the small unfavorable improbabilities are the price for her… I’ll put up with them all.


The Next Indentured Generation?

The other day I received a blog comment that chilled me all the way through.  No, it wasn’t a threat.  The commenter just questioned why state and federal government should be supporting higher education at all.

On the surface, very much on the surface, it’s a perfectly logical question. At a time of financial difficulty, when almost all states have severe budget constraints, if not enormous deficits, and when the federal deficit is huge, why should the federal government and states be supporting higher education?

The question, I fear, arises out of the current preoccupation with the here and now, and plays into Santayana’s statement about those who fail to learn the lessons of history being doomed to repeat them. So… for those who have mislaid or forgotten a small piece of history, I’d like to point out that, until roughly 1800, there were literally only a few handfuls of colleges and universities in the United States – less than 30 for a population of five million people. Most colleges produced far, far fewer graduates annually than the smallest of colleges in the USA do today.  Harvard, for example, averaged less than 40 graduates a year.  William & Mary, the second oldest college in the United States, averaged 20 graduates a year prior to 1800.  Although aggregated statistics are unavailable, estimates based on existing figures suggest that less than one half of one percent of the adult population, all male, possessed a college education in 1800, and the vast majority of those graduates came from privileged backgrounds.  Essentially, higher education was reserved for the elites. Although more than a hundred more colleges appeared in the years following 1800, many of those created in the south did not survive the Civil War.

In 1862, Congress created the first land-grant universities, and eventually more than 70 were founded, based on federal land grants, primarily to teach agricultural and other “productive” disciplines, but not to exclude the classics. By 1900, U.S. colleges and universities were producing 25,000 graduates annually, out of a population of 76 million people, meaning that only about one percent of the population, still privileged, received college degrees, a great percentage of these from land grant universities supported by federal land grants and state funding.  These universities offered college educations with tuition and fees far lower than those charged by most private institutions, and thus afforded the education necessary for those not of the most privileged status.  Even so, by 1940, only five percent of the U.S. population had a college degree.  This changed markedly after World War II, with the passage of the GI bill, which granted veterans benefits for higher education. Under the conditions which existed after WWII until roughly the early 1970s, talented students could obtain a college degree without incurring excessive debt, and sometimes no debt at all.

As we all know, for various reasons, that has changed dramatically, particularly since state support of state colleges and universities has declined from something close to 60% of costs forty years ago to less than 25% today, and less than 15% in some states.  To cover costs, the tuition and fees at state universities have skyrocketed.  The result?  More students are working part-time and even full-time jobs, as well as taking out student loans.  Because many cannot work and study full-time, graduating takes longer, and that increases the total cost of their education.  In 2010, 67% of all graduating college seniors carried student loan debts, with an average of more than $25,000 per student.  The average student debt incurred by a doctor just for medical school is almost $160,000, according to the American Medical Association.

Yet every study available indicates that college graduates make far more over their lifetime than those without college degrees, and those with graduate degrees generally fare even better.  So… students incur massive debts.  In effect, they’ll become part-time higher-paid indentured servants of the financial sector for at least 20 years of their lives.
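The “at least 20 years” claim can be sanity-checked with the standard loan amortization formula.  A minimal sketch, using the $25,000 average debt cited above but assuming a hypothetical 6.8% fixed rate and a 20-year term (neither the rate nor the term comes from this post):

```python
# Rough sanity check of the "20 years of payments" idea, using the
# $25,000 average debt from the post.  The 6.8% rate and 20-year term
# are illustrative assumptions, not figures from the post.

def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula: P*r / (1 - (1+r)^-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(25_000, 0.068, 20)
total_repaid = payment * 20 * 12

print(f"monthly payment: ${payment:,.2f}")      # roughly $190 a month
print(f"total repaid:    ${total_repaid:,.2f}")  # roughly $45,800 over 20 years
```

Under those assumptions, a graduate with the average debt repays nearly twice the original principal over two decades, which is the sense in which even “average” borrowers resemble long-term indentured servants of their lenders.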

The amounts incurred are far from inconsequential.  Student debt now exceeds national credit card debt [and some of that credit card debt also represents student debt, as well]. The majority of these costs reflect the burden shifted onto students as states cut their support of higher education, and the totals don’t include the effects of default rates on student loans, which are approaching ten percent.

As a result, college graduates and graduates from professional degree programs are falling into two categories – the privileged, who have no debt and can choose a career path without primarily considering the financial implications, and those who must consider how to repay massive debt loads.  And as state support for higher education continues to dwindle, the U.S. risks a higher-tech version of social stratification based on who owes student loans and who doesn’t.

So… should the federal and state governments continue to cut support of higher education? Are such cuts a necessity for the future of the United States?  Really?  Tell that to the students who face the Hobson’s choice of low-paying jobs for life or student loan payments for life.  Or should fewer students attend college?  But… if that’s the case, won’t that just restrict education to those who can afford it, one way or another?

The Tax Question

These days an overwhelming number of political figures, especially conservatives and Republicans, continue to protest about taxes and insist that taxes should be lowered and that federal income taxes, at the very least, should be left at the lower levels set during the administration of the second President Bush. Although many conservatives protest that taxes are being used for “liberal” social engineering, the fact is that there are so many “special provisions” embodied in the tax code that such “engineering” runs through provisions purported to help groups ranging from the very poorest to the very wealthiest.  In addition, much of the complexity of the tax code arises from generations of efforts to make it “fairer.”

For all that rhetoric, the basic purpose of taxes is to pay for those functions of government that the elected representatives of past and present voters have deemed necessary through the passage of federal laws and subsequent appropriations.  Or, as put by the late and distinguished Supreme Court Justice Oliver Wendell Holmes, Jr., “Taxes are what we pay for civilized society.”

Grumbling about taxation has been an American preoccupation since at least the 1700s when the American colonists protested the British Stamp Tax and later the tax on imported British tea.  In the case of the tea tax, the colonists were paying more for smuggled tea than for fully taxed British tea, which has always made me wonder about the economic rationality of the Boston Tea Party, and who really was behind it… and for what reason, since it certainly wasn’t about the price of British tea.

Likewise, my suspicions are that the current furor about taxes, and federal income taxes in particular, may not really be primarily about taxes themselves, but a host of factors associated with taxes, most of which may well lie rooted in the proven “loss aversion” traits of human beings.  Put simply, most of us react far more strongly to events or acts which threaten to take things from us than to those which offer opportunities, and in a time when most people see few chances for economic improvement, loss aversion behavior, naturally, becomes stronger.  And most people see higher taxes, deferred Social Security retirement ages, and higher Medicare premiums as definite losses, which they are.

What’s most interesting about this today is that the leaders of the conservative movements and the Republican party are generally from that segment of society which has benefited the most in the past twenty years from the comparative redistribution of wealth to the uppermost segment of American society, and yet they are appealing to those members of society who feel they have lost the most through this redistribution – the once more highly paid blue collar workers in the old automotive industries and other heavy manufacturing areas of the U.S. economy.  The problem with this appeal is not that it will not work – it definitely will work, especially if economic conditions do not improve – but that the policies espoused by the “keep taxes low/cut taxes” conservatives won’t do anything positive to benefit the vast majority of those to whom these conservatives are appealing.  They will, of course, greatly benefit the wealthy, but the comparative lack of federal/state revenues is already hurting education, despite the fact that conservatives and liberals both agree that improved education is vital for today’s and tomorrow’s students if they are to prosper both economically and occupationally.  The lack of money for transportation infrastructure will only hamper future economic growth, as will the lack of funding to rebuild and modernize our outdated air transport control system and a number of other aging and/or outdated infrastructure systems.

The larger problem is, of course, that the conservatives don’t want government to spend money on anything, and especially not anything new, while the liberals have yet to come up with a plan for anything workably positive… and, under those circumstances, it’s very possible that “loss aversion” politics, and the anti-taxation mood, will dominate the political debates of the next six months… which, in the end, likely won’t benefit anyone.

Cleverness?

Over the years, every so often, I’ve gotten a letter or review about one of my books that essentially complains about the ruthless nature of a protagonist who is supposed to be a good person.  These often question why he or she couldn’t have done something less drastic or resolved the situation in a more clever fashion.  The other day, after seeing a review of Imager’s Intrigue and then receiving an email from another writer who was disappointed that Quaeryt couldn’t be more “clever” in his resolution of matters and less reliant upon force, I realized exactly what my grandmother had meant in one of her favorite expressions.  She was always saying that some businessman or politician was “too clever by half.”

So, I believe, are some writers.  I try not to be excessively clever, because it’s highly unrealistic in the real world, but it’s difficult when there’s an unspoken but very clear pressure for authors to be “clever.”  My problem is that I’m moderately experienced in how the “real world” operates, and seldom is a “clever” solution to anything significant or of major import a truly workable solution. As I and numerous historians have pointed out, in WWII, with a few exceptions, the Germans had far more “clever” and advanced technology.  They lost to the massive application of adequate technology.  In Vietnam, the high-tech and clever United States was stalemated by the combination of wide-scale guerrilla warfare and political opposition within the USA.  Despite the application of some of the most sophisticated and effective military technology ever deployed, the U.S. will be fortunate to “break even” in its recent military operations in the Middle East… and given the costs already and the loss of lives for what so far appear to be negligible gains, it could be argued that we’ve lost.  I could cite all too many examples in the business world where “clever” and “best” lost out to cheaper and inferior products backed by massive advertising.  The same sorts of situations are even more prevalent in politics.

“Clever,” in fact, is generally highly unrealistic as a solution to most large scale real-world problems.  But why?

Because most problems are, at their base, people problems, it takes massive resources to change the course of human inertia/perceived self-interest. That’s why both political parties in the United States mobilize billions of dollars in campaign funds… because that’s what it takes, since most people have become more and more skeptical of any cleverness that doesn’t fit their preconceptions…  partly because they’re also skeptical of the “clever” solutions proposed by politicians.  It’s why most advertising campaigns have become low-level, not very clever, saturation efforts.  Military campaigns that involve national belief structures and not just limited and clearly defined tactical goals also require massive commitments of resources – and clever just gets squashed if it stands in the way of such effectively deployed resources.

That’s why, for example, in Imager’s Intrigue, Rhenn’s solutions are “clever” only in the sense that they apply massive power/political pressure to key political/military/social vulnerabilities of his opponents.  Nothing less will do the job.

I’m not saying that “clever” doesn’t work in some situations, because it does, but those situations are almost always those where the objectives are limited and the stakes are not nearly so high.  That makes “clever” far more suited to mysteries, spy stories, and some thrillers than to military situations where real or perceived national interests or survival are at stake.