Ideals and Reality

One of the great advantages of writing science fiction is that I can create a society from relatively whole cloth and try to make it real to my readers, but there are certainly dangers in making that effort.  If you don’t understand some of the basics of societies – such as economics, trade, politics, the role of beliefs, etc. – you may still have a wonderful story, but one that many readers will not finish, or, if they do, they’ll end up saying that the society or culture really wouldn’t work.  Most professional writers understand that, but a number of them fail to ask another question: How did the society/culture get that way?  This was a point brought up by another writer at a recent conference I attended [LTUE], who made the comment about a well-known best-selling book, “The society would work, once it reached this point, but I can’t see how it ever got there, given human nature.”

The reason I bring this up is twofold.  The first is to point out a few things to aspiring writers: (1) gross errors in world-building can hurt, and (2) as the example suggests, so long as the society seems to work, even if there’s no way it could have gotten there, the flaw probably won’t hurt your sales.  The second is to suggest that, even in our world, political ideologues don’t seem to understand that, no matter how good an idea or principle is, you have to have a way, technically and politically, to get there.

I often get comments on various blogs suggesting idealistic solutions to various problems or difficulties we face today.  Many of these comments suggest “whole-cloth” solutions, whether it be a total free-market system or the replacement of the entire income tax system with a value-added tax, or…  There have been a substantial number of these idealistic solutions over the years, but the difficulty all of them share is… there’s no practical way to get there from where we are now, except via catastrophe.  History suggests, rather strongly, that civilizations either make gradual changes or ossify and collapse… or sometimes just implode into revolution or chaos.

What that means is, for example, that short of a civil war, a takeover by a dictator, or the complete and total meltdown of the banking and economic system, we are not going to see the total abolition of the welfare system as now practiced in the United States and its replacement by a totally new system.  Why not?  Because there’s no way to get there from where we are now; too many people will oppose such radical change – unless our system collapses totally.  Even the threat of total collapse won’t do it.

The same thing appears to be true of dealing with global warming.  Until a few island chains cease to exist, until Miami and New Orleans are drowned, until New York City suffers such a storm surge from a hurricane or nor’easter that all the subways are flooded and inoperative and the east coast is blacked out for weeks, there won’t be the economic or political support for meaningful measures… and by the time there is, the problem will be so big that no amount of resources will be able to save large sections of the planet where literally hundreds of millions of people live… and given who lives where, it appears likely that a great number of those who oppose gradual but meaningful change are going to be hit the hardest – along with a lot of those who would like change but don’t have the power to effect it.

In the end, while ideals can prevail, they have to change the underlying political or social conditions first, but when ideals conflict with physical reality, reality wins.

The Cost of Principles – To Others

At the moment, there are a number of court cases dealing with the conflict between “religious freedom” and statutory law. The core issue in many of them is whether various corporations or organizations should be required under law to provide medical services, primarily those involving contraception and abortion, to employees when those services are against the deeply held beliefs of the corporate/organization owners.

As I see it, there are three fundamental problems with the assertion that withholding such services from health care plans is an exercise of religious freedom, and that compelling the provision of those services is a violation of that freedom.  The first problem is the definition of “freedom of religion.”  Providing coverage to pay for such services neither obligates the provider to endorse those services nor requires anyone to use them.  Employees are free to exercise their “religious” rights either to use or not to use those services.  On the other hand, failure to provide such coverage requires employees who wish or need those services to pay for them or do without.  Therefore, allowing an exemption to such employers effectively allows the employing organization to impose its beliefs on all employees… and imposes an additional burden on those employees who do not wish to follow those beliefs.  This part of the issue has been raised and will doubtless be decided by the courts in some fashion or another, sooner or later.

The second aspect of the problem, however, doesn’t seem to have received much attention, and that’s the full scope of the economic discrimination that the exercise of such “religious freedom” can create.  If Corporation A does not provide certain medical services, for whatever reason, the likelihood is that its healthcare costs will be lower than those of Corporation B, which does.  In addition, the costs of those services, when used, must be absorbed by the employees of Corporation A.  Thus, Corporation A gains a competitive advantage while its employees are at a disadvantage.  Given that jobs remain hard to get, it’s also unlikely that many, if any, of the employees of Corporation A will depart over the additional costs they will incur.  Thus, the exercise of “religious freedom” also results in corporate economic gain while reducing the available income of the employees who need the uncovered medical services.
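To make the arithmetic of that cost shift concrete, here is a minimal sketch in Python, with entirely hypothetical figures – none of the numbers below come from any actual company or case:

    # Hypothetical illustration of the cost shift; every figure is invented.
    EMPLOYEES = 1000
    PREMIUM_WITH_COVERAGE = 6500     # yearly premium per employee, Corporation B
    PREMIUM_WITHOUT_COVERAGE = 6200  # yearly premium per employee, Corporation A
    OUT_OF_POCKET = 600              # yearly cost to an employee who needs the
                                     # excluded services and must self-pay
    SHARE_WHO_NEED = 0.30            # assumed fraction of employees affected

    corp_a_savings = EMPLOYEES * (PREMIUM_WITH_COVERAGE - PREMIUM_WITHOUT_COVERAGE)
    employee_burden = round(EMPLOYEES * SHARE_WHO_NEED * OUT_OF_POCKET)

    print(f"Corporation A saves ${corp_a_savings:,} a year")    # $300,000
    print(f"Its employees absorb ${employee_burden:,} a year")  # $180,000

The sketch only restates the argument above in numbers: the employer’s savings and the employees’ new out-of-pocket burden are two sides of the same transfer.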

The third aspect of the problem is that, at least in the United States, we don’t allow religious laws or practices to supersede basic laws.  You can’t break speed limits under the cloak of “religious freedom.”  Nor can you pay employees below the minimum wage on the basis of their religion or the lack of it.  You cannot base differentials in pay on religious practices or preferences – and yet, in effect, that is what an exemption from health coverage requirements would allow.

My bottom line is simple.  You have the right to the expression of your religious beliefs, but only so long as what you practice doesn’t harm others or pick their pockets, especially under the guise of religious freedom.  Whether the courts’ decisions, whenever they come, will be anywhere close to this position is still an open question.

Writing Collaborations

The other day I received an email from a reader who expressed dissatisfaction with the collaborative efforts of several well-known writers and who wanted to know how I had resisted the trend of established writers entering into collaborations that produce weaker or less satisfying work.  While it’s an interesting inquiry, upon reflection it struck me as resembling a question along the lines of “How did you possibly escape beating your dog when all the other writers do once they get established?”

That’s not to say that collaborative efforts are always weaker or that they should be avoided. I’ve said on more than one occasion that collaboration ideally should only be attempted when the work is something that neither author could produce alone.  And sometimes, frankly, the collaboration is far better than either could accomplish alone, as in the case of the musical works of Gilbert and Sullivan.  [I’m not about to offer a public comparison in F&SF].

I’ve only done one collaboration, the ill-fated if well-reviewed Green Progression, with Bruce Scott Levinson, and that was a book that would have been difficult for me to write without his expertise in various areas.  It was also a relatively easy collaboration, because we were both working at the same Washington, D.C., consulting firm at the time.  The book is far, far better than its dismal sales would indicate, but it’s also an indication that, even if one of the authors is moderately well-known, an author’s name recognition doesn’t necessarily carry over to a collaboration in terms of sales.

Some “collaborations” also result from necessity.  The final books of The Wheel of Time necessitated what was essentially a collaboration between Robert Jordan, posthumously, and Brandon Sanderson.  Although Sanderson technically wrote more than 90% [if the numbers I’ve heard are correct] of the last three books, the groundwork had been laid by Jordan, and there was an outline, as well as some 40,000 words or more of Jordan’s prose, for Brandon to work with, which, in my mind at least, makes it a collaboration rather than a ghost-written conclusion.  Years ago, Piers Anthony did something similar with a book entitled Through the Ice, completing a work largely finished by a young author named Robert Kornwise, who suffered an untimely death.

In thinking about collaborations I’ve read and the books that I’ve kept, I surveyed my shelves and the volumes on my e-reader and realized that I’ve kept only one collaboration besides my own – at least among those I know to be collaborations, since I do know of a number of authors writing collaborations under a single pen name, and there may well be others of which I’m unaware.  While that can’t be mere chance, it does suggest that, for me, collaborations don’t have the feel or flavor of a single-author book.

In my own instance, part of the answer to why I don’t do collaborations anymore is simple.  I don’t feel either the desire or the need to, and I really enjoy working on my own ideas at my own pace, which might well be because I’m a type A control freak so far as my writing is concerned.

Slavery

Perhaps because of all the publicity over Twelve Years a Slave, or because it’s Black History Month, I’ve been thinking about slavery and a number of points that I seldom, if ever, see raised… and probably, by the time I’ve mentioned them, no one will be pleased.  But since no one else seems to be pointing them out, most likely because each one will offend someone deeply, someone really ought to… and I appear to be the only one foolish enough to do so.

The first point is that virtually every black person enslaved in Africa was originally captured and sold into slavery by other blacks… and that virtually every slave purchased or kept in slavery in the United States was purchased or owned by a white person, usually a white male. The institution of slavery would not have been possible without both groups. I’m not excusing anyone, just noting a fact that seems to be overlooked.

The second point is that slavery existed in what we today would call a “free market”; that is, there were originally no restrictions on the sale and purchase of slaves [not, at least, until the early nineteenth century, when Great Britain abolished the slave trade in 1807 and then slavery itself in 1833].  Slaves had no rights and no legal protections.  Sellers and buyers negotiated with complete freedom from outside interference.  In that sense, slavery was the logical extension of totally free markets, where even human beings could be bought and sold, and even killed, for whatever the market would bear.  So, all you free-market types, think about that when you preach about the need for “free” markets.

Third, given the diversity of the original slaves, who came from many different groups and tribes, those American blacks descended from slaves do not have a single “history/culture” predating the institution of slavery in the United States, except perhaps the shared misfortune of losing out in local African warfare, which resulted in their being enslaved in the first place.  Their shared “history” is that of slavery, which is a failed and despicable institution.  For this reason, I have to admit I frankly don’t understand the emphasis I see among many blacks from this background on finding their “culture,” because there isn’t a single one that all have in common prior to their ancestors landing in North America in a state of enslavement.  Add to that the fact that many of the truly great African cultures had collapsed well before the beginning of the American slave trade, and a search for “history” and culture becomes more like poor whites seeking a history in Greek mythology than a particularly fruitful or worthwhile effort.

Fourth, over the past centuries and even into the present, many of those who opposed rights for blacks, almost entirely those of Caucasian background, cited the need for racial purity or opposition to “mixed races.”  Come again?  DNA studies show that every racial group besides “pure” African blacks [and some recent DNA testing raises questions even there about interbreeding with yet another undiscovered human species/race] has DNA confirming that their ancestors interbred with Neanderthals and Denisovans, both of whom failed to survive.  That’s not exactly a hallmark of “purity”… or even of good judgment on the part of one’s distant ancestors.  Caucasians and Asians already had a mixed-blood background, even while some whites trumpeted their untainted blood.  So let go of the damned racial-purity argument.  All of us are mongrels in some way or another now.

Fifth, in the end, at some point, we have to acknowledge what was, ALL of what was… and then get on with improving the future, even as one group denies what was and another dwells on it excessively, because we can’t change what was, only what will be.

A New High?

According to The Economist, the United States has the highest rate of credit card fraud of any developed nation, a rate far, far higher than those of European Union nations, as well as far higher monetary losses.  This isn’t necessarily just because we have more credit card thieves, which we apparently do, but also because the United States has far more credit cards and, equally important, has lagged behind the E.U. in adopting the so-called “chip-and-PIN” credit card, which contains a microchip with security features.  The chip-and-PIN system effectively means that it is far more difficult to use a stolen card or card number.
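For readers curious why a chip makes a stolen number so much less useful, here is a toy sketch in Python – emphatically not the real EMV protocol, just an illustration of the principle that the chip authenticates each transaction with a secret that never leaves the card:

    import hashlib
    import hmac

    CARD_KEY = b"secret-key-inside-the-chip"  # stored in the chip, never exposed

    def chip_code(counter: int, amount_cents: int) -> str:
        # Simulates the one-time cryptogram a chip computes for each purchase.
        message = f"{counter}:{amount_cents}".encode()
        return hmac.new(CARD_KEY, message, hashlib.sha256).hexdigest()[:16]

    # A magnetic stripe exposes the same static number on every swipe, so a
    # thief who copies it once can replay it indefinitely.
    stripe_number = "4111111111111111"

    # A chip code is tied to a transaction counter the issuer also tracks,
    # so a recorded code is useless for the next purchase.
    print(chip_code(1, 5000))  # differs from...
    print(chip_code(2, 5000))  # ...this one: intercepted codes are single-use

None of this is the actual card protocol, but it captures the difference: a copied stripe number is reusable, while a chip cryptogram is not.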

American business has lagged in employing this system, although Target, the latest and largest victim of hacking and the theft of tens of millions of credit card numbers and user names, is now looking into developing and issuing credit cards with greater security features. The reason for the delay? The new systems will cost more to install and implement, because new card readers will be required.

Or, in other words, until the losses make it clear that a more secure system is “cost-effective” for them, regardless of the costs and hassles to consumers, businesses really don’t want to adopt one.  These are also the men and women who, not unanimously but overwhelmingly, try every method they can to reduce their costs.  They beg their customers to “go paperless,” claiming that doing so will benefit consumers, while their real reason is to reduce their own paperwork burden.  They’re the same retail executives who employ part-timers so that they won’t have to pay health benefits, who cut middle management and overwork the survivors, and who outsource overseas anything they can to reduce costs, disregarding what it does to both their employees and the economy as a whole.

Yet when it comes to reducing the burden of fraud on their consumers, most are notably silent, or even oppose any improvement because it will increase their short-term costs. Just as cleaner environmental production and distribution systems might do… or health insurance or living wages. Fancy that.