No Objective Truth?

The other day, a commenter on a blog asked if I wanted to write about the growth of a belief structure in American society that essentially denies the existence of “objective truth.”  Actually, I’ve written about aspects of this before, particularly as a subset of the selective use of information to reinforce existing confirmation bias.  But I find the growth of the feeling that there is no objective truth, or that scientifically confirmed “facts” remain a matter of opinion – and that everyone’s opinion is equal – to be a disturbing but almost inevitable outcome of the fragmentation of the media along lines corresponding to existing belief structures, as well as of the increasing role that the internet and social media play in the day-to-day life of most people.

The basic ground rule of any successful marketing effort is to get the target audience to identify with your product.  Usually that’s accomplished by positioning the product to appeal to existing biases and beliefs.  Information – which, outside of stringently peer-reviewed scientific journals, is largely no longer news or factual/objective reporting – apparently need have no more than a tangential relationship to facts or objectivity; instead, its content is shaped to appeal to its desired target audience.  Now… this is scarcely new.  Modern yellow journalism dates back more than a century, but facts were not quite so routinely ignored or distorted in quite so many ways, because the economics of journalistic production limited the number of perspectives that could be specifically pandered to, because the law did have an effect insofar as actual facts were concerned, and because, until comparatively recently, there remained a certain basic integrity among at least some media outlets.

One of the mixed blessings of technology is that millions and millions of people in every high-tech society have access to, and the ability to use, comparatively sophisticated media techniques (particularly compared to those available even a generation ago) to spread their views and versions of the “facts” in ways that can be appealing and often compelling.  In turn, the sheer multiplicity of ways in which verified facts can now be presented and distorted creates the impression that such facts are not fixed, and the next step for many people is the belief that facts are only a matter of opinion… and since everyone’s opinion is valid… why then, “my” view of which fact or interpretation is correct, or can be ignored, is just as good as anyone else’s.

This “personalization of truth” leads to some rather amazing results.  For example, even as the scientific consensus on global warming has become almost unanimous in finding, first, that such warming is occurring and, second, that there is a strong anthropogenic component to it, the share of popular opinion agreeing with those findings has dropped almost twenty percent.

Unfortunately, occurrences such as global warming and mechanisms such as oncoming vehicles combined with high-volume earbuds, famines and political unrest, viruses and bacteria, and high-speed collisions are all present in our world.  Consequently, rising sea levels, violent weather changes, and fatalities due to disease among the unvaccinated, to starvation, or to the failure to wear seatbelts will all take their toll, regardless of the beliefs of those who ignore the facts.

Belief is not a viable defense or preventative measure against climate change, biology, oncoming heavy objects, or other objective impingements upon subjective solipsistic unreality… no matter what or how you believe.


“Downers,” Stereotypes, and Literary Quality

Yesterday, when I was flying back from the World Fantasy Convention in San Diego, I read through two “best-of-the-year” anthologies… and landed in Cedar City thoroughly depressed… somewhat disgusted… and more than a little irritated.  No… I wasn’t irritated that the anthologists hadn’t picked one of my infrequent short stories.  In the first place, I know that it’s unlikely anything I write will find favor with most anthologists [although there have been exceptions].  In the second place, I hadn’t published any short fiction for the year in question.

My anger, irritation, and depression came from the same root cause.  Out of something like fifty stories, all but perhaps ten were downers.  Of the ten that weren’t, eight were neutral or bittersweet, and only two could be called upbeat.  Now… I don’t have anything against downbeat or depressing stories.  I don’t even have anything against them being singled out as good stories.  Certainly they were all at least better than competently written, and some were indeed superbly written.  And I definitely don’t think a story has to be upbeat to be great or “best-of-the-year,” but after many, many years of writing professionally, and even more of reading all manner of books and stories, ranging from “genre” fiction to the acclaimed classics, it’s clear to me that the excessive citation of depressing stories as classics and exemplars of literary excellence is hardly a mark of intellectual distinction, let alone of impartial judgment.

All this, of course, reinforces my feelings about those critics and anthologists who seem to dismiss anything with any upbeat feel or positive aspects… or anything that isn’t “literary mainstream.”

The latest New York Times book section contains a review with an opening along the lines of “A literary novelist writing a genre novel is like an intellectual dating a porn star.”  Supposedly, reviewers who write about books should be able to transcend stereotypes, not reinforce them, but then, snobbery is often based on the enshrinement of stereotypes about whatever is contrary to the snob’s world view, and all too many critics, reviewers, and even some anthologists are little more than snobs.

A good story is a good story, and a bad story is a bad story, whether it’s “literary” or genre.

More Wall Street Idiocy

I recently discovered that the cable company Hibernia Atlantic is spending $300 million to construct and lay a new transatlantic cable between London and New York [New Scientist, 1 October].  Why?  To cut 6 milliseconds from the current 65 millisecond transit time, in order to get more investment trading firms to use their cable.  For 6 milliseconds?  That’s apparently a comparative age when computers can execute millions of instructions in a millisecond, and London traders must think that those 6 milliseconds will make a significant difference in the prices paid and/or received.

And they may well be right.  Along the same lines, a broker acquaintance of mine pointed out that New York City real estate closest to the New York Stock Exchange computers commands exorbitant rents and prices for exactly the same reason… but I find the whole idea totally appalling – not so much the additional data cable itself as the rationale for its use.  Human beings can’t process much of anything in 6 milliseconds, so the speed advantage is useful only to computers running trading algorithms.  As I’ve noted earlier, the use of programmed and computer trading has shifted the rationale behind trading to almost total reliance on technical patterns, which, in turn, has led to increased volatility in trading.  Faster algorithmic trading can only increase that volatility and, regardless of those who deny it, can also only increase the possibility of yet another “flash crash” like that of May 2010.  Even if the new “circuit-breakers” cut in and work as designed, the results will still disrupt trading significantly and likely penalize the minority of traders without superspeed computers.
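To put those 6 milliseconds in perspective, here is a minimal back-of-the-envelope sketch in Python, using only the figures cited above (the 65 millisecond transit time and the 6 millisecond savings); the human reaction time is a rough, commonly cited approximation added for scale, and nothing here models an actual trading system.

```python
# Back-of-the-envelope arithmetic on the Hibernia Atlantic latency claim.
# Figures are those cited above; the human reaction time is a rough,
# commonly used approximation included for illustration only.

old_latency_ms = 65.0      # reported London-New York transit time
savings_ms = 6.0           # claimed improvement from the new cable
new_latency_ms = old_latency_ms - savings_ms

human_reaction_ms = 200.0  # very rough human reaction time, for scale

print(f"New transit time: {new_latency_ms:.0f} ms "
      f"(a {savings_ms / old_latency_ms:.1%} reduction)")
print(f"A human reaction (~{human_reaction_ms:.0f} ms) takes roughly "
      f"{human_reaction_ms / savings_ms:.0f}x longer than the entire "
      f"6 ms advantage, which is why only trading algorithms can use it.")
```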

Philosophically speaking, the support for building such a cable also reinforces the existing and continually growing reliance on maximizing short-term profits and minimizing longer-term concerns, as if our society weren’t already excessively focused on the short term.  You might even call it the institutionalization of business thrill-seeking and attention-deficit disorder.  This millisecond counts; what happens next year isn’t my concern.  Let my kids or grandkids worry about what happens in ten or twenty years.

And one of the problems is that this culture is so institutionalized that any executive who questions it essentially destroys his or her future.  All you have to do is look at what happened to those who raised such questions before the last meltdown.

Yes, the same geniuses who pioneered such great innovations as no-credentials-check mortgages, misleadingly “guaranteed” securitized mortgages, banking deregulation, fees-for-everything banking, and million-dollar bonuses for crashing the economy are now going to spend mere hundreds of millions to find another way to take advantage of their competitors… without a single thought about the implications and ramifications.

Isn’t the free market wonderful?


Why Don’t the Banks Get It?

Despite the various “Occupy Wall Street” and other grass-roots movements around the country, banks, bankers, and investment bankers really don’t seem to get it.  Oh, they understand that people are unhappy, but, from what I can tell, they don’t seem terribly willing to accept their own role in creating this unhappiness.

It certainly didn’t help that all the large banks ducked out of the government TARP program as soon as possible so that they wouldn’t be subject to restrictions on salaries and bonuses for top executives – bonuses that often exceeded a million dollars an executive and were sometimes far, far greater.  They all insist, usually off the record, that they feared “losing” top talent, but where would that talent go?  To other banks?

Then, after losing hundreds of billions of dollars on essentially fraudulently rated securitized mortgage assets, they took hundreds of billions of dollars in federal money, yet apparently lent very little of it, especially not to small businesses, which are traditionally the largest creators of new jobs in the country.  At the same time, they continue to foreclose on real estate on a wholesale basis, even when ordered not to by judges, states, and regulators, and even in cases where refinancing was feasible because the homeowner was still employed.

And then… there’s the entire question of why the banks are having financial difficulties.  I’m an economist by training, and I have problems understanding this.  They’re getting money cheaply, in some cases, almost for free, because what they pay depositors is generally less than one percent, and they can obtain federal funds at an even lower rate.

Mortgages are running 4-6%, and interest on credit card debt is in the 20% range and often in excess of 25%.  Yet this vast differential between the cost of obtaining the product and the return on it apparently isn’t sufficient?
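A minimal sketch of that differential, using only the rates quoted above (the figures are illustrative midpoints, not any particular bank’s actual books), shows just how wide the spread is:

```python
# Rough spread arithmetic on $100,000 of deposits, using the rates
# quoted above; the exact figures are illustrative, not a bank's books.

deposits = 100_000
cost_of_funds = 0.01        # roughly 1% or less paid to depositors
mortgage_rate = 0.05        # midpoint of the 4-6% range
credit_card_rate = 0.22     # "in the 20% range and often in excess of 25%"

annual_cost = deposits * cost_of_funds
mortgage_spread = deposits * mortgage_rate - annual_cost
card_spread = deposits * credit_card_rate - annual_cost

print(f"Annual cost of funds:            ${annual_cost:,.0f}")
print(f"Spread if lent as mortgages:     ${mortgage_spread:,.0f}")
print(f"Spread if lent on credit cards:  ${card_spread:,.0f}")
```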

And that brings us to the latest bank fiasco.  For years, the banks, all of them, have been urging customers to “go paperless”:  check your statement electronically; don’t write checks; use your debit card instead.  Now, after the federal government has tried to crack down on excessive fees for late payments, overdrafts, and the like, several of the largest banks are floating the idea of a monthly fee for debit card use.  Wait a second!  Wasn’t this the banks’ idea in the first place?  Wasn’t it supposed to reduce costs?  So why are they going to charge depositors more to use their own money?

And the banks still don’t get it?  With all those brilliant, highly compensated executives?

Or don’t they care?

What Is a Cult?

Recently, members of the Christian right have apparently been suggesting that presidential candidate Mitt Romney is not a “Christian,” but a member of a “cult.”  As a resident of Utah for nearly twenty years, and as a not-very-active Episcopalian who still resents the revision of the King James version of the Bible and the Book of Common Prayer, I find the raising of this issue more than a little disturbing, not so much because of the question of what Mr. Romney believes as because of the implication that his beliefs are any stranger or weirder than the beliefs of those who raised the issue.

Interestingly enough, the top dictionary definitions of the word “cult” are “a system of religious rites and observances” and “zealous devotion to a person, ideal, or thing.”  Over the past half-century or so, however, the term has come to be used in a more pejorative sense, referring to a group whose beliefs or practices are considered abnormal or bizarre.  Some sociologists draw the distinction that sects, such as Baptists, Lutherans, Anglicans, Catholics, and the like, are products of religious schism and therefore arose from and maintain a continuity with traditional beliefs and practices, while cults arise spontaneously around novel beliefs and practices.  Others define a cult as an ideological organization held together by charismatic relationships and a demand for total commitment to the group and its practices.

Mitt Romney is a practicing Mormon, a member of the Church of Jesus Christ of Latter Day Saints, but does that make him a member of a cult?  Since the LDS faith specifically believes in Jesus Christ, follows many “Christian” practices such as baptism and belief in an omnipotent God and his son Jesus Christ, and rejected the practice of polygamy a century ago, can it be said to be a totally “novel” faith, or any more “bizarre” or “abnormal” than any number of other so-called Christian faiths?  Mormonism does demand a high degree of commitment to the group and its practices, but is that degree of commitment any greater than that required by any number of so-called evangelical but clearly accepted Christian sects?

While I’m certainly not a supporter of excessive religious beliefs of any sort, as shown now and again in some of my work, and especially oppose the incorporation of religious beliefs into the structure of secular government, I find it rather amazing that supporters who come from the more radical and even “bizarre” [in my opinion] side of Christianity are raising this question.  What troubles me most is the implication that fundamentalist Christianity is somehow the norm, and that Mormonism, which, whether one likes it or not, is clearly an offshoot of Christianity, is somehow stranger or more cultlike than the beliefs of the evangelicals who are raising the question.

This isn’t the first time this kind of question has been raised, since opponents of John F. Kennedy questioned whether the United States should have a Catholic president, with the clear implication that Catholicism was un-American, and it won’t be the last time.  The fact that the question has been raised at all in this fashion makes me want to propose a few counter-questions.

Why are those politicians who endorse and are supported by believers in fundamentalist Christianity not also considered members of cults?

Are we electing a president to solve pressing national problems or one to follow a specific religious agenda?

Does rigid adherence to a religious belief structure make a more effective president or a less effective one?  What does history show on this score?

And… for the record, I’m not exactly pleased with any of the candidates so far.