Archive for September, 2012

The “Greed is Good,” “I Deserve It,” and “Someone Else Should Pay for It Society”

Several days ago, poor Mitt Romney committed the unforgivable. He said something so obvious, so accurate, and so to the point that people, especially Obama and the leftists, are jumping all over him. Now… as must be clear to most of my readers, I’m generally appalled at the Republican positions on many issues, but what I find ironic is when someone whose positions I dislike says something that is absolutely obvious… and gets roundly criticized. I fully admit that I supported Gerald Ford over Jimmy Carter, but I applauded when Carter made the obvious statement that “life is not fair,” for which the media and nearly everyone else condemned him.

What was it that Mitt said? In effect, he said that he was never going to reach 47% of the population because they were getting benefits from the government for which they weren’t paying. And he’s absolutely right. When only 53% of Americans pay federal income taxes, the 47% who don’t are getting all the federal programs paid for by those taxes for nothing. Should they get such benefits? Of course, some certainly should – such as the truly deserving poor, hungry children, and others who have a true need.

Some of the liberals have made the point that many of the “47%” do pay taxes, such as Social Security and Medicare payroll taxes, sales taxes, and property taxes, and they’re right. Many do pay those taxes.  But what the left wing ignores is that those taxes do not fund most government programs, and for all the hullabaloo about deficits, Medicare and Social Security are not yet contributing to those deficits.  The deficits are caused by outlays in programs funded by federal income taxes.

But the larger questions raised by Mitt’s offhand, if honest, comment go beyond that. As some courageous Republicans and many Democrats have noted, Americans now pay the lowest percentage of their income in federal taxes in more than 70 years… and yet the Republicans, the Tea Party, and the Libertarians are all demanding that taxes be lowered even more. Given the current deficit, this isn’t possible without eliminating not only a wide range of existing federal programs but also ALL tax deductions, including the cherished mortgage interest deduction, the earned income tax credit, credits for college and education expenses, and certainly various subsidies and business tax deductions.

All that isn’t going to happen, not in the current political climate of “I deserve it and someone else should pay for it.” Why not? Because it would destroy too many people. For example, although “only” about a quarter of U.S. homeowner mortgages are technically underwater [owing more than the house is worth], close to 50% are realistically underwater and unsalable in the current market because of the additional costs of selling a house and moving. If Congress were to remove the mortgage interest tax break, the situation would become even worse, because the vast majority of homeowners would have even less income with which to make mortgage payments. Similar problems would arise with the elimination of the earned income credit and other deductions… and what politician is really going to have the nerve to eliminate deductions that will make things worse for constituents… and for the immediate economy, regardless of the possible long-term impact?

How did it come to this? That’s a chicken-and-egg question, but one thing is very clear. Americans, both rich and poor, have a lot more “things” than they did sixty years ago. The average new house being built is almost three times the size of those built in 1950, even while family size has declined since the 1960s. In 1950, the average family had one car; by 1995, the average family had more than two cars… again with fewer people in the family. Almost every statistic dealing with personal consumption – except for food consumed at home – shows a significant real increase over the last 60 years, at all levels of income. Likewise, government programs have grown enormously since 1950. The one thing that hasn’t kept pace on a per capita or per family basis is the amount of federal tax revenue, and, as I’ve pointed out time and time again, while the taxes of the wealthiest individuals – and to a lesser degree those of the working poor – have decreased proportionately far more than those of other groups, even massive increases in taxes on the wealthy won’t close that revenue gap. And remember that “wealth” isn’t the same as income, so under our current income tax system, income taxes can’t reach the huge amount of assets already held by the extraordinarily wealthy.

In essence, Americans as a whole have come to expect a combination of personal and government benefits greater than what we are willing to pay for, and many of those increased personal benefits have been financed through deficit spending, which leaves more money in our pockets and sends less to government. Even though most people will protest violently that this isn’t so… it is, and simple arithmetic proves it time after time.

So… whether I like Mitt Romney and his proposed policies or not, he was right about who’s paying for what [even if I disagree, which I do, with how much who should pay]… and especially who’s not.

But then, regardless of political party, no one likes embarrassing accurate facts.

The Hidden Costs of “Instant”

What just about everyone loves about the Internet is its speed and convenience, and what’s not to like about instant messaging, near-instant email, Tweets and Twitter, and instant on-line shopping? Yet there are high and hidden costs… far greater than most people realize or consider – and a number of those costs were detailed in a front-page story in The New York Times on September 23rd, which outlined the results of a year-long study.

For example, internet data centers worldwide – now numbering more than three million – “use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants.” The United States alone accounts for about thirty percent of that. One of the most staggering figures revealed by the study was that actual computer/server computations and data processing took only six to twelve percent of that electrical load. The rest was merely to keep all systems “on alert” to handle intermittent peak loads and information surges.
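
To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python, using only the round numbers quoted above from the Times study; it simply restates the post’s arithmetic, not any independent measurement:

```python
# Rough arithmetic on the data-center figures quoted above.
# All inputs are the round numbers from the Times study as cited here.

total_gw = 30.0                       # ~30 billion watts worldwide
us_share = 0.30                       # U.S. accounts for roughly 30%
useful_low, useful_high = 0.06, 0.12  # 6-12% goes to actual computation

print(f"U.S. data-center load: {total_gw * us_share:.1f} GW")
print(f"Actual computation:    {total_gw * useful_low:.1f} to {total_gw * useful_high:.1f} GW")
print(f"'On alert' overhead:   {total_gw * (1 - useful_high):.1f} to {total_gw * (1 - useful_low):.1f} GW")
```

In other words, if those figures hold, somewhere between 26 and 28 of those 30 “nuclear power plants” worth of electricity goes to keeping servers on alert rather than to computing anything.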

It’s not that the technology to make data centers more efficient doesn’t exist. It does. The National Energy Research Scientific Computing Center has refined its systems to achieve more than 90% efficiency, and a company called Power Assure markets a technology that enables commercial data centers to safely power down servers in off-peak periods. Yet Silicon Valley Power – the utility that serves Santa Clara and Silicon Valley – has not been able to entice a single data center to adopt such energy-saving programs.

Not only is the internet energy-wasteful, but these data centers are significant sources of air pollution. In just the states of Virginia and Illinois, more than a dozen data centers have been cited for violations of air quality standards. In northern Virginia alone, Amazon – one of the larger operators of data centers – was cited for 24 violations over three years, including running diesel generators without a permit, and was fined over a quarter of a million dollars.

So why is there so much waste and unnecessary pollution caused by internet data centers?

One reason is that companies that live by the “instant” fear that any failure to provide instant access will have an adverse impact on sales. A corollary is that data center managers aren’t rewarded for cutting the electric bill or reducing air pollution. They’re rewarded for having data centers on-line and able to handle anything 99.999% of the time. That’s another reason why Northern Virginia’s data centers together have back-up diesel generators with a combined output almost equal to that of a standard nuclear power plant… and with air emissions far greater than those of most conventional power plants.

Another aspect of the problem, and one not touched by the Times’ investigation, is that this increasing electrical usage created by the internet puts additional strain on the national and regional power grids, an infrastructure that is already overstrained in many areas… and this is getting worse. For example, data centers in Northern Virginia now draw over 500 million watts of electricity, and plans on the drawing boards suggest that load will double in five years.
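
As a rough sketch, and assuming steady year-over-year growth, a doubling in five years works out to roughly fifteen percent annual growth; the 500-megawatt starting point below is simply the Northern Virginia figure cited above:

```python
# A doubling in five years implies a compound annual growth rate of
# 2**(1/5) - 1, or about 14.9% per year, assuming steady growth.

load_mw = 500.0                  # Northern Virginia data-center load today
annual_growth = 2 ** (1 / 5) - 1

for year in range(1, 6):
    load_mw *= 1 + annual_growth
    print(f"Year {year}: {load_mw:,.0f} MW")  # Year 5 prints 1,000 MW
```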

Instant access… it’s wonderful… but can we really keep this up?

The “Cheapster” Approach

The other day, the local newspaper had a front-page story announcing a new local, college-based reality television show entitled “Cheapster.” The idea behind the show is for college students to come up with innovative ways to show their frugality… and the winner will receive $10,000.

While I’m certainly for wise spending, I find the whole concept of “Cheapster” appalling, especially the title. Everywhere I look, there’s another facet of the “cheaper is better” belief, from Amazon and WalMart to so many “sales” that a recent survey revealed many consumers won’t buy anything unless it’s on sale. Part of this emphasis on and concern about price is doubtless a result of the long recession and the slow rate of recovery, especially in better-paying jobs, but I think the emphasis goes beyond that… and the implications certainly do.

When we as a society emphasize “cheap,” we’re also inducing, if not forcing, manufacturers and retailers to produce goods in the cheapest way possible, even if that means outsourcing production to third-world sweatshops and child labor. It’s also an inducement to deception, as I’ve pointed out in the case of the book industry, where the “cheapest” prices for bestsellers don’t necessarily translate into overall lower prices… and where the reduction in book outlets where people can browse has greatly contributed to a decline [in real-dollar terms] in sales, and certainly in the diversity of books offered by publishing firms, thereby effectively reducing choice. Yes, I know that self-publishing ebooks has taken off, but most people don’t have the time to peruse all those titles… and that’s another facet of realistically reduced choice.

Then there’s the telecommunications industry, where, despite all the claims to the contrary, people overall are spending far more on communications than ever before, and where “basic” service is more expensive now, even for cellphones, than it was in the time of the great Bell monopoly. This tends to be forgotten because long-distance calling is “cheap,” if not close to “free.”

“Cheap” airline fares aren’t really cheap, not with all the extra charges, and travelers pay more in the way of inconvenience because cabins are jammed with carry-on luggage brought aboard to avoid checked-bag fees. That means flights take longer, because it takes longer to load the aircraft… and that, in turn, increases operating costs and overall travel time.

Beyond the myriad deceptions of cheapness lies a larger question: Whatever happened to other virtues, such as quality and reliability? And what happened to the idea that price reflects value?

But does all that matter, so long as it’s “cheap”?

The New Monopolists

A week or so ago, a U.S. District Court approved the e-book settlement between Hachette, Simon & Schuster, and HarperCollins and the Department of Justice, a settlement that opens the way for Amazon to sell ebooks from those publishers at any price Amazon chooses. The Justice Department, of course, hails the settlement as a groundbreaking and anti-monopolistic agreement that will provide cheaper books to consumers. In thinking this all over, I realized that the entire structure and operation of monopoly has changed in the last twenty years, while the definition has not, so much so that the Sherman Anti-Trust Act, designed to prevent the harmful effects of monopoly, has, in the case of the publishing settlement, become an instrument to support monopoly – and no one seems to realize this. How did this happen?

A century ago, the operation of a monopoly was clearly defined.  A company, such as Standard Oil, bought up all the competition, or the majority of it, sometimes used low prices as a temporary measure to bankrupt competitors or drive them out, then took control of the market and raised prices to make a greater profit.  Today, companies like WalMart and Amazon have developed a very different monopolistic approach. They begin with selectively low prices and equally low wages for employees.  The low prices of highly visible selected goods attract more customers, and few people notice that other goods aren’t any cheaper, and in some cases, are even more expensive. WalMart gets around this by allowing customers to show competitors’ prices and then matching those prices… but most customers can’t and won’t do that for the majority of goods.  In the case of Amazon, Jeff Bezos lost money for years building that bookselling customer base.

Then, once the new monopolists have that customer base, they exert pressure on suppliers to provide goods for lower and lower prices.  Both WalMart and Amazon are excellent at this.  Amazon provides its marketplace for online retailers, then scans their sales, discovers what items are selling well and in large quantities, and either pressures the supplier to sell to Amazon directly for less, thus undercutting the Amazon affiliate, or finds another supplier to do so more cheaply. Recently, reports have surfaced that Amazon is using similar tactics with small and independent publishers, who don’t have the clout or the nerve that some of the larger publishers have.  Thus, in the end, the new monopolists aren’t gouging the consumer, but using excessive market power to gouge the suppliers and their own employees.  All the while they can claim that they’re not monopolists because people are getting goods for lower prices.

What the Department of Justice and the legal scholars seem to be overlooking is that such behavior is still restraint of trade – it’s just restraint of trade imposed on suppliers and exercised through low employee wages, rather than price-fixing aimed at the consumer… and it has a definite negative impact on both local economies and the national economy, most obviously in that lower-paid employees can’t live as well, don’t buy as many other goods, and pay less in taxes.

In fact, Jeff Bezos even declared that his goal was to destroy the traditional paper-based publishing industry and take over the information marketplace. If that isn’t a declared intent to monopolize an industry, I don’t know what is. The new monopoly structure also may well be a far more deadly form of monopoly than the old one because it impacts the entire supply chain and effectively reduces incomes and the standard of living of tens of millions of Americans, both directly and indirectly. As I’ve noted before, already the publishing marketplace has changed, in that there’s less diversity in what’s published by major publishers, and more and more former midlist authors are having trouble getting published… or have already been dropped.

While Borders Books had its management problems, the final straw that pushed the company out of business was likely Amazon’s predatory pricing. In the years before its final collapse, Borders’ annual sales were around $4 billion, and it operated close to 400 brick-and-mortar stores with approximately 11,000 employees. Those sales and payrolls, not to mention the store rental costs, likely generated a positive economic impact of anywhere from $40 to $70 billion. While some of those sales have gone to Barnes & Noble or Amazon, most have not, and the operating expenses and payrolls once paid by Borders are almost entirely an economic loss, since Amazon and Barnes & Noble didn’t add many new employees or, in the case of B&N, open new stores. Books-A-Million did open some new stores, but only a handful.

Amazon’s policies have also resulted in lost revenue for independent bookstores, as well as the closure of a number of stores of smaller regional bookstore chains, just as WalMart’s policies have adversely affected local and regional retailers. Yet the Department of Justice claims a victory in a settlement that reinforces the practices of the new monopolists, where, apparently, the only determining factor is how cheaply consumers can obtain a carefully selected range of ebooks.

All hail the monopolists of “cheap” and “cheaper.”

The Danger of Blind Faith

A film that most Americans had never heard of or considered appears on YouTube, and anti-American riots break out in Egypt and Libya, during which four Americans are killed, including the U.S. ambassador to Libya. While recent information suggests that the demonstration was planned as a cover for the assassination, the fact remains that there was a demonstration in Egypt, that the Libyan plotters had no trouble rounding up plenty of outraged Muslims, and that additional protests have since occurred in Malaysia, Bangladesh, and Yemen. Some might dismiss this as a one-time occurrence. Unfortunately, it’s not. Several years ago, a Danish newspaper published satirical cartoons of Mohammed, and that caused violence and uproar. When the novelist Salman Rushdie published The Satanic Verses, the Ayatollah Khomeini of Iran issued a fatwa calling on all good Muslims to kill Rushdie and his publishers, forcing Rushdie into seclusion for years.

Some people might declare that things are different in the United States… and they are, in the sense that our population doesn’t have so many “true believers” who are willing to kill those who offend their religious beliefs or so-called religious sensibilities, but we do have people like that, just not so many.  After all, what is the difference between fanatical anti-abortionists who kill doctors who perform legal abortions and fanatical believers in Islam who kill anyone who goes against what they believe? Is there that much difference in principle between Muslims who want Islamic law to replace secular law and fundamentalist Christians who want secular law to reflect their particular beliefs?  While there’s currently a difference in degree, five hundred years ago there certainly wasn’t even that.

What’s overlooked in all of the conflict between religious beliefs and secular law is the fundamental difference that, for the most part, secular law is concerned with punishing acts that inflict physical or financial harm on others, in hopes of deterring such actions, while religious law is aimed at requiring a specific code of conduct based on the particular practices of a single belief. The entire history of the evolution of law reflects a struggle between blind adherence to a narrow set of beliefs and the effort to remove the codes that govern human behavior from any one set of beliefs and to place law on a secular footing, reflecting the basics common to all beliefs. Historically, most religious authorities have resisted this change, not surprisingly, because it reduced their power and influence.

Thus, cartoons of Mohammed or satirical movies do not cause physical harm, but they are seen to threaten the belief structure. Allowing women full control of their bodies likewise threatens a belief structure that places the life or potential life of an unborn child above that of the mother. When blind faith rules supreme and becomes the law of any land, no questioning of that law is acceptable.

When a specific belief structure dominates a culture or subculture, the lack of questioning tends to permeate all aspects of that society.  To me, it’s absolutely no surprise that there’s a higher rate of denial of scientific findings, such as evolution and global warming, among Christian fundamentalists because true science is based on questioning and true belief is based on suppressing anything that raises questions… and such societal suppression is the greatest danger of all from blind faith, whether that faith is Islam, LDS, Christianity, or even a “political” faith, such as Fascism, Nazism, or Communism.

Success Or Failure?

Some twenty years ago, at the Republican convention that nominated George H.W. Bush for a second term, Pat Buchanan made a speech essentially claiming that what he stood for was the beginning of a fight for the soul of the Republican Party. That struggle has persisted for twenty years, and now the Republican Party platform seems largely in conformity with what Buchanan outlined. Paradoxically, some opponents of Republican policies might claim that the platform proves the Party has no soul, but I don’t see anyone raising the larger question: Should a political party aim to have “a soul”?

Over the more than two centuries since the U.S. Constitution was adopted, there have been more than a few disputes and scores of court cases involving the respective roles of religion and government in American society, the idea of separation of church and state notwithstanding. Yet doesn’t anyone else find it strange that, in a society that theoretically does not want government dictating what its people should believe, and in a land created to avoid just that, one of the major political parties has been striving to find its soul, when the very idea of a soul is a highly religious concept?

Not only that, but the closer the Republican Party has come to adopting Buchanan’s positions, the more the partisans of this “soulful” party have attempted to force government to adhere to positions based on highly religious views – many of which are not shared by the majority of Americans. And compelling a secular state – which the United States is, despite the “under God” phraseology – to require conduct based on religious views is diametrically opposed to what the Founding Fathers had in mind.

Part of the reason for the growing push to embody “religious” ideas in statute is likely the fact that the United States has become more diverse; many feel that the nation no longer follows “traditional” values and have reacted by attempting to prohibit any government program that they see as opposing, or failing to support, such values. There have always been those who did not fully embrace such values, including Founding Fathers such as Thomas Jefferson, but the idea of using government to insist on those values in law, as opposed to defining acceptable conduct in secular terms, has continued to gain ground, particularly in the past twenty years.

Even if the United States continues to diversify, I suspect that the founders of this nation, who were largely skeptical of political parties, would be even more skeptical about fighting for the “soul” of a political party.

The “Birther” Controversy?

According to the September issue of The Atlantic, one in four Americans believes that President Obama is not a “natural born citizen” of the United States, while half of all Republicans believe it. Given the party identification figures in the Rasmussen Report of June 2012 and the number of registered voters in the United States, that means a significant share of Democrats and independents hold this belief as well – still a considerable number.
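
For anyone who wants to check that back-of-the-envelope arithmetic, here is a minimal sketch in Python; the two survey figures are from The Atlantic as cited above, while the Republican shares of the electorate tried below are illustrative assumptions, not the actual Rasmussen numbers:

```python
# Backing out the non-Republican share of believers from the two
# Atlantic figures. The GOP shares of the electorate tried below are
# illustrative assumptions, not the actual Rasmussen figures.

overall = 0.25    # one in four Americans believe it (The Atlantic)
gop_rate = 0.50   # half of all Republicans believe it (The Atlantic)

for gop_share in (0.25, 0.30, 0.35):  # assumed GOP share of the electorate
    others = (overall - gop_rate * gop_share) / (1 - gop_share)
    print(f"GOP at {gop_share:.0%} of the electorate -> "
          f"{others:.0%} of Democrats and independents")
```

Under any of those assumptions, well over one in ten Democrats and independents would have to hold the belief for the overall figure to work out.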

The U.S. Constitution specifies only that, to be President, a person must be a “natural born citizen” of the United States, but it does not define that term. Over the time since the Constitution was adopted, the courts have defined a “natural-born citizen” as a person who was born “in” the United States and under its jurisdiction, even to alien parents; who was born abroad to U.S. citizen-parents; or who was born in other situations meeting the legal requirements for U.S. citizenship at birth.

At least three court suits have been filed on the question of Obama’s citizenship, all in different states, and the determinations in all cases have affirmed that he is a “natural-born” citizen. He was, despite all the rhetoric to the contrary, born in a U.S. state to an American citizen mother.

So why do so many people, Republicans, in particular, believe he isn’t a “natural-born” citizen?

Yes, his mother divorced his father and then married an Indonesian and moved to Indonesia for a time, but the courts have previously ruled on similar circumstances: in one case, a woman born in the United States [with only her mother a U.S. citizen, as was the case with Obama] who lived in a foreign country from the age of three until she was twenty-one was still held to be a natural-born citizen.

And why do so many Americans believe that he is a Muslim, when the man has attended Christian churches for so many years?

Or are these convenient beliefs merely a cover for the fact that Obama is black, and many voters, obviously including a significant proportion of Republicans, simply don’t want to admit publicly that they don’t like and don’t want a black President? Instead, they claim that his mother was too young when she married his father [using convoluted legal rhetoric to argue that, because she was so young, the rules granting citizenship to a child when only one parent is a citizen don’t apply – which would matter only if Obama hadn’t been born in a U.S. state, and he was], or that his birth certificate was forged, or that he was really born in Kenya.

It’s one thing to oppose a politician for what he stands for; it’s another to invent reasons to oppose him to avoid facing personal prejudices… and it’s a shame so many Americans have to go to such lengths to avoid admitting those prejudices.  And it certainly doesn’t speak well of the United States that so many Americans accept such arguments as having any validity at all.

The Stigmatization of Early “Failure”

College professors are faced with a new generation of students filled with “teacups” – students who break or go to pieces when faced with failure of any sort. They’ve been protected, nurtured, and coddled from their first pre-school until they’re sent off to college. Their upbringing has been so carefully managed that all too many of them have never faced major disappointments or setbacks. Their parents have successfully terrorized public school teachers into massive grade inflation and a lack of rigor – except at select schools and in some advanced placement classes, where the pressure is so great that many graduates come to college as jaded products of early forced success, also known as “crispies” – already burned out.

Neither “regime” of “success” is good for young people. As I’ve noted before, the world is a competitive place, and getting more so.  Not everyone can be President, or CEO, or a Nobel Prize-winning author or scientist.  Some do not have the abilities even for the few middle management jobs available, and many who do have the abilities will not achieve their potential because there are more people with ability than places for them.

Even more important is the fact that most successful individuals have had more failures in life than is ever widely known, at least until after they’ve become successful. Before he became President, Abraham Lincoln had a decidedly mixed record. Among other things, he failed as a storekeeper, as a farmer, in his first attempt to obtain political office, in his first attempt to go to Congress, in trying to get an appointment to the United States Land Office, in running for the United States Senate, and in seeking the nomination for the vice-presidency in 1856. Thomas Edison made 1,000 attempts before he created a successful light bulb. Henry Ford went broke five times before he succeeded.

For the most part, people learn more from their failures than from their successes. More often than not, those who succeed early, without failure somewhere along the line, never really fulfill their potential. Even Steve Jobs, thought of as an early success, failed several times before he could launch Apple, and then the management of the company that he founded threw him out… before he returned to revitalize Apple.

Yet these young college students are so terrified of failing that many of them will not attempt anything they see as risky or where a possibility of failure exists. At the same time, paradoxically, many will attempt something they have no business trying, or something well beyond their ability, because they have been told all their lives how wonderful they are – and they become bitter and angry at everyone else when they fail, because they have no experience with failing… and no understanding that everyone fails at something sometime, and that failure is a learning experience.

Instead, they blame the professor for courses that are too difficult, or claim that they were overstressed or overworked… or something else, rather than facing the real reasons why they failed.

Failure is a learning experience, one that teaches a person his or her shortcomings and gaps, and sometimes a great deal about other people as well. The only failure with failure is failing to understand this and to get on with the business of life… and learning where and at what you can succeed.