The Unseen Danger from AIs

The vast majority, if not all, of evolutionary biologists believe that one of the critical factors in the rise of homo sapiens, as reflected in the species terminology, was the ability to think, an ability that led to sophisticated tool-making, agriculture, organized societies, and so forth. The combination of thinking and a tool-making culture has led to the creation of ever more sophisticated tools and a greater understanding of life and the universe.

But… if a trend observed by two U.S. researchers continues to take hold, all that may change. The two studied graduate students using high level computational tools and found that, when solutions eluded the students or when the results were unsatisfactory, almost invariably, the students attempted to figure out new and different ways to use the computerized tools and never addressed either the structure of or the assumptions behind the questions they had posed or the approach they had taken in addressing the problem at hand. In short, they had stopped truly thinking analytically and had reduced themselves to mental mechanics, as opposed to higher-level thinkers.

This isn’t just a problem for doctoral students in the sciences. It’s already everywhere. Because a large number of students have never really learned basic mathematics, they can’t estimate solutions, and if a calculator or computer is wildly off, they often never catch it. Many retail employees have trouble making change. Students seem to assume that all the answers are somewhere on the internet.

These and other examples suggest that people are blindly relying on the answers and methods provided by modern technology, instead of asking questions and thinking about the approaches and implications. Again… this isn’t new. A good twenty years ago, when I was working in the environmental field, I watched researchers and public policymakers get sucked in by mathematical models and accept the output relatively uncritically… and when, as a consultant, I asked some rather pointed and critical questions, they all deferred to the models as if the models were infallible. But they’re only models of reality. Sometimes they come very close, and sometimes they don’t, but it takes thought to determine which. That was twenty years ago, and today it’s even worse. Most trades on the stock market are handled by the computers of large funds, and those trades are in turn determined by mathematical algorithms, which are based on certain assumptions. But what happens if the assumptions change? Who’s watching?

Such thoughtlessness isn’t necessarily a problem in people whose occupations don’t demand analysis, but it seems to be happening more and more often among those whose expertise is supposed to include analytical thought.

Now… just take this trend another step forward, to when we get more and more intelligent computer systems, even AIs. Certainly, Kubrick and Clarke anticipated this in 2001: A Space Odyssey with HAL… but very few viewers seem to see the parallels to our own culture today. Will homo sapiens give way to homo unsapiens without anyone even thinking about it?