Why Bother?

To what purpose? That is the question of each individual life; one that is asked at least 8 billion times a day. Said in a less cosmic manner: why bother?

The answer is visible in the way that each of us lives. The religious among us may strive to meet the commandments of our faiths, where our failures may be more visible than our successes, but at least there is a theme to our lives. 

Some of us follow our own codes of conduct. This can be interesting to observe, and perhaps more exciting in outcome, but it’s sometimes dangerous. Some have secular philosophies as broadly encompassing as most religions. These can be very persuasive to the bewildered and to those who have lost faith when an older belief has failed. 

There are many who simply follow others, letting someone else determine their course in life. This, of course, is the way of children. And there are always some who follow impulse, or the corporeal demands of hunger, shelter, pain and pleasure. These are often the first victims of circumstance.

To this mix of philosophies, now there is an added ingredient, the nature of which has only just been plumbed, and that only at the shallow end. Those who say that there is nothing new under the sun ought to take notice. Artificial intelligence—AI—is not only something mankind has not encountered before, but something that offers new perspectives on the human predicament. 

For instance, what would be the moral philosophy of such machine intelligence? Is preservation a matter of importance when a specific device can be exactly replicated? What part does identity play for a contrivance that can add to its very being as easily as plugging in an extra hard drive? Meanwhile, we human beings cannot change the genetic code that drives us.

Into this brave new world we now go. Efficiency has already been set as the standard measure. We have been chasing that chimera for centuries now. Speed is also a mark of distinction. Cost is certainly a criterion. And it appears already that the mere human being is not up to snuff. 

Hugo de Garis and the CAM-Brain Machine at Starlab. Photo by Antonio Ribeiro/Gamma-Rapho via Getty Images

But these deficiencies are all the more radically exaggerated by the ideas of such scientists as Hugo de Garis and his “Cosmist” vs. “Terran” contest envisioned as a “gigadeath war.” This conceit has already been reimagined in the movie “Terminator.” Funnily enough (if humor can be found) de Garis became a professor at Wuhan University, home of the COVID virus, after years of working at other major university research labs. Such stuff of “dangerous visions” is certainly in the mix.

To be sure, most people would prefer not to think about it at all, but an altered universe is openly advocated in authoritarian circles, where any advantage in developing AI might benefit the first tyrant to achieve a machine with useful self-awareness coupled with the capacity to self-replicate. Tyrants always believe they can maintain control of the monsters they create. But more likely results will come from the world of business, where amorality is rampant, and profit is the only prophet of success. Dr. Frankenstein now works at Google.

Other real dangers loom. If our human abilities become inferior to some sort of AI, especially one we have invented ourselves, how might we stand in the eyes of a God in whose image we believe ourselves to be made? Does that call into question the omnipotence of the Creator, or does it only enhance the value of the creator?

Meanwhile, the atheist is left to determine his own self-worth by comparison. And the greater public must ask, what’s in it for me? It is easy then to imagine a coming spike in the number of suicides.

Science fiction, as a genre, has probed much of this for years, since well before the rendition of HAL in the 1968 movie “2001: A Space Odyssey,” but has never answered the deeper questions of motivation and purpose. Partly this is because the questions themselves, while thrilling, produce answers that are partisan; too obviously, such narrative speculation into the meaning of human life, or of any life at all, has so far failed. All answers proffered would be attacked mercilessly for ignoring one point or neglecting another. Happy endings for such dystopian conjuring are few and usually tied to a temporarily triumphant human spirit, with no longer-term solutions forthcoming.

Preceding most current concepts of artificial intelligence, author Isaac Asimov’s rules for robots included the conceit that such critters would not hurt human beings. His three laws of robotics are: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the commands given it by human beings except where such orders would conflict with the first law; and 3) a robot must protect its own existence as long as such protection does not conflict with the first or second law. All very pat.

But it appears that these three leaps of faith were predicated on the naïve belief that science itself would always be rooted in a quest for the best for mankind. Other writers had less trust in the good will of their fellow human beings. Philip K. Dick’s story “Do Androids Dream of Electric Sheep?” (best known in its filmed translation as “Blade Runner”) imagined the angst of killing an android you love, but had little spare change for human beings who had no respect for other human beings.

It would seem, in a world beset by man-made viruses, man-made poverty, man-made war, and man-made hate, that any man-made attempts to overcome our liabilities will not end well. The inherent stupidity of the nihilists and deconstructionists alone has polluted the waters sufficiently to make every drink of knowledge suspect, no matter our degree of thirst. And nearly as bad, any open-hearted exploration of our predicament is always made more difficult by the poison of politics. But questions must be asked, and the answers offered must be open to being proven false so that more possibilities can be explored. Unlike gods, we will certainly fail, but our strength has always been our ability to learn from our mistakes.

Many people working in the field of AI have considered the problems and conundrums of machines making ethical choices. The real gulf lies between those trying to design systems to meet prior conceptions of good and evil based on purely human concerns, and the obvious absence of such restrictions on the “mind” of an AI device that, unlike the Asimov robot, is not worried by human limitations and acts on the simple expedient of “can it be done?”

We already see a common social illness around us in people who operate on that motive. “If it can be done, do it” is not all that far from the Boomer philosophy of “If it feels good, do it.” There is little separation between that thinking and a machine acting on the same level, except that the machine carries no taint of moral consequence, while actual morality depends on a proper valuing of life.

When conservatives point to the void beyond atheistic thinking, this is what they see. All the problems with religious morality are trumped by the vacuum of no morality at all when addressed to the problem of artificial intelligence. That leaves us to consider new alternatives if we are to avoid the sort of internecine human conflict posed by amoral scientists like de Garis, by religious thinkers who take an all-or-nothing approach to human existence, or by the socialists who want to control human activity for their own conception of the good of humanity.

The open society envisioned by our founders did not allow for the problems of machine intelligence, but the founders did well understand the principles behind any solution. Whatever answer we achieve must be derived through a strict calculation of human liberty. Anyone who can place the “if it can be done, do it” principle ahead of the human consequence is as much a moral enemy as a common Marxist. Such existential absolutism, found in the need to preserve liberty, will not be accepted by many so-called scientific thinkers today. Already well-schooled in the mechanical ethics of “if it can be done, do it,” they dismiss religious belief as irrational, along with the superior rule of a common law tied to the history of human experience.

If the struggle to better ourselves, which is manifestly evident in mankind’s history, will only lead to our self-destruction, why bother? If every challenge we meet and overcome is only met with another, greater one, why bother? The answer is that we must bother so that we may have any future at all.

But make no mistake, along with such lovelies as nuclear energy and lab-created viruses, AI is here to stay. Our challenge is to be worthy of ourselves as creators, as well as of any God worth His salt.

About Vincent McCaffrey

Vincent McCaffrey is a novelist and bookseller. Visit his website at www.vincentmccaffrey.com.

Photo: Beata Zawrzel/NurPhoto