The uncritical use of language frequently affects our thinking in unforeseen ways.  With few exceptions, our uncritical use stems from appropriating the environmentally ordinary; in other words, we talk the way others do.  As a consequence, we tend to think the way others do, too.  Unfortunately, the uncritical use of language flows effortlessly, while stopping to think through the context and meaning of our language requires engaging in a struggle.

Among the most common and uncritically appropriated patterns of thought today stands the problem-mentality: the belief that all challenges we face are problems to be solved.  Countless hours–months, years, even decades–could go into unraveling the historical development of this belief as it presently exists.  But in any age, place, or circumstance, the problem-mentality becomes prevalent wherever the belief in a teleological universe fades.  Even individuals who maintain this belief in a personal or spiritual sense can be swept along by the thought that all challenges in this life are problems to be solved.

And what is wrong about this belief?  To put it first in abstract terms: every problem is necessarily particular, and therefore has a particular solution (or a set of possible particular solutions).  Getting across the river is a problem; we can solve it by using a boat, building a bridge, constructing a rope swing, etc.  But not all challenges are particular.  Some are complex, involving a multitude of parts, having a multitude of far-reaching consequences… and while every challenge will be manifested in particulars, those particulars are very often rooted in universal difficulties.  By “universal” I mean what is not hic et nunc, but (to leave it vague for now), more than what can be reduced to the instance of particularity.

But abstract terms, to be clearly understood, need concrete illustrations.  So I’d like to look at two instances of this problem-mentality and demonstrate how it misleads us.

Reverse Engineering a Fallacy

Wired.com recently ran an article about artificial intelligence’s “hallucination” problem.  In short, no matter how much “deep learning” ability (layered interpretation of data according to increasingly complex algorithms which “place” the data according to certain classes) their programmers give to the robots, the robots just cannot seem to match humans in pattern recognition.  That is, they far exceed human ability in recognizing some patterns: the more easily a pattern can be represented in mathematical symbols, the more easily the robot will recognize it.

But at the same time, the robots are still very easily fooled, not only by coincidental representations (as, it says in the article, Google’s AI identifying two men on skis as a dog), but (as the article also says) especially by human deception.  In other words, the robots may be terrifyingly fast, accurate, and able to beat human beings mercilessly in games of limited rules (such as chess or go); but the human beings have a cleverness the machines cannot match.  Every sci-fi story of humans prevailing over the robots depends on this.  Nevertheless, AI researchers are trying to solve this problem, which threatens driverless vehicles, virus-protection software, and any other safety-oriented application of AI.  Until AI can be imbued with the intuitive recognition and interpretation that human beings possess, it won’t be true AI.
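The weakness described here can be sketched in miniature.  The following toy is my own illustration, not anything from the Wired article: a bare-bones linear “pattern recognizer” whose verdict is flipped by a small, deliberately crafted perturbation–the same spirit as the gradient-based attacks researchers mount against real deep networks, though vastly simplified.

```python
# Illustrative toy (not a real deep network): a linear classifier and a
# crafted perturbation that fools it.  Weights and inputs are invented.

def score(weights, x):
    """Linear score: positive means 'dog', negative means 'not dog'."""
    return sum(w * xi for w, xi in zip(weights, x))

def classify(weights, x):
    return "dog" if score(weights, x) > 0 else "not dog"

def adversarial(weights, x, eps):
    """Nudge every feature by eps against the sign of its weight,
    driving the score down as efficiently as possible."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.5, -0.25, 0.75, 0.1]
x = [0.2, 0.1, 0.3, 0.4]              # score = 0.34: confidently 'dog'
x_adv = adversarial(weights, x, eps=0.4)

print(classify(weights, x))           # dog
print(classify(weights, x_adv))       # not dog
```

The point of the sketch is that the “deception” requires no cleverness from the deceived system to detect and no great change to the input: a uniformly small nudge, chosen with knowledge of how the system weighs its inputs, suffices.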

However, underlying and therefore undermining the efforts of the researchers is an error about human intellectual activity: namely, that it is a more sophisticated process of computation than what has yet been achieved in the robots.  Therefore, it is believed, if we can increase the complexity of computation and optimize it, then the robots will be able to attain a cleverness comparable to human beings.

The uncritically appropriated and erroneous belief that human minds are essentially highly-sophisticated computational devices–that is, that minds are the software of the organic hardware, the brain; that, as Steven Pinker so succinctly encapsulated the fallacy, “Minds are what the brain does”–misses what it is to be human.  We experience much more than mere computation.  The thinkers of pan-informational paradigms (including AI researchers, cognitive scientists, and many others) would tell you that these experiences of “more” than computation are mere epiphenomena, i.e., what really appears to us but which does not exist outside that appearance.

I think this epiphenomenal dismissal a case of focal disappearance; that is, something being so close and common that we fail to notice it even exists and require an abnormal event to make it evident to us again–like glasses on one’s face that are forgotten until they become a problem.  In this case, the disappearing something is the uniqueness of human knowledge.  We know not only that things relate to us, but we know that things are; and in this knowledge, implicitly, that their existence is irreducible to that relation to us.

Obviously, an abstract point.  To put it concretely: human beings may wonder what it means to be human, or what the sun is, or why the universe exists; inquiries which computation cannot resolve.  While a machine may be programmed to seek answers to these questions, it does so only ever in the pursuit of solving a problem which has been put to it… which is why Google returns over 98,000,000 results for a search for “the meaning of life”; no answer, but an aggregation of human explorations of the question.  No robot can, in fact, arrive at any answer for what anything means, or why anything is (either why it is at all or why it is so and so).  The robots will only be able to report the consensus of human findings, for only humans can interpret the intelligible meaning of beings [not to plug myself too hard, but this human ability is the focal concern of my upcoming book on The Intersection of Semiotics and Phenomenology].

Unless the robots can be programmed to possess the unique human abilities which enable our grasp of the intelligible meaning of things themselves, they will never be “true AI”.  Whether they can be so programmed remains to be seen.  Personally, I do not believe they can… and if they ever are, not only will that be quite a ways in the future, but they will no longer really be robots.

But the mistaken idea that human intellection is a computational process is only one instance of the broader fallacy that is the problem-mentality.

Fancy Souls and Soulless Fancies

Elizabeth Bruenig, to whom conservatives pay attention on account of her Catholic-informed social stances, caused a bit of a brouhaha this week with her short op-ed column in the Washington Post, “It’s time to give socialism a try”.  Shortly before, she published the opening remarks of a debate with Dr. Bryan Caplan (professor of economics at George Mason University), her “Case in favor of socialism”, a more robust assertion of the same ideal.

Therein, Bruenig makes many valid rebukes of capitalism: most especially its tendency to commodify what should not be commodified, such as education and healthcare.  And she notes that 20th century socialism is seldom argued for–justly, I think we can all infer, because it rather quickly failed to result in any “goods because” and produced, probably, only “goods despite”.  21st century socialism hasn’t been going so swimmingly either, but that’s not really my concern here.

That socialism is the only alternative to capitalism is itself quite the fallacy, one which shows a prevailing lack of imagination and political prudence in our contemporary culture; but that is not exactly my concern, either.

No, my objection to Bruenig’s socialist advocacy is that she believes it, as stated in the opening remarks of her debate, a solution to a moral problem:

Socialism represents a moral response to an immoral society, and a harsh rebuke to the commandeering of the modern imagination by individualism, cynicism, competition, misanthropy and indifference. With regard to certain practices and industries, socialists may claim that socialist-style approaches will result in greater utility or efficiency, but the greatest recommendation of socialism is that it is its own moral case, and this is nowhere clearer than in contexts where freedom is held among the highest values. “The chief advantage that would result from the establishment of Socialism is, undoubtedly, the fact that Socialism would relieve us from that sordid necessity of living for others which, in the present condition of things, presses so hardly upon almost everybody,” Oscar Wilde wrote in The Soul of Man Under Socialism. For Wilde, socialism opened onto an ever more perfect individualism; for me, it opens up the possibility not of living strictly for ourselves, but for living for human excellence, for our own common good.

Slightly before this, Bruenig claims that “capitalism sells us the worst possible story about ourselves, imagining human nature as inherently greedy, jealous, destructive and anti-social”.  If this were true, then it seems socialism re-distributes to us the best possible story about ourselves, imagining human nature as inherently generous, humble, productive, and communal; and naturally equal, besides.  Of the two alternatives, I would be inclined to agree with the one that recognizes original sin and its effects; but I would not say that these pictures of human nature are necessary consequences of the economic systems, but rather only conventional associations of those economic systems with already-underlying conceptions of human nature which, perhaps, those systems exacerbate.

Unsurprisingly, Bruenig’s advocacy of a socialist regime lacks specificity, which she leaves “to more talented policy-makers”.  Contemporary advocates of socialism much prefer to leave it in the broad strokes of the ideal, I think, because attempts at specifying its dictates inevitably undermine the legitimacy of their claims; not least among which, central to Bruenig’s theorizing, is liberty for human excellence.  Under capitalism, many–most, even–work for a good not their own, from which their own labor is alienated, and which, therefore, provides little in the way of fulfillment of the true human good.  That socialism would rectify this has no historical support; nor do I see support for it even in theory, provided we consider human beings realistically, and not as the magnanimous beings socialism presupposes.

That is, socialism might actually solve the cynical misanthropic individualism of Americans, or Westerners generally, as a current problem.  But cynicism, misanthropy, and individualism, while they might be facilitated by capitalism, are not caused by it.  They are products ultimately of failures in human morality, which is not a particular problem, although it manifests in such, but a universal difficulty.  That is, so long as human beings are human, they will find new ways to commit old sins; or, if you prefer, alternative means to the same errors.  You cannot simply replace an economic system and produce morally-superior human beings.  At best, you will confound their ability to act immorally, for a time.  Historically, I am not certain that socialism as a national economic system, or a key element within a national economic system, has succeeded in confounding human immorality for more than a single generation–if that.

More important than a system’s inability to prevent human evil, however, is its inability to produce human good.  Bruenig says that socialism opens the possibility of “not living strictly for ourselves, but for living for human excellence, for our own common good.”  On the contrary, this possibility is always open to us, no matter how inherently flawed the economic system and with what depravity it is abused.  That’s not to say all systems are created equal, or that they do not impinge upon our freedom for pursuing the good; but it is to say that whatever human excellence may be lived, that is not a consequence of the system; only, perhaps, that the system has not yet been twisted to interfere with or interrupt that living.

In other words, I quite agree with Bruenig about the current disasters which a capitalist system has failed to prevent and even in some regards fostered; but I disagree with her solution.  G.K. Chesterton, with his usual perspicacity, summed up the situation 108 years ago:

This is the arresting and dominant fact about modern social discussion; that the quarrel is not merely about the difficulties, but about the aim.  We agree about the evil; it is about the good that we should tear each other’s eyes out.  We all admit that a lazy aristocracy is a bad thing.  We should not by any means all admit that an active aristocracy would be a good thing.  We all feel angry with an irreligious priesthood; but some of us would go mad with disgust at a really religious one… The social case is exactly the opposite of the medical case.  We do not disagree, like doctors, about the precise nature of the illness, while agreeing about the nature of health.  On the contrary, we all agree that England is unhealthy, but half of us would not look at her in what the other half would call blooming health….

We can all see the national madness; but what is national sanity?  I have called this book “What Is Wrong with the World?” and the upshot of the title can be easily and clearly stated.  What is wrong is that we do not ask what is right.

What’s Wrong with the World? p.17.

Human Learning

A system, whether it is the artificial intelligence of a robot or the legal enshrinement of economic policies, is never anything more than a system; which is to say that it can only ever operate within the rules provided for it.  Its goal is set independently of it, and it has no freedom in determining goals for itself except insofar as those goals are subordinated to its overall purpose.  Human beings, too, have an innate goal which we cannot reject; but since that goal is–because our nature includes that cognitive grasp that beings are at all and therefore irreducible to our relations to those beings–the unspecified “good”, we have an infinite possibility of specific goals at which we can aim.

This enables us also to find infinite possible ways of abusing systems–whether of the robots or of the economy.  We find loopholes to systems so readily because we are essentially independent of any and every system; no matter how thoroughly we may be incidentally, or voluntarily, confined by them (as when we play a game), in virtue of our intellect we are per se incapable of systematic confinement.

If we want to prevent abuse–or minimize it, at least–the safeguard is neither more nor different legislation (the complexity of the American tax code being a solid indication of how an increase in complexity produces ever-more-sophisticated abuse).  Rather, the only plausible “solution” is not to treat the abuse of systems as a problem to be solved, but as a difficulty to be struggled against.  A solution can be applied and forgotten; repaired, perhaps, as time wears it down, or revised to meet new circumstances.  But the difficulties which come from within–as do all moral failings–must be struggled with, wrestled with, on a day-to-day, hour-to-hour, minute-to-minute basis.  The most chaste man may find himself tempted to infidelity and the most honest woman might find herself wanting to lie.  And the most committed socialist may desire to gain at the expense of others–and find a way to do so, even within the most rigorous of socialist systems.

So what we need is education–not just any education, and God knows not STEM education (to paraphrase Matthew Peterson, the Nazis were getting pretty good at that), but a moral education; which is to say, an education in which we ask with G.K. Chesterton what is right, but, more importantly, and more foundationally, we ask what is good.


P.S., I would exhort every Catholic to consider carefully Pope Leo XIII’s savage dismantling of the socialist systems of his own time, and ask whether or not those criticisms apply today as well.  For a taste:


…although the Socialists, stealing the very Gospel itself with a view to deceive more easily the unwary, have been accustomed to distort it so as to suit their own purpose, nevertheless so great is the difference between their depraved teachings and the most pure doctrine of Christ that a greater could not exist: “for what participation hath justice with injustice? or what fellowship hath light with darkness?” (II Cor. vi, 14)  Their habit, as we have intimated, is always to maintain that nature has made all men equal, and that therefore neither honour nor respect is due to majesty, nor obedience to laws, unless, perhaps, to those sanctioned by their own good pleasure.  But, on the contrary, in accordance with the teachings of the Gospel, the equality of men consists in this: that all, having inherited the same nature, are called to the same most high dignity of the sons of God; and that, as one and the same end is set before all, each one is to be judged by the same law and will receive punishment or reward according to his deserts.

Quod Apostolicis Muneris, §5


2 thoughts on “Socialism and the Robots”

  1. The claim that “the failure of human morality” is a “universal difficulty” is doubly ungrounded. First, socialism doesn’t purport to solve the failure of human morality, whatever that might mean in whatever historical context and it doesn’t mean one thing. Rather it solves the particular problem of the exploitation of others that arises out of particular forms of capitalism. And no system has been better at exploiting others for personal aggrandizement than capitalism (which is why it is the least in accord with the gospel message than any other system). Indeed, it’s diabolically good at that, so much so that it occludes exploitation as a “virtue” – the virtue of work. Second, that human morality is a “universal difficulty” has no basis in history, since of course morality is historically constructed. There is no inevitability to moral discourse, much less any particular moral discourse. It arose at a certain time for a particular reason with a particular audience in mind. It’s probably useful to see it as a Greek phenomenon, dispersed by Hellenism and then appropriated and universalized by orthodox (credal) Christianity. The point is the idea that the idea of morality was always universal is patently false. The claim is meaningful only in a post-Christian historical environment based on credal Christianity and its institutions. And that, probably, is fleeting.


    1. This is cute, but, frankly, silly and historically myopic Marxist propaganda. Slavery–an institution of many societies independent of capitalism for thousands of years–did a far better job of exploitation than any capitalist country has heretofore achieved, for far greater personal aggrandizement. There are some monuments in Giza which speak to this, I believe, of which you may have heard.

      And if morality is an accident of historical construction originating in Hellenistic thought and universalized by Christianity, what then is Taoism? Or Confucianism? Or any of the various codes of law spread throughout Mesopotamia which indicate measures aimed not merely at the maintenance of political order but retributive justice?

      You say, “of course morality is historically constructed”, as though this is a claim in need of no defense or explanation. What do you mean by “historically constructed”? Is it constructed by human beings in particular times at particular places for particular reasons? If so–constructed out of what? Ex nihilo? That would be absurd, so I assume not. But what, then, from where, how, and why?

