One of the great debates in the field of ethics centers on the thinking of Immanuel Kant vs. the Utilitarians – most notably John Stuart Mill. To simplify, Kant’s philosophy suggests that the means justify the ends: we should always do the right thing and trust the results to work out for themselves. Mill, on the other hand, argued that we should do whatever produces the greatest happiness for the greatest number of people, and that the ends justify the means.
I’ve always tried to do the right and moral thing, of course, but when push comes to shove I’ve been an unapologetic utilitarian. I might, in my brasher moments, have put it this way: what matters is the outcome, the result, and doing the noble thing when it leads to a tragic result isn’t ethical, it’s both immoral and stupid. In a sense, this might be seen as privileging pragmatism over idealism, although those things have long been at war in my soul and I can’t say which will eventually win. (I’ll go ahead and apologize now to any real philosophers reading this for the hash I’m probably making of their field’s great minds.)
Last night I had a thought that may change all this. It occurred to me that both Chaos and Complexity Theories may have implications for the centuries-old debate between the ethics of duty and the ethics of utility.
Let’s start with the principle of sensitive dependence on initial conditions, better known as the “Butterfly Effect.” In the 1960s, meteorologist Edward Lorenz, in his attempts to model weather, discovered that minuscule changes in the inputs to an equation resulted not in equally minuscule changes in output, but in changes so vast and dramatic as to be unpredictable. The popular explanation says that a butterfly flapping its wings in America today can cause a hurricane in China next year – hence “the Butterfly Effect.” Of course, Chaos Theory proper is intensely mathematical and difficult, and a layman’s discussion like this one is confined to its simple and metaphorical applications, but the theory clearly suggests something important for our ethics discussion.
Utilitarian ethics makes a lot of assumptions about the knowability of an outcome. That is, it presupposes that I can know which result is desirable and am therefore able to work toward it. It assumes the ability to predict, and so to dictate, the ends.
Sensitivity to initial conditions, however, dismisses the possibility that I can reliably predict the results of my actions. Even in highly controlled mathematical contexts, where inputs can be controlled to as many decimal places as you have computing power to manage, a .00000001% alteration can change the end result massively. Human activity – especially in a dynamic system with multiple interacting agents – is hardly that precise, so the difference between keeping the extra 50 cents the cashier accidentally gave me and returning it, the difference between telling a white lie and ‘fessing up, the difference between stopping to help a stranded motorist and speeding past – each is guaranteed to set in motion a series of events that will lead me to a more or less unknowable end.
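The point is easy to see for yourself. Here is a minimal Python sketch – my own toy illustration, not anything from Lorenz – that steps the classic Lorenz equations forward for two starting points differing by exactly that .00000001%. The crude Euler integration and all parameter choices are illustrative assumptions only:

```python
# Toy demonstration of sensitive dependence on initial conditions using the
# classic Lorenz system. The crude Euler stepping and the run length are
# arbitrary illustrative choices, not a reproduction of Lorenz's own runs.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one (very rough) Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # one trajectory
b = (1.0 + 1e-10, 1.0, 1.0)  # a second, perturbed by 0.00000001%

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step:4d}: gap in x = {abs(a[0] - b[0]):.6f}")
```

Run it and the gap between the two trajectories starts out vanishingly small, then blows up to the full size of the attractor: after a few thousand steps the two “weather systems” have nothing to do with each other.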
All of which points to the futility of a utilitarian approach. If I can’t accurately predict the results of my actions, then how can I possibly act in accordance with an ethical code that assumes the output? Sure, we can make educated general guesses and perhaps we’ll be generally correct, but as I ponder the ethics of the Butterfly Effect I realize there’s substantially less certainty in the system than I had previously imagined.
Kant’s ethics of duty, on the other hand, is completely unconcerned with the unpredictable ends, and instead focuses on that which is knowable and controllable – the initial action, the input. If I’m faced with an ethical dilemma and I accept that I can’t act in accordance with practical results, the only thing left is to act in accordance with moral rules; as Kant put it, “I am never to act otherwise than so that I could also will that my maxim should become a universal law.” Or, as they say on television and in the movies, “do the right thing.” Act morally and trust the universe, I guess.
It also occurs to me that Complexity Theory has something to say on the subject, and that its implications are consistent with Kant. Artificial Life researchers, in their attempts to model living systems, have repeatedly found that rule-heavy, top-down systems that attempt to specify too many pieces of the behavior are destined to fail. In the end, truly dynamic, lifelike activity is too complex to micromanage.
What does work are systems in which the activity of individual agents is guided by two or three simple rules. Take Craig Reynolds’ famous “Boids” model, for instance:
> In 1986 I made a computer model of coordinated animal motion such as bird flocks and fish schools. It was based on three dimensional computational geometry of the sort normally used in computer animation or computer aided design. I called the generic simulated flocking creatures boids. The basic flocking model consists of three simple steering behaviors which describe how an individual boid maneuvers based on the positions and velocities of its nearby flockmates…
The behavior of the boids in Reynolds’ simulation wasn’t over-determined. Instead, each individual boid was programmed to follow three basic rules.
- Separation: steer to avoid crowding local flockmates
- Alignment: steer towards the average heading of local flockmates
- Cohesion: steer to move toward the average position of local flockmates
The result was startlingly lifelike behavior on the part of the A-Life agents, and the validity of Reynolds’ findings has been borne out by substantial research since.
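Because the three rules are so compact, they fit in a few dozen lines of code. What follows is my own toy 2D sketch, not Reynolds’ actual implementation; the neighborhood radii and steering weights are arbitrary assumptions chosen only to show the shape of the algorithm:

```python
# A toy 2D version of the three boid rules. This is not Reynolds' code;
# the radii and steering weights below are arbitrary illustrative values.

import math
import random

NEIGHBOR_RADIUS = 50.0    # how far a boid "sees" its local flockmates
SEPARATION_RADIUS = 15.0  # closer than this counts as crowding

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 400), random.uniform(0, 400)
        self.vx, self.vy = random.uniform(-2, 2), random.uniform(-2, 2)

def step(flock):
    for b in flock:
        near = [o for o in flock if o is not b
                and math.hypot(o.x - b.x, o.y - b.y) < NEIGHBOR_RADIUS]
        if not near:
            continue
        n = len(near)
        # Cohesion: steer toward the average position of local flockmates.
        coh_x = sum(o.x for o in near) / n - b.x
        coh_y = sum(o.y for o in near) / n - b.y
        # Alignment: steer toward the average heading of local flockmates.
        ali_x = sum(o.vx for o in near) / n - b.vx
        ali_y = sum(o.vy for o in near) / n - b.vy
        # Separation: steer away from flockmates that are crowding in.
        crowd = [o for o in near
                 if math.hypot(o.x - b.x, o.y - b.y) < SEPARATION_RADIUS]
        sep_x = sum(b.x - o.x for o in crowd)
        sep_y = sum(b.y - o.y for o in crowd)
        # No master plan: each boid just blends its three local urges.
        b.vx += 0.01 * coh_x + 0.05 * ali_x + 0.05 * sep_x
        b.vy += 0.01 * coh_y + 0.05 * ali_y + 0.05 * sep_y
    for b in flock:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(200):
    step(flock)  # flock-like structure emerges with no global controller
```

Notice what’s missing: there is no line of code anywhere that says “form a flock.” The flock is nobody’s goal; it emerges from each agent following its local rules.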
So how does this bear on our ethics question? Well, it seems that a utilitarian model, by assuming the knowability of outcomes and focusing on strategies to force the ends, is very much over-determined, like the top-down A-Life models that consistently fail to generate lifelike behavior. Those models decide at the outset what the result will look like and set out to try and sheepdog all the agents of action toward a predetermined conclusion.
The Kantian model, on the other hand, makes no assumptions about outcomes at all. It merely acts in accordance with basic moral rules that are structurally similar to the operational rules of a working A-Life system.
I’m neither a trained philosopher nor a scientist, but it seems to me that two schools of scientific thought nonetheless have something to say here about an important ethical conversation. For me, at least, my epiphany about the implications of Chaos and Complexity poses challenges to the code I have lived by for my entire adult life. If Chaos and Complexity (and my interpretations of them) are correct, it’s all of a sudden more difficult to be a Utilitarian.
Even more critically, it means I need to focus more attention on my own core first principles. If I can make no assumptions about the outcomes of my actions, then it seems all I have left is the moral value of the actions in and of themselves.
- *Chaos: Making a New Science*, by James Gleick
- *Complexity: The Emerging Science at the Edge of Order and Chaos*, by M. Mitchell Waldrop
NOTE: I am unaware of any research that broaches the questions raised here. I have not had time to conduct a formal search, however, and if others have addressed the relationship between Kant, Mill, Chaos and Complexity I would appreciate being pointed toward that research.