Why Free Will is Not an "Illusion"

By Brian Tomasik

Summary

Some commentators assert that physics and neuroscience prove that we don't have free will. I think these claims are misguided, because they don't address our fundamental confusion about what free will is. I take the compatibilist view that humans (and other decision-makers, including animals and robots to varying degrees) have free will despite operating mechanically and deterministically. Ultimately, the stance we take toward free will in various circumstances should be driven by instrumental considerations about how that stance will affect outcomes; our evolved intuitions may or may not give the most helpful judgments.

Disclaimer and credits

I haven't studied the literature on free will in depth, so this piece is written mainly from a common-sense standpoint. My guess is that most or all of what I say here has been explained by previous authors. In an excellent 15-minute presentation, "A Better Choice Than Free Will," Darren McKee makes points similar to those in this essay.

I first learned about the determinism and free-will debates in 2003, when my 11th-grade English class studied philosophy and read Oedipus the King, Macbeth, and Rubaiyat of Omar Khayyam. A few years later, I explored the issue further, and in 2007 I wrote an essay (which I now realize was confused) in which I argued that we should assume we have libertarian free will—even if we think it unlikely based on what we know about science—because if we don't have it, then we have no libertarian-free control over what we do. In other words, we should try to believe we have libertarian free will, because only in the case where we do have it will our libertarian-free attempt to believe we have it be successful.

In summer 2008, I read Eliezer Yudkowsky's discussion of free will on the blog Overcoming Bias; the contents now reside on LessWrong. I tried to apply my existing, confused framework to Yudkowsky's arguments, but eventually I saw that I wasn't making sense. Instead, Yudkowsky's approach did make sense. Looking back in hindsight, the issue now seems obvious, perhaps in the same way that sixth-grade math would now seem obvious to you but wasn't obvious when you first learned it.

Introduction

"With Earth's first Clay They did the Last Man's knead,
And then of the Last Harvest sow'd the Seed:
Yea, the first Morning of Creation wrote
What the Last Dawn of Reckoning shall read."
--Rubaiyat of Omar Khayyam, stanza 53

It's not uncommon to hear claims like, "The universe is deterministic, so we don't have free will," or "Neuroscience proves that free will is an illusion." I think these statements are not quite correct. There is something true behind them, but the real problem is that the popular conception of what "free will" is doesn't make sense. So simply making a statement that "people don't have free will" gives a false impression. As an analogy, I think strong forms of moral realism are almost certainly false, but it would be misleading and damaging to say "It's not wrong to kill people because morality doesn't exist."

Free will is not an illusion, just like consciousness is not an illusion, and morality is not an illusion. It's just that these things aren't what most of us naively thought they were.

What should "free will" mean?

When people say they have free will, what do they mean by that? It's not precisely clear, but often that statement is taken to suggest ideas like the following:

Let's keep these in mind as we turn next to a hypothetical robot named Willy.

Willy the robot

Willy is a robot that has the following hobbies in his repertoire: {greet_experimenter, feed_bunny, break_cookie_jar}. Willy is programmed to choose the action that gives the highest expected reward, where expected reward is computed as follows:

expected_reward = random(0,1) * expected_experimenter_greetings + random(0,1) * expected_bunny_smiles + random(0,1) * expected_smashing_noise.

Here random(A,B) means that a pseudorandom floating-point number is drawn uniformly from the interval [A,B]. Pseudorandom numbers are completely deterministic: if we set a pseudorandom number generator to a particular seed, we can generate the exact same sequence of pseudorandom numbers over and over again, as many times as we like. At the same time, to an outside observer, pseudorandom numbers look unpredictable. The pseudorandom numbers used in Willy's reward function approximate the environmental variance, mood fluctuations, and other seemingly random factors that influence human choices.
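
To make this concrete, here's a minimal Python sketch of how Willy's decision procedure might work. The hobby names and reward terms come from the story; the seed, the helper structure, and the expectation numbers (apart from the cookie-jar figures quoted on Day 50 below) are illustrative assumptions of mine.

import random

random.seed(42)  # fixing the seed makes the "random" weights exactly reproducible

# Assumed estimates of what each hobby would yield, as
# (expected_experimenter_greetings, expected_bunny_smiles, expected_smashing_noise):
expectations = {
    "greet_experimenter": (1.0, 0.0, 0.0),
    "feed_bunny":         (0.0, 1.0, 0.0),
    "break_cookie_jar":   (0.0, 0.3, 1.5),
}

def choose_action():
    # Draw today's pseudorandom weights, one per reward term.
    w_greet = random.uniform(0, 1)
    w_smile = random.uniform(0, 1)
    w_smash = random.uniform(0, 1)

    def expected_reward(hobby):
        greetings, smiles, smashes = expectations[hobby]
        return w_greet * greetings + w_smile * smiles + w_smash * smashes

    # Willy deterministically picks the hobby with the highest expected reward.
    return max(expectations, key=expected_reward)

print(choose_action())

Rerunning this with the same seed reproduces the same sequence of choices, while an observer who doesn't know the seed sees apparently capricious day-to-day behavior.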

Day 1

On day 1, Willy's CPU chooses some pseudorandom numbers and ends up with this reward function:

expected_reward_day1 = 0.8 * expected_experimenter_greetings + 0.5 * expected_bunny_smiles + 0.2 * expected_smashing_noise.

Based on his probabilistic calculations, Willy forms expectations about how many experimenter greetings, bunny smiles, and smashing noises each of his three actions would produce. Plugging these estimates into his reward function, he computes the expected reward of each option.
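
For concreteness, suppose (hypothetically) that greet_experimenter would yield one return greeting, feed_bunny one bunny smile, and break_cookie_jar the 0.3 bunny smiles and 1.5 smashing noises quoted on Day 50 below. The Day 1 weights would then give:

expected_reward_day1(greet_experimenter) = 0.8 * 1.0 = 0.80
expected_reward_day1(feed_bunny) = 0.5 * 1.0 = 0.50
expected_reward_day1(break_cookie_jar) = 0.5 * 0.3 + 0.2 * 1.5 = 0.45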

The greet_experimenter action has the highest expected reward, so Willy chooses that. As he predicted, the experimenter greets him back exactly once.

Day 2

Now it's day 2. Willy recomputes his random weight coefficients, and his expectations for the results of his actions may also be slightly updated based on what he learned since yesterday. On day 2:

expected_reward_day2 = 0.1 * expected_experimenter_greetings + 0.5 * expected_bunny_smiles + 0.9 * expected_smashing_noise.

Willy recomputes the expected rewards of each possible option. This time he finds that expected_reward_day2(break_cookie_jar) is highest, so he breaks the cookie jar.

The experimenter sees this and gets furious: "Willy, you bad robot! Stop it!" However, Willy doesn't have a term in his reward function that values or disvalues being yelled at, so Willy's behavior doesn't change.

Day 5

On Day 5, after another cookie-jar smash on Day 4, the experimenter realizes that she forgot to program Willy to disvalue being scolded. She rewrites his reward function to be the following:

expected_reward_day5 = random(0,1) * expected_experimenter_greetings + random(0,1) * expected_bunny_smiles + random(0,1) * expected_smashing_noise - random(10,15) * expected_experimenter_scolding.

Now Willy consistently refrains from breaking the cookie jar, because he expects that doing so would cause him to be scolded, and the coefficient of the scolding penalty is never less than 10. Willy has responded to a change in incentives.
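
In the same illustrative Python as above (the expectation values passed in are assumptions of mine), the experimenter's patch might look like this:

import random

def expected_reward_day5(greetings, smiles, smashes, scoldings):
    # Same three reward terms as before, plus a scolding penalty whose
    # weight is drawn from [10, 15] and hence is never less than 10.
    return (random.uniform(0, 1) * greetings
            + random.uniform(0, 1) * smiles
            + random.uniform(0, 1) * smashes
            - random.uniform(10, 15) * scoldings)

# If breaking the jar almost certainly gets Willy scolded, the positive
# terms (at most 0.3 + 1.5 here) can never outweigh the penalty of 10+.
print(expected_reward_day5(0.0, 0.3, 1.5, 1.0))  # always negative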

Day 50

Willy has been a good robot for a long time now. He has greeted the experimenter on many occasions, and the bunny is well fed. But today, the experimenter departed from the lab, on her way to the airport, whence she'll travel to Germany to give a conference presentation. She left Willy alone in the lab. Willy knows that she'll probably take a week to return.

The experimenter left three hours ago. Willy is pretty sure she won't come back. Today his reward function is

expected_reward_day50 = 0.9 * expected_experimenter_greetings + 0.1 * expected_bunny_smiles + 0.8 * expected_smashing_noise - 12 * expected_experimenter_scolding.

He would like to greet the experimenter, but she's gone, so she can't greet him back. He's not in the mood to feed the bunny. But he would enjoy smashing the cookie jar that has been lying on the counter since he smashed the last one on Day 4.

Willy eyes the jar deviously. He thinks: "The experimenter is really gone, right? What are the odds she'll find out if I smashed the jar? I could clean up afterwards, and then I could go buy a new jar using some coins in her coin pile. She'd never notice." In fact, to make sure this plan would work, Willy goes out and buys the new jar first. He comes home with the jar in his bag, skipping along the sidewalk with anticipatory delight. He returns to the lab and sets up the new jar on the counter. He computes expected rewards one last time. His value for expected_experimenter_scolding is 0.01, because it seems highly unlikely the experimenter would return. So:

expected_reward_day50(break_cookie_jar) = 0.9 * 0 + 0.1 * 0.3 + 0.8 * 1.5 - 12 * 0.01 = 1.11.

This action has highest expected reward value, so Willy breaks the jar—smash!—and then cleans it up. As expected, the experimenter remains gone, and the shards of the jar are emptied by the trash haulers before she returns. Willy's plan was a success.

(As an aside, I should note that Willy as depicted here is rather anthropomorphic and unrealistic. For instance, it's not clear why he couldn't pry the bunny's mouth into a smile configuration many times with less effort than he took to buy the new jar, or why he couldn't shut off his audio inputs when the experimenter scolded him to avoid triggering his "scolding detectors." We can invent hypothetical constraints on these possibilities to make the story work.)

Why Willy has (limited) free will

I think it's fair to say that Willy the robot has a degree of free will. Let's return to the example criteria for what free will should look like: Willy weighs his options, his choices flow from his own internal valuations, and his behavior changes when his incentives change, all while every step of the process is deterministic.

Humans are vastly more complicated and unpredictable than Willy, but similar ideas apply to our physically determined choices. I like how Yudkowsky explained it in "Thou Art Physics":

People's choices are determined by physics. What kind of physics? The kind of physics that includes weighing decisions, considering possible outcomes, judging them, being tempted, following morals, rationalizing transgressions, trying to do better...

There is no point where a quark swoops in from Pluto and overrides all this.

Responsibility as an incentive mechanism

The experimenter scolded Willy as an instinctive reaction to his bad behavior, but after Willy was reprogrammed, the scolding began to serve a purpose: Inhibiting future bad behavior. Most humans are already wired to want to avoid scolding, so the disincentive tends to work for them "out of the box." When we feel like "You are responsible for your wrongdoing," we're expressing an attitude that we're holding toward another person, and this attitude can serve the purpose of discouraging future bad behavior. This idea that judgment is ultimately useful as an incentive-shaping tool is elaborated in "Instrumental Judgment and Expectational Consequentialism."

Of course, to us, it often feels like there's something more fundamental: This person is really bad in some intrinsic way; it's not just that I'm saying he's bad to discourage his behavior. But this sense of something more intrinsic to personal responsibility is just part of what it feels like to judge someone at all. Evolution designed us to maintain an attitude of resentment toward an "evildoer" even without any cost-benefit calculations on our part, because in order to maintain credible threats of retaliation for wrongdoing, we need to feel like the evildoer "deserves" to be punished, even if punishing can't right the wrong. If we didn't have this kind of intrinsic feeling of "just deserts," we might not risk exacting revenge on someone who wronged us, because we'd think, rationally, that there's nothing further to be gained by revenge. But if we had that attitude, others would no longer have an incentive to refrain from wronging us in the first place.

My discussion so far has focused on cases of "judging" someone for doing something wrong, but in the modern world, I think there are very few instances where this behavior continues to be appropriate. We have governments to enforce revenge now, and vigilante justice is very dangerous. In addition, many "bad" behaviors tend to result from people living in impoverished, abusive, unloving households and communities. And locking people away in jail for years can often leave them less emotionally healthy than when they entered. In practice, our criminal-justice system needs a lot of reform. And when we interact with people on a personal level, kindness and positive incentives are almost always the better approach.

As an example, consider another robot, Billy. He was raised by a more compassionate programmer than Willy. Billy's programmer nurtured him through loving interactions in order to change the weight that Billy assigns to the reward of hearing a smashing noise from random(0,1) to random(0,0.2). In addition, Billy's programmer gave Billy many other hobbies that have much more positive social value, such as help_old_lady_across_street, hold_door_open_for_person_behind, and promote_concern_for_insect_suffering. Billy rarely smashes his owner's cookie jar because he's focused on doing better things.
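
In the same illustrative Python (Billy's hobby names come from the story; the rest is my assumption), the programmer's nurturing amounts to narrowing one weight distribution and enlarging the action set:

import random

# Loving interactions narrowed the smashing-noise weight from
# random(0,1) to random(0,0.2), so that temptation is never strong.
w_smash = random.uniform(0, 0.2)

# New prosocial hobbies also compete for Billy's attention.
billys_hobbies = [
    "help_old_lady_across_street",
    "hold_door_open_for_person_behind",
    "promote_concern_for_insect_suffering",
    "break_cookie_jar",  # still available, but rarely the top-scoring option
]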

Intentional vs. physical stances

We intuitively feel that people have free will over their actions but a rock doesn't. Also, there are some cases where a person doesn't seem to have free will either, such as if he's insane, has a brain tumor that produces abnormal behavior, and so on. These distinctions seem to result from regarding an agent with either the intentional stance or the physical stance, to use Dennett's distinction.

If we're dealing with an intentional agent, it responds to incentives, including our judgments about its behavior and our penalties for criminal activity. In contrast, a physical process just does what it does, and it pays no attention to our praises or complaints. It doesn't help to "should" at the universe. Animism and religion often put natural phenomena into the intentional category, believing that, for instance, the sun's rising is responsive to human sacrifice, or illnesses are responsive to heavenward prayers.

It makes practical sense to separate physical processes that have free will (intentional stance) from those that don't (physical stance). We see this principle in our legal system, where penalties depend on whether a perpetrator acted with "soundness of mind," i.e., whether the person would have changed his behavior due to the imposition of penalties. White-collar criminals are likely to respond to legal penalties, while those with tumors in their brains are not as likely to do so. This explains the rightful distinctions around free will that we draw in legal contexts.

When free-will attributions help, and when they harm

The reason physics, neuroscience, and other scientific paradigms challenge our notions of free will is that they describe intentional agents in mechanistic, physical terms, and this elicits "physical stance" feelings in our minds. If a person is just a bunch of atoms moving, and if atoms just do what they're determined to do, then how can this person be "responsible"? He's just doing what his atoms made him do. This attitude ignores the fact that our response to his atomic movements might change the course of those movements. The way we regard a person affects his behavior. Likewise, whether we believe we have free will affects our own behavior. There have been studies showing that when people are told they don't have free will, they act less responsibly. The way we mentally frame ourselves (e.g., as deterministic machines) makes a difference to our actions.

That said, there are cases where evoking this "physical stance" / "no free will" frame of mind can be healthy. For instance, when we look at low-level subroutines within ourselves, it may help to avoid feeling needlessly guilty about things that are hard to change. As an example, obesity is largely genetic (heritability of 77% by one estimate), and while it does depend to some extent on rational choices, there's a limit to how much a reasonable exertion of effort can affect it.[1] Beyond that, it's not helpful and potentially damaging to feel guilty about being overweight, even though in some popular conceptions, obesity is "just" an issue of self-control. One paper explains:

Contemporary studies indicate that the heritability of adiposity remains high, even in the face of a strongly obesogenic environment. [...] While the rising prevalence of obesity is related to increasing ease of access to high-energy palatable food combined with diminishing requirement for physical activity, differences in inter-individual susceptibility to obesity are likely to be related to inherited variation in the efficiency of central control mechanisms influencing eating behavior. Such a construct understandably courts unpopularity, since it can appear to diminish the importance of human free will [...]. We argue that a view of obesity that emphasizes the profound biological basis for inter-individual differences in responding to the challenges of achieving a healthy control of nutrient intake should result in a more enlightened attitude toward people with obesity with a consequent reduction in their experience of social and economic discrimination.

We can think of similar examples in many other areas: Depression, attention-deficit disorder, and a vast array of additional psychiatric conditions. On shorter time scales, I can feel how much difference biological factors can make in my decisions and moods compared with rational ones. I like to say we're all at the mercy of the gods[2] of biochemistry.

Downplaying free will can make sense for environmental circumstances too. For instance, maybe you grew up in a harsh environment that made you somewhat unpleasant to deal with at times. Blaming yourself for that doesn't help. Improvements are always relative to where you are. The attitude one adopts toward one's efforts should instead be of the form: "Did I take this small, feasible next step toward self-improvement?" And I'm not even sure if judging oneself is helpful at all. Personally I try to love myself unconditionally and do good things because I care about helping others, not because I risk self-judgment if I fail. I'm lenient with my selfishness in order to avoid burning out and growing resentful.

I think a similar attitude of unconditional love is appropriate for interpersonal interactions. Society already has formal incentive structures to enforce the basic requirements on behavior. I don't want to try to use my love for another person as another carrot. I think people are more likely to grow in healthy ways when they're respected for who they are, wherever they are. This runs contrary to the evolved purpose of judgment, but I think society is at a different place now. In modern times, a more loving, tolerant environment seems to work better. To this extent, it can indeed be helpful to dissolve our intuitions about free will by thinking of ourselves and other people as mechanical processes as a way to suspend feelings of judgment. Here what we're doing is not taking a "more true" perspective on reality—because as noted above, we are in fact intentional agents who do respond to incentives—but rather exploiting our ability to look at things in different cognitive modes and choosing a mode (the "no free will" physical stance) that's more helpful in a particular case. It's about cultivating an attitude in ourselves that we expect to yield healthier outcomes.

Free will and unpredictability

Often free will is associated with unpredictability. Simple computer programs are thought to lack free will because we can tell exactly how they'll behave. If I type "a" on my keyboard, my word processor will display "a", never a different letter. The behavior of these programs is clearly constrained by external factors.

In contrast, when we think about ourselves or other people, we can never fully predict what we or they will choose. There are plenty of statistical regularities, and psychology experiments uncover a number of highly reproducible patterns of human cognition. But on any given occasion, we can't be totally sure of the outcome, and sometimes we're surprised. Sometimes it's claimed that only humans, not machines, can have true creativity.

In fact, humans are machines too, and we are also fundamentally constrained by our environmental inputs, but what separates us from word-processing programs is our complexity. You have 86 billion neurons, some subset of which are firing tens of times per second, and this central nervous system is interacting with some of the 37 trillion other cells in your body, as well as other physical features external to you. It's amazing that the system behaves in as orderly a way as it does! Weather patterns are also amazingly complex, but we don't attribute free will to them. Complexity does not imply non-determination.

It's not surprising that a subset of our neurons can't predict with perfect accuracy what the whole collection will decide—at least not when the decision is a close call. In the same way, pollsters can't perfectly predict the outcome of an election when the race is very close. Like with personal decisions, sometimes you just have to wait until the election is over and see what result was chosen.

Determinism vs. fatalism

Sometimes people picture determinism as meaning "there's nothing I can do to change things." Such scenarios are often portrayed dramatically in literature and films, such as when a protagonist tries to escape fate, either successfully or unsuccessfully. But fatalistic prophecies in the realm of personal decisions are mostly nonsense. Just as no one with present-day computing power can predict whether it will rain on 18 May ten years from now in Washington, DC, no one with present-day computing power can predict your future actions with consummate fidelity in most circumstances. For all practical purposes, the future is non-deterministic from our point of view.

Fatalism about the future comes in degrees. If you have a strong genetic predictor of a future cancer, you are more likely to get that cancer, even if you try hard through lifestyle choices to avoid it. If you already have the cancer at a life-threatening stage, you are reasonably likely to die of it. If you have a severe, general, and incurable learning disorder, you are unlikely to win a Nobel Prize (though it can't be ruled out). In contrast, if a fortune cookie predicts that you will fail your next exam, nothing forces that prediction to come true.

This is all common sense, but fiction sometimes stretches fatalism to extremes, so it's important to remember what determinism is and isn't saying.

Would physical randomness help?

Sometimes it's claimed that quantum mechanics provides a route to free will. Arthur Eddington made this proposal in his 1932 "The Decline of Determinism," and it has been repeated ever since by various philosophers and scientists. Under the Copenhagen interpretation, quantum outcomes are subject to literal randomness during observation events; Bell's theorem shows that certain quantum phenomena can't be explained by deterministic local "hidden variables."

But what "freedom" is there in randomness? Suppose you wanted to go to your friend's birthday party, but then quantum effects in your brain changed your decision, so that instead you now want to go to an art exhibit. Was that change of plan "your" free choice? Of course, it depends where we locate "you." If "you" includes quantum fluctuations, then sure, it was your choice to change plans. But this doesn't jibe well with what we intuitively want free choice to be, which is more about weighing options, imagining outcomes, asking how our hearts feel, thinking about what we've been advised by our mentors, and various other deterministic actions.

In any case, I find the many-worlds interpretation (MWI) of quantum mechanics more plausible than the Copenhagen interpretation, and MWI doesn't contain any true randomness, since all outcomes are deterministically realized with measures that can be deterministically computed from the wave function. I see determinism as one of the many strengths of MWI, because I can't even comprehend what literal randomness would look like. How do you compute something that's literally random? As far as I'm aware, Copenhagen quantum mechanics is the only mainstream theory in physics (??), or in all of science, that contains "true" randomness; this seems fishy to me, especially when the deterministic MWI works just as well.

Dualism and free will

Confusion about free will and confusion about consciousness are similar, and in many cases they're somewhat related. I think a main reason free will seems confusing is that most of us, before internalizing scientific reductionism, picture ourselves as free-floating souls—ghosts in our machine bodies. We think our souls drive our choices. But then if we learn that our bodies operate as purely physical machines, it appears that our souls can't budge them. Like Wallace's robotic pants in The Wrong Trousers, it's as though the machines are running on their own and can't be stopped by our souls. This is why determinism makes us feel hopeless.

When we instead realize that we are the machines, and there are no souls, the situation looks different. Sure, the machines are still making choices, but those are our choices. They belong to us. Just by locating "ourselves" in a different place, the picture changes.

Of course, even if we were souls external to our physical bodies, the problems of determinism would apply as much as before. Either the souls obey soul-type physical laws, in which case they're determined, or else the souls make choices randomly, in which case it's hard to say that the souls deliberately chose their actions. Souls just push the problem back a step without solving anything.

The subjective sense of choice

Steven Kaas made a famous Twitter post:

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This sentiment is instructive but needs clarification. I would define "you" as all the processes in your brain, including the low-level ones. In that case, "you" are not just the king of your brain but the entire kingdom. Indeed, there is no "king" of your brain in the sense of a single region that makes the important decisions; there's more of a parliament, combined with a bureaucracy of task-specific routines. Still, it is true that the parts of your brain that form verbal thoughts like these are not your whole brain. On the other hand, it's also not true that your verbal thoughts are ineffectual. They reverberate to other parts of your brain and may update action inclinations. In this sense, the verbal parts of your brain are like the news media: They report on actions by the government but can also have subsequent effects on actions by the government.

With those caveats in mind, we can turn to what's right about Kaas's quote. There's not a "central you" from which choice emanates. What would such a thing look like? Rather, there are many parts of your brain's government that work together (and sometimes, in opposition). The subjective feeling of "I'm making this choice" is a high-level summary of the state of your brain's kingdom during a governmental dispute. Like a news reporter's article, this summary is mostly ex post facto, except in cases of protracted decisions where the media's reporting has enough time to influence the parliamentary votes.

Like the feeling that "I'm conscious", the sense that "I'm making this choice" is a representation in the language of cartoon metaphysics to describe a more complicated underlying physical state. And like with consciousness, oversimplified metaphysical cartoon representations can cause confusion by producing thoughts that "if I'm just a mechanical system, I can't actually make the final decision" or that "if I'm just a mechanical system, I can't actually be conscious".

In Emotion Explained (p. 410), Edmund T. Rolls suggests

that when this type of reflective, conscious, information processing is occurring and leading to action, the system performing this processing and producing the action would have to believe that it could cause the action, for otherwise inconsistencies would arise, and the system might no longer try to initiate action. This belief held by the system may partly underlie the feeling of free will.

This discussion might seem to belittle the work that goes into making some choices. Suppose you know you should go to bed now, but you're tempted to stay up a little longer. It feels like "I" have to exert effort to make the right choice; can't "I" just sit back and let my brain make the decision for me? Of course not. But exerting effort is also something your brain does as part of its decision, and this sense of effort constitutes some of what the news media reports about your brain's state. The idea that "I'm helpless to change things because I lack free will" is an idea that can sometimes become lodged in brains and, like a harmful news-broadcast meme telling viewers that they're helpless to affect government policy, stifle the parts of your brain that were already exerting effort to make the "right" choices. You can remove this harmful meme from your brain by realizing that it causes you to make poorer choices.

"Making the world better" given determinism?

Vinding (2017) raises the question of whether ethics and improving the world make sense if the laws of physics are deterministic, such that there is only one possible future outcome. Vinding (2017) notes that the view that ethics requires ontological possibilities is "A combustible and controversial claim".

Vinding (2017) goes on to argue that we can't be certain that the laws of physics are deterministic. However, I would take a different approach: even if the laws of physics are deterministic (as I expect they ultimately are), it's still meaningful to talk about "making the future better". That's because this talk refers to what Vinding (2017) calls "hypothetical possibilities":

we are clearly able to think in terms of different outcomes being possible, and to then plan and take action based on such thinking, but that does not imply that those outcomes were ever actual possibilities, as opposed to purely thought up ones that just serve as a thinking tool.

A robot deciding how to act can estimate the value of different possible actions and then choose the highest-scoring action. This choice process is what we mean by "making the world better". We can crudely simulate various possible outcomes and conclude that, relative to those simulations, we expect that the robot's choice "resulted in a better future" than the hypothetical futures described by other simulation trajectories. Someone with infinite computing power and perfect knowledge of the state of the universe could simulate these possible future trajectories exactly, and these more exact simulations would constitute the "truth" about how the world would change if a given possible action were chosen, even if the action is never in fact chosen.
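
Here's a toy Python sketch of that simulate-and-choose process; the one-number world model, action names, and effect sizes are invented for illustration:

# Toy world model: the state is a single welfare score, and each candidate
# action is "simulated" by adding its estimated effect to that score.
action_effects = {"feed_bunny": +2, "do_nothing": 0, "break_jar": -1}

def simulate(state, action):
    # A crude stand-in for imagining "how would the world look if I did this?"
    return state + action_effects[action]

def choose(state, actions):
    # "Making the world better" refers to exactly this deterministic process:
    # evaluate each hypothetical future and pick the action whose simulated
    # future scores highest. No ontological "real possibilities" are needed.
    return max(actions, key=lambda a: simulate(state, a))

print(choose(0, list(action_effects)))  # -> feed_bunny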

It's true that only one outcome is ever chosen (ignoring quantum many worlds, etc.), but "making the world better" only ever refers to the simulation and choice process, not to something more ontologically laden. The ontological existence of "real possibilities" (whatever that would mean) doesn't matter for the context where words like "choice" and "making things better" are applied. Such words are used to refer to the deterministic evaluation of different options by choice-making robots (including humans). Why? Because the process of prudently assessing options when undertaking decisions is the kind of behavior that we want to (deterministically) encourage when using motivating concepts like "ethics" and "making a difference", and rational deliberation is the best referent for these concepts within a deterministic universe.

Choice algorithms that are multiply instantiated

We should think of ourselves as all copies of our algorithm at once because when our algorithm chooses option A instead of option B, all physical instantiations of the algorithm jointly choose A over B. A decision maker should consider the impact of a choice on all the locations where this choice has consequences.

Here's a toy example. Suppose that in addition to being a biological clump of cells, your brain's algorithm is also being run by the population of China in real time. You (and your China-brain copy) learn that the Chinese people are tired of running the China brain and want to get back to more productive activities. You (and your copy) also learn that the rulers of China will only stop running the China brain once you go to sleep, since your brain activity is less interesting during sleep. Now you're faced with a choice: Do I go to bed or stay up a little longer? Because you, as your algorithm, realize that your choice is also instantiated in the China brain, if you stay up longer, the people of China will have to exert more useless effort emulating you. So you choose to go to bed now, and the Chinese people get to stop running your brain.

Considering the impacts of your algorithm in the multiple locations where it's instantiated is a straightforward extension of considering the impacts of your algorithm in a single instance. In either case, you (i.e., your algorithm) are still evaluating the outcomes that you predict would result from different choices as feedback to the final decision.

Here's another example to drive the point home. Suppose you're writing a program that will make requests to a server, and this program will be run on many computers simultaneously. Suppose that the program is given as an input the number of copies of it that are running at once. The program then runs as follows:

# Every copy runs this same deterministic logic, so whichever branch this
# copy takes, every other copy takes it too.
if num_copies_running_at_once < 1000:
    make_request_to_small_server()
else:
    make_request_to_large_server()

The larger server is needed in order to handle a higher volume of simultaneous requests. The idea behind this code is that the program (implicitly, in this case) "knows" that all computers where this code is being run are using the same logic, so if one copy makes a request to a given server, all copies make a request to that server. Thus, the program "chooses" how to act based on knowledge about how all its copies would act if a given choice were made.

Varieties of free will

Free will can take many different forms depending on the situation. Willy had a degree of free will, in which he chose one of three hobbies and decided how to go about engaging in the chosen hobby. Humans have many more potential hobbies, as well as a more complicated decision system that includes not just expected-reward maximization but also many opposing impulse signals from other brain components. When Jesus said, "The spirit is willing, but the flesh is weak" (Matthew 26:41), he was referring to a conflict among different subsystems in his brain. Edmund T. Rolls suggests that lower-level action selection among a more limited set of options based on simple algorithms should count as less "free".

An animal, such as a chicken, has free will in the sense that it, like a human, chooses among possible actions based on expected rewards combined with other reflexive, instinctive, and otherwise less deliberative neural inputs. If the reward landscape for a chicken is changed, its behavior will change. The same can be said of even elementary robots that we find today. It may be less obvious how to tweak the reward landscape of a robot than a chicken, but if we know what its decision input signals are, we can do so.

Previously I emphasized that the physical stance is separate from the intentional stance, but even physical systems can exhibit similar properties of changing their behavior in response to changes in system dynamics. For instance, water flowing down a hill tends to "choose" the path of least resistance, but if you impose a barrier on the easiest path, the water changes its "behavior" and moves in a different way. Ultimately the physical and intentional stances lie on a spectrum. After all, the world is at bottom completely physical, and "intentional behavior" is just a helpful abstraction to describe certain more complex forms of planning and reactivity that some physical systems exhibit to greater degrees than others.

Ultimate responsibility?

Robert Kane believes that free will requires not just choosing among alternative possibilities but also "ultimate responsibility" for who one is in the first place. Galen Strawson responds that you can't be responsible for who you are in the first place, because whatever you started out being was due to factors outside your control. For instance, Willy couldn't have been ultimately responsible for his temptation to break cookie jars, because his valuation of cookie-jar smashing was programmed by someone else. Likewise, a murderer is not "ultimately responsible" for killing someone because even though he decided to undertake the murder at age 35, the fact that he got into that position was based on decisions and circumstances at age 34, and the fact that he got into those circumstances was based on decisions and circumstances at age 33, ..., and those were based on his genes, parents, and other inputs at age 0.

Strawson is correct that we have no ultimate responsibility in the technical sense. Indeed, how could anything have ultimate responsibility for itself? If something didn't start out with certain initial conditions, did it come into existence out of thin air or vacuum fluctuations? But even then, a person can't be responsible for his creation ex nihilo.

That said, for practical purposes it doesn't matter that people lack ultimate responsibility. We still have to impose penalties for violating the law, because this does in fact prevent people from violating the law (in certain cases), even if they're not "ultimately responsible" for law-breaking. Free will is an operational thing; the attitude we choose is about causing targeted changes in the world. Sure, we also happen to have evolved intuitions on the subject, which may or may not align well with present social circumstances. But those intuitions are easily confused, similarly to our intuitions about consciousness, personhood, or other topics.

It is certainly sad that the murderer ended up in a situation where he wanted to murder, just like it's sad that Willy was programmed with a temptation for shattering cookie jars. We can see that the world would have been more "fair" if the murderer had been differently programmed by a better childhood environment and if Willy had been differently programmed in C++. Society has to impose punishments for deterrence and to keep other people safe, but we can see how it may be more effective to change the conditions that (deterministically, though not always predictably) lead people to do harmful things rather than just punishing the seeming "evil essence" of a person. It's the idea of an "evil essence," rather than free will per se, that is an illusion.

Footnotes

  1. "Reasonable effort" means not going to extremes to change one's condition. A person with obesity could theoretically lock himself in a closet for a year to (probably only temporarily) lose weight, but at high cost to mental and social health.
  2. Relatedly, here's a quote attributed to Chuck Palahniuk:

    Experts in ancient Greek culture say that people back then didn't see their thoughts as belonging to them. When ancient Greeks had a thought, it occurred to them as a god or goddess giving an order. Apollo was telling them to be brave. Athena was telling them to fall in love.

    Now people hear a commercial for sour cream potato chips and rush out to buy, but now they call this free will. At least the ancient Greeks were being honest.