A Lower Bound on the Importance of Promoting Cooperation

First written: 3 Jan. 2014; last update: 7 Jun. 2016

This piece suggests a lower-bound Fermi calculation for the cost-effectiveness of working to promote international cooperation, based on one specific branch of possible future scenarios. The purpose of this exercise is to make our thinking about how cooperation might reduce suffering more concrete and its potential more tangible. I do not intend for this estimate to be quoted in comparison with standard DALYs-per-dollar figures, because my parameter settings are so noisy and arbitrary, and more importantly because these types of calculations are not the best way to compare projects for shaping the far future when many complex possibilities and flow-through effects are at play. I enumerate other reasons why advancing cooperation seems robustly positive, although I don't claim that cooperation is obviously better than alternative approaches.

Introduction

Compromise has the potential to benefit most value systems in expectation, by allowing each side in a dispute to get more of what it wants than its fractional share of power. This is wonderful, but how much could compromise matter? In this piece I suggest a Fermi calculation for a lower bound on how much suffering might be prevented by working to promote compromise. The estimates that I use for each variable are more conservative than I think is likely to be the case.

Caveats

I do not think a Fermi calculation like the one I describe below is the best approach for evaluating relative cost-effectiveness. This calculation traces one specific, highly conjunctive branch in the vast space of ways the future might unfold. Most of the expected impact of promoting compromise probably comes from branches that I'm ignoring.

Likewise, activities other than promoting compromise also have many flow-through effects on many different possible future branches. Comparing projects to shape the future requires much more than a single Fermi calculation. We should use additional quantitative and qualitative estimates across many models, as well as general heuristics. One of the strongest arguments for promoting compromise is not that it dominates in a Fermi calculation (probably it doesn't) but that "increasing the pie" for many value systems is generally a good idea and seems more robustly positive than almost anything else.

That said, explicit and detailed Fermi estimates can help to clarify our thinking and identify holes, and this is one reason for undertaking the exercise.

By what fraction could compromise reduce future suffering?

Suppose the following parameter estimates. Remember, these are designed to be conservatively low, not most likely. The estimates in each bullet are conditional probabilities given the outcomes from the previous bullets.

  • 40% chance that humanity doesn't go extinct due to causes other than artificial intelligence (AI) in the next few centuries.
  • 20% chance that humanity will develop strong AI in the next few centuries conditional on not going extinct due to non-AI factors.
  • 5% chance that human values will be encapsulated by strong AI.
  • 5% chance that those values, once encapsulated, would be preserved indefinitely rather than changing in arbitrary directions or converging to some inevitable endpoint.
  • 10% chance that, in the default scenario, there is a competition among nations to build the first strong AI to serve that nation's own interests and values.
  • 10% chance that this competition could be turned to compromise if enough people worked on promoting moral tolerance and international cooperation. For example, creating a world democratic government would make it likely that AI competition could be molded into cooperation.
  • What is "enough people" working to promote cooperation? Say, for example, 1 in 100 working adults on the planet devoting their careers to the cause over the next 200 years. Assuming a population of maybe ~5 billion working people at any given time, that means about 50 million people working on cooperation at a time, which over 200 years amounts to 10 billion person-years.
    • Assume that an effort some fraction of this size has a linearly reduced expected impact. For instance, 5 billion person-years of work instead of 10 billion would mean the chance of turning competition to cooperation is 5% instead of 10%.
    • 50 million people is a lot. There are just ~4,500 PhDs awarded in the social sciences in the US per year. Foreign Affairs magazine has ~150,000 subscribers. The entire US government employs just over 4 million people in civilian and military roles.
  • Absent cooperation, suppose there would be two main superpowers in conflict. Of course, there might be more, but I think the analysis would be basically the same in that case. Imagine that one superpower cares slightly more about suffering reduction than the other. (For example, the USA currently cares more about animal welfare than China.) In particular, suppose that if one country's values controlled the future, the amount of suffering would be X, and if the other's controlled the future, suffering would be 1.02X. Suppose each side has equal odds of winning this "Cold War" race. The expected amount of suffering under winner-takes-all is (X + 1.02X)/2 = 1.01X. Suppose that because of diminishing returns to additional suffering reduction per unit of resources, a compromise arrangement would allow the sides to reach 1.009X suffering instead -- roughly a 1 in 1000 reduction. This specific calculation has been rather detailed, but at a higher level, the suggestion that cooperation could reduce expected future suffering by 1 in 1000 by harmonizing conflicting values across countries seems conservatively low.
  • 10% chance that the difference in suffering between the policies of these two countries would actually be a permanent feature of the AIs that those countries would produce in winner-takes-all scenarios rather than being eliminated by extrapolation.
  • Apply a discount for uncertainty about whether people's efforts actually improve or degrade cooperation in the long run. For example, maybe some activists push for changes to nuclear policy that disrupt the stability of mutual deterrence and make conflicts worse. In particular, suppose the chance is 65% that efforts to promote cooperation actually do promote cooperation and 35% that they hinder it by an equal amount. Then the discount factor is 0.65 - 0.35 = 0.3.

Given these parameter settings, a lower bound on the fraction of future suffering reduced per person-year of work to promote cooperation is

40% * 20% * 5% * 5% * 10% * 10% * [1/(10 billion)] * (1/1000) * 10% * 0.3 = 6 * 10^-21.
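As a sanity check on the arithmetic, here is a minimal Python sketch that just multiplies the parameters from the bullets above; the variable names are my own labels for those bullets, not part of any formal model.

```python
# Minimal sketch: lower bound on the fraction of future suffering reduced
# per person-year of work promoting cooperation, using the conservative
# parameter estimates from the bullets above.

p_no_nonai_extinction  = 0.40      # humanity avoids non-AI extinction
p_strong_ai            = 0.20      # strong AI developed, given survival
p_values_encapsulated  = 0.05      # human values get encapsulated in the AI
p_values_preserved     = 0.05      # those values persist indefinitely
p_ai_race              = 0.10      # default scenario is a national AI race
p_race_to_cooperation  = 0.10      # race turned to compromise, given "enough people"
person_years_needed    = 10e9      # 10 billion person-years of advocacy
fraction_suffering_cut = 1 / 1000  # reduction from harmonizing values
p_difference_persists  = 0.10      # the values gap survives into the AIs
sign_uncertainty       = 0.65 - 0.35  # 65% chance of helping minus 35% of hurting

fraction_per_person_year = (
    p_no_nonai_extinction * p_strong_ai * p_values_encapsulated
    * p_values_preserved * p_ai_race * p_race_to_cooperation
    * (1 / person_years_needed) * fraction_suffering_cut
    * p_difference_persists * sign_uncertainty
)
print(fraction_per_person_year)  # ~6e-21
```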

How much future suffering is in our hands?

  • Suppose, conservatively, that only 10^-5 of total, intensity-weighted hedonic experience in the future is negative. This might include suffering subroutines, sentient simulations of wild animals, etc.
  • Nick Bostrom estimates potential future hedonic experience as 10^38 humans surviving for ~10^10 years in the Virgo Supercluster, or 10^48 experience-years. Of course, some of these might be animals and suffering subroutines, but I'll keep using the "currency" of human experience-years as the reference point.
  • Say the probability that a colonization scenario with this many minds actually happens is 10^-8. Alternatively, we could say that the probability is 10^-6 that a colonization future with this magnitude of computational power happens and has 1% hedonically relevant computations. Any combination of possibilities that leads to an expected 10^-8 fractional multiplier is equivalent.
  • Apply a significant discount factor to account for the incredulity of the proposition that we are in a position to influence this many future experience-years. One might think it's almost impossible that we would happen to be the influential few to affect such a vast future, but model uncertainty suggests we might give some nonzero probability that we are actually in such an incredible period of history. Say the probability that we are is 10^-10.

The expected number of suffering-years in our hands would then be

10^-5 * 10^48 * 10^-8 * 10^-10 = 10^25.
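Again as a sketch, the same arithmetic in Python (the variable names are mine):

```python
# Minimal sketch: expected future suffering-years "in our hands",
# using the conservative parameters from the bullets above.

negative_fraction  = 1e-5   # share of future experience that is suffering
experience_years   = 1e48   # Bostrom-style estimate of future experience-years
p_colonization     = 1e-8   # chance such a colonization scenario happens
anthropic_discount = 1e-10  # chance we really are this influential

suffering_years_at_stake = (
    negative_fraction * experience_years * p_colonization * anthropic_discount
)
print(suffering_years_at_stake)  # ~1e25
```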

Combining the estimates

Multiplying 6 * 10^-21 by 10^25 gives 60,000 expected suffering-years that we can prevent per year of work to promote compromise. Assuming a year of work means 40 hours per week for 50 weeks, this is (60,000)/(40*50) = 30 suffering-years per hour, or 0.5 per minute.

To convert this into a per-dollar estimate, suppose it would take $150K per year to pay someone to work on compromise, assuming that person would otherwise have done something unrelated and altruistically neutral. This figure is very high for a nonprofit salary, but if someone is willing to work for a lot less, chances are she's already committed to the cause and would have a high opportunity cost, because she could be earning to give instead. In order to attract talented people who would otherwise do altruistically neutral work, a high salary would be required. And remember, this is a conservative calculation. 60,000 expected suffering-years divided by $150K is 0.4 suffering-years, or ~150 suffering-days, prevented per dollar. (Here I'm ignoring the fact that future labor-years should be cheaper in present dollars, assuming investment returns outpace increases in wages.)

It's important to remember just how imprecise these particular numbers are. For instance, if I had taken the anthropic discount factor to be 10^-5 instead of 10^-10, we would have had 6 billion suffering-years prevented per year of work, or 40,000 suffering-years prevented per dollar.
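The sketch below ties the two numbers together and reproduces the per-hour, per-minute, and per-dollar figures quoted above, along with the 10^-5 sensitivity check; it only restates the arithmetic and does not endorse the parameters.

```python
fraction_per_person_year = 6e-21  # from the first calculation
suffering_years_at_stake = 1e25   # from the second calculation
salary = 150_000                  # assumed dollars per year of work

per_year_of_work = fraction_per_person_year * suffering_years_at_stake
print(per_year_of_work)                   # 60,000 suffering-years per work-year
print(per_year_of_work / (40 * 50))       # 30 suffering-years per hour
print(per_year_of_work / (40 * 50 * 60))  # 0.5 suffering-years per minute
print(per_year_of_work / salary * 365)    # ~150 suffering-days per dollar

# Sensitivity: an anthropic discount of 1e-5 instead of 1e-10
# multiplies everything by 1e5.
print(per_year_of_work * 1e5)             # 6 billion suffering-years per work-year
print(per_year_of_work * 1e5 / salary)    # 40,000 suffering-years per dollar
```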

Weird physics

This scenario assumed a bound of 10^48 experience-years, but there's some chance physics is other than we think and allows for obscenely higher amounts of computation. Indeed, there's a nonzero probability that infinite computation is possible, implying infinite future suffering. Our calculation would then blow up to say that every second spent on promoting compromise prevents infinite expected suffering.

A few thoughts on this:

  1. Blowing up the number of experience-years we affect shouldn't change relative comparisons among activities that shape the far future. (And every activity shapes the far future to some extent.) It merely highlights that everything we do has a remote chance of being massively more important than it seems.
  2. Black swans like these are a main reason why Fermi calculations are incomplete and inadequate to capture all the factors we need to consider for choosing policies. It can be much more stable to use heuristics like "work on positive-sum projects that make it more likely that our descendants can use their vastly greater wisdom to tackle problems that are beyond our grasp."
  3. It seems absurd that we would be the lucky few to be in a position to influence infinitely many future minds. The probability of this seems naively like it should tend to 1/infinity. In general, I think anthropic considerations are an Achilles heel for these calculations about the astronomical importance of the far future.

Other reasons to support cooperation

I've taken pains to clarify that the calculation in this piece is hardly exhaustive of why cooperation is important but only scratches the surface with one concrete scenario. There are many other reasons for suffering reducers to support international cooperation:

  1. Rogue developers. While it seems reasonable to assume that most countries care appreciably about reducing suffering, the same needn't be true for smaller groups of "rogue" AI developers. Stronger global governance would help states enforce coordination rather than letting a bunch of individual groups compete against governments and against each other to build the first strong AI to satisfy their own peculiar ideologies. Note that this kind of enforcement could be desirable even for the people who would have joined the rogue groups because they would be forced to cooperate rather than defect on a (multi-player) prisoner's dilemma, which is Pareto-preferred by every prisoner in the game. (Compare with "Why do hockey players support helmet rules, even though they choose not to wear helmets when there is no rule?")
  2. Increased humaneness. Cooperation and tolerance make society more humane. When violence is less of a concern, people have more room to explore self-expression, and cultural heroes may shift from being focused on military victory to being focused on kindness. Conversely, the expanding circle of compassion can help advance cooperation, by showing people that those in other countries aren't really very different, and we're all citizens of the world.
  3. Maintaining stability and rule of law. Some of the most significant potential sources of suffering in the future are reinforcement-learning algorithms, artificial-life simulations, and other sentient computational processes. Reducing these forms of suffering would plausibly require machine-welfare laws or norms within a stable society. It's hard to imagine humane concerns carrying currency in a competitive, Wild West environment. International cooperation and other measures to maintain social tranquility are important for enabling more humane standards for industrial and commercial computations.
  4. More time to reflect. Cooperation is expected to slow or avert AI arms races, which means humanity should have more time to improve social institutions and philosophical reflectiveness before making potentially irrevocable decisions about the future of the galaxy.
  5. Being nice. Because cooperation is good for everyone, if suffering reducers promote it, others will appreciate this fact and may be more inclined to reciprocate toward suffering reducers by doing them favors in other ways. In other words, promoting international cooperation is a form of interpersonal cooperation with other altruists who have different values from ours.
  6. Common-sense heuristics. Almost everyone on Earth agrees that stronger international cooperation would be good. "World peace" is a near universal goal, even though it has a ring of platitude by now.
  7. Robustness. Probably the strongest reason, which generalizes some of the scenarios discussed above, is that cooperation puts our descendants in a better position, both in terms of social institutions and moral values, to be able to tackle issues that we have no hope of addressing today. It's quite plausible that most of the suffering in the future will come from something that we can't even anticipate now. We should aim to empower our descendants to handle unknown unknowns, by advancing positive social technology -- including institutions for peace and compromise -- relatively faster than scientific technology.

Putting our descendants in a better position to address challenges is useful even if strong AI and space colonization never materialize. Even if humans just continue on Earth for a few million years more, cooperation still improves our trajectory. Of course, this case involves vastly less suffering for us to mitigate, and absent goal-preserving AI, what we do now may not have a significant impact on what happens tens of thousands of years hence, so this scenario is negligible in the overall calculations. Still, those who feel nervous about tiny probabilities of massive impacts would appreciate this consideration. That said, if our only concern were Earth in the very short term, then plausibly other interventions would appear more promising.

A value-of-information argument for future focus

There's a general argument that we should focus on far-future scenarios even if they seem unlikely to materialize due to anthropic considerations because of value of information. In particular, suppose there were two main scenarios to which we assigned equal prior probability before anthropic updating: ShortLived, where humanity lasts only a few more centuries, and LongLived, where humanity lasts billions more years. Say LongLived has N times as many experience-moments as ShortLived and so is N times as important. Correspondingly, the anthropic-adjusted probability of LongLived might, under certain views of anthropics, tend toward 1/N. The expected value of ShortLived is (probability)*(value) = (roughly 1)*(1) = 1 compared against an expected value for LongLived of (probability)*(value) = (1/N)*N = 1. So it's not clear whether to focus on short-term actions (e.g., reducing wild-animal suffering in the coming centuries) or long-term actions (e.g., promoting international cooperation, good governance, and philosophical wisdom in order to improve the seed conditions for the AI that colonizes our galaxy).
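As a toy illustration of why the two expected values come out equal (the value of N here is arbitrary and only for illustration):

```python
N = 10**6  # arbitrary illustrative ratio of experience-moments

ev_short_lived = 1.0 * 1.0      # ~certain to matter, value normalized to 1
ev_long_lived  = (1.0 / N) * N  # anthropically discounted, but N times the value

print(ev_short_lived, ev_long_lived)  # both equal 1.0
```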

When we consider value of information, it pushes toward longer-term actions, because they leave open the option of returning to focus on short-term actions if further analysis leads to that conclusion. To make the explanation simple, imagine that halfway through the expected lifetime of humanity given ShortLived, altruists reassessed their plans to decide whether they should continue doing actions targeting LongLived futures or focus instead on ShortLived futures. For the sake of clarity, imagine that at this juncture, they have perfect knowledge about whether to focus on short-term or long-term futures. If long-term futures were best to focus on, they would have already been doing the right thing so far and could stick with it. If short-term futures were more important, they could switch to working on short-term futures for the remaining half of humanity's lifetime and still get half the total value they would have gotten by working on short-term issues from the beginning.[1]

Of course, a reverse situation could also be true: start focusing on short-term futures and then re-evaluate to decide whether to focus on long-term futures halfway. The difference is that if people have focused on long-term futures from the beginning, they'll have more wisdom and capacity at the halfway point to make this evaluation. This is an instance of the general argument for frontloading wisdom and analysis early and then acting later. Of course, there are plenty of exceptions to this -- for instance, maybe by not acting early, people lose motivation to act altruistically at all. This general conceptual point is not airtight but merely suggestive.

In personal communication, Will MacAskill made a similar argument about "option value" in a related context and thereby partly inspired this section. Needless to say, there are other considerations besides option value in both directions. For instance, there's greater entropy between our actions now and the quality of experience-moments billions of years from now (though a nontrivial probability of a pretty small entropy, assuming we influence a goal-preserving or otherwise politically stable outcome). Meanwhile, experience-moments of the future may have greater intensity, so the stakes may be higher.

Finally, as was hinted in the Fermi calculation, we could fudge a way to make the far future dominate by saying there's a nontrivial probability that our anthropic discount is wrong and that the future really is as important as it seems naively. This may work, though it also feels suspicious because similar sorts of model-uncertainty arguments could be invoked to justify lots of weird considerations dominating our calculations. The importance of the far future seems one of the more robust sentiments among intelligent thinkers, though, so the fudge feels less hacky in this case.

Footnotes

  1. Suppose the value of short-term work is known to be 1. The value of long-term work is either N with probability 1/N or is 0 otherwise. By doing just short-term work, we could guarantee a value of 1. By doing long-term work that includes research about whether long-term work is worthwhile, we could choose -- at the halfway point of humanity's lifetime in the ShortLived case -- to switch to short-term work or not. In the exaggerated scenario where we learn perfectly whether far-future work will pay off, we switch to doing short-term work in (N-1)/N of the cases, garnering value of 0.5 for the remaining time. And in 1/N cases, we stick with long-term work and get a payoff of N. So now our expected value is [(N-1)/N] * 0.5 + (1/N) * N, which is basically 1.5 if N is large. This is 50% better than just starting with the short-term work.
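A tiny numeric check of this footnote's expected-value formula (the value of N is arbitrary; any large N gives roughly the same answer):

```python
N = 10**6  # arbitrary large N for illustration

value_short_only = 1.0  # guaranteed value of doing only short-term work

# Long-term work with a halfway reassessment: in (N-1)/N of cases we learn
# long-term work won't pay off and switch, salvaging half the short-term
# value; in 1/N of cases we stay and collect a payoff of N.
value_long_with_option = ((N - 1) / N) * 0.5 + (1 / N) * N

print(value_long_with_option / value_short_only)  # ~1.5, i.e., ~50% better
```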