Friday, December 31, 2010

A stupid way to invest

Here's a fun little puzzle for introducing some issues in decision theory. You want to invest a sum of money that is very large for you (maybe it represents all your present savings, and you are unlikely to save that amount again), but not large enough to perceptibly affect the market. A reliable financial advisor suggests that you diversify your investment across n different stocks, s1,...,sn, putting xi dollars in si. You think to yourself: "That's a lot of trouble. Here is a simpler solution that has the same expected monetary value, and is less work. I will choose a random number j between 1 and n, such that the probability of choosing j=i is proportional to xi (i.e., P(j=i)=xi/(x1+...+xn)). Then I will put all my money in sj." It's easy to check that this method does have the same expected value as the diversified strategy. But it's obvious that this is a stupid way to invest. The puzzle is: Why is this stupid?
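To make the comparison concrete, here is a small Monte Carlo sketch (the three stocks, their dollar allocations, and the uniform return model are invented for illustration): both strategies have the same expected payoff, but the randomized all-in strategy spreads its outcomes far more widely.

```python
import random
import statistics

random.seed(0)  # deterministic illustration

# Hypothetical setup: three stocks with the advisor's dollar allocations.
allocations = [5000.0, 3000.0, 2000.0]  # x1, x2, x3
total = sum(allocations)

def stock_returns():
    # Toy model: each stock independently returns 0.5x to 1.7x (mean 1.1x).
    return [random.uniform(0.5, 1.7) for _ in allocations]

def diversified(returns):
    # Put x_i dollars in stock s_i, as the advisor suggests.
    return sum(x * r for x, r in zip(allocations, returns))

def randomized(returns):
    # Pick one stock j with probability x_j/(x1+...+xn); invest everything in it.
    j = random.choices(range(len(allocations)), weights=allocations)[0]
    return total * returns[j]

trials = 100_000
div = [diversified(stock_returns()) for _ in range(trials)]
rnd = [randomized(stock_returns()) for _ in range(trials)]

print(statistics.mean(div), statistics.mean(rnd))    # means agree (about 1.1 * total)
print(statistics.stdev(div), statistics.stdev(rnd))  # spreads differ sharply
```

The matching means reflect the expected-value calculation in the puzzle; the much larger standard deviation of the randomized strategy is what risk-aversion answers latch onto.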

Well, one standard answer is this. This is stupid because utility is not proportional to dollar amount. If the sum of money is large for you, then the disutility of losing everything is greater than the utility of doubling your investment. If that doesn't satisfy, then the second standard answer is that this is an argument for why we ought to be risk averse.

Maybe these answers are good. I don't have an argument that they're not. But there is another thought that from time to time I wonder about. We're talking of what is for you a very large sum of money. Now, the justification for expected-utility maximization is that in the long run it pays. But here we are dealing with what is most likely a one-time decision. So maybe the fact that in the long run it pays to use the simpler randomized investment strategy is irrelevant. If you expected to make such investments often, the simpler strategy would, indeed, be the better one—and would eventually result in a diversified portfolio. But for a one-time decision, things may be quite different. If so, this is interesting—it endangers Pascal's Wager, for instance.

Tuesday, December 28, 2010

Omnipotence and omniscience

  1. Every omnipotent being is perfectly free.
  2. Every perfectly free being knows every fact and is not wrong about anything.
  3. Therefore, every omnipotent being knows every fact and is not wrong about anything.
Premise (1) is, I think, very plausible. What about (2)? Well, perfect freedom requires perfect rationality and a lack of "imaginative constraints". Imaginative constraints are cases where one cannot will something because one can't think of it. For instance, Cleopatra couldn't will to speak Esperanto, because she didn't have the concept of speaking Esperanto. A lack of imaginative constraints requires quite a bit of knowledge—one has to know the whole space of possible actions. But not only must one know the whole space of possible actions, one must also know everything relevant to evaluating the reasons for or against these actions. But, plausibly, every fact will be relevant to evaluating the reasons for or against some action. Consider this fact, supposing it is a fact: tomorrow there will occur an even number of mosquito bites in Australia. This is a pretty boring fact, but it would be relevant to evaluating the reasons for or against announcing that tomorrow there will occur an even number of mosquito bites in Australia. If this is right, then perfect freedom requires complete knowledge of everything.

In particular, open theists can't take God to be omnipotent.  There is another route to that conclusion.  If open theism is true, God can't now know what I will freely do, for instance what I will write in my next sentence.  But if God can't now know what I will write in my next sentence, then he can't intentionally bring it about that right now (open theists need to accept absolute simultaneity, of course) on Pluto there exists a piece of paper saying what my next freely produced sentence will be.  But to be unable to do that would surely be a limitation of God's power.

Monday, December 27, 2010

Science fun for kids and adults

This isn't philosophy, but I've been having fun with sciencey (sciency?) things.

I just did this little demonstration for my five-year-old: I took a 12ml syringe with no needle attached, and filled it about 15% with hot tap water, turned it tip up, and used the plunger to push out the air.  I then plugged the tip of the syringe with my finger, and pulled the plunger back, creating a partial vacuum (obviously, some air leaks back in).  The water immediately started to boil, thereby demonstrating that the boiling point of water goes down as air pressure goes down.

Another step one can add (I didn't) is to touch the water that had just been boiling and observe that it's not boiling hot (I guess if one does that with a five-year-old, one accompanies it with warnings that normally water that had just been boiling is hot).  A more sophisticated experiment would involve measuring the temperature of the water before and after the boiling, comparing with a control sample in another syringe that hadn't been made to boil, and seeing if the boiling removes thermal energy (as I expect it does).

The other fun sciencey thing I got to do was that last night I went to our astronomy club's observatory and got to operate the 24" scope.  Here's a quick photo I took of a small portion of the Andromeda Galaxy, showing a large star cloud (big circle) in it, an open cluster (small circle) in it, and a bunch of dark lanes, presumably due to Andromedan dust blocking out the light.  The photo is about 17 arcmin on each side.

I also took a quick photo of Comet Hartley, which I had previously seen in the fall.  The comet is down and left of center, with coma going up and to the right.

Sunday, December 26, 2010

Hierarchy and unity

Vatican II gives a very hierarchical account of unity in the Church:

This collegial union is apparent also in the mutual relations of the individual bishops with particular churches and with the universal Church. The Roman Pontiff, as the successor of Peter, is the perpetual and visible principle and foundation [principium et fundamentum] of unity of both the bishops and of the faithful. The individual bishops, however, are the visible principle and foundation of unity in their particular churches, fashioned after the model of the universal Church, in and from which churches comes into being the one and only Catholic Church. For this reason the individual bishops represent each his own church, but all of them together and with the Pope represent the entire Church in the bond of peace, love and unity. (Lumen Gentium 23)
The unity of each local Church is grounded in the one local bishop, and the unity of the bishops is grounded in the one pope. Unity at each level comes not from mutual agreement, but from a subordination to a single individual who serves as the principle (principium; recall the archai of Greek thought) of unity. This principle of unity has authority, as the preceding section of the text tells us. In the case of the bishops, this is an authority dependent on union with the pope. (The Council is speaking synchronically. One might also add a diachronic element whereby the popes are unified by Christ, whose vicars they are.)

A hierarchical model of unity is perhaps not fashionable, but it neatly avoids circularity problems. Suppose, for instance, we talk of the unity of a non-hierarchical group in terms of the mutual agreement of the members on some goals. But for this to be a genuine unity, the agreement of the members cannot simply be coincidental. Many people have discovered for themselves that cutting across a corner can save walking time (a consequence of Pythagoras' theorem and the inequality a²+b²<(a+b)² for positive a and b), but their agreement is merely coincidental and they do not form a genuine unified group. For mutual agreement to constitute people into a genuine group, people must agree in pursuing the group's goals at least in part because they are the goals of the group. But that, obviously, presents a vicious regress: for the group must already exist for people to pursue its goals.

The problem is alleviated in the case of a hierarchical unity. A simple case is where one person offers to be an authority, and others agree to be under her authority. They are united not by their mutual agreement, but by all subordinating themselves to the authority of the founder. A somewhat more complex case is where several people come together and agree to select a leader by some procedure. In that case, they are still united, but now by a potential subordination rather than an actual one. This is like the case of the Church after a pope has died and another has yet to be elected. And of course one may have more complex hierarchies, with multiple persons owed obedience, either collectively or in different respects.

This, I think, helps shed some light on Paul's need to add a call for a special asymmetrical submission in the family—"Wives, be subject to your husbands, as to the Lord" (Eph. 5:22)—right after his call for symmetrical submission among Christians: "Be subject to one another out of reverence for Christ" (Eph. 5:21). Symmetrical submission is insufficient for genuine group unity. And while, of course, everyone in a family is subject to Christ, that subjection does not suffice to unite the family as a family, since subjection to Christ equally unites two members of one Christian family as it does members of different Christian families. The need for asymmetrical authority is not just there for the sake of practical coordination, but helps unite the family as one.

In these kinds of cases, it is not that those under authority are there for the benefit of the one in authority. That is the pagan model of authority that Jesus condemns in Matthew 20:25. Rather, the principle of unity fulfills a need for unity among those who are unified; the one in authority serves by unifying.

There is a variety of patterns here. In some cases, the individual in authority is replaceable. In others, there is no such replaceability. In most of the cases I can think of there is in some important respect an equality between the one in authority and those falling under the authority—this is true even in the case of Christ's lordship over the Church, since Christ did indeed become one of us. But in all cases there is an asymmetry.

Here is an interesting case. The "standard view" among orthodox Catholic bioethicists (and I think among most pro-life bioethicists in general) is that:

  1. Humans begin to live significantly before their brains come into existence.
  2. Humans no longer live when their brains have ceased all function (though their souls continue to exist).
There is an apparent tension between these two claims. Claim (2) suggests that brains are central to our identity as living animals. Claim (1) suggests otherwise. But there is a way of seeing the rest of the human body as hierarchically subject to the brain that allows one to defend both (1) and (2). For there is a crucial difference between the state of the embryonic body prior to the brain's formation and the state of the adult body after the brain's destruction. In the embryonic case, there is a developmental striving for the production of a brain to be subject to. This is like a group that has come together to select a leader, and they are already unified by their disposition to be subject to the leader once selected. In the case of an adult all of whose brain function has ceased, even if there is heartbeat and respiration (say, because the news that the brain has ceased to function hasn't reached the rest of the body, or because of electrical stimulation), there is no striving towards the production of a brain to be subject to. This is like a bunch of people whose leader has died and where there is neither disposition nor obligation to select another: the social group has effectively been dissolved.

Thursday, December 23, 2010

Cognitivist normative and metaethical relativism

Cognitivist normative moral relativism is the thesis that for all x and A:
  1. x morally ought to A if and only if x believes that she[note 1] morally ought to A.
(Notice that one then has to drop "ought implies can".) Cognitivist normative moral relativism is a thesis at the normative level that tells us what, in fact, is obligatory.
Cognitivist metaethical moral relativism wants to add to (1) a parallel account of what it is to have a moral ought. I want to spend this post thinking about whether this can be done. The simplest attempt is:
  2. What it is for it to be the case that x morally ought to A is for x to believe that she morally ought to A.
But this is viciously circular, since "morally ought" appears on both sides of the definition.
But perhaps there is some way of redescribing the belief without mentioning its content. Maybe, for instance, there is some "pragmatic" account of what it is to believe that one morally ought to A in terms of patterns of emotion and behavior. But that threatens to become a non-cognitive account, and it is cognitivist moral relativism that we're looking at. (Maybe, though, one has a pragmatic account of all belief and cognition? If so, then maybe one can run this line.)
Or maybe there is some other "ought" that one can put in the definiens in (2). Perhaps, what it is for it to be the case that x morally ought to A is for x to believe that she simply ought to A, or for x to believe that she all things considered ought to A. Let's take the second option for definiteness—any similar proposal will have the same problem.
Then, the belief that one morally ought to A and the belief that one all things considered ought to A either are or are not the same belief. If they are the same belief, our modification of (2) remains circular, since "all things considered" is just a synonym for "morally". And if they are not the same belief, then we get an account that surely conflicts with (1). For if they are not the same belief, then someone could believe that she morally ought to A without believing that she ought all things considered to A. By the analogue of (2), it is not the case that she morally ought to A, and by the analogue of (1), it is the case that she morally ought to A.
So, it is difficult to come up with a cognitivist relativistic metaethical theory that neatly matches (1). One might give up on cognitivism, but then one needs to modify (1), since (1) commits one to beliefs about what one morally ought. The other move is to accept (1) but couple it with a non-relativistic metaethics. For instance, it is prima facie coherent to conjoin (1) with:
  3. What it is for it to be the case that x morally ought to A is for God to command x to A.
If one wants one's normative ethics to hold necessarily, one should then say that necessarily God commands everybody who is capable of moral beliefs to do what they believe they ought to do and that he commands nothing that goes beyond that. Such a metaethically absolutist normative relativism is fairly coherent, but also not plausible. Why think that God commands this and nothing beyond this? Similarly with other views, like natural law or virtue ethics, that one can plug in metaethically. The resulting theory may be coherent, but it does not appear plausible.
But there is one version of the theory that is kind of interesting. Suppose that
  4. Necessarily, if God made persons other than himself, then out of a concern for their moral life he made them morally omniscient, where x is morally omniscient provided that (x believes that she ought to A) if and only if x ought to A, and x knows that she ought to A if and only if x believes that she ought to A.
One could couple (4) with any metaethics compatible with theism, and then one gets (1) as a consequence. Of course, on its face, (4) appears pretty implausible when conjoined with a non-relativistic metaethics. There seems to be too much moral disagreement. To make (4) plausible on non-relativistic metaethics, one might have to combine (4) with a view on which people have all sorts of moral beliefs that they apparently don't know about. But notice that at this point we've departed quite far from the spirit of relativism.

Tuesday, December 21, 2010

Lunar eclipse


I stayed up to watch the lunar eclipse.  It was quite nice.  I took the kids out for the grand finale.

I also took a whole bunch of photos.  I'll be editing them a bit more and trying to write a script to align the frames better (and maybe even de-rotate them), but for now, here is the set.  The animation is jumpy because I wasn't taking pictures all the time--some of the time I was indoors watching Starship Exeter.  I used a perl script and ImageMagick to animate the photos, using the EXIF time stamps and speeding up by a factor of 400.  Some of the shadows are odd--I've had trouble with shadows of clouds, branches and internal telescope structures.  I'll eventually try to clean up the photos and remove the bad ones.


I suppose one of the remarkable things about an eclipse is that one is used to astronomical views changing much more slowly.  The video covers a period of about an hour.

Monday, December 20, 2010

Repair and repentance

I love fixing things. I just fixed two keys on my phone, by scrubbing the contacts out with acetone on a toothpick. Since last night, I've also been trying to fix the WiFi on my PDA, by installing a software upgrade and running the battery out. On Saturday, I was repairing some of our furniture. There is a joy when something that once was broken is working again. And especially when it is working better than when it was new, e.g., because I can use a better glue (Titebond II) than what was probably used at the furniture factory.

This is a faint image of the joy of our Shepherd when he brings us back after we have strayed and not only restores us to the grace we had before the Fall, but raises us to something higher. The disanalogy is that all too often the stuff I fix is stuff that I broke (see the "Update" in the link), or that wasn't made right in the first place, while the creatures that God fixes are ones that he made right, but they freely broke themselves, or were broken by others that freely broke themselves. I am glad my possessions don't freely break. I'd be mad at them.

A moral argument for theism

  1. (Premise) Humans have intrinsic goods that do not reduce to pleasure.
  2. (Premise) If there are intrinsic proper functions in humans, probably God exists.
  3. (Premise) If there are no intrinsic proper functions in humans, either humans have no intrinsic goods or all intrinsic human goods reduce to pleasure.
  4. Therefore, probably God exists.
The first premise is the least controversial, but it, too, is controversial. Premise 2 requires a subsidiary argument. If God doesn't exist, then our only hope for an explanation of intrinsic proper functions in humans is evolutionary accounts of proper function. But evolutionary accounts of proper function all fail (see, for instance, this argument). Premise 3 is, I think, fairly plausible—except perhaps for pleasure and goods reducible to pleasure, any intrinsic human goods we can imagine (e.g., friendship, wisdom, etc.) are only good on the assumption that there is such a thing as the intrinsic proper function in a human. A lot more needs to be said about each premise, of course.

Notice that unlike other moral arguments for theism, this one does not necessarily lead to a divine command ethics. The analogue to divine command ethics would be a "designer's purpose" view of proper function. But I think that doesn't give intrinsic proper function.

Thursday, December 16, 2010

An argument against euthanasia

  1. (Premise) It is wrong to euthanize a patient who does not give valid consent for euthanasia.
  2. (Premise) Valid consent is not the expression of a mental state that constitutes an abnormal mental condition.
  3. (Premise) Suicidality is an abnormal mental condition.
  4. (Premise) Consent is not valid when it comes from external threats.
  5. (Premise) A resolve to die is an instance of suicidality, unless it comes from external threats.
  6. (Premise) Consent for euthanasia is the expression of a non-threat-motivated resolve to die, unless it comes from external threats.
  7. A patient either consents or does not consent to euthanasia. (Tautology)
  8. If a patient consents to euthanasia, the consent is not valid. (2-6)
  9. Therefore, no one gives valid consent for euthanasia. (7 and 8)
  10. Therefore, it is always wrong to euthanize a patient. (1 and 9)

Presumably, defenders of euthanasia will deny at least one of 3 and 5, thereby denying that all non-threat-motivated resolves to die are abnormal mental conditions.

But now take a paradigmatic case of a suicide. Jones is a disgraced lawyer. She has gambled her clients' money and lost, driving some of her clients to suicide, and has done all sorts of other spectacularly bad things. She now thinks that because of facts about her psychological make-up, she will never again be able to hold up her head in society given how infamous her case is. And so she attempts suicide. We think we should stop her and that she is in an abnormal mental condition. But is Jones' case significantly different from that of Smith who is facing unremitting physical pain and the alleged indignity of medical treatment for the rest of her life? Suppose Jones is right that given her psychological make-up and her social environment, the rest of her life will be full of psychological pain and social indignity (including jail, which is surely more undignified than just about any medical procedure). It seems that if we think, as we should, that Jones' resolve to die is an abnormal suicidality, we should think the same thing about Smith.

Now, we might say this. Jones is only facing unremitting psychological pain because of an underlying psychological abnormality. Normal people bounce back, and so Jones is not normal. Be that as it may, this abnormal condition of Jones stands to Jones' motivation to die in exactly the same way that Smith's abnormal physical condition stands to Smith's motivation to die. Both of them have a condition that will almost certainly render the rest of their lives miserable. That in the one case the condition is psychological and in the other case it is physical surely makes little difference. Besides, the line between the psychological and physical is hard to draw (though for some purposes a rough-and-ready distinction is helpful). A crucial part of Smith's misery will be pain, and pain is a psychological phenomenon. (And surely it makes no difference whether Smith's pain is normal or abnormal.)

Of course, there is the difference that Smith hadn't done terrible things in the past, while Jones had. But we don't stop Jones from suicide primarily because she had done wicked deeds. We stop her because that's the thing to do when someone is suicidal. And we should likewise stop Smith from killing herself, and a fortiori not help her to do so.

Wednesday, December 15, 2010

Risk reduction policies

The following policy pattern is common.  There is a risky behavior which a portion of a target population engages in.  There is no consensus on the benefits of the behavior to the agent, but there is a consensus on one or more risks to the agent.  Two examples:
  • Teen sex: Non-marital teen sex, where the risks are non-marital teen pregnancy and STIs.
  • Driving: Transportation in motor vehicles that are not mass transit, where the risks are death and serious injury.
In both cases, some of us think that the activity is beneficial when one brackets the risks, while others think the activity is harmful.  But we all agree about the harmfulness of non-marital teen pregnancy, STIs, death and serious injury.

In such cases, it is common for a "risk-reduction" policy to be promoted.  What I shall (stipulatively) mean by that is a policy whose primary aim is to decrease the risk of the behavior to the agent rather than to decrease the incidence of the behavior.  For instance: condoms and sexual education not centered on the promotion of abstinence in the case of teen sex; seat-belts and anti-lock brakes in the case of driving.  I shall assume that it is uncontroversial that the policy does render the behavior less risky.  

One might initially think--and some people indeed do think this--that it is obvious, a no-brainer, that decreasing the risks of the behavior brings benefits.  There are risk-reduction policies that nobody opposes.  For instance, nobody opposes the development of safer brakes for cars.  But other risk-reduction policies, such as the promotion of condoms to teens, are opposed.  And sometimes the opponents argue that the risk-reduction policy will promote the behavior in question, and hence that it is not clear that the total social risk will decrease.  It is not uncommon for the supporters of the risk-reduction policy to think the policy's opponents "just don't care", are stupid, and/or are motivated by something other than concerns about the uncontroversial social risk (and indeed the last point is often the case).  For instance, when conservatives worry that the availability of contraception might increase teen pregnancy rates, they are thought to be crazy or dishonest.

I will show, however, that sometimes it makes perfect sense to oppose a risk-reduction policy on uncontroversial social-risk principles.  There are, in fact, cases where decreasing the risk involved in the behavior increases total social risk by increasing the incidence.  But there are also cases where decreasing the risk involved in the behavior decreases total social risk.  

On some rough but plausible assumptions, together with the assumption that the target population is decision-theoretic rational and knows the risks, there is a fairly simple rule.  In cases where a majority of the target population is currently engaging in the behavior, risk reduction policies do reduce total social risk.  But in cases where only a minority of the target population is currently engaging in the behavior, moderate reductions in the individual risk of the behavior increase total social risk, though of course great reductions in the individual risk of the behavior decrease total social risk (the limiting case is where one reduces the risk to zero).

Here is how we can see this.  Let r be the individual uncontroversial risk of the behavior.  Basically, r=ph, where p is the probability of the harm and h is the disutility of the harm (or a sum over several harms).  Then the total social risk, where one calculates only the harms to the agents themselves, is T(r)=Nr, where N is the number of agents engaging in the harmful behavior.  A risk reduction policy then decreases r, either by decreasing the probability p or by decreasing the harm h or both.  One might initially think that decreasing r will obviously decrease T(r), since T(r) is proportional to r.  But the problem is that N is also dependent on r: N=N(r).  Moreover, assuming the target population is decision-theoretic rational and assuming that the riskiness is not itself counted as a benefit (both assumptions are in general approximations), N(r) decreases as r increases, since fewer people will judge the behavior worthwhile the more risky it is.  Thus, T(r) is the product of two factors, N(r) and r, where the first factor decreases as r increases and the second factor increases as r increases.  

We can also say something about two boundary cases.  If r=0, then T(r)=0.  So reducing individual risk to zero is always a benefit with respect to total social risk.  Of course any given risk-reduction policy may also have some moral repercussions--but I am bracketing such considerations for the purposes of this analysis.  But here is another point.  Since presumably the perceived benefits of the risky behavior are finite, if we increase r to infinity, eventually the behavior will be so risky that it won't be worth it for anybody, and so N(r) will be zero for large r and hence T(r) will be zero for large r.  So, the total social risk is a function that is always non-negative (r and N(r) are always non-negative), and is zero at both ends.  Since for some values of r, T(r)>0, it follows that there must be ranges of values of r where T(r) decreases as r decreases and risk-reduction policies work, and other ranges of values of r where T(r) increases as r decreases and risk-reduction policies are counterproductive.

To say anything more precise, we need a model of the target population.  Here is my model.  The members of the population targeted by the proposed policy agree on the risks, but assign different expected benefits to the behavior, and these expected benefits do not depend on the risk.  Let b be the expected benefit that a particular member of the target population assigns to the activity.  We may suppose that b has a normal distribution with standard deviation s around some mean B.  Then a particular agent engages in the behavior if and only if her value of b exceeds r (I am neglecting the boundary case where b=r, since given a normal distribution of b, this has zero probability).  Thus, N(r) equals the number of agents in the population whose values of b exceed r.  Since the values of b are normally distributed with pre-set mean and standard deviation, we can actually calculate N(r).  It equals (N/2)erfc((r-B)/s), where erfc is the complementary error function, and N is the population size.  Thus, T(r)=rN(r)=(rN/2)erfc((r-B)/s).
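These formulas are simple enough to explore with Python's standard-library erfc. The sketch below keeps the post's expression for N(r), normalizes by the population size N, and uses the parameter values B=1 and s=1 from the graph discussion; the function names are my own.

```python
from math import erfc

B, s = 1.0, 1.0  # mean and spread of the assigned benefit b

def engaging_fraction(r):
    # N(r)/N: fraction of the population whose benefit b exceeds r,
    # per the post's formula (N/2)erfc((r-B)/s).
    return erfc((r - B) / s) / 2

def per_capita_risk(r):
    # T(r)/N = r * N(r)/N
    return r * engaging_fraction(r)

# The per-capita risk vanishes at both ends and peaks in between.
rs = [i / 1000 for i in range(5001)]  # r from 0 to 5
r_peak = max(rs, key=per_capita_risk)
print(r_peak)                      # the post reports a peak near r = 0.95
print(engaging_fraction(r_peak))   # roughly half the population engaging
```

On this model the curve is extremely flat near its maximum, so anything in roughly the 0.93-0.96 range is effectively the peak, with a bit over half the population engaging there.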

Let's plug in some numbers and do a graph.  Suppose that the individual expected benefit assigned to the behavior has a mean of 1 and a standard deviation of 1.  In this case, 84% of the target population thinks that when one brackets the uncontroversial risk, the behavior has a benefit, while 16% think that even apart from the risk, the behavior is not worthwhile.  I expect this is not such a bad model of teen attitudes towards sex in a fairly secular society.  Then let's graph T(r) (on the y-axis it's normalized by dividing by the total population count N--so it's the per capita risk in the target population) versus r (on the x-axis). (You can click on the graph to tweak the formula if interested.)

We can see some things from the graph.  Recall that the average benefit assigned to the activity is 1.  Thus, when the individual risk is 1, half of the target population thinks the benefit exceeds the risk and hence engages in the activity.  The graph peaks at r=0.95.  At that point one can check from the formula for N(r) that 53% of the target population will be engaging in the risky activity.

We can see from the graph that when the individual risk is between 0 and 0.95, then decreasing the risk r always decreases the total social risk T(r).  In other words we get the heuristic that when a majority (53% or more for my above numbers) of the members of the population are engaging in the risky behavior, we do not have to worry about increased social risk from a risk-reduction policy, assuming that the target population does not overestimate the effectiveness of the risk-reduction policy (remember that I assumed that the actual risk rate is known).

In particular, in the general American adult population, where most people drive, risk-reduction policies like seat-belts and anti-lock brakes are good.  This fits with common sense.

On the other hand, when the individual risk is between 0.95 and infinity, so that fewer than 53% of the target population is engaging in the risky behavior, a small decrease in the individual risk will increase T(r) by moving one closer to the peak, and hence will be counterproductive.

However, a large enough decrease in the individual risk will still put one on the left side of the peak, and hence could be productive.  But the decrease may have to be quite large.  For instance, suppose that the current individual risk is r=2.  In that case, 16% of the target population is engaging in the behavior (since r=2 is one standard deviation away from the mean benefit assignment).  The per-capita social risk is then 0.16.  For a risk-reduction policy to be effective, it would then have to reduce the individual risk so that it is far enough to the left of the peak that the per-capita social risk is below 0.16.  Looking at the graph, we can see that this would require moving r from 2 to 0.18 or below.  In other words, we would need a policy that decreases individual risks by a factor of 11.
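The factor-of-11 figure can be checked numerically rather than read off the graph (again a sketch using the post's formula with B=s=1; the names here are mine):

```python
from math import erfc

def per_capita_risk(r, B=1.0, s=1.0):
    # T(r)/N = r * (fraction with benefit b > r), per the post's formula
    return r * erfc((r - B) / s) / 2

current = per_capita_risk(2.0)  # the post's r=2 case: about 0.16

# Walk down the left (rising) side of the curve until the per-capita
# risk drops back below its value at r=2.
r = 0.95  # roughly the peak of the curve
while per_capita_risk(r) > current:
    r -= 0.001

print(round(current, 2), round(r, 2))  # the crossing is near r = 0.18
print(2.0 / r)                         # a reduction factor of about 11
```

Starting at r=2 and walking down the left branch, the per-capita risk does not fall back below its starting value until r is around 0.18, an individual-risk reduction by a factor of about 11.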

Thus, we get a heuristic.  For risky behavior that no more than half of the target population engages in, incremental risk-reduction (i.e., a small decrease in risk) increases the total social risk.  For risky behavior that no more than about 16% of the target population engages in, only a risk-reduction method that reduces individual risk by an order of magnitude will be worthwhile.

For comparison, condoms do not offer an 11-fold decrease in pregnancy rates.  The typical condom pregnancy rate in the first year of use is about 15%;  the typical no-contraceptive pregnancy rate is about 85%.  So condoms reduce the individual pregnancy risks only by a factor of about 6.

This has some practical consequences in the teen sex case.  Of unmarried 15-year-old teens, only 13% have had sex.  This means that risk-reduction policies aimed at 15-year-olds are almost certainly going to be counterproductive in respect of reducing risks, unless we have some way of decreasing the risks by a factor of more than 10, which we probably do not.  In that population, the effective thing to do is to focus on decreasing the incidence of the risky behavior rather than decreasing the risks of the behavior.

In higher age groups, the results may be different.  But even there, a one-size-fits-all policy is not optimal.  The sexual activity rates differ from subpopulation to subpopulation.  The effectiveness with regard to the reduction of social risk depends on details about the target population.  This suggests that the implementation of risk-reduction measures might be best assigned to those who know the individuals in question best, such as parents.

In summary, given my model:
  • When a majority of the target population engages in the risky behavior, both incremental and significant risk-reduction policies reduce total social risk.
  • When a minority of the target population engages in the risky behavior, incremental risk-reduction policies are counterproductive, but a sufficiently large non-incremental risk reduction can still be productive.
  • When a small minority (less than about 16%) engages in the risky behavior, only a risk-reduction policy that reduces the individual risk by an order of magnitude is going to be effective; more moderately successful risk-reduction policies are counterproductive.

Principles of Alternate Possibilities and God

  1. (Premise) If x chooses A, then x chooses A over some alternative B such that x deliberated over both A and B.
  2. (Premise) A perfectly rational being does not deliberate over options he knows with certainty to be impossible for him to choose.
  3. (Premise) If x is an omniscient being, then x knows with certainty exactly which options it is impossible for him to choose.
  4. (Premise) God is omniscient and perfectly rational.
  5. Therefore, God knows with certainty exactly which options it is impossible for him to choose. (3 and 4)
  6. Therefore, God does not deliberate over any options that it is impossible for him to choose. (2 and 5)
  7. Therefore, if God chooses A, then God chooses A over some alternative B such that it was possible for God to choose B. (1 and 6)
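
Since (1)-(7) form a short deductive chain, the derivation can be machine-checked.  Here is a sketch in Lean 4.  The predicate names are my own labels for the post's notions; "deliberated over both A and B" in premise (1) is compressed to deliberation over the alternative B, which is all the proof uses; and premises (2)-(5) enter through the two hypotheses that record their upshot for God.

```lean
theorem god_chooses_over_possible {Opt : Type}
    (Chooses Deliberated KnowsImpossible Possible : Opt → Prop)
    (ChoosesOver : Opt → Opt → Prop)
    -- (1): if God chooses a, he chooses it over some deliberated alternative b
    (h1 : ∀ a, Chooses a → ∃ b, ChoosesOver a b ∧ Deliberated b)
    -- (6), from (2) and (5): God does not deliberate over options he
    -- knows with certainty to be impossible for him
    (h2 : ∀ b, KnowsImpossible b → ¬ Deliberated b)
    -- (5), from (3) and (4): God knows with certainty, of each option
    -- impossible for him, that it is impossible
    (h3 : ∀ b, ¬ Possible b → KnowsImpossible b)
    (a : Opt) (ha : Chooses a) :
    -- (7): God chooses a over some possible alternative
    ∃ b, ChoosesOver a b ∧ Possible b :=
  match h1 a ha with
  | ⟨b, hab, hdb⟩ =>
    ⟨b, hab, Classical.byContradiction fun hnp => h2 b (h3 b hnp) hdb⟩
```

The classical step at the end mirrors the informal argument: if B were not possible, God would know it to be impossible and so would not have deliberated over it.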

Tuesday, December 14, 2010

Deterministic and probabilistic causation

Suppose that some random process C selects uniformly a random number in the interval [0,1]. Let C* be a process just like C, except that it can't select the number 1/2. Basically, C* works like this: a random number x in [0,1] is randomly picked with uniform distribution; if x is not 1/2, then the result of C* is x; if x = 1/2, then the result of C* is 1/4.

Then, for any (measurable) subset S of [0,1], under normal unfinked circumstances, the probability that C selects a number in S is equal to the probability that C* selects a number in S. (Proof: C and C* assign the same probability to any subset that does not contain either 1/2 or 1/4. But {1/2, 1/4} has probability zero, so C and C* assign the same probability to any (measurable) subset.)
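
A quick simulation illustrates the distributional equivalence (the sampler names are mine).  Because a continuous draw hits 1/2 with probability zero, the special clause in C* essentially never fires, and yet that clause is exactly what makes 1/2 impossible for C*:

```python
import random

def sample_C(rng):
    # Uniform pick from [0, 1); the endpoint has probability zero,
    # so this is a fine stand-in for [0, 1].
    return rng.random()

def sample_C_star(rng):
    # Like C, except incapable of yielding 1/2: that outcome is remapped to 1/4.
    x = rng.random()
    return 0.25 if x == 0.5 else x

rng = random.Random(0)  # seeded for reproducibility
c_draws = [sample_C(rng) for _ in range(100_000)]
cs_draws = [sample_C_star(rng) for _ in range(100_000)]

# C* never outputs 1/2, yet the two processes give every measurable set
# the same probability; e.g. both put probability ~1/2 on [0, 1/2].
p_c = sum(x <= 0.5 for x in c_draws) / len(c_draws)
p_cs = sum(x <= 0.5 for x in cs_draws) / len(cs_draws)
print(abs(p_c - p_cs) < 0.02)  # True: the empirical distributions agree
```

No statistical test run on samples from the two processes can tell them apart; the difference between them is modal, not probabilistic, which is the post's point.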

This shows that numeric probability values fail to characterize all there is to be said about probabilistic causation. In addition to the probability distribution, we need something else that we might call the "range" of the probabilistic causation. C has a larger range than C*: C can pick out 1/2, but C* can't. And just as we shouldn't try to define the probabilistic strength of causation in terms of conditional probabilities, as that can be finked, we also shouldn't try to define the range of the probabilistic causation in modal terms (tempting suggestion: in C's range we put all the events that it's logically possible for C to cause given the laws; this fails as God can miraculously make C cause something outside its range).

The range of probabilistic causation is relative to a partition of logically possible relevant states. Thus, relative to the partition { [0,1/2], (1/2,1], everything else }, C and C* have the same range, namely { [0,1/2], (1/2,1] }. On the other hand, relative to the partition { {1/2}, [0,1/2), (1/2,1], everything else }, the range of C is { {1/2}, [0,1/2), (1/2,1] } while the range of C* is { [0,1/2), (1/2,1] }.
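
The relativity to a partition can be made concrete with a small sketch.  The representation (a possible-outcome predicate for each process, plus a few candidate witness points standing in for a genuine search over each cell) is my gloss, not the post's:

```python
def range_of(possible, partition):
    # Range of a process relative to a partition: the cells that contain
    # at least one possible outcome.  Each cell is given by a name, a
    # membership test, and candidate points standing in for a search.
    return {name
            for name, member, candidates in partition
            if any(member(x) and possible(x) for x in candidates)}

# C can yield anything in [0,1]; C* anything in [0,1] except 1/2.
possible_C = lambda x: 0.0 <= x <= 1.0
possible_C_star = lambda x: 0.0 <= x <= 1.0 and x != 0.5

fine_partition = [
    ("{1/2}",           lambda x: x == 0.5,           [0.5]),
    ("[0,1/2)",         lambda x: 0.0 <= x < 0.5,     [0.0, 0.25]),
    ("(1/2,1]",         lambda x: 0.5 < x <= 1.0,     [0.75, 1.0]),
    ("everything else", lambda x: x < 0.0 or x > 1.0, [2.0]),
]

print(sorted(range_of(possible_C, fine_partition)))       # includes "{1/2}"
print(sorted(range_of(possible_C_star, fine_partition)))  # omits "{1/2}"
```

Relative to the coarser partition { [0,1/2], (1/2,1], everything else }, the same computation would give the two processes identical ranges, as the post says.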

This also solves the following little puzzle: Here's a game. Suppose you pick a number, and then the process C (just as above) picks a number, and you get hurt iff C picks the same number as you did, and otherwise you get a dollar. Sam picks 0.14159. Jane picks 2. Obviously Jane did the safer thing, even though her probability of getting hurt is zero, which is the same as Sam's. Puzzle: Why? Answer: Because {0.14159} is in the range of C relative to the partition of the reals into singletons, while {2} isn't.

We can now define deterministic causation: C deterministically causes E iff C probabilistically causes E and the range of C relative to the partition { E, ~E } is { E }.

Interesting fact: if C is a probabilistic cause, and R is the range of C, then the union (or disjunction, if you will) of all the events in R is something that C deterministically causes. Therefore, in any normal situation where there is probabilistic causation, there is also deterministic causation.

Deterministic causation

Defining deterministic causation is tough. Standard definitions of "causal determinism" do not do justice to the causal aspect of the determinism. The natural suggestion that A deterministically causes B provided that the occurrence of A conjoined with the laws of nature entails B doesn't work if the laws are ceteris paribus (I'm grateful to Jon Kvanvig for this point) and is in any case subject to counterexample.

I don't have a definition here. But I do have a way to point to the notion, by way of analogy. Start with the notion of probabilistic causation of strength s, where s is a number between 0 and 1. If C probabilistically causes E with strength s, then C has a disposition of strength s to produce E. Moreover, normally—but this can be finked—the probability of E given C and the laws will be s, and this probability will be explained by the probabilistic tendency of strength s of C to produce E.

Deterministic causation then is a kind of limiting case of probabilistic causation. C's deterministically causing E is a relation that stands to the condition that, if everything is normal, the occurrence of C and the laws entail E, in the way in which C's probabilistically causing E with strength s stands to the condition that, if everything is normal, the occurrence of C and the laws give probability s to E.

This gives us an account of deterministic causation by analogy.

The tempting suggestion that deterministic causation is just the s=1 special case of probabilistic causation is insufficient, since events of zero probability can happen. However, it is correct to say that deterministic causation is a special case of probabilistic causation of strength 1.

Monday, December 13, 2010

Divine freedom

Consider this sketchy argument against libertarianism: It is impossible for God to choose wrongly. But freedom requires the ability to choose otherwise if libertarianism is correct, and significant moral responsibility requires the freedom to choose wrongly. Hence, if libertarianism is correct, God lacks significant moral responsibility. But God has significant moral responsibility. Thus, libertarianism is incorrect.

There are various somewhat shaky details here. There is, for instance, the question whether the ability to choose otherwise that libertarians insist on has to imply an ability to choose wrongly in the relevant case. And there is the issue that some libertarians are only source-incompatibilists, and source-incompatibilism does not imply an ability to choose otherwise in the special case where the agent is God.

But I want to make a different point, namely that there is an argument in the vicinity that applies just as much on compatibilism as on incompatibilism. Consider this moral intuition:

  1. Someone who in ordinary circumstances chooses not to commit moral horrors lacks perfect virtue.
But no circumstances are going to press God extraordinarily, and God does not merely avoid moral horrors. Pushing on this intuition leads to the following suggestion:
  2. God does not choose to act non-wrongly.
Of course, there are various non-wrong things that God chooses to do, but he does not choose them over acting wrongly. That the things that God does are non-wrong is not something that counts in their favor in his choice, because all of the options he is choosing between are non-wrong. But, now, plausibly:
  3. Significant moral responsibility requires that one choose to act wrongly or that one choose to act non-wrongly.
  4. God has significant moral responsibility.
In (3), by "choose to act F-ly", I mean something contrastive: to choose to act F-ly over acting non-F-ly. But now (2)-(4) are contradictory, given that God does not choose to act wrongly (since that is impossible for him).

Observe two things about this puzzle. First, the argument nowhere makes any use of incompatibilism. The intuitions behind (1) and (2) are either neutral between compatibilism and incompatibilism, or slightly favor compatibilism (because the compatibilist lays greater stress on the connection between action and character).

Second, someone who rejects one of the premises (2)-(4) will be untroubled by the original sketchy anti-libertarian argument.

If one rejects (2), then one will think that God chooses between non-wrong and wrong. But a rational being does not choose between an option he knows to be possible and one he knows to be impossible, and so the only way an omniscient and perfectly rational being could choose between non-wrong and wrong would be if it were possible for him to choose the wrong. Thus, someone who rejects (2) will reject the claim that God cannot choose the wrong.

If one rejects (3), then one will surely also reject the claim that significant moral responsibility requires the freedom to choose wrongly.

And (4) is just as much a premise of the original argument as of this one.

But everyone should reject one of (2)-(4) since these premises are logically incompatible. Therefore, everyone should reject one of the premises of the original anti-libertarian argument.

What should the theist do? I think the theist should reject the conjunction of (3) and (4): The sort of significant moral responsibility that logically requires choosing to act non-wrongly is not worth having, and so God doesn't have it.

Sunday, December 12, 2010

Choice and incommensurability

Right now, I am making all sorts of choices. For instance, I just chose to write the preceding sentence. When I made that choice, it was a choice between writing that sentence and writing some other sentence. But it was not a choice between writing that sentence and jumping up and down three times. Let A be the writing of the sentence that I wrote; let B be the writing of some alternate sentence; let C be jumping up and down three times. Then, I chose between A and B, but not between A and C. What makes that the case?

Here is one suggestion. I was capable of A and of B, but I was not capable of C. If this is the right suggestion, compatibilism is false. For on standard compatibilist analyses of "is capable of", I am just as capable of C as of A and B. I was fully physically capable of doing C. Had I wanted to do C, I would have done C. So if the capability of action suggestion is the only plausible one, we have a neat argument against compatibilism. However, there is a decisive objection to the suggestion: I can choose options I am incapable of executing. (I may choose to lift the sofa, without realizing it's too heavy.)

To get around the last objection, maybe we should talk of capability of choosing A and capability of choosing B. Again, it does not seem that the compatibilist can go for this option. For if determinism holds, then in one sense neither choosing B nor choosing C is available to me—either choice would require a violation of the laws of nature or a different past. And it seems plausible that in that compatibilist sense in which choosing B is available to me—maybe the absence of brainwashing or other psychological compulsion away from B—choosing C is also available to me. Again, if this capability-of-choosing suggestion turns out to be the right one, compatibilism is in trouble.

Here is another suggestion, one friendly to compatibilism. When I wrote the first sentence in this post, I didn't even think of jumping up and down three times. But I did, let us suppose, think of some alternate formulations. So the difference between B and C is that I thought about B but did not think about C. However, this suggestion is unsatisfactory. Not all thinking about action has anything to do with choosing. I can think about each of A, B and C without making a choice. And we are capable of some limited parallel processing—and more is certainly imaginable—and so I could be choosing between D and E while still thinking, "purely theoretically" as we say, about A, B and C. There is a difference between "choosing between" and "theorizing about", but both involve "thinking about".

It seems that the crucial thing to do is to distinguish the ways in which the action-types one is choosing between, and the action-types one is merely theorizing about, enter into one's thoughts. A tempting suggestion is that in the choice case, the actions enter one's mind under the description "doable". But that's mistaken, because I can idly theorize about a doable without at all thinking about whether to do it. (Kierkegaard is really good at describing these sorts of cases.) The difference is not in the description under which the action-types enter into the mind, as that would still be a difference within theoretical thought.

I think the beginning of the right thing to say is that those action-types one is choosing between are in one's mind with reasons-for-choosing behind them. And these reasons-for-choosing are reasons that one is impressed by—that actively inform one's deliberation. They are internalist reasons.

But now consider this case. Suppose I could save someone's life by having a kidney removed from me, or I could keep my kidneys and let the person die. While thinking about what to do, it occurs to me that I could also save the other person's life by having both kidneys removed. (Suppose that the other person's life wouldn't be any better for getting both kidneys. Maybe only one of my kidneys is capable of being implanted in her.) So, now, consider three options: have no kidneys removed (K0), have one kidney removed (K1), and have two kidneys removed (K2). If I am sane, I only deliberate between K0 and K1. But there is, in a sense, a reason for K2, namely that K2 will also save the other's life, and it is the kind of reason that I do take seriously, given that it is the kind of reason that I have for K1. But, nonetheless, in normal cases I do not choose between K0, K1 and K2. The reasons for K2 do not count to make K2 be among the options. Why? They do not count because I have no reason to commit suicide (if I had a [subjective] reason to commit suicide, K2 would presumably be among the options if I thought of K2), and hence the reasons for K2 are completely dominated by the reasons for K1.

If this is right, then a consequence of the reasons-for-choice view of what one chooses between is that one never has domination between the reasons for the alternatives. This supports (but does not prove, since there is also the equal-reason option to rule out) the view that choice is always between incommensurables.

A corollary of the lack-of-domination consequence is that each of the options one is choosing between is subjectively minimally rational, and hence that it would be minimally rational to choose any one of them. I think this is in tension with the compatibilist idea that we act on the strongest (subjective) reasons. For then if we choose between A and B, and opt for A because the reasons for A were the strongest, it does not appear that B would have been even minimally rational.

Maybe, though, the compatibilist can insist on two orderings of reasons. One ordering is domination. And there the compatibilist can grant that the dominated option is not among the alternatives chosen between. But there is another ordering, which is denoted in the literature with phrases like "on balance better" or "on balance more (subjectively) reasonable". And something that is on balance worse can still be among the alternatives chosen between, as long as it isn't dominated by some other alternative.

But what is it for an option to be on balance better? One obvious sense the phrase can have is that an action is on balance better if and only if it is subjectively morally better. But the view then contradicts the fact that I routinely make choices of what is by my own lights morally worse (may God have mercy on my soul). Another sense is that an action is on balance better if and only if it is prudentially better. But just as there can be moral akrasia, there can be prudential akrasia.

Here is another possibility. Maybe the compatibilist can say that reasons have two kinds of strength. One kind of strength is on the side of their content. Thus, the strength of reason that I have to save someone's life is greater than the strength of reason that I have to protect my own property. Call this "content strength". The other kind of strength is, basically, how impressed I am with the reason, how much I am moved by it. If I am greedy, I am more impressed with the reasons for the protection of my property than with the reasons for saving others' lives. Call this "motivational strength". We can rank reasons in terms of the content strength, and then we run into the domination and incommensurability stuff. But we can also rank reasons in terms of motivational strength. And the compatibilist now says that I always choose on the basis of the motivationally strongest reasons.

This is problematic. First, it suggests a picture that just does not seem to be that of freedom—we are at the mercy of the non-rational strengths of reasons. For it is the content strength rather than the motivational strength of a reason that is a rational strength. Thus, the choices we make are only accidentally rational. The causes of the choices are reasons, but what determines which of the reasons we act on is something that is not rational. Rationality only determines which of the options we choose between, and then the choice itself is made on the non-rational strengths. This is in fact recognizable as a version of the randomness objection to libertarian views of free will. I actually think it is worse than the randomness objection. (1) Agent-causation is a more appealing solution for incompatibilists than for compatibilists, though Markosian has recently been trying to change that. (2) The compatibilist story really looks like a story on which we are in bondage to the motivational strengths of reasons. (3) The content strength and content of the outweighed reasons ends up being explanatorily irrelevant to the choice. And (4) the specific content of the reasons that carried the day is also explanatorily irrelevant—all that matters is (a) what action-type they are reasons for and (b) what their motivational strength is.

In light of the above, I think the compatibilist should consider giving up on the language of choice, or at least on taking choice, as a selection between alternatives, seriously. Instead, she should think that there is only the figuring out of what is to be done, and hold with Socrates that there is no akrasia in cases where we genuinely act—whenever we genuinely act (as opposed to being subject to, say, a tic) we do what we on balance think should be done. I think this view would give us an epistemic kind of responsibility for our actions, but not a moral kind. Punishment would not be seen in retributivist terms, then.

Saturday, December 11, 2010

Choice: compatibilist and incompatibilist views

I often wonder whether the compatibilist doesn't see choice as a matter of figuring something out—figuring out what is to be done in the light of one's reasons and/or desires. If one does see choice in this way, then it is quite unsurprising that we can assign responsibility without alternate possibilities. For the process is epistemic or relevantly like an epistemic process, and it is pretty plausible that epistemic responsibility does not require alternate possibilities.

The incompatibilist, on the other hand, is apt not to think of choice as a matter of figuring out anything. This is, I think, clearest on Thomistic accounts of choice on which choice is always between incommensurables. On these accounts, when one chooses, reason has already done all it can do—it has elucidated the reasons for all the rationally available options. And now, given the reasons, a choice must be made which reasons to follow. This choice isn't a matter of figuring out anything, since the figuring-out has all already been done. Reason has already informed us that option A is pleasant but cowardly and leading to ill-repute, option B is unpleasant, brave and leading to ill-repute, and option C is unpleasant, cowardly, and leading to good repute. Reason has also informed us that our duty is to avoid cowardice. And now a choice has to be made between pleasure, virtue and good reputation.

Deep Thoughts XXIX

No one has proved the unprovable.

Friday, December 10, 2010

Deep Thoughts XXVIII

Only the impossible is not possible.

The reliability of intuitions

Often, analytic philosophers give some case to elicit intuitions. Intuitions elicited by certain kinds of cases count for less. Here is one dimension of this: Intuitions elicited by worlds different from ours are, other things being equal, reliable in inverse proportion to how different the worlds are from ours. Here is the extreme case: intuitions elicited by cases of impossible worlds.

For an example of such an intuition, consider the argument against divine command theory that even if God commanded torture of the innocent, torture of the innocent would be wrong. Now, the obvious response is that it's impossible for God, in light of his goodness, to command torture of the innocent. But, the opponent of divine command theory continues, if God were per impossibile to command it, it would still be wrong, but according to divine command theory, it would be right. (E.g., Wes Morriston has given an argument like that.)

I've criticized this particular argument elsewhere (not that I think divine command theory is right).[note 1] But here is a point that is worth making. This argument elicits our intuition by a case taking place in an impossible world. But impossible worlds are very different from ours. So we have good reason to put only little weight on intuitions elicited by per impossibile cases.

This does not mean that we should put no weight on them.

Wednesday, December 8, 2010

Responsibility and God

I've been playing with the idea that while responsibility in our case is always contrastive, and tied to choices that must be understood contrastively, God's responsibility is non-contrastive. Thus, I am responsible for writing this post rather than for doing some grading. But God is responsible simpliciter for the existence of kangaroos.

Tuesday, December 7, 2010

Wittgensteinian views of religious language

Wittgensteinians lay stress on the idea that

  1. One cannot understand central worldview concepts without living as part of a community that operates with these concepts.
The non-Christian cannot understand the Christian concept of the Trinity; the Christian and the atheist cannot understand the Jewish concept of God's absolute unity as understood by Maimonides; the theist cannot understand the concept of a completely natural world; and the non-Fascist cannot understand the concept of the Volk. It is only by being a part of a community in which these concepts are alive that one gains an understanding of them.

Often, a corollary is drawn from this, that while internal critique or justification of a worldview tradition such as Christianity, naturalism or Nazism is possible, no external critique or justification is possible. In fact, there is an argument for this corollary.

  2. (Premise) One's evidence set cannot involve any propositions that involve concepts one does not understand.
  3. (Premise) Necessarily, if a proposition p uses a concept C, and a body of propositions P is evidence for or against p for an agent x, then some member of P involves C.
  4. If x is not a member of the community operating with a central worldview concept C, then x does not have any evidence for or against any proposition involving C. (1)-(3)
  5. (Premise) External critique or justification of a worldview of a community is possible only if someone who is not a member of the community can have evidence for or against a proposition involving a central worldview concept of that community.
  6. Therefore, external critique or justification of a worldview of a community is not possible. (4 and 5)
This is a particularly unfortunate result in the case of something like Nazism, and may suggest an unacceptable relativism.

The argument is valid but unsound, and I think unsalvageable. I think that (5) is false, and, on some plausible interpretations of (1), premises (2) and (3) are false as well.

First of all, people successfully reason with scientific concepts that they do not understand, like the concept of a virus or of gravity. They inherit the concept from a scientific community that they are not members of, and while they do not understand the concept, they grasp enough of the inferential connections involving the concept for it to be useful. Thus, even if I do not really understand the concept of a virus, my evidence set can include facts about viruses that I know by virtue of testimony[note 1], and inferential connections with other facts, such as that if x has the common cold, then many viruses are present in x's body. Thus, (2) is false.

As for (3), I don't know for sure if it's false, but it seems quite possible that while C does not occur in one's evidence set, it might occur in one's rules of inference. And there does not seem to be anything wrong with having a concept in one's evidence set that one does not understand.

But perhaps you are not convinced by the critique of (2) and (3). I suspect this is because you take (1) to be more radical than I do. The "cannot understand" in (1) is understood as entailing "cannot operate with"—even the weak sort of grasp that the layperson has of scientific concepts is denied to non-members of a community in the case of central worldview concepts. On this interpretation of (1), (2) and (3) are false. I am inclined to think that this interpretation of (1) is the incorrect one because it renders (1) false. The central worldview concepts of a community do not seem to be significantly different from the central concepts of a scientific community. Still, I see the force of such a beefed-up (1), at least in the case of the concepts of the Christian faith (not so much because of the need for community membership as such, but because of the need for grace to enlighten one's understanding).

In any case, (5) is false on either understanding of (1). The reason is simple. To support or criticize a position, one does not need evidence for or against that position. One only needs evidence for or against the second-order claim that the position is true. Often, this is a distinction without much of a difference. I have evidence that

  7. there is life on Mars
if and only if I have evidence that
  8. the proposition that there is life on Mars is true.
However, this is so only because in (8) I refer to the proposition under the description "that there is life on Mars." But take a different case. I go to a mathematics lecture. Unfortunately, as I shortly discover, it's in German. I sit through it uncomprehendingly. At the end of it, I turn to a friend who knows German and ask her what she thought. She is an expert in the field and says: "It was brilliant, and I checked that his central lemma is right." I still don't know what the speaker's central lemma is, but I know that it is true. I do not have evidence for the lemma, and it could even be (say, if the talk is in a field of mathematics I don't know anything about) that I don't have the requisite concepts for grasping the lemma, but I have evidence that the lemma is true.

Likewise, it is possible to have evidence for and against the claim that the community's central worldview propositions are true, without grasping these propositions and having evidence for or against them. For instance, while I may not be able to understand what the members of the community are saying in their internal critiques, I may understand enough of the logical form of these critiques and of the responses to them to be able to make a judgment that the critiques are probably successful. Moreover, even if I do not understand some concept, I may grasp some metalinguistic facts such as that if x is a Gypsy, then x is not a part of anything in the extension of the term "the Volk", or that if the only things that exist at w are the particles of current physics, and at w their only properties and relations are those of current physics, then w is in the extension of the term "completely natural". Given such facts, I can gain arguments for or against the thesis that the central worldview claims of the community are true. Thus, (5) is false.

There is a hitch in my argument against (5). External evaluation of the community seems to require that while I have no grasp of particular central terms, I have some grasp of the larger grammar of sentences used by members of the community and I understand some of the non-central terms in their language. But what if I don't? This could, of course, happen. The community could speak an entirely foreign language that I am incapable of parsing.

I can make two responses. The first is that (5) is a general claim about communities whose central worldview concepts I do not understand, and that general claim has been shown to be false. There could be some radical cases where the outsider's lack of understanding is so complete that external critique or justification is impossible. But such cases do not in fact occur for us. Humans share basic structures of generative grammar and a large number of basic concepts due to a common environment.

The second response is that the behavior of members of the community can provide evidence for and against the correctness of their central ideas. If their airplanes keep on crashing, there is good reason to think their scientific concepts are bad. If they lead a form of life that does not promote the central human goods, there is good reason to think that their ethics is mistaken, while if they lead a form of life that does promote the central human goods, there is good reason to think that their ethics is sound. Now, of course, I could be wrong. Maybe for religious reasons they want their airplanes to crash and design them for that. Maybe they abstain from some central human goods for the sake of some God-revealed higher good. Maybe they are a bunch of hypocrites, and they aren't really achieving the central human goods. However, such possibilities only show that I cannot be certain in my external evaluation. But the claim that external justification and critique is possible is not the claim that one can achieve certainty in external justification and critique. What I've said shows we can achieve high probability even in cases where the community's language is radically not understandable.

Things might be different if we're dealing with an alien species of intelligent beings. But I suspect we could still come to probabilistic judgments, just somewhat less confident ones.

I think the above considerations not only show that the argument (1)-(6) fails, but that we're unlikely to get any successful argument along those lines.

Saturday, December 4, 2010

Consequence arguments and responsibility

Consequence arguments like Peter van Inwagen's basically conclude that if determinism is true, then if p is any truth, Np is also a truth. Here, roughly (different formulations will differ here), Np says that p is true and that there is nothing anyone could do to make p false.

Supposedly this conclusion is a problem for the compatibilist. But why? Why can't the compatibilist just say: "I freely and responsibly did A, even though N(I did A)"?

I suspect that the consequence argument has a further step that is routinely left out, and this involves an application of the inference rule:

  1. gamma: If Np, then no one is responsible for p.
But gamma might be invalid. Take a Frankfurt scenario. The neurosurgeon watches me. If within two minutes I don't make the choice to vote for candidate A, or I make a choice to vote for any other candidate, she forces me to vote (or pseudo-vote, since a forced vote may not legally be a vote) for A. Thirty seconds into the two-minute period, I freely choose to vote for A, and do so. I am responsible for my voting for A. Now suppose that the neurosurgeon is not free, and in fact is an automaton that no one could ever have done anything about, and that my choice of how to vote is the first free choice ever made. I am still responsible for voting for A, and hence responsible for its being the case that I vote for A. However, there is nothing anyone could do to make it be false that I vote for A.

The argument above does use a somewhat problematic step: going from "I am responsible for voting for A" to "I am responsible for its being the case that I vote for A". But the point is sufficient to show that gamma is problematic.

My suspicion is that gamma is correct in the case of finite agents and the direct agent-responsibility of the sort involved in criminal law, but not in the case of the kind of outcome-responsibility that is involved in tort law. For the kind of outcome-responsibility involved in tort law is tied to but-for conditionals: but for my doing something, the harm wouldn't have happened. It is correct to say in the Frankfurt case that I don't have that sort of responsibility. Had I not chosen to vote for A, I would still have voted (or at least pseudovoted) for A. But I do have agent-responsibility for voting for A. If the elections were American ones, then in the Frankfurt case I should be liable in criminal law for voting for A (I am Canadian, so for me to vote in U.S. elections would be an instance of fraud), but not in civil law, even if my vote causes some harm to someone.

Friday, December 3, 2010

Beta 2: a theorem

Finch and Warfield's version of the Consequence Argument for incompatibilism uses:

  1. beta 2: If Np and p entails q, then Nq
Here, "Np" is: p and nobody (no human?) can ever do anything about p. The argument for incompatibilism is easy. Let P be the distant past and L the laws. Suppose p is a proposition that is determined by the distant past and the laws. Then:
  2. P&L entails p. (Premise)
  3. N(P&L). (Premise)
  4. Therefore, Np. (1-3)
In other words, if something is determined by the distant past and the laws, nobody can ever do anything about it. In particular, if all actions are determined by the distant past and the laws, no one can do anything about any actions. And this is supposed to imply that there is no freedom.

Here's a cool thing I arrived at in class when teaching about the argument. Suppose we try to come up with a definition of the N operator. Here's a plausible version:

  • Np if and only if p and there does not exist an action A, agent x and time t such that (a) x can do A at t; and (b) (x does A at t)→~p.
Here, "→" is the subjunctive conditional. So, Np holds if and only if p and nobody could do anything such that if she did it, we would have not-p.

Anyway, here's an interesting thing. Beta 2 is a theorem if we grant these axioms:

  5. If q entails r, and p→q, and p is logically possible, then p→r.
  6. If x can do A at t, then it is logically possible that x does A at t.
Axiom (6) is really plausible. Axiom (5) is a consequence of David Lewis's account of counterfactuals. Analogues of it are going to hold on accounts that tie counterfactuals to conditional probabilities.

The proof of beta 2 from (5) and (6) is easy. Suppose that Np is true and p entails q. For a reductio, suppose that ~Nq. If ~Nq, then either ~q or there are A, x and t such that (a) x can do A at t; and (b) (x does A at t)→~q. Since Np is true, p is true, and hence q is true as p entails q. So the ~q option is out. So there are A, x and t such that x can do A at t, and were x to do A at t, it would be the case that ~q. But ~q entails ~p, since p entails q, so by (5) and (6) it follows that were x to do A at t, it would be the case that ~p. And so ~Np, which contradicts the assumption that Np and completes the proof.
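Since the semantics involved here is so simple, the theorem can even be sanity-checked by brute force. Below is a toy Lewis-style model in Python. Everything in it is invented for illustration: the worlds, the acts, and the closest-world assignments are placeholders, not anything from the literature. Axioms (5) and (6) hold by construction (the conditional looks at the closest act-world, and every feasible act occurs at some world), and beta 2 then comes out true for every pair of propositions, as the proof predicts.

```python
from itertools import chain, combinations

# A made-up toy model for sanity-checking beta 2 by brute force.
# Propositions are modeled as sets of worlds; "p entails q" is p <= q.

WORLDS = frozenset(range(4))
ACTUAL = 0  # the actual world (arbitrary choice)

def propositions(worlds):
    """All propositions over the world set, as frozensets of worlds."""
    ws = list(worlds)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(ws, k) for k in range(len(ws) + 1))]

# Each feasible act is represented by the closest world at which it is
# done.  Axiom (6) -- "can do" implies logical possibility -- is built
# in, since every feasible act occurs at its closest world.
FEASIBLE_ACTS = {"vote_for_A": 2, "stay_home": 3}

def would(act, p):
    """Subjunctive conditional: were the act done, p would hold.
    True iff the closest act-world is a p-world.  This Lewis-style
    clause validates axiom (5): if q entails r and the closest
    act-world is in q, it is in r."""
    return FEASIBLE_ACTS[act] in p

def N(p):
    """Np: p is true, and there is no feasible act such that, were it
    done, not-p would hold."""
    if ACTUAL not in p:
        return False
    complement = WORLDS - p
    return not any(would(a, complement) for a in FEASIBLE_ACTS)

def beta2_holds():
    """Check beta 2: whenever Np and p entails q, also Nq."""
    props = propositions(WORLDS)
    return all(N(q)
               for p in props if N(p)
               for q in props if p <= q)

print(beta2_holds())
```

In this model Np holds exactly when p contains the actual world and every closest act-world, and any superset of such a p is again such a proposition, which is the theorem in miniature.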

So it looks like the consequence argument is victorious. The one controversial premise, beta 2, is a theorem given very plausible axioms.

Unfortunately, there is a problem. With the proposed definition of N, the premise N(P&L) says that there is no action anybody can do such that, were they to do it, it would be the case that ~(P&L). While this is extremely plausible, David Lewis famously denies it in his essay "Are We Free to Break the Laws?". I think he's wrong to deny it, but the argument in this formulation directly begs the question against him.

Note that in the definition of the N operator, we might also replace the → with a might-conditional: were x to do A at t, it might be the case that ~p. (This gives the M operator in the Finch and Warfield terminology; see also Huemer's argument.) The analogue of (5) for might-conditionals is about as plausible. So once again we get as a theorem an appropriate beta-type principle.

Wednesday, December 1, 2010

A simple design argument

  1. P(the universe has low entropy | naturalism) is extremely tiny.
  2. P(the universe has low entropy | theism) is not very small.
  3. The universe has low entropy.
  4. Therefore, the low entropy of the universe strongly confirms theism over naturalism.

Low-entropy states have low probability. So, (1) is true. The universe at the Big Bang had a very surprisingly low entropy. It still has a low entropy, though the entropy has gone up. So, (3) is true. What about (2)? This follows from the fact that there is significant value in a world that has low entropy, and given theism, God is not unlikely to produce what is significantly valuable. At least locally low entropy is needed for the existence of life, and we need uniformity between our local area and the rest of the universe if we are to have scientific knowledge of the universe, and such knowledge is valuable. So (2) is true. The rest is Bayes.
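The Bayesian step can be made concrete with toy numbers. The likelihoods below are made-up placeholders chosen only to match the qualitative premises (1) and (2), not serious estimates, and the neutral prior is likewise just for illustration:

```python
# Toy Bayesian confirmation calculation for the design argument above.
# All numbers are illustrative placeholders, not estimates.

p_E_given_naturalism = 1e-30   # P(low entropy | naturalism): "extremely tiny"
p_E_given_theism = 0.1         # P(low entropy | theism): "not very small"
prior_theism = 0.5             # neutral prior, for illustration only

# Bayes factor in favor of theism over naturalism, given the evidence E
# that the universe has low entropy.
bayes_factor = p_E_given_theism / p_E_given_naturalism

# Posterior odds = prior odds * Bayes factor.
prior_odds = prior_theism / (1 - prior_theism)
posterior_odds = prior_odds * bayes_factor
posterior_theism = posterior_odds / (1 + posterior_odds)

print(f"Bayes factor: {bayes_factor:.2e}")
print(f"Posterior P(theism | low entropy): {posterior_theism}")
```

However the placeholder numbers are varied, as long as the likelihood ratio is astronomically large, the posterior swamps any non-negligible prior, which is what "strongly confirms" in (4) amounts to.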

When I gave him the argument, Dan Johnson made the point to me that this appears to be a species of fine-tuning argument and that a good way to explore the argument is to see how standard objections to standard fine-tuning arguments fare against this one. So let's do that.

I. "There is a multiverse, and because it's so big, it's likely that in one of its universes there is life. That kind of a universe is going to be fine-tuned, and we only observe universes like that, since only universes like that have an observer." This doesn't apply to the entropy argument, however, because globally low entropy isn't needed for the existence of an observer like me. All that's needed is locally low entropy. What we'd expect to see, on the multiverse hypothesis, is a universe with low entropy in a very small area--like the size of my brain--and a big mess outside it. (This is the Boltzmann brain problem.)

II. "You can't use as evidence anything that is entailed by the existence of observers." While this sort of a principle has been argued for, surely it's false. If we're choosing between two evolutionary theories, both of them fitting the data, both equally simple, but one of them making it likely that observers would evolve and the other making it unlikely, we should choose the one that makes it likely. But I can grant the principle, because my evidence--the low entropy of the universe--is not entailed by the existence of observers. All that the existence of observers implies (and even that isn't perhaps an entailment) is locally low entropy. Notice that my responses to Objections I and II show a way in which the argument differs from typical fine-tuning arguments, because while we expect constants in the laws of nature to stay, well, constant throughout a universe, not so for entropy.

III. "It's a law of nature that the value of the constants--or in this case of the universe's entropy--is exactly as it is." The law of nature suggestion is more plausible in the case of some fundamental constant like the mass of the electron than it is in the case of a continually changing non-fundamental quantity like total entropy, which is a function of more fundamental microphysical properties. Nonetheless, the suggestion that the initial low entropy of the universe is a law of nature has been made in the philosophy of science literature. Suppose the suggestion is true. Now consider this point. There is a large number--indeed, an infinite number--of possible laws about the initial values of non-fundamental quantities, many of which are incompatible with the low initial entropy. The law that the initial entropy is low is only one among many competing incompatible laws. The probability given naturalism of initially low entropy being the law is going to be low, too. (Note that this response can also be given in the case of standard fine-tuning arguments.)

IV. "The values of the constants--or the initially low entropy--do not require an explanation." That suggestion has also been made in the philosophy of science literature in the entropy case. But the suggestion is irrelevant to the argument, since none of the premises in the argument say anything about explanation. The point is purely Bayesian.