Tuesday, January 31, 2012

How can I knowingly and freely do wrong?

I accept the following two claims:

  1. Every free action is done for a reason.
  2. If an action is obligatory, then I have on balance reason to do it.
Consider cases where I know that an action is obligatory, but I don't do it. How could that be? Well, one option is that I don't realize that obligatory actions are ones I have on balance reason to do. Put that case aside: sometimes when I do wrong, I do know it. So I know that I have on balance reason to do an action, but I refrain from it. But then how could I have a reason for my refraining? And without a reason, my action wouldn't be free.

It strikes me that this version of the problem of akrasia may not be particularly difficult. There is no deep puzzle about how someone might choose a game of chess over a jog for a reason. A jog is healthier but a game of chess is more intellectually challenging, and one might choose the game of chess because it is more intellectually challenging. In other words, there is a respect in which the game of chess is better than the jog, and when one freely chooses the game of chess, one does so on the basis of some such respect. The jog, of course, also has something going for it: it is healthier, and one can freely choose it because it is better in respect of health.

Now, suppose that the choice is between playing a game of chess and keeping one's promise to visit a sick friend. Suppose the game of chess is more pleasant and intellectually challenging than visiting the sick friend. One can freely choose the game of chess because there are respects in which it is better than visiting the friend. There are, of course, respects in which the game of chess is worse: it is a breaking of a promise and a neglecting of a sick friend. But that there are respects in which visiting the sick friend is better does not make there be no reason to play chess instead, since there are respects in which the chess game is better.

But isn't visiting the sick friend on balance better? Certainly! But being on balance better is just another respect in which visiting the sick friend is better. It is still in some other respects better to play the game of chess. If one freely chooses to play the game of chess, then one chooses to do so on account of those other respects. That one option is on balance better is compatible with the other option being in some respects better. It is no more mysterious how one can act despite the knowledge that another option is on balance better than how one can act despite the knowledge that another option is more pleasant. The difference is that when one chooses against an action that one takes to be on balance better, one may incur a culpability that one does not incur when one chooses against an action that is merely more pleasant, but the incurring of that culpability is just another reason not to do the action.

But isn't it decisive if an action is on balance better? Isn't it irrational to go against such a decisive reason? Well, one can understand a decisive reason in three ways: (a) a reason that in fact decides one; (b) a reason that cannot but decide one; and (c) a reason that rationality requires one to go with. That an action is on balance better need not be what decides you, even if in fact you do the on balance better action. Now, granted, rationality requires one to go with an on balance better action. But that rationality requires something does not imply you will do it.

But if you don't, aren't you irrational, and hence not responsible? Well, if by irrational one means lack of responsiveness to reasons, then that would indeed imply lack of responsibility, but that is not one's state when one chooses to do the wrong thing for a reason. It need not even be true that one is not responsive to what is on balance better. For to be responsive to a reason does not require that one act on that reason. The person who chooses the chess game over the jog is likely quite responsive to reasons of health. If she were not responsive to reasons of health, it might not be a choice but a shoo-in. Likewise, the person who chooses against what is on balance better is responsive to what is on balance better, but goes against it.

Now, of course, the person who knowingly does what she knows she on balance has reason not to do, does not respond to the reason in the way that she should. In that sense, she is irrational. But that sense of irrationality is quite compatible with responsibility.

Monday, January 30, 2012

Modus Ponens versus Affirming the Consequent

Consider these two rules of doxastic practice:

  • Modus Ponens (MP): If you believe that p and you believe that if p, then q, then infer q.
  • Affirming the Consequent (AC): If you believe that q and you believe that if p, then q, then infer p.
MP is a good rule of inference. AC is a fallacy. But why is MP better? An obvious pair of relevant modal facts is:
  1. Necessarily, if it is true that p and it is true that if p, then q, then it is true that q.
  2. Possibly, it is true that q, it is true that if p, then q, but it is not true that p.
These facts suggest that
  3. MP is a more effective way of getting to truth than AC.

But (3) does not necessarily follow from (1) and (2). For instance, from (1) and (2), we get these claims:

  4. Necessarily, if all your beliefs are true, and you apply MP to generate a new belief, your beliefs will still be all true.
  5. Possibly, all your beliefs are true, and you apply AC to generate a new belief, and your resulting beliefs are not all true.
But obviously (4) and (5) tell us nothing about whether MP is better than AC for us, since the antecedent in (4) is not satisfied in our situation: it is false that all our beliefs are true.

Imagine Sam. Most of Sam's beliefs are true. But in cases in which he believes that if p, then q, it is more often the converse conditional (if q, then p) that is true than the conditional itself. It could very well be the case for Sam that following AC is a more effective way of getting to truth than following MP is.

Or imagine Dory. While most of her beliefs are true, and it is more often the case when she believes that if p, then q, that this conditional is true than that the converse conditional is true, nonetheless due to some cause she happens to tend to apply AC or MP almost only in cases where only the converse conditional is true. Again, for her following AC is a more effective way of getting to truth than following MP is.
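Here is a toy simulation of a Sam-like case (a minimal sketch with made-up numbers): fix a joint distribution of truth values for p and q on which the converse conditional is usually true but the conditional itself often is not, let Sam's premise-beliefs track the truth, and compare how often MP-generated and AC-generated beliefs come out true.

```python
import random

# Made-up joint distribution over the truth values of (p, q), chosen so that
# the material conditional q -> p is true 90% of the time while p -> q is
# true only 50% of the time:
#   (T,F): 0.5   p->q false, q->p true
#   (T,T): 0.3   both true
#   (F,T): 0.1   p->q true,  q->p false
#   (F,F): 0.1   both true
cases = [((True, False), 0.5), ((True, True), 0.3),
         ((False, True), 0.1), ((False, False), 0.1)]

def sample():
    r = random.random()
    for (p, q), w in cases:
        if r < w:
            return p, q
        r -= w
    return cases[-1][0]

mp_true = mp_n = ac_true = ac_n = 0
for _ in range(100_000):
    p, q = sample()
    # Sam believes "if p, then q" throughout.
    if p:   # MP case: Sam truly believes p and infers q.
        mp_n += 1
        mp_true += q
    if q:   # AC case: Sam truly believes q and infers p.
        ac_n += 1
        ac_true += p

print("MP accuracy:", mp_true / mp_n)  # about 0.375
print("AC accuracy:", ac_true / ac_n)  # about 0.75
```

On these numbers, AC-generated beliefs come out true about twice as often as MP-generated ones.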

Of course, I expect that for most if not all of us MP is a more effective way of getting to truth than AC. But there is no necessity in this. In particular, that MP is a more effective way of getting to truth than AC is not a thesis of logic (but of what? psychology? natural theology?). Nothing surprising about that, of course.

Sunday, January 29, 2012

Grounding

It is normal to think that a disjunctive proposition that is true is grounded in each of its true disjuncts.

This may be false. Let p be the proposition that 2+2=4. Let q be the infinite disjunction p or (p or (p or ...)). Then q is its own second disjunct. Moreover, q is true. But surely what q is grounded in is not q itself but p.

For the same reason, it does not appear correct to say that a conjunction is always partly grounded in at least one of its conjuncts. For instance, take the infinite conjunction: p and (p and (p and ...)). The second conjunct is the conjunction itself, but it does not seem that this conjunction is even partly grounded in itself.

When I came up with the disjunction example today, I thought that I could weaken the disjunctive grounding principle to say that a disjunction is grounded in at least one of its disjuncts. But even that is not clear to me right now. For perhaps we could construct a complex infinite disjunction such that each disjunct is the disjunction itself. But I am less sure that such a disjunction really does exist.

On reflection, I wonder if my examples work. Maybe there is no disjunctive proposition p or (p or (p or ...)), but only the disjunctive proposition ...(((p or p) or p) or p).... In other words, the direction of the infinite nesting may matter. The difference is that the latter disjunctive proposition has a starting point: we take p and disjoin p to it infinitely often. The former one does not.

Another interesting case is: "2+2=4 or the proposition expressed by this sentence is true."

Saturday, January 28, 2012

Book cover image for One Body?

Notre Dame University Press sent me an email asking if I have any ideas for a cover image for my One Body: An Essay in Christian Sexual Ethics. I don't. But perhaps one of my worthy readers does. Any suggestions? Suggestions should be family friendly. Public domain is preferable, of course.

If I follow your image recommendation, I'll give you a year's free Pro membership to instructables.com, as a very small token of my gratitude.

Please post any suggestions in comments. Thank you.

Friday, January 27, 2012

Copper pipe glockenspiel

My 6-year-old son and I built this copper pipe glockenspiel. Full build instructions with more photos are here.

A reason why voting methods are compromises

Voting involves compromise on two levels. On the ground level, a vote involves coming to a compromise decision. But on the meta level, a voting system embodies compromise between different desiderata. Arrow's Theorem is a famous way of seeing the latter point. But there is also another way of seeing it, which in one way goes beyond Arrow's Theorem: while Arrow's Theorem only applies where there are three or more options, what I say applies even in binary cases.

We suffer from both epistemic and moral limitations. Good voting systems are a way of overcoming these, by combining the information offered by us in such a way that no small group of individuals, suffering as it may from epistemic or moral shortcomings, has too much of a say. It is interesting to see that there is an inherent tension between overcoming epistemic and moral limitations.

Consider two models. On both models, a collection of options is offered to a population.

  1. Model 1: Each voter comes up with her honest best estimate of the total utility of each option, and offers a report of her estimate.
  2. Model 2: Each voter comes up with her honest best estimate of the utility for her of each option, and offers a report of her estimate.
On the assumptions that (a) the voters' errors in their estimations are independent Gaussians with mean zero, and we have no information as to who has the bigger variances, (b) the voters accurately report their estimates, and (c) we want to maximize total expected utility (which will be approximately true), there is provably an optimal voting system under both models: we simply arithmetically average the voters' estimates and select the option with the highest average utility estimate (see my earlier post on this for some computer simulation data). Any voting system that departs from this will be suboptimal under these circumstances.

Assuming that whatever people are going to say in a vote is going to be somehow based on their estimates of utility on the whole or utility to them, this averaging system is the best way to leverage the information scattered in the population. Unfortunately, while this is a good way to overcome our epistemic limitations, it does terribly with regard to our moral limitations. If one lies boldly enough, namely comes up with utility estimates that are far more inflated than anybody else's, one controls the outcome of the vote. Let's say that option 2 is the best one for me. Then I simply specify that the utility for option 2 is 10^100000000 and for option 1 is −10^100000000. And of course, there will be an arms race in the population to specify big numbers if there is more than one dishonest member of the population. But in any case, the dishonest will win.
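A tiny simulation makes the manipulability vivid (a hedged sketch; the option count, noise level and helper averaged_winner are all my own assumptions):

```python
import random

random.seed(0)
true_utils = [random.gauss(0, 1) for _ in range(5)]  # 5 options

def averaged_winner(reports):
    # reports[i][j]: voter i's reported utility estimate for option j.
    means = [sum(col) / len(col) for col in zip(*reports)]
    return max(range(len(means)), key=means.__getitem__)

# 20 honest voters: true utility plus Gaussian error.
honest = [[u + random.gauss(0, 0.5) for u in true_utils] for _ in range(20)]
print("honest winner:", averaged_winner(honest))

# One bold liar who wants option 2, whatever its actual utility:
liar = [-1e100] * 5
liar[2] = 1e100
print("winner with one liar:", averaged_winner(honest + [liar]))  # always 2
```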

In other words, the optimal system in the case of honest utility estimates is pretty much the worst system where honesty does not generally hold. A good voting system for morally imperfect voters must cap the effect each voter has. But in capping the effect each voter has, information will in general be lost.

This is most clear in Model 2. We can imagine that an option moderately benefits a significant majority but horrendously harms a minority. Given honest utility reports from everyone and the averaging system, the option is likely to be defeated, since the members of the minority will report enormously negative utilities that will overcome the moderate positive utilities reported by members of the majority. But as soon as one caps the effects of each voter, the information about the enormously negative utilities to the minority will be lost. Model 1 is more helpful (presumably, civic education is how we might get most people to vote according to Model 1), but information will still be lost due to the differences in epistemic access to the total utility. On Model 1, capping will lose us the case where one individual genuinely has information about an enormous negative effect but is unable to convince others of this information. But capping of some sort is necessary because of moral imperfection.
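To illustrate the Model 2 point numerically (my toy numbers, not data): suppose option B moderately benefits 90 voters and horrendously harms 10, against a status quo A worth 0 to everyone.

```python
# Option A: status quo, utility 0 for everyone.
# Option B: +2 for 90 voters, -1000 for 10 voters (all numbers made up).
reports_A = [0.0] * 100
reports_B = [2.0] * 90 + [-1000.0] * 10

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Uncapped averaging hears the minority's enormous harms, and B loses:
print(mean(reports_B) > mean(reports_A))  # False: B averages -98.2

# Cap every report to [-10, 10] to blunt manipulation, and B now wins,
# because the information about the minority's harms is mostly lost:
cap = lambda x: max(-10.0, min(10.0, x))
print(mean(cap(x) for x in reports_B) > mean(cap(x) for x in reports_A))  # True: 0.8 > 0
```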

(The optimal method of utility estimation also faces the problem that we are better at rank orderings than at absolute utilities. This can in principle be overcome to some degree by giving people additional hypothetical options to rank-order and then recovering utility estimates from these.)

A brief way to make the point is this. The more trusting a voting system is, the more information it brings to the table; but the more trusting a voting system is, the worse it does with regard to moral imperfection. A compromise is needed in this regard. And not just in voting.

Thursday, January 26, 2012

Presentism and Epicurus' death argument

Becoming friendless is a harm, even if one does not know that one's last friend has just betrayed one. Likewise, one is harmed when the persons or causes one reasonably cares about are harmed, again whether or not one knows about the harm. But we also, I think, have the intuition that this is a different sort of harm from that which one undergoes when one loses an arm or when one is tortured. Call the first set of harms extrinsic, and the well-being that they detract from extrinsic well-being; call the second set of harms intrinsic. Apart from an incarnation, God is not subject to intrinsic harms, but he may be subject to extrinsic harms, such as when someone he loves (i.e., anyone at all) is harmed.

Now, introduce the intuitive notion of a temporally pure property. A temporally pure property is one that is had by x only in virtue of how x is at the given time. Thus, being circular is temporally pure but being married to a future president of the United States or being fifty years old are temporally impure. (If the fact that x has Fness is a soft fact, in the Ockhamist sense, then F is temporally impure.)

Then:

  1. (Premise) Only the having of an intrinsic property can constitute an intrinsic harm.
  2. (Premise) Ceasing to exist can be an intrinsic harm.
  3. (Premise) If presentism is true, only temporally pure properties can be intrinsic.
  4. (Premise) Ceasing to exist cannot be a property constituted in virtue of how x is at a particular time.
  5. Ceasing to exist cannot be constituted in virtue of one's temporally pure properties. (4 and definition)
  6. If presentism is true, ceasing to exist cannot be an intrinsic property. (3 and 5)
  7. If presentism is true, ceasing to exist cannot be an intrinsic harm. (1 and 6)
  8. Presentism is not true. (2 and 7)

This is of course in the same spirit as Epicurus' argument that death isn't bad because when you're dead, you don't exist and hence can't be badly off, and when you're not dead, you're not dead. But notice that Epicurus' argument fails to show that death isn't extrinsically bad. Also, I formulated the argument in terms of a (hypothetical) cessation of existence rather than death, since in fact death is not a cessation of existence for human beings, and it is not completely clear that death is an intrinsic harm to non-human animals.

Interestingly, the growing block theorist, who thinks only past and present events and things are real, has a similar problem. For if growing block is true, only hard properties (ones that depend only on how things were or are) can be intrinsic properties, and ceasing to exist is not a hard property.

The eternalist, however, can say that the property of being such that one ceases to exist is an intrinsic property, at least on one interpretation of "ceases to exist". It is an intrinsic property of oneself as a temporally extended being, the property of one's life being futureward finite. It is just as much an intrinsic property as the property of being circular or of finite girth. And if someone were to cause one to have the property of one's life being futureward finite, or a more specific property like that of one's life being no more than 54 years long, she would thereby be imposing a harm on one.

And even if the cessation of existence at age 54 as such isn't an intrinsic harm, the eternalist can talk of such intrinsic harms to someone as that one's life does not include any joys after the age of 54, thereby doing some justice to the intuition that cessation of existence is intrinsically harmful.

Wednesday, January 25, 2012

A dilemma for divine command theory

Either God does or does not have moral obligations.

If he has moral obligations, divine command theory seems to be false. Divine command theory comes in two versions: command theory and will theory. On command theory, an action is obligatory if and only if God commands it to one. But no one can impose obligations on himself by commands (one can impose obligations on oneself by promises, of course). On will theory, an action is obligatory if and only if God wills (in a relevant sense) one to do it. But what one wills oneself to do does not impose an obligation. That's all I'll say about this horn, though more can probably be said.

If God has no moral obligations, however, then in particular he has no moral obligation to keep his promises and reveal only truths to us. But the Western monotheistic religions are founded on an utter reliance on God's promises and revelation. Without God having moral obligations, why think that God's promises and revelation are trustworthy? (It would obviously be circular to think so on the basis of God's promises and revelations.) So if God has no moral obligations, Western monotheistic religions are in trouble. But most divine command theorists accept one of the Western monotheistic religions.

Perhaps, though, it is impossible for God to break promises or lie, even though he is under no obligation to keep promises or refrain from lying. But if it is not wrong for him to do these things, why can't he do them? If it's just a brute limitation in what he can do, then that seems to conflict with his omnipotence. Maybe, though, God's inability to break promises or lie follows from some other essential attribute of God.

Perhaps his goodness? But goodness in a context where duty is not at issue, i.e., deontologically unconstrained goodness, does not seem sufficient to rule out breaking promises or lying.

Maybe in the case of an omnipotent being, though, it does. Goodness is opposed to inducing false beliefs in others, since false beliefs are intrinsically bad. So in our case, deontologically unconstrained goodness might lead one to break a promise, because one made the promise in ignorance of some aspect of the consequences of keeping it, and to lie because there is no other way of achieving some good. But an omnipotent and omniscient being is not going to suffer from such limitations. Sometimes the only humanly possible way to save someone's feelings from being hurt is by lying to him, and deontologically unconstrained goodness may lead one to do that. But God can directly will to have someone's feelings not be hurt.

But this line of thought is a dangerous one to the theist. For it is pretty much the same line of thought that leads the atheist to conclude that God, if he existed, would prevent various horrendous evils. In response to the atheist, the theist has to insist that there may very well be goods—perhaps but not necessarily beyond our ken—that are served by not preventing the horrendous evils. But if we are impressed by this line of thought, we will likewise be unimpressed by the thought that whatever end might be accomplished by lying or breaking of promises can be accomplished by an omnipotent and omniscient being without these. In particular, a sceptical theist cannot give the response I gave in the preceding paragraph.

There is a different line of thought, though, that might work better, inspired by Steve Evans' version of divine command theory. In addition to the distinction between permissible and impermissible actions, there is a distinction between virtuous and vicious actions, and it is only the permissible/impermissible distinction that is grounded by divine command theory. God, one can say, is essentially virtuous. But lying and breaking promises are vicious. Hence God can't do these actions, not because they are wrong, but because they are vicious. I think this is the best response to the dilemma, but I am not convinced.

One reason I am not convinced is this line of thought. Suppose that what makes lying and promise-breaking vicious is that these things are wrong. This is actually plausible. Consider this line of thought. A lot of people think that in extreme circumstances it is permissible to lie or break a promise (we might, though, argue that an omnipotent being doesn't end up in such extreme circumstances—this may be a subtly different line of argument from one that I argued against above, I think). They aren't going to say that lying or breaking promises is always vicious—only that it is vicious when it is wrong, and then because it is wrong. A minority of people, including me, think lying is always wrong (I don't know the promise literature, so I won't talk about promises here). They presumably think lying is always vicious. But surely it is always vicious precisely because it is always wrong. If so, then it is quite plausible that lying and promise-breaking are vicious because, and to the extent that, they are wrong. But the divine command theorist who says that they're vicious but not wrong for God cannot take this line.

Another plausible view is that lying and promise-breaking are wrong, when they are wrong, because they are vicious. But again a divine command theorist cannot take this line of thought, because that would allow one to ground wrongness facts in non-deontological virtue facts, and would make divine command theory unnecessary.

What the divine command theorist needs to hold here is that there is no explanatory relationship between the wrongness of lying and promise breaking and the viciousness of these. And that doesn't seem very plausible, though I do not have a knock-down argument against that.

Tuesday, January 24, 2012

Beating Condorcet (well, sort of)

This builds on, but also goes back over the ground of, my previous post.

I've been playing with voting methods, or as I might prefer to call them "utility estimate aggregation methods." My basic model is that there are n options (say, candidates) to choose between and m evaluators ("voters"). The evaluators would like to choose the option that has the highest utility. Unfortunately, the actual utilities of the options are not known, and all we have are estimates of the utilities by all the evaluators.

A standard method for this is the Condorcet method. An option is a Condorcet winner provided that it "beats" every other option, when an option x "beats" an option y provided that a majority of the evaluators estimates x more highly than y. If there is no Condorcet winner, there are further resolution methods, but I will only be looking at cases where there is a Condorcet winner.
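For concreteness, here is a minimal Condorcet-winner check (my sketch, not the code behind the experiments below; estimates[i][j] is evaluator i's utility estimate for option j):

```python
def condorcet_winner(estimates):
    """Return the index of the Condorcet winner, or None if there is none."""
    m, n = len(estimates), len(estimates[0])
    for x in range(n):
        beats_all = all(
            sum(e[x] > e[y] for e in estimates) > m / 2
            for y in range(n) if y != x)
        if beats_all:
            return x
    return None  # a further resolution method would be needed

# Example: 3 evaluators, 3 options; option 0 beats both others.
print(condorcet_winner([[3, 1, 2], [2, 3, 1], [3, 2, 1]]))  # 0
```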

My first method is

  • Method A: Estimate each option's utility with the arithmetical average of the reported utilities assigned to it by all the evaluators, and choose the option with the highest utility.
(I will be ignoring tie-resolution in this post, because all the utilities I will work with are real-numbered, and the probability of a tie will be zero.) This method can be proved to maximize epistemically expected utility under the
  • Basic Setup: Each evaluator's reported estimate of each option's utility is equal to the actual utility plus an error term. The error terms are (a) independent of the actual utilities and (b) normally distributed with mean zero. Moreover, (c) our information as to the variances of the error terms is symmetric between the evaluators, but need not be symmetric between the options (thus, we may know that option 3 has a higher variance in its error terms than option 7; we may also know that some evaluators have a greater variance in their error terms; but we do not know which evaluators have a greater variance than which).

Unfortunately, it is really hard to estimate absolute utility numbers. It is a lot easier to rank order utilities. And that's all Condorcet needs. So in that way at least, Condorcet is superior to Method A. To fix this, modify the Basic Setup to:

  • Modified Setup: Just like the Basic Setup, except that what is reported by each evaluator is not the actual utility plus error term, but the rank order of the actual utility plus error term.
In particular, we still assume that beneath the surface—perhaps implicitly—there is a utility estimate subject to the same conditions. Our method now is
  • Method B: Replace each evaluator's rank ordering with roughly estimated Z-scores by using the following algorithm: a rank of k (between 1 and n) is transformed to f((n+1/2−k)/n), where f is the inverse of the cumulative normal distribution function. Each option's utility is then estimated as the arithmetical average of the roughly estimated Z-scores across the evaluators, and the option with the highest estimated utility is chosen. (See the code sketch after this definition.)
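In code, the rank-to-rough-Z-score step might look like this (a minimal sketch of my reading of the formula, assuming rank 1 is the evaluator's highest estimate; scipy's norm.ppf is the inverse normal cumulative distribution function f):

```python
from scipy.stats import norm

def rough_z_scores(ranks, n):
    # ranks[j]: an evaluator's rank for option j, from 1 (highest) to n.
    return [norm.ppf((n + 0.5 - k) / n) for k in ranks]

def method_b(rankings, n):
    # Average the rough Z-scores across evaluators; pick the highest average.
    zs = [rough_z_scores(r, n) for r in rankings]
    means = [sum(col) / len(col) for col in zip(*zs)]
    return max(range(n), key=means.__getitem__)
```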

Now time for some experiments. Add to the Basic Setup the assumptions that (d) the actual utilities in the option pool are normally distributed with mean zero and variance one, and (e) the variances of all the evaluators' error terms are equal to 1/4 (i.e., standard deviation 1/2). All the experiments use 2000 runs. Because I developed this when thinking about grad admissions, the cases that interest me most are ones with a small number of evaluators and a large number of options, which is the opposite of how political cases work (though unlike in admissions, I am simplifying by looking for just the best option). Moreover, it doesn't really matter whether we choose the optimal option. What matters is how close the actual utility of the chosen option is to the actual utility of the optimal option. The difference in these utilities will be called the "error". If the error is small enough, there is no practically significant difference. Given the normal distribution of option utilities, about 95% of actual utilities are between -2 and 2, so if we have about 20 options, we can expect the best option to have a utility of the order of magnitude of 2. Choosing at random would then give us an average error of the order of magnitude of 2. The tables below give the average errors for the 2000 runs of the experiments. Moreover, so as to avoid having to choose between different resolution methods, I am discarding data from runs during which there was no Condorcet winner, and hence comparing Method A and Method B to Condorcet at its best (interestingly, Method A and Method B also work less well when there is no Condorcet winner). Discarded runs were approximately 2% of runs. Source code is available on request.
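Since the source code is only available on request, here is a rough reconstruction of the experimental setup for Methods A and B (a hedged sketch under assumptions (a)-(e); it omits the Condorcet computation and the discarding of runs with no Condorcet winner):

```python
import random
from scipy.stats import norm

def run(n_evaluators=3, n_options=50, runs=2000):
    err = {"A": 0.0, "B": 0.0}
    for _ in range(runs):
        utils = [random.gauss(0, 1) for _ in range(n_options)]       # (d)
        est = [[u + random.gauss(0, 0.5) for u in utils]             # (e)
               for _ in range(n_evaluators)]
        best = max(utils)
        # Method A: highest summed (equivalently, averaged) reported utility.
        a = max(range(n_options), key=lambda j: sum(e[j] for e in est))
        # Method B: highest average rough Z-score recovered from rank orders.
        zs = []
        for e in est:
            order = sorted(range(n_options), key=lambda j: -e[j])
            rank = {j: k + 1 for k, j in enumerate(order)}
            zs.append([norm.ppf((n_options + 0.5 - rank[j]) / n_options)
                       for j in range(n_options)])
        b = max(range(n_options), key=lambda j: sum(z[j] for z in zs))
        err["A"] += best - utils[a]
        err["B"] += best - utils[b]
    return {k: v / runs for k, v in err.items()}

print(run())  # average errors of the same order as Experiment 1's figures
```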

Experiment 1: 3 evaluators, 50 options.

Condorcet: 0.030
Method A: 0.023
Method B: 0.029
So, with a small number of evaluators and a large number of options, Method A significantly beats Condorcet. Method B slightly beats Condorcet.

Experiment 2: 50 evaluators, 50 options.

Condorcet: 0.0017
Method A: 0.0011
Method B: 0.0015
So we have a similar distribution of values, but of course with a larger number of evaluators, the error is smaller. It is interesting, however, that even with only three evaluators, the error was already pretty small, about 0.03 sigma for all the methods.

Experiment 3: 3 evaluators, 3 options.

Condorcet: 0.010
Method A: 0.007
Method B: 0.029
Method B is much worse than Condorcet and Method A in this case. That's because with three options, the naive Z-score estimation method in Method B fails miserably. With 3 options Method B is equivalent to a very simple method we might call Method C where we simply average the rank order numbers of the options across the evaluators. At least with 3 options, that is a bad way to go. Condorcet is much better, and Method A is even better if it is workable.

Experiment 4: 50 evaluators, 3 options.

Condorcet: 0.0003
Method A: 0.0002
Method B: 0.0159
The badness of Method B for a small number of options really comes across here. Condorcet and Method A really benefit from boosting the number of evaluators, but with only 3 options, Method B works miserably.

So, one of the interesting consequences is that Method B is strongly outperformed by Condorcet when the number of options is small. How small? A bunch of experiments suggests that it's kind of complicated. For three evaluators, Method B catches up with Condorcet at around 12 options. Somewhat surprisingly, for a greater number of evaluators, it needs more options for Method B to catch up with Condorcet. I conjecture that Method B works better than Condorcet when the number of options is significantly greater than the number of evaluators. In particular, in political cases where the opposite inequality holds, Condorcet far outperforms Method B.

One could improve on Method B, whose Achilles heel is the Z-score estimation, by having the evaluators include in their rankings options that are not presently available. One way to do that would be to increase the size of the option pool by including fake options. (In the case of graduate admissions, one could include a body of fake applications generated by a service.) Another way would be by including options from past evaluations (e.g., applicants from previous years). Then these would enter into the Z-score estimation, thereby improving Method B significantly. Of course, the down side of that is that it would be a lot more work for the evaluators, thereby making this unworkable.

Method A is subject to extreme evaluator manipulation, i.e., "strategic voting". Any evaluator can produce any result she desires by just reporting her utilities to swamp the utilities set by others. (The Basic Setup's description of the errors rules this out.) Method B is subject to more moderate evaluator manipulation. Condorcet, I am told, does fairly well. If anything like Method A is used, what is absolutely required is a community of justified mutual trust and reasonableness. Such mutual trust does, however, make possible noticeably better joint choices, which is an interesting result of the above.

So, yes, in situations of great trust where all evaluators can accurately report their utility estimates, we can beat Condorcet by adopting Method A. But that's a rare circumstance. In situations of moderate trust and where the number of candidates exceeds the number of evaluators, Method B might be satisfactory, but its benefits over Condorcet are small.

One interesting method that I haven't explored numerically would be this:

  • Method D: Have each evaluator assign a numerical evaluations to the options on a fixed scale (say, integers from 1 to 50). Adjust the numerical evaluations to Z-scores, using data from the evaluator's present and past evaluations using some good statistical method. Average these estimated Z-scores across evaluators and choose the option with the highest average.
Under appropriate conditions, this method should converge to Method A over time in the Modified Setup. There would be possibilities for manipulation, but they would require planning ahead, beyond the particular evaluation (e.g., one could keep all one's evaluations in a small subset of the scale, and then when one really wants to make a difference, one jumps outside of that small subset).

Monday, January 23, 2012

An optimal voting method (under some generally implausible assumptions)

Let me qualify what I'm going to say by saying that I know next to nothing about the voting literature.

It's time for admissions committees to deliberate. But Arrow's Theorem says that there is no really good voting method with more than two options.

In some cases, however, there is a simple voting method that, with appropriate assumptions, is provably optimal. The method is simply to have each voter estimate a voter-independent utility of every option, and then to average these estimates, and choose the option with the highest average. By a "voter-independent utility", I mean a utility that does not vary from voter to voter. This could be a global utility of the option or it could be a utility-for-the-community of the voter or even a degree to which a certain set of shared goals are furthered. In other words, it doesn't have to be a full agent-neutral utility, but it needs to be the case that the voters are all estimating the same value—so it can depend on the group of voters as a whole.

Now if we are instead to choose n non-interacting options (i.e., the utilities of the options are additive), then we just choose the n with the highest averages. Under some assumptions, these simple methods are optimal. The assumptions are onerous, however.

Voting theory, as far as I can tell, is usually conducted in terms of preferences between options. In political elections, many people's preferences are probably agent-centered: people are apt to vote for candidates they think will do more for them and for those they take to be close to them. In situations like that, the simple method won't work, because people aren't estimating voter-independent utilities but agent-centered utilities.

But there are cases where people really are doing something more like estimating voter-independent utilities. For instance, take graduate admissions or hiring. The voters there really are trying to optimize something like "the objective value of choosing this candidate or these candidates", though of course their deliberations suffer from all sorts of errors.

In such cases, instead of thinking of the problem as a preference reconciliation problem, we can think of it as an estimation problem. We have a set of unknown quantities, the values of the options. If we knew what these quantities are, we'd know what decision to take: we'd go for the option(s) with the highest values. Instead, we have a number of evaluators who are each trying to estimate this unknown. Assume that each evaluator's estimate of the unknown quantity simply adds an independent random error to the quantity, and that the error is normally distributed with mean zero. Assume, further, that either the variances of the normal errors are the same between evaluators or that our information about these variances is symmetric between the evaluators (thus, we may know that evaluators are not equally accurate, but we don't know which ones are the ones who are more accurate). Suppose that I have no further relevant information about the differences in the values of the options besides the evaluators' estimates, and so I have the same prior probability distribution for the value of each option (maybe it's a pessimistic one that says that the option is probably bad).

Given all of the above information, I now want to choose the option that maximizes, with respect to my epistemic probabilities, the expected value of the option. It turns out by Bayes' Theorem together with some properties of normal random variables that the expected value of an option o, given the above information, can be written A·a_0 + B·a(o), where a_0 is the mean value of my baseline estimate for all the options and a(o) is the average of the evaluators' evaluations of o, and where both A and B are positive. It follows that under the above assumptions, if I am trying to maximize expected value, choosing the option(s) with the highest value of a(o) is provably optimal.
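Spelled out, in a simple special case (my own notation, assuming a common prior N(a_0, τ²) for each option's value and a common error variance σ² across the m evaluators; the setting above allows known option-dependent variances, which changes A and B but not the conclusion):

```latex
\mathbb{E}[u(o) \mid \text{evaluations}]
  = \frac{a_0/\tau^2 + m\,a(o)/\sigma^2}{1/\tau^2 + m/\sigma^2}
  = A\,a_0 + B\,a(o),
\qquad
A = \frac{\sigma^2}{\sigma^2 + m\tau^2},
\quad
B = \frac{m\tau^2}{\sigma^2 + m\tau^2}.
```

Since A and B are positive and the same for every option, ranking options by this posterior expected value is the same as ranking them by a(o).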

Now there are some serious problems here, besides the looming problem that the whole business of numerical utilities may be bankrupt (which I think in some cases isn't so big an issue, because numerical utilities can be a useful approximation in some cases). One of them is that one evaluator can skew the evaluations by assigning such enormous utilities to the candidates that her evaluations swamp everyone else's data. The possibility of such an evaluator violates my assumption that each person's evaluation is equal to the unknown plus an error term centered on zero. Such an evaluator is either really stupid, or dishonest (i.e., not reporting her actual estimates of utilities). This problem by itself is enough to ensure that the method can't be used except in a community of justified mutual trust.

A second serious problem is that we're not very good at making absolute utility judgments, and are probably better at rank ordering. The optimality condition requires that we work with utilities rather than rank orderings. But in a case where the number of options is largish—admissions and hiring cases are like that—if we assume that value is normally distributed in the option pool, we can get an approximation to the utilities from an evaluator's rank ordering of the n options. One way to do this is to use the rank ordering to assign estimated percentile ranks to each option, and then convert them to one's best estimate of the normally distributed value (maybe this can just be done by applying the inverse normal cumulative distribution function—I am not a statistician). Then average these between evaluators. Doing this also compensates for any affine shift, such as that due to the exaggerating evaluator in the preceding paragraph. I can't prove the optimality of this method, and it is still subject to manipulation by a dishonest evaluator (say, one who engages in strategic voting rather than reporting her real views).

I think the above can also work under some restrictive assumptions even if the evaluators are evaluating value-for-them rather than voter-independent value.

The basic thought in the above is that in some cases instead of approaching a voting situation as a preference situation, we approach it as a scientific estimation situation.

Thursday, January 19, 2012

Presentist counting

In a posthumous paper, David Lewis shows that one can find a presentist paraphrase of sentences like "There have ever been, are or ever will be n Fs" for any finite n. But his method doesn't work for infinite counting.

It turns out that there is a solution that works for finite and infinite counts, using a bit of set theory. For any set S of times, say that an object x exactly occupies S provided that at every time in S it was, is or will be the case that x exists and at no time outside of S it was, is or will be the case that x exists. For any non-empty set S of times, let n_F(S) be a cardinality such that at every time t in S it was, is or will be the case that there are exactly n_F(S) Fs exactly occupying S. This is a presentist-friendly definition. Let N be any set of abstracta with cardinality n_F(S) (e.g., if we have the Axiom of Choice, we should have an initial ordinal of that cardinality) and let e_F(S) be the set of ordered pairs { <S,x> : x ∈ N }. We can think of the members of e_F(S) as the ersatz Fs exactly occupying S. Let e_F be the union of all the e_F(S) as S ranges over all subsets of times. (It's quite possible that I'm using the Axiom of Choice in the above constructions.) Then "There have ever been, are or ever will be n Fs" can be given the truth condition |e_F| = n.
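A finite toy model of the construction (purely illustrative: the real construction allows arbitrary cardinalities and may need the Axiom of Choice, and for the presentist the occupation facts are given by tensed truths rather than by a stored list):

```python
# Times are 1, 2, 3; each F is recorded by the set of times it exactly occupies.
occupations = [frozenset({1, 2}), frozenset({1, 2}), frozenset({3})]

def n_F(S):
    # n_F(S): how many Fs exactly occupy S.
    return sum(1 for occ in occupations if occ == S)

def e_F_of(S):
    # e_F(S): the ersatz Fs exactly occupying S, i.e. pairs <S, x> with x
    # drawn from a set of abstracta of cardinality n_F(S) (here: integers).
    return {(S, x) for x in range(n_F(S))}

# e_F: the union of the e_F(S) over all sets of times S (sets that no F
# exactly occupies contribute nothing).
e_F = set().union(*(e_F_of(S) for S in set(occupations)))

# "There have ever been, are or ever will be n Fs" gets truth condition |e_F| = n:
print(len(e_F))  # 3
```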

This ersatzist construction suggests a general way in which presentists can talk of ersatz past, present or future objects. For instance, "There were, are or ever will be more Fs than Gs" gets the truth condition: |e_G| < |e_F|. "Most Fs that have ever been, are or will be were, are or will be Gs" gets the truth condition |e_FG| > (1/2)|e_F|, where FG is the conjunction of F with G. I don't know just how much can be paraphrased in such ways, but I think quite a lot. Consequently, just as I think the B-theory can't be rejected on linguistic grounds, it's going to be hard to reject presentism on linguistic grounds.

Wednesday, January 18, 2012

"He who intends the end intends..."

It is a classic maxim that:

  1. He who intends the end intends the means.

Here is a problem. I take a pill to relieve a headache. Unbeknownst to me, the pill relieves the headache by means of numbing certain pain receptors I know nothing about. Plainly, I don't intend to numb these pain receptors, since I don't know anything about them. So I intend the end but don't intend the means.

One might weaken (1):

  2. He who intends the end intends the known means.
This also doesn't work. Suppose I have always taken a pill to relieve a headache. My reasoning has always been: "This pill relieves headaches and has few side-effects. I have good reason to relieve my headache. So I will take this pill." At a certain age, I learned how that pill works. But my knowledge of how that pill works in no way affected my practical reasoning, since it didn't undercut any part of the practical syllogism I employed. But intention is a matter of practical reasoning, so my newly gained knowledge did not affect my intentions. Alternate argument: intentions are explanatory of action, but the knowledge of how that pill works did not change the explanation of my actions, so it did not change my intentions.

Moreover, there are cases where two causal pathways are known to causally contribute to an end, but only one is intended. For instance, take the classic case of bombing the enemy HQ in order to end the war sooner, while accepting that civilians on the streets around the HQ will die. Suppose, for instance, that one expects that the destruction of the enemy HQ in itself hastens the end of the war by a month, but that the deaths of the civilians are expected to hasten the end of the war by another month. The bombing can still be legitimate, as long as one only intends the first of these two means. In fact, it can still be legitimate even if the deaths of the civilians are a greater effect. Imagine that one is planning to bomb the enemy HQ because it hastens the end of the war by a month and one has prudently decided that the proportionality condition in the Principle of Double Effect holds. An analyst then announces that the deaths of the civilians will hasten the end of the war by another two months. Surely the analyst's announcement shouldn't stop one from bombing.

Now the last case may seem a bit unfair. We might say: there are two causal pathways to hastening the end of the war, but only one of them is the means to it. But if we say that, then by "means" we mean "intended means" and (1) becomes:

  3. He who intends the end intends the intended means.
But this is trivial if by "the intended means" we mean "all the intended means" and dubious if we mean "the one and only intended means", since there may be several intended means in an action.

I suggest a very simple alternative repair to (1). Just replace a definite article by an indefinite one:

  4. He who intends the end intends a means.
This is not trivial: it implies that every action has an intended means. One might worry about God's creating ex nihilo. I think there we can stipulate that God's creating A is a means to the existence of A, even if it turns out that God's creating A just is the existence of A (cf. chapter 12 of my PSR book), by generalizing the notion of a means to that of "the way in which the event is made to happen."

(I would expect that (1) would be a translation of some Latin maxim. Latin doesn't have articles, so whatever Latin would be behind (1) might well be understandable as (4).)

Now go back to the original pill case. I don't intend to numb my pain receptors. So what means do I intend? Answer: I don't intend any specific means—I simply intend whatever means it is by which the pill relieves headaches. That's why my intentions don't need to change when I learn how the pill works.

Now consider this wackier case. Suppose that I learn that the way the headache relief pill works is this. There is a homunculus inside me that has the power to relieve my headaches. When I take the pill, I cause horrific pain (much greater than my headache) to the homunculus, and he rushes to relieve my headache, afraid that if he doesn't, I'll take another dose. If I am right that given a normal story about how pain relief works, I need not be intending to numb pain receptors, likewise in this story I needn't be intending to torture the homunculus, even though I know about the homunculus and his pain. However, I do intend whatever means it is by which the pill relieves headaches. And that means is in fact horrific pain for the homunculus. I accomplish my means, and so my accomplishment in fact includes horrific pain for the homunculus. And it is really bad when one's accomplishment is known to have horrific pain for someone else as a part of it.

Tuesday, January 17, 2012

Literalism and inerrantism

In the popular imagination, the doctrines of literalism and inerrantism about Scripture go hand-in-hand. And there may well be a positive correlation between adherence to these doctrines.

But isn't this a strange marriage? Inerrantism is basically the doctrine that every proposition asserted by Scripture is true (perhaps with an "oeconomic necessity" operator applied). On the other hand, literalism is something like the doctrine that narrative sentences in Scripture, with the exception of those that the Bible marks otherwise and those that sufficiently closely stylistically and/or contextually resemble those so marked, are to be understood pretty much the way they would be understood if their vocabulary were mildly modernized and they were embedded in a present-day work of history. (It's clear that literalism is much harder to define than inerrancy—it's a slippery doctrine. It has some characteristic marks, though, such as thinking that Genesis 1 and 2 are meant to be, basically, history.)

An obvious difference is that it would be hard to both be an atheist and accept inerrancy (one would have to have a really wacky interpretation of Scripture), but it is quite possible (and it actually happens, perhaps quite often) for an atheist to be a literalist.

In fact one would expect a negative correlation between adherence to literalism and adherence to inerrantism. If one is an inerrantist, then one of the exegetical tools available to one is an inference from "p is false" to "Scripture does not assert p", and this exegetical tool, together with modern science, should result in the rejection of literalism.

Monday, January 16, 2012

Aliens and the Bible

My nine-year-old daughter suggested that the fact that aliens aren't mentioned in the Bible gave us good reason to think there aren't any aliens. I countered that dolphins aren't mentioned in the Bible either. My daughter noted that kangaroos aren't either, but she thought that aliens were the sort of thing that, if they existed, the Bible would mention. I thought there was something to that idea, but perhaps only a weaker claim can be made: the fact that the Bible doesn't mention aliens gives us a good reason to think that humans aren't going to meet up with them in this life. For if we were going to meet up with them, we would need the sort of ethical guidance that we expect from Scripture.

I don't think this is a very powerful argument against the claim that there will be human-alien contact. After all, as long as the aliens appear to be rational beings subject to moral constraints we have good reason to think that they are in the image and likeness of God just as much as we are, and we can apply Scriptural principles. But I do think, nonetheless, that the silence of Scripture is some evidence against humans meeting up with aliens in this life.

Note added later: I definitely should have included Tradition alongside Scripture. See the comments.

Friday, January 13, 2012

Expressivism and non-doxastic propositional attitudes

One can fear that a certain medical procedure is wrong, one can hope that one's musical composition is beautiful, one can wish that a certain action be permissible, one can intend that one's children will make the right choices, one can be horrified that someone has committed a murder[note 1], one can promise that one will gain the contract in a morally licit way, one can rejoice that the expensive painting one has commissioned is good, etc. All of these are propositional attitudes. But the objects of propositional attitudes are propositions. Hence, that a medical procedure is wrong and that one's musical composition is beautiful (and so on) are propositions, and expressivism is false.

Wednesday, January 11, 2012

Expressing and asserting

Consider the following broadly Wittgensteinian line of thought:
  1. Sentences like "I love you", "This is scary" and "God is all powerful" express love, fear and awe respectively.
  2. Therefore, sentences like "I love you", "This is scary" and "God is all powerful" are not assertions of propositions.
I am happy to grant (1), at least if we qualify the "express" with "typically express". But I think the inference of (2) from (1) is simply a non sequitur, though a tempting one.

Consider this somewhat parallel argument:
  3. A birthday cake expresses one's honoring of the years someone has lived.
  4. Therefore, a birthday cake is not a piece of food.
It is clear that (4) is a non sequitur. Obviously the right thing to say is not that a birthday cake is not a piece of food, but that a birthday cake is a piece of food that expresses one's honoring of the years someone has lived. By analogy, why shouldn't we say that "I love you", "This is scary" and "God is all powerful" are assertions of propositions, assertions which express love, fear and awe respectively? I can express, for instance, love by holding hands, baking a cake or sharing a joke. So why can't I also express love by asserting a relevant proposition, viz., that I have a love for the person? And the same point goes through for the other examples.

In other words, the line of thought (1)-(2) sets up a false dilemma: either these sentences are assertions of a proposition or they are expressions of an attitude. But the natural thing to say is that they are both. It is if anything less surprising that assertions of these very relevant contents should express the attitudes they do than that, say, being on one's knees should express awe or that holding hands should express love.

Taking the sentences in question to be assertions and expressions of the indicated attitudes better fits the data than just taking them to be expressions of the indicated attitudes. For the sentences can be embedded in ways that give purely expressivist accounts great trouble. "If God cannot prevent earthquakes, then God is not all powerful"; "If I love you, then I pursue what I take to be your good"; "Either this is scary or my judgments are completely off."

Tuesday, January 10, 2012

God, service to neighbor and human flourishing

Suppose that there is no God, that human beings are the highest beings relevant to our moral calculus (i.e., there may be aliens somewhere else that are higher than humans, but they don't morally matter). What, then, should one take as the highest aspect of human flourishing? Surely service to fellow humans. But service to fellow humans aims at an end beyond itself, namely our fellow humans' benefit. Now this benefit to our fellow humans cannot primarily consist in enabling them to serve their neighbor, or else the highest aspect of human flourishing consists in helping others to help others to help others ..., which results in vicious regress or circularity. Rather, in the end, our collective service to one another would have to be aimed at something else than service to one another. But if there is no God, then service to one another is the highest part of our flourishing. So it seems that if there is no God, the highest aspect of our flourishing consists in our promoting other, and hence lower, aspects of the flourishing of others. And that doesn't seem right.

Here's another way to see this problem. There is something paradoxical about pursuing the flourishing of others as our central end: what if we all achieved our end? Then our lives would lose what centrally gives them their meaning.

How does the existence of God change things? Well, our service to others in itself is not the highest human good any longer. Loving union with God is the highest human good, and service to others is valuable as it is partly constitutive of one's own union with God and promotes that union for others.

Monday, January 9, 2012

Creative suggestion to improve my Leibniz and Spinoza seminar

I looked at my teaching evaluations from the fall.  There were some useful suggestions.  And one that was particularly amusing: background mood music.

Cardinality and Bayesian regularity

Regularity is the Bayesian thesis that an ideal agent assigns probability zero only to impossible propositions. This creates obvious problems in the case of contingent propositions—such as that an infinitely thin dart will hit such-and-such a location or that a coin tossed infinitely often will always come up heads—that according to the standard probability calculus have probability zero. But maybe the Bayesian can hope that assigning some sort of hyperreal infinitesimal probability will do the trick? Timothy Williamson has a very nice argument that that's not going to work in the case of the coin tossed infinitely often. Here is another argument in the same direction, this one based on cardinalities.

The basic result is a theorem that shows that, assuming the Axiom of Choice, for any totally ordered finitely additive probability measure, there is a cardinality K such that as long as there are at least K mutually exclusive options, at least one of these options will receive probability zero (in fact, all but K of them will receive probability zero). But for any cardinality K, one can find a set of more than K mutually exclusive contingent propositions, for instance the set of propositions that there are exactly n entities, or n spatiotemporally disconnected island universes, where n ranges from 1 to something high enough to guarantee that there will be more than K such propositions.

Now on to the formal setting for my no-go theorem. Ordinary probabilities take real numbers as values. Here we allow that to be generalized. The generalization is this. Our optimistic Bayesian regularist, let us suppose, has some set V with a total ordering <, an identity 0 and an operation + satisfying the following conditions:
  1. a+0=a for all a
  2. if b<c then a+b<a+c for all a, b and c.
Additionally, we have a finitely-additive probability space taking values in V. This is a space X together with an algebra F of "measurable" subsets of X, where F is non-empty and closed under complementation and binary unions, and there is a function P from F to V such that:
  3. P(A) ≥ 0 for all A
  4. P(A ∪ B) = P(A) + P(B) whenever A ∩ B = ∅.
Now comes the main formal result:

Theorem. Suppose that X also has a total ordering < and that (a) every singleton {x} where x is in X is measurable (i.e., a member of F) and (b) every set of the form {z : z < y} for y in X is measurable. Let R be the range of P, i.e., the set of all values P(A) as A ranges over the members of F. Suppose that |X|+1 > |R|. Then there is at least one x such that P({x}) = 0.

Now, it follows from the Axiom of Choice that every set has a total ordering (a claim somewhat weaker than the Axiom of Choice, apparently). Hence:

Corollary. Assuming the axiom of choice, if |X|+1>|V| and every subset of X is measurable, then there is a non-empty subset A of X such that P(A)=0.

In particular, if our probabilities take real values, then as long as we have more than continuum many mutually exclusive alternatives, at least one of them will have probability zero. (This particular result can be improved: all that's needed is that one have more than countably many exclusive alternatives.)

The proof of the theorem is pretty easy. To get a contradiction, suppose P({x}) is never zero. Let Sx = {z : z<x}. Let U be the set of all sets of the form Sx for x in X, together with the set X itself. Then:

Lemma. If A and B are distinct members of U, then P(A) and P(B) are distinct members of R.

Given the Lemma, since U has |X|+1 members, it follows immediately that R must have at least |X|+1 members, which contradicts our assumption that |X|+1 > |R|. All that remains is the proof of the Lemma. Suppose first that A=Sx and B=Sy with x and y distinct. By total ordering, x<y or y<x; suppose x<y (the proof in the other case is the same). Then, using the facts that P({x})>0 and that x is a member of Sy but not of Sx, conditions (1)-(4) yield P(Sy)≥P(Sx)+P({x})>P(Sx), and hence P(A) and P(B) are distinct. Next we need to show that P(Sx) and P(X) are distinct. But that can be shown in much the same way, since x is a member of X but not of Sx.
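The Lemma's mechanism can also be seen in a small finite sketch (again my own illustration): when every singleton has strictly positive mass, the initial segments Sx, together with X itself, receive pairwise distinct probabilities, so the range R already has |X|+1 members.

```python
from fractions import Fraction

# X = {0,1,2,3} with its usual ordering; every singleton mass is positive.
masses = [Fraction(1, 10), Fraction(1, 4), Fraction(1, 20), Fraction(3, 5)]

# P(S_x) is the running total of the masses strictly below x;
# P(X) is the total mass.
values, total = [], Fraction(0)
for m in masses:
    values.append(total)  # P(S_x) for the current x
    total += m
values.append(total)      # P(X)

print([str(v) for v in values])  # ['0', '1/10', '7/20', '2/5', '1']
# |X| + 1 pairwise distinct members of the range R, as the Lemma predicts.
assert len(set(values)) == len(masses) + 1
```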

Friday, January 6, 2012

We are fundamental entities

"I think therefore I am." It's hard to dispute either the argument or the conclusion. But while I undoubtedly exist, do I have to be one of the fundamental objects in the ontology?

Here is a line of thought to that conclusion, somewhat similar to some things I've heard Rob Koons say. Non-fundamental objects are entia rationis, at least in part creatures of our cognitive organization of the world. But we cannot be, even in part, mere creatures of our cognitive organization of the world on pain of circularity. So whatever non-fundamental objects there may be, we are not among them.

I think the controversial claim in the argument may be that non-fundamental entities are entia rationis, but I am not sure. This whole line of argument is difficult for me to think about.

Thursday, January 5, 2012

Moon+ ebook reader for Android

In the summer, I tested ten ebook reader apps for Android, looking for something that worked well with large documents. My best choices were Moon+ and Mantano, but neither was ideal. Moon+ took 20 seconds to open the Summa epub, though it searched it in a speedy 20 seconds, and Mantano opened it almost instantly, though it took 80 seconds to search.

The Moon+ developer has just pointed me to his latest apk of Moon+, which improves the loading speed significantly. It can now load the Summa in 10 seconds on my Archos 43, and probably faster on faster devices. The search seems to be slightly slowed down, to about 23 seconds, but that's still quite decent. Moon+ is now clearly the best reader for large documents if you want searching, and is an all-around excellent reader. So I think I'm close to the point where I can start converting my large library from Plucker to epub.

I've accordingly updated my mini-reviews of epub readers.

Further update: On my new Epic 4G Touch (the Sprint version of the Galaxy S2, though a bit slower), it takes 4 seconds for the latest Moon+ to open the Summa, and 25 to search.

Wednesday, January 4, 2012

If monotheism is true, mereological universalism is false

  1. (Premise) If monotheism is true, there is no entity other than God that has all of God's causal powers.
  2. (Premise) A mereological sum of x and y has all of the causal powers of x.
  3. (Premise) If mereological universalism is true, then for any two distinct entities x and y such that y is not a part (proper or improper) of x, there is a mereological sum of x and y that is distinct from x.
  4. (Premise) If monotheism is true, God exists and I am not a part of God.
  5. (Premise) I exist.
  6. If monotheism and mereological universalism are true, there is a mereological sum of God and me that is distinct from God.  (By 3, 4 and 5)
  7. Every mereological sum of God and me has all of the causal powers of God. (By 2)
  8. If monotheism and mereological universalism are true, there is something distinct from God that has all of the causal powers of God. (By 6 and 7)
  9. If monotheism and mereological universalism are true, monotheism is false. (By 1 and 8 and first order logic)
  10. If monotheism is true, mereological universalism is false. (By 9 and first order logic)
This is a variant on a simpler argument I once blogged that if mereological universalism is true, then there is something greater than God, namely the mereological sum of God and something else, and it's absurd that there be something greater than God.
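Since steps 6-10 are just propositional inferences from the premises, the propositional core of the argument can be mechanically verified (a sketch of my own, not from the post, abbreviating monotheism as M, mereological universalism as U, and "something distinct from God has all of God's causal powers" as E):

```python
from itertools import product

# Atoms: M = monotheism, U = mereological universalism,
# E = something distinct from God has all of God's causal powers.
for M, U, E in product([True, False], repeat=3):
    p1 = (not M) or (not E)    # premise 1:     M -> not-E
    s8 = (not (M and U)) or E  # step 8:        (M and U) -> E
    c10 = (not M) or (not U)   # conclusion 10: M -> not-U
    # On every assignment where premise 1 and step 8 hold,
    # conclusion 10 holds as well: the inference is valid.
    if p1 and s8:
        assert c10
```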

Change

The B-theory of time, according to which the distinctions between past, present and future (possibly unlike the distinctions between earlier-than and later-than) are merely perspectival, is often accused of being a "static theory of time".

But it is clearly a sufficient condition for x to change with respect to a predicate P that x satisfy P at one time and not at another. I am not claiming here that this is what change is. I am only claiming that satisfying a predicate at one time but not at another is sufficient for change. How could something be round at one time and not round at another without having changed in respect of roundness?

But of course it is a part of a typical B-theory that objects satisfy predicates at some but not at other times. In other words, something that is sufficient for change is a part of the B-theory. So how can the B-theory be accused of being static?

Well, it could be the case that a theory T is incompatible with some phenomenon C (say, change) but nonetheless posits a phenomenon A (say, objects satisfying different predicates at different times). Such a theory is metaphysically incoherent, but of course there are metaphysically incoherent theories. So my response to the staticness charge (not the same as a static charge!) against the B-theory is not complete. But I think it shifts the onus of proof. Given that the B-theory of time posits something that clearly entails the phenomenon of change, if the theory is incompatible with the existence of change, the theory is metaphysically incoherent—and that has not been shown by its opponents. And it is too much to ask the B-theorist to prove the coherence of their theory, since showing metaphysical coherence is very hard in metaphysics. (Of course, one can prove a particular formalization of a theory to be formally coherent. And it's not hard to do that with the B-theory or the A-theory. But the question we're interested in is metaphysical coherence, not formal coherence.)

Monday, January 2, 2012

Choice, rationality and contrastivity

Suppose I choose A over B. For me to have chosen A over B, B must have been a relevant alternative. For instance, I am choosing to write this post over doing dishes, but I am not choosing to write this post over plugging in a soldering iron and grabbing its hot tip. Why? Well, I was impressed by some reasons in favor of doing dishes but not impressed by any reasons in favor of grabbing the tip of a hot soldering iron.

To choose A over B, I needed not only a reason to choose A but also a reason to choose B. Moreover, plausibly, choices are contrastive and so are the reasons for them. If so, then the reason to choose B would have to have been a contrastive reason, a reason for choosing B over A. If this is right, then to choose A over B, I need a reason for A over B and a reason for B over A. Now when A rationally dominates B for me, in the sense that any of my reasons for B is at least as much a reason for A, I have no reason for B over A. But lacking a reason for B over A, I cannot choose A over B, paradoxical as that sounds. I may have reason to do A rather than B, but this isn't a matter of choice, because B is not a relevant alternative to A: in the context of a choice between A and B, there are no reasons for B, i.e., no reasons for B over A.

We now have several principles:

  • Rationality of Choice: one can only choose between options for which there are reasons in the context of choice
  • Contrastivity of Reasons: reasons in the context of choice are always reasons for an option over the alternatives
  • Domination Principle: choice between A and B is impossible when every reason for B is at least as much a reason for A
  • Incommensurability Principle: choice between A and B is only possible when there is a reason for each of these that isn't, or isn't as much, a reason for the other.
The Domination and Incommensurability Principles are equivalent, and are basically endorsed by Aquinas. The argument at the beginning of the post shows that Rationality of Choice plus Contrastivity of Reasons implies the Domination and Incommensurability Principles.

An interesting consequence of the Incommensurability Principle is that one's moral psychology had better not endorse both of the following theses:

  • Total Ordering of Strengths: for any two desires d1 and d2, either they are equal in strength, or one is stronger than the other
  • Desires are Reasons: the reasons on the basis of which one chooses are desires and their strengths are the strengths of reasons.
For Total Ordering of Strengths, Desires are Reasons and the Incommensurability Principle together imply that there are no choices.

Humean compatibilists are committed to Desires are Reasons. Humean determinists are committed to Total Ordering of Strengths, since on Humean grounds we can test the strength of a desire by seeing what the agent is determined by her psychological state to choose. If this is right, then if Rationality of Choice and Contrastivity of Reasons are true, Humeanism must be rejected.