Thursday, August 31, 2017

Musings on authority

I have a lot of authority to impose hardships on myself. I can impose hardships on myself in two main ways. I can do something that either is or causes a hardship or risk of hardship to myself. Or I can commit myself to doing something that is or causes me a hardship or risk of hardship (I can commit myself by making a promise or by otherwise putting myself in a position where there is no morally permissible way to avoid the hardship). I have a wide moral latitude to decide which burdens to bear for the sake of which goods, though not an unlimited latitude. The decisions between goods are morally limited by the virtue of prudence. It would be wrong to undertake a 90% risk of death for the sake of a muffin. But it's morally up to me, or at least would be if I had no dependents, whether to undertake a 40% risk of death for the sake of writing a masterpiece. I do have the authority to impose some hardships on my children and my students, but that authority is much more limited: I do not have the authority to impose a 40% risk of death for the sake of writing a masterpiece. My authority to impose hardships on myself is much greater than my authority to impose hardships on others.

One explanation of the difference in the degree of our authority over ourselves and our authority over others is that people's authority over others derives from people's authority over themselves: we give authority over us to others. That is what the contractarian thinks, but it is implausible for familiar reasons (e.g., there aren't enough voluntarily accepted contracts to make contractarianism work). I prefer one of these two stories:

  1. Both authority (of the hardship-imposing kind) over self and authority over others derives from God's authority over us.
  2. Of necessity, some relationships are authority-conferring, and different kinds of relationships are necessarily authority-conferring to different degrees. For instance, identity in a mature person confers great authority of x with respect to x. Parenthood by a mature person of an immature person confers much authority but less than identity of a mature person does.

What about God's authority? On view (1), we would expect God to have more authority to impose hardships than anybody else has, including more authority to impose hardships on us than we have with respect to our own selves. What about on view (2)? That's less clear. We would intuitively expect that the God-creature relationship be more authority-conferring than the parent-child one. But how does it compare to identity? It would be religiously uncomfortable to say that someone has more authority over me than God does, even if I am that someone. Can we give a philosophical explanation for this religious intuition? Maybe, but I'm not yet up to it. I think a part of the story is that all our goods are goods by participation in God, that our telos is a telos-by-participation in God as the ultimate final cause of all.

Suppose we could argue that God has more hardship-imposing authority over ourselves than we have over ourselves. Then I think we would have a powerful tool for theodicy. A crucial question in theodicy is whether it is permissible for God to allow hardship H to me for the sake of good G (for myself or another). We would then have a defeasible sufficient condition for this permissibility: if it would not be immorally imprudent for me to allow H to myself for the sake of G, then it would be permissible for God to allow H to me for the sake of G. This is a much stronger criterion than one that is occasionally used in the literature, namely that if I would rationally allow H to myself for the sake of G, then God can permissibly allow it, too.

Tuesday, August 29, 2017

Present moment ethical egoism

One of the least popular ethical theories is present moment ethical egoism (pmee): you ought to do what produces the state that is best for you at the present moment.

But pmee has a very lovely formal feature: it can be used to simulate every other ethical theory of permissibility, simply by changing the value function, the function that ranks states in terms of how good they are for you at the present moment. To simulate theory T, just assign value 1 to one’s presently choosing an action that T says is permissible and value −1 to one’s presently choosing an action that T says is impermissible. In this way, pmee simulates Aquinas, Kant, virtue ethics, utilitarianism, non-present-moment egoism, etc.
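Here is a minimal sketch, in Python, of how the simulation might go; the "no lying" theory and the option names below are purely illustrative and not anything from the post.

    # Sketch: build a present-moment value function out of an arbitrary theory
    # T's permissibility verdicts, so that pmee's recommendations match T's.

    def pmee_value_function(permitted_by_T):
        def value_for_me_now(action):
            # Choosing a T-permissible action is worth 1 for me right now;
            # choosing a T-impermissible action is worth -1.
            return 1 if permitted_by_T(action) else -1
        return value_for_me_now

    def pmee_permits(action, options, value_for_me_now):
        # pmee: an action is permissible iff no available option produces a
        # state that is better for me at the present moment.
        best = max(value_for_me_now(o) for o in options)
        return value_for_me_now(action) >= best

    T = lambda action: action != "lie"   # a toy theory that forbids lying
    v = pmee_value_function(T)
    options = ["lie", "tell the truth"]
    print(pmee_permits("tell the truth", options, v))  # True
    print(pmee_permits("lie", options, v))             # False

(The simulation presupposes that at least one available option is T-permissible; if T forbade everything on the menu, pmee would still permit something.)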

This formal feature is not shared by non-egoistic consequentialist theories. For the only way a consequentialist theory can simulate a deontological theory is by assigning an overwhelmingly large negative value to wrong choices. But this gives a result incompatible with many deontological theories, namely that you should choose to commit a murder in order to prevent two other people from doing the same.

The formal feature is also not shared by egoistic but not present-moment theories. For on some deontological theories, it is wrong to commit a murder now in order to prevent oneself from choosing two murders later.

Here is another curious thing. Basically, the only present thing in my present control is my present choice. This means that pmee cannot be a consequentialist theory in the typical sense of the word, because all causal consequences take time, and hence every causal consequence within my present control is in the future. In other words, it is the value of the present choice that pmee needs to focus on (both in itself, and in a larger context).

But once we see that it is the value of the choice itself, and not the causal consequences of the choice, that pmee must base a decision on, then given the fact that the most compelling value that a choice has is its moral value, it seems that pmee tells one that one should do what is morally right. And what is morally right cannot be defined in terms of pmee on pain of circularity.

This is a Parfit-like thought, of course. (Maybe even exactly something from Parfit. It’s been a while since I’ve read him.)

Right and wrong choices

Here's a thought I had that might have theodical applications. Agents tend to be more responsible when they choose rightly than when they choose wrongly. For when one chooses wrongly, one acts against reason. And that cannot but contribute to making one less responsible for the action than had one acted following reason.

Friday, August 25, 2017

The blink of an eye response to the problem of evil

I want to confess something: I do not find the problem of evil compelling. I think to myself: Here, during the blink of an eye, there are horrendous things happening. But there is infinitely long life afterwards if God exists. For all we know, the horrendous things are just a blip in these infinitely long lives. And it just doesn’t seem hard to think that over an infinite future that initial blip could be justified, redeemed, defeated, compensated for with moral adequacy, sublated, etc.

It sounds insensitive to talk of the horrors that people live through as a blip. But a hundred years really is the blink of an eye in the face of eternity.

Wouldn’t we expect a perfect being to make the initial blink of an eye perfect, too? Maybe. But even if so, we would only expect it to be perfect as a beginning to an infinite life that we know next to nothing about. And it is hard to see how we would know what is perfect as a beginning to such a life.

This sounds like sceptical theism. But unlike the sceptical theist, I also think the standard theodicies—soul building, laws of nature, free will, etc.—are basically right. They each attempt to justify God’s permission of some or all evils by reference to things that are indeed good: the gradual building up of a soul, the order of the universe, a rightful autonomy, etc. They all have reasonable stories about how the permission of the evils is needed for these goods. There is, to my mind, only one question about these theodicies: Are these goods worth paying such a terrible price, the price of allowing these horrors?

But in the face of an eternal future, I think the question of price fades for two reasons.

First, the goods gained by soul building and free will last for an infinite amount of time. It will forever be true that one has a soul that was built by these free choices. And the value of orderly laws of nature includes an order that is instrumental to the soul building as well as an order that is aesthetically valuable in itself. The benefits of the former order last for eternity, and the beauty of the laws of nature—even as exhibited during the initial blink of an eye—lasts for ever in memory. It is easy for an infinite duration of a significant good to be worth a very high price! (Don’t the evils last in memory, too? Yes, but while memories of beauty should be beautiful things, memories of evil should not be evils—think of the Church’s memory of the Cross.)

Second, it is very easy for God to compensate people during an infinite future for any undeserved evils they suffered during the initial blip. And typically one has no obligation to prevent someone’s suffering when (a) the prevention would have destroyed an important good and (b) one will compensate the person to an extent much greater than the sufferings. The goods pointed out by the theodicies are important goods, even if we worry that permitting the horrors is too high a price. And no matter how terrible these short-lived sufferings were—even if the short period of time, at most about a mere century, “seemed like eternity”—infinite time is ample space for compensation. (Of course, it would be wrong to intentionally inflict undeserved serious harms on someone even while planning to compensate.)

Objection 1: Can one say this while saying that the fleeting goods of our lives yield a teleological argument for the existence of God?

Response: One can. One can be quite sure from a single paragraph in a novel that it is written by someone with great writing skills. But one can never be sure from a single paragraph in a novel that it is not written by someone with great writing skills. (For all we know, the author was parodying bad writing in that paragraph, and the paragraph reflects great skill. But notice that we cannot say about the great paragraph that maybe the author has no skills but was just parodying great writing.)

Objection 2: It begs the question to suppose our future lives are infinite.

Response: No. If God exists, it is very likely that the future lives of all persons, or at the very least of all persons who do not deserve to be annihilated, will be infinite. The proposition that God exists is equivalent to the disjunction: (God exists and there is eternal life) or (God exists and there is no eternal life). If the argument from evil presupposes the absence of eternal life, it is only an argument against the second disjunct. But most of the probability that God exists lies with the first disjunct, given that P(eternal life|God exists) is high. Hence, the argument doesn't do much unless it addresses the first disjunct.
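To make the point concrete, write G for “God exists” and E for “there is eternal life”; the 0.95 below is just an illustrative stand-in for “P(eternal life|God exists) is high”:

$$P(G) = P(G \wedge E) + P(G \wedge \neg E) = P(E \mid G)\,P(G) + P(\neg E \mid G)\,P(G).$$

If, say, P(E|G) = 0.95, then the second disjunct carries only 0.05 · P(G), so even a conclusive refutation of it would leave theism with a probability of at least 0.95 · P(G).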

Two sources of discomfort with substitutionary views of atonement

On one family of theories of the atonement, the harsh treatment that justice called for in the light of our sins is imposed on Christ and thereby satisfies retributive justice. Pretty much everybody who thinks about this is at least a little bit uncomfortable with it—some uncomfortable to the point of moral outrage.

It’s useful, I think, to make explicit two primary sources of discomfort:

  1. It seems unjust to Christ that he bear the pain that our sins deserve.

  2. It seems unjust that we are left unpunished.

And it’s also useful to note that these two sources of discomfort are largely independent of one another.

I think that those who are uncomfortable to the point of moral outrage are likely to focus on (1). But it is not hard to resolve (1) given orthodox Christology and Trinitarianism. The burden imposed on Christ is imposed by the will of the Father. But the will of the Father in orthodox theology is numerically identical with the will of the Son. Thus, the burden is imposed on Christ by his own divine will, which he then obeys in his own human will. It is thus technically a burden coming from Christ’s own will, and a burden coming from one’s own will for the sake of others does not threaten injustice.

While (2) is also a source of discomfort, I think it is less commonly a discomfort that rises to the level of moral outrage. Maybe some people do feel outrage at the idea that a mass murderer could be left unpunished if she repentantly accepted Christ into her life and were baptised. But I think it tends to be a moral fault if one feels much outrage at leniency shown to a repentant malefactor.

I also think (2) is the much harder problem. Note, for instance, that the considerations of consent that dissolve (1) seem to do little to help with (2). Imagine that I was a filthy rich CEO of a corporation that was knowingly dumping effluent that caused the deaths of dozens of people and I was justly sentenced to twenty years’ imprisonment. It would clearly be a failure of justice if I were permitted to find someone else and pay her a hundred million dollars to go to prison in my place—even though there would no doubt be a number of people who would be very eager, of their own free will, to do that for the price.

It would be nice if I could now go on to solve (2). But my main point was to separate out the two sources of discomfort and note their independence.

That said, I did just now have a thought about (2) while talking to a student. Suppose that you do me a very good turn. I say: “How can I ever repay you?” And you say: “Pass it on. Maybe one day you’ll have a chance to do this for someone else. That will be repayment enough.” If I one day pass on the blessing that I’ve received from you, justice has been done to you. The beneficiary of my passing on the blessing rightly substitutes for you. Maybe there is a mirror version of this on the side of punishment?

Sentencing to time served

Sometimes people are sentenced to “time served”: the time they spent in jail prior to trial is retroactively counted as their sentence. But doesn’t justice call for harsh treatment to be imposed as a punishment? The jail time, however, was not imposed on the malefactor as a punishment—it was imposed on a person presumed innocent in order to negate the risk of flight. How can it turn into a punishment retroactively?

Well, one solution is to reject a retributive account of punishment. Another is to say that justice is served by such punishment.

But I think there is a less revisionary approach. Instead of saying that justice calls for harsh treatment to be imposed, say that justice calls for one to ensure that something harsh happens as a result of the crime. Sometimes, one ensures a state of affairs by causally imposing it. But one can also ensure a state of affairs by verifying that it occurs, while being committed to causing it if that were to fail.

This provides a way for a retributivist to accept the intuition that if someone is paralyzed for life as a result of trying to blow a bank vault, there need be no further call to send them to prison—one may be able to ensure more than sufficient punishment simply by verifying that the paralysis occurred as a result of the crime. Another way for the retributivist to accept that intuition would be to say that while we didn’t impose the paralysis on the robber as a punishment, God did. But the move from imposing to ensuring allows the retributivist to avoid mixing up God in human justice here.

Wednesday, August 23, 2017

Eliminating or reducing parthood

Parthood is a mysterious relation. It would really simplify our picture of the world if we could get rid of it.

There are two standard ways of doing this. The microscopic mereological nihilist says that only the fundamental “small” bits—particles, fields, etc.—exist, and that there are no complex objects like tables, trees and people that are made of such bits. (Though one could be a microscopic mereological nihilist dualist, and hold that people are simple souls.)

The macroscopic mereological nihilist says that big things like organisms do exist, but their commonly supposed constituents, such as particles, do not exist, except in a manner of speaking. We can talk as if there were electrons in us, but there are no electrons in us. The typical macroscopic mereological nihilist is a Thomist who talks of “virtual existence” of electrons in us.

Both the microscopic and macroscopic nihilist get rid of parthood at the cost of ridding themselves of large swathes of objects that common sense accepts. The microscopic nihilist gets rid of the things that are commonly thought to be wholes. The macroscopic nihilist gets rid of the things that are commonly thought to be parts.

But there is a third way of getting rid of parthood that has not been sufficiently explored. The third kind of mereological nihilist would neither deny the existence of things commonly thought to be wholes nor of things commonly thought to be parts. Instead, she would deny the parthood relation that is commonly thought to hold between the micro and the macro things. Parts of the space occupied by me are also occupied by my arms, my legs, my heart, the electrons in these, etc. But these things are not parts of me: they are just substances that happen to be colocated with me. I’ll call this “parthood nihilism”.

This is compatible with a neat picture of organ transplants. If my kidney becomes your kidney, nothing changes with respect to parthood. All that changes is the causal interactions: the kidney that previously was causing certain distributional properties in me starts to cause certain distributional properties in you.

An obvious question is what about property inheritance? Whenever my hand is stained purple, I am partly purple. We don’t want this to be just a coincidence. The common-sense parts theorist has a nice explanation: I inherit being partly purple from my hand being partly purple (note that they’re only properly partly purple—they aren’t purple inside the bones, say). My partial purpleness derives from the partial purpleness of a part of me.

But the parthood nihilist can accept this kind of property inheritance and give an account of it: the inheritance is causal. My hand’s being partly purple causes me to be partly purple (which, on this view, is a distributional property of an extended simple). I guess on the standard view, property inheritance is going to be a kind of grounding: my being partly purple occurs in virtue of my hand’s being a part of me and its being partly purple. On the present nihilism, we have simultaneous causation instead of grounding.

Here’s another difficulty: what about gravity (and relevantly similar forces)? I have a mass of 77 kg. If my mass is m₁ and yours is m₂ and the distance between us is r, there is a force pulling you towards me of magnitude Gm₁m₂/r². But why isn’t that force instead equal in magnitude to G(m₁ + m₁₁ + m₁₂ + m₁₃ + ...)m₂/r², where m₁₁, m₁₂, m₁₃, ... are the masses of what common sense calls “my parts” (about five kilograms for my head, four for my left arm, four for my right arm, and so on)? After all, wouldn’t all these objects be expected to exert gravitational force?
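Here is a small numerical sketch of the worry in Python, with made-up masses and distance; the point is just that counting the colocated “parts” as additional gravitating substances inflates the predicted force.

    # Illustrative numbers only.
    G = 6.674e-11  # gravitational constant, N·m²/kg²

    def newton(m1, m2, r):
        # Newtonian attraction between two masses a distance r apart.
        return G * m1 * m2 / r ** 2

    my_mass = 77.0                 # kg, the whole person
    part_masses = [5.0, 4.0, 4.0]  # kg: head, left arm, right arm (and so on)
    your_mass = 70.0               # kg, hypothetical
    r = 2.0                        # metres between us

    standard = newton(my_mass, your_mass, r)
    inflated = newton(my_mass + sum(part_masses), your_mass, r)
    print(standard, inflated)      # the second figure is noticeably larger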

The first two kinds of nihilists have easy answers to the problem. The microscopic nihilist says that only particles have mass, as only particles exist. The macroscopic one says that I am all there is here—the head, arms, etc. don’t exist. The standard common-sense view has a slightly more complicated answer available: gravitational forces only take into account non-inherited mass. But the parthood nihilist can give a variant of this: it’s a law of nature that only fundamental particles produce gravitational forces.

There is a fourth kind of view. This fourth kind of view is no longer a mereological nihilism, but mereological causal reductivism. On the fourth kind of view, for x to be a part of y just is for x to be identical with y or for x to be a proper part of y. And for x to be a proper part of y just is for a certain causal relation to hold between x’s properties and y’s properties.

Spelling out the details of this causal relation is difficult. Roughly, it just says that all of x’s properties and relations cause corresponding properties and relations of y. Thus, x’s being properly partly located in Pittsburgh causes y to be properly partly located in Pittsburgh, while x’s being wholly located in Pittsburgh causes y to be at least partly located in Pittsburgh; x’s being green on its left half causes y’s being green in the left half of the locational property that x causes y to have; and so on.

As I said, it’s difficult to spell out the details of this causal relation. But it is no more difficult than the common-sense parts theorist’s difficulty in spelling out the details of property inheritance. Wherever the common-sense parts theorist says that there is a part-to-whole inheritance between properties, our reductionist requires a causal relation.

The reductionism changes the order of explanation. Suppose my hand is the only green part of me and it gets amputated. According to the common-sense parts theorist, I am no longer partly green because the green hand has stopped being a part of me. According to the reductionist, on the other hand, the hand’s no longer contributing to my greenness makes it no longer a part of me.

The reductionist and parthood nihilist, however, have an extra explanatory burden. Why do all these causal relations cease together? Why is it that when my right hand stops causing me to be partially green, my right hand also stops causing me to have five right fingers? The common-sense parts theorist has a nice story: when the part stops being a part, all the relevant grounding relations stop because a portion of the ground is the fact that the part is a part.

But there is also a causal solution. The common-sense parts theorist has to give a story as to when it is that certain kinds of causal interaction—say, a surgeon using a scalpel—cause a part to stop being a part. For each such kind of causal interaction, the reductionist and parthood nihilist can say that there is a cessation of all the causal relations that the common-sense parts theorist would say go with inheritance.

All in all, I think the reductionist has a simpler fundamental ideology than the standard common-sense inheritance view: the reductionist can reduce parthood to patterns of causation. Her theory is overall not significantly more complicated than the common-sense inheritance theory, but it is more complicated than either microscopic or macroscopic nihilism. But she gets to keep a lot more of common sense than the nihilists do. In fact, maybe she gets to keep all of common sense, except for pretty theoretical claims about the direction of explanation, etc.

The parthood nihilist has most of the advantages of reductionism, but there is some common-sense stuff that she denies—she denies that my arm is a part of me, etc. Overall parthood nihilism is not significantly simpler than reductionism, I think, because the parthood nihilist’s account of how all the relevant causal relations cease together will include all the complications that the reduction includes. So I think reductionism is superior to parthood nihilism.

But I still like macroscopic nihilism more than reductionism.

Beatific vision and scepticism

One way to think of the beatific vision is as a conscious experience whose quale is God himself. Not a representation of God, but the infinite and simple God himself. Such an experience would have a striking epistemological feature. Ordinary veridical experiences are subject to sceptical worries because the qualia involved in them can occur in non-veridical experiences, or at least can have close facsimiles occurring in non-veridical experiences. But while everything is similar to God, the similarity is always infinitely remote. Moreover, there is a deep qualitative difference between God in the beatific vision and other qualia. No other quale is a person or even a substance.

Thus, someone who has the beatific vision is in the position of having an experience that is infinitely different from all other experiences, veridical or not. This, I think, rules out at least one kind of sceptical worry, and hence the beatific vision is also a fulfillment of the Cartesian quest for certainty—though that is far from being the most important feature of the beatific vision.

Tuesday, August 22, 2017

Aquinas and God

It just occurred to me, while grading a comprehensive exam question on Aquinas, how deeply Jewish Aquinas’s approach to God is. In the structure of the Summa Theologiae, the primary attribute of God, the one on which the derivation of all the others depends, is God’s oneness or simplicity.

Is knowledge of very important things very valuable?

It seems right to say that knowledge, as such, is very valuable when the matter at hand is of great personal importance to one. For instance, it seems intuitively right that it is very valuable to know whether the people one loves are alive.

Suppose Bob, Alice’s beloved husband, was in an area where a disaster happened. Carl read a list of survivors, and told Alice that her husband was one of the survivors. But five minutes later Carl realized that he confused Alice with someone else, and that it wasn’t Alice’s husband’s name that he saw on the list. Carl is terrified that he will have to tell Alice that her husband wasn’t on the list. He goes back to the list and, to his great relief, finds that Bob is on the list as well.

Alice correctly believes that her husband survived the disaster. She does not know that her husband survived, though she thinks she knows. She is Gettiered.

If knowing that one’s beloved husband has survived a disaster is very valuable, Carl would have a quite strong reason to go back to Alice and tell her: “I just checked the list again very carefully, and indeed your husband is on it.” (It would be ill-advised, perhaps, for Carl to say to Alice that he had made the mistake the first time, because if he told her that, she would start worrying that he has made a mistake this time, too.) For, Carl’s telling this to Alice would turn her Gettiered belief into knowledge.

But if Carl has any reason to talk to Alice about this again, the reason is not a very strong one. Hence, even in cases which are of extreme personal importance, knowledge as such is not very valuable.

I conclude that knowledge as such is of little if any intrinsic value. Truth and justification, of course, can have great intrinsic value.

Objection: Carl doesn't have to talk to Alice to turn her true belief into knowledge. For he would have informed Alice had he not found Alice's husband on the list. Thus, on certain externalist views where knowledge depends on the right counterfactuals, Carl's second check of the list is sufficient to turn Alice's true belief into knowledge, even without Carl talking to Alice.

Response: Maybe, but the case need not be told that way. Perhaps if Alice's husband were not on the list, Carl wouldn't have had the guts to tell Alice. Or perhaps he would have waited twenty-four hours to check that Alice's husband doesn't appear on an updated list.

Monday, August 21, 2017

Searching for the best theory

Let’s say that I want to find the maximum value of some function over some domain.

Here’s one naive way to do it:

Algorithm 1: I pick a starting point in the domain at random, place an imaginary particle there and then gradually move the particle in the direction where the function increases, until I can’t find a way to improve the value of the function.

This naive way can easily get me stuck in a “local maximum”: a peak from which all movements go down. For a typical bumpy function, most starting points will get one stuck at a merely local maximum.

Let’s say I have a hundred processor cores available, however. Then here’s another simple thing I could do:

Algorithm 2: I choose a hundred starting points in the domain at random, and then have each core track one particle as it tries to move towards higher values of the function, until it can move no more. Once all the particles are stuck, we survey them all and choose the one which found the highest value. This is pretty naive, too, but we have a much better chance of getting to the true maximum of the function.

But now suppose I have this optimization idea:

Algorithm 3: I follow Algorithm 2, except at each time step, I check which of the 100 particles is at the highest value point, and then move the other 99 particles to that location.

The highest value point found is intuitively the most promising place, after all. Why not concentrate one’s efforts there?

But Algorithm 3 is, of course, a bad idea. For now all 100 particles will be moving in lock-step, and will all arrive at the same point. We lose much of the independent exploration benefit of Algorithm 2. We might as well have one core.
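A toy Python sketch of the contrast (the bumpy objective and the step size are made up): Algorithm 2 runs a hundred independent hill climbs and keeps the best local maximum found, whereas Algorithm 3, by herding every particle to the currently best point, effectively collapses into a single climb.

    import math, random

    def f(x):
        # A made-up bumpy objective with many local maxima.
        return math.sin(5 * x) + 0.1 * x

    def hill_climb(x, step=0.01):
        # Algorithm 1: keep moving while a neighbouring point is better.
        while True:
            if f(x + step) > f(x):
                x += step
            elif f(x - step) > f(x):
                x -= step
            else:
                return x

    random.seed(0)
    starts = [random.uniform(0, 10) for _ in range(100)]

    # Algorithm 2: independent climbs from random starts; keep the best found.
    alg2_best = max((hill_climb(x) for x in starts), key=f)

    # Algorithm 3 (roughly): every particle gets moved to the best current
    # point, so the hundred searches reduce to one climb from the best start.
    alg3_best = hill_climb(max(starts, key=f))

    print(f(alg2_best), f(alg3_best))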

But now notice how often in our epistemic lives, especially philosophical ones, we seem to be living by something like Algorithm 3. We are trying to find the best theory. And in journals, conferences, blogs and conversations, we try to convince others that the theory we’re currently holding to is the best one. This is as if each core was trying to convince the 99 to explore the location that it was exploring. If the core succeeded, the effect would be like Algorithm 3 (or worse). Forcing convergence—even by intellectually honest means—seems to be harmful to the social epistemic enterprise.

Now, it is true that in Algorithm 2, there is a place for convergence: once all the cores have found their local maxima, then we have the overall answer, namely the best of these local maxima. If we all had indeed found our local maxima, i.e., if we all had fully refined our individual theories to the point that nothing nearby was better, it would make sense to have a conference and choose the best of all of the options. But in fact most of us are still pretty far from even the locally best theory, and it seems unlikely that we will achieve it in this life.

Should we then all work independently, not sharing results lest we produce premature convergence? No. For one, the task of finding the locally optimal theory is one that we probably can’t achieve alone. We are dealing with functions whose values at the search point cannot be evaluated by our own efforts, and where even exploring the local area needs the help of others. And so we need cooperation. What we need is groups exploring different regions of the space of theories. And in fact we have this: we have the Aristotelians looking for the best theory in the vicinity of Aristotle’s, we have the Humeans, etc.

Except that each group is also trying to convince the others. Is it wrong to do so?

Well, one complicating factor is that philosophy is not just an isolated intellectual pursuit. It has here-and-now consequences for how to live our lives beyond philosophy. This is most obvious in ethics (including political philosophy), epistemology and philosophy of religion. In Algorithm 3, 99 of the cores may well be exploring less promising areas of the search space, but it’s no harm to a core to be exploring such an area. But it can be a serious harm to a person to have false ethical, epistemological or religious beliefs. So even if it were better for our social intellectual pursuits that all the factions be doing their searching independently, we may well have reasons of charity to try to convince others—but primarily where this has ethical, epistemological or religious import (and often it does, even if the issue is outside of these formal areas).

Furthermore, we can benefit from criticism by people following other paradigms than ours. Such criticism may move us to switch to their paradigm. But it can benefit us even if it does not do that, by helping us find the optimal theory in our local region.

And, in any case, we philosophers are stubborn, and this stubbornness prevents convergence. This stubbornness may be individually harmful, by keeping us in less promising areas of the search space, but beneficial to the larger social epistemic practice by preventing premature convergence as in Algorithm 3.

Stubbornness can be useful, thus. But it needs to be humble. And that's really, really hard.

A theological argument for four-dimensionalism

One of the main philosophical objections to dualist survivalism, the view that after death and prior to the resurrection we continue existing as disembodied souls, is the argument that I am now distinct from my soul and cannot come to be identical with my soul, as that would violate the transitivity of identity: my present self (namely, I) would be identical to my future self, the future self would be identical to my future soul, my future soul would still be identical to my present soul, and so my present self would be identical with my present soul.

(This, of course, won’t bother dualists who think they are presently identical with souls, but is a problem for dualists who think that souls are proper parts of them. And the latter is the better view, since I can see myself in the mirror but I cannot see my soul in the mirror.)

It’s worth noting that this provides some evidence for four-dimensionalism, because (a) we have philosophical and theological evidence for dualist survivalism, while (b) the four-dimensionalist has an easy way out of the above argument. For the four-dimensionalist can deny that my future self is ever identical with my future soul, even given dualist survivalism. My future self, like my present self, is a four-dimensional temporally extended entity. Indeed, the future self and the present self are the same four-dimensional entity, namely I. My future soul, like my present soul, is a temporally extended entity (four-dimensional if souls have spatial extension; otherwise, one-dimensional), which is a proper part of me. And, again, my future soul and my present soul are the same temporally extended entity. At no future time is my future self identical with my future soul even given dualist survivalism. At most, it will be the case that some future temporal slices of me are identical with some future temporal slices of my soul.

Thursday, August 17, 2017

Yet another infinite lottery machine

In a number of posts over the past several years, I’ve explored various ways to make a countably infinite fair lottery machine (assuming causal finitism is false), typically using supertasks in some way.

Here’s another, slightly simplified from a construction in Norton. Suppose we toss a countably infinite number of fair coins to make an array with infinitely many infinite rows that could look like this:

HTHTHHHHHHHTTT...
THTHTHTHTHHHHH...
HHHHHTHTHTHTHT...
...

Make sure that nobody looks at the coins after they are tossed. Here’s something that could happen: each row of the array contains one and only one tails. This is unlikely (probability zero; Norton originally said it's nonmeasurable, but that was a mistake, and we're coauthoring a correction to his paper) but possible. Have a robot scan the array—a supertask will be needed—to verify whether this unlikely event has happened. If not, we have failed to make the machine. But if yes, our array will look relevantly like:

HHTHHHHHHHHHHH...
HHHHHTHHHHHHHH...
HHTHHHHHHHHHHH...
...

Continue making sure nobody looks at the coins. Put a robot at the beginning of the first row. Now, you have a countably infinite fair lottery machine that you can use over and over. To use it, just tell the robot to scan the row it’s at, announce the position of the lone tails, and move to the beginning of the next row. Applied to the above array, you will get the sequence of results 3, 6, 3, ….
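Here is a finite-width toy version of the robot's procedure in Python; the genuine machine of course needs infinitely long rows and a supertask to verify them.

    # Finite illustration of the scanning robot on the example array above.
    array = [
        "HHTHHHHHHHHHHH",
        "HHHHHTHHHHHHHH",
        "HHTHHHHHHHHHHH",
    ]

    def verified(rows):
        # The verification step: every row contains exactly one tails.
        return all(row.count("T") == 1 for row in rows)

    def draw(rows):
        # Each use of the machine announces the 1-based position of the lone
        # tails in the current row, then moves on to the next row.
        for row in rows:
            yield row.index("T") + 1

    if verified(array):
        print(list(draw(array)))  # [3, 6, 3]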

Of course, it’s very unlikely that we will succeed in making the machine (the probability is zero). But we might. And once we do, we can run as many paradoxes of infinity as we like. And we might even find ourselves lucky enough to be in a universe where some natural random process has already generated such a lucky array, in which case we don’t even have to flip the coins.

Once we have the machine, we can have lots of fun with it. For instance, it seems antecedently really unlikely that the first hundred times you run the machine, the numbers you get will be in increasing order. But no matter how many numbers you've pulled from the machine, you are all but certain that the next number will be bigger than any of them.

Wednesday, August 16, 2017

Consent and euthanasia

I once gave an argument against euthanasia where the controversial center of the argument could be summarized as follows:

  1. Euthanasia would at most be permissible in cases of valid consent and great suffering.

  2. Great suffering is an external threat that removes valid consent.

  3. So, euthanasia is never permissible.

But the officer case in my recent post about promises and duress suggests that (2) may be mistaken. In that case, I am an officer captured by an enemy officer. I have knowledge that imperils the other officer’s mission. The officer lets me live, however, on the condition that I promise to stay put for 24 hours, an offer I accept. My promise to stay put seems valid, even though it was made in order to avoid great harm (namely, death). It is difficult to see exactly why my promise is valid, but I argue that the enemy officer is not threatening me in order to elicit a promise from me, but rather I am in dangerous circumstances that I can only get out of by making the promise, a promise that is nonetheless valid, much as the promise to pay a merchant for a drink is valid even if one is dying of thirst.

Now, if a doctor were to torture me in order to get me to consent to being killed by her, any death-welcoming words from me would not constitute valid consent, just as promises elicited by threats made precisely to elicit them are invalid. But euthanasia is not like that: the suffering isn’t even caused by the doctor. It doesn’t seem right to speak of the patient’s suffering as a threat in the sense of “threat” that always invalidates promises and consent.

I could, of course, be mistaken about the officer case. Maybe the promise to stay put under the circumstances really is invalid. If so, then (2) could still be true, and the argument against euthanasia stays.

But suppose I am right about the officer case, and suppose that (2) is false. Can the argument be salvaged? (Of course, even if it can’t, I still think euthanasia is wrong. It is wrong to kill the innocent, regardless of consequences or consent. But that’s a different line of thought.) Well, let me try.

Even if great suffering is not an external threat that removes valid consent, great suffering makes one less than fully responsible for actions made to escape that suffering (we shouldn’t call the person who betrayed her friends under torture a traitor). Now, how fully responsible one needs to be in order for one’s consent to be valid depends on how momentous the potential adverse consequences of the decision are. For instance, if I consent to a painkiller that has little in the way of side-effects, I don’t need to have much responsibility in order for my consent to be valid. On the other hand, suppose that the only way out of suffering would be a pill whose owner is only willing to sell it in exchange for twenty years of servitude. I doubt that one’s suffering-elicited consent to twenty years of servitude is valid. Compare how the Catholic Church grants annulments for marriages when responsibility is significantly reduced. Some of the circumstances where annulments are granted are ones where the agent would have sufficient responsibility in order to make valid promises that are less momentous than marriage vows, and this seems right. In fact, in the officer case, it seems that if the promise I made were more momentous than just staying put for 24 hours, it might not be valid. But it is hard to get more momentous a decision than a decision whether to be killed. So the amount of responsibility needed in order to make that decision is much higher than in the case of more ordinary decisions. And it is very plausible that great suffering (or fear of such) excludes that responsibility, or at the very least that it should make the doctor not have sufficient confidence that valid consent has been given.

If this is right, then we can replace (2) with:

  2′. Great suffering (or fear thereof) removes valid consent to decisions as momentous as the decision to die.

And the argument still works.

Monday, August 14, 2017

Difficult questions about promises and duress

It is widely accepted that you cannot force someone to make a valid promise. If a robber after finding that I have no valuables with me puts a gun to my head and says: “I will shoot you unless you promise to go home and bring me all of the jewelry there”, and I say “I promise”, my promise seems to be null and void.

But suppose I am a cavalry officer captured by an enemy officer. The enemy officer is in a hurry to complete a mission, and it is crucial to his military ends that I not ride straight back to my headquarters and report what I saw him doing. He does not, however, have the time to tie me up, and hence he prepares to kill me. I yell: “I give you my word of honor as an officer that I will stay in this location for 24 hours.” He trusts me and rides on his way. (The setting for this is more than a hundred years ago.)

However, if promises made under duress are invalid, then the enemy officer should not trust me. One can only trust someone to do something when in some way a good feature of the person impels them to do that thing. (I can predict that a thief will steal my money if I leave it unprotected, but I don’t trust the thief to do that.) But there is no virtue in keeping void promises, since such promises do not generate moral reasons. In fact, if the promise is void, then I might even have a moral duty to ride back and report what I have seen. One shouldn’t trust someone to do something contrary to moral duty.

Perhaps, though, there is a relevant difference between the case of an officer giving parole to another, and the case of the robber. The enemy officer is not compelling me to make the promise. It’s my own idea to make the promise. Of course, if I don’t make the promise, I will die. But that fact doesn’t make for promise-canceling duress. Say, I am dying of thirst, and the only drink available is the diet gingerale that a greedy merchant is selling and which she would never give away for free. So I say: “I promise to pay you back tomorrow as I don’t have any cash with me.” I have made the promise in order to save my life. If the merchant gives me the gingerale, the promise is surely valid, and I must pay the merchant back tomorrow.

Is the relevant difference, perhaps, that I originate the idea of the promise in the officer case, but not in the robber case? But in the merchant case, I would be no less obligated to pay the merchant back if we had a little dialogue: “Could you give me a drink, as I’m dying of thirst and I don’t have any cash?” – “Only if you promise to pay me back tomorrow.”

Likewise, in the officer case, it really shouldn’t matter who originates the idea. Imagine that it never occurred to me to make the promise, but a bystander suggests it. Surely that doesn’t affect the binding force of the promise. But suppose that the bystander makes the suggestion in a language I don’t understand, and I ask the enemy officer what the bystander says, and he says: “The bystander suggests you give your word of honor as an officer to stay put for 24 hours.” Surely it also makes no moral difference that the enemy officer acts as an interpreter, and hence is the proximate origin of the idea. Would it make a difference if there were no helpful bystander and the enemy officer said of his own accord: “In these circumstances, officers often make promises on their honor to stay put”? I don’t think so.

I think that there is still a difference between the robber case and that of the enemy officer who helpfully suggests that one make the promise. But I have a really hard time pinning down the difference. Note that the enemy officer might be engaged in an unjust war, much as the robber is engaged in unjust robbery. So neither has a moral right to demand things of me.

There is a subtle difference between the robber and officer cases. The robber is threatening your life in order to get you to make the promise. The promise is something that the robber is pursuing as the means to her end, namely the obtaining of jewelry. My being killed will not achieve the robber’s purpose at all. If the robber knew that I wouldn’t make the promise, she wouldn’t kill me, at least as far as the ends involved in the promise (namely, the obtaining of my valuables) go. But the enemy officer’s end, namely the safety of his mission, would be even more effectively achieved by killing me. The enemy officer’s suggestion that I make my promise is a mercy. The robber’s suggestion that I make my promise isn’t a mercy.

Does this matter? Maybe it does, and for at least two reasons. First, the robber is threatening my life primarily in order to force a promise. The enemy officer isn’t threatening my life primarily in order to force a promise: the threat would be there even if I were unable to make promises (or were untrustworthy, etc.). So there is a sense in which the robber is more fully forcing a promise out of me.

Second, it is good for human beings to have a practice of giving and keeping promises in the officer types of circumstances, since such a practice saves lives. But it is not good for human beings to have a practice of giving and keeping promises in the robber types of circumstances, since such a practice only encourages robbers to force promises out of people. Perhaps the fact that one kind of practice is beneficial and the other is harmful is evidence that the one kind of practice is normative for human beings and the other is not. (This will likely be the case given natural law, divine command, rule-utilitarianism, and maybe some other moral theories.)

Third, the case of the officer is much more like the case of the merchant. There is a circumstance in both cases that threatens my life independently of any considerations of promises—dehydration and an enemy officer whom I’ve seen on his secret mission. In both cases, it turns out that the making of a promise can get me out of these circumstances, but the circumstances weren’t engineered in order to get me to make the promise. But the case of the robber is very different from that of the merchant. (Interesting test case: the merchant drained the oases in the desert so as to sell drinks to dehydrated travelers. This seems to me to be rather closer to the robber case, but I am not completely sure.)

Maybe, though, I’m wrong about the robber case. I have to say that I am uncomfortable with voidly promising the robber that I will get the valuables when I don’t expect to do so—there seems to be a lie involved, and lying is wrong even to save one’s life. Or at least a kind of dishonesty. But this suggests that if I were planning on bringing the valuables, I would be acting more honestly in saying it. And that makes the situation resemble a valid promise. Maybe not, though. Maybe it’s wrong to say “I will bring the valuables” when one isn’t planning on doing so, but once one says it, one has no obligation to bring them. I don’t know. (This is related to this sort of a case. Suppose I don’t expect that there will be any yellow car parked on your street tonight, but I assert dishonestly in the morning that there will be a yellow car parked on your street in the evening. In the early afternoon, I am filled with contrition for my dishonesty to you. Normally, I should try to undo the effect of dishonesty by coming clean to the person I was dishonest to. But suppose I cannot get in touch with you. However, what I can do is go to the car rental place, rent a yellow car and park it on your street. Do I have any moral reason to do so? I don’t know. Not in general, I think. But if you were depending on the presence of the yellow car—maybe you made a large bet about it with a neighbor—then maybe I should do it.)

Computer languages

It is valuable, especially for philosophers, to learn languages in order to learn to see things from a different point of view, to think differently.

This is usually promoted with respect to natural languages. But the goal of learning to think differently is also furthered by learning logical languages and computer languages. In regard to computer languages, it seems that what is particularly valuable is learning languages representing opposed paradigms: low-level vs. high-level, imperative vs. functional, procedural vs. object-oriented, data-code-separating vs. not, etc. These make for differences in how one sees things that are if anything greater than the differences in how one sees things across natural human languages.

To be honest, though, I’ve only ever tried to learn one language expressly for the above purpose, and I didn’t persevere: it was Haskell, which I wanted to learn as an example of functional programming. I ended up, however, learning OpenSCAD which is a special-purpose functional language for describing 3D solids, though I didn’t do that to change how I think, but simply to make stuff my 3D printer can print. Still, I guess, I learned a bit about functional programming.

My next computer language task will probably be to learn a bit of Verilog and/or VHDL, which should be fun. I don’t know whether it will lead to thinking differently, but it might, in that thinking of an algorithm as something that is implemented in often concurrent digital logic rather than in a series of sequential instructions might lead to a shift in how I think at least about algorithms. I’ve ordered a cheap Cyclone II FPGA from AliExpress ($17 including the USB Blaster for programming it) to use with the code, which should make the fun even greater.

All that said, I don’t know that I can identify any specific philosophical insights I had as a result of knowing computer languages. Maybe it’s a subtler shift in how I think. Or maybe the goal of thinking philosophically differently just isn’t furthered in these ways. But it’s fun to learn computer languages anyway.

Thursday, August 10, 2017

Uncountable independent trials

Suppose that I am throwing a perfectly sharp dart uniformly randomly at a continuous target. The chance that I will hit the center is zero.

What if I throw an infinite number of independent darts at the target? Do I improve my chances of hitting the center at least once?

Things depend on what size of infinity of darts I throw. Suppose I throw a countable infinity of darts. Then I don’t improve my chances: classical probability says that the union of countably many zero-probability events has zero probability.

What if I throw an uncountable infinity of darts? The answer is that the usual way of modeling independent events does not assign any meaningful probabilities to whether I hit the center at least once. Indeed, the event that I hit the center at least once is “saturated nonmeasurable”, i.e., it is nonmeasurable and every measurable subset of it has probability zero and every measurable superset of it has probability one.

Proposition: Assume the Axiom of Choice. Let P be any probability measure on a set Ω and let N be any non-empty event with P(N) = 0. Let I be any uncountable index set. Let H be the subset of the product space Ω^I consisting of those sequences ω that hit N, i.e., ones such that for some i we have ω(i) ∈ N. Then H is saturated nonmeasurable with respect to the I-fold product measure P^I (and hence with respect to its completion).

One conclusion to draw is that the event H of hitting the center at least once in our uncountable number of throws in fact has a weird “nonmeasurable chance” of happening, one perhaps that can be expressed as the interval [0, 1]. But I think there is a different philosophical conclusion to be drawn: the usual “product measure” model of independent trials does not capture the phenomenon it is meant to capture in the case of an uncountable number of trials. The model needs to be enriched with further information that will then give us a genuine chance for H. Saturated nonmeasurability is a way of capturing the fact that the product measure can be extended to a measure that assigns any numerical probability between 0 and 1 (inclusive) one wishes. And one requires further data about the system in order to assign that numerical probability.

Let me illustrate this as follows. Consider the original single-case dart throwing system. Normally one describes the outcome of the system’s trials by the position z of the tip of the dart, so that the sample space Ω equals the set of possible positions. But we can also take a richer sample space Ω* which includes all the possible tip positions plus one more outcome, α, the event of the whole system ceasing to exist, in violation of the conservation of mass-energy. Of course, to be physically correct, we assign chance zero to outcome α.

Now, let O be the center of the target. Here are two intuitions:

  1. If the number of trials has a cardinality much greater than that of the continuum, it is very likely that O will result on some trial.

  2. No matter how many trials—even a large infinity—have been performed, α will not occur.

But the original single-case system based on the sample space Ω* does not distinguish O and α probabilistically in any way. Let ψ be a bijection of Ω* to itself that swaps O and α but keeps everything else fixed. Then P(ψ[A]) = P(A) for any measurable subset A of Ω* (this follows from the fact that the probability of O is equal to the probability of α, both being zero), and so with respect to the standard probability measure on Ω*, there is no probabilistic difference between O and α.

If I am right about (1) and (2), then what happens in a sufficiently large number of trials is not captured by the classical chances in the single-case situation. That classical probabilities do not capture all the information about chances is something we should already have known from cases involving conditional probabilities. For instance P({O}|{O, α}) = 1 and P({α}|{O, α}) = 0, even though O and α are on par.

One standard solution to the conditional probability case is infinitesimals. Perhaps P({O}) is an infinitesimal ι while P({α}) is exactly zero. In that case, we may indeed be able to make sense of (1) and (2). But infinitesimals are not a good model on other grounds. (See Section 3 here.)

Thinking about the difficulties with infinitesimals, I get this intuition: we want probabilistic information about the single-case event at a higher resolution than classical real-valued probabilities give, but at a lower resolution than infinitesimals give. Here is a possibility. Those subsets of the outcome space that have probability zero also get attached to them a monotone-increasing function from cardinalities to the set [0, 1]. If N is such a subset and f_N is the function attached to it, then f_N(κ) tells us the probability that κ independent trials will yield at least one outcome in N.

We can then argue that f_N(κ) is always 0 or 1 for infinite κ. Here is why. Suppose f_N(κ) > 0. Then κ must be infinite, since if κ is finite then f_N(κ) = 1 − (1 − P(N))^κ = 0, as P(N) = 0. Moreover, 1 − f_N(κ + κ) = (1 − f_N(κ))², since the probabilities of independently missing N multiply, and κ + κ = κ (assuming the Axiom of Choice), so that 1 − f_N(κ) = (1 − f_N(κ))², which implies that f_N(κ) is zero or one. We can come up with other constraints on f_N. For instance, if C is the union of A and B, then f_C(κ) is the greater of f_A(κ) and f_B(κ).

Such an approach could help get a solution to a different problem, the problem of characterizing deterministic causation. To a first approximation, the solution would go as follows. Start with the inadequate story that deterministic causation is chancy causation with chance 1. (This is inadequate, because in the original dart-throwing case, the chance of missing the center is 1, but throwing the dart does not deterministically cause one to hit a point other than the center.) Then say that deterministic causation is chancy causation such that the failure event F satisfies f_F(κ) = 0 for every cardinal κ.

But maybe instead of all this, one could just deny that there are meaningful chances to be assigned to events like the event of uncountably many trials missing or hitting the center of the target.

Sketch of proof of Proposition: The product space Ω^I is the space of all functions ω from I to Ω, with the product measure P^I generated by the measures of the cylinder sets. The cylinder sets are product sets of the form A = ∏_{i ∈ I} A_i such that there is a finite J ⊆ I with A_i = Ω for all i ∉ J, and the product measure of such an A is defined to be ∏_{i ∈ J} P(A_i).

First I will show that there is an extension Q of P^I such that Q(H) = 0 (an extension of a measure is a measure on a larger σ-algebra that agrees with the original measure on the smaller σ-algebra). Any P^I-measurable subset of H will then have Q-measure zero, and hence will have P^I-measure zero, since Q extends P^I.

Let Q_1 be the restriction of P to Ω − N (this is still normalized to 1, as N is a null set). Let Q_1^I be the product measure on (Ω − N)^I. Let Q be the measure on Ω^I defined by Q(A) = Q_1^I(A ∩ (Ω − N)^I). Consider a cylinder set A = ∏_{i ∈ I} A_i where there is a finite J ⊆ I such that A_i = Ω whenever i ∉ J. Then
Q(A) = ∏_{i ∈ J} Q_1(A_i − N) = ∏_{i ∈ J} P(A_i − N) = ∏_{i ∈ J} P(A_i) = P^I(A).
Since P^I and Q agree on cylinder sets, by the definition of the product measure, Q is an extension of P^I.

To show that H is saturated nonmeasurable, we now only need to show that any P^I-measurable set in the complement of H must have probability zero. Let A be any P^I-measurable set in the complement of H. Then A is of the form {ω ∈ Ω^I : F(ω)}, where F(ω) is a condition involving only the coordinates of ω indexed by a fixed countable subset of I (i.e., there is a countable subset J of I and a subset B of Ω^J such that F(ω) holds if and only if ω|J is a member of B, where ω|J is the restriction of ω to J). But unless the condition is entirely unsatisfiable, it cannot exclude the possibility that some coordinate of ω outside that countable set lies in N, which would put ω in H; hence no such set A lies in the complement of H unless it is empty. And that’s all we need to show.

Tuesday, August 8, 2017

Naturalists about mind should be Aristotelians

  1. If non-Aristotelian naturalism about mind is true, a causal theory of reference is true.

  2. If non-Aristotelian naturalism about mind is true, then normative states of affairs do not cause any natural events.

  3. If naturalism about mind is true, our thoughts are natural events.

  4. If a causal theory of reference is true and normative states of affairs do not cause any thoughts, then we do not have any thoughts about normative states of affairs.

  5. So, if non-Aristotelian naturalism about mind is true, then we do not have any thoughts about normative states of affairs. (1-4)

  6. I think that I should avoid false belief.

  7. That I should avoid false belief is a normative state of affairs.

  8. So, I have a thought about a normative state of affairs. (6-7)

  9. So, non-Aristotelian naturalism about mind is not true. (5 and 8)

Note that the Aristotelian naturalist will deny (2), for she thinks that normative states of affairs cause natural events through final (and, less obviously, formal) causation, which is a species of causation.

I think the non-Aristotelian naturalist’s best bet is probably to deny (2) as well, on the grounds that normative properties are identical with natural properties. But there are now two possibilities: either normative properties are identical with natural properties that are also “natural” in the sense of David Lewis (i.e., fundamental or “structural”), or not. The first option, on which normative properties are identical with fundamental or “structural” natural properties, is not plausible outside of Aristotelian naturalism. But if the normative properties are identical with non-fundamental natural properties, then too much debate in ethics and epistemology threatens to become merely verbal in the Ted Sider sense: “Am I using ‘justified’ or ‘right’ for this non-structural natural property or that one?”

"Finite"

In conversation last week, I said to my father that my laptop battery has a “finite number of charge cycles”.

Now, if someone said to me that a battery had fewer than a billion charge cycles, I’d take the speaker to be implicating that it has quite a lot of them, probably between half a billion and a billion. And even besides that implicature, if all my information were that the battery has fewer than a billion charge cycles, then it would seem natural to take a uniform distribution from 0 to 999,999,999 and think that it is extremely likely that it has at least a million charge cycles.
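
To put a number on that (a quick back-of-the-envelope check): on the uniform distribution over 0 to 999,999,999, the probability of at least a million charge cycles is (10^9 − 10^6)/10^9 = 0.999.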

One might think something similar would be the case with saying that the battery has a finite number of charge cycles. After all, that statement is logically equivalent to the statement that it has fewer than ℵ0 charge cycles, which by analogy should implicate that it has quite a lot of them, or at least give rise to a uniform distribution between 0, inclusive, and ℵ0, exclusive. But no! To say that it has a finite number of charge cycles seems to implicate something quite different: it implicates that the number is sufficiently limited that running into the limit is a serious possibility.

Actually, this may go beyond implicature. Perhaps outside of specialized domains like mathematics and philosophy, “finite” typically means something like not practically infinite, where “practically infinite” means beyond all practical limitations (e.g., the amount of energy in the sun is practically infinite). Thus, the finite is what has practical limits. (But see also this aberrant usage.)

Thursday, August 3, 2017

Connected and scattered objects

Intuitively, some physical objects, like a typical organism, are connected, while other physical objects, like a typical chess set spilled on a table, are disconnected or scattered.

What does it mean for an object O that occupies some region R of space to be connected? There is a standard topological definition of a region R being connected (there are no open sets U and V, each having non-empty intersection with R, such that U ∩ V ∩ R is empty and R ⊆ U ∪ V), and so we could say that O is connected if and only if the region R occupied by it is connected.

But this definition doesn’t work well if space is discrete. The most natural topology on a discrete space would make every region containing two or more points be disconnected. But it seems that even if space were discrete, it would make sense to talk of a typical organism as connected.

If the space is a regular rectangular grid, then we can try to give a non-topological definition of connectedness: a region is connected provided that any two points in it can be joined by a sequence of points in the region such that any two successive points are neighbors. But then we need to make a decision as to what points count as neighbors. For instance, while it seems obvious that (0,0,0) and (0,0,1) are neighbors (assuming the points have integer Cartesian coordinates), it is less clear whether diagonal pairs like (0,0,0) and (1,1,1) are neighbors. But we’re doing metaphysics, not mathematics. We shouldn’t just stipulate the neighbor relation. So there has to be some objective fact about the space that decides which pairs are neighbors. And things just get more complicated if the space is not a regular rectangular grid.

Perhaps we should suppose that a physical discrete space would have to come along with a physical “neighbor” structure, which would specify which (unordered, let’s suppose for now) pairs of points are neighbors. Mathematically speaking, this would turn the space into a graph: a mathematical object with vertices (points) and edges (the neighbor-pairs). So perhaps there could be at least two kinds of regular rectangular grid spaces, one in which an object that occupies precisely (0,0,0) and (1,1,1) is connected and another in which such an object is scattered.
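
To make the dependence on the neighbor structure vivid, here is a small sketch (mine; the two adjacency rules are merely illustrative candidates) that checks graph-theoretic connectedness of a set of grid points under face-only adjacency and under face-plus-diagonal adjacency:

    # Sketch: graph-theoretic connectedness of a region of grid points,
    # relative to a choice of neighbor relation.

    def neighbors(p, q, allow_diagonal):
        """Do grid points p and q count as neighbors under the chosen relation?"""
        diffs = [abs(a - b) for a, b in zip(p, q)]
        if allow_diagonal:
            return p != q and max(diffs) == 1   # diagonal (king-move) adjacency allowed
        return sum(diffs) == 1                  # face adjacency only

    def is_connected(region, allow_diagonal):
        """Is every point reachable from the first point by neighbor steps within the region?"""
        region = list(region)
        if len(region) <= 1:
            return True
        seen = {region[0]}
        frontier = [region[0]]
        while frontier:
            p = frontier.pop()
            for q in region:
                if q not in seen and neighbors(p, q, allow_diagonal):
                    seen.add(q)
                    frontier.append(q)
        return len(seen) == len(region)

    object_points = [(0, 0, 0), (1, 1, 1)]
    print(is_connected(object_points, allow_diagonal=False))  # False: scattered
    print(is_connected(object_points, allow_diagonal=True))   # True: connected

Under the first relation the object occupying just (0,0,0) and (1,1,1) comes out scattered, under the second it comes out connected; which relation is the physically real one is exactly what the space itself would have to settle.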

But we can’t use this graph-theoretic solution in continuous spaces. For here is something very intuitive about Euclidean space: if there is a third point c on the line segment between the two points a and b, then a and b are not neighbors, because c is a better candidate for being a’s neighbor than b. But in Euclidean space, there is always such a third point, so no two points are neighbors. Fortunately, in Euclidean space we can use the topological notion.

But now we have a bit of a puzzle. We have a topological notion of a physical object being connected for objects in a continuous space and a graph-theoretic notion for objects in a discrete space. Neither notion reduces to the other. In fact, we can apply the topological one to objects in a discrete space, and conclude that all objects that occupy more than one point are scattered, and the graph-theoretic one to objects in Euclidean space, and also conclude that all objects that occupy more than one point are scattered.

Maybe we should have a disjunctive notion: an object is connected if and only if it is graph-theoretically connected in a space with a neighbor-relation or topologically connected in a space with a topological structure.

That’s not too bad, but it makes the connectedness of a physical object a rather unnatural and gerrymandered notion. Maybe that’s how it has to be.

Or maybe only one of the two kinds of spaces is actually a possible physical space. Perhaps physical space must have a topological structure. Or maybe it must have a graph-theoretic structure.

Here’s a different suggestion. Given a region of space R, we can define a binary relation c_R where c_R(a, b) holds if and only if the laws of nature allow for a causal influence to propagate from a to b without leaving R. Then say that a region of space R is connected provided that any two distinct points in it can be joined by a sequence of points in R such that successive points are c_R-related in one order or the other (i.e., if d_i and d_{i+1} are successive points then c_R(d_i, d_{i+1}) or c_R(d_{i+1}, d_i)).

On this story, if we have a universe with pervasive immediate action at a distance, like in the case of Newtonian gravity, all physical objects end up connected. If we have a discrete universe with a neighbor structure and causal influences can propagate between neighbors and only between them, we recover the graph-theoretic notion.
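
Here is a parallel sketch (again mine, with the influence relations chosen purely for illustration) of the causal-influence proposal: the same reachability check, but run on an arbitrary relation c_R supplied as a function, so that pervasive immediate action at a distance makes any region come out connected:

    # Sketch: a region is connected iff any two of its points are linked by a
    # chain whose successive members are c_R-related in one order or the other.

    def region_connected(points, can_influence):
        """can_influence(a, b): the laws allow influence to go from a to b without leaving the region."""
        points = list(points)
        if len(points) <= 1:
            return True
        seen = {points[0]}
        frontier = [points[0]]
        while frontier:
            a = frontier.pop()
            for b in points:
                if b not in seen and (can_influence(a, b) or can_influence(b, a)):
                    seen.add(b)
                    frontier.append(b)
        return len(seen) == len(points)

    # Pervasive immediate action at a distance (Newtonian-gravity style): connected.
    print(region_connected([(0, 0, 0), (7, 0, 0), (0, 9, 9)], lambda a, b: True))
    # No causal propagation within the region at all: scattered.
    print(region_connected([(0, 0, 0), (7, 0, 0)], lambda a, b: False))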

Wednesday, August 2, 2017

Disconnected bodies and lives

We can imagine what it is like for a living thing to have a spatially disconnected body. First, if we are made of point particles, we all are spatially disconnected. Second, when a gecko is attacked, it can shed a tail. That tail then continues wiggling for a while in order to distract the pursuer. A good case can be made that the gecko’s shed tail remains a part of the gecko’s body while it is wiggling. After all, it continues to be biologically active in support of the gecko’s survival. Third, there is the metaphysical theory on which sperm remains a part of the male even after it is emitted.

But even if all these theories are wrong, we should have very little difficulty in understanding what it would mean for a living thing to have a spatially disconnected body.

What about a living thing having a temporally disconnected life? Again, I think it is not so difficult. It could be the case that when an insect is frozen, it ceases to live (or exist), but then comes back to life when defrosted. And even if that’s not the case, we understand what it would mean for this to be the case.

But so far this has concerned external space and external time. What about internally spatially disconnected bodies and internally temporally disconnected lives? The gecko’s tail and sperm examples work just as well for internal space as for external space. So there is no conceptual difficulty about a living thing having a disconnected body in its inner space.

But it is much more difficult to imagine how an organism could have an internal-time disconnect in its life. Suppose the organism ceases to exist and then comes back into existence. It seems that its internal time is uninterrupted by the external-time interval of non-existence. An external-time interval of non-existence seems to be simply a case of forward time-travel, and time-travel does not induce disconnects in internal time. Granted, the organism may have some different properties when it comes back into existence; for instance, its neural system might be damaged. But that’s just a matter of an instantaneous change in the neural system rather than of a disconnect in internal time. (Note that internal time is different from subjective time. When we go under general anesthesia, internal time keeps on flowing, but subjective time pauses. Plants have internal time but don’t have subjective time.)

This suggests an interesting apparent difference between internal time and internal space: spatial discontinuities are possible but temporal ones are not.

This way of formulating the difference is misleading, however, if some version of four-dimensionalism is correct. The gecko’s tail in my story is four-dimensional. This four-dimensional thing is connected to the four-dimensional thing that is the rest of the gecko’s body. There is no disconnection in the gecko from a four-dimensional perspective. (The point particle case is more complicated. Topologically, the internal space will be disconnected, but I think that’s not the relevant notion of disconnection.)

This suggests an interesting pair of hypotheses:

  • If three-dimensionalism is true, there is a disanalogy between internal time and internal space with respect to living things at least, in that internal spatial disconnection of a living thing is possible but internal temporal disconnection of a living thing is not possible.

  • If four-dimensionalism is true, then living things are always internally spatiotemporally connected.

But maybe these are just contingent truths. Terry Pratchett has a character who is a witch with two spatially disconnected bodies. As far as the book says, she’s always been that way. And that seems possible to me. So maybe the four-dimensional hypothesis is only contingently true.

And maybe God could make a being that lives two lives, each in a different century, with no internal temporal connection between them? If so, then the three-dimensional hypothesis is also only contingently true.

I am not going anywhere with this. Just thinking about the options. And not sure what to think.