Idealized Idealization

On another occasion, I discussed the Aristotelian idea that the act of the mind does not use an organ. In an essay entitled “Immaterial Aspects of Thought,” James Ross claims that he can establish the truth of this position definitively. He summarizes the argument:

Some thinking (judgment) is determinate in a way no physical process can be. Consequently, such thinking cannot be (wholly) a physical process. If all thinking, all judgment, is determinate in that way, no physical process can be (the whole of) any judgment at all. Furthermore, “functions” among physical states cannot be determinate enough to be such judgments, either. Hence some judgments can be neither wholly physical processes nor wholly functions among physical processes.

Certain thinking, in a single case, is of a definite abstract form (e.g. N x N = N²), and not indeterminate among incompossible forms (see I below). No physical process can be that definite in its form in a single case. Adding cases even to infinity, unless they are all the possible cases, will not exclude incompossible forms. But supplying all possible cases of any pure function is impossible. So, no physical process can exclude incompossible functions from being equally well (or badly) satisfied (see II below). Thus, no physical process can be a case of such thinking. The same holds for functions among physical states (see IV below).

In essence, the argument is that squaring a number and similar things are infinitely precise processes, and no physical process is infinitely precise. Therefore squaring a number and similar things are not physical processes.

The problem, unfortunately, is with the major premise. Squaring a number, and similar things, in the way that we in fact do them, are not infinitely precise processes.

Ross argues that they must be:

Can judgments really be of such definite “pure” forms? They have to be; otherwise, they will fail to have the features we attribute to them and upon which the truth of certain judgments about validity, inconsistency, and truth depend; for instance, they have to exclude incompossible forms or they would lack the very features we take to be definitive of their sorts: e.g., conjunction, disjunction, syllogistic, modus ponens, etc. The single case of thinking has to be of an abstract “form” (a “pure” function) that is not indeterminate among incompossible ones. For instance, if I square a number–not just happen in the course of adding to write down a sum that is a square, but if I actually square the number–I think in the form “N x N = N².”

The same point again. I can reason in the form, modus ponens (“If p then q”; “p”; “therefore, q”). Reasoning by modus ponens requires that no incompossible forms also be “realized” (in the same sense) by what I have done. Reasoning in that form is thinking in a way that is truth-preserving for all cases that realize the form. What is done cannot, therefore, be indeterminate among structures, some of which are not truth preserving. That is why valid reasoning cannot be only an approximation of the form, but must be of the form. Otherwise, it will as much fail to be truth-preserving for all relevant cases as it succeeds; and thus the whole point of validity will be lost. Thus, we already know that the evasion, “We do not really conjoin, add, or do modus ponens but only simulate them,” cannot be correct. Still, I shall consider it fully below.

“It will as much fail to be truth-preserving for all relevant cases as it succeeds” is an exaggeration here. If you perform an operation which approximates modus ponens, then that operation will be approximately truth-preserving. It will not be equally truth-preserving and not truth-preserving.

I have noted many times in the past, as for example here, here, here, and especially here, that following the rules of syllogism does not in practice infallibly guarantee that your conclusions are true, even if your premises are in some way true, because of the vagueness of human thought and language. In essence, Ross is making a contrary argument: we know, he is claiming, that our arguments infallibly succeed; therefore our thoughts cannot be vague. But it is empirically false that our arguments infallibly succeed, so the argument is mistaken right from its starting point.

There is also a strawmanning of the opposing position here insofar as Ross describes those who disagree with him as saying that “we do not really conjoin, add, or do modus ponens but only simulate them.” This assumes that unless you are doing these things perfectly, rather than approximating them, you are not doing them at all. But this does not follow. Consider a triangle drawn on a blackboard, and ask which of the following statements is true:

  1. There is a triangle drawn on the blackboard.
  2. There is no triangle drawn on the blackboard.

Obviously, the first statement is true, and the second false. But in Ross’s way of thinking, we would have to say, “What is on the blackboard is only approximately triangular, not exactly triangular. Therefore there is no triangle on the blackboard.” This of course is wrong, and his description of the opposing position is wrong in the same way.

Naturally, if we take “triangle” as shorthand for “exact rather than approximate triangle,” then (2) will be true. And in a similar way, if we take “really conjoin” and so on as shorthand for “really conjoin exactly and not approximately,” then those who disagree will indeed say that we do not do those things. But this is not a problem unless you are assuming from the beginning that our thoughts are infinitely precise, and Ross is attempting to establish that this must be the case, rather than claiming to take it as given. (That is, the summary takes it as given, but Ross attempts throughout the article to establish it.)

One could attempt to defend Ross’s position as follows: we must have infinitely precise thoughts, because we can understand the words “infinitely precise thoughts.” Or in the case of modus ponens, we must have an infinitely precise understanding of it, because we can distinguish between “modus ponens, precisely,” and “approximations of modus ponens”. But the error here is similar to the error of saying that one must have infinite certainty about some things, because otherwise one will not have infinite certainty about the fact that one does not have infinite certainty, as though this were a contradiction. It is no contradiction for all of your thoughts to be fallible, including this one, and it is no contradiction for all of your thoughts to be vague, including your thoughts about precision and approximation.

The title of this post in fact refers to this error, which is probably the fundamental problem in Ross’s argument. Triangles in the real world are not perfectly triangular, but we have an idealized concept of a triangle. In precisely the same way, the process of idealization in the real world is not an infinitely precise process, but we have an idealized concept of idealization. Concluding that our acts of idealization must actually be ideal in themselves, simply because we have an idealized concept of idealization, would be a case of confusing the way of knowing with the way of being. It is a particularly confusing case simply because the way of knowing in this case is also materially the being which is known. But this material identity does not make the mode of knowing into the mode of being.

We should consider also Ross’s minor premise, that a physical process cannot be determinate in the way required:

Whatever the discriminable features of a physical process may be, there will always be a pair of incompatible predicates, each as empirically adequate as the other, to name a function the exhibited data or process “satisfies.” That condition holds for any finite actual “outputs,” no matter how many. That is a feature of physical process itself, of change. There is nothing about a physical process, or any repetitions of it, to block it from being a case of incompossible forms (“functions”), if it could be a case of any pure form at all. That is because the differentiating point, the point where the behavioral outputs diverge to manifest different functions, can lie beyond the actual, even if the actual should be infinite; e.g., it could lie in what the thing would have done, had things been otherwise in certain ways. For instance, if the function is x(*)y = (x + y, if y < 10^40 years, = x + y + 1, otherwise), the differentiating output would lie beyond the conjectured life of the universe.

Just as rectangular doors can approximate Euclidean rectangularity, so physical change can simulate pure functions but cannot realize them. For instance, there are no physical features by which an adding machine, whether it is an old mechanical “gear” machine or a hand calculator or a full computer, can exclude its satisfying a function incompatible with addition, say quaddition (cf. Kripke’s definition of the function to show the indeterminacy of the single case: quus, symbolized by the plus sign in a circle, “is defined by: x quus y = x + y, if x, y < 57, = 5 otherwise”) modified so that the differentiating outputs (not what constitutes the difference, but what manifests it) lie beyond the lifetime of the machine. The consequence is that a physical process is really indeterminate among incompatible abstract functions.

Extending the list of outputs will not select among incompatible functions whose differentiating “point” lies beyond the lifetime (or performance time) of the machine. That, of course, is not the basis for the indeterminacy; it is just a grue-like illustration. Adding is not a sequence of outputs; it is summing; whereas if the process were quadding, all its outputs would be quadditions, whether or not they differed in quantity from additions (before a differentiating point shows up to make the outputs diverge from sums).

For any outputs to be sums, the machine has to add. But the indeterminacy among incompossible functions is to be found in each single case, and therefore in every case. Thus, the machine never adds.

There is some truth here, and some error here. If we think about a physical process in the particular way that Ross is considering it, it will be true that it will always be able to be interpreted in more than one way. This is why, for example, in my recent discussion with John Nerst, John needed to say that the fundamental cause of things had to be “rules” rather than e.g. fundamental particles. The movement of particles, in itself, could be interpreted in various ways. “Rules,” on the other hand, are presumed to be something which already has a particular interpretation, e.g. adding as opposed to quadding.

On the other hand, there is also an error here. The prima facie sign of this error is the statement that an adding machine “never adds.” Just as according to common sense we can draw triangles on blackboards, so according to common sense the calculator on my desk can certainly add. This is connected with the problem with the entire argument. Since “the calculator can add” is true in some way, there is no particular reason that “we can add” cannot be true in precisely the same way. Ross wishes to argue that we can add in a way that the calculator cannot because, in essence, we do it infallibly; but this is flatly false. We do not do it infallibly.
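Kripke’s quus function, quoted above, makes the indeterminacy point concrete enough to sketch in code. The following is a minimal illustration; the function names and sample range are my own choices, and only the definition itself (x quus y = x + y if x, y < 57, and 5 otherwise) comes from the quoted text:

```python
# Kripke's "quus" alongside ordinary addition. THRESHOLD is Kripke's
# cutoff of 57; Ross's variant would push it past the machine's lifetime.
THRESHOLD = 57

def plus(x, y):
    return x + y

def quus(x, y):
    # x quus y = x + y, if x, y < THRESHOLD; = 5 otherwise
    if x < THRESHOLD and y < THRESHOLD:
        return x + y
    return 5

# Any finite record of outputs drawn from below the threshold satisfies
# both functions equally well, so the record alone cannot settle which
# function the machine "really" computes.
observed = [(a, b) for a in range(10) for b in range(10)]
print(all(plus(a, b) == quus(a, b) for a, b in observed))  # prints True
```

The two functions only come apart at inputs the machine may never receive: plus(60, 2) is 62, while quus(60, 2) is 5. This is exactly the sense in which the finite outputs underdetermine the function.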

Considered metaphysically, the problem here is ignorance of the formal cause. If physical processes were entirely formless, they indeed would have no interpretation, just as a formless human (were that possible) would be a philosophical zombie. But in reality there are forms in both cases. In this sense, Ross’s argument comes close to saying “human thought is a form or formed, but physical processes are formless.” Since in fact neither is formless, there is no reason (at least established by this argument) why thought could not be the form of a physical process.

 


The Self and Disembodied Predictive Processing

While I criticized his claim overall, there is some truth in Scott Alexander’s remark that “the predictive processing model isn’t really a natural match for embodiment theory.” The theory of “embodiment” refers to the idea that a thing’s matter contributes in particular ways to its functioning; it cannot be explained by its form alone. As I said in the previous post, the human mind is certainly embodied in this sense. Nonetheless, the idea of predictive processing can suggest something somewhat disembodied. We can imagine the following picture of Andy Clark’s view:

Imagine the human mind as a person in an underground bunker. There is a bank of labelled computer screens on one wall, which portray incoming sensations. On another computer, the person analyzes the incoming data and records his predictions for what is to come, along with the equations or other things which represent his best guesses about the rules guiding incoming sensations.

As time goes on, his predictions are sometimes correct and sometimes incorrect, and so he refines his equations and his predictions to make them more accurate.

As in the previous post, we have here a “barren landscape.” The person in the bunker originally isn’t trying to control anything or to reach any particular outcome; he is just guessing what is going to appear on the screens. This idea also appears somewhat “disembodied”: what the mind is doing down in its bunker does not seem to have much to do with the body and the processes by which it is obtaining sensations.

At some point, however, the mind notices a particular difference between some of the incoming streams of sensation and the rest. The typical screen works like the one labelled “vision.” And there is a problem here. While the mind is pretty good at predicting what comes next there, things frequently come up which it did not predict. No matter how much it improves its rules and equations, it simply cannot entirely overcome this problem. The stream is just too unpredictable for that.

On the other hand, one stream labelled “proprioception” seems to work a bit differently. At any rate, extreme unpredicted events turn out to be much rarer. Additionally, the mind notices something particularly interesting: small differences in its predictions do not seem to make much difference to accuracy. Or in other words, if it takes its best guess and then arbitrarily modifies it, as long as this is by a small amount, the modified guess will be just as accurate as the original would have been.

And thus if it modifies it repeatedly in this way, it can get any outcome it “wants.” Or in other words, the mind has learned that it is in control of one of the incoming streams, and not merely observing it.

This seems to suggest something particular. We do not have any innate knowledge that we are things in the world and that we can affect the world; this is something learned. In this sense, the idea of the self is one that we learn from experience, like the ideas of other things. I pointed out elsewhere that Descartes is mistaken to think the knowledge of thinking is primary. In a similar way, knowledge of self is not primary, but reflective.

Helen Keller writes in The World I Live In (XI):

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory.

When I wanted anything I liked, ice cream, for instance, of which I was very fond, I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I “thought” and desired in my fingers.

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me.

Helen Keller’s experience is related to the idea of language as a kind of technology of thought. But the main point is that she is quite literally correct in saying that she did not know that she existed. This does not mean that she had the thought, “I do not exist,” but rather that she had no conscious thought about the self at all. Of course she speaks of feeling desire, but that is precisely as a feeling. Desire for ice cream is what is there (not “what I feel,” but “what is”) before the taste of ice cream arrives (not “before I taste ice cream”).

 

The Practical Argument for Free Will

Richard Chappell discusses a practical argument for free will:

1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs. libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

  1. If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.
  2. If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will [from 1].
  3. If I have libertarian free will, then it is good to believe that I have it.
  4. If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]
  5. It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for several reasons. First, in order to avoid confusion about the meaning of “ought”. Second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not, will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not, has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.

Semi-Parmenidean Heresy

In his book The Big Picture, Sean Carroll describes the view which he calls “poetic naturalism”:

As knowledge generally, and science in particular, have progressed over the centuries, our corresponding ontologies have evolved from quite rich to relatively sparse. To the ancients, it was reasonable to believe that there were all kinds of fundamentally different things in the world; in modern thought, we try to do more with less.

We would now say that Theseus’s ship is made of atoms, all of which are made of protons, neutrons, and electrons–exactly the same kinds of particles that make up every other ship, or for that matter make up you and me. There isn’t some primordial “shipness” of which Theseus’s is one particular example; there are simply arrangements of atoms, gradually changing over time.

That doesn’t mean we can’t talk about ships just because we understand that they are collections of atoms. It would be horrendously inconvenient if, anytime someone asked us a question about something happening in the world, we limited our allowable responses to a listing of a huge set of atoms and how they were arranged. If you listed about one atom per second, it would take more than a trillion times the current age of the universe to describe a ship like Theseus’s. Not really practical.

It just means that the notion of a ship is a derived category in our ontology, not a fundamental one. It is a useful way of talking about certain subsets of the basic stuff of the universe. We invent the concept of a ship because it is useful to us, not because it’s already there at the deepest level of reality. Is it the same ship after we’ve gradually replaced every plank? I don’t know. It’s up to us to decide. The very notion of “ship” is something we created for our own convenience.

That’s okay. The deepest level of reality is very important; but all the different ways we have of talking about that level are important too.

There is something essentially pre-Socratic about this thinking. When Carroll talks about “fundamentally different things,” he means things that differ according to their basic elements. But at the same time the implication is that only things that differ in this way are “fundamentally” different in the sense of being truly or really different. But this is a quite different sense of “fundamental.”

I suggested in the linked post that even Thales might not really have believed that material causes alone sufficiently explained reality. Nonetheless, there was a focus on the material cause as being the truest explanation. We see the same focus here in Sean Carroll. When he says, “There isn’t some primordial shipness,” he is thinking of shipness as something that would have to be a material cause, if it existed.

Carroll proceeds to contrast his position with eliminativism:

One benefit of a rich ontology is that it’s easy to say what is “real”–every category describes something real. In a sparse ontology, that’s not so clear. Should we count only the underlying stuff of the world as real, and all the different ways we have of dividing it up and talking about it as merely illusions? That’s the most hard-core attitude we could take to reality, sometimes called eliminativism, since its adherents like nothing better than to go around eliminating this or that concept from our list of what is real. For an eliminativist, the question “Which Captain Kirk is the real one?” gets answered by, “Who cares? People are illusions. They’re just fictitious stories we tell about the one true world.”

I’m going to argue for a different view: our fundamental ontology, the best way we have of talking about the world at the deepest level, is extremely sparse. But many concepts that are part of non-fundamental ways we have of talking about the world–useful ideas describing higher-level, macroscopic reality–deserve to be called “real.”

The key word there is “useful.” There are certainly non-useful ways of talking about the world. In scientific contexts, we refer to such non-useful ways as “wrong” or “false.” A way of talking isn’t just a list of concepts; it will generally include a set of rules for using them, and relationships among them. Every scientific theory is a way of talking about the world, according to which we can say things like “There are things called planets, and something called the sun, all of which move through something called space, and planets do something called orbiting the sun, and those orbits describe a particular shape in space called an ellipse.” That’s basically Johannes Kepler’s theory of planetary motion, developed after Copernicus argued for the sun being at the center of the solar system but before Isaac Newton explained it all in terms of the force of gravity. Today, we would say that Kepler’s theory is fairly useful in certain circumstances, but it’s not as useful as Newton’s, which in turn isn’t as broadly useful as Einstein’s general theory of relativity.

A poetic naturalist will agree that both Captain Kirk and the Ship of Theseus are simply ways of talking about certain collections of atoms stretching through space and time. The difference is that an eliminativist will say “and therefore they are just illusions,” while the poetic naturalist says “but they are no less real for all of that.”

There are some good things about what Carroll is doing here. He is right of course to insist that the things of common experience are “real.” He is also right to see some relationship between saying that something is real and saying that talking about it is useful, but this is certainly worth additional consideration, and he does not really do it justice.

The problematic part is that, on account of his pre-Socratic tendencies, he is falling somewhat into the error of Parmenides. The error of Parmenides was to suppose that being can be, and can be thought and said, in only one way. Carroll, on account of confusing the various meanings of “fundamental,” supposes that being can be in only one way, namely as something elemental, but that it can be thought and said in many ways.

The problem with this, apart from the falsity of asserting that being can be in only one way, is that no metaphysical account is given whereby it would be reasonable to say that being can be thought and said in many ways, given that it can be in only one way. Carroll is trying to point in that direction by saying that our common speech is useful, so it must be about real things; but the eliminativist would respond, “Useful to whom? The things that you are saying this is useful for are illusions and do not exist. So even your supposed usefulness does not exist.” And Carroll will have no valid response, because he has already admitted to agreeing with the eliminativist on a metaphysical level.

The correct answer to this is the one given by Aristotle. Material causes do not sufficiently explain reality, but other causes are necessary as well. But this means that the eliminativist is mistaken on a metaphysical level, not merely in his way of speaking.

Technology and Culture

The last two posts have effectively answered the question raised about Scott Alexander’s account of cultural decline. What could be meant by calling some aspects of culture “less compatible with modern society?” Society tends to change over time, and some of those changes are humanly irreversible. It is entirely possible, and in fact common, for some of those irreversible changes to stand in tension with various elements of culture. This will necessarily tend to cause cultural decay at least with respect to those elements, and often with respect to other elements of culture as well, since the various aspects of culture are related.

This happens in a particular way with changes in technology, although technology is not the only driver of such irreversible change.

It would be extremely difficult for individuals to opt out of the use of various technologies. For example, it would be quite difficult for Americans to give up the use of plumbing and heating, and a serious attempt to do so might lead to illness or death in many cases. And it would be still more difficult to give up the use of clothes, money, and language. Attempting to do so, assuming that one managed to preserve one’s physical life, would likely lead to imprisonment or other forms of institutionalization (which would make it that much more difficult to abandon the use of clothes).

Someone might well respond here, “Wait, why are you bringing up clothes, money, and language as examples of technology? Clothes and money seem more like cultural institutions than technology in the first place, and language seems to be natural to humans.”

I have already spoken of language as a kind of technology. And with regard to clothes and money, it is even more evident that in the concrete forms in which they exist in our world today they are tightly intertwined with various technologies. The cash used in the United States depends on mints and printing presses, actual mechanical technologies. And if one wishes to buy something without cash, this usually depends on still more complex technology. Similar things are true of the clothes that we wear.

I concede, of course, that the use of these things is different from the use of the machines that make them, or that, as in the case of credit cards, support their use, although there is less distinction in the latter case. But I deliberately brought up things which look like purely cultural institutions in order to note their relationship with technology, because we are discussing the manner in which technological change can result in cultural change. Technology and culture are tightly intertwined, and can never be wholly separated.

Sarah Perry discusses this (the whole post is worth reading):

Almost every technological advance is a de-condensation: it abstracts a particular function away from an object, a person, or an institution, and allows it to grow separately from all the things it used to be connected to. Writing de-condenses communication: communication can now take place abstracted from face-to-face speech. Automobiles abstract transportation from exercise, and allow further de-condensation of useful locations (sometimes called sprawl). Markets de-condense production and consumption.

Why is technology so often at odds with the sacred? In other words, why does everyone get so mad about technological change? We humans are irrational and fearful creatures, but I don’t think it’s just that. Technological advances, by their nature, tear the world apart. They carve a piece away from the existing order – de-condensing, abstracting, unbundling – and all the previous dependencies collapse. The world must then heal itself around this rupture, to form a new order and wholeness. To fear disruption is completely reasonable.

The more powerful the technology, the more unpredictable its effects will be. A technological advance in the sense of a de-condensation is by its nature something that does not fit in the existing order. The world will need to reshape itself to fit. Technology is a bad carver, not in the sense that it is bad, but in the sense of Socrates:

First, the taking in of scattered particulars under one Idea, so that everyone understands what is being talked about … Second, the separation of the Idea into parts, by dividing it at the joints, as nature directs, not breaking any limb in half as a bad carver might.

Plato, Phaedrus, 265D, quoted in Notes on the Synthesis of Form, Christopher Alexander.

The most powerful technological advances break limbs in half. They cut up the world in an entirely new way, inconceivable in the previous order.

Now someone, arguing much in Chesterton’s vein, might say that this does not have to happen. If a technology is damaging in this way, then just don’t use it. The problem is that often one does not have a realistic choice not to use it, as in my examples above. And still less does one have a choice about interacting with people who use the new technology, and interacting with those people will itself change the way that life works. And as Robin Hanson noted, there is no global human power that decides whether or not a technology gets introduced into human society. This happens rather by the uncoordinated and unplanned decisions of individuals.

And this is sufficient to explain the tendency towards cultural decline. The constant progress of technology results, and results of necessity, in constant cultural decline. And thus we fools understand why the former days were better than these.

The Error of Parmenides

Parmenides entirely identified “what can be” and “what can be thought”:

Come now, I will tell thee—and do thou hearken to my saying and carry it away— the only two ways of search that can be thought of. The first, namely, that It is, and that it is impossible for it not to be, is the way of belief, for truth is its companion. The other, namely, that It is not, and that it must needs not be,— that, I tell thee, is a path that none can learn of at all. For thou canst not know what is not—that is impossible— nor utter it; . . . . . . for it is the same thing that can be thought and that can be.

As I pointed out here, this error comes from an excessive identification of the way a thing is known with the way a thing is. But he does this only in a certain respect. We evidently think that some things are not other things, and that there are many things. So it would be easy enough to argue, “It is the same thing that can be thought and that can be. But we can think that one thing is not another, and that there are many things. So one thing can fail to be another, and there can be many things.” And this argument would be valid, and pretty reasonable for that matter. But Parmenides does not draw this conclusion and does not accept this argument. So his claim that what can be thought and what can be are the same must be taken in a more particular sense.

His position seems to be that “to be” has one and only one real meaning, in such a way that there is only one way for a thing to be. Either it is, or it isn’t. If it is, it is in the only way a thing can be; and if it is not, it is not in the only way a thing can be. But this means that if it is not, it is not at all, in any way, since there is only one way. And in this case it is not “something” which is not, but nothing. Thus, given this premise, that there is only one way to be, Parmenides’s position would be logical.

In reality, in contrast, there is more than one way to be. Since there is more than one way to be, there can be many things, where one thing is in one way, and another thing is in another way.

Even granting that there is more than one way to be, Parmenides would object at this point. Suppose there is a first being, existing in a first way, and a second being, existing in a second way. Then the first being does not exist in the second way, and the second being does not exist in the first way. So if we say that “two beings exist,” how do they exist? The two do not exist in the first way, but only the first one does. Nor do the two exist in the second way, but only the second one does. And thus, even if Parmenides grants for the sake of argument that there is more than one way to be, he can still argue that this leads to something impossible.

But this happens only because Parmenides has not sufficiently granted the premise that there is more than one way to be. As I pointed out in the discussion of being and unity, when two things exist, the two are a pair, which is being in some way, and therefore also one in some way; thus the two are “a pair” and not “two pairs.” So the first being is in one way, and the second being is in a second way, but the two exist in still a third way.

The existence of whole and part results from this, along with still more ways of being. “The two” are in a certain respect the first, and in a certain respect the second, since otherwise they would not be the two.

Thus we could summarize the error of Parmenides as the position that being is, and can be thought and said, in only one way, while the truth is that being is, and can be thought and said, in many ways.

Language as Technology

Genesis tells the story of the Tower of Babel:

Now the whole earth had one language and the same words. And as they migrated from the east, they came upon a plain in the land of Shinar and settled there. And they said to one another, “Come, let us make bricks, and burn them thoroughly.” And they had brick for stone, and bitumen for mortar. Then they said, “Come, let us build ourselves a city, and a tower with its top in the heavens, and let us make a name for ourselves; otherwise we shall be scattered abroad upon the face of the whole earth.” The Lord came down to see the city and the tower, which mortals had built. And the Lord said, “Look, they are one people, and they have all one language; and this is only the beginning of what they will do; nothing that they propose to do will now be impossible for them. Come, let us go down, and confuse their language there, so that they will not understand one another’s speech.” So the Lord scattered them abroad from there over the face of all the earth, and they left off building the city. Therefore it was called Babel, because there the Lord confused the language of all the earth; and from there the Lord scattered them abroad over the face of all the earth.

The account suggests that language is a cause of technology, as when the Lord says, “this is only the beginning of what they will do; nothing that they propose to do will now be impossible for them.”

But it is possible to understand language here as a technology in itself, one which gives rise to other technologies. It is a technology by which men communicate with each other. In the story, God weakens the technology, making it harder for people to communicate with one another, and therefore making it harder for them to accomplish other goals.

But language is not just a technology that exists for the sake of communication; it is also a technology that exists for the sake of thought. As I noted in the linked post, our ability to think depends to some extent on our possession of language.

All of this suggests that in principle the idea of technological progress could apply to language itself, and that such progress could correspondingly be a cause of progress in truth. The account in Genesis suggests some of the ways this could happen: to the degree that people develop better means of understanding one another, whether we speak of people speaking different languages or of people already speaking the same language, they will be better able to work together towards the goal of truth, and thus better able to attain it.