UNTANGLING THE PUZZLE OF CAUSALITY Julian L. Simon INTRODUCTION Especially since David Hume addressed himself to the subject first in 1739 and again in 1748, the concept of causality as used in scientific work has tantalized philosophers and has occasioned furious controversies. All that could be agreed upon has been lack of agreement. Dissatisfaction with prevailing views of causality and the need for a new and better concept has recently reappeared in the field of artificial intelligence, as researchers have attempted to show "common sense" and imitate aspects of our "intuitive" understanding of the world (Waldrop, 1987, p. 1297). In leisurely personal fashion this essay offers what I believe to be an intellectually satisfactory, as well as practical and useful, method of handling the causality concept in social-scientific work. The key is to not ask what a causal relationship is, because in principle one cannot arrive at a useful statement of properties of the relationship of cause to effect. Rather, we should ask which characteristics a relationship must have for us to usefully choose to call it "causal". The cue for this shift away from thinking in terms of properties is Einstein's break with the conventional notion of time, and his adoption of the new viewpoint that underlay his discovery of the idea of relativity. Reorienting one's mind in this fashion turns out to be very difficult, however, even after Einstein has shown us the path. Bits and pieces of the approach presented here may be foreshadowed in Hume and in other writers discussed below, depending upon how you read them. But even if this approach is not completely novel, stating it explicitly and systematically must be new. (Perhaps the reader will more quickly forgive this ado about credit when it is remembered that Hume--whom friends such as Adam Smith thought to be as admirable a person as has lived--confessed a yearning for scholarly distinction. 
In his extraordinary though brief deathbed autobiography, he spoke of "my love of literary fame, my ruling passion..." If Hume could be so up-front about his yearning, who am I to be bashful?) I'll lead up to the main ideas by way of some intellectual and personal history, in hopes that a chronological development will be easier to follow than a purely logical and abstract presentation; perhaps it will also constitute a useful case study of the byways through which scholarly thinking passes. And to complement the personal-chronological presentation I shall occasionally employ the casual and slangy prose of oral storytelling. The story begins in college. My curiosity about philosophy seemed to me quite ordinary. My first course in the subject was not a systematic survey but rather a gallop through Western thought of recent centuries. I enjoyed it, and did much of the extensive reading. But the exam was my last of five, and I therefore let slide my cramming for it. Then I found I had lost my class notes, which were unusually important in that course. A philosophy major upstairs lent me his notebook for a few hours, but it didn't seem to help much. I then had the inspiration to ask him to teach me some impressive-sounding German words that I could insert into my exam essays. He did so. And the next day, for the first and last time in my life, I BS'ed on an exam. The result? During my first three semesters I had not received an A for a course, perhaps because I never addressed myself to getting grades as a game, or because I could never write an exam answer that was squarely on the beaten track. This time I got an A, and my phil major friend upstairs, who had been doing very well in all his classes and understood the material thoroughly, got a C. Lest one put the incident down to the inadequacies of a third-rate institution, this happened at Harvard College. 
The Navy ROTC (my ticket to college) had a system whereby at the end of each semester we returned our used text books; they constituted a pool from which students would draw the next semester. The book room was run rather loosely, and we were not discouraged from taking books that might be "relevant" to a course. Nor was there strict monitoring about book return. So it was not unusual for one of us to take a book on "extended loan", the loan records being in our heads. Once I happened to see Positivism by Richard von Mises. Triggered by a stray remark in the philosophy course, the title piqued my interest and I liberated the book. When I got around to reading it some time later, it greatly influenced my thinking, perhaps because Mises was not doctrinaire to the point of considering poetry and religion simply metaphysical nonsense, as did some Positivists. My amateur interest in philosophy continued through three years in the Navy, a year writing advertising copy just off Madison Avenue, and then into graduate school in business. I found the philosophy of scientific methods particularly interesting, especially the concept of the operational definition which, like Positivism, focuses on what can be observed and measured. It battles against vague concepts that cannot be pinned down. I had been lucky enough to have learned about operational definition in my undergraduate major of experimental psychology, a field in which it greatly helped clarify such concepts as morale and intelligence. 
Late in my graduate study -- which continued beyond a one-year Master's degree in business only because the University of Chicago was then hard-up for students to fill its forward-driving business school and therefore offered fat financial assistance and a remarkably undemanding PhD program (changed soon afterward), and because I had a romantic reason for hanging around awhile longer -- I caught an intellectual fever and embarked on writing a philosophical system based on the core idea of creativity. Anyone who has had the fever of a philosophical or religious system will testify that it is a powerful experience. Intoxicated, you find that any fact or idea you run across can be interpreted and fitted into your system, and in turn the new element seems to buttress the system. At some point, however, you descend from the heights long enough to recognize a flaw in the system, perhaps its unworkability or inapplicability or inconsistency. You then crash, hopefully leaving a bit of new wisdom along with the wreckage. After the PhD diversion, I opened a mail-order business, as I had originally intended. I never imagined that I had any unusual intellectual power, in contrast to the shooting stars at college who made junior Phi Bete and got summas on their senior theses. As an undistinguished though competent student amid a great many talented students, and as a person who made practically no close contacts with the faculty, there was no opportunity for my greatest strength -- my copious flow of new ideas -- to be recognized by a teacher and to elicit the suggestion that perhaps I could be a productive and valuable scholar. And my parents needed me to start a business. But after a couple of years, I found that I did not like the business of business, though I took much satisfaction in providing jobs to two middle-aged women and my father. 
Realizing that I most enjoyed the writing parts of my business day, I got myself invited to talk about the mail-order business to a Columbia University class of a professor notorious for grasping every opportunity to avoid teaching his classes himself. Preparing and giving the lecture was so exhilarating that becoming a professor came to my mind, as it never had before. Then, while exploring possibilities for the next academic year, I took a part-time job as an economist at a liquor trade association. The place was a backwater. They wanted an economist just to fill an office, answer a few phone calls, and from time to time prepare an exhibit for a state liquor-tax hearing. A key issue in liquor-tax hearings is the extent to which liquor sales are affected by a change in price. Looking over the available studies, I was struck by their lack of solidity; none of the many methods tried, many of them extremely complex, seemed persuasive. It occurred to me that with a simple set of adjustments, the sales experience observed before and after state changes in liquor tax laws should yield a reliable answer. But before I had an opportunity to do the research I was fired, for reasons I never understood; the likeliest explanation is that my poking around in the files for data of various kinds that would support basic research in the economics and marketing of liquor was interpreted as suspicious activity. The next year I began teaching advertising at the University of Illinois. (To faculty recruiting chairmen, I appeared to be simply a failed businessman, and my services were not in much demand.) I soon did the calculations for the liquor study, and after writing them up I sent the study to Econometrica, then the highest-powered mathematical-statistical journal. To my surprise and delight, the article was accepted for publication. 
This was amazing because the article contained not a single equation in algebra (though it did contain an equation in plain English), probably the first time in the history of the journal that such a "mathematically unsophisticated" article had been published. The explanation for the happy accident of the article being accepted is that the referee was the world-class statistician Herman Wold, far too capable a man to be put off by lack of mathematical dazzle. And he requested the editor to put me in touch with him because he wanted to tell me about a similar piece of work he had done in Sweden with postal rates, though he had never analysed the method or written it up. Now finally causality: In the course of our correspondence in 1964, Wold enclosed a mimeo draft of a paper he had written, ostensibly about causality in statistical investigations. Flattered by the thought that he sought comments from me, I read the paper carefully. Try as I would, however, I could not figure out just what the term "causality" meant in the context of that paper. I later understood that Wold's work, as with considerable other useful work starting with Paul Lazarsfeld and continuing through Nobel prize winner Herbert Simon, addresses the problem of which among a given set of variables should be considered to be causal. These writers have developed a body of statistical techniques for unraveling the skein of causality in available statistical data series. They study how to disentangle the relationships within a complex set of variables, and how to understand the _relative_ causal status of a _closed set_ of relationships. Their works _assume_ causal relationships within the system and consider _which variable causes which_, i.e., the _direction_ of causality. This view is clearly stated in the advertisement for a 1988 book by Glymour et al. 
about "a computer program which uses artificial intelligence techniques to help investigators discover causal models -- that is, systems of linear equations -- that are consistent with a body of correlational data being probed." In contrast, my aim, as was Hume's, is to classify whether or not one variable causes another. That is, my purpose is to better understand the nature of the relationship between two given variables, though with reference to other relevant variables, of course. The Lazarsfeld-H. Simon-Wold analysis resembles the advances made by experimenters in biology and agriculture who have developed methods for effectively arranging experiments so as to determine which among a set of fellow-traveling variables are the operative ones, e.g., is it the paper or the tobacco that causes cancer? Implicit in such experiments, without explicit justification, is the notion that if you manipulate an independent variable and the dependent variable changes in response, the former may be called "causal". As we shall later see, however, this implicit definition does not cover some crucial situations, which therefore require a more explicit and more extended treatment of the concept of causality. After wrestling with Wold's paper for a while, I went back to David Hume's seminal discussion. (If the word "seminal" ever is appropriate, it is now; Hume's treatment of the concept is indeed the seed from which all subsequent work has grown.) While Hume's discussion is not perfectly clear in all respects, and may even be a bit self-contradictory in some secondary issues, this much is crystal clear: All that can ever be known about events is what can be observed. And the most that can be observed is that there is a "constant conjunction" between events, a statistical correlation. Hume specifically denied that a causal relationship can be established by a priori logical analysis of the features of events or objects. That is, the Lazarsfeld-H. 
Simon-Wold line of thought cannot provide the needed concept of causality, though it may assist our thinking in other ways. Cleansing though Hume's analysis may have been, it is not fully satisfactory, because both in scientific work and in everyday life we are prepared to call "causal" some correlations but not others. A statement that the flight of birds overhead precedes rain seems to be a different sort of statement than is the statement that actuating the starter of the automobile precedes the starting of the engine; indeed, we behave very differently toward these two statements. And there seems to be a difference in meaning between the empirical relationship of prices on the Dutch stock exchange to the number of houses built in the U.S., and the empirical relationship of mortgage interest rates in the U.S. to the number of houses built in the U.S. This difference between statements that are only predictions and those that seem to have additional meaning leads us to continue wrestling with the concept of causality. It is that nut that I sought to crack. When puzzled in one's scientific work, it often is sound strategy to examine a set of concrete examples of the phenomenon in which you are interested, rather than simply reflecting on the discussions of the topic that are found in the scientific literature; confining oneself to the scientific literature rather than looking out the window or in the street for problems and for examples of interesting phenomena is a perverse tendency in (at least) the social sciences, I believe. Therefore, I doodled up a variety of situations in business and economics, some of which we would ordinarily refer to as causal and some which we would not, to look for the elements which discriminate between the two classes. 
Here I was influenced by that strain of philosophy sometimes called natural language analysis which, as I understand it, asks that we clarify a concept by finding out what it generally means in everyday parlance; this is not to say that we cannot or should not use terms in a restricted technical sense, but we should not fool ourselves by thinking that our scientific or philosophical terms refer to phenomena to which in fact they do not commonly refer. More precisely, I examined a set of situations where there seems to be scientific consensus about whether the term "causal" should or should not be used. My aim was to find criteria for classifying instances into those that are conventionally called "causal" and those that are not. If well-chosen, these criteria should accurately identify situations in which the term is and is not used. The next step after the development of criteria is to put these criteria into such form that they can be used as a guide for social-scientific work, a sort of check-off list. The aim is to improve scientific communication. If a social scientist uses the term in such a manner that there will be consensus among his/her colleagues about what the scientist claims to say, his/her meaning will be clearer. Such a set of criteria constitutes what might reasonably be called a "working definition" (a less loaded term for the key idea in the operational definition) of the term "causality". But even the term "working definition" is not acceptable to some contemporary philosophers, and therefore I shall stick to the term "criteria set". The key element that this criteria set has in common with an operational definition is that the definition consists of a set of reasonably unambiguous instructions for the scientist to follow. The output of the instructions is a judgment about whether or not a particular item falls into the category in question--the category being "causal relationship," in this case. 
It is important to distinguish this set of criteria from the long-time object of philosophical search -- definition of causality by reference to its material or physical or "existential" or "ontological" properties. That is, we must refrain from asking what causality "is". In many complex and ambiguous issues, the term "is" (and its related terms such as "are" and "be") is (sic) one of the greatest sources of confusion in English. One reason is that "is" sometimes indicates equivalence ("Two and two is four" and "A car is an automobile") and sometimes a connector ("Jack is skinny"). "Is" is related to some of the grave paradoxes in logic that required unraveling by the pathbreaking discoveries of Bertrand Russell and Alfred North Whitehead. It is a fascinating exercise to try to speak or write without using any version of "is," or related terms such as "exist". The result is a purified English which its inventor, David Bourland, Jr., calls "E-prime".1 Hume discovered the impossibility of a definition of causality in terms of physical properties. But Hume did not replace the material-property definition with a concept that fits the needs of the working scientist. Recognizing that a material-properties definition could not work, some philosophers tried to define "causality" by inspection of its logical properties as, for example, the formidable "counter-factual conditional". We can skip the details because there is now rather general agreement that this device has not worked, a main reason being that "causal" is not mainly a term in formal theories but rather belongs primarily to the context of empirical research methods. Later I think it will be clear why such attempts must fail. A denotative definition of causality is implicit in scientific training. The scientist observes examples of relationships which other scientists do or do not label "causal" relationships. 
But denotation of "causality" apparently is not enough, or else there would never be arguments about whether a given relationship should be called a causal relationship. Definition by giving synonyms is a sort of indirect denotation, and like denotation it may help, but it surely is not enough. Yet mere synonymization, too, has often been offered as the solution, resulting in even more befuddlement. For example, Hubert Blalock says that "causality is conceived [in his discussion] as involving the notion of production, i.e., causes produce effects..." [1964, p. 173]. I find this very unhelpful; it purports to explain the matter even though it really does not. Einstein's study of Special Relativity pointed in a radically different direction. It was only after Einstein had forsworn property definitions, especially of simultaneity, that he could make his crucial discovery. Bridgman explained how the development of pre-Einsteinian physics had been held back because of the old habit of defining concepts in terms of their properties: "Before Einstein, the concept of simultaneity was defined in terms of properties. It was a property of two events, when described with respect to their relation in time, that one event was either before the other, or after it, or simultaneous with it. Einstein now subjected the concept of simultaneity to a critique, which consisted essentially in showing that the operations which enable two events to be described as simultaneous involve measurements on the two events made by an observer, so that 'simultaneity' is, therefore, not an absolute property of the two events and nothing else, but must also involve the relation of events to the observer" (1927, pp. 7-8). Similarly, the appropriate concept of causality is a concept framed in terms of the operating criteria for applying the label "causal" to a relationship, rather than in terms of the properties of a relationship and/or some underlying reality. 
Just as with the case of simultaneity, one needs a great wrench of the mind to accept this point of view. The trouble is that all of us have continually used the concept of causality in our everyday life without great confusion, able to get by with a vague intuitive understanding of when it is appropriate and when it is not, because most everyday situations are relatively straightforward in this regard. The concept of physical causality causes no difficulty when a small boy throws a baseball through a window (though it may be difficult to adjudge causality if we are trying to fix moral or legal responsibility). Therefore we are not accustomed to the difficulty of applying the term usefully in scientific situations that are not similarly straightforward. It was Einstein's great achievement to have made the first such recorded wrench of mind when he weaned himself away from property definitions of time and simultaneity, and thereby taught us how to accomplish similar liberations with respect to other troublesome concepts. The difficulty of the task for physics and for Einstein -- it took him a decade to think it through -- should put us on our guard about how difficult it is for each of us to rearrange our thinking in similar manner, either about causality or about time. And indeed, I have had little success in getting this view of the causality concept accepted or even noticed (Simon, 1970), perhaps because I underestimated the mind-wrenching difficulty even for clear thinkers, even after an extended explanation of the matter. Indeed, Einstein thought about the concept on exactly the same lines. He agrees that closed-system deductive logic cannot supply the appropriate concept. "Hume saw clearly that certain concepts, as for example that of causality, cannot be deduced from the material of experience by logical methods" (1949, p. 13). 
Or more generally, "Hume saw that concepts which we must regard as essential, such as, for example, causal connection, cannot be gained from material given to us by the senses" (1954, p. 21). And Einstein then goes on to assert that the appropriate causal concept should depend upon our scientific needs. "All concepts, even those which are closest to experience, are from the point of view of logic freely chosen conventions, just as is the case with the concept of causality" (1949, p. 13). Max Planck, too, asserted that the choice of causality concept should be on pragmatic grounds. "In his attempt to build up his hypothetical picture of the external universe the physicist may or may not, just as he likes, base his synthesis on the principle of a strict dynamic causality or he may adopt only a statistical causality. The important question is how far he gets with the one or the other" (1981, p. 99). This point is made even more generally -- the reciprocal influence of science and the "metaphysical" interpretive framework -- by Agassi (1975, p. 239). As in the first flush of development of many intellectual advances, vastly overblown claims were made for operational definitions. There arose a body of thought called Operationism which claimed that application of operational definitions to all terms in science would resolve all problems of scientific conceptualization. Counterattacking, Fritz Machlup (1978, Part 3) vigorously argued that many theoretical terms in economics, and in the rest of the social sciences, cannot possibly be operationalized--that is, reduced to measurement. Machlup even seemed to doubt whether any terms could properly be operationalized, but when pushed he admitted the necessity for at least some of the terms in any theory and discipline to be related to observed reality with some device akin to an operational definition. Let us not be sidetracked into that controversy, however. Our aim is a useful concept of causality in social science. 
CAUSALITY AND EXPERIMENTATION Natural scientists often say that an experiment defines causality. Indeed, when experimentation is possible a positive result is a powerful test of causality. If the stimulus is followed by response, and non-stimulus is followed by non-response, as in John Stuart Mill's canons, the stimulus-response relationship is commonly said to be causal. Experiments are replicable, and hence the definition has high reliability. This explains why there is relatively little dispute in the experimental sciences about which relationships to call causal. A single experimental relationship is not, however, a complete indicator of causality. In the famous Hawthorne experiments, for example, variation in light intensity in the work room was followed by variation in the work performance. But it was obvious that it was other factors -- perhaps the attention of the experimenters, or perhaps associated changes in the rate of pay, which are still the subject of controversy -- which caused the increase in work output. It is intuitively clear to experimental scientists that causality is better shown by a series of experiments that vary the parameters of the original experiment. One can therefore state the criteria of causality in experimental situations as follows: (1) Keeping all other conditions the same, vary the stimulus and observe the response. (2) If the variation in stimulus is followed by variation in the response, yielding a statistically significant relationship that is also strong enough to be of some importance, vary the conditions and repeat the experiment. (3) If the original relationship continues to appear even under different parametric conditions, call the relationship 'causal.' 
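The three steps above can be sketched in code. The sketch below is illustrative only: the function `experiment` is a stand-in for a hypothetical stimulus-response process (not any real experiment), and the effect threshold of 1.0 is an invented placeholder for the judgment of importance that, as noted next, cannot be automated away.

```python
import random

def experiment(stimulus, condition):
    # Hypothetical data-generating process: the response tracks the
    # stimulus under every parametric condition, plus a little noise.
    return 2.0 * stimulus + condition + random.gauss(0, 0.1)

def call_it_causal(conditions, trials=50):
    """Criteria (1)-(3): vary the stimulus, check that the response
    varies with it, then repeat under different parametric conditions."""
    for condition in conditions:  # (2)/(3): vary the conditions and repeat
        off = [experiment(0.0, condition) for _ in range(trials)]
        on = [experiment(1.0, condition) for _ in range(trials)]
        effect = sum(on) / trials - sum(off) / trials  # (1): vary the stimulus
        if effect < 1.0:  # a judgment call: is the relationship strong enough?
            return False
    return True  # the relationship persisted under varied conditions: "causal"

random.seed(0)
verdict = call_it_causal([0.0, 5.0, -3.0])  # three different parametric settings
```

Notice that the sketch actually runs the (simulated) experiment under each set of conditions rather than reasoning about a hypothetical one; that the experiment is carried out, not merely imagined, is the first of the two points stressed just below.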
Two important points should be noted about these criteria of causality when experimentation is possible: (1) It is the actual operation of experimenting that defines the term; _the experiment must actually be carried out_; the experiment is important as an act, and not as a model, in this context. In this definition, an actual experiment cannot be replaced by a hypothetical experiment. (2) Whether or not a relationship will be called "causal" is not an automatic and perfectly objective process; rather, it requires _judgment_ based on unspecifiable contextual knowledge, e.g., judgment about whether the _appropriate_ conditions have been varied, whether _enough_ conditions have been changed, and whether the relationship is important enough or strong enough. CAUSALITY IN NON-EXPERIMENTAL CONTEXTS Now we move on to the much harder task, a set of criteria of causality for observed relationships that are not subject to experiment. One suggestion has been what Wold called "the fictitious experiment," equivalent to this test: Judge whether the observed situation has the properties of a controlled experiment. If you so judge, call the observed relationship 'causal.' But this definition differs from the experimental heuristic definition in that it does not include the crucial operative phrase, i.e., "carry out the experiment...." Furthermore, it is clear that this definition has low reliability; that is, there is much room for disagreement among scientists about whether or not an observed situation does indeed have the properties of an experiment. One frequent suggestion has been to deny the label "causal" to any non-experimental observed relationship, to say "correlation does not prove causation." But there are several drawbacks to this suggestion: (1) The term "causal" is frequently used in scientific descriptions of non-experimental relationships, and we therefore need to discern its meaning when it is used. 
(2) The "pure" scientist may be able to withhold the appellation (though it may well be a useful word in his/her vocabulary), but the decision-maker (or the scientist _qua_ adviser for "decision-makers") cannot duck the issue. The 1964 Surgeon General's Committee on Smoking and Health knew that if the Committee did not use the word "cause," many people would not decide to quit smoking who otherwise would. And the progress of legislation might also depend upon whether they wrote "causal." Therefore they chose to use the word. (3) Our intuition--perhaps the final source of validation in these matters--tells us that there is an important difference among various observational relationships, a difference that corresponds to our usual sense of the word "causal." For example, there is a difference between the statement that when one clock's hour hand reaches 12 another clock strikes the hour, and the statement that when you remove the plug from the socket the electric clock ceases to run. To repeat an economic example given earlier, we sense a difference between the observed association between prices on the Dutch stock exchange and the number of houses built in the U.S., and the observed association between mortgage interest rates in the U.S. and the number of houses built in the U.S. Similarly, in sociology there seems to be a difference between a statement that certain phases in the moon precede or accompany a rise in the murder rate, and the statement that a rise in the temperature precedes or accompanies a rise in the murder rate. Here is the working definition that I propose for the term "cause-and-effect relationship" in non-experimental situations: 1. _Strength of Correlation_. The relationship is a correlation strong enough to be interesting and/or useful. 
For example, one is not likely to say that wearing glasses "causes" (or "is a cause of") auto accidents if the observed correlation is .07, even if the sample is large enough to make the correlation statistically significant. (A correlation is measured by a number between -1.0 and +1.0, with zero indicating no correlation. In almost every discipline except perhaps education, a correlation of .07 is usually considered to be of no importance at all.) In other words, unimportant relationships are not likely to be labeled "causal." Of course this criterion by itself is not enough; that is the grain of truth in the expression "correlation does not prove causation." But nothing else "proves" causation, either; that is the larger truth. 2. _Fewness of Side Conditions_. The relationship in question must not require too many "if's," "and's," and "but's." That is, the "side conditions" must be sufficiently few, and sufficiently observable, so that the relationship will apply under a wide enough range of conditions to be considered useful or interesting. For example, one might say that an increase in income "causes" an increase in the birth rate if this relationship were observed everywhere. But if the relationship is only found to hold true in developed countries, among educated persons, among the higher-income groups, among those who can be assumed to know about contraception, then one is less likely to say the relationship is causal--even if the correlation is extremely high once the specified conditions have been met. 3. _Non-Spuriousness_. For a relationship to be called "causal" there should be good reason to believe that even if the "control" variable is not the "real" cause (and it never is), some "more real" variables will change consistently with changes in the control variables. 
(Between two variables, v may be said to be the "more real" cause, and w a "spurious" cause, if v and w require the same side conditions except that v does not require a side condition on w.) This third criterion (non-spuriousness) is of particular importance to policymakers. The difference between it and the previous criterion concerning side-conditions is that a plenitude of very restrictive side-conditions may take the relationship out of the class of causal relationships even though the effects of the side-conditions are known. But the criterion of non-spuriousness concerns variables that are as yet _unknown_ and unevaluated, but which have a _possible_ ability to upset the observed transformation. Examples of "spurious" relationships and hidden-third-factor causation are commonplace. For a single illustration here, toy sales rise in December. One runs no danger in saying that December "causes" an increase in toy sales even though it is "really" Christmas that causes the increase, because Christmas and December almost always accompany each other. One's belief that the relationship is not spurious is increased if _many_ likely third-factor variables have been investigated and none reduces the original relationship. This is a further demonstration that the test of whether an association should be called "causal" cannot be a logical test; there is no way that one can express in symbolic logic the fact that "many" other variables have been tried and have not changed the relationship in question. The more tightly a relationship is bound up with (that is, deduced from, compatible with, and logically connected into) a general framework of theory, the stronger is the relationship's claim to being called causal. 
For an economic example, the positive relationship of the interest rate to business investment, and the relationship of profits to investment, are more likely to be called "causal" than is the relationship of liquid assets to investment. This is because the first two statements can be deduced from neo-classical price theory whereas the third statement cannot. This element in scientific thinking and in the explication of the concept of causality is perhaps the biggest difference between Hume's thinking and contemporary thinking. Hume focused on each relationship all by itself, and all by itself there is no more that one can say about a relationship except that there is "constant conjunction". But the presence or absence of other statements of relationship that are either connected to the relationship in question by commonsensical logic, or even more strongly by an integrated body of theory, is very important in deciding whether it is sensible to think that the statement in question should be considered as only a predictive relationship, or whether one should go further and call it "causal." More will be said below about the role of theory.

This does not exhaust the list of possible criteria of causality. For example, on pragmatic grounds Guy Orcutt demands that a relationship be controllable for policy purposes. And Travis Hirschi and Hanan Selvin studied various false criteria. But my informal survey of the use of the term "causal" in the social-scientific literature suggests that the definition proposed above captures more of the essence of the concept than any other definition I can construct. Others may disagree, and I hope that they will construct better definitions--but heuristic definitions. This heuristic definition is a checklist test against which one can compare a given relationship. If the relationship seems to meet most of the checklist criteria reasonably well, then you probably will (and ought to) call the relationship "causal"; if not, not.
This test is not automatic or perfectly objective, of course.2 Whether the relationship meets any one of the criteria, or enough of the criteria, is a subjective judgment, and demands knowledge of the substantive context of the problem. This is why no logical definition of the causal concept can ever capture its essence. Furthermore, the judgment about whether or not the relationship passes the test of the checklist also depends upon the discipline and area of knowledge of the judge and of the material he is working with. For example, as noted above, the requirement that the relationship be compatible with deductive theory is much more important in economics than in sociology.

The reader may remark on the absence from the definition of a time-direction concept. The reasons are twofold: First and most important, time-dating cannot help determine whether there is any causal relationship between the two variables. Additionally, time-dating is itself a difficult and uncertain operation; often one cannot say which event preceded which, especially when human intentions and expectations of future events influence a person to take an action to affect another event; the effect event then precedes the causal events by way of the expectations. Of course one could argue a forward-moving chain of events, but this easily becomes ambiguous.

Whether or not this criteria set is a good definition must be considered from several points of view. First of all, it ought to fit common scientific usage. The definition given above actually evolved from an inductive study of statements of economic relationships, and it can best be understood in that concrete context. A second test of this definition is that it fit the reader's intuition. One's intuition is closely related to one's experience with usage, of course, but the intuition has some life of its own, too.
In other words, I hope that the reader agrees that the definition offered here really stands for what the concept means to the reader. A third test concerns the reliability of the definition. Clearly it is much less reliable than almost any other heuristic definition in science; the need for contextual knowledge in judging causality assures that there will be many more cases about which judges disagree than for most other definitions. But the better question is whether this definition is more reliable than other methods of classifying situations into "causal" and "non-causal." If this definition is a helpful improvement, then it may lead to others that are even better.

One might argue that all of the criteria proposed above lack good definitions themselves, and hence listing them does not improve the situation at all. Perhaps so. But often one can make a better judgment when one breaks up the overall judgment into parts; e.g., one can usually make a better judgment about the height of a skyscraper if one estimates the number of floors, and then multiplies by a typical height per floor, than if one has to guess the height of the building directly. Similarly, if one at least examines a correlation coefficient, then self-consciously thinks about the relationship to the body of theory, and so on, one may arrive at a better judgment of causality than if one makes a judgment directly.

Let's take stock of the argument so far. Property definitions of causality are a dead end. Definitions referring to logical properties have failed, and must always fail. What is needed is a set of criteria of causality.
This is the criteria set I have proposed: A statement shall be called causal (a) if the relationship correlates highly enough to be useful and/or interesting; (b) if it does not require so many side-condition statements as to gut its generality and importance; (c) if enough possible "third factor" variables have been tried to give some assurance that the relationship is not spurious; and (d) if the relationship is deductively connected into a larger body of theory, or (less satisfactorily) is supported by a set of auxiliary propositions that "explain" the "mechanism" by which the relationship works. This definition is a checklist of criteria. Whether a given relationship meets the criteria sufficiently to be called "causal" is not automatic or perfectly objective, but rather requires judgment and substantive knowledge of the entire context.

After working out this analysis, I read the study by Hirschi and Selvin of the use of the concept of causality in juvenile delinquency research, where the issue is particularly troubling. Hanan Selvin is a close friend of mine, and in fact was the series editor of a text in which I wrote about my treatment of causality; I probably never would have come across his work otherwise. This happenstance suggests to me that there must also have been other similar treatments over the years that I have not run across. And a few years ago I came across a similarly-spirited discussion of the concept in the context of cigarette smoking. In connection with the famous Surgeon General's Report of 1964, the following criteria for a causal relationship were stated: consistency, strength, specificity, temporal relationship, and coherence<1>. Then I found a very similar definition in an epidemiology text, stated without fanfare in a practical fashion (Mausner and Bahn, 1974). Epidemiology shares with many social-science situations the characteristic that experiments cannot be undertaken with human subjects.
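As an aside on criterion (a), the earlier point that statistical significance and practical importance come apart can be put numerically. The sketch below uses the standard t-statistic for a sample Pearson correlation, t = r * sqrt((n - 2)/(1 - r^2)); the sample sizes are hypothetical, chosen only to show that a correlation of .07--of no importance in almost any discipline--nevertheless clears the conventional 5-percent cutoff once the sample is large.

```python
import math

def t_statistic(r, n):
    """t-statistic for testing whether a sample Pearson
    correlation r, from n observations, differs from zero."""
    return r * math.sqrt((n - 2) / (1 - r * r))

r = 0.07                  # "of no importance at all" in most disciplines
for n in (100, 10_000):   # hypothetical sample sizes
    t = t_statistic(r, n)
    # 1.96 is the usual two-tailed 5-percent cutoff for large samples
    print(f"n={n}: t={t:.2f}, significant={t > 1.96}")
```

With a hundred observations the correlation is nowhere near significant; with ten thousand it is overwhelmingly so. Yet nothing about the larger sample makes glasses-wearing any more of a "cause" of accidents; only the importance judgment of criterion (a) bears on the label.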
But these discussions apparently have never entered the literature of philosophy. One reason may be that the writers did not seem to realize that their analyses had something fundamental to contribute to philosophy, but rather assumed that they were working out a tool for the benefit of working scientists in their own field. Another reason may be that they did not frame their analyses in terms of a general concept such as the operational definition, as I did. THE ROLE OF THEORY The analysis that I wrote in the mid-1960's, as given above, still seems to me to be quite satisfactory, at least for economics. (It needs adaptation for the special needs of other disciplines.) But in the past few years, I have given some additional thought to related aspects of the subject, especially to the place of theory in the decision to label a relationship causal, and the basis for believing that prediction can do better than chance. These new ideas follow. Let's consider the issue of theory first. Hume begins and ends his analysis with the idea of observing the frequency with which two events are conjoined. But I have asserted above that the existence of a theoretical connection between the events is one of the predisposing conditions for us to call a connection "causal". Therefore it behooves us to consider what is meant by a theoretical connection, and why its existence in a particular situation makes it more appropriate to label a relationship "causal." One likely reason for absence of this consideration in Hume's thinking is that in his time there was no branch of science, except perhaps physics, that possessed an integrated body of theory. Economics lacked such until Adam Smith came along to weld together the various fragmentary observations that already existed; William Letwin, in his excellent book on the origins of economics as a discipline (1965), has persuasively argued that this was Smith's greatest achievement. 
Certainly there was at that time no philosophy of science that analysed the importance of an integrated theoretical framework, as has been done for us by recent writers. Indeed, none of the social sciences other than economics yet has a well-developed body of deductive theory, and hence this criterion of causality is not weighed as heavily in those social sciences. Rather, the other social sciences seem to substitute a weaker and more general criterion, that is, whether the statement of the relationship is accompanied by other statements that seem to "explain" the "mechanism" by which the relationship operates. Consider, for example, the relationship between the phases of the moon and the suicide rate. The reason sociologists do not call it "causal" is because there are no auxiliary propositions that sensibly "explain" the relationship and describe an operative mechanism. On the other hand, the relationship between broken homes and juvenile delinquency is often referred to as causal, because a large body of psychological theory serves to explain why a child raised without one or another parent, or in the presence of parental strife, is not likely to "adjust" readily. Depending on who is reading him, it might be argued that Kant sensibly brought theory back into the discussion of causality. But if he did so, it was in such a confusing fashion that progress was not made, but rather the opposite. (Later we shall see, however, how he made a more positive contribution to the general subject.) As noted above, Letwin makes much of the idea that Adam Smith's great contribution was the welding together of a great many existing economic propositions into an interacting system relating those propositions to each other logically. It is that logical structure that made economics into a mature discipline rather than a grab-bag of observations, Letwin says. And it is that structure as a whole that constitutes a theory. 
A single proposition does not constitute a theory, he says; only an integrated structure of such propositions does. The practical importance of such a structure is that one properly has much more confidence in the validity of a proposition that is part of such a larger interacting structure than in the validity of a similar proposition that stands alone. A proposition that is part of such a structure calls forth more confidence because it rests not only upon the evidence available to support that proposition alone, but also upon all the evidence that supports the other propositions in the theoretical system. In the same way, we feel more secure in calling a connection "causal" if, in addition to the empirical evidence bearing directly upon that connection, we can also call forth a theoretical explanation deduced from a set of propositions that interact with it logically. Hume uses as an example one billiard ball striking another. He says that it is only because we have experience that a moving ball colliding with a ball at rest sets the latter in motion that we predict that the same will happen next time there is such a collision. But if we possess an integrated body of physical propositions about moving objects, force, mass, and the like, we feel more confident in predicting such a phenomenon than if we did not have such theory available. Furthermore, we often find ourselves using logical deduction in bringing theory to bear in such a case, though Hume leaves the impression that logical deduction from observations has absolutely no role in affixing the label "causal". But it is important to note that, at the bottom of the theoretical structure from which we make our deductions, there necessarily lie empirical observations. The theory does not exist purely in the mental realm, but necessarily has some links to observation, or else it has no validity as a theory.
The theory serves as a device for taking advantage of a wider set of observations in making predictions about a particular connection, such as that between a pair of billiard balls. Perhaps it will help to explicate the matter if we notice that some judgments are necessary before we can make any prediction. For example, a prediction about the behavior of a pair of billiard balls requires that we classify these billiard balls, and the force that starts one of them in motion, as similar to previous billiard-ball events. And a prediction about another pair of more-or-less round objects based on experience with billiard balls requires more delicate judgment as to whether the similarity is sufficiently great so that our experience with billiard balls is relevant. A body of theory is a systematic way of taking advantage of experience with not-so-similar phenomena so as to increase the body of our experience that is relevant to a particular phenomenon. Hume never seems to bring to bear upon a particular phenomenon such as a billiard-ball collision any experience with similar-but-different phenomena, that is, with a theoretical structure. This is one reason why his discussion of causality stops only with constant conjunction and prediction. And this is why he cannot distinguish between the sorts of connections that we call "causal" and those that we consider only predictive. Einstein may be interpreted as saying much the same thing: Our present rough way of applying the causal principle is quite superficial. We are like a child who judges a poem by the rhyme and knows nothing of the rhythmic pattern. Or we are like a juvenile learner at the piano, just relating one note to that which immediately precedes or follows. To an extent this may be very well when one is dealing with very simple and primitive compositions; but it will not do for the interpretation of a Bach Fugue.
Quantum physics has presented us with very complex processes and to meet them we must further enlarge and refine our concept of causality. (in Planck, 1981, pp. 203-4). Perhaps a non-philosopher may be permitted to suggest that philosophers would do better in their discussions of science if they did not work with simple propositions in isolation such as "The book is on the table", but would instead deal with issues embedded in (say) economic theory, as for example, the causal status of money in analysing business cycles. THE BASIS FOR PREDICTION ITSELF Still, what about the fundamental basis for the whole kettle of fish, the efficacy of prediction? Why should any prediction be any better than random chance? Sitting in the social-sciences teachers' lounge at Hebrew University in Jerusalem in 1968, I heard economist-statistician Yoel Haitovsky say, "God wrote a set of equations, and it is our task to find out what the equations are." He was expressing a view of science which became dominant in the 19th century. As Kant put it, the "Author of the world" caused there to be "the glorious order, beauty, and providential care everywhere displayed in nature." (1787/1965, p.31). This surely is still the common view today. I replied to Haitovsky roughly as follows, "God wrote no equations. Rather, God (though let's not get hung up on that word) gave the formless mass a kick and deformed it so that it is no longer totally random and without order. We try out various equations to see which ones come closest to fitting our needs." The view that it is scientists who invent neat models to imitate some aspects of messy reality has come into vogue because of Einstein and Bohr. Einstein was fond of saying that physical theories are the products of human imagination. On this matter, Kant pointed in the right direction, Einstein notes (1954, p.22). 
Kant wrote that "reason" -- by which he meant something like what we call theory or logic or both -- must "show the way with principles of judgment based upon fixed laws, constraining nature to give answer to questions of reason's own determining" (1787/1965, p. 20). He says that the scientist should not approach nature "in the character of a pupil who listens to everything that the teacher chooses to say, but of an appointed judge who compels the witnesses to answer questions which he has himself formulated" (p. 20). And he goes on to say that we can know of phenomena "only what we ourselves put into them" (p. 23). Admittedly it is unclear whether Kant was talking of experience-based theories as we know them, or purely tautological mathematical propositions. (He claims to be talking of the latter, but he understood the former quite well, and alluded to such theory though excluding it from his "a priori" category.) In any case, Kant surely did emphasize that our scientific vision of a phenomenon is, and should be, a product of our ideas about how best to think about the phenomenon, as well as the data that we collect about the phenomenon, just as Einstein and Bohr later insisted was the heart of their scientific approaches. This subject was on my mind when I overheard Haitovsky's remark because I had recently been working on the relationship of income to family size, and I had once more seen how it made no sense to talk about a "true" or "underlying" relationship, and how it is impossible to ever "purify" a relationship of extraneous forces in such fashion as to reach a "true" model. Rather, one must always make a choice of which model to work with, and the choice must be made with reference to the purposes of the investigation; different purposes should lead to different views of a scientific relationship, unattractive as this seems because of its apparent lack of "objectivity".
It also lacks attractive power, I think, because its appeal is not to a particular theory being out-and-out correct but rather to its being approximately correct, or being "convenient", in Bertrand Russell's term (1945, p. 832). This is not palatable to people who view themselves as searching for eternal truth3 rather than searching for workable ideas that will help us get on with additional learning about the world and with dealing with our human problems more successfully. But perhaps the imprimatur of Einstein and Bohr will have some persuasive power; they are tough names to be contemptuous of, even if they are not names that I can actually conjure with. We can illustrate this principle with the shape of the earth. Would you say that the earth is round? I have exasperated my children by saying that I mostly regard the earth as flat, as is also appropriate for carpet-layers. Geophysicists consider it an oblate spheroid. Airline navigators plotting a route from Europe to the U. S. properly regard the earth as round, but pilots flying from Denver to San Francisco better view it as jagged or they won't view it very long. Taken altogether, the earth "is" not anything, and to say that it "is" anything falls into the snare of the word "is" mentioned earlier. The earth equals only the earth, and any descriptive adjective at best grasps one aspect of the earth. In similar fashion, Benoit Mandelbrot, creator of the mathematics of fractals, discussed the coastline of Great Britain: its length depends upon whether you measure it as the crow flies, or as a beachcomber would walk it, or in some other fashion. This view of the world as deformed ex-chaos also provides a comfortable basis for prediction. Here is an example of its fruitfulness.
For some time now, following Harold Barnett (Barnett and Morse, 1963), I have written about how our supplies of energy, land, and other natural resources have been increasing rather than decreasing, contrary to all common sense and Malthusian diminishing returns. (This is in connection with my central interest, the economics of population.) The central evidence for this general proposition is that the prices of natural resources relative to our most precious commodity--the hours of our lives--and also even relative to consumer goods, have been decreasing rather than increasing. This has led to the prediction that this trend, which has been in operation over the entire span of history for which we can muster any data -- and that goes back almost 4000 years for copper -- will continue into the future indefinitely.

Unsurprisingly, not everyone agrees with this prediction. And naturally enough, some persons have inquired about the method which is the basis of this forecast. Simplifying dangerously, a trend may be expected to continue unless there is a persuasive theoretical reason to think that it will not. And in the case of resource availability, such a persuasive counter-theory does not exist; in fact, there is a plausible theory in support: an impending shortage mobilizes people to develop solutions that leave us better off than if the shortage problem had not arisen. Furthermore, if statistical evidence is overwhelming, one may choose to believe that a trend will continue even despite persuasive theory to the contrary, implying that the theory needs rethinking. The underlying philosophical basis for this general proposition about trend continuation is the same as the basis for any prediction, to wit, there is likely to be a positive correlation between contiguous observations, in consonance with the idea that the world as we know it is less random than complete chaos. There is a certain amount of pulling oneself up by one's bootstraps in this way of thinking.
It says that we may predict the future on the basis of past experience because the world is non-random, and we believe that the world is non-random because we have been able to predict successfully in the past. But if we look for it, we can find that Hume supported his assumption that prediction is possible with a similar observation: "[I]f all the scenes of nature were continually shifted in such a manner, that no two events bore any resemblance to each other, but every object was entirely new, without any similitude to whatever had been seen before, we should never, in that case, have attained the least idea of necessity, or of a connexion among these objects." (1949, p. 104). (Notice that though Hume says that we can never learn the nature of the physical connection among events, he does not deny that there is a connection). And he makes a similar observation specifically with respect to social science. "[W]ere there no uniformity in human actions, and were every experiment, which we could form of this kind, irregular and anomalous, it were impossible to collect any general observations concerning mankind; and no experience, however accurately digested by reflection, would ever serve to any purpose" (p. 106). (But the idea of non-chaos due to a single deformation is a structural vision that is not identical with the success of prediction; it is itself a hypothesis about the physical nature of the world and about a historical event. To the extent that this idea is sound, it buttresses the naked observation that prediction based on experience does better than random chance. And therefore the intellectual operation is not just a bootstrap operation, not just tautology and restatement. In fact, this view fits with recent speculations by astronomers. Such physical corroboration, however, is not crucial for present purposes though it may make one feel more comfortable with the hypothesis.)
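The trend-continuation rule stated earlier--a trend may be expected to continue absent a persuasive counter-theory--can be reduced to a toy calculation. The price figures below are hypothetical, not the actual copper series; the point is only the mechanics of fitting a least-squares line to the logarithm of a deflated price and extending it forward.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical deflated resource prices, declining over time
years  = [1900, 1920, 1940, 1960, 1980]
prices = [1.00, 0.85, 0.72, 0.61, 0.52]

# Fit the trend in log-price, then extrapolate one step beyond the data
slope, intercept = fit_line(years, [math.log(p) for p in prices])
forecast_2000 = math.exp(slope * 2000 + intercept)
print(f"forecast for 2000: {forecast_2000:.2f}")
```

The fitted slope is negative, so the extrapolated price continues the decline. Nothing in the arithmetic justifies the extrapolation by itself; the justification is the philosophical one argued above, that contiguous observations of a non-random world tend to be positively correlated.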
Now another sidetrip in the intellectual odyssey that leads back to greater appreciation of Hume, and also implicit approval from Hume of this view of the basis of prediction. In the past couple of years I have been reading much of Friedrich Hayek. This was in part stimulated by my receiving a letter from him saying, "This is the first time I have ever written a fan letter to a professional colleague." (March 27, 1981) I was naturally flattered almost out of my wits to receive this letter, not just because Hayek is a Nobel prize winner but because he is a great thinker rather than just a great economist, arguably the greatest social scientist of this century. The central idea in Hayek's thinking, and of the Austrian School of which he is the greatest representative since Carl Menger founded the school in the second half of the 19th century, is the spontaneously developing social order, the sum of the decisions and activities of a myriad of independent individuals acting for their own purposes but inevitably serving the purposes of others. A market is the most vivid illustration, but spontaneous order has been the principle underlying the development of all civilization. This mode of social organization conflicts head-on with the vision of a centrally planned and centrally controlled organization where decisions are made on the basis of the best available knowledge of all the individuals' desires and capacities, in an attempt to optimize the welfare of all. The conception of spontaneous order is at the heart of Adam Smith's invisible hand. But it goes back further than Smith, to at least Bernard Mandeville and his Fable of the Bees. In Hayek's judgment, Hume is the most profound expositor of this vision.
Part-and-parcel of this view is that social customs embody much knowledge which supports the functioning and survival of society and the development of civilization, and reason alone is insufficient for anyone to come to a clear understanding of the development and effects of such social customs. Here we have a great irony: Hume, as clear-minded and powerfully rational as any human being who ever lived and wrote, emphasizing the limits of reason and the necessity of trusting in traditions that our reason is insufficiently powerful for us to fully understand4. Hayek's discussion of Hume led me to again pick up Hume. Rereading the section on causality, I found that Hume makes an argument for the basis of prediction not unlike the view which I offer above. As to what may be said, that the operations of nature are independent of our thought and reasoning, I allow it, and accordingly have observed that objects bear to each other the relations of contiguity and succession; that like objects may be observed in several instances to have like relations; and that all this is independent of and antecedent to the operations of the understanding. But if we go any further and ascribe a power or necessary connection to these objects, this is what we can never observe in them, but must draw the idea of it from what we feel internally in contemplating them. And this I carry so far that I am ready to convert my present reasoning into an instance of it by a subtlety which it will not be difficult to comprehend. When any object is presented to us, it immediately conveys to the mind a lively idea of that object which is usually found to attend it, and this determination of the mind forms the necessary connection of these objects.
But when we change the point of view from the objects to the perceptions, in that case the impression is to be considered as the cause and the lively idea as the effect, and their necessary connection is that new determination which we feel to pass from the idea of the one to that of the other. The uniting principle among our internal perceptions is as unintelligible as that among external objects, and is not known to us any other way than by experience. Now the nature and effects of experience have been already sufficiently examined and explained. It never gives us any insight into the internal structure or operating principle of objects, but only accustoms the mind to pass from one to another. (A Treatise of Human Nature, p. 115, edition unknown.)

*****

I hope that the informal manner wherein I have discussed these fundamental questions which have stirred up the best minds throughout the ages does not keep you from taking the discussion seriously. I also hope that you are convinced that these questions--as is often the case--are partly of our own making, because it is we who choose how to frame the questions. And because they are partly of our own making, an understanding of how we think about them can help us resolve their mysteries. Hume, above all, should be willing to assent to this view, because as he saw it, his central interest was human nature and the operation of the human understanding. Concerning my suggestions about amending the state of affairs he left: If he were to come back for a few days, he would quickly familiarize himself with all the relevant advances in thinking since his day. And I would hope that he would then accept the additions to his thinking proposed here.

REFERENCES

Agassi, Joseph, Science in Flux (Boston: D. Reidel, 1975).

Barnett, Harold, and Chandler Morse, Scarcity and Growth: The Economics of Natural Resource Availability (Baltimore: Johns Hopkins UP, 1963).
Blalock, Hubert M., Jr., Causal Inferences in Non-experimental Research (Chapel Hill: U. of N. Carolina Press, 1964).

Bridgman, Percy W., The Logic of Modern Physics (New York: Macmillan, 1927).

Brown, R. R., Explanation in Social Science (Chicago: Aldine, 1963).

Bunge, Mario, Causality (Cambridge: Harvard U. Press, 1959).

Burtt, Edwin A. (ed.), The English Philosophers From Bacon to Mill (New York: Modern Library, 1939).

Einstein, Albert, Relativity (New York: Crown Press, 1916/1952).

Einstein, Albert, Ideas and Opinions (New York: Bonanza Books, 1954).

Ellis, Albert, and Robert A. Harper, A New Guide to Rational Living (Los Angeles: Wilshire, 1961; 1975).

Glymour, Clark, Richard Scheines, Peter Spirtes, and Kevin Kelly, with foreword by Herbert A. Simon, Discovering Causal Structure: Artificial Intelligence, Philosophy of Science, and Statistical Modeling (San Diego: Academic Press, 1988). The reference is to an advertisement for this book.

Green, Leon, "The Causal Relation Issue in Negligence Law," Michigan Law Review, March, 1962, pp. 543-76.

Hart, Herbert L. A., and A. M. Honore, Causation in the Law (Oxford: Clarendon, 1959).

Hirschi, Travis, and Hanan C. Selvin, Delinquency Research (New York: Free Press, 1967).

Hume, David, An Enquiry Concerning Human Understanding (LaSalle, Ill.: Open Court, 1949).

Kant, Immanuel, Critique of Pure Reason (New York: St. Martin's, 1781/1965).

Letwin, William, The Origins of Scientific Economics (Garden City, N.Y.: Doubleday, 1965).

Machlup, Fritz, Methodology of Economics and Other Social Sciences (New York: Academic Press, 1978).

Mausner, Judith S., and Anita K. Bahn, Epidemiology -- An Introductory Text (Philadelphia: W. B. Saunders, 1974).

Orcutt, Guy H., (a) "Toward Partial Re-direction of Econometrics," August, 1952, pp. 211-13, and (b) "Actions, Consequences and Causal Relations," Review of Economics and Statistics, November, 1952, pp. 305-13.
Pearl, Judea, "Embracing Causality in Default Reasoning," Artificial Intelligence, Vol. 35, 1988, pp. 259-271.

Planck, Max, Where Is Science Going? (Woodbridge, Conn.: Ox Bow Press, 1981).

Russell, Bertrand, A History of Western Philosophy (New York: Simon and Schuster, 1945).

Simon, Herbert A., Models of Man (New York: Wiley, 1957).

Simon, Julian L., "The Concept of Causality in Economics," Kyklos, Vol. 23 (1970), Fasc. 2, pp. 226-254.

Waldrop, M. Mitchell, "Causality, Structure, and Common Sense," Science, September, 1987, pp. 1297-1299.

Wold, Herman O. A., "The Approach of Model Building," Model Building in the Human Sciences (Monaco: Centre International d'Etude des Problemes Humains, 1966) (a).

Wold, Herman O. A., "On the Definition and Meaning of Causal Concepts," Model Building in the Human Sciences (Monaco: Centre International d'Etude des Problemes Humains, 1966) (b).

FOOTNOTES

I am grateful to Joseph Agassi, Steven Louis Goldman, and David Kelley for useful suggestions about this paper.

1 Albert Ellis and Robert A. Harper changed to E-prime when they revised their excellent self-help book, A New Guide to Rational Living (1961; 1975), and they assure us that it clarified their thinking greatly. My knowledge of E-prime comes from the introduction to their book.

2 I do not suggest that mechanical systems for deciding whether or not a relationship should be called "causal" are impossible or impractical. With a sufficient number of specifications, carefully made, there is no reason why a computer program could not effectively sort relationships into "causal" and "non-causal." For some discussion of this issue, see Pearl (1988).

3 Einstein observed that "Man has an intense desire for assured knowledge" (1954, p. 22).

4 Einstein's admiration for Hume is impressive and charming.
"If one reads Hume's books, one is amazed that many and sometimes even highly esteemed philosophers after him have been able to write so much obscure stuff and even find grateful readers for it. Hume has permanently influenced the development of the best of philosophers who came after him." (1954, p. 21)

**ENDNOTES**

<1>: I found these in an excerpt from a news briefing by the Surgeon General, in the midst of the video series Against All Odds.