21 September 2016

Why does science work? It's institutions

A typical cartoon history of philosophy of science goes like this: (1) Carnap and the positivists had a theory of how hypotheses are verified; (2) Popper showed that they can only be falsified; (3) then Quine and Kuhn came along and showed that, actually, there can be no crucial experiments falsifying a theory because theories are complex combinations of propositions – so we have a messier process in which a paradigm can eventually be overturned if it proves unsuccessful in coping with a growing number of facts; (4) as a side-note to Kuhn, Feyerabend showed that there’s no such thing as a single scientific method, but you can usually ignore him as he was an anti-science crazy person.

This supposed intellectual progression of philosophers’ understanding of science is incorrect in important ways, partly because Carnap was in many respects better than Popper. A Bayesian version of the cartoon history goes like this. First, you can safely ignore Popper because he didn’t understand probability theory and is, overall, largely a distraction. There’s nothing wrong with saying that a theory is “verified” if by that you simply mean that it is the theory most consistent with the available information. (Popper built a whole theory of “propensities” in the attempt to argue against Bayesian subjective probabilities. By contrast, Carnap, who was a Bayesian, tried to convince his fellow positivists to abandon Richard von Mises’s radical nonsense about objective probabilities. He failed to persuade them, but we can say that he has now been vindicated by Edwin Jaynes and the Bayesian revolution.) Furthermore, Quine and Kuhn can be seen as expanding on Carnap’s critique of his fellow positivists from “Empiricism, Semantics, and Ontology”. The positivists were at the time arguing that one should not use concepts that are not empirically measurable (e.g. Jordan wrote an entire quantum mechanics textbook purely in terms of measurables, purging all talk of “wave functions” etc.). Carnap argued that this was a ridiculously strong requirement, and that it was perfectly ok for science to use “conceptual frameworks” which do not involve direct empirical measurements – if a framework is not useful, it will eventually be replaced:

To decree dogmatic prohibitions of certain linguistic forms [conceptual frameworks] instead of testing them by their success or failure in practical use, is worse than futile; it is positively harmful because it may obstruct scientific progress. The history of science shows examples of such prohibitions based on prejudices deriving from religious, mythological, metaphysical, or other irrational sources, which slowed up the developments for shorter or longer periods of time. Let us learn from the lessons of history. Let us grant to those who work in any special field of investigation the freedom to use any form of expression which seems useful to them; the work in the field will sooner or later lead to the elimination of those forms which have no useful function. Let us be cautious in making assertions and critical in examining them, but tolerant in permitting linguistic forms. [emphasis in the original]

If this sounds very Kuhnian to you, it's because it is. If this reminds you of Elinor Ostrom’s distinction between frameworks, theories, and models, congratulations! Quine further strengthened Carnap’s critique by showing that there are no such things as purely formal propositions – anything that’s meaningful has some (however tenuous) connection with facts. In other words, there was no point in Carnap’s futile attempt to maintain the distinction between the purely formal and the purely empirical (and, worse, to try to build a theory of meaning on it). This is important because it also means that all "empirical facts" are unavoidably interpreted through some theoretical lens. There's no way to escape theory. We always have only combinations of theory and facts. Kuhn can be seen as simply providing a stylized historical account of how Carnap’s frameworks (what he called “paradigms”) did indeed change from time to time. He’s important because he brought history and facts to a philosophical fight, which, of course, no positivist could object to.
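As an aside, the Bayesian sense in which a theory can be “verified” is easy to make concrete: among competing theories, the “verified” one is simply the one with the highest posterior probability given the evidence so far. A minimal sketch, with made-up priors and likelihoods:

```python
# Toy Bayesian model comparison (all numbers hypothetical).
# "Verification" here just means: highest posterior given the evidence.
priors = {"T1": 0.5, "T2": 0.5}
likelihoods = {"T1": 0.8, "T2": 0.2}   # P(evidence | theory)

# Total probability of the evidence, then Bayes' rule for each theory.
evidence = sum(priors[t] * likelihoods[t] for t in priors)
posteriors = {t: priors[t] * likelihoods[t] / evidence for t in priors}

best = max(posteriors, key=posteriors.get)
print(best, round(posteriors[best], 2))  # T1 0.8
```

Nothing metaphysical is going on: “verified” is a comparative, revisable judgment, which is exactly why new evidence can always dethrone the current best theory.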

Now Feyerabend becomes a much more important figure, because he showed that scientists have never been, and are still nowhere close to, the idealized rational Bayesians that the Carnap-Kuhn picture of science assumes. Kuhn’s history is still overly beautified and Whiggish. The latest version of the Bayesian picture of science can be found in Jaynes’s book Probability Theory: The Logic of Science, and a summary of it is in his paper on “Search Theory” (see esp. the conclusion). The problem with this picture is that it assumes that paradigms/frameworks change on a purely rational basis: facts accumulate, and failing frameworks get replaced. Feyerabend showed this wasn’t exactly so. You can see him as a sort of behavioral economist analyzing how scientists actually behave. This is why he is often misinterpreted as being “anti-science”. But far from being anti-science, he is explicit that he takes it as given that scientific progress has in fact occurred, and he wants to explain how that was possible. His whole concern is that philosophers of science will inadvertently ruin it (echoing Carnap’s concern and tolerance cited above). He says that his account works regardless of how you define “progress”.

Feyerabend’s basic puzzle is this: (1) Let’s agree that “scientific progress” has occurred, even if we disagree about the exact details of how to define this progress. (2) Here’s evidence that scientists are not rational Bayesians and, hence, scientific progress didn’t happen simply as an inherently rational process (as assumed by Carnap or Kuhn or Jaynes). (3) If (1) and (2) are correct, how does scientific progress actually happen? What’s the mechanism that drives this progress, if individual rationality is at best only part of the answer? His answer is that we must search for an institutional explanation. This means that Michael Polanyi and Gordon Tullock have more important things to say about why science works than the regular philosophers of science.

I think we are now much further along in understanding the institutions that make science work. There's now a fairly large literature on the economics of science looking at scientific entrepreneurship and how prestige works. Here’s my paper, building on Smolin, arguing that science can be expected to work (despite individual scientists' failures) as long as its polycentric organization is preserved. Here’s Boettke, Coyne and Leeson’s paper on how important things can get lost when switching from one paradigm to the next.

19 June 2016

Some questions about Robin Hanson's *Age of Em*

Here's a long review of the book and Robin Hanson's replies to various reviews.

It seems to me that a world in which brain emulations are possible would probably look quite different from what Hanson imagines.

1. One of Hanson's key premises is that only very few people will be emulated, namely those that are best suited to be productive and obedient workers. But why would the technology be restricted to emulating only a few people? Being emulated sounds like one possible path to immortality, so I suspect many would want to be emulated and be willing to pay for it. Also, especially given that you could have many copies of you, it could be easy to get a loan to be emulated (which would be repaid by the work done by ems later on). So, income restrictions shouldn't be a massive stumbling block. This in itself would push the em scenario in a very different direction from what Hanson imagines.

Corollary 1: Would identity theft be a problem? Once a person has been emulated, they could be further illegitimately copied at low cost, and the pirated copies used as slaves. Current difficulties in enforcing copyright law suggest this might be hard to prevent.

Corollary 2: The em scenario looks somewhat similar to David Brin's Kiln People. What did Brin get wrong? Why wouldn't the em scenario look closer to Kiln People than to The Age of Em?

2. Neglected topic: Why not colonize the galaxy with ems? Currently it's virtually impossible for actual humans to colonize even Mars. With ems it would be possible to colonize the entire galaxy in about 1 million years (which is long by human history standards, but very short by astronomical time frames).

(a) What would a society of ems spanning the galaxy look like? (There's that fun Krugman paper for a start.)

(b) This connects the discussion about ems with Fermi's paradox and Hanson's points about the great filter. Why haven't aliens (who presumably emerged millions if not billions of years before us) already emulated their brains and colonized the galaxy?
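The one-million-year figure is easy to sanity-check with rough numbers (the galaxy diameter and probe speed below are my assumptions, not Hanson's):

```python
# Back-of-the-envelope check (assumed numbers): the Milky Way is roughly
# 100,000 light-years across; suppose em-carrying probes travel at ~10%
# of the speed of light. Crossing time is then just distance / speed.
galaxy_diameter_ly = 100_000   # light-years, rough figure
probe_speed_c = 0.10           # fraction of light speed, assumed

crossing_time_years = galaxy_diameter_ly / probe_speed_c
print(f"{crossing_time_years:,.0f} years")  # 1,000,000 years
```

Even much slower probes only push the figure into the tens of millions of years, which is still negligible on astronomical time scales.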

03 October 2015

What's wrong with Searle's Chinese Room argument?

Searle has described his Chinese Room Argument (CRA) on many occasions. Here is a typical example:
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese. (Searle, 1999, ‘The Chinese Room’, in Wilson, R.A. and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences, Cambridge: MIT Press)
This argument is what we could call a nice intellectual magic trick. Searle diverts the spectator’s attention from what’s important – the place where the “magic” really takes place, namely the “book of instructions for manipulating the symbols” – and points it to the unimportant but attention-grabbing actor brought to the forefront, namely the “native English speaker who knows no Chinese”.

The trick evaporates once you start asking questions about the “book”. How does it really work? Suppose for instance that the “book” is stored inside the head of a real native Chinese speaker who is conveniently hiding under the table. Then we obviously know why the room passes the test – there’s a guy there who understands Chinese (not the person who hands out the answers though). 

But now suppose the “book” is stored in a computer; let's say that the program Searle mentioned is a real computer program. According to the hypothesis, the room passes the test; the English speaker still doesn’t know Chinese (but as we've seen above, that's irrelevant for deciding whether someone in the room actually understands Chinese). I would say that this implies the computer program genuinely understands Chinese, given that it successfully replaces the Chinese guy under the table. Of course Searle wouldn’t want to grant us that, but his “argument” doesn’t work anymore because he would have to assume the very thing he wants to demonstrate – the fact that the computer program doesn't “really” understand Chinese.
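To see how little the person contributes, here's a toy "room" in which the rulebook is just a lookup table (the Chinese phrases and replies are hypothetical stand-ins for a full program). Swapping the dict for a genuinely competent program changes nothing about the English speaker's role, which is exactly why his lack of understanding proves nothing:

```python
# A trivially small "book of instructions": the person in the room only
# performs lookups; whatever competence exists lives entirely in the book.
rulebook = {
    "你好吗？": "我很好。",       # "How are you?" -> "I'm fine."
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def room(symbols: str) -> str:
    # The English speaker mechanically matches symbols, understanding nothing.
    return rulebook.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # 我很好。
```

The interesting question is never about the person doing the lookups; it is about what it would take for the rulebook to actually pass the Turing Test.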

Useful rule: every time someone offers you a thought experiment, remember this definition: a thought experiment is a method of generating the maximum amount of confusion with the fewest words. The vast majority of thought experiments are language tricks based on some form of misdirection, relying on misfirings of our intuition.

07 July 2014

What is Austrian Economics about?

"First they ignore you, then they laugh at you, then they fight you, then you win."

After Noah Smith's and Mike Munger's ridiculous attacks on Austrian economics, can we say that we have finally entered phase two? In case you are a neoclassical economist interested in moving to phase three, here's what you have to deal with before anything else.

Take this quote from Knight and Johnson (2007) summarizing a basic point made by David Kreps in one of the most widely used microeconomics textbooks (emphasis added by me):
What economists offer is an analysis of the existence and qualities of equilibrium outcomes that does not explain how markets actually generate such nice equilibria. In other words this demonstration "doesn't provide... any sense of how markets operate. There is no model here of who sets prices, or what gets exchanged for what, when, and where" (Kreps 1990, 195-8). Instead, economists offer "a reduced form solution" that "describes what we imagine will be the outcome of some underlying and unmodeled process" (Kreps 1990, 187). Standard microeconomic analysis, in other words, offers little understanding of precisely how "market/exchange mechanisms" actually operate (Kreps 1990, 190). Thus the claim that economic agents will find their way to equilibrium in a decentralized process is a "rather heroic assertion" and, by implication, it "seems natural to think that we could increase (or decrease) our faith in the concept of Walrasian equilibrium if we had some sense of how markets really do operate." Progress on this task can be made "only if we are more specific about the institutional mechanisms involved" in market interactions (Kreps 1990, 187, 190).

In fact, Knight and Johnson, and Kreps himself, are somewhat uninformed. There are two major attempts to explain price adjustment: one provided by Kenneth Arrow, and the other by Israel Kirzner (his theory of entrepreneurship, which rests at the very foundation of Austrian economics). Here's a graph of their relative impact on the economics profession (numbers from Google Scholar):

Bottom line: (1) Austrian economics currently offers the most widely accepted model of how real markets actually operate. Austrian economics is alternatively referred to as "the theory of the market process" precisely because its main concern is with addressing Kreps' realism concern above, and deriving the consequences from it. (2) The importance of having such a realistic model is widely recognized by prominent authors like Kreps or Knight and Johnson. 
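For the reader who has never seen the market-process story stated mechanically, here is a deliberate caricature (all numbers invented): entrepreneurs notice that the same good trades at different prices, their arbitrage moves both prices, and the discrepancy – the profit opportunity – erodes. Equilibrium is the end state of this process, not an assumption:

```python
# Toy Kirznerian market process (hypothetical numbers): buy low, sell high;
# buying pressure raises the low price, selling pressure lowers the high one,
# until the remaining gap is too small to be worth exploiting.
low, high = 4.0, 10.0
rounds = 0
while high - low > 0.01:         # a profit opportunity still exists
    low += 0.25 * (high - low)   # arbitrage demand pushes the low price up
    high -= 0.25 * (high - low)  # arbitrage supply pushes the high price down
    rounds += 1

print(rounds, round(low, 2), round(high, 2))  # prices have nearly converged
```

This is obviously not Kirzner's theory, which is about discovery under genuine uncertainty rather than mechanical convergence; the sketch only shows the kind of "who does what, when" question that the reduced-form Walrasian story leaves unanswered.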

How about macroeconomics and the Austrian Business Cycle theory? After all, this is the focal point for scorn and ridicule. 
  1. First, it is absurd to identify Austrian economics solely with this theory, especially since prominent Austrians (e.g. Israel Kirzner or Richard Wagner) are actually critical of it for not being Austrian enough (ABCT is a highly aggregated theory with many idealized assumptions). (Here's an example.) And the idea that ABCT is supposed to be a priori correct and in no need of empirical testing is also absurd. The theory is hard to test because it requires disaggregated data on the capital structure. Here's a list of empirical papers trying to test it (e.g. by looking at the structure of the labor market).
  2. Second, I think that, properly understood, ABCT is an application to macroeconomics of the theory of entrepreneurship -- it is a theory of how entrepreneurial activity gets distorted. So, you cannot really criticize ABCT without getting into the deeper problem of explaining price adjustments. And to say it again: the neoclassical theory of price adjustment is not very well developed (Arrow's approach is interesting, and not necessarily contradictory to Kirzner's, but, in order to do the math, he is forced to make many highly unrealistic assumptions).
Addendum: For more details about various aspects of Austrian economics see my course at GMU.

19 March 2014

Why are costs subjective? And what is the subjective value of money?

A student asks me:

A classmate and I are having some trouble with the question in homework 2 about costs being subjective.  
One answer focuses on individual preferences as the source of costs:
"Costs are subjective because value is determined by the importance that individual players place on goods and services for the achievement of their desired ends. Everyone has different tastes and preferences and this can be reflected when determining how much they are willing to pay for a particular good or service." 
The other focuses on opportunity costs: 
"Costs are subjective because they are derived from marginal opportunity cost. The value of the next best alternative is a subjective value because it depends on the person who is considering a situations. Value is determined by the importance that individual players place on goods and services for the achievement of their desired ends. "

My answer:

The second answer is much better. Costs are subjective because they are opportunity costs - the cost is the value of the next best thing, and this value is subjective. Cost is not just how much you are paying for something, but it is what else you could have done with that money. (After all, the money itself is only valuable because of the things you can buy with it.)

The first answer is incorrect for the following reason:

Even if two people value the item in the same way, the cost of acquiring it may still differ, because the value of the next best thing may differ. The first answer implies that if two people value something in the same way (say, they derive the same subjective pleasure from it), then they would be willing to pay the same amount for it. But this is incorrect. They may still differ in regard to how much they are willing to pay for it because the opportunity cost may differ.

By contrast, if the opportunity cost is the same, they are going to be willing to pay the same amount for the good, even if the subjective value of the good differs! For example, to simplify, suppose we have movie vouchers (we can only buy movies with these things). If I really like movie X, and you only kind of like it, but we both equally dislike the other movies playing, we are going to be willing to pay the same voucher amount to see X. The value of the voucher is given by our subjective valuation of the other movies, not by our subjective valuation of the movie we are buying.
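The voucher logic can be spelled out in a few lines (toy valuations, mine, not the student's): since vouchers can only buy movie tickets, the cost of spending a voucher on X equals the value of the best *alternative* movie, regardless of how much each of us likes X itself:

```python
# Toy model of opportunity cost (hypothetical valuations).
def voucher_cost(other_movie_values):
    """Cost of spending a voucher on X = value of its next best use."""
    return max(other_movie_values)

# I value movie X at 10; you value it at only 4. But we both equally
# dislike the alternatives (worth at most 1 to each of us).
my_cost = voucher_cost([1, 0.5])
your_cost = voucher_cost([1, 0.5])

# Same opportunity cost -> we part with the voucher on the same terms,
# despite valuing X very differently.
print(my_cost == your_cost)  # True
```

The asymmetry is the whole point: the valuations of X never enter the cost calculation; only the valuations of the forgone alternatives do.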

11 February 2014

The Guaranteed Basic Income is Trickier than You Think

Not sure what to make of the recent interest in the idea of a Guaranteed Basic Income (GBI). Apparently, if the current spending on the US welfare state (~$1 trillion, according to this Boston Globe article) were replaced by a GBI, everyone would get an annual handout of more than $30,000. To get some perspective, I currently have a relatively decent living in the most expensive US county on half that money. This highlights two things.

First, the staggering waste involved in the current welfare system. If the US government is throwing around more than $30,000 per person, how come there are still poor people in the US? Should I suggest firing everyone currently hired in the welfare state bureaucracy and giving them the GBI instead of their wage? But I'm sure many of them earn a lot more than just $30k, which should give you a glimpse at the political economy difficulties involved in any attempt to replace the current welfare system with a GBI.

Second, this highlights the size of the disincentive effect on work that the GBI would have. This is actually well known, but for some reason the concern for mobility is missing from the current discussions. E.g. Richard Wagner's To Promote the General Welfare (chapter 5, "Public Spending and Income Redistribution") discusses the effects of various GBI (or negative income tax) experiments in terms of lowering mobility. Spoiler alert: the GBI is far worse than the current complicated system of various targeted welfare payments. This shouldn't be surprising. Giving more money with no strings attached has obviously bigger disincentive effects on work than giving less money with some strings attached. I think it's fair to say that most supporters of the GBI have not looked carefully at the mobility side-effects. (Of course, one may also argue for replacing the welfare state with a GBI and drastically cutting spending on social transfers, i.e. being in favor of a much smaller GBI, which would no longer have such large disincentive effects – but that sounds even more politically unrealistic.)

So, the real question GBI supporters should ask is how much income mobility they are willing to give up in order to get a lower level of poverty now. To put it differently, if you are concerned about equal opportunity, growing income inequality and a classless society (which is usually framed in terms of how many people succeed in getting out of their parents' income class), you should be against the GBI, not in favor of it. The only ways to have some sort of safety net that doesn't dramatically affect income mobility are: (a) the current labyrinthine welfare state bureaucracy and waste; (b) a market-based social insurance system with plenty of unavoidable holes in it (occasional fraud, some people left behind, etc.).

There really is no way to have a safety net that is desirable on all margins (such as social mobility, efficiency, and risk aversion). If you want social mobility and are risk averse, but don't really care about waste, people gaming the system and your occasional homeless person, you should favor the current welfare system, and be quite skeptical of the GBI. If you want efficiency and don't care about mobility or growing income inequality, but are risk averse, you should favor the GBI. If your priority is social mobility, you are comfortable with risk, and you enjoy your occasional conversations with homeless people, you should favor the market-based social insurance.

| | Social mobility | Efficient poverty reduction | Very small risk |
|---|---|---|---|
| Market-based insurance | Yes | Yes | No |
| Welfare system | Yes | No | Yes |
| GBI | No | Yes | Yes |

31 December 2013

Atheism as a political movement

Pamela Stubbart has an insightful essay about whether atheism is elitist and why atheists don’t care so much about convincing disadvantaged believers that they are wrong:

to the charge that atheism is somehow “elitist,” I say: “of course. who cares?” The forefronts of knowledge have always been, and in some sense must be, the bastion of those who are privileged along some dimension or other. …  We can agree that it would be “cruel and pointless” to try to talk these people out of their theism. But labeling atheism itself an “intellectual luxury” constitutes a nearsighted attempt to imbue atheism with the connotation that it’s unnecessary and frivolous. Please don’t forget that in other contexts, the non-religious do important work towards curtailing religiously-motivated harms (female genital mutilation, anyone? allowing children with easily cured medical conditions to die?) At those times, it is keeping quiet about unjustified religious claims (“thinking differently” from atheists!) which would be cruel.

On Facebook, Kevin Vallier was ticked off by another comment from the essay:

Kevin Vallier: "If you have gone to college and you do knowledge work for a living, the pursuit of truth ought to be of significant (though probably never overriding) importance to you. Theists of this class are aggravating, because they seem to be either willfully ignorant of science and philosophy, or capable of withstanding huge amounts of cognitive dissonance." Seriously? I hope I'm not one of these unfortunate souls.

In my experience, there is indeed a third possibility apart from willful ignorance and cognitive dissonance. The highly educated theist redefines what religion and faith mean in a way that is more compatible with what s/he knows about science and philosophy. From an atheist’s perspective, the problem with this approach is that each enlightened theist seems to redefine religion and faith in their own idiosyncratic fashion. Hence, there is no way to provide general arguments that would address all enlightened theists’ concerns. And this allows each enlightened theist to feel smug about their own version of theism, observing that atheists only “focus on straw men”.

A second issue is the following. Many of these enlightened theists would accept all the atheistic positions that have public policy impact. For example, they would agree with the separation of church and state, that children should be vaccinated, that evolution should be taught in schools, etc. As such, there is no real political benefit in trying to convince the enlightened theist of their error. Suppose they agree with all the atheist public policy concerns, but nonetheless think that there’s life after death – for example because they think life would be unbearable and meaningless unless life after death exists. (I have actually met a person who believes exactly this, but, again, many enlightened theists would think the belief in life after death is a silly straw man, and would feel smug about it if you brought it up.) So, what would be the point in trying to convince this person of their error?! Perhaps if they were a neuroscientist it would be important, for purely scientific reasons, but not for many other professions. Moreover, perhaps they would indeed feel worse, and find life meaningless, if there were no life after death – why would I try to harm them?

This brings me to what atheism really is. The fact that most atheists would not be “going to the ghetto, shaking people awake, and calling them ‘stupid’ to their faces”, nor would they be terribly interested in convincing enlightened theists of their idiosyncratic errors, reveals something about what atheism really is: namely, that it is a political movement. Indeed, the great bulk of the energy atheists actually spend goes into countering relatively uninformed, but politically relevant, views. Hence, in the same way in which Pascal Boyer looks at what religious people actually do, rather than at their verbal rationalizations, to understand religion, we should also define atheism based on what atheists actually do.

This political movement has two big concerns:
  1. Making sure that all public policy and judicial acts are science-based or at least scientifically informed and not explicitly counter to what we know to be true.
  2. Getting more funding for scientific research, while maintaining the political independence of science.

(Libertarian atheists are skeptical that 2 is realistic, i.e. that you can maintain political independence while getting state funding, but they are a small and inconsequential force within the atheist movement. Although a big chunk of libertarians are atheists, only a small chunk of atheists are libertarians.)

As such, atheism, as a political movement that targets mainly the relatively ignorant, can be seen as the political branch of the scientific community. If you look at it this way, you will better understand the pissed reactions of various atheists towards Richard Dawkins – it’s not so much that they disagree with him on substantive grounds, such as the inherent incompatibility of religion and science; they disagree with him on strategic grounds. Adopting such a confrontational approach may be a bad political strategy as far as the two goals above are concerned. To succeed at those goals, it might be better for the bulk of the population to think that science and religion are compatible, even if they are not.

Frankly, I don’t know what my own position is about this, but each time I see some atheist pissed at Richard Dawkins, this is what I think is really going on – a call for politically savvy hypocrisy.

23 June 2013

The Great Stagnation and over-regulation

The Great Stagnation is the slowdown in economic growth and median wages in the United States since the mid-1970s. From Tyler Cowen's book:

The proximate explanation is a slowdown in productivity:

But why did this slowdown in productivity occur?

John W. Dawson and John J. Seater argue that a big part of the explanation is regulation. From their paper, "Federal Regulation and Aggregate Economic Growth" [PDF] in the Journal of Economic Growth:

Their explanation, including why there is a lag in the slowdown:

Figure 9 shows the effect of regulation on TFP’s trend.  The effect is negative throughout the sample period, but increases in regulation actually decrease this negative effect on TFP’s trend through about 1980.  This suggests that, to the extent that regulation contributed to the productivity slowdown, it must have occurred through cyclical rather than trend effects.  The combined trend and cyclical effects of regulation can be obtained by constructing the counterfactual series which shows the level of TFP had regulation remained at its 1949 level.  Figure 10 shows the change in the ratio of actual TFP to counterfactual TFP.  As with output, there is an initial point artifact.  Ignoring that, we see that the change in the ratio is about zero in 1965 and then becomes increasingly negative until about 1980.  After that, the change rises sharply and is positive in some years through the late 1990s, after which it falls again. Throughout the 1965-1980 period the change is negative, indicating a persistent negative effect of regulation on TFP during the infamous productivity slowdown.

Their counterfactual:

(Reason's coverage of the paper.)

21 May 2013

*The Rule of the Clan: What an Ancient Form of Social Organization Reveals About the Future of Individual Freedom*

This is one of the best political philosophy books that I've read in a long time. While authors like Nozick and Buchanan accept the normative ideal of a purely contractual society, but reject anarchy out of practical considerations, Mark Weiner accepts the practical viability of anarchy, but contests the value of its normative ideal. His argument is basically this:

1. A purely contractual society rests on individual property rights being defined. Without this prerequisite, there cannot be voluntary exchanges and contracts.
2. Property rights have to be enforced in order to be meaningful. (That's why, e.g., natural rights theory is nonsense.)
3. The enforcement of property rights by a state (legitimate monopoly of violence) makes it possible for everyone to have the same fundamental rights. This is possible, but not necessary, as the example of dictatorial states shows. The degree of legitimacy of a state, from a liberal point of view, is given by the extent to which everyone has the same fundamental rights, i.e. the state enforces the rights in a non-discriminatory fashion. (We can quibble about the exact list of "fundamental rights", but his argument does not depend on making this list explicit or on adopting a particular list.)
4. The enforcement of property rights by private clubs leads to a variety of rights, depending on which club you are a member of. In a world without government, fundamental rights (i.e. rights which everyone should have) no longer exist.
5. Equality of fundamental rights is an essential liberal value that should not be abandoned.

Therefore, a society of private clubs (i.e. anarchy), although possible, goes against the liberal individualist ideal (i.e. the ideal that everyone should have the same fundamental rights) and should be rejected.

A lot of the book is about arguing that anarchy is not only possible, but has actually been the rule for most of human history, and is still the prevailing constitutional arrangement in much of the world. The traditional form of anarchy is not a society of private clubs, but a society of kin-based clans. His argument is that a move towards a society of private clubs, via the weakening of state power in liberal democracies, leads us back to the past in the sense that the idea of fundamental (universal and equal) rights is eroded.

His discussions of historical examples are well worth reading; for example, he disagrees with David Friedman that medieval Iceland was an example of anarchy. His discussion of Islam as an attempt to replace a clan-based society with a rule-of-law society is very interesting – he offers a comparative analysis of developments in Britain and the Arab world at around the same time – but he ultimately doesn't have an explanation for why Islam has been so much less successful than European states at eliminating the clan-based organization of society. He also doesn't talk at all about technology, but even so his historical analyses are still very good (I don't think he tries to get into the "ultimate causes" debate; he keeps it fairly descriptive).

27 January 2013

Acemoglu on unbundling the impact of specific institutions from the effect of broad economic systems

From "Constitutions, Politics, and Economics: A Review Essay on Persson and Tabellini's The Economic Effect of Constitutions":

"I now turn to a discussion of the problems involved in using these variables as instruments for specific institutions. This has become a common practice in the newly flourishing empirical political economy literature. … I will argue that there are serious problems in this procedure because of inherent complementarities between different types of institutions. … Here the distinction between a broad cluster of institutions and specific institutions is crucial. In AJR (2001), we defined a broad cluster of institutions as a combination of economic, political, social and legal institutions that are mutually reinforcing. …

As an example of the difficulty of this type of strategy to estimate the effect of specific institutions, consider the quasi-natural experiment due to international politics, the division of Korea into North and South. … Suppose now that we try to use this source of variation to understand the effect of some specific institutional feature, say financial development, on economic growth. It should be clear that this strategy will lead to a highly biased estimate. It is true that South Korea is financially more developed than North Korea. It is also true that the reason for this is the division in 1946 (had it not been for the division, the North and the South would probably have similar levels of financial development). But this does not make the division a good experiment to understand the effect of financial development, because this division also caused many other institutional changes. It is a good laboratory for the study of broad institutions, but not for a study of the specific institutions. …

the objective here is to estimate the effect of specific institutions. … Can we then use Z_i as an instrument for a specific institution, say S_1 ? The answer is no. If we were to do this, all of the S_k ’s would also load onto S_1. … we can get arbitrarily biased estimates of the effect of S_1 on the outcome of interest. This discussion also makes it clear that the problem of instrumenting for a specific institution, such as S_1, is in many ways similar to the omitted variable bias, since other specific institutions that make up the cluster of institutions, G, are omitted from the regression. Even if we include proxies for some of them, unless we can correctly estimate the causal effects of all of those, IV regressions will fail to estimate the causal effect of the specific institution of interest, S_1, consistently. …

a source of variation in the broad cluster of institutions is not sufficient to separately estimate the effects of specific institutional features. In other words, we can find clever instruments, from history, sometimes from geography, or international politics, that affect the whole social organization of a society, but this is only the first step. It does not enable us to conclude that one specific institution is more important than another.

However, what we want to know in practice is not only that “institutions” (defined as a broad cluster, and therefore almost necessarily as a black box) matter, but which specific dimensions of institutions matter for which outcomes. It is only the latter type of knowledge that will enable better theories of institutions to be developed and practical policy recommendations to emerge from this new area. Consequently, the issue of “unbundling institutions,” that is, understanding the role of specific components of the broad bundle, is of first order importance."
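Acemoglu's point about instrumenting a specific institution can be made concrete with a toy simulation. The sketch below is entirely hypothetical (it is not from the paper): an instrument Z shifts a broad cluster G, which in turn moves two specific institutions S1 and S2, both of which affect the outcome. Using Z as an instrument for S1 alone makes S2's effect load onto S1, exactly the bias he describes.

```python
# Hypothetical illustration of Acemoglu's argument: instrumenting one
# specific institution with a source of variation in the broad cluster.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
Z = rng.normal(size=n)                         # instrument (a "historical shock")
G = Z + rng.normal(size=n)                     # broad institutional cluster
S1 = G + rng.normal(size=n)                    # specific institution of interest
S2 = G + rng.normal(size=n)                    # omitted specific institution
y = 1.0 * S1 + 1.0 * S2 + rng.normal(size=n)   # true causal effect of S1 is 1.0

# IV (2SLS with one instrument) by hand: beta_IV = cov(Z, y) / cov(Z, S1)
beta_iv = np.cov(Z, y)[0, 1] / np.cov(Z, S1)[0, 1]
print(round(beta_iv, 2))  # close to 2.0, not the true 1.0: S2 loads onto S1
```

Because Z is correlated with S2 through G, and S2 is omitted from the regression, the IV estimate roughly doubles the true effect of S1, matching the omitted-variable-bias analogy in the quote.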

14 January 2013


Recommended by them:

Dig! (2004) 
The Devil and Daniel Johnston (2005) 
Hoop Dreams (1994) 
Capturing the Friedmans (2003) 
Quince Tree of the Sun (1992) 
Paradise Lost: The Child Murders at Robin Hood Hills (1996) 
Undergångens arkitektur (1989) 
Mr. Death: The Rise and Fall of Fred A. Leuchter, Jr. (1999) 
The Times of Harvey Milk (1984) 
Aileen: Life and Death of a Serial Killer (2003) 
Harlan County U.S.A. (1976) 
One Day in September (1999) 
Brother's Keeper (1992) 
Woodstock (1970) 
The Thin Blue Line (1988) 
American Movie (1999) 
Hearts of Darkness: A Filmmaker's Apocalypse (1991) 
Crumb (1994) 
Once in a Lifetime: The Extraordinary Story of the New York Cosmos (2006) 
Vernon, Florida (1981) 
The Game of Their Lives (2002) 
The King of Kong: A Fistful of Quarters (2007) 
Ayn Rand: A Sense of Life (1997) 
Company: Original Cast Album (1970) 

Countdown to Zero (2010 Documentary) 
Stop Making Sense (1984 Documentary) 
Bobby Fischer Against the World (2011 Documentary) 
Tyson (2008 Documentary) 
Hands on a Hard Body: The Documentary (1997 Documentary) 
Dogtown and Z-Boys (2001 Documentary) 
Dancing Outlaw (1991 Documentary) 
Anvil: The Story of Anvil (2008 Documentary) 
Exit Through the Gift Shop (2010 Documentary) 
Deep Water (2006 Documentary) 
The Power of Nightmares: The Rise of the Politics of Fear (2004 Mini-Series) 
Dark Days (2000 Documentary) 
Hearts of Darkness: A Filmmaker's Apocalypse (1991 Documentary) 
Bus 174 (2002 Documentary) 
The Last Waltz (1978 Documentary) 

02 October 2012

Ortega y Gasset - "The Greatest Danger: the State"

from The Revolt of the Masses, chapter 13:

though it is not impossible that the prestige of violence as a cynically established rule has entered on its decline, we shall still continue under that rule, though in another form. I refer to the gravest danger now threatening European civilisation. Like all other dangers that threaten it, this one is born of civilisation itself. More than that, it constitutes one of its glories: it is the State as we know it today. ...

Early capitalism and its industrial organisations, in which the new, rationalised technique triumphs for the first time, had brought about a commencement of increase in society. A new social class appeared, greater in numbers and power than the pre-existing: the middle class. This astute middle class possessed one thing, above and before all: talent, practical talent. It knew how to organise and discipline, how to give continuity and consistency to its efforts. In the midst of it, as in an ocean, the "ship of State" sailed its hazardous course. The ship of State is a metaphor re-invented by the bourgeoisie, which felt itself oceanic, omnipotent, pregnant with storms. ...

But with the Revolution the middle class took possession of public power and applied their undeniable qualities to the State, and in little more than a generation created a powerful State, which brought revolutions to an end. Since 1848, that is to say, since the beginning of the second generation of bourgeois governments, there have been no genuine revolutions in Europe. Not assuredly because there were no motives for them, but because there were no means. Public power was brought to the level of social power. Good-bye for ever to Revolutions! The only thing now possible in Europe is their opposite: the coup d'etat. Everything which in following years tried to look like a revolution was only a coup d'etat in disguise. 

In our days the State has come to be a formidable machine which works in marvellous fashion; of wonderful efficiency by reason of the quantity and precision of its means. Once it is set up in the midst of society, it is enough to touch a button for its enormous levers to start working and exercise their overwhelming power on any portion whatever of the social framework.  

The contemporary State is the easiest seen and best-known product of civilisation. And it is an interesting revelation when one takes note of the attitude that mass-man adopts before it. He sees it, admires it, knows that there it is, safeguarding his existence; but he is not conscious of the fact that it is a human creation invented by certain men and upheld by certain virtues and fundamental qualities which the men of yesterday had and which may vanish into air tomorrow. Furthermore, the mass-man sees in the State an anonymous power, and feeling himself, like it, anonymous, he believes that the State is something of his own. Suppose that in the public life of a country some difficulty, conflict, or problem presents itself, the mass-man will tend to demand that the State intervene immediately and undertake a solution directly with its immense and unassailable resources.  

This is the gravest danger that today threatens civilisation: State intervention; the absorption of all spontaneous social effort by the State, that is to say, of spontaneous historical action, which in the long run sustains, nourishes, and impels human destinies. When the mass suffers any ill-fortune or simply feels some strong appetite, its great temptation is that permanent, sure possibility of obtaining everything- without effort, struggle, doubt, or risk- merely by touching a button and setting the mighty machine in motion. The mass says to itself, "L'Etat, c'est moi," which is a complete mistake. The State is the mass only in the sense in which it can be said of two men that they are identical because neither of them is named John. The contemporary State and the mass coincide only in being anonymous. But the mass-man does in fact believe that he is the State, and he will tend more and more to set its machinery working on whatsoever pretext, to crush beneath it any creative minority which disturbs it- disturbs it in any order of things: in politics, in ideas, in industry.  

The result of this tendency will be fatal. Spontaneous social action will be broken up over and over again by State intervention; no new seed will be able to fructify. Society will have to live for the State, man for the governmental machine. And as, after all, it is only a machine whose existence and maintenance depend on the vital supports around it, the State, after sucking out the very marrow of society, will be left bloodless, a skeleton, dead with that rusty death of machinery, more gruesome than the death of a living organism.  ...

Already in the times of the Antonines (IInd Century), the State overbears society with its anti-vital supremacy. Society begins to be enslaved, to be unable to live except in the service of the State. The whole of life is bureaucratised. What results? The bureaucratisation of life brings about its absolute decay in all orders. Wealth diminishes, births are few. Then the State, in order to attend to its own needs, forces on still more the bureaucratisation of human existence. This bureaucratisation to the second power is the militarisation of society. The State's most urgent need is its apparatus of war, its army. Before all the State is the producer of security (that security, be it remembered, of which the mass-man is born). Hence, above all, an army. ...

Is the paradoxical, tragic process of Statism now realised? Society, that it may live better, creates the State as an instrument. Then the State gets the upper hand and society has to begin to live for the State. But for all that the State is still composed of the members of that society. ... This is what State intervention leads to: the people are converted into fuel to feed the mere machine which is the State. The skeleton eats up the flesh around it. The scaffolding becomes the owner and tenant of the house.   When this is realised, it rather confounds one to hear Mussolini heralding as an astounding discovery just made in Italy, the formula: "All for the State; nothing outside the State; nothing against the State." This alone would suffice to reveal in Fascism a typical movement of mass-men. Mussolini found a State admirably built up- not by him, but precisely by the ideas and the forces he is combating: by liberal democracy. ...

Statism is the higher form taken by violence and direct action when these are set up as standards. Through and by means of the State, the anonymous machine, the masses act for themselves. The nations of Europe have before them a period of great difficulties in their internal life, supremely arduous problems of law, economics, and public order. Can we help feeling that under the rule of the masses the State will endeavour to crush the independence of the individual and the group, and thus definitely spoil the harvest of the future?  

A concrete example of this mechanism is found in one of the most alarming phenomena of the last thirty years: the enormous increase in the police force of all countries. The increase of population has inevitably rendered it necessary. However accustomed we may be to it, the terrible paradox should not escape our minds that the population of a great modern city, in order to move about peaceably and attend to its business, necessarily requires a police force to regulate the circulation. But it is foolishness for the party of "law and order" to imagine that these "forces of public authority" created to preserve order are always going to be content to preserve the order that that party desires. Inevitably they will end by themselves defining and deciding on the order they are going to impose- which, naturally, will be that which suits them best.

It might be well to take advantage of our touching on this matter to observe the different reaction to a public need manifested by different types of society. When, about 1800, the new industry began to create a type of man- the industrial worker- more criminally inclined than traditional types, France hastened to create a numerous police force. Towards 1810 there occurs in England, for the same reasons, an increase in criminality, and the English suddenly realise that they have no police. The Conservatives are in power. What will they do? Will they establish a police force? Nothing of the kind. They prefer to put up with crime, as well as they can. "People are content to let disorder alone, considering it the price they pay for liberty." "In Paris," writes John William Ward, "they have an admirable police force, but they pay dear for its advantages. I prefer to see, every three or four years, half a dozen people getting their throats cut in the Ratcliffe Road, than to have to submit to domiciliary visits, to spying, and to all the machinations of Fouche."   Here we have two opposite ideas of the State. The Englishman demands that the State should have limits set to it. 


19 September 2012

Learning Bayesian Probability Theory

Alex asked me about some introductory references on Bayesian probability theory. Here's my list (to be read in this order):

1) a historic intro:
Jaynes, 1986, "Bayesian Methods: General Background"

2) intro to Bayesian probability (and the connection to Boolean logic):
first two chapters in the book: Jaynes, 2003, Probability Theory: The Logic of Science
[the free alternative: chapters 3-6 of Probability Theory With Applications in Science and Engineering (1973)]

3) intro on maximum entropy:
Jaynes, 1982, "On the Rationale of Maximum-Entropy Methods"
Gull, 1988, "Bayesian Inductive Inference and Maximum Entropy"

4) a general overview of the entire theory (including maximum entropy):
Jaynes, 1988, "How Does the Brain Do Plausible Reasoning?"

5) model selection and parameter estimation:
Bretthorst, 1996, "An introduction to model selection using probability theory as logic"
Bretthorst, 1989, "An Introduction to Parameter Estimation"
[the respective chapters in Jaynes' 2003 book]
[more advanced and more philosophical towards the end: Jaynes, 1985, "Entropy and Search-Theory"]

6) on the connection between Bayes formula and Maximum entropy:
Caticha, 2003, "Relative Entropy and Inductive Inference"
[Caticha & Giffin, 2006, "Updating Probabilities"]

7) Bayesian theory of surprise:
Itti & Baldi, 2005-2006

8) Bayesian networks:
Jensen & Nielsen, 2007, Bayesian Networks and Decision Graphs

05 August 2012

A taxonomy of uncertainty, "WARNING: Physics Envy May Be Hazardous To Your Wealth!"

From a paper by Andrew W. Lo and Mark T. Mueller, "WARNING: Physics Envy May Be Hazardous To Your Wealth!":

Level 1: Complete Certainty

Level 2: Risk without Uncertainty

randomness governed by a known probability distribution for a completely known set of outcomes

Level 3: Fully Reducible Uncertainty

risk with a degree of uncertainty: the probabilities are unknown, but the set of outcomes is still presumed to be fully enumerated and completely known

Level 4: Partially Reducible Uncertainty

situations in which there is a limit to what we can deduce about the underlying phenomena generating the data. Examples include data-generating processes that exhibit: (1) stochastic or time-varying parameters that vary too frequently to be estimated accurately; (2) nonlinearities too complex to be captured by existing models, techniques, and datasets; (3) non-stationarities and non-ergodicities that render useless the Law of Large Numbers, Central Limit Theorem, and other methods of statistical inference and approximation; and (4) the dependence on relevant but unknown and unknowable conditioning information. ...

Under partially reducible uncertainty, we are in a casino that may or may not be honest, and the rules tend to change from time to time without notice. In this situation, classical statistics may not be as useful as a Bayesian perspective, in which probabilities are no longer tied to relative frequencies of repeated trials, but now represent degrees of belief. Using Bayesian methods, we have a framework and lexicon with which partial knowledge, prior information, and learning can be represented more formally. ...
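A minimal sketch of what Bayesian learning looks like when outcomes are known but probabilities are not (closer to Level 3 than Level 4, but it shows the "degrees of belief" machinery the quote refers to). The example is mine, not from the paper: a coin of unknown bias with a conjugate Beta prior, updated after observing data.

```python
# Illustrative sketch: probabilities as degrees of belief, not frequencies.
# The outcome set (heads/tails) is known; the bias of the coin is not.
from fractions import Fraction

def beta_update(alpha, beta, heads, tails):
    """Conjugate update of a Beta(alpha, beta) prior on the coin's bias."""
    return alpha + heads, beta + tails

alpha, beta = 1, 1                         # uniform prior: ignorance of the bias
alpha, beta = beta_update(alpha, beta, heads=7, tails=3)
posterior_mean = Fraction(alpha, alpha + beta)
print(posterior_mean)                      # 2/3: belief has shifted towards heads
```

The posterior mean (8/12 = 2/3) sits between the prior guess (1/2) and the observed frequency (7/10); more data moves it further, which is the sense in which this kind of uncertainty is "fully reducible".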

At this level of uncertainty, modeling philosophies and objectives in economics and finance begin to deviate significantly from those of the physical sciences. Physicists believe in the existence of fundamental laws, either implicitly or explicitly, and this belief is often accompanied by a reductionist philosophy that seeks the fewest and simplest building blocks from which a single theory can be built. Even in physics, this is an over-simplification, as one era’s “fundamental laws” eventually reach the boundaries of their domains of validity, only to be supplanted and encompassed by the next era’s “fundamental laws”. ...

It is difficult to argue that economists should have the same faith in a fundamental and reductionist program for a description of financial markets (although such faith does persist in some, a manifestation of physics envy). Markets are tools developed by humans for accomplishing certain tasks—not immutable laws of Nature—and are therefore subject to all the vicissitudes and frailties of human behavior. While behavioral regularities do exist, and can be captured to some degree by quantitative methods, they do not exhibit the same level of certainty and predictability as physical laws. Accordingly, model-building in the social sciences should be much less informed by mathematical aesthetics, and much more by pragmatism in the face of partially reducible uncertainty. We must resign ourselves to models with stochastic parameters or multiple regimes that may not embody universal truth, but are merely useful, i.e., they summarize some coarse-grained features of highly complex datasets.

While physicists make such compromises routinely, they rarely need to venture down to Level 4, given the predictive power of the vast majority of their models. In this respect, economics may have more in common with biology than physics. As the great mathematician and physicist John von Neumann observed, “If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is”.

Level 5: Irreducible Uncertainty

Irreducible uncertainty is the polite term for a state of total ignorance; ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter. Such uncertainty is beyond the reach of probabilistic reasoning, statistical inference, and any meaningful quantification. This type of uncertainty is the domain of philosophers and religious leaders, who focus on not only the unknown, but the unknowable.

Stated in such stark terms, irreducible uncertainty seems more likely to be the exception rather than the rule. After all, what kinds of phenomena are completely impervious to quantitative analysis, other than the deepest theological conundrums? The usefulness of this concept is precisely in its extremity. By defining a category of uncertainty that cannot be reduced to any quantifiable risk—essentially an admission of intellectual defeat—we force ourselves to stretch our imaginations to their absolute limits before relegating any phenomenon to this level.

Level ∞ : Zen Uncertainty

Attempts to understand uncertainty are mere illusions; there is only suffering.

(HT Steven Strogatz)