October 7, 2015 / schoolthought

A few thoughts on Syria

Tolstoy wrote War and Peace as a retort to the armchair European analysts of his time, who wrote about and discussed Napoleon’s conquests and his failure in the Russian winter of 1812. Their analysis always sounded reasonable, and explained every event by attributing clear decision-making and brilliant analysis to Napoleon and his enemies. Tolstoy’s problem with this was that he saw the intellectuals of his day as simply observing what happened and then crafting a nice story as an explanation. A grand success by the French was due to Napoleon’s sheer brilliance. A failure was due to a tragic mistake, or a stroke of misfortune. The messy complexities of actual reality were smoothed over. Tolstoy saw it differently, and even suggested that Napoleon wasn’t particularly great. He viewed Russia’s success as a fluke. In his view the entire course of history is chaotic, and since we miss so much of what truly happens, we convince ourselves the course of history is set by grand commanders and pivotal moments.

I sometimes wonder if we make this same mistake with Syria. We have our major players: the Assad regime, ISIS, the Free Syrian Army (FSA), al-Nusra (al-Qaeda), the Kurds, Turkey, Iraq, Iran, the US, and now Russia. We then look to each player, ascribe strategic intentions, and use that in our analysis. What is interesting, though, is that there seem to be two separate areas of information on Syria, with only slight overlap. The first is standard journalism; the second comes from non-professionals scanning and interpreting the vast amount of data posted by combatants and civilians on social media. The subreddit /r/syriancivilwar aggregates the most interesting of it, which is usually far more personal and noisy than the stories on NPR or the New York Times: often videos of fighting, discussions, or personal reflections. The more I read and watch, the more confusing Syria becomes. Things rarely become clearer, and it’s obvious they aren’t that clear for people on the ground either.

What is the Free Syrian Army? Is it really a moderate group of soldiers working together against oppression? To me it seems like a term we have assigned to groups of rebels, each fighting to protect their own cities from outside threats. What is the al-Nusra Front, and what is its goal in Syria? The Institute for the Study of War argues that smaller opposition groups might ally themselves with larger groups (like al-Nusra) as larger outside threats approach. Part of the problem is that when we speak about Syria, we assume a semblance of organization and communication that makes belonging to a faction simple and systematic. An important question is what drives Syrian civilians to affiliate themselves with a group in the first place. My impression is that if you are a man, and not in a major city, you need to form a militia to protect your town or village, since without any formal protection you are essentially living in anarchy. These militias are then characterized as ‘opposition’ groups, and somewhere along the way they are clumped together with the FSA, or ignored.

I worry that the analysis and stories that we hear on Syria misunderstand the entire country. While there are concrete issues with Russia involved, in terms of understanding the civil war itself, it is closer to anarchy than anything else. And if that’s the case, I think trying to ‘arm moderates’ or support the right side won’t work, since there is no clear side or organization (although there are wrong sides, namely ISIS and al-Qaeda). I think the only end to the war will come when some person or group gains overwhelming power, and can create an authoritarian government.

October 5, 2015 / schoolthought

Scientific Failures


For us humans, it is important not to fixate on the chaos of cause and effect. If we see someone eat a plant and get sick, we shouldn’t eat it. If someone seems to get sick when they are cold, we should stay warm. This kept our ancestors alive. It is also why, until a couple of centuries ago, Europeans and American settlers thought tomatoes were poisonous, and why people tell you to stay warm so you don’t ‘catch cold.’

It seems likely that humans didn’t evolve to be scientists. We evolved to survive, and our most basic model is a simple iterative cost-benefit analysis. This has resulted in incredible discoveries. Native Americans had used tea from birch trees, which contains vitamin C, to prevent scurvy since pre-history. Making that connection probably took a while, and involved some luck, but it was extraordinary. In Europe, on the other hand, the earliest record of a solution to scurvy was a British explorer recommending orange and lime juice in 1593. Despite this, there were tons of competing theories, most of which didn’t work. In the early 18th century over a hundred thousand men in the British navy died of scurvy, and Navy doctors wouldn’t recommend limes because they did not conform to their theories of disease. Then, finally, in 1747 the British physician James Lind conducted a clinical trial (published in 1753) that more or less settled the issue.

Looking back it seems obvious that they should have solved it sooner, and if they had had a more developed philosophy of science they would have. But there were hilariously challenging confounding factors to work through. Fresh citrus cured scurvy, but juice that had been exposed to copper tubing and light didn’t. Fresh meat contained vitamin C as well, but salted meat did not. Improved nutrition in general prevented scurvy. If you were trying to figure this out you might notice that citrus juice was not helping, but it’s going to be a few centuries before the periodic table of elements is even invented, so you have no conception that all foods are in fact composed of many smaller compounds, including vitamins.

Then you theorize that you need fresh produce, but fresh meat prevents or cures scurvy in your crew, so that argument is no longer convincing. And over thousands of shipping trips, someone might eat one of many different foods containing vitamin C and be cured, and now you have everything they did or ate in the past few days as potential solutions. So everyone starts developing folk theories about how to prevent scurvy. And what is really comical is that the new scientific view that formed following the germ theory of disease suggested that scurvy was caused by bacteria in tainted or old meat. So if you were a scientist invested in the germ theory of disease, you might not be too keen on evidence that seemed to go against your scientific argument.


Science journalism seems to be growing in popularity. On my daily commute I listen to NPR and hear the newest social science research. Papers on popular or controversial issues are quickly distilled and find their way into major publications, such as the New York Times. There are even new platforms, like Vox, which claim to take a scientific and analytical approach to the news. The articles are usually about inequality, gender or race, labor economics, and a whole bunch of sci-fi space junk. They usually play up the authority of the scientists, and take their findings at face value.

I know a lot of smart people who read and share these articles on Facebook, LinkedIn, and even talk about it and mention it at work. Pointing out methodological flaws or telling people you don’t believe them when they talk about interesting research they heard isn’t something you should do if you enjoy having friends.

The abuse of research design and statistical methods is what lets most bullshit research take on a veneer of authority. Fishing for significance is the most common error, as p-values are the bread and butter of modern statistical inference. If our estimate of a parameter is 8.5 with a p-value of 5%, it means that if the true parameter were 0 (the null hypothesis), there would be only a 5% probability of observing an estimate of 8.5 or greater by chance. One major problem with this is explained by Andrew Gelman, who co-wrote a great paper that touches on one of the main issues here: The Difference Between ‘Significant’ and ‘Not Significant’ is not Itself Statistically Significant. The point is basically that a p-value moving from 4.9% to 5.1% isn’t actually a significant movement, even though a 5% p-value is often treated as ‘proof’ of a real effect in the peer-review process.
There are additional statistical issues with significance. For example, the assumption is usually that the hypothesis we are testing our parameter against is ‘no effect’ (i.e. zero). But this depends on the circumstances, and is not always appropriate. Then there are also concerns about the size of the parameter. If you wanted to measure differences in height within the US, and cut the country into two equal halves, the difference in height would be statistically significant. After all, we are dealing with the entire population, so as long as the two averages aren’t exactly equal, they are statistically significant by definition (our standard errors are zero).
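To make the first of those points concrete, here is a toy calculation in R with made-up numbers: one estimate clears the 5% threshold and another doesn’t, even though the difference between them is itself nowhere near significant.

# Toy illustration of Gelman and Stern's point (hypothetical numbers).
# Study A estimates an effect of 25 with standard error 10; study B estimates 10
# with the same standard error. A clears the 5% threshold, B does not.
z_a <- 25 / 10                            # z = 2.5
z_b <- 10 / 10                            # z = 1.0
p_a <- 2 * pnorm(-abs(z_a))               # ~ 0.012, "significant"
p_b <- 2 * pnorm(-abs(z_b))               # ~ 0.32,  "not significant"

# But the difference between the two estimates is 15 with standard error
# sqrt(10^2 + 10^2) ~ 14.1, which is nowhere near significant.
z_diff <- (25 - 10) / sqrt(10^2 + 10^2)   # ~ 1.06
p_diff <- 2 * pnorm(-abs(z_diff))         # ~ 0.29
c(p_a = p_a, p_b = p_b, p_diff = p_diff)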

Both of those issues are the most commonly cited when criticizing modern science, but in my view they derive from a more insidious issue. When fitting a model there are usually thousands of plausible specifications to choose from. It’s easy to test tons of model variations until a p-value sticks out, and then create a great story on why this is the optimal model. For example, there is a new paper out claiming an antidepressant, Paxil, can cause an increased risk of suicide in teenagers. This paper uses the same data that was used in the drug trial that concluded Paxil was safe, but comes to a separate conclusion: that Paxil has side effects that were ignored in the original study. The original paper argued Paxil was safe, and the statistical evidence in the original paper did not suggest it caused an increase in suicide risk. It’s not surprising this is hard to measure, as suicide is very rare, which means you might only have a few cases of patients reporting suicidal thoughts, and probably zero patients who commit suicide.
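A small simulation makes the specification-search problem vivid. This is just a sketch on simulated noise, not a re-analysis of any real study: with 20 candidate predictors and no true effects at all, something usually comes up ‘significant.’

# Simulate pure noise: an outcome with no real relationship to 20 candidate predictors.
set.seed(1)
n <- 100
y <- rnorm(n)
x <- matrix(rnorm(n * 20), ncol = 20)

# Try every single-predictor specification and look at the p-values.
p_values <- apply(x, 2, function(col) summary(lm(y ~ col))$coefficients[2, 4])
min(p_values)          # often below 0.05 even though every true effect is zero
sum(p_values < 0.05)   # roughly one spurious "discovery" per 20 tries, on average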

The data was from 1994 to 1998, and focused on 275 adolescents with major depression that had lasted at least eight weeks. There was a double-blind treatment with paroxetine (Paxil), imipramine, or a placebo. The study consisted of an eight-week randomized controlled trial, followed by a six-month continuation phase. As in most antidepressant trials, the main outcome variable was a clinician-rated scale called the HAM-D, which indexes depression from 0 (none) to 52 (extremely severe). There are many assumptions on how to interpret this, which you can read in the paper if you’re interested.

The criticisms in this paper appear to be somewhat justified. The original paper made a series of choices when recording and reporting the data, each of which would be plausible on its own, but which combined suggest that, whether by luck or design, the data was presented in a way that slightly understates adverse effects. Two points the new paper makes struck me as most compelling. The first was that the original study only reported an adverse effect if it occurred in more than 5% of the sample, but then used very narrow categorizations: anxiousness, nervousness, and agitation could each stay below the threshold at 4% of the sample, even though it could be argued they are different words for the same symptom. The second was that the original authors made access to their data and documentation extremely difficult. For such important research this is unacceptable; data access should be required with publication.

The main and most popular finding in this paper had to do with adolescents being at higher suicide risk than originally thought, so let’s look into this. Using their new methodology, the authors found that five patients dropped out due to suicidality, whereas the original paper had that number as zero. The new methodology also had three patients drop out due to suicidality in the placebo group, which was also originally zero. Based on patient documentation they also noted that there were 11 suicidal patients during the acute and taper phases, compared to 5 suicidal patients in the original study (although the first number includes the taper phase, which the original study didn’t). Throughout the entire study one patient unsuccessfully attempted suicide.

The difference between the two papers can be explained by the garden of forking paths, sometimes also called researcher degrees of freedom: the many different ways a researcher can analyze the same data to reach a desired result. In these two papers, the authors of the original paper would benefit more from supporting the medicine, as they were employed by the drug company. The authors of the second paper would benefit more from finding a severe flaw to support their argument and land a great publication (and to their credit, they admit this in their paper).

Based on the replication’s main analysis, their biggest complaint is that the first paper understates suicidality, along with other minor issues. But this paper doesn’t find the smoking gun they claim. The difference seems to come down to coding. Imagine two psychiatrists who each meet with the same 80 severely depressed patients over eight weeks. At the end one says “I think about five of them were low-risk suicidal” (because, remember, a high-risk suicidal patient would have to be committed). The second says “I disagree, I think 11 of them were suicidal.” They then sit and compare notes, and it turns out they look for slightly different signals and indicators. One of them is really conservative and documents anything that could be perceived as suicidal, and the other takes a pragmatic approach.

The statistical power here, the probability of detecting an effect when there is one, is very low for rare events. If one in a thousand users of Paxil kills themselves because of the drug, this study wouldn’t even have a high chance of detecting that result. Not to mention this study took place about 20 years ago. Since then millions of adolescents have taken Paxil. While that data might be harder to find, and isn’t from a randomized study, it has a sample size of millions. I do not think quibbles over how to classify a few people out of a sub-sample of 80 from 20 years ago should hold that much weight, although I could be wrong, as I haven’t worked in this field.
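A back-of-the-envelope calculation shows what I mean. Assuming, hypothetically, a true drug-caused suicide rate of one in a thousand and a treatment arm of about 92 patients (roughly a third of the 275), most trials of this size would see zero such events, let alone enough of them to distinguish drug from placebo.

# Probability of observing at least one drug-caused suicide, assuming a
# hypothetical true rate of 1 in 1,000.
rate <- 1 / 1000
n_arm <- 92                    # roughly a third of the 275 patients
1 - (1 - rate)^n_arm           # ~ 0.09: over 90% of such trials would see zero events

# Even the full sample of 275 adolescents barely helps.
1 - (1 - rate)^275             # ~ 0.24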

In each case, though, there are many reasonable choices that could nudge the result slightly for or against the drug’s safety. This gets at the reason I was skeptical of both papers’ strong claims of safety or danger: the truth is I don’t think they know to the extent they claim. Both papers are important, as together they give us a reasonable profile of the risks and benefits of Paxil. But when arguing over the margin of an extra few suicidal patients, it’s hard to take the conclusion seriously, as the variance introduced by the research design itself is much bigger than the change in effect.


Karl Popper argues that our reasons for coming up with a hypothesis or question exist outside of a scientific framework and are unimportant, but that once they appear they must be tested rigorously and properly. As a strict philosophy of science this makes sense, since human curiosity is capricious. Unfortunately for the pure philosophy of science, most academic and private-sector researchers have a clear incentive to prove their hypothesis correct. When someone asks a question, there is usually an answer they either want to be true, or one they think is true and want to prove so they can confirm their intuition. There is a famously bad paper on whether women are more likely to wear pink or red when they are fertile. Why did they ask that question? I’m guessing their thought process went something like “Women wear red and pink to embrace their femininity, since society views them as feminine colors. I bet when women are most fertile they subconsciously act more feminine to attract the attention of males. I should explore this!”

There are a few problems immediately. The most obvious is that the researcher clearly wants the answer to support the hypothesis; otherwise there is no fun, quirky research paper that gets published and earns widespread science-journalism acclaim. The second is that it will justify their brilliant intuition, earn them respect, and advance their career. The third is that there are many different ways to measure this hypothesis, both in the original question and in the model specification. The same scientific question could be approached by examining how much skin women show, cleavage, makeup, time spent talking to men, and so on. These would all try to measure the same phenomenon. Once any of them is chosen, there are many different ways to set up the research design, collect the data, and fit a model. It’s easy to support your hypothesis when you have such wide freedom: you only need to find one combination that works and ignore the rest.

None of this is reassuring for the scientific method. There aren’t clear rules on how to set up the right design outside of a randomized experiment. In this instance the question and data do not seem rooted in a robust method. Part of this is also because I view the subtleties of human behavior as usually hard to tease out from the daily noise and complexity of our world.

If all these scientific issues are known (I certainly didn’t come up with them), why do they persist? I think it is because, even though philosophers of science and some statisticians are extremely interested in them, most other academics don’t appreciate the complexity of reality. Trying to reason about chaos we can’t understand is strange, but it is how we compartmentalize everything we cannot or do not know. I recently watched a YouTube video of a 9/11 ‘truth’ conference, created by an organization of engineers. The presenters were mathematicians, engineers, and other PhDs and academics. They created computational simulations of the towers collapsing, presented chemical experiments showing reactions between steel beams and thermite, and generally had a deep and impressive knowledge of structural physics. I know very little about their fields, but I know they are wrong. The world is full of emergent properties on a scale we probably can’t comprehend. Even if they are much better at mathematical models than I am, my conception of omitted variable bias is better, even though all I’m doing is appealing to the complexity of the world. Even brilliant men make this mistake. Alan Turing in 1950 made the following claim:

I assume that the reader is familiar with the idea of extra-sensory perception, and the meaning of the four items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.

Turing was a defining genius in human history who focused on math, computers, and cryptography: inherently logical structures that are fully determined by their base properties. He bought into the poor research design behind the telepathy studies that found statistical significance, and felt there was no choice but to accept telepathy as scientific fact.

Linus Pauling helped found quantum chemistry and molecular biology, and won the Nobel Prize in Chemistry. He later claimed vitamin C could cure cancer based on a reasonable hypothesis, and nothing could change his mind. He was convinced it was the case. If you think he was just crazy in his old age, then you still need to explain why, despite being completely refuted, it’s still common knowledge that vitamin C cures colds (although these days it’s zinc, based on new bad research).


This all ties back to modelling. It becomes easy to let the strange and unpredictable emergent properties and chaos of the world drop out. Since we can’t observe them, and we don’t know how they bias our model, it is difficult to understand how our model of the world is wrong. Randomization can do a great job fixing this, but is usually impossible to implement. By conceptualizing the world through science experiments we have made incredible progress. If we were able to send our knowledge of the scientific method back to 16th-century Britain, but no additional knowledge, they would probably have been able to set up a series of tests on different boats with clever use of controls, and find a solution to scurvy within a year.

I think if we could similarly receive only the knowledge of the scientific method from 1,000 years in the future, we could make a comparable leap in understanding how to set up and learn from research designs on everything from drug research to microeconomics. That is the optimistic view. The pessimistic view is that we already know far more about the proper use of the scientific method than is actually used in academic research, even at the highest levels, since the truth often does not line up with passing a drug trial, being published, or getting tenure.

September 20, 2015 / schoolthought

It’s time to change how we view theoretical models (ft. Anna Karenina).

“But you see we manage our land without such extreme measures,” said he, smiling: “Levin and I and this gentleman.”

He indicated the other landowner.

“Yes, the thing’s done at Mihail Petrovitch’s, but ask him how it’s done. Do you call that a rational system?” said the landowner, obviously rather proud of the word “rational.”

 –Anna Karenina, Leo Tolstoy


Petrovitch is a wealthy landowner in late 19th-century Russia, in the novel Anna Karenina. He’s a side character and exists mostly for one of the main characters, Levin, to argue with while explaining his new ideas on farming and labor. Tolstoy subtly crafts Petrovitch’s arguments to be stale, but seasoned with the right economic terminology. Rationality means something important in their discussions, but is never well defined. In the book, Levin operates a farm that employs hundreds of peasants shortly after serfdom has been abolished. Like other farmers, he isn’t turning a profit. The peasants don’t have an incentive to work hard; it’s a simple principal-agent problem. So Levin crafts an idea to have them share in the profits with him to give them motivation. This was when economic reasoning was beginning to offer solutions in abstract theory, but before much of it had actually been tested. Levin called his system rational farming, and implementing it and teaching it to the Russian economy became his life’s work.


Economists have always prided themselves on being more scientifically rigorous and rational than other academic fields. Yet the field has grown without a rigorous empirical scientific method. Many of the greats, such as Samuelson, Wicksell, and Keynes, were viewed as mostly theoretical. That made sense at the time, and they were far smarter than I am, but after another half century of scientific knowledge, it’s time to realize that economics needs to be grounded completely in empirics. This doesn’t mean we have to start over; it’s possible to reformulate their arguments within an empirical and testable framework. I won’t try to do that in this post, but I want to explain why it’s important. For example, it’s easy to assume an abstract and true model of the world, and then impose what rationality means. But it is challenging to rigorously explain what it means to call someone who disagrees with you irrational.

In economic models it’s uncommon to consider that each individual is using his own, different model of the world. It is more standard to have a single model that defines the action space, and to have players within that world. Even in probability games, the players are still interacting within a shared structure. The prisoners’ dilemma is a famous example. While originally viewed as theoretical, these models are absolutely empirical. The model is built on empirical observations reaching back to before economics was a discipline. The concept of human betrayal is built into the written history of our species. Hobbes modeled our political system as something like a prisoner’s dilemma, except in his model we were all caught in a bad government, with an optimal solution of cooperating.
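As a concrete illustration, the standard two-player game can be written down and solved in a few lines of R. This is a minimal sketch with made-up payoffs; the point is that betrayal is a dominant strategy even though mutual silence leaves both players better off.

# A minimal prisoners' dilemma with made-up payoffs (years in prison, so lower is better).
# Rows are player 1's choice, columns are the other player's choice.
p1 <- matrix(c(1, 10,
               0,  5), nrow = 2, byrow = TRUE,
             dimnames = list(c("stay_silent", "betray"), c("stay_silent", "betray")))

# For each choice the other player could make, find player 1's best response.
best_response <- apply(p1, 2, function(col) rownames(p1)[which.min(col)])
print(best_response)
# Betraying is the best response no matter what the other player does (a dominant
# strategy), even though both staying silent would give a better joint outcome.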

That people are self-interested, and that people want more rather than less, are predictions based on observations and individual conceptions of history. Hobbes was a well-read philosopher, and as he studied history he saw an equilibrium in which even stable dictatorships were preferable to some idealistic democratic dream. This seemed to be a constant across different places and events. That kind of reasoning is the type of prediction a computer can’t handle; it is based on our incomplete data of things we have seen and read throughout our lives, as well as on the way we combine those empirical views with our own personal experiences to generate a prediction.

Throughout these observations, we have seen patterns that let us consistently look for a few key variables that are always present. Within this perspective, an economist is just someone who uses the scientific method when studying people, but who starts with a prior that a few variables regarding self-interest explain most of the variation. The reasoning behind these models distinguished economics as its own discipline. It wasn’t until later that economists went back and formalized these models in terms of rigorous assumptions and math.


The prisoners’ dilemma isn’t a theoretical model that needs to be specifically tested in a controlled environment to see if it’s true or not. In fact, the controlled environment will add experimental noise, since to the participants it will just appear to be a silly game. There are no rules for how ‘serious’ the punishments need to be for the game to be valid, but I think it’s reasonable to hypothesize that they should be far worse than simply gaining or losing $20. The way the model was originally created was by looking for a common structure among historical events and human interactions. And following Ostrom’s work on how these models interact within society, any given laboratory test of the model will probably miss other important parameters unique to the time, location, and circumstance.

What would add to the field is an emphasis on studying what causes those additional considerations and predicting them. Here a laboratory test could be useful, if it were standardized across different countries. Even if it’s only an approximation to a serious game, the differences between sample populations might still be meaningful. More importantly, the view that a formal test using error statistics and a computer is always the optimal tool for empirical analysis is wrong. Computers are currently nowhere near able to pore over disparate historical texts to search for a common structure of human interaction in certain situations.

Economics as a discipline is built on centuries of empirical research showing that simple measures of self-interest can predict people’s actions. But instead of letting sociologists handle the rest (and usually mess it up), economists should start studying the residuals. An empirical agenda would probably involve slightly less effort on extremely complicated mathematical equilibrium refinements (and I’m not just saying that because I’m jealous that I don’t understand them), and more effort estimating how simple measures of self-interest, in the form of behavior in standardized games, vary across regions and cultures. These could be used to help predict policy success.


If we step away from the world of testing models, we can look at how terms like ‘rational’ are casually misused by Nobel Prize winning economists. Krugman is a good example here of how even smart economists often have a tenuous grip on the rigors of economic scientific inference (I also get to be like my academic role model John Cochrane, and bash on Krugman). Or they somehow think the rules don’t apply when not formally making a model for an academic paper, which is ridiculous, since the entire point of scientific inference isn’t just to publish, but to actually meaningfully understand the world.  Krugman’s scientific formalization of the world starts by imposing preferences on others. Of course, since he is not other people, he needs to try to infer their preferences and their own model of how their actions would affect the world, and then imagine how he would have acted given their preferences but using his model of the world. Imagine a conversation between Paul Krugman and a conservative voter:

Krugman:  I was looking at your economic information. It says that you work as a janitor for a local school system, and make $28,000 a year. The Republicans don’t work as hard as the Democrats to fund education. The Democrats also support high-quality health insurance for you, and are in favor of unions that could help you negotiate higher wages.

Voter: I don’t need a hand-out from anyone. I know a janitor doesn’t make much money, but it’s my job and I work for what I earn. I don’t know as much about funding, but the school looks alright. My wife and I work hard to pay for our health insurance, and so far have not had any issues. But we really don’t like the condescending attitude most Democrats have about how we choose to live. We are part of a Christian community, and we think a great society needs to follow the right values.

Krugman: By ‘right values’ do you mean you want to defund Planned Parenthood and force the poor to suffer from high unemployment and low opportunities because the Republicans will shoot down any economic policy that tries to lower inequality?

Voter: We work hard and are proud of our values. It’s not our job to fund welfare programs and abortion clinics. If liberals want those, fine, they can buy them.

We know Krugman views these voters as part of what he calls the “irrational right.” To come to this conclusion he tries to infer what they care about and what they want from their country, then he asks himself “If these are the things I wanted, how would I go about getting them?”, and then he looks at how they actually go about trying to get them. If these two things don’t match, he calls them irrational. It’s not just Krugman; it’s how most economists use the word. He is making the implicit claim that he knows what they want, and knows better than they do what they should do to get it. And he could be right, after all he knows more about economics and policy than most people, but it would be hard to know.

It gets weird when we think about how we would test his hypothesis that they are irrational. We could split the world into two timelines, and simulate one where their policies go through and one where Krugman’s go through. Imagine it is a game show, and Krugman and the voter are each in separate rooms. Then we can have the results of any variable, no matter how intangible or vague, quantified and given to our players. After poring over the results, they both meet. They each say “Hah! Looks like I was right!” Krugman looked at all the policies he thought they would care about. But the voter looked at other things, like the excitement of celebrating an election victory with his community, the feeling of accomplishment he shared with his family, and the belief that the world would be better for his children.

This is an area I’m still trying to work out myself. It’s very hard to be properly scientific when talking and disagreeing with someone else. At the start it’s possible to share information and try to understand the evidence and model being used by whoever you disagree with. But if you still disagree, it is not clear how to resolve the issue scientifically (if that is even possible). From all this it follows that rationality is at its core about scientific inference, and how it is used to infer what people will do given their preferences. And even this is unsatisfactory, as preferences are based on our model of the world, so it’s not accurate to treat them as separate or conditional. At its best it’s a useful term because it lets us embed scientific inference in how we talk, and it still represents an important idea about disagreement over models and conceptions of the world. But at its worst, it is a term associated with over-confidence in our ability to predict how others view the world.


In Anna Karenina, Levin’s rational farming system didn’t work how it was supposed to. He incentivized the peasants, taught them how profit-sharing works, and tried to inspire an excitement at the prospect of mutual riches. But the system was too different. They had no generational or cultural history of entrepreneurship. Their conception of work and life was one that had developed throughout serfdom, and was antagonistic at its very core. They were fundamentally suspicious of land-owners, uneducated, and viewed every elaborate new plan as one that was zero-sum.

He guessed that if they were rational, they would want to participate in his rational-farming system. But his prediction of the peasants’ model was wrong. He didn’t know what they wanted or how they would use their prior knowledge of the world to interpret his economic view of the world.  Unfortunately, Russian agriculture after serfdom wasn’t able to bring an entrepreneurial system to the peasants. By the end of the 19th century it was the worst agricultural system throughout all of Europe. And by the 1930s the centrally planned Soviet model of agriculture resulted in the deaths of around 6 million people.

August 26, 2015 / schoolthought

Looking Under the Hood of the OLS Model (Mathy)

The best way for me to learn something is to force myself to write it up, and then post it here. Unlike some of my other posts, I didn’t write this to be an accessible and easily digestible post. But if you already have a solid understanding of how OLS works, and want to see how it’s derived, this might be interesting for you.
I wrote it for the following reasons:

1.) While I use linear regressions all the time, I had never derived the analytical solution myself.

2.) I wanted to show how to solve it in R using an optimizer, coding it myself.

3.) I wanted to see how to solve OLS with maximum likelihood, instead of just using the closed form solution.

4.) I wanted more Latex experience.


The post itself is a PDF file. I eventually need to move this blog and host it myself, so that I can add plugins and LaTeX will render here!
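Until then, here is a compressed sketch of points 1 through 3 in R, on simulated data rather than the exact code in the write-up.

# OLS three ways on simulated data: closed form, numerical optimizer, and maximum likelihood.
set.seed(42)
n <- 500
X <- cbind(1, rnorm(n), rnorm(n))          # design matrix with an intercept
beta_true <- c(2, -1, 0.5)
y <- X %*% beta_true + rnorm(n, sd = 1.5)

# 1.) The analytical solution: beta_hat = (X'X)^(-1) X'y
beta_closed <- solve(t(X) %*% X, t(X) %*% y)

# 2.) The same answer from a numerical optimizer minimizing the sum of squared residuals.
rss <- function(b) sum((y - X %*% b)^2)
beta_optim <- optim(rep(0, 3), rss, method = "BFGS")$par

# 3.) Maximum likelihood under Gaussian errors: estimate beta and sigma jointly.
neg_loglik <- function(theta) {
  b <- theta[1:3]
  sigma <- exp(theta[4])                   # log-parameterized so sigma stays positive
  -sum(dnorm(y, mean = X %*% b, sd = sigma, log = TRUE))
}
mle <- optim(c(rep(0, 3), 0), neg_loglik, method = "BFGS")$par

cbind(closed_form = beta_closed, optimizer = beta_optim, mle = mle[1:3])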


PDF File:


August 12, 2015 / schoolthought

Masters Thesis

I have attached the thesis I wrote two years ago, in August of 2013, for my MSc in Political Economy at the London School of Economics. The thesis is on the Smoot-Hawley Tariff Act of 1930, a tariff act passed at the onset of the Great Depression and sometimes blamed for the depression’s severity. It is also an example of awful economic policy, and of the dangers of log-rolling in the U.S. legislative branches. I argue against previous academic papers that claimed it was legislation no one wanted, but which spiraled out of control as every legislator tossed in their own protectionist policies once it started. I claim that the act was in fact a prudent political move for Hoover, and benefited his political career. I set up the model using game-theoretic logic, and then build up empirical evidence by analyzing the various voting blocs within the House and Senate using principal component analysis.

I had hoped to redo it in LaTeX, but at this point I don’t think it would justify the amount of time it would take. So when you see those clunky equations, just pretend they look fancy. As anyone who has worked in academic social sciences should know, it doesn’t matter if the statistics or math is correct, so long as it looks imposing and flashy. But in all honesty, even though it looks clunky, I still stand by my methods of inference. Looking back on my paper now, having learned so much more, I would only make a couple changes:

The first would be to either remove, or be far more specific with, my argument on macro-economic factors. I just toss in a few citations and a macro-economic theory of factor production, which wasn’t all that necessary for my main argument. The second would be to detail the methodology behind the principal component analysis a little more thoroughly. Being more explicit about my identification mechanism and the properties I want my model to satisfy would tighten up my inferences and conclusion.
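For what it’s worth, the core of that methodology is easy to sketch. On a hypothetical roll-call matrix (legislators by votes, coded 1 for yea and 0 for nay), the first principal component tends to recover the dominant bloc structure. This toy example uses simulated votes, not my thesis data.

# A minimal sketch of PCA on a hypothetical roll-call matrix (simulated, not my thesis data).
set.seed(7)
n_legislators <- 100
n_votes <- 40
bloc <- rep(c(-1, 1), each = n_legislators / 2)   # two latent voting blocs
# Probability of a 'yea' depends on the legislator's bloc times a vote-specific loading.
p_yea <- plogis(outer(bloc, rnorm(n_votes)))
votes <- matrix(rbinom(n_legislators * n_votes, 1, p_yea), n_legislators, n_votes)

pc <- prcomp(votes, scale. = TRUE)
# The first principal component separates the two blocs almost perfectly.
boxplot(pc$x[, 1] ~ factor(bloc), xlab = "true bloc", ylab = "first principal component")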

Thesis: SimonRiddell_MastersThesis

July 12, 2015 / schoolthought

Messing around with DW-Nominate and Bond Returns

Over the past year and a half I have spent most of my time at the Fed working with yields data from U.S. Treasuries. Before the Fed I spent most of my time at LSE working with political data, specifically a dataset called DW-Nominate. I won’t do it full justice here, but it’s a measure of partisanship within the U.S. Congress. It is a formalization of our heuristic classifications. We ‘know’ Hillary Clinton is further ‘to the left’ than Ted Cruz, but how are we actually identifying that? It doesn’t work too well to identify it on specific policies, as even just 8 years ago both she and Obama were against gay marriage, which is no longer reconcilable with ‘the left.’ Poole and Rosenthal identified it by using multi-dimensional scaling to find the ‘ideal point’ of each legislator on a single dimension, as a function of their voting record and who they vote with most. As a result a legislator who always votes with other legislators on ‘the left’ and never with legislators on ‘the right’ will be placed farther to the left. This allows each congress (in this case the House) to have its partisanship estimated, based on where along the dimension the average winning vote occurs. It also allows for a measure of polarization, which captures disagreement amongst legislators.

The yields data from U.S. Treasuries are the most fundamental indicator of the present and future of the U.S. economy. The pricing of the yield curve is related to expectations of the U.S. interest rate and real economic activity. For example, the steepness of the curve is a great indicator of future recessions, due in large part to how monetary policy affects yields. From this data I have constructed an excess return series, based on a function written by an economist I work for, which is the average of the additional return that can be achieved from buying an N-year bond and selling it a year later as an (N-1)-year bond, compared to just buying a 1-year bond. What this identifies is the time-varying risk premium in the term structure, which reflects investors’ demand for compensation for bearing interest rate risk. For example, if an investor buys a 5-year bond and sells it a year later, he takes the risk that rates rose in the interim, which means he might realize a loss (compared to the counterfactual of having just bought a 1-year bond). For this he requires an additional expected return.
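I can’t reproduce the economist’s function here, but the textbook calculation it is based on is straightforward to sketch, assuming a panel of annual zero-coupon log yields for maturities one to five years.

# A sketch of the textbook log excess-return calculation (not the exact function I use).
# 'yields' is a T x 5 matrix of annual zero-coupon log yields, column n = maturity n years.
excess_returns <- function(yields) {
  T_obs <- nrow(yields)
  rx <- matrix(NA, T_obs - 1, 4, dimnames = list(NULL, paste0("rx", 2:5)))
  for (n in 2:5) {
    # log price of an n-year zero is -n * y(n); the holding-period return is the price change.
    hold_ret <- n * yields[1:(T_obs - 1), n] - (n - 1) * yields[2:T_obs, n - 1]
    rx[, n - 1] <- hold_ret - yields[1:(T_obs - 1), 1]   # subtract the riskless 1-year yield
  }
  rowMeans(rx)   # average excess return across maturities, one value per year t = 1, ..., T-1
}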

I was always curious whether there was a way to identify a political risk premium in the term structure of yields, so I ran a series of regressions using the DW-Nominate measures of partisanship and political polarization. While the results were a little interesting, they appear to be null results. I won’t entirely give up on the general thesis that there is a relationship between political polarization and the U.S. economy, but I have at least proven to myself that identifying that relationship will be much more challenging than simply throwing time series at a regression. With that said, I’ve documented a snippet of the resulting graphs below, as well as my code.
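In spirit, the regressions behind the charts below look like the following sketch (with placeholder data and hypothetical variable names; the real series come from the Fed yields data and DW-Nominate).

# Placeholder data standing in for ~30 congressional-term observations.
set.seed(3)
df <- data.frame(rx = rnorm(30), pc1 = rnorm(30), pc2 = rnorm(30), pc3 = rnorm(30),
                 polarization = rnorm(30), partisanship = rnorm(30))

benchmark <- lm(rx ~ pc1 + pc2 + pc3, data = df)                               # Chart 1
political <- lm(rx ~ polarization + partisanship, data = df)                   # Charts 2 and 3
combined  <- lm(rx ~ pc1 + pc2 + pc3 + polarization + partisanship, data = df) # Chart 4
summary(benchmark)$r.squared
summary(combined)$r.squared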

Chart 1: This chart is a benchmark, and graphs the fitted values of a regression of excess returns on the first three principal components of yields. The theoretical idea behind this is that the term structure of yields should include all knowable future information on the U.S. economy, including the time-varying risk premium. So for another variable to meaningfully contribute, it must add above and beyond these three principal components. The R2 for this regression is 0.17.


Chart 2: This chart is a regression of excess returns on only the two political variables, without the principal components. What this regression asks, in effect, is: standing at the end of a two-year congressional term and looking back at the dynamics of that legislative body, how much of the future time-varying risk premium can I predict? Despite reaching a statistically significant result, and an R2 of 0.11, the economic significance of this chart does not look meaningful.


Chart 3: This chart asks the same question as before, but assumes the investor has perfect foresight of the dynamics of the legislative body over the coming two years, essentially allowing him to predict the future and use it for his investment today. The R2 and significance remain essentially the same as in chart 2, and while the variation at least looks slightly more interesting, it is still pretty unconvincing.


Chart 4: This combines the two political variables with the first three principal components in the black line, and compares it to the benchmark of only the first three principal components in the red line. Statistically the R2 here is 0.26 vs. the benchmark value of 0.17, and the additional variables are significant.

I’m not convinced this represents any meaningful causal relationship outside of some historical idiosyncratic patterns. It is possible there are some deep latent factors in the U.S. and the world, which relate or drive increasing political polarization and variation in the time varying risk-premium. But absent any strong theoretical connection, I can’t bring myself to buy any of the significance.  There are also no meaningful reasons to prefer one model specification over another. For example, instead of excess returns I could have used the slope of the yield curve, and I could experiment with lags or forwards of explanatory variables without running against any theoretical reason to prefer one over the other (I tried lots of these, without results much different from what I achieved here).  And given all this freedom to try varying models, I could surely find a tight fit eventually, by luck.

In addition, the sample size of data on each congress is not particularly large, with only around 30 unique points for the entire sample (compared to the daily yields data).

Messy R Code:

DW-Nominate data from

June 16, 2015 / schoolthought

Review of Orwell’s Experience in the Spanish Civil War and WWII

Book One, Homage to Catalonia:

When George Orwell was in his early 30s, he went to fight in the Spanish Civil War against the fascists. He fought for the Workers’ Party of Marxist Unification, also known as the POUM. In 1936 the Spanish military staged a coup to put Francisco Franco in total control of the country. Franco would be supported by Nazi Germany and Italy, with the various workers’ movements being supported by Russia. The ideological shape of Europe at the time consisted of a battle between the workers’ movement centered in Russia and the insidious forms of fascism, which had taken their firmest roots in Germany. Even within the relative stability of Britain and the U.S. a deep fear of socialism began to spread, built on a fear of workers’ collective strength expropriating and nationalizing government and industry; a fear articulated in Joseph Schumpeter’s Capitalism, Socialism, and Democracy, where he predicted the end of capitalism.

For Orwell, the lives of the working poor in Britain were horrific compared to those of the upper class. His book The Road to Wigan Pier was based on his diaries and travels through the industrial north of England. The observations in the first half of the book subtly endorsed socialism, yet he also focused on its practical failures within Britain, frustrating his publisher and his primary audience. Despite his careful approach, Orwell saw a country and world where things were going to change rapidly, ending either in suffering and repression, or in the poor gaining more political power and no longer suffering while the rich lived opulently. So despite being British, he saw Spain as the start of a larger battle.

The ideas behind communism and fascism, combined with the modern age of industrialization, hadn’t been tested, but held the promise of a better world. In a sense, each offered the same thing to those in society who, whenever they looked at the more well-off, saw an ease of life they could never hope to obtain: the opportunity to live with what they saw as dignity. Communism would achieve the goal through collectivization and the removal of the capitalists, industrialists, aristocrats, and politicians whose opulence put the poor in relative shame. Fascism, on the other hand, didn’t come with a handbook, and to me often seems to rest more on nationalism and transformative leadership. However, most modern forms of fascism involve bribing the major capitalists in an economy until it is possible to coerce them with violence, and giving the working class meaning through manufactured employment and some type of nationalist fervor. These tactics can be found today in Syria, as I wrote in a previous post, but with a slightly more sectarian twist.

Orwell was not alone in seeing these two ideologies building up throughout Europe. But he focused on the specific institutions, people, and parties within countries and their neighbors.

This book is largely about his time on the front lines. The anti-fascist forces were united more by a common enemy than by ideological similarity. There were variations on communist groups, socialists, anarchists, and worker syndicates. Their lack of organization and military supplies consistently came up in Orwell’s frustrations, but he also saw benefits in the style of military. The workers’ army, he saw, was based on class loyalty, whereas a ‘bourgeois conscript army is based ultimately on fear’.

This book walks the reader through Orwell’s original belief in socialism, as well as his disillusionment with the ability to meaningfully fix and rebuild the foundations of a country through civil war. “This is not a war,” he used to say, “It is a comic opera with an occasional death.”

Below I’ve included more of my favorite excerpts:

“At Monte Pocero, when they pointed to the position on our left and said: ‘Those are the Socialists’, I was puzzled and said: ‘Aren’t we all socialists?’ I thought it idiotic that people fighting for their lives should have separate parties; my attitude always was, ‘Why can’t we drop all this political nonsense and get on with the war?’”

“When I came to Spain, and for some time afterwards, I was not only uninterested in the political situation but unaware of it. I knew there was a war on, but I had no notion what kind of a war. If you had asked me why I had joined the militia I should have answered: “To fight against Fascism,” and if you should have asked me what I was fighting for, I should have answered: ‘Common decency.”

“In England, where the Press is more centralized and the public more easily deceived than elsewhere, only two versions of the Spanish war have had any publicity to speak of: the Right-wing version of Christian patriots versus Bolsheviks dripping with blood, and the Left wing version of gentlemanly republicans quelling a military revolt.”

“In Spain the communist ‘line’ was influenced by the fact that France, Russia’s ally, would object to a revolutionary neighbor.”

“To fight against Fascism on behalf of “democracy” is to fight against one form of capitalism on behalf of a second which is liable to turn into the first at any moment. The only real alternative to Fascism is workers’ control.”


Book Two, Orwell’s British Perspective on WWII

Following Orwell’s experience in Spain, he wrote about his time living in and near London leading up to, and during, WWII. Before reading this book my own casual historical view of WWII was that Britain rallied under Churchill’s iron will to defeat the Axis. At the time Orwell was in his late 30s, and was still recuperating from an injury he suffered in the Spanish Civil War (as a foreign fighter). Orwell’s diary portrays a different picture of a politically unstable environment, and a country that appeared to lack the motivation to go to war with Germany.

The Battle of France began on May 10th, 1940. Despite Hitler’s constant aggression, this was the first major battle involving the Allied forces. By May 26th the German advance had cut off about 330,000 British and other Allied troops, who had retreated to the shores of Dunkirk. At this point there were talks among the British of making a conditional peace offer to the Germans, and France would formally surrender the following month. If the British had lost the hundreds of thousands of troops in the British Expeditionary Force (B.E.F.), it would be hard to imagine a situation in which they could still contest Europe. This marks the beginning of Orwell’s wartime diary on May 28th, 1940. Orwell wrote that there was ‘no real news and little possibility of inferring what is really happening,’ but his suspicion was that the strategic situation the B.E.F. was in was hopeless. Despite this, he said that people were not really talking about the war.

The people in the Censorship Department where Orwell’s wife, Eileen, worked would lump all “red” papers together, and often prevent them from being published or exported. Orwell also mentioned that there were rumors of air raids beginning in London, yet there still seemed to be little interest in the war, as people were not grasping that they were in danger. Over the coming days, as the soldiers at Dunkirk were miraculously evacuated, Orwell saw “The usual Sunday crowds drifting to and fro, perambulators, cycling clubs, people exercising dogs, knots of young men loitering at street corners, with not an indication in any face or in anything … that they are likely to be invaded within a few weeks.”

I also thought Orwell’s comments on marketing at the time were hilarious. He observed: “Huge advert. on the side of a bus: ‘FIRST AID IN WARTIME, FOR HEALTH, STRENGTH AND FORTITUDE. WRIGLEY’S CHEWING GUM.’” Orwell was always particularly disgusted by wartime profiteering.

Throughout his diary, what I find most interesting is Orwell’s commitment to a dispassionate political study of his time and country, while remaining steadfast in his personal nationalism and desire to defeat fascism. Orwell believed he was able to see the future more clearly than the British cabinet. Normally I’d accuse any writer of just cherry-picking his past predictions, but I think I trust Orwell here. His reasoning was as follows:

“Partly it is a question of not being blinded by class interests etc., eg. anyone not financially interested could see at a glance the strategic danger to England of letting Germany and Italy dominate Spain, whereas many rightwingers, even professional soldiers, simply could not grasp this most obvious fact. But where I feel that people like us understand the situation better than so-called experts is not in any power to foretell specific events, but in the power to grasp what kind of world we are living in. At any rate I have known since about 1931 that the future must be catastrophic. I could not say exactly what wars and revolutions would happen, but they never surprised me when they came. Since 1934 I have known war between England and Germany was coming, and since 1936 I have known it with complete certainty.”

There are plenty of other brilliant paragraphs Orwell writes in his diary, but this is the final one I’ve chosen to include in this post:

“It is impossible even yet to decide what to do in the case of German conquest of England. The one thing I will not do is to clear out, at any rate not further than Ireland, supposing that to be feasible … If the U.S.A. is going to submit to conquest as well, there is nothing for it but to die fighting, but one must above all die fighting and have the satisfaction of killing somebody else first.”
