October 24, 2016 / schoolthought

Why Rationalists (sometimes) Madden Me

Eliezer Yudkowsky wrote a short Facebook comment this week. It’s not really fair to take him so seriously and comment on it here, but I’m going to anyway, because it highlights why rationalists (sometimes) drive me crazy.

This is the guy who wrote Politics is the Mind Killer, and is highly respected for his insights on human bias and rationality.

My (cheeky) response:

I know you’re probably joking, but there are other emotional and civilized considerations behind these types of choices. Rationality can’t override all biological subroutines, and these are the types of choices that can send off very poor signals, which have the potential to deeply damage existing and future relationships.
“Wow, you have a really nice place! How did you afford it? I’m glad I asked you out on a date!” “Oh, I was a prostitute when I was younger. Only once though. I made lots of money because I’m so rational.” “Wow, you’re such a rational woman! What a wonderful quality to find in a mate.”

The idea behind Eliezer’s rationalism is that humans are biased emotional machines, but if we think in a structured, parameterized fashion, we can evaluate choices in a way that arrives at different outcomes than our base instincts would. It’s fundamentally using applied scientific methods to live our lives. This is a great idea, as most people make choices on base instinct.

The litmus test though has to be a clear acknowledgement of our bias when the truth is revealed. The best examples are those that highlight a consistent bias in our neural network approximation, which is put on display by a structured model.

For example, sometimes apartments will give renters a choice of either the first month free *or* a cheaper rate. One of these choices is mathematically optimal, and if someone makes the wrong choice they are wrong.
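To make the comparison concrete, here is a minimal sketch with hypothetical numbers (a $1,500/month, 12-month lease; the discount figure is invented for illustration, and which option wins depends entirely on the numbers):

```python
def total_cost_first_month_free(rent, months=12):
    """Total paid over the lease when the first month is waived."""
    return rent * (months - 1)

def total_cost_discount(rent, discount, months=12):
    """Total paid over the lease at a reduced monthly rate."""
    return (rent - discount) * months

free_month = total_cost_first_month_free(1500)   # 11 * 1500 = 16,500
discounted = total_cost_discount(1500, 130)      # 12 * 1370 = 16,440
# Here the monthly discount is the mathematically optimal choice;
# change the numbers and the answer can flip.
```

The point is only that one option strictly dominates once you do the arithmetic, which is exactly the kind of check a brain running on instinct skips.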

Retirement funds can be slightly more confusing. If someone puts 100% of their funds in a bond index, that seems like a really bad idea. We can’t tell them their risk preferences are wrong, though; those are their preferences. What we can say is “Your risk preference would have to match this distribution to justify your choice: does it?”

See, in these cases someone who isn’t thinking too hard, or isn’t smart enough, or doesn’t care, uses their brain’s untrained base model to spit out an answer. Our brains love to spit out answers; we didn’t evolve to develop epistemic reverence. Then someone else builds a more refined model that points out a clear bias, one that can be easily represented mathematically (or at least in a very structured manner).

I’m being a little lazy here for two reasons: 1.) Someone could point out that I’m assuming the person who made the initial biased error is smart enough to observe and learn that they were mistaken. And 2.) I’m assuming there is a discrete distinction between self-evident mathematical biases and other types of bias. I can live with this laziness for now.

In the case of prostitution, let’s start by mapping out the dynamics of the situation. There is around a $120,000 premium for a young (presumably attractive) woman to sell her virginity. Why? Well, it is because she is only able to do this in Nevada, must be willing to suffer through the act, and potentially messes up her personal and familial relationships. If these risks didn’t exist, and we all lived in a sexually liberated world where no one cared about prostitution, it’s safe to say she’d get an order of magnitude less for the trade.

In short, the price is currently the equilibrium price for a marginal sale conditional on these expected costs. In this case the trade only makes sense if you have a good reason to believe your expected costs are lower than the revenue you would receive.

Eliezer, however, seems to think and imply this trade isn’t happening enough, because people, unlike rationalists, haven’t thought through the trade-offs and are making choices based on their emotional biases.

For what group would these expected costs be low? Well, the daughters of rationalists would know their parents wouldn’t judge them negatively, and if they are part of the community they could avoid lots of negative social repercussions. It only stands to reason men like Eliezer should encourage their daughters to sell their virginity.

And to what sane man does this sound acceptable? For now, we are not computers, we are men. I understand that my brain runs biological algorithms that are beyond *ahem* pure reason. This doesn’t mean that the feelings aren’t real. I can concede the point that combining intellect, rationality, and base instinct is challenging. It’s hard to tell where we draw the line. For me, this is that line.

I’m a science obsessed guy, so it always feels strange for me to appeal to an appreciation for a moral civilization and intuitive ethical preferences. As I’ve grown older, though, I don’t see the two as contradictory. If humans are nothing more than a special case of a robot, then our collective programming could result in some optimal and suboptimal outcomes at the societal level.

There is no real way to prove any of this, but in my optimal society our women are never encouraged to sell their virginity. I don’t know. It’s what feels right to me. I don’t consider it irrational.

I have, however, now come up with a better definition of Neo-Reactionary: A Rationalist who doesn’t want to pimp out his daughter.





October 20, 2016 / schoolthought

Before Work

It’s a rainy and dark day in Seattle. As I was drinking my coffee before work I checked the Mosul live stream, so I could see how the battle was going. It’s hard to tell what’s going on, but as the sun set you could see fire, intermittent explosions, and small arms fire.

In the foreground of the camera you can see fireflies as well, flying around like tracers encircling the war.

It’s hard to imagine that in this picture there are so many men younger than me desperately fighting, some having fun and most terrified.

Anyway, time to work.



October 16, 2016 / schoolthought


Most interesting functions that describe the complexity of the world are too hard for even the smartest human to derive. I think it’s an interesting twist of reality that the nonlinear equations required to classify a cat are impossible for a human to derive, but every once in a while a guy like Dirac can conjure the equations that describe very fundamental features of reality. Weird.

Only very recently have neural networks begun to classify pictures of cats better than humans (it’s still mostly a draw). And they are only now learning to drive cars. The models themselves are black boxes; we can’t meaningfully look into the functions they are approximating. We do know, though, that they are approximating functions used to predict where the cat is in a video, or where the car should turn.

Scientists are still far off from stuff like literary or media criticism or analysis. It follows that if neural networks are approximations for our brain, and our brain uses classifications to understand the world, we are trying to filter out nonlinear and complex functions when we study the world.

Classification without scientific verification risks simply overfitting the past and giving us bunk conclusions about the state of the world. On the other hand, scientific verification of things like media analysis is nearly impossible outside of really simple and contrived experiments.

Everyone seems to agree the way the media interacts and evolves with Americans and politics is clearly relevant to our lives.

This sucks. Only our brains can filter out these equations. We have no way to tell if they are overfit. There is no experiment we can run to verify anything, other than casually using our brains to pseudo-test hypotheses by observing the future and doing our best to ‘control’ for the cacophony of the world.

If anyone were to get it right though, I would count on David Foster Wallace. In his essay E Unibus Pluram he wrote about the ‘post-postmodernism’ of media. His interest was in the dynamic interaction between the viewer and the media itself. He noted how we all agreed T.V. was an instrument of cultural decay and we all viewed it as though we were in on the joke, which then fed back into the creation of the media itself.

From the essay:

What explains the pointlessness of most published TV criticism is that television has become immune to charges that it lacks any meaningful connection to the world outside it. It’s not that charges of nonconnection have become untrue. It’s that any such connection has become otiose. Television used to point beyond itself. Those of us born in like the sixties were trained to look where it pointed, usually at versions of “real life” made prettier, sweeter, better by succumbing to a product or temptation. Today’s Audience is way better trained, and TV has discarded what’s not needed. A dog, if you point at something, will look only at your finger.


But TV is not low because it is vulgar or prurient or stupid. It is often all these things, but this is a logical function of its need to please Audience. And I’m not saying that television is vulgar and dumb because the people who compose Audience are vulgar and dumb. Television is the way it is simply because people tend to be really similar in their vulgar and prurient and stupid interests and wildly different in their refined and moral and intelligent interests. It’s all about syncretic diversity: neither medium nor viewers are responsible for quality.

Are we all metawatching the current election? Everyone hates the mainstream media. The left hate-watches it, then uses John Oliver, Rachel Maddow, the NYTimes, The Huffington Post, and whoever else to lay into them for their unbalanced reporting. There are numerous articles accusing them (you know, them? CNN, MSNBC, whatever) of giving rise to Trump.

Meanwhile Trump, right-wing talk shows and blogger types accuse the mainstream media of launching an unprecedented assault on their campaign.

Even now, the mainstream media channels realize they are accused of misleading no matter what they do. Still, everyone metawatches them, using the reporting they hear to generate their own partisan commentary.

I wonder if, without anyone intending it, the equilibrium is for mainstream media to be purposefully bad. This way we all metawatch it so that we can stomp on their broken toy arguments.


October 13, 2016 / schoolthought

Week Review #2

Slavery Dynamics:

Overcoming Bias has an interesting article on the economic dynamics of American slavery. It reminds me of a recent EconTalk episode with Munger, who talks about how the intellectual culture of the south created incredibly clever pro-slavery arguments. Not that they are moral, or correct, but that they are clever enough that if you were born into that society they would be convincing. Presumably this is in contrast to most portrayals of the time, which involve almost comically evil folks.

Munger quotes a book called Cannibals All, which I had previously partially read. The book takes a sort of Marxist approach to slavery, claiming that given how awful working poor conditions are for wage-slaves, slavery is actually a good deal. The author’s reasoning is that the slave owner actually has an incentive to keep the slave healthy and safe, whereas the capitalist doesn’t own any particular worker and has no such incentive. Yet with his capital he has a residual claim on slave labor from all of the working poor.

It’s no wonder Munger found these arguments… surprisingly good. Not good good, but about as good as any modern PhD sociology thesis (that is, pretty bad). But they sound good. And while sounding good often has no correlation with reality, it’s often enough.

It’s strange that there is tons of literature on American slavery, some of it by brilliant minds, most of it painting a different picture than what we were taught. Probably what happens is clever, well-read scholars devote a lifetime to studying slavery and come to shared conclusions. The problem is most people don’t have the ability or time to study all those texts. Cutting the texts down is dangerous, as a little knowledge can be a dangerous thing.

The clever solution is to select a few core and simple texts on slavery that lead the reader to a one-dimensional version of the slavery scholars’ final conclusions. You can see the same thing with the Holocaust. The side effect is that those simple, selected texts are then mistaken for the reality. So when someone starts digging a little into old books, and spots inconsistencies, exaggerations, and exclusions, they immediately doubt the entire conclusion, even though the conclusion is usually still generally right.

The problem is that these topics and conclusions become sacred, and are used as a shared signal for our morality. In Germany it’s illegal to deny the Holocaust. So when someone starts digging into small inconsistencies or questioning the past it’s viewed very negatively. Even though you would have to be delusional to actually deny the Holocaust or claim American slavery was in any way not a horror show.

So what seems to happen is every time someone makes a claim everyone goes along with it, since you can’t question the sacred.

And that sucks. Because while it might convince some people to care more, it also becomes a really inconsistent documentation of history that gives far too much credibility to groups who use conspiracy theories for their own ideological reasons.

U.S. Growth:

Interesting article on Sam Altman and his work at the head of Y Combinator. His goal seems to be to save America through incredible growth. The article reminds me of the Unqualified Reservations post on how Altman almost gets it, but that we are focusing too much on ‘pig growth.’ Pig growth borrows a term from philosopher Thomas Carlyle, which basically decomposes growth into short-run rum consumption and growth from buying your daughter a wedding dress.

What he thinks we should really be focusing on is getting the type of growth that creates a civilization worthy of pride. Interestingly enough, this type of thinking seems to be gaining traction as an economics paper on the value of uplifting makework programs was linked in Marginal Revolution.

Personally I think small organic farms are a great place to start. I used to think the French were stupid for their farm subsidies. Now I still think they are stupid, but that they accidentally got this one right. Our farm subsidies in the U.S. these days suck, because they mainly go to massive industrial farms.

We can imagine a scenario where there is always a job for any American citizen on a small farm somewhere outside of a city. We would all subsidize the farms, and this would provide a healthy life with meaningful work for anyone who feels they can’t make it in the private sector or find a job.

This would have been an insane proposition during times of high growth, but I get the impression the future of our growth is concentrated in an increasingly small group of high-tech, high-IQ people.

The Rational-Sphere and Reaction Watch:

Great interview by game theory professor James Miller with Greg Cochran of West Hunter. My favorite part was at the end, when Miller notes he had to cut out parts of the interview because the topics could destroy his career if he was on record discussing them.

There is a fast-growing reaction against immigration in the US. There currently seems to be a breakdown between the immigration preferences of somewhere between 20-50% of the US population, depending on how you define it, and those preferences working their way into legislation through the electoral connection. My theory is that the longer that connection between electorate and legislation is suppressed, the more radical it will become.

Evonomics article on immigrants importing their destiny. It still feels dangerous to discuss, but there are real differences in cost and benefit towards admitting different immigrant groups.

“History is written by the winners, but perhaps in the future science will also be written by the winners. I’m not sure that the truth will win out. Perhaps the glass will become darker, rather than clearer. “

Does this document the start of the Progressive-Trump dichotomy?

General Interest:

Apparently in the Ottoman Empire, when the sultan died, the son who succeeded him on the throne was legally allowed to kill all of his brothers, the idea being that this lowered the chance of a civil war and limited the scope of damage.

A physician talks about his urge to save lives in Aleppo. He risks his life daily.

A university in Syria was bombed. As I saw the picture of blood on their math homework I looked over at the math books on my desk, realizing that my own frustration of knowing I will never be smart enough to truly understand what is inside them is not a real concern.

A surprisingly interesting post on the investment philosophy of a portfolio manager. He writes about Warren Buffett wannabes, and how he navigates investing while keeping clients happy.

Nerdy Residuals:

Apparently TensorFlow can simulate PDEs. I’m sure that will help me with all the daily PDE simulations I do.

This is also a cool write-up on Graph Convolutional Networks, a really interesting system for mapping graph theory data into deep learning models. I don’t know a ton about this field, but it seems like the next big step towards using highly complex relational graph data for machine learning problems.

October 11, 2016 / schoolthought

Theresa May: Progressive Savior

Theresa May is a savior of progressivism.

There is dissent among non-progressives. It’s now safe to say a Brexit voter and a Trump voter would have more in common than they would with an elite professor from their country. This dissent is making its way through the democratic electoral connection, but as of yet hasn’t meaningfully changed mainstream legislation in the UK or US.

If May is able to incorporate the frustrations of Brexit voters into mainstream politics, while allowing the opposition to negotiate with them, she could present the country with a palatable set of policies. I predict by taking seriously their core concerns, while ignoring the more radical concerns, she will incorporate the populist anger into normal boring mainstream politics. The more extreme will lose their collective power, since 90%+ will be sufficiently satisfied with their representation.

What is extreme in this case? From some perspectives it’s all racist extremism. This is changing, though, with even Ezra Klein and Tyler Cowen discussing the fact that there ought to be a way to discuss demographic concerns and the heavy negatives of some types of diversity without calling it racist xenophobia. In my opinion, extreme is when this macro-level frustration manifests itself as hatred or violence towards individuals in day-to-day life.

May’s recent speech was claimed to have been xenophobic and awful. Judging by the response, you would have thought she gave a speech similar to Enoch Powell’s Rivers of Blood speech. The reality is that by acknowledging the most reasonable, and least extreme, frustrations of a substantial number of Brits, she brings the policies into real debate and negotiation. The alternative, ignoring voters who feel betrayed by not having a say in immigration and the composition of their communities, risks their becoming even more extreme.

Democracy seems to work best when it brings together voters who are able and willing to make political trades. Excluding half the population from having representatives who are able to build political capital and make legislative trades, on the basis that they are morally wrong, is not a successful strategy.

Is it really worth staking an entire political party and direction of a country on marginal amounts of immigration? Does the US really benefit so strongly from illegal Mexican immigration and relocating Syrian refugees to Texas that the Democratic party, and frankly large subsets of the GOP, ought to risk disenfranchising half the country, pushing them towards more extreme choices? I think if mainstream politicians took this subset of voters more seriously sooner, instead of a Trump we could have had a May.

May seems to understand this and is taking it seriously. Mainstream financial journalists claim her hard Brexit risks messing up the economy. May isn’t ignoring economic risks because she’s stupid; she’s taking a stand and making a show of it because that’s what the voters want.

Refusing to compromise on relatively marginal issues, such as letting major voting blocs have a say in who they allow into their communities, while likening those voters to Hitler, is a really bad strategy.


Related Links:
Taleb on the “Intellectual Yet Idiot”

Viktor Orban, Hungarian Prime Minister’s immigration speech

Academic research on make-work programs

Evonomics on immigrants importing institutions

Bloomberg article on Breitbart executive

Berlin threatens to hold Facebook accountable for ‘racist posts’

Trump’s ratings let him scrimp on TV ad spending

October 2, 2016 / schoolthought

A Few Thoughts on ML in Economic Predictions

I was recently talking to a friend about my work in economic forecasting and why the field hasn’t had much influence from more modern or trendy machine learning algorithms. The way most economic forecasting works is by using a class of models called time-series models. Unlike structural models, they are based on identifying the statistical dynamics of how series change over time, based on their past data and their dynamic relationships with related series. For example, if you want to model GDP you would start with only GDP data and explain how it evolves based on its past values.

There are a few challenges with a large class of interesting economic time-series models. For one, the model you are specifying is super dependent on economic theory. As an example, we never use more than one lagged value when using stock market data, because by definition a stock market embeds expectations of the future conditional on information known at the time. So having past data is not only meaningless, but misleading. Secondly, this is compounded by the problem that economic time-series data often has little meaningful data. The amount of data in a model shouldn’t just be measured by its sample size, but by the amount of variation that is meaningful for building a mathematical connection between your variables.
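As a minimal sketch of what a bare-bones autoregressive specification looks like, here is an AR(1) model fit by least squares on simulated data. Everything here is invented for illustration (the data-generating process, the true coefficient, the no-intercept simplification); a real series would demand far more care:

```python
import random

def fit_ar1(series):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + noise
    (no intercept term, for simplicity)."""
    x_prev = series[:-1]
    x_next = series[1:]
    num = sum(a * b for a, b in zip(x_prev, x_next))
    den = sum(a * a for a in x_prev)
    return num / den

# Simulate an AR(1) process with a known phi = 0.7, then recover it.
random.seed(0)
phi, x, series = 0.7, 0.0, []
for _ in range(5000):
    x = phi * x + random.gauss(0, 1)
    series.append(x)

estimate = fit_ar1(series)  # typically within a few hundredths of 0.7
```

The estimation step is trivial; the hard part the post is describing is upstream of it, in deciding that one lag (and not five, or an entirely different structure) is the right specification.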

I ran into this problem when trying to forecast house prices in Seattle based on Zillow data. Their data goes back about 20 years. That’s not much: there are some economic issues around 2001-2003, a huge price increase up to 2007, a housing crash, and a new price increase. Not only is this not much variation (4 or 5 data points?), but the relationship between variables is structurally changing. Is the relationship between the macro-economy and Seattle house prices the same in 2004 as in modern tech-boom Seattle?

How do you solve this? If you’re really really good you are able to build great priors based on economic theory. My economic priors come from years of reading Game Theory textbooks, economic principles, history, and the Philosophy of Economics and Science. Those books aggregate their knowledge on human behavior from centuries of observation, documentation, and modelling human action and markets.

That’s the cool part of economics: lots of complex systems can be understood from observations that our brains filter down to a low number of dimensions, which people often call economic theory. If I drop $30 on the ground, I strongly expect the first person to walk by to pick it up. I expect that because, one, it’s what I would do, and as a human I can simulate what another human will probably do. And two, I’ve read lots of information involving other humans that suggests they would pick up the $30.

Let’s take this back to time-series modelling. The best economic models, before they are even estimated, are hypothesized by a human based on a combination of the specifics of their problem and their knowledge of economic theory. How do you get a machine to learn the right model? To replace a human it needs to understand economic theory, the structure of an economy as laid out in a textbook, the ability to simulate human behavior, and how this all interacts with the context of the specific problem.

Not only is it a problem type that requires massive high-dimensional data, but because the models are about other humans, we are naturally suited to simulate what it’s like to be another human and how their choices could dynamically evolve into the future.

In a sense the objective function we minimize for a given time-series model is only an approximation for the “true” objective function conditional on our having chosen the right model based on our prior data of economic systems.

Based on this I get the feeling solving economic models using advanced ML methods (where the model is able to incorporate prior economic system information) would require an AI-complete solution, which is able to read textbooks, human history, and simulate human behavior. There will definitely be smaller steps, particularly with respect to letting a machine search very high-dimensional datasets for useful predictors.

I have to think about this in a more structured way, but my feeling now is that the more a model relies on economic theory, the less useful ML models will end up being.

October 1, 2016 / schoolthought

Week Review #1

Following Slate Star Codex I’m tracking my links and posting them to refer back to in the future.

1.) The S&P 500 doesn’t seem to be sensitive to variations in Trump’s probability of winning (based on prediction market trends). This difference between what the marginal investor thinks and what non-investors suggest will happen is some sort of puzzle. The New York Times, on the other hand, claims the market could lose up to 10% based on debate data. The first paper, by Sweet, Ozimek, and Asher, uses a more formal econometric approach.

The NYTimes article is a little more haphazard, extrapolating a small change based on the after-hours and illiquid S&P 500 futures market. Plus, this implies the current market price is in equilibrium with the current expectation of Trump winning (let’s be incredibly conservative and say 20-30%). If this is true, shouldn’t the market have already fallen on that expectation over the past ~6 months?
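The back-of-the-envelope logic here can be made explicit. A minimal sketch with hypothetical numbers (the 25% win probability and 10% conditional decline are illustrative stand-ins, not figures from either article):

```python
def priced_in_drop(p_event, drop_if_event):
    """Expected decline already embedded in an efficient market price,
    given the probability of an event and the decline conditional on it."""
    return p_event * drop_if_event

# Hypothetical: if investors assign a 25% chance to a Trump win and
# expect a 10% decline conditional on it, roughly a 2.5% discount
# should already be built into the index today.
already_priced = priced_in_drop(0.25, 0.10)  # 0.025, i.e. 2.5%
```

Which is the point: if the conditional drop were real and the probability nonzero, part of it should be visible in prices now, not only after the event resolves.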

Scott Alexander of Slate Star Codex would (almost surely) have agreed with the Marginal Revolution argument that there is, as of yet, no strong sign that Trump is bad for the market.

However, he has now officially endorsed Clinton. He thinks she is less likely to fuck things up, even though she sucks. And our main goal has to be to keep the world stable so scientists can create a super-intelligent AI and fix (rebuild) our genetic code.

2.) John Cochrane gave a great podcast on EconTalk on economic growth, based on a paper he presented to the Congressional Budget Committee. He argues that over-regulation of our economy, a “death by a thousand cuts,” is to blame for slow growth. This is in contrast to views of low demand or that we have ‘run out of ideas.’

3.) Statistical icon Andrew Gelman laid into NPR for their incredibly lazy science reporting. The ‘scientific’ paper NPR reported on had to do with how class separation on airplanes results in anti-social behavior. His best line: “NPR will love this paper. It directly targets their demographic of people who are rich enough to fly a lot but not rich enough to fly first class, and who think that inequality is the cause of the world’s ills.”

4.) A Reddit user provides very strong evidence that an interview with a “rebel” (Nusra) group in Syria was staged. He does this entirely on his own, documenting video evidence and cross-comparing it to previously uploaded combat footage, maps, and landmarks. By doing this he is able to place the rebel commander in a location no rebel group ever captured.

We’re at a point where the amount of information being poured into the internet from Syria is a.) too great (and sometimes too unimportant) for the government to care about, and b.) too unstructured for any algorithmic model to digest. So rogue internet analysts can in some sense be near equals with the government.

In related news, here is a picture of ISIS’ currency.

5.) Relatively new Bayesian textbook (with R applications) is lauded as top of the line. The first chapter has no real stats/code, but is an awesome read for its Philosophy of Science insights.

I’ve been going over some probability stuff, and made a note to remind myself of Chebyshev’s inequality. People frequently use standard deviation rules assuming a normal distribution, often when it’s nonsensical. The inequality states: “In practical usage, in contrast to the 68–95–99.7 rule, which applies to normal distributions, under Chebyshev’s inequality a minimum of just 75% of values must lie within two standard deviations of the mean and 89% within three standard deviations.”
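The bound is easy to check empirically. A quick sketch using a deliberately non-normal, skewed distribution (exponential draws; the choice of distribution and sample size is arbitrary):

```python
import random

def within_k_sd(samples, k):
    """Fraction of samples lying within k standard deviations of the mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    sd = var ** 0.5
    return sum(abs(x - mean) <= k * sd for x in samples) / n

# Heavy right tail, nothing like a bell curve.
random.seed(1)
samples = [random.expovariate(1.0) for _ in range(100_000)]

for k in (2, 3):
    chebyshev_bound = 1 - 1 / k**2  # 0.75 at k=2, ~0.889 at k=3
    assert within_k_sd(samples, k) >= chebyshev_bound
```

The guarantee holds for any distribution with a finite variance, which is exactly why it is the honest fallback when assuming normality is nonsensical.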

6.) The newest edition of Econ Journal Watch has a great article on political representation in academia and the failure of Gender Sociology to ever seriously consider biological differences.

If I’m optimistic I would say these articles, combined with the growth of Heterodox Academy and the rationalist-sphere online, are keeping the goal of scientific inference alive. Still, it’s depressing that it’s heterodox and controversial to tell people “maybe men and women really are different in ways that make us uncomfortable.”






