This is from an interview I did with John Cochrane for the LSE Econ Society. The motivation to reach out to Mr. Cochrane, and the questions, are all my own. This is a picture/excerpt from the actual article found here:
This fourth piece focuses on a different way to view the goals of an investments firm. It is my own theory. I do not think I have provided enough evidence to support some of my claims, and there are some parts that are still bugging me. I am trying to find a more useful way to theoretically evaluate active investment firms, instead of resorting to alpha. In my experience, particularly after learning how alpha is calculated, it is often a weak measure of performance. Using alpha assumes that the firm is not doing useful research into various beta factors, which could arguably be more useful for its clients. Simply beating peers in an incredibly uneven market does not mean much. Instead I argue that infrastructure, talent, and research demand positive economic rents, and that this is a more fruitful approach for self-analysis as well as for evaluating a fund. In the end the alpha of an entire market is zero, but the value created by intelligent investors evaluating equity, debt, and other products is decidedly positive. This is my attempt to solve the problem of “how does active investing make sense if alpha sums to zero and markets are efficient?”
Ricardo’s Theory of Rents:
I believe Ricardo’s theory of rents is applicable, yet overlooked, when considering the investments industry. It is a fundamental piece of economic theory, created by one of the founders of microeconomics, and it helps break down the dichotomy between alpha and beta. While alpha and beta are useful for constructing a model of reality, they are an attempt to simplify the world into an equation, and the model has some serious flaws. The first is that all alpha must sum to zero, since every investor that beats the market takes from someone who lost.
Academic research often suggests that active investing is a pointless game because alpha across the whole market must equal zero by definition. As I mentioned earlier, this is because alpha measures return beyond what risk exposure justifies: for every winner there must be a loser, so alpha sums to zero over the whole market. That is the definition and theory behind alpha measurements; in practice, however, measuring an entire market precisely enough to zero out the alpha is impossible.
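The zero-sum logic can be shown with a toy calculation (all numbers are hypothetical, chosen purely for illustration): if the market return is the value-weighted average of every investor's return, then value-weighted excess returns must cancel out exactly.

```python
# Toy illustration with hypothetical numbers: the market return is the
# value-weighted average of all investors' returns, so the value-weighted
# excess returns (raw "alpha" before risk adjustment) must sum to zero.
weights = [0.5, 0.3, 0.2]      # each investor's share of the market
returns = [0.12, 0.06, 0.04]   # each investor's realized return

market_return = sum(w * r for w, r in zip(weights, returns))
excess = [r - market_return for r in returns]
weighted_excess = sum(w * e for w, e in zip(weights, excess))

print(market_return)    # ≈ 0.086
print(weighted_excess)  # ≈ 0.0 (up to floating point error)
```

No matter what weights and returns are plugged in, the weighted excess collapses to zero by construction; only the distribution of winners and losers changes.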
Viewing the primary goal in a market as only alpha returns is also problematic, because it suggests the market is a zero-sum game. However, if all active investors switched to passive investing (taking no active bets), then market prices would no longer change to reflect new information. Markets would become inefficient, which suggests active investing is not a zero-sum game.
Moreover, the widely accepted view in economics is that markets tend to be efficient under a capitalist system. While academia does often identify areas of inefficiency in various fields, it is suspect to claim that most active investment firms are contributing nothing to social welfare and utility. As a result it is important to search for a methodology that allows the existence of active investing to make sense, without requiring that the goal of active investment firms be to generate risk-adjusted returns (alpha).
Imagine that we reject the assumption that the goal of active investing should be to generate alpha. In that case we would ignore the residual values in return regressions (which is how alpha is identified) and concern ourselves only with risk factors (beta). I posit that active trading would still be beneficial. Some risk factors are not easy to invest in directly: for example, maintaining exposure to inflation risk on an emerging market junk bond portfolio. This would require a hedge fund with talent, infrastructure, and a great team. Such a firm would have to charge fees to support its operations, and it might be one of only a few hedge funds offering pure exposure to this risk factor (as opposed to holding small amounts of it inadvertently when buying a market index). This hedge fund would be accessible only to high net worth individuals, would charge expensive fees, and would provide potentially high returns with low correlation to traditional investments. This is a useful service that investors wish to buy, and it does not require the fund to beat other investors.
Contrary to previous examples, I have described this fund using only risk factors. By using Ricardo’s theory of economic rents, we can identify active investment firms that provide a useful service without obsessing purely over the residual value (alpha) of regressions on their investments, or over whether they beat their peers. Hopefully, by explaining this theory and then revisiting an example, I can convince the reader that the obsession with alpha is overrated. If you are not familiar with Ricardo’s theory of rents, I have included a detailed description at the end of this piece.
A primer on economic rents can be had by viewing a recent high-profile case in contemporary finance: Groupon (GRPN). This firm added a new dimension to deals and coupons, turning the coupon industry into a fun way to try new things, targeted at young social media users. And they flourished at first. However, their stock dropped from $25 to $4 in six months. One reason was a decrease in Groupon’s economic rents. They had a first mover advantage, since they essentially created a new product, and in the short run Groupon made immense profits, far above what a firm ought to be compensated for the act of organizing and emailing coupons. Since there were no competitors at the time, they were able to capture massive rents. Eventually the field was swamped by imitators, which drove Groupon’s economic rents down to zero. In fact, so many investors thought this would happen that Groupon was briefly the most shorted stock in the US market. This means owners of Groupon shares could charge a very high rate to speculative investors who wanted to borrow the stock in order to sell it short.
Understanding the idea of economic rents is fundamental to economics and financial markets. Economic rents are zero in all perfectly competitive industries. Even though these industries make money and profits, their economic rents are zero. As I explained earlier, this seems counter-intuitive at first, yet makes perfect sense. It takes constant innovation above and beyond the market norm (or a monopoly) to keep rents positive. A single innovative shock will temporarily bump up economic rents, but they decrease to zero once again as other firms copy. For example, Apple was recently able to increase its economic rents by winning the lawsuit claiming that Samsung infringed upon its innovations. The courts decided Apple deserved economic rents for innovation that Samsung allegedly stole.
Determining economic rents is difficult and often contested, as seen in the Apple and Samsung lawsuit, and it is even more difficult in the investments industry. It is a more theoretically complex view than simply measuring the alpha residual on risk-adjusted returns: instead we examine how firms provide services based on their inputs. For example, the inflation-exposure hedge fund I hypothesized earlier might have zero, or an immeasurable, ‘traditional’ alpha. Despite this, the firm would be operating with positive economic rents due to its ability to bring its clients exposure to a complicated risk factor, a service that is not easily replicable by other firms.
Another example might be a fund that holds the equity of many large Chinese firms in its portfolio. In contrast to other emerging market investment firms, this fund has many teams of investment professionals who worked at all the major Chinese firms before joining the fund. As a result this fund invests in select groups of Chinese firms with a decidedly above-average ability at picking stocks. A traditional risk-adjusted return model, such as the Fama and French three factor analysis, would measure their investments against a few well documented beta values and attribute the excess residual to alpha. Yet this analysis falls short. Consider instead that the team of investors sat together and thought about these different firms, analyzed their balance sheets, and viewed massive amounts of data. After this analysis they determined these Chinese firms would provide strong returns. Instead of measuring this basket of stocks against traditional market risk factors, we could argue that this team created a set of risk factors for each stock using macroeconomic and demographic data. For example, an agricultural firm might have its pork returns tied to the growth of GDP per capita in developing cities. In this example the team researched and chose a portfolio of stocks. It is possible the firm earned no traditional alpha. Instead, they identified factors that they believe will be compensated strongly based on their prediction of the future. And as mentioned earlier, predictability does not imply market inefficiency.
I argue that viewing this example in terms of positive economic rents is more useful than searching for alpha. First let’s consider how a traditional alpha calculation, similar to the examples I offered in the past post, would work. Suppose that this investment firm uses a value-oriented approach and searched for ten Chinese stocks that it believes traded at a substantial discount. A rudimentary (yet typical) risk adjustment would ask how much of the returns can be explained by traditional factors. This would begin by identifying a market index so that the market beta can be ‘taken out.’ Additional factors, such as those of Fama and French, might be adjusted for as well. Every aspect of these ten stocks’ returns that is not explained by these factors will be lumped together in the alpha category. This alpha category, while containing some information, is far from a good indicator of the success or failure of the firm.
Imagine that the bulk of the research from this investment firm was to find companies producing goods whose demand has become dramatically more inelastic over the past decade. Perhaps they were searching for goods that were considered luxuries before the rapid economic growth. This investment firm buys these stocks in the hope that they will continue to deliver consistent returns even in the face of a recession. Their conclusion was that the market had only partially priced in the defensive nature of these firms, so they believe that over the next half decade, or in the next recession, other investors will realize this defensive quality is worth paying extra for. Simply put, this firm wanted to buy a basket of defensive Chinese stocks, based on its own rigorous research. Given this hypothesis, if the following year still sees strong growth in China, the investment firm will show a negative alpha. In theory, the firm might argue that if the ‘defensive beta’ factor it believes in were included in the model, its lower returns would be justified. However, it could only argue this theoretically. And this is why traditional alpha measurements are so often flawed and misleading. The theory of the law of one price, risk and reward, factors, and alpha is essential to understand. The proofs that create this framework teach us how markets operate beyond what we are able to truly and scientifically measure.
It is for this reason that many academics argue stock picking is impossible. What they really ought to say is that we have yet to measure stock pickers generating true alpha in a manner consistent with the scientific method. And many investors, while unable to prove that what they have done is true skill with measurements that would qualify at the peer-reviewed journal level, are still able to make money off their investments. Yet positive alpha values are not necessary to justify the existence of stock pickers. After all, alpha is zero in the market as a whole. Consider the previous example. The active investment firm that was stock picking in China was not searching for positive risk-adjusted returns; it was searching for a factor that it believed would perform well over the following decade. Stock picking can therefore exist even without markets being inefficient and providing alpha. Uncovering the right beta values in the right combination is difficult, requires research, and can be rewarded. My example of ‘defensive Chinese’ stocks was still overly simple, as it claimed there was only one risk factor common to all ten stocks. In reality a stock-picking firm might have a portfolio of thirty stocks, and each stock might have ten different factors that the researchers argue will allow it to perform strongly. The term factor might even lose its meaning, as some of the researchers make their investment based on a firm handshake with the CEO.
While the law of one price claims that similar risk must carry similar reward, these risk factors might have low correlation with traditional market returns. If this is the case, it is possible the fund managers found a way to give their clients exposure to a subsection of the market that will either improve their returns while keeping their volatility constant, or keep their returns constant while lowering their volatility.
Consider the following example: ten hedge funds are all predicting the outcome of a single firm over the next decade. Each one believes this firm is exposed to a few rare factors that suggest it will provide consistent returns. Each fund buys the stock and receives the returns. Depending on when each fund bought or sold, there will be a group of winners and losers relative to one another if we rate them only on their alpha. However, each fund might have bought the firm for a different reason, to satisfy a different portfolio, and off of its own independent research. Perhaps the fund that received the lowest return-per-risk ratio bought this firm based on research suggesting the firm was at extremely low risk, making it suitable for a defensive portfolio. While it may appear that this fund came last in the group of ten, it might have actually identified a period of calm. And while it was rewarded less for its risk, as we would expect in an efficient market, the position still satisfied the goals of its portfolio.
My conclusion is that Ricardo’s theory of economic rents might better explain the existence of active investing and stock picking than a pure search for alpha. And as always, in markets with large variance, some investors will by nature do better than others for reasons we tend to call luck. This viewpoint also allows active investing to make economic sense. Large amounts of academic research, after finding that alpha does not beat infrastructure costs, would appear to suggest that many active investment firms are not meeting their goals. However, this is problematic for the same reason I argued in the prior example of the Chinese fund that was searching for a new and ‘rare’ factor. I believe we may instead view active investment firms as firms that are rewarded for their economic discovery of various market, economy, and individual stock factors. While the true alpha of an investment firm might be zero, it is possible that the firm has positive economic rents. It would be awarded these rents for its research and innovation in identifying, studying, and investing in the many different factors that might affect the price of stocks, such as the effect weather will have on oil shipments, or a rising demographic of tweenagers who are more likely to buy Apple devices. It is possible that these firms make money, and generate positive economic rents, by identifying the right mix of countless factors and even discovering new ones. As in my original risk-factor example, it is possible that a risk factor exists, and is compensated, yet does not exist as a pure investment vehicle. If a firm is able to research this factor and isolate it, investors will pay the firm to gain and maintain exposure to it, using the firm’s infrastructure, capital investment, and human capital.
In doing this the firm will increase societal wealth by allowing a more efficient transfer of risk, and it will create positive economic rents. All of this can be done successfully without focusing only on beating the alleged competition to generate alpha returns. Instead investors can be viewed like other professionals: the work of an intelligent investor in identifying risks in the market is compensated. The industry practice of tethering this compensation to a figure that is measured ‘easily,’ such as alpha, makes sense, as it gives an investor the incentive to work hard. However, this does not mean that alpha is the true end goal for a portfolio manager.
Ricardo Rent Explanation:
I will begin with an example involving two nearly identical hot-dog stands. Hot dog stand A is in New York City and hot dog stand B is in a small town called Springfield. Each owner needs to buy a business license for a mobile cart from the city to operate, and each city charges a price set by the market equilibrium of supply and demand. In this scenario let us assume that the functions that restrict supply and create demand are based only on the population of each city. Now, let’s guess how much each hot-dog stand owner is paid. Each stand owner is doing the same job at its very core: managing a stand, cooking, and selling hot dogs. However, the New York stand should sell more hot dogs simply because it is in a higher-population city (this is a key observation). Now let us consider the cost of a food cart license in each city. We know each city issues licenses in proportion to its population. The reader’s gut instinct is likely that the New York license ought to cost more, and this does make sense based on experience. But why should it cost more? Each city that issues a license wants the portion of the rents it deserves for having the ability to grant the right to sell to consumers. The right to sell to all of New York is worth more than the right to sell to all of Springfield. As a result the New York license will cost more than the Springfield license. The cost should be enough that the equilibrium wage of each hot dog stand owner is identical (reality will of course deviate, but the general theory is reasonable; fast food workers tend not to have radically different qualities of life).
The beauty of this theory is that, despite other factors, the owner of a hot dog stand only receives money for his or her marginal contribution to serving hot dogs. In this example one hot dog stand owner was able to sell far more hot dogs at a higher price due to a large and jam-packed city. But all that extra money, or rent, was demanded by the city when issuing the license. The city implicitly states that the rents captured due to its large population do not belong to the owner, and it demands them back through the license price. This all happens in a competitive market in equilibrium. Ultimately the two hot dog stands are essentially identical: each owner pulls the same cart, cleans it, cooks hot dogs, and exchanges them for money. Both of their economic rents are equal to zero, so ultimately they are rewarded only for their raw inputs and actions. This is economically intuitive, as we do not expect a fast-food worker in a large city to make far more than one in a small city. By recognizing that their economic rents are zero, we are able to acknowledge that while their revenue and sales numbers will differ drastically, they still ought to take home the same amount of money, as suppliers (in this case only a city licensing bureau) are able to demand higher shares of their revenue and sales.
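The hot-dog logic can be put into a worked example. All the figures below are hypothetical and chosen only to make the arithmetic transparent:

```python
# All numbers are hypothetical. Location makes the New York stand's
# revenue higher, but each city prices its license to capture exactly
# that location rent, so both owners take home the competitive wage
# for identical work.
ny_revenue, springfield_revenue = 200_000, 90_000
operating_cost = 30_000    # same cart, supplies, and effort in both cities
competitive_wage = 50_000  # what running a stand is worth anywhere

# License price = revenue - costs - competitive wage,
# i.e. the rent attributable to the city's population.
ny_license = ny_revenue - operating_cost - competitive_wage
springfield_license = springfield_revenue - operating_cost - competitive_wage

ny_take_home = ny_revenue - operating_cost - ny_license
springfield_take_home = springfield_revenue - operating_cost - springfield_license
print(ny_take_home, springfield_take_home)  # both equal the competitive wage
```

However the revenues differ, the take-home pay of both owners collapses to the same competitive wage once the license prices absorb the location rents, which is the equilibrium the theory predicts.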
Here is part three in my series. After this post there will be a final part four. However, in between three and four will be a new post on Ricardo’s theory of rents and financial markets, which is a personal viewpoint of financial markets that I have been developing. This current series–while I have put lots of hard work into it–is based upon University of Chicago professor John Cochrane’s seminal market theories and analysis.
Most active management and performance evaluation just is not well described by the alpha-beta, information-systematic, selection-style split anymore. There is no “alpha.” There is just beta you understand and beta you don’t understand, and beta you are positioned to buy vs. beta you are already exposed to and should sell. -John Cochrane
In the past post on efficient markets, I explained the traditional view of efficient markets and how it has evolved. I then extrapolated this to the current financial industry, investments firms, and what it means to actively beat a market. I now would like to focus on academic literature and efficient market theory that has continued to develop since its foundation in the 1970s. Most investors stop their study of finance far too early. After learning the basics of the efficient market hypothesis, they move on to the Wall Street Journal, The Economist, Zero-Hedge, and the Financial Times. And as these students read more, the efficient market hypothesis is only a lingering memory of an overly abstract theory. This is unfortunate. While the introduction to investments is often the same across the board, there is less consensus and many divergent schools of thought after this common starting point.
I will be primarily using John Cochrane’s notes on predictability, available on his website, as my reference. The first goal of most aspiring investors is to learn how to search for patterns and then use those patterns to make predictions about the financial market; in other words, to make money. Research built upon market theory and used to make predictions is mostly empirical. This means that instead of considering how markets ought to behave based on human nature and economic intuition, we use empirical data to understand how they work and look for patterns. In finance and economics a large weight is placed on theory due to the inability to conduct controlled experiments. However, now that we have a robust theory of markets, we can start to place empirical results in full context.
The primary tool of empirical research in investments is the following equation: R(t+1) = a + b1x1(t) + b2x2(t) + e(t+1). This is a simple regression equation. If financial economists were engaged in a battle to make predictions about financial markets, this equation would be the standard-issue M16 rifle. The primary point of my posts is to offer a non-mathematical understanding of financial markets, but I have decided one equation will not hurt. Below I have explained the separate parts of the regression, placed in the context of financial analysis.
R(t+1): This is the expected return in the following period (this could be any time interval we choose, such as one day or one year). “t” is the current period and therefore “t+1” is the following period.
b1x1(t): This is the first beta we use to forecast our left-hand variable R(t+1) using current information. The most commonly used beta in financial economics is the ‘market factor,’ which measures how much an asset or portfolio covaries with a chosen market index. This is so popular it is often referred to simply as ‘beta,’ though when using multiple betas we are careful to specify the market factor beta. A common error in investment analysis is to accept beta as a robust, catch-all risk measurement. The beta of a given asset or portfolio will vary based on the market index it is measured against as well as the time frame specified. For example, the beta of Apple will be different if it is measured relative to the S&P 500 as opposed to the Barclays Aggregate Bond Index. In this case the S&P 500 makes more intuitive sense, though there is no single ‘correct’ answer. Interpretation of beta should also consider the return distribution of the index, the overall volatility, and the fact that beta risk is not perfectly linear. While we tend to assume something close to a normal distribution for equities, in the case of options or derivatives a market beta can be misleading. Very low and very high beta stocks are often less reliable measurements for various reasons.
b2x2(t): This is simply a second beta term. A regression can have one, two, or many betas.
e(t+1): This is the error term (or disturbance term). We assume it is normally distributed with a mean of zero. This may include other variables (betas) we have not considered; measurement errors; unpredictable effects; and nonlinearities.
a: This is the alpha term, formally titled Jensen’s alpha after his seminal paper. When running a regression on investments, this term will be positive if the investor received greater return than the risk suggests, and negative if the investor did not receive return commensurate with risk. There is no magic in this term and the math is simple: if the return was 10% and the beta factor(s) can only explain 9%, the 1% residual is assigned to the alpha term. The leap from that 1% to calling it riskless profit is massive; it might have been due to market volatility or to beta factors that were not included in the model.
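The mechanics above can be sketched with a minimal one-factor regression computed by hand. The return series below is constructed (hypothetical data, not real returns) so that the fund is exactly 1.5 times the market plus a small intercept, making the answer easy to verify:

```python
# One-factor OLS regression by hand. The data is constructed so that
# asset = 1.5 * market + 0.001 exactly, i.e. beta = 1.5 and
# alpha = 0.1% per period (hypothetical numbers, not real returns).
market = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]  # market excess returns
asset = [1.5 * m + 0.001 for m in market]        # fund excess returns

n = len(market)
mean_m = sum(market) / n
mean_a = sum(asset) / n
cov = sum((m - mean_m) * (a - mean_a) for m, a in zip(market, asset)) / n
var = sum((m - mean_m) ** 2 for m in market) / n

beta = cov / var                # slope: the market factor loading
alpha = mean_a - beta * mean_m  # intercept: whatever the factor cannot explain
print(beta, alpha)             # ≈ 1.5 and ≈ 0.001
```

With real data the residual would never fit this cleanly; the leftover intercept would mix genuine skill, luck, and any risk factors missing from the model, which is exactly the interpretive problem discussed above.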
The hunt for alpha is so hyped that there are swaths of investors (and students) hoping to find alpha, most of whom are not even aware how it fits into the above regression. To identify alpha an investor must first create a regression model that, hopefully, has a beta for each relevant risk factor. This model would be run over a sufficiently large sample size and time period. The goal would be to analyze whether the investor consistently received positive alpha, suggesting he or she had returns above those expected from risk exposure alone. The unfortunate part of this model is that risk factors are difficult to observe and construct. Consider that a risk factor must isolate and measure a significant and persistent level of risk, observable in all similar assets, for which investors demand compensation.
The market beta factor–the darling variable of the CAPM–has many faults. Low beta stocks often receive significantly higher returns than the model anticipates, and high beta stocks often receive significantly lower returns. Demand for compensation for beta risk also varies over time, as events such as recessions make investors more risk averse. This means that if a recession has recently occurred, and investors are now risk averse, the market beta factor may suddenly lose efficacy. Despite these drawbacks, the market beta factor is still the most dominant and dependable risk factor. This is due in part to my original explanation, in part one, of how it is micro-founded in individual choice: there is no need to guess why beta works.
However, there are other notable beta factors. Fama and French noticed an issue with the market beta. When creating ten portfolios sorted on size and book-to-market ratios, the mean portfolio returns co-varied with exposure to these two factors, while market beta had an insignificant covariance. This suggested that there were additional dimensions to risk not being captured by the market beta, so Fama and French added two more factors. Size is measured by small cap vs. large cap, with the notion that small cap stocks are riskier and earn higher returns. Value is measured by the book-to-market (BtM) ratio, with the notion that firms with a high BtM offer higher returns. These high BtM stocks are called ‘value’ stocks, in contrast to low BtM stocks, which are called ‘growth’ stocks. There is one last commonly used risk factor, developed by Carhart, called momentum, which uses past returns as a predictor of future returns.
The final result is that when a full regression is run on a series of investments, all returns not explained by these risk factors are attributed to alpha, and alpha becomes interpreted as skill. As I mentioned before, alpha must by definition be zero across the entire market: not every investor can receive higher returns for their risk, so for every investor with positive alpha, other investors must have negative alpha. However, this theory presupposes that all risk factors are observable and measurable. The truth of the matter is that different firms use different risk factors, and even the best models fail to capture some of them. For example, a firm might have exposure to a few risk factors that it does not include in its model; as a result it will attribute to alpha what ought to be attributed to beta.
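The attribution step can be sketched as follows. Every number here is made up for illustration: the factor premia are assumed annual figures, and the loadings are assumed fund betas rather than estimates from any real regression.

```python
# Hypothetical Fama-French style attribution. Both the factor premia
# and the fund's loadings are made-up numbers, not real estimates.
factor_premia = {"MKT": 0.08, "SMB": 0.03, "HML": 0.04}  # assumed factor returns
loadings = {"MKT": 1.1, "SMB": 0.4, "HML": 0.5}          # assumed fund betas

fund_return = 0.14
explained = sum(loadings[f] * factor_premia[f] for f in factor_premia)
alpha = fund_return - explained  # the residual the factors cannot explain
print(explained, alpha)          # ≈ 0.12 and ≈ 0.02
```

Note that adding or removing a factor from the dictionaries mechanically shrinks or grows the alpha, which is the point made above: what a model calls alpha depends entirely on which betas it includes.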
In addition, a large sample of active investing is required to determine that a positive or negative alpha is consistent enough to be attributed to an investor’s ability. Very loosely speaking, it might take five years to even make an educated guess that a manager is beating the market, and even longer to feel safe in thinking a manager has talent. The amount of time required tends to be painfully long. Even then, with thousands of managers and investors, a few are bound to land in the 99.99th percentile by luck alone. As a result investors tend to search for the traits of a typical good manager. This is likely one reason for the obsession with prestige and pedigree in investments: a firm might be unable to show statistically significant alpha, but it can show a team of highly educated and trained investors.
An interesting example to start with is insurance. In fact, insurance products are often traded as assets on financial markets. Home insurance is an investment with a negative expected return but a positive value: we don’t expect to receive a positive return, yet we value it enough to buy it anyway. When paying for insurance we are transferring different risk factors to an insurance firm, for example the risk of fire, theft, an earthquake, or other negotiable details. Each one of these is a beta factor. We pay another party to take our risk, but value is created for society as a whole by spreading risk out. The insurance company’s goal is to offer you the cheapest deal it can while being sure it is compensated for all the risk factors it absorbs. Now suppose the insurer decides to cover all water damage that your house suffers. It considers the cost of your house, its size, and the other variables necessary to price the water damage risk factor. However, it is possible that it misses information; perhaps the purchaser of insurance lives near a swamp. In that case the insurance company is not being adequately compensated for the risk it has accepted: it has agreed to pay for water damage, but the chance of water damage is higher than it expects. In this situation the homeowner receives insurance alpha. They are having all their risk taken away for less than it ought to cost.
A year later the cost of insurance increases for the family. The incredibly clever team of statisticians at the insurance company realized that proximity to swamps is an important variable in water damage. Despite shopping around, the family finds this new information has flooded the market. They are no longer receiving more benefit than risk (alpha); instead they must now pay more to have another party take on their risk (beta). The source of the family’s alpha became common knowledge and, as a result, turned into beta.
Now that I have covered market efficiency, market alpha, market beta, and both the simplifications and reality of our tools, I would like to move on to the empirics and cover predictability.
Excess stock returns are unpredictable based on past price movements. A regression of returns on lagged returns, annually from 1927-2008, shows a predictability beta of 0.04. What this means is that annual returns were gathered over this time period into a data series. This data series was then ‘copied’ and lagged one year. The goal is to find whether last year’s return helps us forecast the current year’s return. With a beta coefficient of 0.04, if the return was 10% last year, we expect a rise of (0.04)×(10%) = 0.4% from “momentum.” In addition the R^2 value, a statistical measure of the proportion of return variance that can be forecast one year ahead, is near zero. Predictability of returns is equally unimpressive if the regression is run on time intervals shorter than a year. Generally it is of near zero value to attempt to forecast future price movements based on past prices alone. There are some strategies that consider isolated momentum in high volatility periods over fractions of a second, and others that focus on the intricacies of mutual fund pricing. These clever traders use a far more robust model than simply annual return predictions over 80 years. And while some pure momentum models do exist (predicting future price from past price alone), momentum is nearly always one of many different factors used in a full model. It is also important for traders to respect the reality that, of the hundreds of thousands of regressions run over past data, some might show good results simply due to cherry-picking or statistical luck, and that variables which once predicted excess returns might no longer do so.
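The lagged-return regression described above is easy to reproduce. The sketch below runs it on synthetic i.i.d. annual returns (I do not have the actual 1927-2008 series at hand, so the numbers are illustrative, not Cochrane’s): with no true momentum in the data, both the predictability beta and the R^2 come out near zero, just as in the real series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual excess returns for 82 "years" (1927-2008).
# Drawn i.i.d. with a mean and volatility roughly like U.S. equities;
# purely illustrative, not real data.
returns = rng.normal(loc=0.08, scale=0.20, size=82)

# Regress r_t on r_{t-1}:  r_t = a + b * r_{t-1} + e_t
y = returns[1:]   # current-year return
x = returns[:-1]  # last year's return (the series "copied and lagged")

b, a = np.polyfit(x, y, 1)         # b = predictability beta, a = intercept
r2 = np.corrcoef(x, y)[0, 1] ** 2  # share of variance forecastable one year ahead

print(f"beta = {b:.3f}, R^2 = {r2:.3f}")
```

Because the synthetic returns carry no momentum by construction, any nonzero beta here is pure sampling noise, which is exactly the point: the historical estimate of 0.04 is statistically indistinguishable from that.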
Conversely, the T-bill market is highly predictable. This isn’t surprising. The Federal Reserve is likely to use the interest rate of the previous year as a base before it attempts any adjustments. As a result the lagged ‘momentum’ beta for T-bills is 0.91. This does not violate market efficiency: even if you know T-bill returns will be high the following year, you must still borrow at that same high rate. Whereas if you know stock returns will be higher than the T-bill rate in the following year, you can borrow at the T-bill rate and invest in the market with full clairvoyance that the spread will be profitable.
While excess equity returns cannot forecast themselves with any real consistency, other variables have shown an ability to forecast future expected returns. For example, dividend yields vary over time. A dividend yield is calculated by taking the current annual dividend over the stock price. A high dividend yield means the current dividend is high relative to the stock price; a low dividend yield means it is low relative to the stock price. If a stock has a low dividend yield relative to its long-run average we might guess that future dividends will be higher than past dividends. The reason is that the stock price includes all future dividend increases (whereas the current dividend is simply what was last paid). An example might be a firm that recently announced future dividend payments will be double the current payments due to unexpected success. While this does not affect the past dividend payment, it will increase the price of the stock. In this situation the dividend yield will be lower than its long-term average, and we should expect future dividends to be higher, bringing it back to its average. Conversely, if the dividend yield is high we might expect future dividends to be lower.
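A tiny numerical sketch of the dividend-yield logic above (all figures hypothetical): an announcement about future dividends raises the price today, pushing the yield computed from the past dividend below its long-run average.

```python
# All numbers are hypothetical, chosen only to illustrate the mechanism.
last_dividend = 2.0   # most recent annual dividend per share
price_before = 50.0   # price before any announcement

yield_before = last_dividend / price_before   # 4% yield

# The firm announces future dividends will double. The price rises today,
# because it capitalizes all future dividends, but the *past* dividend
# used in the yield calculation is unchanged.
price_after = 80.0    # hypothetical post-announcement price

yield_after = last_dividend / price_after     # yield falls to 2.5%

# Once the doubled dividend (4.0 per share) is actually paid, the yield
# moves back up, which is the "reversion to average" in the text.
future_yield = 4.0 / price_after              # 5%

print(yield_before, yield_after, future_yield)
```

The low yield here did not predict low dividends; it predicted that the dividend would rise to catch up with the price, which is the direction of the argument in the paragraph above.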
The dividend yield has the ability to forecast future returns and to forecast itself. The statistical significance is not high, but it is material. This fits with our theoretical economic understanding of the market. We cannot be clairvoyant, but we can use variables to increase our understanding of future returns. In this case the dividend yield works as a great starting point; Cochrane uses it as an initial term due to its simplicity and clear statistical evidence. However, there are thousands of papers on thousands of variables, all attempting to predict the future. A casino does not need to win every time to make money, only 51% of the time. I will not include the quantitative data in this paper; I suggest reading Cochrane’s notes on predictability for a full understanding. It is useful to begin with the dividend yield.
It is now important to reconcile the ability to forecast future expected returns (discount rates) with market efficiency. Predictability at first blush seems terribly incompatible with the idea of market efficiency, yet it ends up working perfectly fine with it. Firstly, market efficiency does not require constant returns; variation is allowed. We could theorize that this variation is simply a function of changing discount rates, or risk. While the dividend yield holds predictive power, it is possible that when the dividend yield rises, risk rises as well, which would explain the higher expected return. If this were the case markets would simply be continuing to be rational. Using dividend yield forecasting would allow us to predict when risk is higher, leading to the rational expectation that returns ought to be higher in periods of higher risk.
While this is congruent with efficient markets, it is a different perspective from what was originally thought in the 1970s. The original theory was that returns are constant, meaning predictability is impossible. If returns are constant and equity returns are a purely random walk, there is by definition no good or bad time to invest. As a result any characteristic or ratio (such as the dividend price ratio) meant nothing about whether an investor ought to buy or not. After all, if returns are constant, valuation ratios should simply reflect efficient beliefs about future cash flows. Since Cochrane phrased his point so succinctly and clearly, I will include an excerpt.
“This is a huge change in viewpoint from the classic efficient markets/constant returns view, circa 1980. We used to think that expected returns are constant; stocks are a random walk; there is no “good time” to invest or “bad time”. Now, of course, prices move around. Isn’t a low [P/D] a good “buying opportunity?” No, we would have said, low [P/D] happens when people expect declines in dividend growth. Variation in P/D [price to dividend] occurs entirely because of cashflow news. What we see in these results is exactly the opposite. Now we think that market P/D variance corresponds 100% to expected return news, and none at all to cashflow news. (Prices decline when current dividends decline of course.) (Things get even stronger when we add more variables; technically this result refers to forecasts using only [the dividend yield].) In this sense, our view of the world has changed from 100% / 0 to 0 / 100%.” -John Cochrane, “Notes on Predictability.”
The prime point of this passage is that valuation ratios act differently under a microscope. The original market theory was wrong: it stated that returns were constant and valuation ratios were just the result of rational investors considering future business conditions. As we see, returns are not constant, and valuation ratios can be used to predict future returns. In addition, these ratios tend to be related far more to market discount rates than to firm-specific features. P/D variance is tied to expected return news, not actual future dividend news. If the P/D ratio is high or low, it does not mean future cash flows or dividends will change; instead we can expect the price to change. Different valuations can help us anticipate the return environment we are in, and this is what it means for returns to be predictable.
Stepping aside from ratios, it is also possible to understand how finding different risk factors, and isolating them in a portfolio, can increase wealth for all investors. For example, a hedge fund might increase its exposure to emerging market currency risk. As we know, in markets this risk will be compensated with a corresponding return. If it turns out that this risk has a low correlation with traditional asset classes, such as US stocks, taking purposeful exposure might be a novel and useful strategy. It is possible that while emerging market currency risk does exist, and those who hold this risk are compensated for it, there is no easy way to gain pure exposure to it. For example, another hedge fund might be interested in investing in emerging market firms with strong fundamentals as value investors. As a result they will be gaining marginal exposure to currency risk that they do not want. This second hedge fund, which does not want the risk, might engage in a currency swap with the first hedge fund, which does want it. Despite the complex financial tools involved, the end economic reality is similar to a farmer selling livestock manure to a fertilizer firm: the firm that does not want currency risk uses financial tools to unload it on the firm that does. And savvy investors may now invest in the currency hedge fund to gain returns with a low correlation to their more traditional asset classes (which carry more traditional risk, like the US business cycle).
The importance of this paragraph is that the world of active investing and hedge funds does not fall apart under a reasonably efficient market. It is viable for hedge funds to actively trade on what we typically consider ‘beta’ factors. And while some hedge funds do make absurd amounts of money per year (and these are few and far between), this could be rationalized by the idea that when spending hundreds of millions on infrastructure, programmers, physicists, and economists, it is possible to gain an edge by having meticulous exposure to exactly the right risk factors at exactly the right moments. And while this might allow an edge, it also brings these funds teetering on the edge of a cliff. It is not unknown for the most brilliantly guided hedge funds to blow up, as Long-Term Capital Management displayed to the world of financial markets.
I have an approximately 20,000-word ‘intro to financial theory for social scientists’ paper in progress. It is seeming like it will never be done, so I decided to go ahead and start posting it in parts even if it is a little rough. It’s not as though I’m being graded. Citations will be available in the final version when I properly format my ‘book’ and turn it into a PDF; otherwise they are available upon request.
The Efficient Market Hypothesis:
It will now be a useful exercise to consider how the previous intro on market laws might change the way a clever reader (yes, you!) views a stock market, if you were to accept everything so far as absolute truth. We are now aware that a discount factor that combines many factors, such as various types of risk and human impatience, is used to find the current value of a firm’s stock price. We now also know that assets with identical or extremely similar discount factors offer identical or extremely similar payoffs (Law of One Price), and that a mispricing in the market cannot exist (No Arbitrage). Everything is properly priced, all relevant information such as country-specific risk is included in the discount factor, and there are no mispriced assets within or across markets. This suggests that markets are efficient. In other words, we do not expect to receive a higher payoff from our investments than would be suggested by the discount factor, and we do not expect to receive a higher payoff thanks to our clever strategizing. If we could consistently pick stocks that offered higher returns than their discount factors suggested, we would be violating market efficiency. This leads us to the efficient market hypothesis, first presented by Eugene Fama in a seminal 1970 paper. It is mostly right, but often wrong. I will explain its strengths, weaknesses, and applications in depth. A clever student of finance should recognize that various economic efficiency laws have known flaws, yet still respect their importance.
There are three forms in total: weak, semi-strong, and strong. This was the most debated and important spectrum in financial economics for many decades, and the argument still exists, although it has quieted down. The strong form suggests it is not possible to consistently and truly beat the market, as the market is always in equilibrium: if everything is properly priced there is no way to earn excess return versus the market (without taking on more risk). The weak form does not require the market to be in constant equilibrium. In addition, some forms of fundamental analysis can provide excess return; fundamental analysis here generally means analysis that focuses on the microeconomics of a firm, rather than just examining pure market data to look for pricing errors. And while it is possible to make money from inefficiency in the short run, it is not possible to consistently make money in the long run. This is because inefficiencies will cease to exist, and overall excess return in the market must equal zero (for every winner there is a loser). The semi-strong form is a reasonable mix of the two.
Lastly, there are a number of important aspects that are true for all three: future prices cannot be predicted from past prices to profit in the long run; using technical analysis (or computer algorithms) cannot generate profit in the long run; price movements in the market follow a random walk (they move randomly), since the movements are based entirely on new, non-forecastable information; there are no predictable patterns to asset prices; and investors have rational expectations, meaning that even if some are wrong, on average the population is correct. I would like to reiterate that stock movements only occur based on new information that isn’t already contained in past prices. For example, if all investors are 100% certain Apple will have great quarterly earnings in ten days, they will buy Apple today. This means that when the quarterly earnings come out, the stock will only move if the earnings are above or below the current expectation. So if the quarterly earnings do end up being great, the stock will not move, because all investors already anticipated this event and bought Apple stock (thus increasing its share price before the official release date). Now in reality investors do not have 100% confidence, and often have different guesses. However, it is for this reason that an investor is more interested in the surprise of the actual release measured against the estimated release, rather than the absolute earnings.
The previous paragraph is the most important of this entire paper (re-read it). Unsurprisingly, if accepted fully, it renders every attempt at teaching an investor how to make easy money false. It suggests that all investment firms are not actually beating the market. It also suggests that the information your broker shares with you on ‘hot new investment products’ is useless. A famous hedge fund guru, Seth Klarman, warns against these investment advertisements in his famous book on investing. He believed entirely in fundamental analysis of firms and markets, and warns investors not to listen to all the products being sold by Wall Street. Spotting the late-night infomercial scams is not sufficient; most reasonably intelligent people know those are false. It is important to note that your broker has one goal: making money off of you. For example, even if they don’t believe in technical analysis themselves, they provide all the tools, because technical analysis day-traders spend lots of money on commissions. Just because your broker or wealth manager supports a type of trading does not mean it is sound. There was also an extremely influential book released in 1973 (shortly after Eugene Fama’s market efficiency paper) entitled A Random Walk Down Wall Street by Burton Malkiel, a Princeton economist. This was the first mainstream push to warn all investors to only buy a market index and never attempt to beat the market or buy products that attempt to beat the market.
Beating the Market:
Efficiency and You
It is important to recognize that a stock market is simply a reflection of underlying firms and entities. As a result, relative to the market itself, active trading is a zero-sum game. If the entire S&P 500 returns 5% in a year, the average return for all holders of the S&P 500 must be 5% by definition. Now imagine one investor experienced a return of 6% by selling the S&P 500 and buying it back a week later after it had decreased by 1%. This gain must come at the loss of another investor who happened to buy the S&P 500 at exactly the wrong time. This same thought experiment can be extended to the entire worldwide stock market: for every winner there must be a loser. Now consider that trading infrastructure is expensive; hiring a team to attempt to beat the S&P 500 costs money. As a brief example, imagine a market with only two investors: every time investor A wins money, investor B must lose money. In addition, both investors pay money for computers and a team of analysts. As a result, attempting to beat the market simply costs money and is overall a losing game to play.
However, there is a contradiction inherent to belief in zero-sum market efficiency: markets are man-made. A market can only be efficient because investors make it efficient. Every aspect of a market is the function of a set of legal rules and regulations, followed by investors entering the market and investing by the rules. Imagine if no person ‘played the game’ and attempted to beat the market; instead they recognized it is a zero-sum game that costs money, and decided just to hold the market through passive indexes. The result would be that the market would never react to incorporate new information. Reflect on the Arab Spring uprisings: the moment the social movement began, the price of oil increased to incorporate the new risk to oil supplies. Some shrewd traders were able to capitalize on the events. If there were no active traders, there would be no way for the market to incorporate this new information.
Eugene Fama and Kenneth French found that there was a pattern in the prices of the U.S. equity and U.S. bond markets. They realized that if the dividend ratio of the U.S. equity market increased, the U.S. bond market inched up over the following two weeks, and they found that this trend was consistent. Imagine that they then told colleagues at an investment firm, who began paying close attention to this relationship and invested on it whenever it appeared. Slowly other investors became aware of it as well. They even began to invest in the bond market based on expectations of the dividend ratio before it came out: for example, if they anticipated there was a 70% chance the dividend ratio would increase by 30%, they had an equation to determine how many shares of the US bond market they ought to buy. Eventually, due to all these relationships becoming clear, the trend disappeared. As a result you can see that trends do not persist, because the moment they exist investors will trade on them, thus eliminating the trend. It is important to note that the observations of market efficiency are not similar to physical laws; rather, they are observations of how the market works considering that clever investors will execute trades on all opportunities. While this theory has long been accepted, R. David McLean and Jeff Pontiff, a professor of finance at Boston College, recently released a working paper called Does Academic Research Destroy Stock Return Predictability?, in which they found empirical evidence of pricing anomalies disappearing after discovery. As a result, while many studies may find anomalies or trading strategies, once they are observed they should theoretically cease to exist. However, I am not writing this to teach anyone a clever trading strategy. Rather, I would like to teach you the theory behind a clever trading strategy, and how to tell if a strategy is clever or stupid.
I will begin by explaining some trading strategies that work, but that you could never hope to accomplish. I don’t say this as a joke; I’m entirely serious. There are firms that are primarily driven by pure quantitative programming, have teams of professors and PhDs, and have infrastructure measured in the tens of millions. In some cases they have even moved closer to the exchanges’ official servers and hired physicists specializing in general relativity to help them optimize their systems and reduce their trade delay. This is not sufficient to make money; however, it is necessary.
These algorithmic, quantitative, and high-frequency firms tend to trade to uphold ‘No Arbitrage.’ Some firms specialize in ‘statistical arbitrage’ and search for asset mispricings across markets and asset classes, such as the same stock trading at slightly different prices on two different markets. There is also ‘event arbitrage,’ where a firm takes advantage of its near-instantaneous ability to place a trade, thanks to its infrastructure, to get ‘first dibs’ on, for example, a positive jobs report. However, there are many other firms that operate on the same general principle but do not fall into an explicit category or have proprietary information. Perhaps a firm has found a trend by analyzing data, or perhaps it enforces a documented trend. Often the most successful firms trade on an esoteric asset. For example, some hedge funds focus only on trading volatility; their goal is to make sure derivative contracts on the S&P 500 are all at the proper price. An investor at home would not notice a mispricing for hours after it happened, and even then would be far too slow and costly to implement the trade. A high-infrastructure hedge fund, on the other hand, has nearly zero transaction costs and is able to capitalize on the trade seconds after it occurs, capturing just cents per trade (cents per trade multiplied by thousands of trades adds up).
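As a minimal sketch of the statistical-arbitrage idea above (hypothetical numbers only): the same stock quoted on two markets should trade at the same price, and an automated desk acts only when the gap exceeds the round-trip trading cost. The function name and cost threshold here are my own illustrative assumptions, not any firm’s actual logic.

```python
# Sketch of the cross-market check a statistical-arbitrage desk automates.
# Prices and the cost threshold are hypothetical.

def cross_listing_signal(price_a: float, price_b: float,
                         cost_per_share: float = 0.01):
    """Return (action_on_a, action_on_b), or None if the gap won't cover costs."""
    gap = price_a - price_b
    if abs(gap) <= 2 * cost_per_share:   # must pay trading costs on both legs
        return None
    if gap > 0:                          # A is rich: sell A, buy B
        return ("sell", "buy")
    return ("buy", "sell")               # A is cheap: buy A, sell B

print(cross_listing_signal(50.03, 50.00))   # gap covers costs: act
print(cross_listing_signal(50.01, 50.00))   # gap eaten by costs: stand down
```

The second call is the important one: most observed gaps are smaller than the cost of closing them, which is why only firms with near-zero transaction costs can profit from cents-per-trade opportunities.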
These strategies keep markets efficient and often work well; however, the firms that are successful are few and far between. In addition, there is not always room for new entrants. Often in these areas, if a firm is too large it experiences diseconomies of scale: a fund that is too large loses agility and the ability to enter and exit positions gracefully without impacting the market. While these areas do exist, and it is important to observe and understand how they work, trying to replicate their style in any other environment is not reasonable.
There are many other investment firms that do not self-classify as purely quantitatively driven. These firms often tend to trade based on some type of fundamental analysis. This is a fancy way of saying they are interested in the microeconomic foundations of the sectors, firms, and places they invest in. These could range from small-capitalization US tech firms to exclusively trading emerging market debt. They pay close attention to and meticulously research their area of focus. As a result, if an emerging market has a currency crisis or a new trend appears in US tech firms, they will either buy or sell assets to bring the market to efficiency.
It is important to note that these firms and investors that do contribute to market efficiency build their trading models on theory that interacts with reality. Some firms trade on algorithms that keep the price of stocks equal across different markets; this allows for more liquidity and no information asymmetry. Other firms might trade on international relations theory: perhaps their thought is that a large oil-exporting country has more political risk than others are aware of. The goal is to absorb data of all forms and turn it into specific information through analysis. The opposite of trading this way is just buying and selling stocks for no good reason, which places the investor in the category of a ‘noise trader’: despite trading, the trades themselves are entirely meaningless with respect to market efficiency. An individual example would be someone buying Apple just because they think it sounds cool. An institutional example would be a firm selling Apple so that it can rebalance its mutual fund to an index. In both cases Apple wasn’t bought or sold for any tangible reason that reflects the firm’s attempt at generating money. It was simply ‘noise,’ and either had no reason or had a reason specific to the investor. As a result, while this noise will marginally move the stock price up or down, since it is not reflective of any true change in the stock, it should be expected that another trader will buy (or sell) Apple back to its fair value.
If not for active investors attempting to beat the market, commodities and assets would not reflect new information in the world. This presents a paradox: the winning move might seem to be not attempting to beat the market, because it is a zero-sum game (for every winner there is a loser, plus expenses). However, if everyone followed this reasoning the market would not be efficient and assets would not reflect new information. As a result active trading should exist: despite appearing to be a zero-sum game, the expenses the firms incur allow them to engage in information and price discovery. At this point macroeconomic theory tends to give way to empirical observation. The academic literature tends to find that individual and institutional investors are net losers, meaning that after expenses they would have been better off had they held a non-actively traded portfolio.
Individuals tend to lack the necessary resources to make educated active investments. Even if an individual investor were a professor of finance, without access to top-level research and supporting teams it is too difficult to actively beat the market. On the other hand, mutual funds often suffer because they are too large; this is called diseconomies of scale. With one billion dollars it is possible to enter and exit positions. With ten billion dollars many opportunities are too small. In addition, mutual funds face far more legal regulation, which can be crippling when trying to act quickly, and they tend to hold small amounts of money from a very large number of investors, which requires lots of legal work and paperwork. As a result, while active mutual funds do tend to beat the market on average before expenses, once expenses are subtracted they lose on average. In addition, if a winning mutual fund is spotted, new cash will often flow in until the fund becomes too bloated. There are some arguments that certain mutual fund firms can outperform their peers in the long run; however, the empirical evidence for this argument is weak.
Conversely, hedge funds and other similar investment vehicles tend to hold money only from small pools of high net-worth individuals or institutions, such as endowments. In addition, they tend to hold lower amounts of capital and often cap their funds if they fear diseconomies of scale. These firms also tend to employ the best talent: they can offer better compensation than mutual funds, and their employees are paid based on performance, whereas mutual funds charge a flat fee. As a result many of these firms have the best investors in the game, the top and most expensive infrastructure, and the ability to act quickly with little to no oversight. These are a few of the key points that explain why hedge funds, on average, can perform very well. It also explains why on occasion they can go bankrupt fast. Many hedge funds used massive amounts of debt for leverage, one of many tools mutual funds are forbidden from using, and as a result when their investments went down even one or two percent in 2008, some immediately went bankrupt and their investors lost their money.
I have stressed throughout this that these are broad generalizations that do not explain exceptions. However, they are largely accepted in academic analysis of different market players. Part of the reason it is so important to conduct large empirical studies is the randomness of the market, as well as the statistical difficulty of measuring one person. It is not possible to examine an active and a passive individual investor, an actively and a passively traded mutual fund, a passive hedge fund that focuses only on portfolio construction, and a hedge fund that trades multiple times daily, over two years, and hope to explain who did well and who did not. There is not enough information, too short a time frame, and too small a sample size. As an example, I have witnessed too many students who are new to finance experience higher returns than the S&P 500 for a year and think it suggests profound skill on their part. There is always far too much I do not know about such a student’s investments (and that he or she often does not know either), and even if I knew how much risk was taken on, I still could not tell whether the student won due to luck or skill in such a short time frame.
Even if an active mutual fund has beaten the market after expenses for a decade, given the number of active mutual funds that exist it is possible that fund was just one of the lucky few. If a portfolio manager for a mutual fund underperforms or outperforms for three years straight, he or she will likely be fired or promoted. But the truth is that there is not enough information to justify either of those actions. As a result researchers tend to focus on larger and more general groups, such as lumping all actively managed mutual funds into one group and comparing them to all passive mutual funds over five decades. One interesting paper on mutual fund performance did not examine funds, but instead examined portfolio manager returns through variables such as SAT score and education. This paper did find a positive correlation between education and excess returns; however, smarter students are often recruited by the top firms, so it is difficult to be certain this has any real merit. Overall, it is still overshadowed by the large body of research that finds active mutual fund performance to be drab.
The reason the active and passive dichotomy exists is that it is an extremely useful classification. Those who research finance want to be able to examine the market as a whole throughout time. For this reason indices are made for nearly all asset classes in all regions. This then allows researchers to compare those markets as a whole, and to compare active investors in them to the market itself. This dichotomy also allows firms to explain how their funds act. For example, there might be two complex retirement mutual funds, each one becoming less risky every year until retirement. In the one that is active, the portfolio manager is attempting to beat the market by picking and choosing. The passive portfolio will simply hold the market. However, each one is likely holding ten to twenty different markets across the world based on a portfolio optimization model.
If I have discouraged anyone from actively investing themselves or buying actively traded mutual funds, I will have accomplished my goal. To help achieve long-term financial stability, the best strategy is to invest in a mix of passive assets with a risk exposure that is most prudent for your goals. Many investors enjoy actively investing personal money that isn’t needed for retirement or stability in the hopes of making lots of money. Some more educated investors might also enjoy long-term gains from strategies such as value investing (made famous by Warren Buffett). While understanding the difference between active and passive is very useful for investing in a 401k or reading financial literature, when it comes to personal investments the need for simplification disappears. The truth is there are many different ‘active’ trading strategies: buying and selling gold daily is different from choosing twenty firms based on research and trading every six years. If you do choose to invest actively, be sure to recognize the benefit of low-fee, passive ETFs and mutual funds (often offered in a retirement plan) that are loosely optimized for your age, risk level, and desired goals. Then make active trades in addition to your already secure financial portfolio. It is possible to mix the two together and tweak a personal portfolio to reflect personal bets and investments, and many private wealth management firms aimed at high net-worth individuals offer these services. However, it is generally a poor idea without very strong knowledge and resources.
Understanding Financial Products
It is difficult for an investor who has not formally studied financial economics to differentiate between good and bad investment products and strategies. This could be a reason why multi-billion dollar firms continue to have their retirement funds managed by overpriced managers. Being able to wade through all the different products and types of investment vehicles is very difficult. For example, imagine you log on to your broker and see two new ‘suggestions’ of investment products you may buy: the first is an ETF (an exchange-traded fund, very similar to a mutual fund) that tracks Chile, and the second is a mutual fund targeted towards elderly investors who, the advertisement claims, are “not receiving high yields from their fixed income products.” Even if you are not interested in these products, it is still useful to have the skills to examine each one and determine whether it is absolute trash or might have some merit for the right investor. I purposefully chose two very different products in this example.
A great first place to start is to run each product’s goal against a market efficiency test. The first ETF claims to follow a Chile index passively. This means it is not attempting to beat the market, just replicate the market. An index means that someone created a simple list of rules to categorize stocks, for example, list them in order of size. This type of ETF will proceed to hold these stocks based on the rules defined at the beginning. While there may be some small variations since the ETF needs to rebalance on occasion, it will generally follow the index. This ETF is executing a transparent plan.
To further analyze this ETF: for a typical investor, I would expect high exposure to Chile to be unnecessary. However, the ETF itself seems reasonable. The goal is to allow you to hold a general piece of Chile’s overall economy. It is a simple but useful product to help an investor gain exposure to Chile based on a clear and available index. In addition, since it is passively traded, the fees are likely to be lower: instead of hiring a team of highly compensated traders to attempt to beat a market, the fund can be run by a couple of people simply following an index.
Now, to properly analyze the mutual fund we will need information. Let’s suppose the mutual fund claims to achieve excess returns against its index by holding an 80/20 percent split of an investment-grade bond index and a dividend stock index. The product continues to explain that its goal is to actively trade the stocks to achieve greater returns. This means that instead of strictly following the rules of the index, the portfolio manager will actively trade the stocks in the index while attempting to keep risk factors at the same level as each respective index. Another way to consider this is that the active traders on this mutual fund team are attempting to identify stocks that have been mispriced. Let’s think of this in discount factor terms: if a stock is a claim on all future profits of a firm, its price is all those profits discounted to today based on their perceived risk as well as factors such as the interest rate and the time value of money. Now imagine a stock whose discounted future profits put its current price at $100. If we buy that stock we expect a gain equal to the discount rate, since this is what investors demand for investing money in a risky stock instead of buying cupcakes today or investing in another, less risky stock. The goal of a talented team of investors on this mutual fund would be to estimate what they believe the true value of the stock is by coming up with their own discount rate. If the team finds that the discount rate ought to be 20% lower than the market’s, the stock should be more expensive: a lower relative discount rate means the stock is not as risky as the market expects, and if the market expects a stock to be risky it will demand a lower price as compensation for that risk.
As a result this team would try to buy stocks that are cheap due to perceived risk–despite not actually being as risky as expected. If this seems incredibly complex compared to the passive ETF of Chile–it is because it is more complex.
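As a rough numerical sketch of this logic (all figures are invented for the example, and `present_value` is just a helper defined here, not any real library function), note how lowering the discount rate by 20% raises the fair price above the market’s $100:

```python
# Hypothetical illustration: a lower discount rate implies a higher fair price.
# The payoff and rates are invented numbers, not data from any real security.

def present_value(payoff, rate, years):
    """Discount a single future payoff back to today."""
    return payoff / (1 + rate) ** years

payoff = 110.0                 # expected payoff in one year
market_rate = 0.10             # discount rate implied by the market price
team_rate = market_rate * 0.8  # the team's rate, 20% lower than the market's

market_price = present_value(payoff, market_rate, 1)  # 100.0
team_price = present_value(payoff, team_rate, 1)      # ~101.85

# The team's valuation exceeds the market price, so the team would buy.
print(round(market_price, 2), round(team_price, 2))
```

The gap between the two valuations is exactly the team’s perceived mispricing: if they are right about the risk, the price should drift up toward their figure.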
The mutual fund does not fare as well under evaluation. It is being sold as a solution to elderly investors who have had lackluster returns on fixed income products. We can initially take a reasonable guess that the product will achieve higher returns: since stocks generally have higher returns than fixed income, this mutual fund is weighted more towards risk. The portfolio manager is increasing the risk on this product to offer higher returns versus rival funds aimed at elderly investors. The fund is also being actively traded, which often increases risk as well (despite claims to the contrary). Actively traded mutual funds, on average, increase fees by more than they increase returns; it is not cheap to hire a team of highly compensated portfolio managers to attempt to beat the market. The end result is that this mutual fund will likely have higher returns, but at the cost of higher fees and more pronounced risk. It is also important to note that some actively managed mutual funds that were sold as ‘safe’ held extremely risky positions prior to the financial crisis, ultimately falling 30-40%, whereas similar passively managed mutual funds that followed the same index did not suffer as badly. While this fund might increase returns, overall I would be hesitant to suggest an elderly investor buy products that attempt to beat the market. If he or she thought it prudent to increase overall portfolio risk, with guidance from an educated adviser, it would still make more sense to invest in a passive portfolio. While nearly all evidence suggests actively managed mutual funds fail their objectives, it would be a particularly poor idea for retired investors to hold these products.
My explanation of market efficiency so far is sufficient as an introductory guide. For readers acquainted with financial journalism and investment blogs I would like to add a few more substantial examples and explanations. This will help discern intelligent from unintelligent writers, as well as teach some of the empirical tests that have allowed market efficiency theories to become so widely accepted.
To begin, the formal definition of efficiency is “information efficiency.” As I have mentioned before, the idea is that any new information will be incorporated into the stock price today. In addition, if any ability to predict good or bad days existed, all investors would jump on the opportunity, and the strength of the prediction would cease to exist. The first generation of empirical tests found that this was indeed the case. An ordinary least squares (OLS) regression was run on annual market data from 1927-2008, testing the ability of past returns to predict returns in the next period. While the mathematical proofs behind OLS are quite complex, the equation is intuitive: the test measured whether returns in the following year could be predicted by multiplying past returns by a ‘beta’ coefficient. The answer was essentially that past information could not predict future returns (the beta was 0.04 and the R-squared was 0.002). By contrast, T-bill returns are very predictable, since interest rates are predictable.
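A minimal simulation of this style of test can make the result concrete. Here artificial i.i.d. returns stand in for an efficient market (the sample size, mean, and volatility are invented for illustration); regressing next-period returns on past returns yields the same pattern of a near-zero beta and R-squared:

```python
# Sketch of the first-generation efficiency test: regress next-period returns
# on this period's returns. On simulated i.i.d. returns, a proxy for an
# efficient market, the slope and R^2 should both be near zero.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.08, 0.20, size=5000)  # simulated annual returns

x = returns[:-1]  # this period's return
y = returns[1:]   # next period's return

beta, intercept = np.polyfit(x, y, 1)        # OLS slope and intercept
r_squared = np.corrcoef(x, y)[0, 1] ** 2

print(f"beta = {beta:.4f}, R^2 = {r_squared:.4f}")  # both near zero
```

The regression on real 1927-2008 data cited above produced essentially the same answer: a beta statistically indistinguishable from zero.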
There is a stark difference between simply reading this and searching for logical fallacies in investments articles or arguments. John Cochrane created a list of five examples for University of Chicago MBA students enrolled in his Advanced Investments course that sound like they could have come from one of many articles. The italics are my addition.
1. “The market declined temporarily because of profit-taking. It will bounce back next week.” (If the market were going to increase next week, investors would attempt to buy earlier to enjoy the gain; this would lead the market to increase instantaneously now)
2. “The stock price rises slowly as new information diffuses through the market.” (Information does not slowly diffuse through a market. There are hundreds of millions of dollars of hedge fund and active investment infrastructure pursuing rigorous research in search of good deals today. That is not to say information does not diffuse through a market, but to suggest it will diffuse with a clear upwards or downwards trend is incorrect)
3. “The internet is the wave of the future. You should put your money in internet stocks.” (While the internet may be the wave of the future, internet firms will still only compensate their shareholders for the risk they are holding in aggregate. In addition, if this were true, investors would buy internet stocks until they were once again in equilibrium with other investment opportunities. Internet stocks did shoot up at first, but the initial investors took a large risk. Imagine an internet firm is offering a dividend of $10 a year per share. If the firm is part of the future and will offer great returns at low risk, investors would bid the price of the share up so that the $10 dividend only represents a more modest risk-adjusted gain in the traditional 4-10% realm.)
4. “Buy stocks of strong companies, with good earnings and good earnings growth. They will be more profitable and give better returns to stockholders.” (Debunking this statement may draw some flak from value investors. However, value investors search for underpriced firms. In this case the statement is suggesting that strong companies with publicly known good earnings and earnings growth will always give better stockholder returns. Once again, if this were true, speculators would buy the stock until the price increased, bringing it back into equilibrium.)
5. “The demand curve slopes down;” “Big trades have a lot of price impact.” “Stock prices fell today under a lot of temporary selling pressure,” “Some stocks fell too far in the crash because mutual funds and hedge funds had to unload them to meet redemptions” “Small losers fall in December as dentists harvest tax losses.” (See if you can do these ones yourselves)
Each phenomenon described here could not exist in the presence of competitive speculators. Some of these sentences, though, in the right context, seem very reasonable. This is precisely why properly understanding market efficiency is such a trying task. It is surprising how often a common and flippant short-term market theory violates efficiency. However, it seems that most industry professionals are more caught up in progressing their careers and making money than in attempting to properly follow basic market theory. This isn’t surprising, nor is it wrong or different from nearly any other industry created by humans. The end goal for most participants in any field is a successful career. In defense of industry professionals, they must often offer products to fulfill market demand regardless of efficiency. It appears that customers of financial services enjoy fun features like a simple set of ‘technical analysis’ buttons on their trading website, or exciting new ETFs that offer synthetic ‘hedge fund’ exposure. However, as an individual investor, or a student of finance who strives to truly understand the truths and falsehoods of the field, it is important to distinguish real research from attempts to sell financial products. As a counter-example, consider the following short theory, of the kind that might appear on a finance website, that is more rigorous:
6. “Buy stocks of strong companies in traditionally safe sectors (e.g. telecommunications and healthcare), with good earnings and good growth, for individuals who want consistent dividend payouts instead of high risk from large capital gains. They will be more likely to provide a stream of income that is less affected by turmoil from geopolitical events and European debt over the next two years.”
In this example, instead of suggesting strong companies will just ‘do better,’ I pointed to a theory of how these assets will perform and react to crises, and how this suits them to a particular investor. At its core, instead of making a broad claim that “one bucket of stocks will do better than another for vague reasons,” I am suggesting that an investor in search of lower-risk dividend payouts should prefer to own firms such as Comcast and Procter and Gamble as opposed to smaller pharmaceutical firms and emerging-market banking equities.
There is still much unknown about markets. The dividend-price (D/P) ratio is the most recent dividend divided by the current price (note the last paid dividend cannot change until the next one is paid, typically quarterly, so short-run movements in the ratio come from the price). Until recently, most academics thought that variation in the D/P ratio reflected changing expectations of future dividends: if prices rose relative to dividends (a lower D/P ratio), it was because investors expected future dividends to grow. However, empirical evidence and research over the past four decades strongly suggest otherwise: a higher D/P ratio predicts higher future expected returns, while forecasts of the dividends themselves barely move. In other words, price movements mostly reflect changing expected returns (discount rates) rather than changing dividend forecasts. This is one of many anomalies and valuation puzzles current financial economists study. With modern empirical tools it is possible to put these to the test and begin to evaluate true phenomena.
To conclude this section: Markets are very efficient and reflect nearly all available information. However, this only exists because market participants buy and sell assets to reflect all new information. Most information and price discovery is conducted by high-resource and high-powered investments firms, not individual investors. Despite this efficiency, some pricing anomalies still exist in the stock market. If an asset class has higher (or lower) prices than academics suspect due to market efficiency, it might be due to the following reasons: A key risk factor has not been identified, there is an institutional factor that has not been considered, some investors are lacking information, it could be a result of cherry-picking data and have no theoretical significance, or perhaps it was entirely random and will revert back to the expected price.
I have an approximately 20,000-word ‘intro to financial theory for social scientists’ paper in progress. It is seeming like it will never be done, though, so I decided to go ahead and start posting it in parts, even if it is a little rough. It’s not as though I’m being graded.
From the end of the Great Depression until the end of the 20th century, financial regulation was not a hugely popular topic for social scientists to study; there were more important issues. Even during the dismantlement of welfare projects under Reagan and Clinton, when partisan politics began its infamous rise, financial regulation was not high up on the list. For social scientists today it is relevant to public discourse, and those who work in the finance industry should be able to articulate why their job is useful and important. With the financial world gaining larger scope and power, understanding its foundation is now as important as introductory courses in micro- and macroeconomics. Even for political theorists or labor economists, current topics in finance are relevant. In the following posts I plan on teaching an introduction to financial markets. My main goal is to write palatable and intuitive descriptions that help simplify how financial markets work. I will focus on the theory and not on the math; the math is not necessary for a basic theoretical understanding. In addition, I will tie these topics in investments together with their counterparts in the financial industry, to provide a realistic and non-abstract lesson. My hope is that this will be useful to students of sociology, political science, and other non-economic social sciences, as well as to students of investments or economics who would like to learn financial theory in areas outside their expertise.
I will explain central theories and assumptions that will allow an applied social scientist to have a strong theoretical understanding of the benefits of financial markets and how they function. Asset pricing and equity premiums are core areas of research in investments. However, it seems that more academic and less (immediately) useful theory is often ignored by those in the investment industry. This isn’t surprising: a firm being paid to invest money for a client is being paid to make them money. Whether or not the firm puts forward important answers to long-term academic questions is not as important as generating returns in the next year. I will remain concise and educational, rather than speculative. My primary goal will be to teach from John Cochrane’s seminal book ‘Asset Pricing,’ as well as pulling information from other key literature.
Introduction to Assets
First and foremost, forget everything you have heard about stock picking and making easy money. While there are some particularly astute investors who might be able to identify the next ‘Apple,’ it is foolish to emulate them without understanding the basic theory. Once you understand asset pricing theory you will have a more refined and skeptical eye. For example, I often hear phrases such as “Apple is overvalued.” For Apple to be overvalued you should be able to express a reasonable theory of what price it should be, why you think it should be that price, and how you plan on making money off that difference. While investing is often said to have artistic properties, the art can only show its true grace on top of a strong foundation in empirical financial theory; the best gymnast in the world still needs to be in top physical condition to elegantly complete an artistic routine. A financial market is an aggregation of individual investors. As a result, the most robust theory in finance ought to be micro-founded, able to explain decisions at the level of an individual.
Every individual investor needs to make decisions on consumption, savings, and the assets held in his portfolio. At an investment equilibrium, the marginal utility loss of consuming less today, and instead investing in an asset, will equal the marginal utility gain of consuming more of the asset’s payoff in the future. That is to say, at a certain level an investor will be indifferent between additional consumption today (such as an iPod) and extra consumption in the future once the asset has appreciated (such as an iPad). If the price or payoff of an asset changes, the investor’s preferences should change as well. For example, if the future payoff drops by half, the investor will have less incentive to forgo current consumption for future consumption; it would be similar to the past scenario, only now the future iPad has less memory. This basic description explains how individual investors make decisions regarding current and future consumption based on the price of an asset and/or its expected payoff.
Now it is important to understand what drives the price of an asset, such as the price of a share of Apple. The fundamental theory is that an asset’s price ought to equal the expected discounted value of the asset’s payoff, using the investor’s marginal utility to discount the payoff. Put more simply, instead of imagining a single ‘asset’ selling for $100, imagine a series of future payoffs (I will be using the word payoff frequently; whether it is in the form of expected capital gains, dividends, or bond payments is not yet relevant). The current value of the asset is the discounted value of all the future expected payoffs. We then use the discount factor an investor holds to determine how much we should give up today for a certain amount of money in the future. In this case the act of buying one share of Apple is equivalent to buying all the future payoffs of Apple, which end up equaling the cost of one share of Apple.
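A toy sketch of this idea, with invented cash flows and an assumed 8% discount rate (neither is drawn from any real security), shows how a price falls out of a stream of payoffs:

```python
# Toy illustration: an asset's price as the sum of its discounted expected
# payoffs. The cash flows and the 8% discount rate are invented.

def price_from_payoffs(payoffs, rate):
    """Discount each future payoff back to today and sum them."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payoffs, start=1))

expected_payoffs = [5.0, 5.0, 5.0, 105.0]  # e.g. dividends plus a final sale
price = price_from_payoffs(expected_payoffs, 0.08)
print(round(price, 2))  # ~90.06
```

Nothing here is specific to stocks: the same sum prices a bond’s coupons or any other stream of expected payoffs, which is why the paragraph above can stay vague about which kind of payoff it means.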
To deconstruct the price of an asset to a very simplified state, imagine a single investor is all that exists and that there is no inflation or interest. Sitting across from him is a man who tells him, “The only financial asset in the world is a single bond. This bond will pay you exactly $10 one year from now.” The investor now wishes to calculate the maximum amount he would be willing to pay for the bond. If he pays $10 for the bond he will lose, because he will pay $10 now and then not have access to his money for a full year with no gain. He now considers what might happen if he pays $9. He knows that if he invests $9 into his personal business he will make a $1.50 profit and receive $10.50 back. However, he estimates that working for a year in his personal business would not be as fun as the leisurely activities he could enjoy if he just bought the bond, and prices this personal cost at $0.51, leaving his effective return from investing in his business at $9.99. As a result he decides he would rather pay $9 for this bond than invest in his business. In this simplified market this man has just determined the price he would be willing to pay for an asset. However, this is a micro-economic phenomenon. To examine how financial economics explains the price of an asset we must consider a discount factor.
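The arithmetic of that example can be checked in a few lines (all figures come from the example itself):

```python
# The bond-versus-business decision above, as arithmetic.
invest = 9.00
business_gross = invest + 1.50      # $10.50 back from the business
leisure_cost = 0.51                 # disutility of a year's work, in dollars
business_net = business_gross - leisure_cost  # effective $9.99
bond_payoff = 10.00

# The bond's $10 payoff beats the business's effective $9.99,
# so the investor buys the bond at $9.
assert bond_payoff > business_net
print(round(business_net, 2))  # 9.99
```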
In finance a discount factor is used to properly price an asset. The discount factor allows us to consider assets, investments, choices, prices, interest, inflation, time, and all other factors that would affect the price of an asset. If we take the previous example, only now the market has thousands of people, we would aggregate the price every individual would pay for the bond. The discount factor of the bond would end up being the return it offers considering all individuals’ personal micro-economic utility functions.
To start, the main discount factor used in asset pricing is based on investor consumption. Not surprisingly, it is called the consumption model. This model allows us to compare an investor’s future value of consumption to his present value of consumption. Perhaps the most obvious, but theoretically important, factor is the idea that investors want consumption to be generally smooth throughout their lives. We borrow when we are young and save as we grow older for retirement. On average we want to forgo consumption today to be sure we can continue to consume in the future. This is similar to the idea that most people would prefer to spend two days in a two-and-a-half-star hotel than one day in a five-star hotel and one day on the street. This is known as aversion to risk. We want to be sure that there is an extremely low probability our savings will run out or that our home will be foreclosed on. This aversion to risk that appears at the level of an individual investor has profound implications for greater financial markets and the price of nearly all assets.
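A minimal consumption-based pricing sketch can show why this matters for prices. Assuming log utility and invented consumption numbers (none of this is from the text's examples), the same expected payoff is worth less when it arrives mostly in good times, when extra consumption is less valuable:

```python
# Minimal consumption-based pricing sketch with log utility (u'(c) = 1/c).
# The price equals the expected payoff weighted by the discount factor
# m = beta * u'(c_future) / u'(c_today). All numbers are invented.

beta = 0.96        # impatience: future utility is worth slightly less
c_today = 100.0    # consumption today
c_good = 120.0     # consumption next year in a good state
c_bad = 80.0       # consumption next year in a bad state
prob_good = 0.5

def marginal_utility(c):
    return 1.0 / c  # log utility

def discount_factor(c_future):
    return beta * marginal_utility(c_future) / marginal_utility(c_today)

# A riskless asset pays $10 in both states. The risky asset pays more in the
# good state and less in the bad one, with the same expected payoff of $10.
riskless_price = (prob_good * discount_factor(c_good) * 10
                  + (1 - prob_good) * discount_factor(c_bad) * 10)
risky_price = (prob_good * discount_factor(c_good) * 15
               + (1 - prob_good) * discount_factor(c_bad) * 5)

print(round(riskless_price, 2), round(risky_price, 2))  # 10.0 versus 9.0
```

Both assets pay $10 on average, yet the risky one is priced lower, because it pays least exactly when consumption is scarce and each extra dollar matters most. This is the risk correction the rest of the section builds on.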
To properly conceptualize the idea of using a consumption model to price an asset on broad financial markets, consider the following: each person has their own consumption-based discount factor depending on the stage of their life, how much they have saved, how much they earn, when they expect to retire, whether they plan on having kids, and every other variable that will affect consumption. Imagine two investors: one is about to study in an expensive masters program in London the following year, and another is in a stable career at the age of 28. Each investor, in evaluating whether or not to buy an equity, will come to a different conclusion on the price they would be willing to pay. The first investor knows he will have to consume a lot very shortly, so he does not want to invest money for future consumption but instead borrow. The second investor knows he will start a family in ten years and send his kids to college in 28 years, so he wants to save money for future consumption. Whether they buy or sell depends on whether, using their own consumption-based discount factor, the price they value the equity at is below or above its market price.
A Primer on Efficiency: The Law of One Price and No Arbitrage
As I have explained so far with the consumption model, a discount factor is a random variable that generates the price of an asset from the asset’s payoffs. The discount factor earlier was used to explain intertemporal substitution (substitution of current consumption for future consumption across time) in the consumption model. However, a discount factor can hold any variable that affects the price of an asset, such as the risk of future payoffs failing to deliver due to bankruptcy or a global recession. Now that I have explained how a discount factor works and explained the primary consumption factor, it is important to understand how a discount factor can be extrapolated to other market theories. There are two conditions that must hold for discount factors to appropriately price assets: the Law of One Price and No Arbitrage. Knowing the individual discount factor for all assets for each investor is not possible; other than consumption and risk there are infinitely many factors. If a single investor decides to only buy stocks that begin with the letter B, that will impact the price of assets in the market and his or her personal discount factor. While this factor is not likely to be as important as aversion to risk, it will still impact prices. I give this example to illustrate the absurdity of knowing each individual investor’s preferences. While financial theorists attempt to identify the main factors, many are too small or niche to meaningfully prove their existence; the number of variables and individuals that would need to be accounted for is absurd.
The first theory is the law of one price: in an efficient market in equilibrium, all identical assets (or payoffs) must have only one price. This means that if two assets have identical expected payoffs, their prices must be identical. The reason the law of one price demands an equilibrium is that if one of the two identical assets were cheaper, we assume shrewd investors would immediately bid the price back up until the two were identical. It is useful to now abstract away from reality (temporarily). Instead of thinking of Microsoft stock and Treasury bonds, imagine a ‘payoff space.’ In this payoff space you have all the expected potential payoffs from every different stock, bond, future, option, or combination. The law of one price says that no matter how you create a certain expected ‘payoff,’ it will cost the same amount. Intuitively this makes sense: if I create a portfolio ‘A’ by combining different assets so that its payoffs are perfectly correlated with those of portfolio ‘B,’ the two will be the same price. There should never be an asset or portfolio on the market that is cheaper than another asset or portfolio offering the same payoff. To offer one last example, consider Apple and Amazon. I should not be able to buy one share of each, package them together as a portfolio called “Apple Amazon portfolio,” and then sell it for anything greater than the combined share price. Even though it is a new portfolio (and subsequently a new asset), it still holds the exact same payoffs as before.
Conversely, the existence of the law of one price allows us to derive the discount factor. To a certain degree this theoretical approach contains more substantial information. It is impossible to consider every single factor all individual investors might weigh, each of which could cause the price of a payoff to vary. Since we know the law of one price holds if the market is in equilibrium, we can instead examine payoffs. If all identical payoffs have identical prices, this implies a rational and linear pricing structure, and a linear pricing function implies that at least one discount factor exists. The reason is that if all identical payoffs have identical prices, there must be at least one factor that all investors consider when buying and selling assets; if there were no discount factor, we would have no reason to expect identical assets to have identical prices. It is likely that there are in fact multiple discount factors. To reiterate, this means that there might be discount factors for impatience, time, risk, and many other considerations.
To add to my previous example of market equilibrium: if my special ‘Amazon Apple’ portfolio were to trade at less than the combined value of Amazon and Apple (because someone just sold their holdings), another investor would instantly buy it and send it back to equilibrium. These profits are typically captured by high-frequency trading firms. A very simple program might be as follows: if the portfolio ‘Amazon Apple’ is trading at a value less than the price of a single share of Amazon plus a single share of Apple, instantly buy and resell at the net asset value (NAV).
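That simple program could be sketched as follows (the tickers and prices are hypothetical, and real implementations must of course also handle fees, latency, and execution risk):

```python
# Sketch of the NAV-arbitrage check described above: if a combined portfolio
# trades below the sum of its components' prices, buy it and sell at NAV.
# Ticker names and prices are hypothetical.

def nav_arbitrage(portfolio_price, component_prices):
    """Return the profit per unit, if any, from buying the
    portfolio below its net asset value; zero otherwise."""
    nav = sum(component_prices.values())
    if portfolio_price < nav:
        return nav - portfolio_price  # buy portfolio, sell the components
    return 0.0

components = {"AAPL": 500.0, "AMZN": 300.0}
profit = nav_arbitrage(795.0, components)  # portfolio trading $5 below NAV
print(profit)  # 5.0
```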
From this explanation of the existence of a discount factor and the law of one price, I have partially explained the no arbitrage condition. As expressed in the previous paragraph, a market in perfect equilibrium will have no arbitrage opportunities (if any do exist they will be corrected near instantaneously). However, temporarily witnessing a violation of the law of one price does not guarantee riskless profit. If an investor were to buy my ‘Apple Amazon’ portfolio at a discount and attempt to resell it, he might lose money in the process if both assets crashed in price. The formal definition of arbitrage requires absolutely no risk with the potential for reward: no arbitrage means that an investor, in a market in equilibrium, cannot receive for free an asset that would never have any associated cost and only potential gain. An example would be if Apple is trading at $500 on one stock exchange and $505 on another. An investor could buy one share for $500 while simultaneously selling the other share short for $505 (to sell a share ‘short’ means that you borrow a share and then sell it; think of it as taking an inverse position where you make money when the stock declines). This guarantees a profit of $5 with no risk, since even if the shares increase or decrease by $100, the investor is only betting that the identical stock will converge back to the same identical price. While price differences for the same company on different exchanges do exist, they usually amount to fractions of cents for fractions of a second (assuming no exchange has a different tax structure or other extra fees). No arbitrage demands that any positive payoff must trade at a positive price, and as a result the discount factor must always be positive (a negative discount factor would be identical to ‘giving away money’). This is obvious; however, we now have a theoretical proof. As a brief example, consider insurance.
On average, insurance has a negative expected value; after all, insurance firms must take in more money than they pay out. However, insurance products can still have positive discounted value, because they pay off greatly precisely in the disaster states where we most need money. Conversely, selling insurance is worthwhile for the insurance firm, since it pools and hedges those risks. By focusing on a discount factor to price our costs and payoffs, we create a more robust framework that considers all factors that might affect our valuation of an asset, rather than just focusing on absolute return.
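Returning to the cross-exchange Apple example two paragraphs above, the $5 profit can be checked mechanically: whatever common price the shares later converge to, the long and short legs net to exactly $5 (ignoring fees, as the text assumes):

```python
# The cross-exchange example as arithmetic: buy at $500 on one exchange,
# simultaneously short at $505 on another. Whatever the later common
# price, the combined profit is locked in at $5.

buy_price, short_price = 500.0, 505.0

for later_price in (400.0, 500.0, 600.0):
    long_pnl = later_price - buy_price     # gain on the share we bought
    short_pnl = short_price - later_price  # gain on the share we shorted
    assert long_pnl + short_pnl == 5.0     # independent of the later price

print("locked-in profit:", short_price - buy_price)  # 5.0
```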
So far we have related price to consumption and payoffs. It is important to note that academic views on assets begin with these models, which originate from the rational behavior of individuals; all advanced models (and most trading strategies) are built on top of these assumptions. A model for assets without a theoretical basis is questionable not only because it lacks the logical rigor expected from academia, but also because it is unreliable for understanding future actions. The greatest leap is not to create a beautiful mathematical formula, but to let that mathematical formula express individual human actions in aggregate. If a model is not micro-founded, or built on the rational actions of an individual, there is reason to be skeptical. This formal explanation of investments is necessary even for purely qualitative investors. Now that we have set up a way of viewing a rational investor, I will introduce variations on the theme.
I explained the discount factor earlier as a consumption model that encompasses various corrections for an individual investor’s desire to avoid risk or to smooth consumption over time. However, there are many other factors that must be considered when discounting the future value of an asset. Consider $10 that will be received in ten years with absolute certainty. To find the present value of that $10, in a situation with no uncertainty, we discount it by the risk-free rate. The risk-free rate answers the question: if we invested in some perfectly riskless asset, how much money would we need to invest now to receive $10 ten years from now? US government debt has traditionally been used to define the risk-free rate. Using the risk-free rate would be inappropriate for an asset with risk.
To understand why, imagine discounting the expected payoff of Apple in 10 years by the risk-free rate and by its risk-adjusted rate. The former would suggest a higher current stock value than the latter. This makes sense intuitively: if the gains of Apple carried no risk, you would be willing to pay more money, since the payoff is certain. Market participants, in equilibrium, would bid the price of Apple up until it offered only the risk-free rate. However, since Apple is risky, we demand a cheaper price to compensate us for the possibility that Apple, despite having done well so far, might fall out of fashion. A cheaper price for Apple de facto results in a higher expected return, because you pay less money for the same fraction of Apple’s cash flow. More generally, the correlation between asset-specific payoff risks and the typical investor’s discount factor generates asset-specific risk corrections.
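A quick numerical sketch makes the comparison concrete. Both rates below are assumed purely for illustration (a 3% risk-free rate and an 8% risk-adjusted rate); they are not estimates for Apple or for US debt.

```python
# Present value of an expected $10 payoff in ten years, discounted at a
# hypothetical risk-free rate versus a hypothetical risk-adjusted rate.
payoff = 10.0
years = 10

risk_free_rate = 0.03      # assumed, for illustration
risk_adjusted_rate = 0.08  # assumed: higher because the payoff is uncertain

pv_riskless = payoff / (1 + risk_free_rate) ** years
pv_risky = payoff / (1 + risk_adjusted_rate) ** years

# A certain payoff is worth more today; riskiness lowers the price we will
# pay, which mechanically raises the expected return on the same cash flow.
assert pv_risky < pv_riskless
print(round(pv_riskless, 2), round(pv_risky, 2))  # 7.44 4.63
```

The same $10 cash flow is worth about $7.44 today if certain but only about $4.63 if discounted at the riskier rate, which is the “cheaper price, higher return” logic in the paragraph above.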
Thanks for reading!
A close friend and professor of mine once told me that the academy will warp your senses, and that if you spend too much time there you lose touch with reality. There are many esoteric research topics in the academy, and it is often difficult to understand how they are relevant to society at all. Contemporary critical analyses of centuries-old philosophers that will never be read by more than a handful of people are published almost daily. And while the first few years studying at the academy pay off handsomely, most students who pursue a PhD will make less money than if they had entered industry. Depending on the subject matter, after a few years there is more money to be had through industry success than through academic research.
This is an interesting phenomenon. Most professors become so specialized in a subset of an area of knowledge that they cannot simply leave and enter the industry. Skills (or the lack thereof) with people, sales, teamwork, and project execution can quickly overshadow more detailed research abilities. While there is the rare genius in each field who does contribute to a breakthrough invention, this is far from guaranteed, and it does not often bring riches. Biomedical researchers who try to create breakthrough medicine tend to make less money than a family physician. For most of these research-oriented academics, money is far from the true goal. The opportunity to learn and to reach a high level of comprehension in a specific field can be intoxicating.
The field of financial economics has a particularly strong dichotomy between academia and industry. I have heard traders claim that academia is entirely detached from reality, and I have heard academics claim most of the investments and trading industry is wrong. This is confusing and unsettling for most students of finance or economics. It is difficult to reconcile these two views. In my experience most students defer to what they view as ‘reality.’ For a student who is ultimately interested in working for an investments firm, it isn’t particularly useful to disagree with everything their manager or mentor teaches. I learned that lesson the hard way. Citing academic journals that suggest an entire multi-billion dollar subset of a firm is ‘wrong’ does not win friends in the industry.
The true answer to why this difference between academics and businesspeople exists is nuanced, and it is a function of the different approaches to the scientific method that these two groups embrace. This is a more pleasing answer than claiming either that all professors are ‘out of touch’ or that all successful hedge funds are ‘lucky and stupid.’ Instead, I will focus on how each group searches for information and approaches what it views as truth.
The scientific method is an incredibly important discovery of the academy. Interestingly, the scientific method cannot be used to prove itself. Our contemporary idea of the scientific method has been refined over centuries. David Hume is a key figure who contributed to this refinement in the 18th century. An epistemologist, he wrote at length on the problem of induction: the human attempt to identify the cause of a phenomenon and ‘prove’ that the cause exists. The short answer to this problem is that it is impossible to prove anything with 100% certainty, because logic cannot prove itself. This is why, to simplify the field, our scientific method uses contemporary statistics to attach significance levels to our observations. In finance, the burden of proof is especially high.
The goal of a financial economist in writing a paper is to formulate a question that can be tested using the scientific method. This requires a strong thesis, an empirical test, and a theoretical argument that connects economics and the social sciences with the results of the empirical test. Once this is completed, other researchers will often attempt to replicate the test, create a different test to account for other factors, or prove the test incorrect. If all this is completed and the various conclusions still support the initial thesis, there will often be a level of agreement on the likely existence of a phenomenon. This process is necessary to come to a strong conclusion, and the burden of proof is extremely high. In addition, since the social sciences rarely allow for controlled test environments, it becomes incredibly difficult to prove a theory across space and time.
The field of financial economics began to grow exponentially in the 1960s and 1970s. It was at this time that Eugene Fama put forward his paper on the efficient market hypothesis. The paper created an incredibly robust and enticing framework for understanding how financial markets operate. Its central theory is that financial markets are efficient and function by incorporating new information: all information that currently exists is already incorporated into the price of a stock, and the price will fluctuate only due to new, non-forecastable information. Most financial economists are not stupid. They are aware that some hedge funds continue to make money above and beyond the market. While some academics remained adamant that this was only because those funds had access to insider information or more robust infrastructure, it slowly became accepted that perhaps they did have superior skill in predicting the future. Despite this, the theory of efficient markets was still incredibly impressive. It allowed a generally accepted understanding of how financial markets work to serve as a base assumption. Exceptions had a very large burden of proof: they had to prove that they could, on average, be smarter than everyone else in the market.
Now let’s consider a group of 100 educated traders in the 1970s, all of whom worked at large banks with great infrastructure. Each one attempted to day trade based on macro-economic events. First and foremost, these traders are likely making a lot of money. So even if the theory of efficient markets was entirely true, they wouldn’t just quit their job. However, explaining the difference between traders and academics is more complex.
Consider a financial economist who wants to put this group of traders to the test. He wants to study whether these traders are using strategies that allow them to beat the market, achieving greater returns than should be expected. The first serious issue in any set of data is survivorship bias: the bad traders were likely fired while the good traders kept their jobs and kept trading. As a result, any data set will likely look better than it should, since the good traders remained in the sample while the bad traders were removed. Secondly, due to the high volatility in financial markets, even if none of the traders had true skill, some would perform very well and make lots of money (and some would fail immediately and be fired). So even a trader who has done very well might simply be lucky; in a world with massive numbers of traders, some would be expected to perform phenomenally over decades through pure luck alone. The third note of interest is that there is no way to distinguish between various strategy subsets. While all these traders might be considered ‘macro-economic’ traders, that is an incredibly vague category. A fourth note of interest is that even when a trader seems to beat the market, insider trading violations occur quite often. So if a researcher does find some weak evidence to suggest some traders can beat the market in the long run, he likely cannot prove it happened without the help of insider trading. To complicate the matter further, most traders in high-infrastructure environments often receive information that is not illegal but is better than what others receive. For example, Warren Buffett is able to meet with the executives of a firm before he invests.
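The luck and survivorship points can be illustrated with a toy simulation (every parameter here is invented): give 10,000 zero-skill traders a 50/50 chance of beating the market each year and count how many compile star-quality track records anyway.

```python
# Toy simulation: traders with zero skill, each with a 50/50 chance of
# beating the market every year. Pure chance still produces some "star"
# track records, and removing the losers (survivorship bias) would make
# the surviving sample look skilled.
import random

random.seed(0)
n_traders, n_years = 10_000, 20

# Years beaten (out of 20) for each zero-skill trader.
records = [sum(random.random() < 0.5 for _ in range(n_years))
           for _ in range(n_traders)]

# Traders who beat the market in at least 15 of 20 years, by luck alone.
stars = sum(r >= 15 for r in records)

# Expected count: 10,000 * P(Binomial(20, 0.5) >= 15) ≈ 207.
print(stars)
```

Roughly two hundred of these coin-flipping traders beat the market in fifteen or more of twenty years, which is exactly the kind of record a researcher might be tempted to read as skill.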
While those are a few of the primary errors, the empirical issues continue to stack on top of each other. For example, imagine that one trader out of a data sample of thousands discovered a short-term trading strategy that delivered risk-adjusted excess returns for eight months, at which point the strategy stopped working because the overall market caught up with the trader and corrected the inefficiency. Strange little idiosyncrasies such as this would not be captured through research. Another example is a trading strategy called technical analysis. It is so disagreeable to contemporary finance that it is not taught in the academy. In essence, it violates market efficiency entirely.
Technical analysis traders search for statistical trends or patterns in the market. The goal is to trade entirely free of any market information, theory, or the reality of firms and economies. This is different from quantitative hedge funds, which, despite trading on algorithms and programming, are built upon the premise of bringing efficiency to the market. A quantitative hedge fund might bid on the difference between convertible debt and a stock, or execute large numbers of arbitrage transactions across markets or on index baskets. A technical trader would instead attempt to predict the future through past price information. This violates efficiency since it requires that there be predictable patterns in a market. Until relatively recently this strategy had no credibility in the financial literature. Now there is evidence that intraday resistance levels might have some merit. The primary theory is that while in the long run this phenomenon wouldn’t exist, during intraday trading market participants often sell or buy stocks for reasons unrelated to new information. For example, if the 501st largest stock in the US shows signs of becoming the 500th largest, its price will increase, since ETFs and mutual funds tracking the S&P 500 index will need to hold it. A technical trader might look at the 501st through 510th largest stocks and, depending on the probability of each passing the threshold, make intraday bets on those firms. This is a quick and simple example I’ve created just for consideration, but it does introduce the possibility that there are short-term fluctuations driven by institutional cash flows as opposed to true fundamental information. (http://www.ny.frb.org/research/epr/00v06n2/0007osle.pdf)
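The hypothetical index-inclusion bet can be made concrete with some invented numbers. None of the probabilities or returns below come from data; they simply show how such a trade could have positive expected value even with no fundamental information.

```python
# Illustration of the hypothetical index-inclusion bet above.
# All inputs are invented for the sake of the arithmetic.
p_inclusion = 0.30    # assumed probability the stock crosses the threshold
gain_if_in = 0.02     # assumed 2% pop from forced index-fund buying
loss_if_out = -0.005  # assumed small cost/drift if inclusion fails

expected_return = p_inclusion * gain_if_in + (1 - p_inclusion) * loss_if_out
print(round(expected_return, 4))  # 0.0025, i.e. a positive-EV intraday bet
```

The expected return is driven entirely by anticipated institutional cash flows, not by any view on the firm’s fundamentals, which is what separates this from information-based trading.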
Before empirical papers supported some areas of technical analysis, there was no true academic proof that the phenomenon existed. In fact, while some of the earlier literature did not consider very short-term trading, it found no proof of the potential for technical analysis in the long run. As a result, it would have been incorrect to say that any empirical evidence following the scientific method in financial economics supported technical analysis. In addition, technical analysis has a weak theoretical argument. This is where the dichotomy between traders and academics is most apparent. Some traders were making money on technical analysis before the academic body found any evidence it was a true phenomenon. The traders did not have to pass peer review or follow the scientific method; they only needed to make money. The end result seems to be that each group had a salient point. While the traders who followed technical analysis were not without merit, many of them were following particularly awful strategies. For example, some theories that look for long-term patterns in stock prices (such as ‘head and shoulders’) fail spectacularly. Conversely, many academics who strongly believed in the efficiency of markets focused too heavily on searching for abstract solutions and could not accept minor violations of the theory. It was not until a few researchers at the New York Federal Reserve, who are likely more in tune with industry practices, looked into the phenomenon that evidence emerged that technical trading could work in some instances due to institutional discrepancies.
The primary conclusion is that the burden of proof required of academics is much higher than that required of traders. This is logical. When formulating and building knowledge in a field, it is important to demand well-articulated proof of causal factors. When trying to make money as a trader, it is important to use cutting-edge strategies that others have not yet developed (and that academics have not yet tested). As an example that runs more in favor of the academics, the large majority of investment firms that actively manage mutual funds have lost money after accounting for expenses over the past four decades. The literature on actively managed mutual funds is grim, with the consensus being that the strategy is usually unsuccessful. The evidence condemning active management has now grown large, slowly pushing institutional and individual investors into more passive ETFs and mutual funds. However, it might still take another decade to see whether the mutual fund industry does move away from active management as a result of the academic conclusion.
For a young professional or student, it is important to remain open to both sides of the debate. For the first couple of years I studied finance I was enamored with the efficient market hypothesis. I thought it was a beautiful and wonderful theory, and I argued against those who thought they could beat the market. I have also had friends who believed active trading and technical analysis could lead to easy money. A more reasonable framework demands consideration of both sides. It is important to build a foundation in the academic literature to understand the empirics and theory behind financial markets. However, the academy does not teach students about the financial industry, which is incredibly complicated and often diverges from what is taught in finance courses. It is important to understand where the debate is located. My personal advice is to rely more heavily on the academic literature, while accepting that the world is too large and complex for there to be a research paper that tests every phenomenon from every different perspective. Most importantly, though, be skeptical. If you believe there is an inefficiency or predictability in the market, you are taking the stand that you see something everyone else is missing.
Thank you for reading, my posts have been less frequent due to a particularly large project I am working on for this blog. In case you were worried.
Side note: Research alluded to available upon request.