## AI Fermi Prize Round 3 winners

By jacobklagerros on March 12, 2019, 9:54 a.m. GMT

Round 3 of the AI Fermi Prize closed March 9 at 23:59 GMT. Two prizes have been awarded to commenters who substantially contributed to our mission of tracking and improving the state-of-the-art of forecasts on AI progress.

## Summary

I decided to check out the price behavior of recent major companies, and get base rates for how often they have traded at 2x their year-ago valuation since becoming very large. I was able to get these into Google Sheets, but had to split the data across several spreadsheets due to lag: Apple, Amazon, Alphabet, Tencent, and Microsoft.

Each spreadsheet includes a chart giving a historical view of the ratio between that trading day's price and the closing price one year before. Ratios of 2 or higher are part of the resolution criteria for this question.

## Background

## Model 1 (time-horizon of market forecasts)

As mentioned, I have a hard time believing that the investing community reliably predicts developments on a 10- or 20-year timeframe (for maximum relevance to this question I'm going with 20, as described below).

Does anyone happen to know of academic literature about this?

I tried a simple test in Excel to see whether changes in the S&P 500 price predicted changes in S&P 500 earnings per share more than a year out, and the correlation looked like random noise for most horizons beyond 1 year. I used the S&P only because its data is easy to get for a quick test; it might be more relevant to check whether changes in an individual company's price predict long-term changes in that company's earnings (since this question is about Alphabet specifically).
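The lagged-correlation test above can be sketched in Python. This is a minimal version run on synthetic random-walk series just to show the mechanics; the real test would load actual S&P 500 price and EPS data in place of the generated columns:

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for monthly S&P 500 price and trailing EPS series.
rng = np.random.default_rng(1)
n = 600  # 50 years of monthly observations
df = pd.DataFrame({
    "price": 100 * np.exp(np.cumsum(rng.normal(0.005, 0.04, n))),
    "eps":   5 * np.exp(np.cumsum(rng.normal(0.004, 0.03, n))),
})

# Trailing 12-month price change vs. the 12-month EPS change
# ending k months further in the future.
price_chg = df["price"].pct_change(12)
for k in (12, 24, 36, 60):
    eps_chg = df["eps"].pct_change(12).shift(-k)
    print(k, round(price_chg.corr(eps_chg), 3))
```

On independent synthetic series these correlations hover near zero by construction; the interesting question is whether real data looks any different at long horizons.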

## Model 2 (Question uncertainty due to random walk of market)

Given a random walk, I wanted to get some sense of how likely the market cap of Alphabet Inc would be to fall within a range.

So I generated the fictitious returns in two batches, one for each of two different standard deviations, and checked how many times Google's price would have fallen within the given range.

A bunch of details were crudely made up as follows:

• For the discount rate, I decided to try a 7% annual return (inflation-adjusted). I am open to modifications. I assumed that a lower discount rate (such as the level of Treasury bonds) would not be appropriate for a stock.
• For this question, I assume HLMI happens exactly 20 years from now, partly for simplicity and partly because of how steeply my distribution falls for earlier times. I give roughly a 0.4% chance of HLMI happening within 20 years (defined as: when unaided machines can accomplish every task better and more cheaply than human workers), and about 0.01% within ten years. "Every task" is a lot of tasks, and includes things like going to random people's houses and doing plumbing on their old pipework. That job has a bunch of sub-tasks that are very unlikely to be completely automated in less than 20 years, and adding the "more cost-effectively than humans" constraint makes it even more doubtful. So the question resolution says "in less than 20 years", but I'm assuming "exactly 20 years from now".
• A cluster of details that I glossed over for the discount rate: dividends, buybacks, acquisitions and other premiums such as voting power. Google stock does not pay a dividend, so that part isn’t too misleading to ignore. The rest I’m not thinking about right now.

Now suppose HLMI is not developed within 20 years, but Google is roughly correctly priced anyway, and imagine the odds of Google's stock falling within a range. Google's market cap is currently about 778 billion, and the question criteria say plus or minus 300 billion from the current level. So I made a hypothetical future range with a lower bound of 1,851 billion (478 bn × 1.07^20) and an upper bound of 4,172 billion (1,078 bn × 1.07^20).
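Those bounds are just the ±300 bn window compounded at 7% for 20 years, which is easy to verify:

```python
growth = 1.07 ** 20      # ≈ 3.87x over 20 years at 7%/yr
lower = 478 * growth     # $bn; ≈ 1,850 (the post quotes 1,851)
upper = 1_078 * growth   # $bn; ≈ 4,172
print(round(growth, 2), round(lower), round(upper))
```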

I then did a simple simulation in Excel. I generated normally-distributed monthly returns with a mean multiplier of 1.00565414, which compounds to an average annual return of 7%. For reference, my calculation of Google's historical yearly stock appreciation (inflation-adjusted) is about 21.9%.

• For the standard deviation I didn't know what to do, because a far more modest annual return for Google would seem to imply a smaller standard deviation than its historical one. That wouldn't necessarily be the case (for instance, if the stock spasmed and had major selloffs), but on average it should be smaller, as far as I know. So I ran the simulation twice: once with Google's historical standard deviation of 0.091131917 (monthly percent price changes, in decimal form), and once with a much lower standard deviation taken from the Vanguard Information Technology ETF (VGT), which is 0.051228994. This is a crude way to grab a lower standard deviation; if anyone can think of a better way to make this adjustment, feel free to tell me how.
• Another consideration is that Google's monthly price changes have historically had a kurtosis of 3.443551126, according to whatever measure of kurtosis Excel's Data Analysis Toolpak uses. Maybe some day I'll figure out a way to generate non-normal distributions in Excel or Python or something; right now I'm doing the easy, lazy thing.
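The Monte Carlo setup described above is easy to reproduce outside Excel. Here is a minimal Python sketch using the post's parameters (starting market cap, range bounds, mean multiplier, and both standard deviations); the exact percentages will differ slightly from the Excel runs because the draws are random:

```python
import numpy as np

rng = np.random.default_rng(0)
N_RUNS, N_MONTHS = 10_000, 240
START_CAP = 778            # $bn, Alphabet's current market cap
LOW, HIGH = 1_851, 4_172   # $bn, time-discounted range bounds

def fraction_in_range(mean_mult, std_mult):
    # Draw normally-distributed monthly price multipliers and compound them.
    mults = rng.normal(mean_mult, std_mult, size=(N_RUNS, N_MONTHS))
    finals = START_CAP * mults.prod(axis=1)
    return np.mean((finals >= LOW) & (finals <= HIGH))

print(fraction_in_range(1.00565414, 0.091131917))  # Batch 1: historical std
print(fraction_in_range(1.00565414, 0.051228994))  # Batch 2: VGT std
```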

#### Batch 1 (higher, historical std)

Given Google’s historical standard deviation, the price ended up within the range only 18.74% of the time after 240 monthly periods.

#### Batch 2 (lower, Vanguard index std)

Given the lower standard deviation taken from the Vanguard IT index, 36.79% of runs fell within the range. For both tests, I used 10,000 sample runs of 240 monthly periods each.

#### Discussion

So basically, I would expect Google’s stock to fall outside the range the majority of the time, even in a world with no HLMI within 20 years and assuming Google's stock had no particular tendency to appreciate more than the rest of the market. (That must be the case if it is correctly priced now, unless the stock becomes mispriced around years 19 and 20 and corrects afterwards, which is an interesting spin on this.)

So throwing in HLMI should add a heck of a lot of extra volatility, shoving more probability mass outside this range. I think it would make the stock price much more volatile whether or not Google had a monopoly on the technology. Maybe Google becomes almost worthless because a competing firm beats DeepMind to the punch on solving intelligence, obsoleting Google and most of its projects. But that outcome is hard for me to believe, for two reasons: Google (and DeepMind) are at the cutting edge, and I seriously doubt that a single big insight will solve intelligence so universally that another firm could make everything Google does uncompetitive.

## Model 3 (incorporating HLMI-like returns)

I decided to run another simulation, this time for a scenario more like HLMI happening, and assuming that Google stays competitive. That looks more like Google's historical stock return of 21.9% annually (inflation-adjusted) than the 7% market return I assumed for the first test. I would interpret that as implying Google is not valued correctly right now, but I decided to try it anyway.

I tried a mean monthly price multiplier of 1.0166465 (edited; my first-draft numbers were off), which compounds to an annual return of 21.91% and, with no variance, would result in a final market cap of 40.9 trillion (also edited) after 240 monthly periods (that sounds like the ballpark for the kinds of valuations you would expect given HLMI). The standard deviation was set to the historical value of 0.091131917. Unsurprisingly, this results in the stock price landing in the range only 11.03% of the time. (Moving the mean return further from 7% annually in either direction has this effect, given my fixed time-discounted range bounds.)
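A quick sanity check on those Model 3 numbers (the monthly multiplier, its implied annual return, and the zero-variance endpoint):

```python
monthly = 1.0166465
annual = monthly ** 12            # ≈ 1.2191, i.e. ~21.91% per year
final_cap = 778 * monthly ** 240  # $bn; ≈ 40,900, i.e. ~40.9 trillion
print(round(annual, 4), round(final_cap))
```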

#### Discussion

Lots of details would need to be improved in such a model. It would be interesting to post a question on the main Metaculus site asking people to forecast the standard deviation of Google's stock price over the next 20 years. Do people expect it to be higher than the historical value? The same? Lower? It makes intuitive sense for it to be lower, since a larger company with more economic mass shouldn't move as much, all else held equal. But this is a sector where you would expect wild price movements given HLMI, or even anything approaching it.

## Suggestions for question improvements

After talking to a friend about this, they argued that this question would be more interesting if it were about a tech-sector stock index, or about an AI-related stock index. There are outcomes where Google doesn't have much of a competitive monopoly, and outcomes where HLMI is not developed full-stack by one company, but rather a bunch of companies specialize in different parts, enriching all of their earnings per share. These seem plausible, which raises the value of having a similar question about a broader tech index.

If I can muster the time and energy, I will try to follow up with an improvement at some point. Feedback is welcome. In my first version of this post a couple of numbers were slightly off, which I've now fixed; serves me right for hurrying to post it before bed.

The second prize of \$200 goes to kokotajlod, who thereby has claimed prizes in all three prize rounds!

kokotajlod wins this prize for clearly stating the reasoning behind his prediction on the market cap doubling question above, for gathering historical data about fast vs. slow takeoff, and for several other contributions.

Finally, we extend honorable mentions to new users Aidan_OGara -- for building a Guesstimate model predicting whether more powerful models, like GPT-2, will be kept secret by major labs -- and Meefburger, for suggesting an interesting new question related to takeoff speeds, and making a subtle point about probabilities and outside views.

When we launched the prize, we decided to run it for 3 rounds and then reevaluate. We are running a further round while we think about how to move forward.

Round 4 is now on, and prizes will be awarded to comments written between March 10 and March 23, 23:59 GMT. You can find more examples of prize-worthy commenting here.