Compute, Complexity, and the Scaling Laws of Return Predictability
Investors’ computing power (“compute”) governs how much signal they can extract from high-dimensional data. Drawing on insights from LLM training, we show that forecast performance follows scaling laws (stable power-law relationships between accuracy and training compute) that pin down both an irreducible bound on return predictability and the rate at which additional compute closes the gap to that bound. In firm-level cross-sectional return prediction, scaling laws explain over 80% of the variation in performance across models. Treating compute as an economic primitive yields sharp implications: scaling laws quantify the limits of predictability and the value of new data; they distinguish prediction problems with strong returns to scale from those with weak ones; and they deliver a market-efficiency metric by mapping an investor’s computational advantage into a certainty equivalent. Measured this way, computational superiority earns sizable rents: a 25% marginal increase in compute would have raised an investor’s Sharpe ratio by about 10% over the last 30 years, indicating substantial returns to computational sophistication.
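To fix ideas, a scaling law of the kind described above can be written in the following stylized form (a sketch for illustration only; the parameterization and the symbols $R^2_{\infty}$, $a$, and $b$ are assumptions introduced here, not necessarily the paper’s estimated specification):

\[
R^2(C) \;=\; R^2_{\infty} \;-\; a\,C^{-b}, \qquad a,\, b > 0,
\]

where $C$ denotes training compute, $R^2_{\infty}$ is the irreducible bound on out-of-sample predictability approached as compute grows without limit, and the exponent $b$ governs how quickly additional compute closes the remaining gap.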