
FINDING ALPHAS

A Quantitative Approach to Building Trading Strategies

SECOND EDITION


Edited by

Igor Tulchinsky et al.

WorldQuant Virtual Research Center


Dedicated to All at WorldQuant —

The Future of Trading

Preface

Much has changed since we published the first edition of Finding Alphas, in 2015. The premise of that edition – that these techniques are “the future of trading” – is truer today than ever. In the intervening four years, we at WorldQuant have seen remarkable growth in our development of predictive algorithms for quantitative trading – we call them “alphas” – powered by an ever-rising volume and variety of available data, an explosion in computer hardware and software, and increasingly sophisticated techniques that allow us to create and deploy a higher volume and quality of alphas. Today, at WorldQuant, we have produced over 20 million alphas, a number that continues to grow exponentially as we hunt for ever-weaker predictive signals.

Since 2015, we have steadily expanded our outreach to new, diverse talent, adding to the full-time researchers, portfolio managers, and technologists at our 28 offices around the world, as well as to our growing group of research consultants – now more than 2,000 strong. We found many of WorldQuant's research consultants through our Virtual Research Center (VRC) and its global quantitative finance competitions, such as the WorldQuant Challenge, Women Who Quant, and the International Quant Championship.

Participants who enter our competitions seek to create high-quality alphas using our online portal, WebSim, which provides educational, research, and backtesting tools, including some of the same datasets that WorldQuant researchers use. More broadly, the VRC enables individuals to conduct research and seek to build high-quality algorithms that may be used in WorldQuant's systematic financial strategies. Research consultants have flexibility in their hours and work location, are compensated based on their activity and productivity, are eligible for additional compensation based on their algorithms' performance, and may ultimately be considered for full-time positions.

This book, with contributions from 47 current and former WorldQuant staffers, summarizes much of what we have learned about the art and science of alpha development. This edition is not just a new cover slapped on old content. Individual chapters have been extensively rethought and revised. This edition has nine chapters that didn't exist in 2015 – on exchange-traded funds, index alphas, intraday data, and event-driven investing, among other subjects – and extensive additions in most of the rest. Topics like machine learning and automated search have become much more important. But while we've gone deep, we've worked hard to make the material more accessible and useful.

Yet we're only beginning to plumb the possibilities of alphas – and to explore the universe of predictive signals. The years ahead will be full of new challenges, new data, and new techniques. This exponential world forces us to accept that what's here today won't necessarily be here tomorrow. In The UnRules: Man, Machines and the Quest to Master Markets, a book I published in 2018, I wrote that I have become convinced that the Age of Prediction is upon us. The more alphas you have, the better you can describe reality and the more predictive you can be. But change is a constant, and the task is never done.

Igor Tulchinsky
June 2019

Preface (to the Original Edition)

This book is a study of the process of finding alphas. The material is presented as a collection of essays, providing diverse viewpoints from successful quants on the front lines of quantitative trading.

A wide variety of topics is covered, ranging from theories about the existence of alphas, to the more concrete and technical aspects of alpha creation.

Part I presents a general introduction to alpha creation and is followed by a brief account of the alpha life cycle and insights on cutting losses.

Part II focuses more on the technical side of alpha design, such as the dos and don’ts of information research, key steps to developing an alpha, and the evaluation and improvement of quality alphas. The key technical aspects discussed in this section are turnover, backtesting, fundamental analysis, equity price and volume data, statistical arbitrage, overfitting, and alpha diversity.

Part III explores ad hoc topics in alpha design, including alpha design for various asset classes like futures and currencies, the development of momentum alphas, and the effect of news and social media on stock returns.

In Part IV, we introduce you to WebSim, a web-based alpha development tool. We invite all quant enthusiasts to utilize this free tool to learn about alpha backtesting (also known as alpha simulation) and ultimately to create their own alphas.

Finally, in Part V, we present an inspirational essay for all quants who are ready to explore the world of quantitative trading.

Acknowledgments

In these pages, we present a collection of chapters on the algorithmic process of developing alphas. The authors of these chapters are WorldQuant's founder, directors, managers, portfolio managers, and quantitative researchers. This book has two key objectives: to present as many state-of-the-art viewpoints as possible on defining an alpha, and to describe the techniques involved in finding and testing alphas. At WorldQuant, we believe that no single viewpoint is the best and only answer, and that a variety of approaches is always superior to a single one.

This edition of Finding Alphas began to take shape in 2017, after Michael Peltz joined WorldQuant as global head of content. That year, he and then-regional research director Rebecca Lehman met with Igor Tulchinsky – WorldQuant's founder, chairman, and CEO – in his Connecticut office to outline their plan to overhaul the book. Finding Alphas had originally been written in 2014; Lehman and Peltz wanted to revise (and in some cases merge) existing chapters and add new ones. The two were instrumental in driving a reconceptualization of the first edition and managing a complex editorial process. We particularly want to thank the authors of the new chapters – Crispin Bui, Mark YikChun Chan, Chinh Dang, Glenn DeSouza, Anand Iyer, Rohit Kumar Jha, Michael Kozlov, Nitish Maini, Aditya Prakash, Prateek Srivastava, Dusan Timotity – as well as the authors of the chapters that were updated. Together they have created a book that should be tremendously useful to anyone interested in quantitative investing and developing alphas.

Every one of these chapters was copy edited by sharp-eyed Ruth Hamel, who juggled the myriad questions and challenges every chapter presented. Then each chapter was carefully vetted by WorldQuant's legal team under Jeffrey Blomberg, whose patience and always sensible suggestions made a major contribution. And, once again, we thank Wendy Goldman Rohm, our literary agent, who played a major role in getting the first edition of Finding Alphas off the ground.

Last, we need to acknowledge with gratitude the support and faith of every colleague at WorldQuant. It takes a team. Thank you all.

DISCLAIMER

The contents of this book are intended for informational and educational purposes only and, as such, are not intended to be nor should be construed in any manner to be investment advice. The views expressed are those of the various contributors and do not necessarily reflect the view or opinion of WorldQuant or the WorldQuant Virtual Research Center.

About the WebSim Website

At the time of writing, the WebSim information contained in this book is consistent with the WebSim website. Because the website is subject to change, in cases where there are inconsistencies between this book and the website, the terms of the website will govern, as they reflect WebSim's most current processes. For the most up-to-date version of WebSim and the terms applicable to its use, please go to https://worldquantvrc.com or its successor site.

Registration at WebSim's official website is required to obtain the full functionality of the platform and to have access to the WebSim support team. Successful alphas submitted by research consultants may, in certain cases, be considered for inclusion in actual quant trading investment strategies managed by WorldQuant.

WEBSIM RESEARCH CONSULTANTS

WorldQuant has established a Research Consultant program for qualified individuals to work with our web-based simulation platform, WebSim. The program gives consultants the flexibility to create alphas in their own physical and intellectual environment. It is particularly well suited to individuals pursuing a college education, as well as to those who are ambitious and eager to break into the financial industry.

Qualified candidates are highly quantitative individuals, typically from science, technology, engineering, or mathematics (STEM) programs; their majors and expertise vary and may include statistics, financial engineering, math, computer science, finance, and physics.

You can find more details on WebSim in Part IV of this book. More information on the Research Consultant program is available at WebSim's official website.

PART I
Introduction

 

1
Introduction to Alpha Design

By Igor Tulchinsky

What is an alpha? Throughout this book, you'll read different descriptions or definitions of an alpha. Alpha, of course, is the first letter of the Greek alphabet – as in “the alpha and the omega,” the beginning and the end – and it lurks inside the word “alphabet.” Over the centuries, it has attached itself to a variety of scientific terms. The financial use of the word “alpha” goes back to 1968, when Michael Jensen, then a young economics PhD candidate at the University of Chicago, introduced what became known as “Jensen's alpha” in a paper he published in The Journal of Finance. Jensen's alpha measured the risk-adjusted return of a portfolio and determined whether it was performing better or worse than its expected market return. Eventually, Jensen's alpha evolved into a measure of investment performance known simply as alpha, and it is most commonly used to describe returns that exceed the market or a benchmark index.

Since then, the term “alpha” has been widely adopted throughout the investing world, particularly by hedge funds, to refer to the unique “edge” that they claim can generate returns that beat the market. At WorldQuant, however, we use the term a little differently. We design and develop “alphas” – individual trading signals that seek to add value to a portfolio.

Fundamentally, an alpha is an idea about how the market works. There are an infinite number of ideas or hypotheses or rules that can be extrapolated, and the number of possibilities is constantly growing with the rapid increase in new data and market knowledge. Each of these ideas could be an alpha, but many are not. An alpha is an automated predictive model that describes, or decodes, some market relation. We design alphas as algorithms, a combination of mathematical expressions, computer source code, and configuration parameters. An alpha contains rules for converting input data to positions or trades to be executed in the financial securities markets. We develop, test, and trade alphas in large numbers because even if markets are operating efficiently, something has to drive prices toward equilibrium, and that means opportunity should always exist. To use a common metaphor, an alpha is an attempt to capture a signal in an always noisy market.
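
To make this concrete, below is a minimal sketch of an alpha as a function, written in Python. The input layout, the reversion hypothesis, and the normalization scheme are illustrative assumptions for the example, not a description of any production interface:

import numpy as np

def reversion_alpha(close):
    """A toy one-day reversion alpha.

    close: array of shape (n_days, n_stocks) of daily closing prices.
    Returns a desired portfolio weight per stock: positive = long,
    negative = short; the weights sum to zero.
    """
    # Hypothesis: yesterday's move partially reverses today.
    signal = -(close[-1] / close[-2] - 1.0)
    # Neutralize: subtract the cross-sectional mean so longs offset shorts.
    signal = signal - signal.mean()
    # Normalize so the gross book (sum of absolute weights) equals 1.
    return signal / np.abs(signal).sum()

prices = np.array([[10.0, 20.0, 30.0],
                   [10.5, 19.0, 30.3]])
print(reversion_alpha(prices))  # shorts the stock that just rallied

The rest of alpha design is about what goes inside such a function: which data to use, which market relation to encode, and how to test the result.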

DESIGNING ALPHAS BASED ON DATA

We design alphas based on data, which we are constantly seeking to augment and diversify. Securities prices generally change in response to some event; that event should be reflected in the data. If the data never changes, then there is no alpha. Changes in the data convey information. A change in information should in turn produce a change in the alpha. These changes may be expressed in a variety of alpha expressions. Table 1.1 shows a few simple examples.

Table 1.1 Expressions of changes

A simple difference, A – B    Example: today's_price – yesterday's_price
A ratio, A/B                  Example: today's_price / yesterday's_price
An expression                 Example: 1/today's_price (increase the position when the price is low)

Alpha design is really just the intelligent search for price information conveyed by possible changes in the data, whether you think of them as patterns, signals, or a code. The mathematical expression of an alpha should embody a hypothesis or a prediction. Again, just a few examples are shown in Table 1.2.

Table 1.2 Expressions and their hypotheses

Expression                                Hypothesis
1/price                                   Invest more if the price is low
price – delay(price, 3)                   Price moves in the direction of the 3-day change
price                                     High-priced stocks go higher
correlation(price, delay(price, 1))       Stocks that trend outperform
(price/delay(price, 3)) * rank(volume)    Trending stocks with increasing volume outperform
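
As a rough illustration of how such expressions turn data into numbers, the snippet below evaluates two of the hypotheses from Table 1.2 with pandas; the inputs are randomly generated purely for the example:

import numpy as np
import pandas as pd

# Synthetic example data: rows are dates, columns are stocks.
rng = np.random.default_rng(0)
dates = pd.date_range("2019-01-01", periods=10, freq="B")
price = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, (10, 3)), axis=0)),
    index=dates, columns=["A", "B", "C"])
volume = pd.DataFrame(
    rng.integers(1_000, 10_000, (10, 3)),
    index=dates, columns=["A", "B", "C"])

# "1/price": invest more if the price is low.
alpha_low_price = 1.0 / price

# "(price/delay(price,3)) * rank(volume)": trending stocks with
# increasing volume outperform; delay(price, 3) is price.shift(3).
momentum = price / price.shift(3)
vol_rank = volume.rank(axis=1, pct=True)  # cross-sectional rank, 0 to 1
alpha_trend = momentum * vol_rank         # larger value, larger long position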

DEFINING QUALITY IN ALPHAS

Alphas produce returns, which vary over time; like individual stocks, an alpha's aggregate returns rise and fall. The ratio of an alpha's average daily return to its daily volatility is called the information ratio (a minimal calculation is sketched after the list below). This ratio measures the strength and steadiness of the signal, and shows whether a strategy is working – whether the signal is robust or weak, whether it is likely to be a true signal or largely noise. We have developed a number of criteria to define the quality of an alpha, though until an alpha is extensively tested, put into production, and observed out of sample, it's difficult to know how good it really is. Nonetheless, here are some traits of quality alphas:

  • The idea and expression are simple.
  • The expression/code is elegant.
  • It has a good in-sample Sharpe ratio.
  • It is not sensitive to small changes in data or parameters.
  • It works in multiple universes.
  • It works in different regions.
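
As promised, here is a minimal sketch of the information-ratio calculation; annualizing by the square root of 252 trading days is a common convention, assumed here for the example:

import numpy as np

def information_ratio(daily_returns):
    """Mean daily return over daily volatility, annualized."""
    return daily_returns.mean() / daily_returns.std() * np.sqrt(252)

# Example: a weak but persistent signal.
rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0004, 0.01, 1000)  # ~10% return, ~16% vol a year
print(information_ratio(daily_returns))         # roughly 0.6 for this sample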

ALPHA CONSTRUCTION, STEP BY STEP

We can broadly define the steps required to construct alphas. Although the devil is in the details, developers need only repeat the following five steps (a toy walk-through follows the list):

  • Analyze the variables in the data.
  • Get an idea of the price response to the change you want to model.
  • Come up with a mathematical expression that translates this change into stock positions.
  • Test the expression.
  • If the result is favorable, submit the alpha.
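
The toy walk-through below runs this loop once on synthetic data; the reversion hypothesis and every parameter are invented for the example, and on purely random data the result should, correctly, be unremarkable:

import numpy as np

# Steps 1 and 2: analyze the data and pick a price response to model.
# Here, synthetic daily returns for 50 stocks and a one-day reversion idea.
rng = np.random.default_rng(2)
daily_ret = rng.normal(0, 0.01, (500, 50))

# Step 3: translate the idea into stock positions (dollar neutral).
signal = -daily_ret
signal = signal - signal.mean(axis=1, keepdims=True)
weights = signal / np.abs(signal).sum(axis=1, keepdims=True)

# Step 4: test the expression, trading today's weights on tomorrow's
# returns so that there is no look-ahead.
pnl = (weights[:-1] * daily_ret[1:]).sum(axis=1)
print("Sharpe:", pnl.mean() / pnl.std() * np.sqrt(252))

# Step 5: submit only if the result is favorable; on pure noise the
# Sharpe ratio should hover near zero.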

CONCLUSION

The chapters that follow delve into many of these topics in much greater detail. These chapters have been written by WorldQuant researchers, portfolio managers, and technologists, who spend their days, and often their nights, in search of alphas. The topics range widely, from the nuts-and-bolts development of alphas to their extensive backtesting, along with related subjects like momentum alphas, the use of futures in trading, institutional research in alpha development, and the impact of news and social media on stock returns. There's also a chapter on WebSim, WorldQuant's proprietary, internet-enabled simulation platform, whose software engine lets anyone backtest alphas using a large and expanding array of datasets. Last, in this edition of Finding Alphas, we've added new material on topics such as machine learning, alpha correlation, intraday trading, and exchange-traded funds.

What are alphas, and how do we find them? Turn the page.

2
Perspectives on Alpha Research

By Geoffrey Lauprete

In the field of finance, an alpha is the measure of the excess return of an investment over a suitable benchmark, such as a market or an industry index. Within the quantitative investment management industry, and in this book, the term “alpha” refers to a model used to try to forecast the prices, or returns, of financial instruments relative to a benchmark. More precisely, an alpha is a function that takes, as input, data that is expected to be relevant to the prediction of future prices and outputs values corresponding to the forecasted prices of each instrument in its prediction universe, relative to a benchmark. An alpha can be expressed as an algorithm and implemented in a computer language such as C++, Python, or any number of alternative modern or classical programming languages.

Attempts to forecast markets predate the digital era and the arrival of computers on Wall Street. For example, in his 1688 treatise on economic philosophy, Confusion of Confusions, stock operator and writer Josseph Penso de la Vega described valuation principles for complex derivatives and techniques for speculating on the Amsterdam Stock Exchange. Two hundred years later, in a series of articles, Charles Dow (co-founder of Dow Jones & Co., which publishes The Wall Street Journal) codified some of the basic tenets of charting and technical analysis. His writings provide one of the first recorded instances of a systematic market forecasting technique, but investors had to wait until the 1980s for affordable computing power to arrive on the Wall Street scene and change the modeling paradigm: instead of pencil and paper, the forecaster's main tools were set to become computers and digital data.

PHDS ON THE STREET

Until the 1960s, all or almost all back-office processes, and stock settlement in particular, were done manually. It took the unprecedented increase in stock trading volumes experienced in the late 1960s (between 1965 and 1968, the daily share volume of the New York Stock Exchange increased from 5 million to 12 million), and the accompanying “traffic jams” in trade processing due to the reliance on pen-and-paper recordkeeping, for the adoption of computers to become a business imperative. By the 1970s, Wall Street had digitized its back offices. Within a few years, computers and programmable devices were ubiquitous on the Street, playing a role in every corner of the financial industry.

The arrival of computing machines on the trading floor of large Wall Street firms allowed previously intractable problems in valuation – the pricing of options and other derivatives, and price forecasting based on databases of digital data – to become practically solvable. But formulating the problems in such a way that the new machines could solve them required a new type of market operator, who historically had not been part of the sales and trading ecosystem: PhDs and other analytically minded individuals, not traditionally Wall Street material, became sought-after contributors to this new and modernized version of the trading floor.

A NEW INDUSTRY

One of the early adopters of computer-based investment methods to exploit systematic alphas was James Simons, an award-winning mathematician and former chair of the mathematics department at Stony Brook University. In 1982, Simons founded Renaissance Technologies, an East Setauket, New York-based firm that became known for the successful deployment of systematic market-neutral strategies. Six years later, former Columbia University computer science professor David Shaw launched D.E. Shaw & Co. in New York City. Shaw had spent two years at Morgan Stanley, part of a group whose mandate was to develop stock forecasting algorithms using historical price records. Others followed, either inside banks and brokerages as part of proprietary trading groups or at hedge funds managing pooled investor money. Over time, quantitative market-neutral investing became known as a scalable and dependable investment strategy, which fared particularly well during the dot-com market crash of the early 2000s.

As the hedge fund industry grew, so did the allocation of investor capital to quantitative investment strategies. According to the Financial Times, as of January 2018 as much as one third of the hedge fund industry's total assets was estimated to be managed using systematic investment methods, either by firms dedicated to quantitative investing, such as WorldQuant, or by multistrategy hedge funds that invest a portion of their assets in quantitative approaches. The hesitation that investors exhibited in the 1990s and early 2000s toward what were often referred to disparagingly as “black box” strategies waned gradually as the strategies' track records held up relative to those of other investment approaches. It's possible that quantitative investment strategies, which emphasize leveraging technology – via algorithms, artificial intelligence, and machine learning – are benefiting from society's growing comfort with automation and the replacement of human intermediaries by machines with increasing levels of free rein. Investing via modeling and smart data processing doesn't seem like much of a stretch 40 years after the launch of the first systematic hedge funds.

Still, the path toward quantitative investing becoming an established strategy was not a straight line. The quant meltdown of 2007 dealt a particular blow to investor and participant confidence in the ability of quantitative investing to produce credible long-term risk-adjusted returns: in August of that year, a market panic prompted a large number of quant funds to liquidate their positions in a short period of time, creating an unprecedented drawdown and causing some participants and investors to head for the exits in what amounted to a stampede. That period was followed the next year by the global financial crisis, which again saw significant investment return volatility. The 2000s were accompanied by a range of structural market changes, from decimalization to the rise of exchange-traded funds. The period also demonstrated the flexibility and resilience of the quantitative investment approach, and showed that the quantitative operators developing alpha forecasts were able to adapt to new market environments, innovate, and ultimately stay relevant.

In the next section, we will take a closer look at the alphas driving the quantitative strategies described above.

STATISTICAL ARBITRAGE

The term “statistical arbitrage” (stat arb) is sometimes used to describe a trading strategy based on the monetization of systematic price forecasts, or alphas. Unlike pure arbitrage, in which a risk-free profit can be locked in by simultaneously purchasing and selling a basket of assets, a stat arb strategy aims to exploit relationships among asset prices that are estimated from historical data. Because estimation methods are imperfect, and because the exact relationships among assets are unknown and infinitely complex, a stat arb strategy's profit is uncertain: it is subject to estimation error, overfitting, incomplete information, and shifts in market dynamics that can cause previously observed relationships to vanish. Nonetheless, the practitioner's goal is to discover, through data analysis and statistical hypothesis testing, which relationships are valid and deserve an allocation of capital and which are bogus and likely to lead to the poorhouse.
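
One classic construction of this kind is a pairs trade on a z-scored spread. The sketch below is a bare-bones illustration; the lookback window and entry threshold are arbitrary choices for the example:

import numpy as np
import pandas as pd

def pairs_position(px_a, px_b, lookback=60, entry_z=2.0):
    """Position in stock A (the opposite position is taken in B).

    px_a, px_b: pd.Series of daily prices for two related stocks.
    Long A / short B when the log-price spread is unusually low
    relative to its recent history, and vice versa.
    """
    spread = np.log(px_a) - np.log(px_b)
    z = (spread - spread.rolling(lookback).mean()) / spread.rolling(lookback).std()
    # The estimated relationship can vanish out of sample - the central
    # risk of any stat arb strategy.
    return pd.Series(np.where(z > entry_z, -1.0,
                              np.where(z < -entry_z, 1.0, 0.0)),
                     index=px_a.index)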

In the search for legitimate relationships between asset prices, the academic literature has been and continues to be an important source of ideas. For example, the work of financial economists on the capital asset pricing model (the CAPM, which aims to decompose a stock's return into its market component and an idiosyncratic component) and its derivatives has spawned an enormous, multidecade-long search to prove and/or disprove its validity, and to enhance its explanatory power with additional factors. The initial research on the CAPM was published in the 1960s (e.g. William Sharpe's 1964 article, “Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk”), and the debate continued into the 1990s (e.g. Eugene Fama and Kenneth French, “The Cross-Section of Expected Stock Returns”). A 2018 scan of The Journal of Finance found at least one entry on the subject of factor pricing (“Interpreting Factor Models,” by Serhiy Kozak, Stefan Nagel, and Shrihari Santosh).

But models from academia, even when they provide a foundation for applied research, are often incomplete or based on assumptions that are inconsistent with the real markets in which traders operate. As a result, such models can be difficult or impossible to employ successfully. This observation applies not only to models from financial economics but also to models from the fields of econometrics, applied statistics – such as time-series analysis and machine learning or regression – and operations research and optimization. As an example, many regression models tend to be estimated based on the minimization of mean-squared errors for computational convenience. But mean-squared error is not necessarily the objective that traders have in mind for their stat arb strategies – they may be more interested in generating a steady cash flow and managing the downside risk of that cash flow. The simplicity of the mean-squared error objective is a trade-off against the objective's usefulness. Alternatives are possible but fall into the realm of trade secrets, which quantitative investment firms develop in-house and keep to themselves, never to be published but instead handed down from quant to quant, forming the basis of a deep institutional pool of knowledge that it is in the firm's interest to protect.
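
As a small illustration of that trade-off, the sketch below estimates the same linear forecast two ways: by minimizing mean-squared error and by directly optimizing a crude steady-cash-flow objective that penalizes downside volatility. The alternative objective is invented for this example, not a production loss function:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))  # predictive features
y = X @ np.array([0.5, -0.3, 0.1]) + rng.normal(scale=1.0, size=500)

# Objective 1: mean-squared error, solved in closed form.
beta_mse, *_ = np.linalg.lstsq(X, y, rcond=None)

# Objective 2: reward average pnl, penalize downside volatility.
def downside_loss(beta):
    pnl = y * np.sign(X @ beta)  # pnl from trading the sign of the forecast
    return -(pnl.mean() - 2.0 * np.minimum(pnl, 0.0).std())

beta_alt = minimize(downside_loss, x0=beta_mse, method="Nelder-Mead").x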

EXISTENCE OF ALPHAS

We could debate whether alphas and stat arb strategies ought to exist at all. In fact, the academic literature in financial economics has tackled this problem exhaustively, qualifying the markets and the nature of information and how it affects prices, and deriving conclusions based on various assumptions about the markets, about market participants and their level of rationality, and about how the participants interact and process information. The term “efficient market hypothesis” (EMH) is sometimes used to describe the theory that market prices reflect all available information. The EMH gained prominence in the 1960s, and empirical studies of prices and of asset manager performance since then have lent credence to the idea that the market is efficient enough to make it impossible to determine whether top asset managers' performance is due to anything but luck. The theory also implies that looking for exploitable patterns in prices, and in other forms of publicly available data, will not lead to strategies in which investors can have confidence, from a statistical perspective.

An implication of the EMH is that prices will evolve in a process indistinguishable from a random walk. However, another branch of financial economics has sought to disprove the EMH. Behavioral economics studies market imperfections resulting from investors' psychological traits or cognitive biases. Imperfections in the financial markets may be due to overconfidence, overreaction, or other defects in how humans process information. Empirical attempts to disprove the EMH have had mixed results, but consider: if no investors made any effort to acquire and analyze information, then prices would not reflect all available information and the market would not be efficient. That, in turn, would attract profit-motivated investors to tackle the problem of analyzing the information and trading on it. Thus, over time, some investors must profit from analyzing information.

Even if we can make an argument in favor of the existence of alphas under various stylized assumptions, the details of prediction in the real world are complex. A prediction with low accuracy or a prediction that estimates a weak price change may not be interesting from a practitioner's perspective. The markets are an aggregate of people's intentions, affected by changing technology, macroeconomic reality, regulations, and wealth – and this makes the business of prediction more challenging than meets the eye. Therefore, to model the markets, investors need a strong understanding of the exogenous variables that affect the prices of financial instruments. That is the challenge that market forecasters and algorithmic traders face, motivated by the expectation that they will be rewarded for their efforts and for their mastery of complexity.

IMPLEMENTATION

Alphas are typically implemented in a programming language like C++, Python, or another flexible and modern language. When implemented in a programming language, an alpha is a function that takes data and outputs a forecast of the price of each instrument in the universe being tested. The simplest forms of data are concurrent and historical prices. Other commonly used data include volumes and other market records, accounting variables in a company's income or cash flow statements, news headlines, and social media-related entries. Data quality is a significant issue in the alpha research process. Bias in the historical data can make the calibration of accurate models impossible. Ongoing data issues, such as technical problems, human error, unexpected data format changes, and more, can sap a model's forecasting power.

Predicting the future price of a financial instrument is a difficult problem. For example, in order to predict the price of an NYSE-listed stock over the next month, a researcher needs to understand not only (1) the idiosyncratic features of that stock, but also (2) what drives the industry that the stock belongs to, and ultimately (3) what drives the market for listed stocks as a whole – that is, the world economy. The complexity of the problem can be reduced dramatically by focusing on relative prices; for example, instead of trying to predict the absolute price of stock XYZ, you can try to predict the price of stock XYZ relative to other stocks in XYZ's industry. By reducing the problem's scope, (2) and (3) can be ignored. In practice, investors can try to monetize such relative value predictions via market-neutral investment strategies.
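
A minimal sketch of this scope reduction: demean the forecast within each industry, so the alpha bets only on a stock's performance relative to its peers. The tickers, labels, and numbers are made up for the example:

import pandas as pd

df = pd.DataFrame({
    "stock": ["XYZ", "ABC", "DEF", "GHI"],
    "industry": ["tech", "tech", "energy", "energy"],
    "forecast": [0.020, 0.010, -0.005, 0.015],  # raw return forecasts
})

# Subtract the industry mean: the industry and market components,
# items (2) and (3) above, cancel out of the resulting bet.
df["relative"] = (df["forecast"]
                  - df.groupby("industry")["forecast"].transform("mean"))
print(df)  # XYZ's relative forecast is +0.005: long XYZ, short ABC in tech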

EVALUATION

What is a good alpha? There is no single metric that will answer that question. The answer depends in part on how the alpha is going to be used. Certain investment strategies require very strong predictors; others can benefit even from weak ones. Here are some pointers for alpha evaluation:

  • Good in-sample performance doesn't guarantee good out-of-sample performance.
  • Outliers can ruin a model and lead to erroneous predictions.
  • Multiple-hypothesis testing principles imply that the more effort is spent sifting through evidence and the more alternatives are considered, the higher the chance that the best-looking model is a statistical fluke rather than a true one.
  • An out-of-sample period is necessary to validate a model's predictive ability. The longer the out-of-sample period, the higher the confidence in the model but the less in-sample data available to calibrate the model. The optimal ratio of in-sample to out-of-sample data in model building depends on the model's complexity.
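
A minimal sketch of the in-sample/out-of-sample comparison described in the last point; the 80/20 split is an arbitrary choice for the example:

import numpy as np

def sharpe(pnl):
    return pnl.mean() / pnl.std() * np.sqrt(252)

rng = np.random.default_rng(4)
daily_pnl = rng.normal(0.0005, 0.01, 1250)  # five years of daily alpha pnl

split = int(len(daily_pnl) * 0.8)  # first 80% in-sample, rest out-of-sample
print("in-sample Sharpe:    ", round(sharpe(daily_pnl[:split]), 2))
print("out-of-sample Sharpe:", round(sharpe(daily_pnl[split:]), 2))
# A sharp drop from in-sample to out-of-sample is the classic
# signature of an overfit model.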

LOOKING BACK

Backtesting involves looking back in time to evaluate how a forecast or trading strategy would have performed historically. Although backtesting is invaluable (providing a window into both the markets and how the alpha would have performed), there are two important points to remember:

  • History does not repeat itself exactly. So while an alpha idea may look great in a backtest, there's no guarantee (only a level of confidence) it will continue to work in the future. This is because of the perverse power of computation and the ability of creative modelers to miss the forest for the trees. With computational resources, you can evaluate a very large number of ideas and permutations of those ideas. But without the discipline of keeping track of what ideas were tried, and without taking that into account when evaluating the likelihood that a model is a true model and not a mere statistical artifact (multiple-hypothesis testing principles), you may end up mistaking lumps of coal for gold.
  • New algorithmic modelers looking back at history tend to conclude that the market was much easier to trade than it actually was. This is due to several effects. First, hindsight is 20/20. Second, data scrubbed to polish rough edges can lead to overinflated historical alpha performance. Last, computational power and technology evolve, and today's tools were not available historically. For example, ideas that seemed simple enough to program in a Lotus spreadsheet in the 1980s were actually not so simple to discover and implement back then. Every period has its own market and its own unique market opportunities. Each generation of algorithmic modelers has an opportunity set that includes the possibility of discovering powerful market forecasts that will generate significant profit.

THE OPPORTUNITY

Exploitable price patterns and tradable forecasting models exist because market participants differ in their investment objectives, their preferences (such as risk tolerance), and their ability to process information. Participants work with a finite set of resources and aim to optimize their investment strategies subject to the limits imposed by those resources. They leave to others the chance to take advantage of whatever trading opportunities they haven't had the bandwidth or ability to focus on. Market participants with long-term investment horizons tend not to pay the same attention to short-term price variations as participants with short-term investment horizons. Conversely, traders with a short-term investment horizon can operate efficiently and effectively without having an understanding of the fundamental valuation principles that are used by institutional investors concerned with scalability, tax efficiency, and longer-term performance (or, in some cases, performance relative to an index). Traders who use leverage cannot tolerate volatility and drawdowns to the same extent that a nonleveraged trader can. Firms with larger technology budgets can beat the competition in areas like the monetization of short-term alphas via a low-latency infrastructure, the exploitation of large-scale data processing, or the application of computationally intensive machine learning or artificial intelligence forecasting techniques.

The goal of an alpha researcher is to discover forecastable prices or price relationships that investors may profit from. The fact that the market is continuously evolving and responding to new information and new information sources ensures that the opportunity to find alphas will continue to exist indefinitely. That is good news for the next generation of alpha researchers, but it also implies that models designed for market conditions that no longer exist will cease to function, their forecasting power decreasing inexorably with time. An alpha researcher's job is never finished.