The interplay of finance and technology will lead to inevitable ups and downs for investors.


Lyft’s fourth quarter earnings report contained an embarrassing typo earlier this week, accidentally tacking on an extra zero to one of its key financial forecasts. The company said a profitability metric would rise 500 basis points rather than 50. The blunder lit trading algorithms ablaze, sending Lyft’s stock price surging over 60% in after-hours trading. Within a day, cooler heads prevailed and shares gave back most gains when the company clarified the mistake.

Reviving memories of the 2010 “flash crash,” the algorithm-induced stock market plunge, the Lyft slip-up highlights how fallible human beings interacting with powerful new technologies can produce a rollercoaster ride for investors.

Innovation is a double-edged sword, and whenever new inventions are unleashed into complex social systems there is potential for something to go haywire. High-speed automated trading systems enable fast, low-cost investing but may have also reacted imprudently to Lyft’s typo. Manual trading, meanwhile, might have caught the mistake, but relying on it would mean giving up the efficiency benefits of algorithmic trading.

Corporate reporting errors leading to investor headaches are nothing new, of course, and they do not have to stem from any advanced technology either. In 1999, for example, a biotech company called Biomatrix accidentally reversed two digits on an inventory document. Rather than reporting that it had 32,000 doses of an injectable knee pain medication in stock, the company stated during an investor earnings call that the figure was 23,000 doses. When Biomatrix later corrected the error via fax, the company’s share price dropped 10% as investors feared it held too much inventory.

Today, the reaction to, and resolution of, mistakes happens a lot more quickly, as facsimile and snail mail have been replaced by email and algorithm. The Lyft share mispricing was quickly corrected once revealed, highlighting how technology and automation often form part of the solution to problems as well. In the future, it is easy to see how the use of AI in financial reporting could make mistakes like Lyft’s less likely.


While some might jump to the conclusion that more regulation is needed, regulation may also be part of the problem. The U.S. government’s quarterly reporting requirements for public companies may encourage the kind of haste that makes mistakes more likely. In Europe, by contrast, required financial reporting is less frequent.

Nor is the potential for technologies to amplify human frailties unique to the private sector. Military personnel have come dangerously close to setting off a nuclear war on more than one occasion. In the 1950s, for example, a flock of Canada geese was interpreted as a possible Soviet attack and set off an early warning system in the U.S. In this sense, the lesson for regulators may not be all that different from the one for investors. Just like Wall Street firms, regulators suffer limitations in foreseeing the ripple effects of their interventions. Caution is therefore warranted from both parties when experimenting with technologies that could have dangerous side effects.

Moreover, the private sector can be expected to test new technologies before unleashing them on the broader world. Regulators are rarely so careful or judicious with their own regulations. Last year, the Securities and Exchange Commission (SEC) proposed sweeping new rules targeting possible conflicts of interest in broker-dealers’ and investment advisers’ use of AI and predictive data analytics. Yet the agency never bothered to calculate the benefits of its intervention, nor did it seriously evaluate any alternatives to its proposal. The regulatory push appears to be an anxious overreaction to the GameStop trading frenzy from a few years back. Feeling an apparent need to “do something,” the SEC displays its own set of human biases by regulating technology first and asking questions later.

Even though Lyft’s goof will likely fade from memory soon, it provides a lesson that will hopefully endure. Technologies and markets evolve rapidly, often confounding even their creators. Predicting all of the potential failure points where new tech might compound human errors is impossible.

Fortunately, people learn from their mistakes, and often do so quickly. Investors burned by Lyft’s typo will probably exercise more care with future earnings reports. Nevertheless, questions about our ability to control technology within complex social environments should give both the risk-averse regulator and the optimistic technologist pause. Our capacity to anticipate the effects of transformative new technologies is, and probably always will be, severely limited. To err is human. Expect unintended consequences wherever technology and people are involved.