Barely a week after New Year’s Day, the Bank of England’s chief economist, Andy Haldane, admitted to his profession’s failure to foresee the financial crisis and its miscalculation of the impact of Brexit.

In contrast to economists’ forecasting blunders, Haldane argued that the dramatic improvement in weather forecasting came down to one thing: the availability of data. So why can’t economists fare as well as weather forecasters?

He was not the first critic. Economics’ famous nickname, “the dismal science”, is often associated with its grim record of predicting the future economy, from the Great Depression of 1929 to the financial crash of 2008.

A famous punchline, repeated in the likes of The Economist and the Financial Times, carries a similar message: “the average expert was roughly as accurate as a dart-throwing chimp”, suggesting that experts’ predictions did no better than random guessing.

The trust deficit in government and expert judgment adds to the pressure to fine-tune economic predictions. So why do economists so often end up with missed forecasts?

Two types of errors, incentive-induced and systematic, need to be minimised to save economic forecasts from the embarrassing mistakes of the past.

Behavioural economist Dan Ariely argued that humans are often blinded by their own incentives and, therefore, make biased judgments. This incentive-induced error was at its peak prior to the 2008 crisis.

Banks would employ “dress-to-impress” forecasting to tell their wealthy clients how the economy would look over the next decade or so.

The documentary Inside Job revealed how the financial services industry paid leading academics to write reports about the soundness of the industry. Credit rating agencies, too, had a strong incentive to give favourable ratings to the customers who paid their bills.

Accuracy is rarely mentioned. Old forecasts are soon forgotten and the pundits are virtually never held accountable for them. Experts’ reputations rely heavily instead on their clients’ profits.

These incentive-induced errors suggest that regulators need to alter the incentives so that experts are rewarded for accurate predictions. Is this enough? Unfortunately not. The more systematic errors, in economic models and in human judgment, need to be tackled too.

Psychologists, led by Daniel Kahneman and Amos Tversky, have shown that markets are full of irrational behaviour. This is why a perfectly constructed rational economic model fails to detect when a bubble will ultimately burst.

Anchoring economic forecasts to previous estimates and frequently quoted sources is common too, but it can give rise to predictable forecast errors.

Modern psychology has also revealed that the human mind craves certainty. When it cannot find any, it imposes it.

This blunts forecasters’ ability to recognise irreducible uncertainty, which Philip E. Tetlock and Dan Gardner describe in their book Superforecasting as “uncertainty that is impossible to eliminate, even in theory”.

The “I-knew-it-all-along” mentality, or hindsight bias, the belief of having predicted an event only after it has occurred, makes forecasters overconfident. They go on to make another poor forecast rather than admit the miss and start thinking about how to improve.

Are there ways to save economic forecasting from such errors? On incentive-induced errors, regulators are central to altering the incentives surrounding economic forecasts.

Some progress has been made. After the 2008 financial crash, the Dodd-Frank Act of 2010 made changes that increased the liability of credit rating agencies for faulty ratings. But more can be done.

First, it is time to rest economic forecasters’ reputations on the accuracy of their forecasts. Our economy is made up of complex and interrelated components; what seems a minor forecast error could have major consequences for the economy.

Precision is key. For central banks, minimising forecast errors facilitates more meaningful discussion of the implications of different policy options.

The use of the Brier score, which measures the distance between what a forecaster predicted and what actually happened, should therefore be encouraged among the pundits.
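For readers unfamiliar with it, the Brier score is simply the average squared gap between the probability a forecaster assigned to an event and what actually happened: zero for a perfect forecaster, 0.25 for someone who always hedges at 50 per cent. The short sketch below, using made-up probabilities and outcomes purely for illustration, shows how it is calculated.

```python
# A minimal sketch of the Brier score for yes/no forecasts.
# The probabilities and outcomes below are hypothetical, not real economic data.

def brier_score(probabilities, outcomes):
    """Average squared distance between forecast probabilities and outcomes.

    probabilities: the probability the forecaster gave that the event happens (0.0 to 1.0)
    outcomes: 1 if the event actually happened, 0 if it did not
    Lower is better: 0.0 is perfect, 0.25 is a permanent 50-50 hedge.
    """
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# Example: a pundit assigns probabilities to four "recession next year?" calls.
forecasts = [0.9, 0.2, 0.7, 0.1]   # hypothetical forecast probabilities
actuals   = [1,   0,   0,   0]     # what actually happened
print(brier_score(forecasts, actuals))  # 0.1375; closer to 0 is better
```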

Second, economic modelling could be improved to recognise irrational behaviour. Easier said than done, but it is one way to save economists’ reputations.

Third, Tetlock and Gardner offered useful insights on how to improve forecasters’ skill. Forecasters can start by predicting the potentially predictable and breaking the seemingly intractable problems into sub-problems.

They need to strike a balance between under- and overconfidence, and between under- and overreacting to evidence, as well as recognising hindsight bias so as to stop justifying missed forecasts.

Learning thus requires doing, together with feedback on why a forecast succeeded or failed.

Economists know the pitfall of forecasting: that it can seem pointless. The forecast outcome may never materialise, since policymakers may change policy precisely to avoid something they see in the forecast.

But that is the point. Policies are made for the future. Policymakers need to know what the future holds under different scenarios to prevent the economy from going down a slippery slope.

This article first appeared in New Straits Times on 17 January 2017.
