Economists cash in on efficient, high-performance computing method

Economists have so far made little use of high-performance computing (HPC) in their research, even though the complex interactions and heterogeneity of their models can quickly push them to hundreds of dimensions, which cannot be handled with conventional methods. In the past, simplified models were therefore often formulated to answer complex questions. These models solved some problems, but they could also deliver false predictions, explains Simon Scheidegger, Senior Assistant at the University of Zurich's Department of Banking and Finance. Quantitatively studying optimal monetary policy in the wake of a financial crisis, for example, cannot be done properly with conventional methods. However, calculating high-dimensional models on a supercomputer is not easy either: until recently, researchers lacked suitable numerical methods and sufficiently efficient software.

The curse of dimensionality

Unlike in physics models, where time is treated as a fourth dimension alongside the three spatial dimensions, economic models may have to consider ten or even a hundred times as many dimensions. Even a "simple" model of pension insurance in a single country, which aims to depict the prosperity of its society at each year of age, shows how quickly high dimensionality is reached: "If we assume that people will live to 80 years old on average and will be earning from the age of 20, and want to determine prosperity for each year of age, we already have 60 dimensions," explains Scheidegger. What's more, people make their current decisions while taking future uncertainties into account. Ideally, a model should consider all of these influences.
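A back-of-the-envelope calculation (a hypothetical illustration, not taken from the study) shows why so many dimensions overwhelm conventional methods: a conventional full tensor-product grid needs a number of points that grows exponentially with the number of dimensions.

# Hypothetical illustration of the curse of dimensionality (Python):
# a full tensor-product grid needs points_per_dim**dims grid points.
points_per_dim = 2   # even the coarsest possible resolution per state variable
dims = 60            # one state per year of age between 20 and 80
print(f"{float(points_per_dim ** dims):.2e} grid points")  # about 1.15e+18 -- intractable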

There are two main sticking points in calculating such complex economic models. The first is recursively approximating the high-dimensional functions over many iteration steps. The second is that, within each iteration, systems of non-linear equations must be solved at millions of grid points that describe the model. Calculating such a model can take hours and sometimes days of computing time, even on a high-performance supercomputer such as Piz Daint.
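The article does not show code, but the iterate-and-solve structure it describes can be sketched for a toy one-dimensional model. All functional forms and parameter values below are illustrative assumptions, not the authors' model: a guessed policy function is refined iteration by iteration, and at every grid point a non-linear equation is solved.

# Minimal Python sketch of the recursive structure described above,
# for a toy one-dimensional growth model (illustrative assumptions only).
import numpy as np
from scipy.optimize import brentq

beta, alpha = 0.96, 0.36                  # assumed discount factor and capital share
f  = lambda k: k**alpha                   # production function
fp = lambda k: alpha * k**(alpha - 1.0)   # its derivative
up = lambda c: 1.0 / c                    # marginal utility (log utility)

grid = np.linspace(0.05, 0.5, 50)         # grid over the single state variable (capital)
policy = 0.5 * f(grid)                    # initial guess: consume half of output

for iteration in range(500):
    new_policy = np.empty_like(policy)
    for i, k in enumerate(grid):
        def euler_residual(c):
            k_next = f(k) - c                              # tomorrow's capital
            c_next = np.interp(k_next, grid, policy)       # interpolate current policy guess
            return up(c) - beta * up(c_next) * fp(k_next)  # Euler-equation residual
        # Solve one non-linear equation at this grid point
        new_policy[i] = brentq(euler_residual, 1e-6, f(k) - 1e-6)
    if np.max(np.abs(new_policy - policy)) < 1e-8:         # policy function has converged
        policy = new_policy
        break
    policy = new_policy

print("converged after", iteration + 1, "iterations")

In the models the researchers actually solve, the single capital grid above is replaced by a grid over dozens of state variables, and the equation at each grid point becomes a coupled non-linear system, which is why supercomputers such as Piz Daint are needed.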

Nested model

To find a highly efficient solution that recursively computes the economic decision-making rules (known as policy functions), the researchers combined so-called sparse grids with a high-dimensional model reduction framework. "The resulting linear combinations of sparse grids, which describe the model and thus the policy functions, are nested together like a Russian doll and are lined up in such a way that they optimally approximate and describe the original high-dimensional space," explains Scheidegger. The beauty of it is that the code that computes the individual grids and their combination is highly parallelised: even in small models with "only" 50 dimensions, the method scales efficiently on Piz Daint to as many as 1,000 compute nodes at the same time. In simple terms, the dimensional decomposition framework ensures that only the relevant grid points and dimensions that describe the problem under consideration need to be calculated. To further reduce the time needed to solve for the policy functions and to keep communication between the processors and the processes running on them highly efficient, the researchers also used a hybrid parallelisation scheme combining the message passing interface (MPI) with Intel(R) Threading Building Blocks (TBB).
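The researchers' hybrid MPI/TBB scheme itself is written in C++ and is not shown in the article; the Python sketch below (using mpi4py and a thread pool as stand-ins, with a placeholder solve step) only illustrates the general pattern of such hybrid parallelisation: grid points are split across MPI ranks, each rank works through its share with several threads, and the partial results are then gathered.

# Hedged sketch of a hybrid distributed/shared-memory pattern (a Python
# stand-in for the MPI + TBB scheme mentioned above, not the authors' code).
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

def solve_at_point(point):
    # Placeholder for solving the non-linear system at one grid point.
    return sum(x * x for x in point)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Toy "grid": in the real application these would be sparse-grid points.
grid_points = [(i * 0.1, i * 0.2) for i in range(1000)]

# Coarse-grained parallelism: distribute grid points across MPI ranks ...
my_points = grid_points[rank::size]

# ... and fine-grained parallelism: each rank uses a thread pool for its share.
with ThreadPoolExecutor(max_workers=4) as pool:
    my_results = list(pool.map(solve_at_point, my_points))

# Gather the partial results from all ranks.
all_results = comm.allgather(my_results)
if rank == 0:
    print("total points solved:", sum(len(r) for r in all_results))

Such a sketch would be launched across several processes with, for example, mpirun -n 4 python sketch.py, with each process then spawning its own threads.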

Scheidegger and his colleagues have thus developed a method that properly accounts for heterogeneity and avoids oversimplification. It is also generic, so it can be applied to a variety of issues, from public finance models, such as state pensions, to central bank models. "As is the case in computer-aided physics or chemistry, the new method should enable models in economics to be solved fundamentally, that is ab initio, and then confronted with real-world data and adapted as necessary," says Scheidegger.

Further research on this topic will be conducted in a project for the Platform for Advanced Scientific Computing (PASC).

More information: www.pasc-ch.org/projects/solid … -agent-macro-models/

Aryan Eftekhari et al., "Parallelized Dimensional Decomposition for Large-Scale Dynamic Stochastic Economic Models," Proceedings of the Platform for Advanced Scientific Computing Conference (PASC '17), 2017. DOI: 10.1145/3093172.3093234

Provided by Swiss National Supercomputing Centre

Citation: Economists cash in on efficient, high-performance computing method (2018, February 22) retrieved 29 March 2024 from https://phys.org/news/2018-02-economists-cash-efficient-high-performance-method.html
