How to handle big data
Central banks and national statistical offices are increasingly looking to big data sets and analytics for new insights, but managing data of this magnitude requires new data platforms. This week a report from the Bank for International Settlements' Irving Fisher Committee on Central Bank Statistics (IFC) breaks down how best to deal with big data.
Novel big data sets, such as payment transactions, unstructured granular data collected from the web, and the large, structured financial data sets collected by banks since the 2008 financial crisis, increasingly complement traditional statistics. Analysing big data with AI and machine learning techniques can help public authorities obtain more timely economic signals and enhance their forecasts and risk assessments.
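As a flavour of this kind of analysis, the sketch below shows a hypothetical nowcasting exercise in Python: daily payment transactions are aggregated into monthly activity indicators, which are then used to estimate an official statistic before it is published. The file names, column names and target series are all invented for illustration.

```python
# Hypothetical sketch: nowcasting a monthly indicator from daily
# payment-transaction data. File and column names are invented.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Daily card-payment records: one row per transaction (date, amount).
tx = pd.read_csv("payments.csv", parse_dates=["date"])

# Aggregate to monthly activity indicators: total value and volume.
monthly = tx.resample("MS", on="date")["amount"].agg(["sum", "count"])

# Official series to be nowcast, e.g. monthly retail sales growth,
# assumed here to be timestamped at the start of each month.
target = pd.read_csv("retail_sales.csv", parse_dates=["month"],
                     index_col="month")

# Align payments indicators with the published series and fit a
# simple benchmark regression.
data = monthly.join(target, how="inner").dropna()
model = LinearRegression().fit(data[["sum", "count"]], data["growth"])

# Nowcast the latest month: payments data already exist, but the
# official statistic has not yet been released.
latest = monthly.iloc[[-1]]
print("Nowcast:", model.predict(latest[["sum", "count"]])[0])
```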
Many banks are already implementing big data platforms and complementary computing infrastructure to support the secure storage and rapid processing of very large data sets in complex simulations. The report draws on their experiences to offer guidance on big data infrastructure projects and strategic analytical techniques, along with insights into how to make the most of such infrastructure to support policy making.
The upcoming release of xBRL-CSV, part of the Open Information Model (OIM), will offer a way to simplify the handling of big or granular data sets. xBRL-CSV has been developed specifically in response to regulator demand for more granular data collection. It retains the existing capabilities of XBRL: data requirements can be defined in a taxonomy, and reported data can be validated against it, improving accuracy. At the same time, its CSV-based format is compact, easing storage demands. In future, xBRL-CSV will provide a way to integrate XBRL capabilities effectively within the kind of big data infrastructures discussed in the report.
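To give a flavour of the format, here is a minimal, hypothetical sketch of an xBRL-CSV report written from Python: a small JSON metadata file binds CSV columns to taxonomy concepts, and the granular data itself lives in a plain CSV file. The taxonomy URL, namespace, concept name and identifiers are all invented, and the precise metadata syntax is fixed by the OIM specifications, so treat this as an illustration rather than a template.

```python
# Hypothetical sketch of a minimal xBRL-CSV report: a JSON metadata
# file binding CSV columns to taxonomy concepts, plus the CSV data.
# Taxonomy, namespace, concept and entity names are invented; the
# exact syntax is defined by the OIM xBRL-CSV specification.
import csv
import json

metadata = {
    "documentInfo": {
        "documentType": "https://xbrl.org/2021/xbrl-csv",
        "namespaces": {"eg": "https://example.com/taxonomy"},
        "taxonomy": ["https://example.com/taxonomy/loans.xsd"],
    },
    # One template describing the shape of every row in loans.csv.
    "tableTemplates": {
        "loans": {
            "dimensions": {
                "entity": "eg:Bank001",
                "period": "2021-03-31T00:00:00Z",
            },
            "columns": {
                "loan_id": {},  # row identifier, not a reported fact
                "amount": {
                    "dimensions": {
                        "concept": "eg:LoanAmount",
                        "unit": "iso4217:EUR",
                    }
                },
            },
        }
    },
    "tables": {"loans_table": {"template": "loans", "url": "loans.csv"}},
}

# Write the metadata file describing how to interpret the CSV.
with open("loans.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Write the granular data: one compact row per loan, with none of
# the per-fact overhead of a traditional XBRL instance document.
with open("loans.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["loan_id", "amount"])
    writer.writerows([["L001", "250000"], ["L002", "125000"]])
```

Because the bulk of the report is plain CSV, it compresses well and can be loaded directly into the columnar stores and distributed processing engines typical of the big data platforms discussed above.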
Read the IFC Report here and find out more about the OIM here.