The Lawrence Livermore-led effort is one of nine DOE-funded data reduction projects

This illustration of the density field from a Rayleigh-Taylor instability simulation shows how lossy compression affects data quality. Using the open-source zfp software, researchers can vary the compression ratio depending on the desired accuracy, from 10:1 (left) to 250:1 (right), where compression errors become apparent. Image courtesy of Peter Lindstrom.
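
The ratio-versus-quality trade-off shown in the figure can be explored in miniature with zfp's fixed-rate mode. The sketch below is illustrative only: it assumes zfp's Python bindings (zfpy) are installed and substitutes a smooth synthetic array for the actual density field, so its numbers will not match the figure.

```python
# Illustrative sketch using zfp's Python bindings (zfpy); the synthetic field
# below is a stand-in for real simulation data, not the data in the figure.
import numpy as np
import zfpy

# Smooth synthetic "density field": 64^3 double-precision values.
x, y, z = np.meshgrid(*(np.linspace(0.0, 1.0, 64),) * 3, indexing="ij")
field = np.sin(4 * np.pi * x) * np.cos(4 * np.pi * y) * z

for ratio in (10, 250):
    # Fixed-rate mode: request (64 bits per double) / ratio bits per value.
    rate = 64.0 / ratio
    compressed = zfpy.compress_numpy(field, rate=rate)
    restored = zfpy.decompress_numpy(compressed)
    max_err = np.max(np.abs(field - restored))
    print(f"{ratio}:1 -> {len(compressed):,} bytes, max abs error {max_err:.3e}")
```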

A Lawrence Livermore National Laboratory-led data compression effort was one of nine projects recently funded by the U.S. Department of Energy (DOE) for research aimed at reducing the amount of data needed to advance scientific discovery.

LLNL was among five DOE National Laboratories to receive awards totaling $13.7 million for data reduction in scientific applications, where massive data sets produced by high-fidelity simulations and increasingly powerful supercomputers are beginning to exceed scientists' capacity to collect, store and analyze them efficiently.

Under the project, called ComPRESS (Compression and Progressive Retrieval for Exascale Simulations and Sensors), scientists at LLNL will seek to better understand data compression errors, develop models to increase confidence in data compression for science and design a new hardware and software framework to dramatically improve the performance of working with compressed data. The team anticipates that the improvements will enable higher-fidelity science runs and experiments, particularly on emerging exascale supercomputers capable of at least a quintillion operations per second.

Led by Principal Investigator Peter Lindstrom and LLNL Co-Investigators Alyson Fox and Maya Gokhale, the ComPRESS team will develop capabilities that allow sufficient information to be retrieved while storing only compressed representations of the data. They plan to develop error models and other techniques for precise error distributions, along with new hardware compression designs to improve I/O performance. The work will build on zfp, a versatile, open-source, high-speed data compressor developed at the Lab that can dramatically reduce the amount of data to be stored or transferred.
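
To make the role of error bounds concrete, here is a minimal sketch of zfp's fixed-accuracy mode via the same zfpy bindings; the array and the 1e-3 tolerance are illustrative assumptions rather than ComPRESS parameters. In this mode, zfp bounds the absolute error of each reconstructed value by the requested tolerance.

```python
# Illustrative sketch: zfp's fixed-accuracy (error-bounded) mode via zfpy.
# The data and the tolerance are assumptions chosen for demonstration.
import numpy as np
import zfpy

data = np.linspace(0.0, 1.0, 1_000_000).reshape(100, 100, 100)

tolerance = 1e-3  # absolute error bound passed to zfp
compressed = zfpy.compress_numpy(data, tolerance=tolerance)
restored = zfpy.decompress_numpy(compressed)

ratio = data.nbytes / len(compressed)
max_err = np.max(np.abs(data - restored))
print(f"compression ratio {ratio:.1f}:1, max abs error {max_err:.2e} (bound {tolerance})")
```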

“Scientific advances increasingly rely on the acquisition, analysis, transfer and archiving of huge amounts of digital data generated by computer simulations, observations and experiments,” said Lindstrom. “Our project aims to address the challenges associated with this deluge of data by developing next-generation data compression algorithms that reduce or even discard unimportant information in a controlled manner, without negatively impacting scientific conclusions.”

In addition to drastically reducing storage and transfer costs, data compression is expected to help answer important scientific questions by enabling the acquisition of higher-resolution data that, without compression, would greatly exceed current storage and bandwidth capabilities, Lindstrom added.

The team anticipates that the framework will reduce the amount of data required by 2 to 3 orders of magnitude and plans to demonstrate the techniques on scientific applications, including climate science, seismology, turbulence and fusion.

Managed by the Advanced Scientific Computing Research program within the DOE Office of Science, the newly funded data reduction projects were evaluated for efficiency, effectiveness and reliability, according to the DOE. They cover a wide range of promising data reduction topics and techniques, including advanced machine learning, large-scale statistical calculations, and new hardware accelerators.

To learn more, visit the web.
