
Pushing compute and AI onto detector silicon

Authors :
Miceli, Antonino
Yoshii, Kazutomo
Foster, Ian T.
Publication Year :
2022

Abstract

In order to take full advantage of the U.S. Department of Energy's billion-dollar investments in next-generation research infrastructure (e.g., exascale computing, light sources, colliders), advances are required not only in detector technology but also in computing and, specifically, AI. Let us consider an example from X-ray science. Nanoscale X-ray imaging is a crucial tool enabling a wide range of scientific explorations, from materials science and biology to mechanical and civil engineering. The next-generation light sources will increase X-ray beam brightness and coherent flux by 100 to 1,000 times. To image larger samples, the continuous frame rate of pixel array detectors must be increased toward 1 MHz, which requires several Tbps (aggregated) to transfer pixel data out to a data acquisition system. With 65-nm CMOS technology, an optimistic raw data rate off such a chip is about 100-200 Gbps. However, a continuous 1 MHz detector with only $256 \times 256$ pixels at 16-bit resolution, for example, already requires 1,000 Gbps (i.e., 1 Tbps) of off-chip bandwidth. It is impractical to run enough high-speed transceivers in parallel to provide such bandwidth; this off-chip link represents the first data bottleneck. New approaches are needed to reduce the data volume by performing data compression or AI-based feature extraction directly on the detector silicon, in a streaming manner, before data are sent off-chip.

Comment: White paper for AI@DOE Roundtable, December 8-9, 2021 (virtual). arXiv admin note: text overlap with arXiv:2110.07828
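
To make the bandwidth arithmetic above concrete, the following Python sketch reproduces it. The pixel count, bit depth, frame rate, and the 100-200 Gbps off-chip figure are the numbers quoted in the abstract; the variable names and the choice of 200 Gbps as the I/O budget for the final ratio are illustrative assumptions, not from the paper.

# Bandwidth arithmetic from the abstract; pixel count, bit depth, frame rate,
# and the 100-200 Gbps I/O figure are quoted there. Variable names and the
# 200 Gbps budget used for the reduction factor are illustrative assumptions.
PIXELS_X = 256            # pixel array width
PIXELS_Y = 256            # pixel array height
BIT_DEPTH = 16            # bits per pixel
FRAME_RATE_HZ = 1.0e6     # continuous 1 MHz frame rate

bits_per_frame = PIXELS_X * PIXELS_Y * BIT_DEPTH    # ~1.05 Mbit per frame
raw_rate_bps = bits_per_frame * FRAME_RATE_HZ       # ~1.05 Tbps aggregate off the chip

io_budget_bps = 200e9     # optimistic 65-nm CMOS off-chip rate (upper end of 100-200 Gbps)

print(f"Raw detector data rate:        {raw_rate_bps / 1e12:.2f} Tbps")
print(f"Off-chip I/O budget:           {io_budget_bps / 1e9:.0f} Gbps")
print(f"On-chip data reduction needed: ~{raw_rate_bps / io_budget_bps:.0f}x")

Under these assumptions the required on-chip reduction is roughly 5x at 200 Gbps (or about 10x at 100 Gbps), which is the gap the proposed on-detector compression or AI-based feature extraction must close.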

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2205.10602
Document Type :
Working Paper