
MTNeuro: A Benchmark for Evaluating Representations of Brain Structure Across Multiple Levels of Abstraction

Authors :
Quesada, Jorge
Sathidevi, Lakshmi
Liu, Ran
Ahad, Nauman
Jackson, Joy M.
Azabou, Mehdi
Xiao, Jingyun
Liding, Christopher
Jin, Matthew
Urzay, Carolina
Gray-Roncal, William
Johnson, Erik C.
Dyer, Eva L.
Publication Year :
2022

Abstract

There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/

Comment: 10 pages, 4 figures, Accepted at NeurIPS 2022
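The benchmarking protocol described in the abstract, training lightweight readouts on top of learned representations and scoring them on downstream tasks such as brain-region prediction, can be sketched as follows. This is a purely illustrative example: the `encode` function, patch shapes, four-region label set, and synthetic data below are assumptions for demonstration, not the actual MTNeuro API or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(patches):
    """Stand-in for a pretrained (e.g., self-supervised) encoder.
    Here it simply flattens each 3D patch into a feature vector;
    a real evaluation would use a frozen learned network instead."""
    return patches.reshape(len(patches), -1)

# Synthetic stand-in for annotated microtomography patches:
# 32 patches of 8x8x8 voxels, each labeled with one of 4 brain regions.
patches = rng.random((32, 8, 8, 8)).astype(np.float32)
labels = rng.integers(0, 4, size=32)

features = encode(patches)

# Linear readout fit by least squares against one-hot region targets,
# a common lightweight protocol for probing representation quality.
onehot = np.eye(4)[labels]
W, *_ = np.linalg.lstsq(features, onehot, rcond=None)
preds = features @ W
accuracy = (preds.argmax(axis=1) == labels).mean()
print(f"readout accuracy on training patches: {accuracy:.2f}")
```

In the benchmark setting, the same frozen features would be reused across several such readouts (region classification, microstructure segmentation, etc.), so a representation is judged by how well it supports all of the tasks at once rather than any single one.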

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2301.00345
Document Type :
Working Paper