1. Contrastive learning of cell state dynamics in response to perturbations
- Authors
Pradeep, Soorya; Imran, Alishba; Liu, Ziwen; Theodoro, Taylla Milena; Hirata-Miyasaki, Eduardo; Ivanov, Ivan; Bhave, Madhura; Khadka, Sudip; Woosley, Hunter; Arias, Carolina; and Mehta, Shalin B.
- Subjects
Computer Science - Computer Vision and Pattern Recognition; Quantitative Biology - Quantitative Methods
- Abstract
We introduce DynaCLR, a self-supervised framework for modeling cell dynamics via contrastive learning of representations of time-lapse datasets. Live-cell imaging of cells and organelles is widely used to analyze cellular responses to perturbations. Human annotation of dynamic cell states captured in time-lapse perturbation datasets is laborious and prone to bias. DynaCLR integrates single-cell tracking with time-aware contrastive learning to map images of cells at neighboring time points to neighboring embeddings. Mapping the morphological dynamics of cells to a temporally regularized embedding space makes annotation, classification, clustering, and interpretation of cell states more quantitative and efficient. We illustrate the features and applications of DynaCLR with the following experiments: analyzing the kinetics of viral infection in human cells, detecting transient changes in cell morphology due to cell division, and mapping the dynamics of organelles during viral infection. Models trained with DynaCLR consistently achieve $>95\%$ accuracy for infection state classification, enable the detection of transient cell states, and reliably embed unseen experiments. DynaCLR provides a flexible framework for comparative analysis of cell state dynamics under perturbations such as infection, gene knockouts, and drugs. We provide PyTorch-based implementations of the model training and inference pipeline (https://github.com/mehta-lab/viscy) and a user interface (https://github.com/czbiohub-sf/napari-iohub) for visualizing and annotating cell trajectories in real space and in the embedding space.
- Comment
20 pages, 6 figures, 3 appendix figures, 4 videos (ancillary files)
- Published
2024
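
The abstract describes time-aware contrastive learning that pulls embeddings of the same tracked cell at neighboring time points together. Below is a minimal PyTorch sketch of one plausible instance of such an objective; the class name `TimeAwareInfoNCE`, the toy encoder, and the pairing of each cell at time t with itself at t+1 are illustrative assumptions and do not reflect the actual VisCy/DynaCLR implementation.

```python
# Hypothetical sketch: InfoNCE-style loss where the positive for each tracked cell
# at time t is the same cell at time t+1; other cells in the batch act as negatives.
# Not the DynaCLR API -- names and encoder are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TimeAwareInfoNCE(nn.Module):
    def __init__(self, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature

    def forward(self, z_t: torch.Tensor, z_t1: torch.Tensor) -> torch.Tensor:
        # z_t, z_t1: (batch, dim) embeddings of the same cells at times t and t+1.
        z_t = F.normalize(z_t, dim=1)
        z_t1 = F.normalize(z_t1, dim=1)
        logits = z_t @ z_t1.T / self.temperature          # pairwise similarities
        targets = torch.arange(z_t.size(0), device=z_t.device)  # diagonal = positives
        return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: a small CNN maps single-cell image patches to 64-d embeddings.
    encoder = nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
    )
    loss_fn = TimeAwareInfoNCE()
    patches_t = torch.randn(8, 1, 64, 64)    # cell crops at time t
    patches_t1 = torch.randn(8, 1, 64, 64)   # the same tracked cells at t+1
    loss = loss_fn(encoder(patches_t), encoder(patches_t1))
    loss.backward()
    print(f"contrastive loss: {loss.item():.3f}")
```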