Preprint, 2022
DOI: 10.1101/2022.08.16.504171

Robust phenotyping of highly multiplexed tissue imaging data using pixel-level clustering

Abstract: While technologies for multiplexed imaging have provided an unprecedented understanding of tissue composition in health and disease, interpreting this data remains a significant computational challenge. To understand the spatial organization of tissue and how it relates to disease processes, imaging studies typically focus on cell-level phenotypes. However, images can capture biologically important objects that are outside of cells, such as the extracellular matrix. Here, we developed a pipeline, Pixie, that a…
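The abstract describes clustering at the pixel level rather than the cell level. Below is a minimal, hypothetical sketch of that general idea, assuming a (channels, height, width) image array and using scikit-learn's MiniBatchKMeans as a stand-in for the SOM/FlowSOM-based clustering the paper actually uses; it is an illustration of the workflow shape, not Pixie's implementation.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Hypothetical multiplexed image with C marker channels (shapes are illustrative).
C, H, W = 20, 512, 512
img = np.random.rand(C, H, W).astype(np.float32)

# Flatten to a (num_pixels, num_markers) matrix.
pixels = img.reshape(C, -1).T

# Normalize each pixel's marker vector so clustering reflects marker composition
# rather than total intensity (one common choice; the paper's preprocessing may differ).
totals = pixels.sum(axis=1, keepdims=True)
pixels_norm = np.divide(pixels, totals, out=np.zeros_like(pixels), where=totals > 0)

# Cluster pixels into candidate "pixel phenotypes".
km = MiniBatchKMeans(n_clusters=20, random_state=0).fit(pixels_norm)

# Map cluster labels back to image space for visual inspection or overlays.
label_image = km.labels_.reshape(H, W)
print(label_image.shape, np.unique(label_image).size)
```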

Cited by 22 publications (35 citation statements)
References 70 publications
“…We first tested CellSighter on a dataset of human melanoma lymph node metastases, acquired by MIBI-TOF [2]. We took sixteen 0.8×0.8 mm² images, encompassing 116,808 cells and generated labels for all cells using established approaches, including FlowSOM clustering [22], pixel clustering [31], gating and sequential rounds of visual inspection and manual annotation (Methods). Altogether, we distinguished fourteen cell types, including different types of tumor cells, stromal cells, vasculature and immune cells.…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…MIBI GastroIntestinal (GI) dataset: Labels for eighteen 0.4×0.4 mm² images were generated using FlowSOM clustering [22], in combination with pixel clustering [31] and sequential rounds of visual inspection and manual annotation. The model trained on the melanoma lymph node dataset was run on 9,532 cells from the following classes, which were shared across the two datasets: T regs (FoxP3, CD4, CD3, CD45), CD8 T cells (CD8, CD3, GranzymeB, CD45), CD4 T cells (CD4, CD3, CD45), B cells (CD20, CD45RA, CD45), Macrophages (CD68, CD206, CD163, CD14, DC-SIGN, CD45), Neutrophils (S100A9-Calprotectin), Stroma (SMA, COL1A1) and Endothelial (CD31).…”
Section: MIBI Melanoma Lymph Node Dataset
Citation type: mentioning (confidence: 99%)
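For illustration, the shared classes and their defining markers listed in the excerpt above can be written as a simple class-to-marker mapping. This is a hypothetical representation of that panel, not the authors' actual configuration format.

```python
# Hypothetical class -> marker mapping transcribed from the excerpt above.
SHARED_CLASSES = {
    "Treg":        ["FoxP3", "CD4", "CD3", "CD45"],
    "CD8 T cell":  ["CD8", "CD3", "GranzymeB", "CD45"],
    "CD4 T cell":  ["CD4", "CD3", "CD45"],
    "B cell":      ["CD20", "CD45RA", "CD45"],
    "Macrophage":  ["CD68", "CD206", "CD163", "CD14", "DC-SIGN", "CD45"],
    "Neutrophil":  ["S100A9-Calprotectin"],
    "Stroma":      ["SMA", "COL1A1"],
    "Endothelial": ["CD31"],
}
```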
“…Clustering methods also generally have problems with interpretability of semantic concepts for each cluster. A recent study used a self-organizing map (SOM) to cluster multiplexed stained tissue images [31]. An SOM is a dimensionality-reduction algorithm that can map and group high-dimensional features into a low-dimensional space, but, like clustering, it learns similarities among data and does not learn semantic concepts.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
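The excerpt above summarizes how an SOM maps high-dimensional feature vectors onto a low-dimensional grid. The following is a minimal, self-contained sketch of the core update rule only; FlowSOM and the cited pipeline add metaclustering and many practical refinements on top of this.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=5, lr0=0.5, sigma0=3.0, seed=0):
    """Train a tiny self-organizing map: each sample pulls its best-matching
    grid node (and that node's neighbors) toward itself."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.random((gx, gy, data.shape[1]))
    # Grid coordinates, used to compute neighborhood distances on the 2-D map.
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)               # learning rate decays over time
            sigma = sigma0 * (1.0 - frac) + 1e-3  # neighborhood radius shrinks over time
            # Best-matching unit: the grid node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood on the grid, centered on the BMU.
            h = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=-1) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

# Hypothetical usage: 1,000 "pixels", each a 20-marker intensity vector.
data = np.random.default_rng(1).random((1000, 20))
weights = train_som(data)
# Each sample maps to the 2-D grid node with the closest weight vector.
bmus = {tuple(np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                               weights.shape[:2])) for x in data}
print(len(bmus), "occupied grid nodes")
```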
“…This close network of cell bodies and projections results in spillover, whereby the protein signals from one cell overlap with the pixels of nearby cells. Several works have proposed methods to deal with spillover using compensation [3,32], pixel analysis [33] or neighborhood analysis [34], but these suffer from signal attenuation, difficulty in scaling to large datasets, or requirements for additional data sources such as scRNAseq on the same tissue. Altogether, cell classification has hitherto remained a time-consuming and labor-intensive task, requiring sequential rounds of clustering, gating, visual inspection and manual annotation.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
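The introduction excerpt above mentions compensation-based handling of spillover. A minimal sketch of the matrix-based idea follows; the spillover matrix values here are made up, and real workflows estimate the matrix from single-stained controls and typically constrain the solution to be non-negative.

```python
import numpy as np

# Hypothetical spillover matrix S for 3 channels: S[i, j] is the fraction of
# channel i's true signal that bleeds into channel j (diagonal = 1).
S = np.array([
    [1.00, 0.05, 0.00],
    [0.02, 1.00, 0.03],
    [0.00, 0.04, 1.00],
])

# Observed counts per cell (rows) and channel (columns): observed = true @ S.
observed = np.array([
    [105.0, 12.0, 3.0],
    [ 20.0, 80.0, 9.0],
])

# Compensate by inverting the mixing; clip to avoid negative intensities.
compensated = np.clip(observed @ np.linalg.inv(S), 0.0, None)
print(compensated.round(2))
```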