In histopathology, tissue sections are typically stained with the common H&E stain or with special stains (MAS, PAS, PASM, etc.) to clearly visualize specific tissue structures.
tissue structures. The rapid advancement of deep learning offers an effective
solution for generating virtually stained images, significantly reducing the
time and labor costs associated with traditional histochemical staining.
However, a new challenge arises in separating the fundamental visual
characteristics of tissue sections from the visual differences induced by
staining agents. Additionally, virtual staining often overlooks essential
pathological knowledge and the physical properties of staining, resulting in
only style-level transfer. To address these issues, we introduce, for the first
time in virtual staining tasks, a pathological vision-language large model
(VLM) as an auxiliary tool. We integrate contrastive learnable prompts, foundational concept anchors for tissue sections, and staining-specific concept anchors to leverage the extensive knowledge of the pathological VLM (see the sketch below). This
approach is designed to describe, frame, and enhance the direction of virtual
staining. Furthermore, we develop a data augmentation method built on constraints from the VLM. This method utilizes the VLM's powerful image interpretation capabilities to further integrate image style and structural information, which benefits high-precision pathological diagnosis (an illustrative filter also appears below).
Extensive evaluations on publicly available multi-domain unpaired staining
datasets demonstrate that our method can generate highly realistic images and
enhance the accuracy of downstream tasks, such as glomerular detection and
segmentation. Our code is available at:
https://github.com/CZZZZZZZZZZZZZZZZZ/VPGAN-HARBOR
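
Below is a minimal sketch of how contrastive learnable prompts could be tied to foundational and staining-specific concept anchors from a CLIP-style pathology VLM. All names (PromptBank, contrastive_prompt_loss), the example anchor concepts, and the InfoNCE objective are illustrative assumptions rather than the paper's actual implementation.

```python
# A minimal sketch, assuming a CLIP-style pathology VLM whose image
# encoder produces D-dimensional embeddings. Everything named here
# (PromptBank, contrastive_prompt_loss, the InfoNCE objective) is an
# illustrative assumption, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptBank(nn.Module):
    """Learnable prompt context fused with frozen concept-anchor embeddings."""

    def __init__(self, n_ctx: int, dim: int, anchor_emb: torch.Tensor):
        super().__init__()
        # n_ctx learnable context vectors shared across all anchors
        # (in the spirit of CoOp-style learnable prompts).
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Frozen anchor embeddings: one row per concept, e.g. text features
        # for "glomerulus", "renal tubule", "PAS-stained basement membrane".
        self.register_buffer("anchors", anchor_emb)

    def forward(self) -> torch.Tensor:
        # Pool the learnable context and add it as an offset to each anchor;
        # a full system would instead pass the prompt token sequence through
        # the VLM's text encoder.
        offset = self.ctx.mean(dim=0, keepdim=True)
        return F.normalize(self.anchors + offset, dim=-1)

def contrastive_prompt_loss(img_feat: torch.Tensor,
                            prompt_bank: PromptBank,
                            labels: torch.Tensor,
                            tau: float = 0.07) -> torch.Tensor:
    """InfoNCE between VLM image features and prompt-conditioned anchors.

    img_feat: (B, D) image embeddings; labels: (B,) target concept indices.
    """
    text_feat = prompt_bank()                 # (K, D), unit-normalized
    img_feat = F.normalize(img_feat, dim=-1)  # (B, D)
    logits = img_feat @ text_feat.t() / tau   # (B, K) similarity logits
    return F.cross_entropy(logits, labels)

# Toy usage: random tensors stand in for real VLM outputs.
dim, n_concepts, batch = 512, 8, 4
bank = PromptBank(n_ctx=16, dim=dim, anchor_emb=torch.randn(n_concepts, dim))
loss = contrastive_prompt_loss(torch.randn(batch, dim), bank,
                               torch.randint(0, n_concepts, (batch,)))
loss.backward()  # gradients reach only the learnable context vectors
```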
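The VLM-constrained data augmentation is described only at a high level above; one plausible instantiation, sketched under stated assumptions, is a similarity filter that discards augmented views whose VLM embedding drifts too far from the source image's embedding, so that style mixing cannot destroy structural content. The function name and threshold below are illustrative choices.

```python
# Hypothetical VLM-based constraint: keep an augmented view only if its
# VLM embedding stays close to the source image's embedding. The function
# name and the threshold tau_keep are assumptions for illustration.
import torch
import torch.nn.functional as F

@torch.no_grad()
def vlm_constrained_filter(src_feat: torch.Tensor,
                           aug_feat: torch.Tensor,
                           tau_keep: float = 0.8) -> torch.Tensor:
    """Boolean mask over the batch: True where the augmentation is kept.

    src_feat, aug_feat: (B, D) VLM image embeddings of the original and
    augmented views of the same tissue section.
    """
    sim = F.cosine_similarity(src_feat, aug_feat, dim=-1)  # (B,)
    return sim > tau_keep

# Toy usage with random embeddings in place of VLM features.
mask = vlm_constrained_filter(torch.randn(4, 512), torch.randn(4, 512))
```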