xiaoshi committed · Commit c07ba1a · verified · 1 parent: a5880bd

Upload 3 files
Files changed (3):
  1. README-MedLSAM.md +240 -0
  2. boost_1_76_0.tar.gz +3 -0
  3. cmake-3.20.0.tar.gz +3 -0
README-MedLSAM.md ADDED
@@ -0,0 +1,240 @@
# [MedLSAM: Localize and Segment Anything Model for 3D Medical Images](https://arxiv.org/abs/2306.14752)

### Welcome to OpenMEDLab! 👋

---

## Key Features

- **Foundation Model for 3D Medical Image Localization**: MedLSAM introduces MedLAM, a foundation model for localizing anatomical structures in 3D medical images.
- **First Fully-Automatic Medical Adaptation of SAM**: MedLSAM is the first fully automatic medical adaptation of the Segment Anything Model (SAM); its primary goal is to significantly reduce the annotation workload in medical image segmentation.
- **Segment Any Anatomy Target Without Additional Annotation**: MedLSAM is designed to segment any anatomical target in 3D medical images without further annotation, improving the automation and efficiency of the segmentation process.

## Updates

- 2023.10.15: Accelerated inference and added Sub-Patch Localization (SPL).
- 2023.07.01: Code released.

## Details

> The Segment Anything Model (SAM) has recently emerged as a groundbreaking model in the field of image segmentation. Nevertheless, both the original SAM and its medical adaptations necessitate slice-by-slice annotations, which directly increase the annotation workload with the size of the dataset. We propose MedLSAM to address this issue, ensuring a constant annotation workload irrespective of dataset size and thereby simplifying the annotation process. Our model introduces a few-shot localization framework capable of localizing any target anatomical part within the body. To achieve this, we develop a Localize Anything Model for 3D Medical Images (MedLAM), utilizing two self-supervision tasks: relative distance regression (RDR) and multi-scale similarity (MSS) across a comprehensive dataset of 14,012 CT scans. We then establish a methodology for accurate segmentation by integrating MedLAM with SAM. By annotating only six extreme points across three directions on a few templates, our model can autonomously identify the target anatomical region on all data scheduled for annotation. This allows our framework to generate a 2D bounding box for every slice of the image, which is then leveraged by SAM to carry out segmentation. We conducted experiments on two 3D datasets covering 38 organs and found that MedLSAM matches the performance of SAM and its medical adaptations while requiring only minimal extreme point annotations for the entire dataset. Furthermore, MedLAM has the potential to be seamlessly integrated with future 3D SAM models, paving the way for enhanced performance.

![MedLSAM Image](fig/medlsam.jpg)
*Fig.1 The overall segmentation pipeline of MedLSAM.*

## Feedback and Contact

- Email: [email protected]
- Wechat: lyc4560147

## Get Started

### Main Requirements
> torch>=1.11.0
> tqdm
> nibabel
> scipy
> SimpleITK
> monai

### Installation
1. Create a virtual environment `conda create -n medlsam python=3.10 -y` and activate it with `conda activate medlsam`
2. Install [PyTorch](https://pytorch.org/get-started/locally/)
3. Clone the repository: `git clone https://github.com/openmedlab/MedLSAM`
4. Enter the MedLSAM folder `cd MedLSAM` and run `pip install -e .`
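
As an optional sanity check (not part of the official instructions), you can confirm that PyTorch is installed correctly and can see a CUDA device before moving on:

```python
# Optional sanity check: confirm the PyTorch install and CUDA visibility.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```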

## Inference
### GPU requirement
We recommend using a GPU with 12 GB or more memory for inference.

### Data preparation
- [StructSeg Task1 HaN OAR](https://drive.google.com/file/d/1tlv79tgK5ETBFUB3_vgipBPOwZLmpbi8/view?usp=drive_link)
- [WORD](https://github.com/HiLab-git/WORD) (access must be requested to download this dataset)

Note: You can also download other CT datasets and place them anywhere you like. MedLSAM **automatically** applies its preprocessing procedure at inference time, so please do **not** normalize the original CT images.

After downloading the datasets, sort the data into "support" and "query" groups. This does not require moving the actual image files; instead, create separate lists of file paths for each group.

**For each group ("support" and "query"), perform the following steps:**
1. Create a .txt file listing the paths to the image files.
2. Create another .txt file listing the paths to the corresponding label files.

The file names themselves are not important; what matters is that the images and labels appear in the same order in both lists, since these lists direct MedLSAM to the appropriate files during inference (a helper sketch for generating the lists follows the example below).

Example format for the .txt files:

- `image.txt`
```bash
/path/to/your/dataset/image_1.nii.gz
...
/path/to/your/dataset/image_n.nii.gz
```
- `label.txt`
```bash
/path/to/your/dataset/label_1.nii.gz
...
/path/to/your/dataset/label_n.nii.gz
```
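
If your images and labels share file names and sit in parallel folders, a small helper along the lines of the sketch below can generate both lists. The folder layout and output paths are assumptions for illustration; MedLSAM itself does not prescribe them.

```python
import os

def write_path_lists(image_dir, label_dir, image_txt, label_txt):
    """Write paired image/label path lists, one path per line, in matching order."""
    names = sorted(f for f in os.listdir(image_dir) if f.endswith(".nii.gz"))
    with open(image_txt, "w") as img_out, open(label_txt, "w") as lab_out:
        for name in names:
            label_path = os.path.join(label_dir, name)
            if not os.path.exists(label_path):  # skip images without a matching label
                continue
            img_out.write(os.path.join(image_dir, name) + "\n")
            lab_out.write(label_path + "\n")

# Hypothetical layout; point these at wherever you stored the support data.
write_path_lists("data/support/images", "data/support/labels",
                 "config/data/StructSeg_HaN/support_image.txt",
                 "config/data/StructSeg_HaN/support_label.txt")
```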

### Config preparation
**MedLAM** and **MedLSAM** load their configurations from a .txt file. The structure of the file is as follows:
```bash
[data]
support_image_ls = config/data/StructSeg_HaN/support_image.txt
support_label_ls = config/data/StructSeg_HaN/support_label.txt
query_image_ls = config/data/StructSeg_HaN/query_image.txt
query_label_ls = config/data/StructSeg_HaN/query_label.txt
gt_slice_threshold = 10
bbox_mode = SPL
slice_interval = 2
fg_class = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]
seg_save_path = result/npz/StructSeg
seg_png_save_path = result/png/StructSeg

[vit]
net_type = vit_b

[weight]
medlam_load_path = checkpoint/medlam.pth
vit_load_path = checkpoint/medsam_20230423_vit_b_0.0.1.pth
```

Each of the parameters is explained as follows:

- `support_image_ls`: Path to the list of support image files. Using between 3 and 10 support images is recommended.
- `support_label_ls`: Path to the list of support label files.
- `query_image_ls`: Path to the list of query image files.
- `query_label_ls`: Path to the list of query label files.
- `gt_slice_threshold`: Threshold value for ground-truth slice selection.
- `bbox_mode`: Bounding-box mode, either `SPL` (Sub-Patch Localization) or `WPL` (Whole-Patch Localization), as shown in Fig.2.
- `slice_interval`: Number of slices in a sub-patch; a smaller value produces more patches. This parameter must be an `int` greater than 0. **Applicable only for Sub-Patch Localization (SPL); set it to `False` for Whole-Patch Localization (WPL)**.
- `fg_class`: List of foreground classes to be used for localization and segmentation, given as integer class labels. You may select only a subset of the labels as target classes.
- `seg_save_path`: Path for saving the segmentation results in .npz format, **only required for MedLSAM**.
- `seg_png_save_path`: Path for saving the segmentation results in .png format, **only required for MedLSAM**.
- `net_type`: Type of vision transformer model to be used, **only required for MedLSAM**. By default, this is set to `vit_b`.
- `medlam_load_path`: Path to the pretrained MedLAM model weights.
- `vit_load_path`: Path to the pretrained vision transformer weights, **only required for MedLSAM**. You can change it to `checkpoint/sam_vit_b_01ec64.pth` to use the original SAM model for segmentation.
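
Because the file follows a standard INI layout, you can inspect a config with Python's built-in `configparser` before launching a run. This is only a convenience sketch for sanity-checking your own file; the repository's internal loader may parse it differently.

```python
# Sketch: inspect a MedLSAM-style config file with the standard library.
import ast
import configparser

cfg = configparser.ConfigParser()
cfg.read("path/to/your/test_medlsam_config.txt")  # a config with the structure shown above

support_images = open(cfg["data"]["support_image_ls"]).read().splitlines()
fg_class = ast.literal_eval(cfg["data"]["fg_class"])  # "[1,2,...]" -> Python list
print(f"{len(support_images)} support images, {len(fg_class)} target classes")
print("bbox_mode:", cfg["data"].get("bbox_mode", "WPL"))
```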

<div align="center">
<img src="fig/wpl_spl.png" width="80%">
</div>

*Fig.2 Comparison between Whole-Patch Localization (WPL) and Sub-Patch Localization (SPL) strategies.*

### Inference
- MedLAM (**Localize any anatomical target**)
```bash
CUDA_VISIBLE_DEVICES=0 python MedLAM_Inference.py --config_file path/to/your/test_medlam_config.txt
```
Example:
```bash
CUDA_VISIBLE_DEVICES=0 python MedLAM_Inference.py --config_file config/test_config/test_structseg_medlam.txt
CUDA_VISIBLE_DEVICES=0 python MedLAM_Inference.py --config_file config/test_config/test_word_medlam.txt
```

- MedLSAM (**Localize and segment any anatomical target with WPL/SPL**)
```bash
CUDA_VISIBLE_DEVICES=0 python MedLSAM_WPL_Inference.py --config_file path/to/your/test_medlsam_config.txt
CUDA_VISIBLE_DEVICES=0 python MedLSAM_SPL_Inference.py --config_file path/to/your/test_medlsam_config.txt
```
Example:
```bash
CUDA_VISIBLE_DEVICES=0 python MedLSAM_WPL_Inference.py --config_file config/test_config/test_structseg_medlam_wpl_medsam.txt
CUDA_VISIBLE_DEVICES=0 python MedLSAM_SPL_Inference.py --config_file config/test_config/test_structseg_medlam_spl_sam.txt
```
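
To run several configurations back to back, a thin wrapper along these lines can invoke the same scripts from Python. The wrapper and its config list are illustrative, not part of the repository; the script names and `--config_file` flag are the ones documented above.

```python
# Sketch: run the documented inference scripts over several config files in sequence.
import os
import subprocess

configs = [
    ("MedLSAM_WPL_Inference.py", "config/test_config/test_structseg_medlam_wpl_medsam.txt"),
    ("MedLSAM_SPL_Inference.py", "config/test_config/test_structseg_medlam_spl_sam.txt"),
]

env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")  # pin to GPU 0, as in the examples above
for script, config_file in configs:
    subprocess.run(["python", script, "--config_file", config_file], env=env, check=True)
```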

### Results
- MedLAM (Localize any anatomical target): MedLAM automatically calculates the mean Intersection over Union (IoU) along with the standard deviation for each category and saves them in a .txt file under the `result/iou` directory.

<div align="center">
<img src="fig/iou.png" width="90%">
</div>

- MedLSAM (Localize and segment any anatomical target): MedLSAM automatically calculates the mean Dice Similarity Coefficient (DSC) along with the standard deviation for each category and saves them in a .txt file under the `result/dsc` directory.

<div align="center">
<img src="fig/dsc.png" width="100%">
</div>

## To-do list
- [ ] Support scribble prompts
- [ ] Support MobileSAM

## 🛡️ License

This project is under the CC-BY-NC 4.0 license. See [LICENSE](LICENSE) for details.

## 🙏 Acknowledgement

- Much of the code is adapted from [MedSAM](https://github.com/bowang-lab/MedSAM).
- We highly appreciate all the challenge organizers and dataset owners for providing public datasets to the community.
- We thank Meta AI for making the source code of [Segment Anything](https://github.com/facebookresearch/segment-anything) publicly available.

## 📝 Citation

If you find this repository useful, please consider citing this paper:
```
@article{Lei2023medlam,
  title={MedLSAM: Localize and Segment Anything Model for 3D Medical Images},
  author={Wenhui Lei and Xu Wei and Xiaofan Zhang and Kang Li and Shaoting Zhang},
  journal={arXiv preprint arXiv:2306.14752},
  year={2023}
}
```

<!-- ## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=openmedlab/MedLSAM&type=Date)](https://star-history.com/#openmedlab/MedLSAM&Date) -->
boost_1_76_0.tar.gz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7bd7ddceec1a1dfdcbdb3e609b60d01739c38390a5f956385a12f3122049f0ca
size 130274594
cmake-3.20.0.tar.gz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9c06b2ddf7c337e31d8201f6ebcd3bba86a9aee207fe0c6971f4755
size 9427538