Testing the accuracy of segmentation #3
Yes, that is used. By the way, the code has too many errors to work as-is, so don't copy it blindly. You may want to look at my repo for pointers; it is similar to Nikki's but fully functional.
@lazypoet I saw your comment. Please let me know what steps I need to take for the test images and ground-truth images before calling your function. It already took a lot of time generating test images using Nikki's code; will those images not work?
@lazypoet I have tried something like this, where DSC is the Dice similarity coefficient.
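For reference, the Dice similarity coefficient mentioned above can be sketched in plain NumPy. This is a minimal illustration of DSC = 2·|A ∩ B| / (|A| + |B|) for binary masks, not the repo's actual `get_dice_coef` implementation:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(dice_coefficient(pred, truth))  # 2*1/(2+1) ≈ 0.667
```

For multi-class BRATS labels you would typically binarize per class (e.g. whole tumor vs. background) before applying this.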
@build2create In any of the methods in
@lazypoet I am using the BRATS dataset. Suppose I convert the test images in the folder `brats_tcia_pat123_0083` into slices; I get 155 slices of test images. Now I guess the ground-truth labels are in the training folders, the ones with substring `3more**`. Am I correct? Where do I find the corresponding ground truth? Also, according to Nikki's code those 155 slices are 3x240x240 and the labels are 240x240 PNGs. Please elaborate on this. Sorry in advance for the trouble!
So, the images are 3D with an axial view, hence the 155 slices. E.g., slice 74 would represent the same slice across the different modalities in all of the 3D brain images of a single patient; hence the 74th slice of the ground truth would represent that same slice's segmentation. Use your predicted image slice (i.e., the 74th slice of the predicted 3D brain image) and the corresponding ground truth.

Each slice is 240x240; there are 4 modalities and a ground truth for each patient, hence 5x240x240.
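The slicing described above can be sketched as follows. The variable names and the use of zero-filled placeholder volumes are illustrative assumptions; in practice each array would be loaded from the corresponding BRATS modality file:

```python
import numpy as np

# Placeholder volumes standing in for loaded BRATS data:
# 4 modalities plus ground truth, each 155 axial slices of 240x240.
flair = np.zeros((155, 240, 240))
t1    = np.zeros((155, 240, 240))
t1c   = np.zeros((155, 240, 240))
t2    = np.zeros((155, 240, 240))
gt    = np.zeros((155, 240, 240))

k = 74  # pick one axial slice index

# The k-th slice of every modality plus the ground truth,
# stacked into the 5x240x240 strip the thread describes.
patient_slice = np.stack([flair[k], t1[k], t1c[k], t2[k], gt[k]])
print(patient_slice.shape)  # (5, 240, 240)
```

The 74th slice of the predicted volume is then compared against `gt[74]` when computing the dice coefficient.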
@lazypoet Yes, correct. What I am asking is: where do I look for the corresponding ground truth? Each folder in the test dataset contains the t1, t1c, flair, and t2 modalities. The ground truth is in the training folder (the one with substring `3more**`), right?
Yup, true.
@lazypoet So that means I need to test the quality of the images which are in the training folder (I mean, use the images in the training folder for the dice-coefficient calculation), right? I was using the ones in the testing folder so far...
Assuming you are trying to predict images in your "testing folder", yes, you need to use those images.
@lazypoet Sir, please be clear... I am not getting it. When it comes to prediction, use the images in the testing folder, but when it comes to testing the accuracy, use the images in the training folder. Is this right?
Learn more about training, validation, and testing data. Freshening up your concepts of basic machine learning will help you a lot.
Is there anyone who can help run this code successfully? I am unable to find the segmented images. I don't know what will happen after running the first file. I am not getting any errors in the code, but I don't know where to find the resulting images. Can anyone please tell me which folder I have to create in advance?
I'm running the training phase of the code in brain_tumor_segmentation_models.py, but I find the accuracy is very low, at about 0.2. I run the code with Python 2.7, and I have increased the number of patches to 100,000 with augmentation. I guess the problem might be caused by the labels, because the labels of the "more" file I got are all black, but I don't know the reason. What does your label look like? What should I do to increase the accuracy? Could you please help me solve this problem? Thanks.
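One common reason "all black" labels appear in such pipelines is that BRATS ground-truth labels are small integers (0-4), which are nearly invisible when saved directly as an 8-bit image. A quick sanity check, using a synthetic label array as a stand-in for a real loaded label:

```python
import numpy as np

# Stand-in for a loaded 240x240 BRATS label slice: classes 0-4.
label = (np.arange(240 * 240) % 5).reshape(240, 240).astype(np.uint8)

# Saved as-is, the brightest pixel is 4 out of 255 -> looks black.
# Scaling for visualization makes the classes distinguishable
# without changing the underlying label values used for training.
visible = (label * (255 // 4)).astype(np.uint8)  # maps 0-4 -> 0,63,126,189,252

print(label.max(), visible.max())  # 4 252
```

If `label.max()` is genuinely 0 rather than just visually dark, the label files themselves are empty and the problem lies earlier in the data pipeline.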
As defined in `SegmentationModels.py`, the function `def get_dice_coef(self, test_img, label):` requires a labelled ground-truth image `label`. I created the same using `save_labels` in `brain_pipeline.py`, of size 240x240. When the call to `get_dice_coef(self, test_img, label)` is made, it gives a value error for reshape due to this line: `imgs = io.imread(test_img).astype('float').reshape(5,240,240)`. This is because we make the call to `predict_image` in the first line of the function `get_dice_coef()` (reference: `segmentation = self.predict_image(test_img)`), but `test_img` is of size 240x240. `def show_segmented_image(self, test_img, modality='t1c', show=False):` is producing the slice of the 240x240 test image. Isn't that used for calculating the dice coefficient?