
Testing the accuracy of segmentation #3

Open
build2create opened this issue Mar 24, 2017 · 14 comments

build2create commented Mar 24, 2017

As defined in SegmentationModels.py, the function get_dice_coef(self, test_img, label) requires a labelled ground-truth image, label. I created one of size 240 x 240 using save_labels in brain_pipeline.py. When get_dice_coef() is called, it raises a ValueError on the reshape in this line: imgs = io.imread(test_img).astype('float').reshape(5,240,240).

This is because the first line of get_dice_coef() calls predict_image (reference: segmentation = self.predict_image(test_img)), but test_img is of size 240 x 240. show_segmented_image(self, test_img, modality='t1c', show=False) produces the 240 x 240 slice of the test image. Isn't that what is used for calculating the dice coefficient?
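
To make the mismatch concrete: a single 240 x 240 slice holds 240 * 240 = 57,600 values, while reshape(5, 240, 240) requires 5 * 240 * 240 = 288,000, so NumPy refuses. A minimal sketch of the failure (the file name is hypothetical):

    import numpy as np
    from skimage import io

    img = io.imread('slice_0.png').astype('float')  # hypothetical 240 x 240 slice
    print img.size            # 57600 = 240 * 240
    img.reshape(5, 240, 240)  # ValueError: total size of new array must be unchanged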

@lazypoet

Yes, that is used. By the way, the code has too many errors to work as-is, so don't copy it blindly. You may want to look at my repo for some pointers; it is similar to Nikki's but fully functional.

@build2create

@lazypoet I saw your Metrics.py. Did you convert the label images into PNG first? I am guessing this from your accuracy method. I am pretty much stuck at this point. I have created test images using Nikki's code, and those are of size 172800 (3 x 240 x 240). Now I am calling your accuracy function as accuracy(io.imread(test_image+prefix+str(i)+suffix).astype('float').reshape(3,240,240), io.imread(ground_truth+"0_"+str(i)+"L"+suffix)), and it gives values in the range 1-3, mostly 2.something or 3. Is that correct? I am also interested in using your DSC method. I have the ground-truth images, but most of them are black (from the code of this repository), so do I need to convert them again?

Please let me know what steps I need to take with the test images and ground-truth images before calling your function. It already took a lot of time to generate the test images using Nikki's code; will those images not work?

build2create commented Mar 28, 2017

@lazypoet I have tried something like this:

    from skimage import io

    test_image = "/home/adminsters/Documents/SegmentedImages_1/"
    ground_truth = "/home/adminsters/Documents/Labels/"
    prefix = "seg_0_"
    suffix = ".png"
    # load the trained model first
    model = SegmentationModel(loaded_model=True)
    total = 0  # named 'total' to avoid shadowing the builtin sum
    for i in range(0, 155):
        n = model.DSC(io.imread(test_image + prefix + str(i) + suffix).astype('float').reshape(3, 240, 240),
                      io.imread(ground_truth + "0_" + str(i) + "L" + suffix))
        if n > 0:
            total = total + n
    print total

where DSC is

    def DSC(self, pred, orig_label):
        '''Calculates the Dice Score Coefficient.
        INPUT: predicted labels, original (ground-truth) labels
        OUTPUT: float
        '''
        # true positives: tumor pixels (classes 1-4) predicted with the correct class
        TP = len(pred[((pred == 1) | (pred == 2) | (pred == 3) | (pred == 4)) & (pred == orig_label)])
        # denominator: tumor pixels in the prediction plus tumor pixels in the ground truth
        denom = len(pred[(pred == 1) | (pred == 2) | (pred == 3) | (pred == 4)]) \
              + len(orig_label[(orig_label == 1) | (orig_label == 2) | (orig_label == 3) | (orig_label == 4)])
        if denom == 0:  # no tumor in either image: the score is undefined
            return -1
        print 2. * TP / float(denom)
        return 2. * TP / float(denom)
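
For comparison, the same per-slice score can be written with NumPy boolean masks. Note also that the driver loop above only sums the per-slice values, so dividing by the number of valid slices is needed to report a mean DSC. A sketch under those assumptions (per_slice_scores is a hypothetical list collected in the loop):

    import numpy as np

    def dsc(pred, orig_label):
        '''Per-slice Dice score over tumor classes 1-4, or -1 if undefined.'''
        pred_tumor = (pred >= 1) & (pred <= 4)
        true_tumor = (orig_label >= 1) & (orig_label <= 4)
        tp = np.count_nonzero(pred_tumor & (pred == orig_label))
        denom = np.count_nonzero(pred_tumor) + np.count_nonzero(true_tumor)
        return 2.0 * tp / denom if denom else -1

    # mean over the slices where the score was defined:
    scores = [s for s in per_slice_scores if s >= 0]
    mean_dsc = sum(scores) / float(len(scores)) if scores else 0.0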

@lazypoet

@build2create In any of the methods in Metrics.py, you have to pass a single slice each of the test image and the corresponding ground truth as parameters in order to get results between 0 and 1.
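
For illustration, a single-slice call could look like the following sketch. Which channel of the saved 3 x 240 x 240 image holds the predicted classes is an assumption here, as are the file names; the point is only that both arguments end up 240 x 240:

    from skimage import io

    i = 42  # hypothetical slice index
    pred = io.imread(test_image + prefix + str(i) + suffix).astype('float').reshape(3, 240, 240)
    label = io.imread(ground_truth + "0_" + str(i) + "L" + suffix)
    score = model.DSC(pred[0], label)  # pred[0]: one 240 x 240 channel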

@build2create

@lazypoet I am using the BRATS dataset. Suppose I convert the test images in the folder brats_tcia_pat123_0083 into slices; I get 155 slices of test images. Now I guess the ground-truth labels are in the training folders, the ones with the substring 3more**. Am I correct? Where do I find the corresponding ground truth?

Also, according to Nikki's code, those 155 slices are 3 x 240 x 240 and the labels are 240 x 240 PNGs. Please elaborate on this. Sorry in advance for the trouble!
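
For context, the BRATS ground-truth volumes are shipped as .mha files, which can be read with SimpleITK the same way Nikki's brain_pipeline.py reads the modalities; a single label slice can then be pulled out by axial index (a sketch; the path is hypothetical):

    import SimpleITK as sitk

    gt = sitk.GetArrayFromImage(sitk.ReadImage('path/to/ground_truth.mha'))
    print gt.shape        # (155, 240, 240): one 240 x 240 label slice per axial index
    label_slice = gt[42]  # comparable to the prediction for slice 42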

lazypoet commented Mar 28, 2017 via email

build2create commented Mar 28, 2017

@lazypoet Yes, correct. What I am asking is: where do I look for the corresponding ground truth? Each folder in the test dataset contains the t1, t1c, flair and t2 modalities. The ground truth is in the training folder (the one with the substring 3more**), right?

lazypoet commented Mar 28, 2017 via email

@build2create

@lazypoet So that means I need to test against the images in the training folder (i.e., use the training-folder images for the dice coefficient calculation), right? I was using the ones in the testing folder so far...

lazypoet commented Mar 28, 2017 via email

@build2create

@lazypoet Sir, please be clear... I am not following: when it comes to prediction, use the images in the testing folder, but when it comes to testing the accuracy, use the images in the training folder. Is this right?

@lazypoet

Learn more about training, validation and testing data. Freshen up your concepts of basic machine learning; it will help you a lot.

Jiyya commented Jul 25, 2017

Is there anyone who can help me run this code successfully? I am unable to find the segmented images. I don't know what is supposed to happen after running the first file. I am not getting any errors in the code, but I don't know where to find the resulting images. Can anyone please tell me which folders I have to create in advance?

@tiantian-li

I'm running the training phase of the code brain_tumor_segmentation_models.py, but I find the accuracy is very low, at about 0.2. I run the code with Python 2.7, and I have increased the number of patches to 100000 with augmentation. I guess the problem might be caused by the labels, because the labels of the "more" file I got are all black, but I don't know the reason. What does your label look like? What should I do to increase the accuracy? Could you please help me solve this problem? Thanks.
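
One likely reason the label PNGs look black: class values 0-4 are nearly indistinguishable from 0 in an 8-bit image viewer, so the files can appear empty even when the classes are intact. A quick check (a sketch; the file name is hypothetical):

    import numpy as np
    from skimage import io

    label = io.imread('Labels/0_75L.png')   # hypothetical label slice
    print np.unique(label)                  # e.g. [0 1 2 3 4] if the classes survived
    io.imsave('label_vis.png', (label * 63).astype('uint8'))  # rescale 0-4 to 0-252 for viewing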
