Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Poor quality on our own dataset #10

Closed
herbiezhao opened this issue May 14, 2021 · 4 comments
Closed

Poor quality on our own dataset #10

herbiezhao opened this issue May 14, 2021 · 4 comments

Comments

@herbiezhao
Copy link

I don’t know if it’s the problem that I use. The effect on the RealWorldPortrait-636 dataset is not good. I set image-dir: image, mask-dir: segmask. Are there other parameters that can be adjusted?

The background distinguishing ability of our segmentation model is very strong, but it does not handle well at the edges. After using this model, it will affect the ability of background discrimination. Does it mean that the method of using mask is also very dependent on the data set, but the matting data set is difficult to obtain.

@yucornetto
Copy link
Owner

Hi, have you excluded those transparent objects in DIM training set when you train the model for the real-world benchmark? As mentioned in our README:

Please note that we exclude the transparent objects from DIM training set for a better generalization to real-world portrait cases. You can refer to /utils/copy_data.py for details about preparing the training set. Afterwards, you can start training using the following command:

My personal experience is that, when targeting solid objects, e.g., portraits, including those transparent objects can affect the semantic learning of the model, and results in some bad noise in background areas. Also, simulating real-world noises is also necessary.
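The filtering step described above (excluding transparent objects before building the training set) can be sketched roughly as below. This is a minimal illustration, not the repository's actual /utils/copy_data.py; the `TRANSPARENT_NAMES` list and file layout are hypothetical placeholders.

```python
import os
import shutil

# Hypothetical set of DIM foreground filenames judged transparent/translucent;
# the real exclusion list lives in /utils/copy_data.py and may differ.
TRANSPARENT_NAMES = {"glass_1.png", "water_2.png", "plastic_bag_3.png"}

def copy_solid_objects(src_dir, dst_dir):
    """Copy only non-transparent foregrounds into the training set directory."""
    os.makedirs(dst_dir, exist_ok=True)
    kept = []
    for name in sorted(os.listdir(src_dir)):
        if name in TRANSPARENT_NAMES:
            # Skip transparent objects for better real-world portrait generalization.
            continue
        shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, name))
        kept.append(name)
    return kept
```

The same idea applies to the corresponding alpha mattes: whatever foregrounds are dropped, their mattes should be dropped too, so image/matte pairs stay aligned.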

@herbiezhao
Copy link
Author

I haven't trained the model yet, I used the pre-trained model provided. Do I need to retrain to achieve better results?

@yucornetto
Copy link
Owner

yucornetto commented May 14, 2021

That's interesting. Would you mind sharing one or two samples (both images and masks) with me at [email protected] and I can try to see what is wrong?

I just noticed that you seem mentioned both RealWorldPortrait-636 and your own dataset, right? Can you reproduce the results on RealWorldPortrait-636? What command did you use?

@ousinkou
Copy link

@yucornetto Have you solved your problem?

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

No branches or pull requests

3 participants