I trained the uvcgan2 model to translate double-eyelid images into single-eyelid images. In most cases the results are acceptable, but I have noticed that the colors of the generated images are inconsistent with those of the input images. What methods can be used to keep the colors of the generated images consistent with the input images?
blakeliu changed the title from "How to maintain consistent colors between the translated image B and the source domain image A?" to "How to maintain consistent colors between the translated image and the input image?" on Jul 21, 2023
Hi @blakeliu, I think the answer largely depends on whether such a color difference is already present in the training dataset.
If the training dataset exhibits a color difference between the two domains, one can try to fix the issue by adding a color-jitter transformation (jittering hue, saturation, and brightness). An example of the usage of this transformation can be found here.
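For illustration, here is a minimal sketch of such an augmentation using torchvision; the actual uvcgan2 configuration specifies transforms in its own config format (see the example linked above), and the jitter magnitudes below are placeholder values rather than recommendations.

```python
# Illustrative only: a color-jitter augmentation expressed with torchvision.
# The jitter ranges are placeholders and would need tuning; the point is that
# both domains see the same random hue/saturation/brightness variation, so the
# generators cannot rely on a fixed color offset between the two domains.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.ColorJitter(
        brightness = 0.2,   # scale brightness by a factor in [0.8, 1.2]
        saturation = 0.2,   # scale saturation by a factor in [0.8, 1.2]
        hue        = 0.05,  # shift hue by a value in [-0.05, 0.05]
    ),
    transforms.ToTensor(),
])
```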
Otherwise, one can try increasing the magnitudes of the lambda_idt, lambda_a, and lambda_b hyperparameters. The CycleGAN paper suggests that increasing lambda_idt alone may suffice.
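For context, a rough sketch of how these weights enter a CycleGAN-style generator objective is shown below. The exact loss composition and scaling in uvcgan2 may differ; this only illustrates why a larger lambda_idt (the identity-loss weight) penalizes color drift between input and output more strongly.

```python
import torch.nn.functional as F

def generator_loss(gen_ab, gen_ba, real_a, real_b, adv_loss,
                   lambda_a=10.0, lambda_b=10.0, lambda_idt=0.5):
    """CycleGAN-style generator objective (weights are illustrative only).

    gen_ab / gen_ba are the A->B and B->A generators; adv_loss is the
    adversarial term, assumed to be computed elsewhere.
    """
    fake_b = gen_ab(real_a)
    fake_a = gen_ba(real_b)

    # Cycle consistency: translating there and back should recover the input.
    loss_cyc_a = F.l1_loss(gen_ba(fake_b), real_a)
    loss_cyc_b = F.l1_loss(gen_ab(fake_a), real_b)

    # Identity: an image already from the generator's target domain should
    # pass through unchanged. Increasing lambda_idt penalizes hue/tint shifts
    # between input and output more strongly.
    loss_idt_a = F.l1_loss(gen_ba(real_a), real_a)
    loss_idt_b = F.l1_loss(gen_ab(real_b), real_b)

    return (adv_loss
            + lambda_a * loss_cyc_a
            + lambda_b * loss_cyc_b
            + lambda_idt * (loss_idt_a + loss_idt_b))
```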