Background Matting: The World is Your Green Screen
The authors propose a method for creating a matte (the per-pixel foreground color and alpha) of a person from photos or videos taken in an everyday setting with a handheld camera. Most existing matting methods require a green-screen background or a manually created trimap to produce a good matte.
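Once the matte is recovered, the subject can be placed over any new background with the standard compositing equation I = αF + (1 − α)B. A minimal sketch (function name and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def composite(fg, alpha, bg):
    """Standard alpha compositing: I = alpha * F + (1 - alpha) * B."""
    return alpha * fg + (1.0 - alpha) * bg

# Toy 2x2 RGB example
fg = np.ones((2, 2, 3))           # white foreground
bg = np.zeros((2, 2, 3))          # black background
alpha = np.full((2, 2, 1), 0.5)   # half-transparent matte, broadcast over RGB
out = composite(fg, alpha, bg)    # every pixel becomes 0.5
```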
Automatic, trimap-free methods are appearing, but their quality is not yet comparable. In their trimap-free approach, the authors instead ask the user to take an additional photo of the background, without the subject, at the time of capture. This step requires a small amount of foresight but is far less time-consuming than creating a trimap.
They train a deep network with an adversarial loss to predict the matte. First, they train a matting network with a supervised loss on ground-truth data from synthetic composites. To bridge the domain gap to real, unlabeled imagery, they then train a second matting network guided by the first network and by a discriminator that judges the quality of the resulting composites.
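The two training stages above can be sketched with toy losses (all names and values here are illustrative, not the authors' code): a supervised L1 term against ground-truth alpha on synthetic composites, then a teacher-guided term plus an adversarial term from a discriminator scoring the realism of the composite.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two alpha maps."""
    return np.abs(a - b).mean()

# Stage 1: supervised loss on synthetic composites with ground-truth alpha
alpha_gt   = np.array([0.0, 0.5, 1.0])
alpha_pred = np.array([0.1, 0.4, 1.0])
supervised_loss = l1(alpha_pred, alpha_gt)

# Stage 2 (real, unlabeled images): the second network is guided by the
# first network's prediction plus an adversarial term; d_score is a
# hypothetical discriminator output in (0, 1] for the resulting composite.
alpha_teacher = np.array([0.05, 0.45, 0.95])
d_score = 0.8
student_loss = l1(alpha_pred, alpha_teacher) - np.log(d_score)
```

The key design choice is that stage 2 needs no labels: supervision comes entirely from the stage-1 network and the discriminator.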
paper: https://arxiv.org/abs/2004.00626
blog post: http://grail.cs.washington.edu/projects/background-matting/
github (training code coming soon): https://github.com/senguptaumd/Background-Matting
#CVPR2020 #background #matte