Reproducing high-quality singing voice
with state-of-the-art AI technology.
A notable advance in singing voice synthesis. This opens a path toward more interesting collaborations and synthetic-celebrity projects.
P.S. Hatsune Miku will still remain popular for her particular qualities, but now there is more room for competitors.
Link: https://www.techno-speech.com/news-20181214a-en
#SOTA #Voice #Synthesis
Few-shot Video-to-Video Synthesis
This is the PyTorch implementation of few-shot photorealistic video-to-video (vid2vid) translation.
It can be used for generating human motions from poses, synthesizing talking people from edge maps, or turning semantic label maps into photorealistic videos.
The core of vid2vid translation is image-to-image translation.
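To make the "vid2vid is image-to-image translation at its core" point concrete, here is a minimal toy sketch (not the authors' code): a stand-in per-frame translator applied across a video, with a simple blend against the previous output frame standing in for the temporal-consistency machinery a real vid2vid model learns. The `translate_image` function and the blending scheme are illustrative assumptions only.

```python
import numpy as np

def translate_image(frame: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned image-to-image translator
    (e.g. label map -> photo); here it just inverts intensities."""
    return 1.0 - frame

def translate_video(frames: np.ndarray, blend: float = 0.7) -> np.ndarray:
    """Apply the image translator frame by frame, blending each output
    with the previous output as a crude proxy for temporal consistency."""
    outputs = []
    prev = None
    for frame in frames:
        out = translate_image(frame)
        if prev is not None:
            out = blend * out + (1.0 - blend) * prev
        outputs.append(out)
        prev = out
    return np.stack(outputs)

video = np.random.rand(4, 8, 8)   # 4 toy grayscale frames of size 8x8
result = translate_video(video)
print(result.shape)               # (4, 8, 8)
```

The real model replaces `translate_image` with a generator network and, in the few-shot setting, additionally conditions it on a handful of example images of the target subject.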
blog post: https://nvlabs.github.io/few-shot-vid2vid/
paper: https://arxiv.org/abs/1910.12713
youtube: https://youtu.be/8AZBuyEuDqc
github: https://github.com/NVlabs/few-shot-vid2vid
#cv #neurips #pattern #recognition #vid2vid #synthesis