NIPS, the huge conference held in Barcelona, has just wrapped up. There is a nice post from an attendee, so if you missed the event, it's worth taking a look: "NIPS 2016 experience and highlights".
https://medium.com/@libfun/nips-2016-experience-and-highlights-104e19e4ac95#.dr7xzcqzw
#nips2016 #conference #deeplearning #nips
And a couple more #nips2016 links:
50 things I learned at NIPS 2016
https://blog.ought.com/nips-2016-875bb8fadb8c#.rirzzwi8h
NIPS 2016: cake, Rocket AI, GANs and the style transfer debate
https://medium.com/@elluba/nips-2016-cake-rocket-ai-gans-and-the-style-transfer-debate-708c46438053#.7fwmiolh1
#nips2016 #nips #dl #conference
A couple more #NIPS summaries:
NIPS 2016: cake, Rocket AI, GANs and the style transfer debate
https://medium.com/@elluba/nips-2016-cake-rocket-ai-gans-and-the-style-transfer-debate-708c46438053#.7fwmiolh1
50 things I learned at NIPS 2016
https://blog.ought.com/nips-2016-875bb8fadb8c#.tf10f1l4e
#nips2016 #conference
The Conversational Intelligence Challenge
NIPS 2017 Live Competition
Recent advances in machine learning have sparked renewed interest in dialogue systems in the research community. Beyond the growing range of real-world applications, the ability to converse is closely tied to the overall goal of AI. This NIPS Live Competition aims to unify the community around a challenging task: building systems capable of intelligent conversation. Teams are expected to submit dialogue systems able to carry out intelligent, natural conversations with humans about specific news articles. At the final stage of the competition, participants as well as volunteers will be randomly matched with a bot or a human to chat with and will evaluate their peer's answers. We expect the competition to have two major outcomes: (1) a measure of the quality of state-of-the-art dialogue systems, and (2) an open-source dataset collected from the evaluated dialogues.
Organizers
Mikhail Burtsev, Valentin Malykh, MIPT, Moscow
Ryan Lowe, McGill University, Montreal
Iulian Serban, Yoshua Bengio, University of Montreal, Montreal
Alexander Rudnicky, Alan W. Black, Carnegie Mellon University, Pittsburgh
http://convai.io
#nlp #qa #nips
NIPS ConvAI2 competition!
http://convai.io
Train Dialogue Agents to chat about personal interests and get to know their dialogue partner -- using the PersonaChat dataset as a training source.
Competition starts now! Ends September 1st.
#nips #conversational #ai #competition
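For context, here is a minimal sketch of the PersonaChat-style setup the competition builds on: each training episode pairs a short persona (a few profile sentences) with a dialogue, and baseline models typically condition on the flattened persona plus history. The field names and `format_context` helper below are illustrative, not the official competition API.

```python
# Toy PersonaChat-style episode: a persona plus alternating dialogue turns.
# The "your persona:" prefix mirrors how such data is commonly serialized,
# but this structure is an assumption for illustration only.
episode = {
    "persona": [
        "i like to ski.",
        "i have two dogs.",
    ],
    "dialogue": [
        "hi, how are you doing today?",
        "i am great. i just got back from the slopes.",
    ],
}

def format_context(episode):
    """Flatten persona sentences and dialogue history into one
    conditioning string, the way a baseline seq2seq model might consume it."""
    persona = " ".join("your persona: " + p for p in episode["persona"])
    history = " ".join(episode["dialogue"])
    return persona + " " + history

print(format_context(episode))
```

A submitted agent would read such a context and generate the next utterance, staying consistent with its assigned persona.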
What we learned from NeurIPS 2019 data
4x growth since 2014
21.6% acceptance rate
Takeaways:
1. No free-loader problem: Relatively few papers are submitted where none of the authors invited to participate in the review process accepted the invitation.
2. Unclear how to rapidly filter papers prior to full review: Allowing for early desk rejects by ACs is unlikely to have a significant impact on reviewer load without producing inappropriate decisions. Likewise, the eagerness of reviewers to review a particular paper is not a strong signal, either.
3. No clear evidence that review quality as measured by length is lower for NeurIPS: NeurIPS is surprisingly not much different from other conferences of smaller sizes when it comes to review length.
4. Impact of engagement in rebuttal/discussion period: Overall engagement seemed to be higher than in 2018.
#Nips #NeurIPS #NIPS2019 #conference #meta
NeurIPS slides and presentations link
Link: https://slideslive.com/neurips/
Brief paper overview videos: https://nips.cc/Conferences/2019/Videos
#NeurIPS #NIPS #NIPS2019
Few-shot Video-to-Video Synthesis
This is the PyTorch implementation of few-shot photorealistic video-to-video (vid2vid) translation.
It can be used for generating human motions from poses, synthesizing talking people from edge maps, or turning semantic label maps into photorealistic videos.
At its core, vid2vid translation is image-to-image translation applied across a video.
blog post: https://nvlabs.github.io/few-shot-vid2vid/
paper: https://arxiv.org/abs/1910.12713
youtube: https://youtu.be/8AZBuyEuDqc
github: https://github.com/NVlabs/few-shot-vid2vid
#cv #nips #neurIPS #pattern #recognition #vid2vid #synthesis
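To make the "image-to-image at the core" point concrete, here is a minimal sketch of the frame-by-frame structure of vid2vid: each output frame comes from an image translator conditioned on the current label/pose frame plus the previous output as temporal context (the real model is a trained network that additionally conditions on a few example images of the target in the few-shot setting). `translate_frame` is a stand-in placeholder, not the NVlabs API.

```python
def translate_frame(label_frame, prev_output):
    """Placeholder 'generator': blends the current label frame with the
    previous output to mimic temporal conditioning. A real vid2vid model
    replaces this with a trained image-to-image network."""
    if prev_output is None:
        return [float(v) for v in label_frame]
    return [0.7 * v + 0.3 * p for v, p in zip(label_frame, prev_output)]

def vid2vid(label_frames):
    """Translate a video frame by frame, threading each output back in
    as context for the next frame."""
    outputs, prev = [], None
    for frame in label_frames:
        prev = translate_frame(frame, prev)
        outputs.append(prev)
    return outputs

# Tiny 2-pixel, 3-frame "video" of label values, just to show the loop.
print(vid2vid([[0, 1], [1, 0], [1, 1]]))
```

The recurrence is what distinguishes vid2vid from running an image translator independently per frame: without it, outputs flicker because consecutive frames are generated inconsistently.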