Top 5 Machine Learning Papers in Q2 2018
Here are five of the most significant machine learning papers from the second quarter of 2018:
1. One Model To Learn Them All
Authors: Kaiser et al. Key Contribution: Demonstrated that a single neural network architecture (the MultiModel) could be trained jointly on tasks spanning multiple domains, including image classification, translation, and speech recognition, with largely shared parameters.
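The core idea can be sketched as a shared trunk feeding small task-specific heads. This is a minimal illustrative sketch, not the paper's actual MultiModel architecture; the function names and toy "heads" below are assumptions for illustration only.

```python
# Toy sketch of the "one model, many tasks" idea: one shared trunk,
# one small head per task. Names here are illustrative, not the paper's.

def shared_trunk(x):
    # Shared representation used by every task (toy elementwise transform).
    return [v * 2.0 for v in x]

TASK_HEADS = {
    "translate": lambda h: sum(h),                   # toy "sequence" head
    "classify":  lambda h: 1 if sum(h) > 0 else 0,   # toy classifier head
}

def run(task, x):
    h = shared_trunk(x)         # same parameters regardless of task
    return TASK_HEADS[task](h)  # only the lightweight head differs

print(run("classify", [0.5, -0.1]))  # -> 1
```

Only the dispatch structure is the point here: the trunk's parameters receive gradient signal from every task, which is what enables cross-domain transfer.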
2. Learning to Reason: End-to-End Module Networks for Visual Question Answering
Authors: Hu et al. Key Contribution: Introduced End-to-End Module Networks, which answer visual questions by decomposing each question into sub-tasks and composing reusable neural modules into a question-specific network.
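The decomposition idea can be sketched as executing a predicted "layout" of small modules over a scene. The module names, the layout format, and the toy scene below are assumptions for illustration; in the paper the modules are neural networks operating on image features, not Python filters.

```python
# Illustrative sketch of the module-network idea: a question becomes a
# layout of small reusable modules executed in sequence over a scene.

scene = [{"shape": "cube", "color": "red"},
         {"shape": "cube", "color": "blue"},
         {"shape": "ball", "color": "red"}]

MODULES = {
    "find_shape":   lambda objs, arg: [o for o in objs if o["shape"] == arg],
    "filter_color": lambda objs, arg: [o for o in objs if o["color"] == arg],
    "count":        lambda objs, arg: len(objs),
}

def execute(layout, objs):
    # Each (module, argument) step consumes the previous step's output.
    for name, arg in layout:
        objs = MODULES[name](objs, arg)
    return objs

# "How many red cubes?" -> find cubes, keep red ones, count them
layout = [("find_shape", "cube"), ("filter_color", "red"), ("count", None)]
print(execute(layout, scene))  # -> 1
```

In the actual model the layout itself is predicted from the question by a sequence-to-sequence policy, which is what the "end-to-end" in the title refers to.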
3. Deep Voice: Real-time Neural Text-to-Speech
Authors: Arik et al. Key Contribution: Developed a fully neural text-to-speech pipeline, with a WaveNet-style vocoder generating audio sample by sample, fast enough to synthesize natural-sounding speech in real time.
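The vocoder's autoregressive generation loop can be sketched in a few lines. The "model" below is a stand-in function, not the paper's network; only the loop structure (each sample conditioned on previous samples plus linguistic features) reflects the technique.

```python
# Toy sketch of sample-by-sample autoregressive synthesis, as used by
# WaveNet-style neural vocoders. toy_model stands in for a neural net.

def toy_model(history, cond):
    # Stand-in predictor: decay the last sample toward a conditioning value.
    return 0.5 * history[-1] + 0.5 * cond

def synthesize(cond_track, seed=0.0):
    samples = [seed]
    for cond in cond_track:   # one conditioning value per output sample
        samples.append(toy_model(samples, cond))
    return samples[1:]

print(synthesize([1.0, 1.0, 0.0]))  # -> [0.5, 0.75, 0.375]
```

Because every output sample depends on the previous one, making this loop fast enough for real-time audio rates was a central engineering contribution of the paper.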
4. Learning to Discover Efficient Mathematical Identities
Authors: Zaremba et al. Key Contribution: Showed how learned models could guide the search for mathematical identities, discovering equivalent expressions that are cheaper to compute.
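The objective, finding an equivalent expression with fewer operations, can be illustrated with a hand-picked classic rewrite: naive polynomial evaluation versus Horner's scheme. The paper searches for such rewrites automatically with learned guidance; this sketch only shows what "cheaper but equivalent" means.

```python
# Two equivalent computations of the same polynomial, with multiplication
# counts. The rewrite here is hand-picked (Horner's scheme); the paper's
# contribution is learning to discover such rewrites by search.

def naive_eval(coeffs, x):
    total, muls = 0.0, 0
    for i, c in enumerate(coeffs):
        term = c
        for _ in range(i):      # compute x**i by repeated multiplication
            term *= x
            muls += 1
        total += term
    return total, muls

def horner_eval(coeffs, x):
    total, muls = 0.0, 0
    for c in reversed(coeffs):  # ((4x + 3)x + 2)x + 1 nesting
        total = total * x + c
        muls += 1
    return total, muls

coeffs = [1.0, 2.0, 3.0, 4.0]   # 1 + 2x + 3x^2 + 4x^3
print(naive_eval(coeffs, 2.0))   # -> (49.0, 6)
print(horner_eval(coeffs, 2.0))  # -> (49.0, 4): same value, fewer muls
```

The search space of such rewrites grows combinatorially with expression size, which is why a learned prior over promising rewrites helps.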
5. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Authors: Devlin et al. Key Contribution: Introduced BERT, a pre-trained language model that achieved state-of-the-art results on multiple NLP tasks by using bidirectional context.
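The heart of the pre-training setup is the masked-language-model objective: hide a fraction of tokens and train the model to recover them from context on both sides. The sketch below simplifies BERT's actual procedure (which also sometimes keeps or randomly replaces the selected token); tokenization and rates here are illustrative assumptions.

```python
# Simplified sketch of BERT's masked-LM data preparation: randomly hide
# ~15% of tokens and record the originals as prediction targets. BERT's
# real recipe (80% [MASK] / 10% random / 10% unchanged) is simplified here.
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok          # the model must predict this token
            masked.append("[MASK]")
        else:
            masked.append(tok)
    return masked, targets

masked, targets = mask_tokens("the cat sat on the mat".split())
print(masked, targets)
```

Because the targets can be predicted using tokens on both sides of each mask, the encoder learns genuinely bidirectional representations, unlike left-to-right language models.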
Note: This is a draft post. The content will be expanded with more detailed analysis and implementation details.