Kenji Kawaguchi, Zhun Deng, Kyle Luh, Jiaoyang Huang. Robustness Implies Generalization via Data-Dependent Generalization Bounds. International Conference on Machine Learning (ICML), 2022.
[pdf] [BibTeX] Selected for ICML long presentation (2% accept rate)
Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya. Multi-Task Learning as a Bargaining Game. International Conference on Machine Learning (ICML), 2022.
[pdf] [BibTeX]
Linjun Zhang*, Zhun Deng*, Kenji Kawaguchi, James Zou. When and How Mixup Improves Calibration. International Conference on Machine Learning (ICML), 2022.
[pdf] [BibTeX]
Kenji Kawaguchi. On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers. International Conference on Learning Representations (ICLR), 2021.
[pdf] [BibTeX] Selected for ICLR Spotlight (5% accept rate)
Linjun Zhang*, Zhun Deng*, Kenji Kawaguchi*, Amirata Ghorbani and James Zou. How Does Mixup Help With Robustness and Generalization? International Conference on Learning Representations (ICLR), 2021.
[pdf] [BibTeX] Selected for ICLR Spotlight (5% accept rate)
Keyulu Xu*, Mozhi Zhang, Stefanie Jegelka and Kenji Kawaguchi*. Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth. International Conference on Machine Learning (ICML), 2021.
[pdf] [BibTeX]
Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham and Quoc V Le. Towards Domain-Agnostic Contrastive Learning. International Conference on Machine Learning (ICML), 2021.
[pdf] [BibTeX]
Dianbo Liu*, Alex Lamb*, Kenji Kawaguchi, Anirudh Goyal, Chen Sun, Michael Curtis Mozer and Yoshua Bengio. Discrete-Valued Neural Communication. Advances in Neural Information Processing Systems (NeurIPS), 2021.
[pdf] [BibTeX]
Ferran Alet*, Dylan Doblar*, Allan Zhou, Joshua B. Tenenbaum, Kenji Kawaguchi and Chelsea Finn. Noether Networks: meta-learning useful conserved quantities. Advances in Neural Information Processing Systems (NeurIPS), 2021.
[pdf] [BibTeX]
Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi and James Zou. Adversarial Training Helps Transfer Learning via Better Representations. Advances in Neural Information Processing Systems (NeurIPS), 2021.
[pdf] [BibTeX]
Clement Gehring, Kenji Kawaguchi, Jiaoyang Huang and Leslie Pack Kaelbling. Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization. Advances in Neural Information Processing Systems (NeurIPS), 2021.
[pdf] [BibTeX]
Ferran Alet, Maria Bauza Villalonga, Kenji Kawaguchi, Nurullah Giray Kuru, Tomás Lozano-Pérez and Leslie Pack Kaelbling. Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time. Advances in Neural Information Processing Systems (NeurIPS), 2021.
[pdf] [BibTeX]
Juncheng Liu, Kenji Kawaguchi, Bryan Hooi, Yiwei Wang and Xiaokui Xiao. EIGNN: Efficient Infinite-Depth Graph Neural Networks. Advances in Neural Information Processing Systems (NeurIPS), 2021.
[pdf] [BibTeX]
Ameya D. Jagtap, Kenji Kawaguchi and George E. Karniadakis. Adaptive Activation Functions Accelerate Convergence in Deep and Physics-informed Neural Networks. Journal of Computational Physics, 404, 109136, 2020.
[pdf] [BibTeX]
Ameya D. Jagtap*, Kenji Kawaguchi* and George E. Karniadakis. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks. Proceedings of the Royal Society A, 476, 20200334, 2020.
[pdf] [BibTeX]
Kenji Kawaguchi. Deep Learning without Poor Local Minima. Advances in Neural Information Processing Systems (NeurIPS), 2016.
[pdf] [BibTeX] [Spotlight Video] [Talk] Selected for NeurIPS oral presentation (2% accept rate)
Kenji Kawaguchi, Leslie Pack Kaelbling and Tomás Lozano-Pérez. Bayesian Optimization with Exponential Convergence. Advances in Neural Information Processing Systems (NeurIPS), 2015.
[pdf] [BibTeX] [Code]