

About Me :

I'm a first-year PhD student at the University of Tübingen, affiliated with the ELLIS and IMPRS programs. I'm fortunate to be advised by Prof. Zeynep Akata and Prof. Cordelia Schmid. My research interest lies in building reliable and interpretable models, i.e., explainable AI, uncertainty, robustness, and weak supervision.

NEWS :

- 03/2022 : One paper on learning with partial labels was accepted to CVPR 2022.
- 08/2021 : I started my PhD at the University of Tübingen.
- 07/2021 : One paper on explainability was accepted to ICCV 2021.
- 08/2020 : I started working as a research intern at NAVER AI Lab.
- 02/2020 : One paper on robustness was accepted to AAAI 2020.
- 02/2020 : I received the best dissertation award for my MS thesis at Seoul National University.


Bio :

- Ph.D. student in CS, University of Tübingen and INRIA, with the ELLIS program (Aug. 2021 – Present)
         - Advisors : Prof. Zeynep Akata and Prof. Cordelia Schmid

- Master’s degree in EE, Seoul National University (2018 – 2020)
         - Advisor : Prof. Jungwoo Lee
         - Best dissertation award

- Bachelor’s degree in EE, Seoul National University (2011 – 2018)
         - Full tuition, National Scholarship for Science and Engineering
         - Two-year leave of absence for mandatory military service (2013 – 2015)


Work Experience :

- Research intern at Alsemy (Mar. 2021 - Jun. 2021)
         - Startup company developing AI-based semiconductor modeling software

- Research intern at NAVER AI Lab (Aug. 2020 - Feb. 2021)
         - Mentors : Dr. Junsuk Choe and Dr. Seong Joon Oh





Large Loss Matters in Weakly Supervised Multi-Label Classification

TL;DR: Observing that the memorization effect also occurs in partially labeled multi-label classification, we reject or correct labels with large losses during training to prevent the model from memorizing them.

Youngwook Kim*, Jae Myung Kim*, Zeynep Akata, and Jungwoo Lee
CVPR 2022 [paper] [code]
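As a rough illustration of the large-loss idea in the TL;DR above, here is a minimal sketch of rejecting large-loss observed labels in a partially labeled multi-label batch. This is not the released implementation; the tensor names, the observed_mask convention, and the fixed reject_ratio are assumptions made only for this example.

```python
import torch
import torch.nn.functional as F

def large_loss_rejection(logits, partial_labels, observed_mask, reject_ratio=0.02):
    """Ignore the largest-loss observed labels in a partially labeled
    multi-label batch, so the model does not memorize likely-noisy entries.

    logits, partial_labels, observed_mask: tensors of shape (batch, num_classes).
    observed_mask is 1 where a label was actually annotated, 0 otherwise.
    """
    # Per-label binary cross-entropy, kept only on observed entries.
    loss = F.binary_cross_entropy_with_logits(logits, partial_labels, reduction="none")
    loss = loss * observed_mask

    # Threshold: the smallest loss among the top `reject_ratio` fraction of observed losses.
    observed_losses = loss[observed_mask.bool()]
    k = max(1, int(reject_ratio * observed_losses.numel()))
    threshold = observed_losses.topk(k).values.min()

    # Reject (zero out) losses at or above the threshold before averaging.
    keep = (loss < threshold).float() * observed_mask
    return (loss * keep).sum() / keep.sum().clamp(min=1.0)
```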






Keep CALM and Improve Visual Feature Attribution

TL;DR: A self-explainable model obtained by a simple modification of CAM, yielding better explainability.

Jae Myung Kim*, Junsuk Choe*, Zeynep Akata, and Seong Joon Oh
ICCV 2021 [paper] [code]
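For background on the modification mentioned above: CALM builds on CAM (class activation mapping), which weights the final convolutional feature maps by the classifier weights of the target class. Below is a minimal sketch of vanilla CAM only, not the CALM method itself; the tensor shapes are assumptions for illustration.

```python
import torch

def class_activation_map(features, classifier_weight, class_idx):
    """Vanilla CAM: weight the last conv layer's feature maps by the target
    class's classifier weights and sum over channels.

    features: (C, H, W) activations from the last conv layer.
    classifier_weight: (num_classes, C) weights of the linear layer that
        follows global average pooling.
    """
    w = classifier_weight[class_idx]               # (C,)
    cam = torch.einsum("c,chw->hw", w, features)   # weighted sum over channels
    cam = torch.relu(cam)                          # keep positive evidence only
    return cam / cam.max().clamp(min=1e-8)         # normalize to [0, 1]
```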






REST: Performance Improvement of a Black Box Model via RL-based Spatial Transformation

TL;DR: Studying the robustness of a black-box model to geometric transformations and improving it via RL-based spatial transformation.

Jae Myung Kim*, Hyungjin Kim*, Chanwoo Park*, and Jungwoo Lee
AAAI 2020 [paper]






Exploring linearity of deep neural network trained QSM: QSMnet+

TL;DR: Better estimation of QSM, a quantitative approach for measuring magnetic susceptibility using MRI.

Woojin Jung, Jaeyeon Yoon, Sooyeon Ji, Joon Yul Choi, Jae Myung Kim, Yoonho Nam, Eung Yeop Kim, and Jongho Lee
NeuroImage 2020 [paper]






Sampling-based Bayesian Inference with Gradient Uncertainty

TL;DR: Efficiently estimating predictive uncertainty by incorporating gradient uncertainty into posterior sampling.

Chanwoo Park, Jae Myung Kim, Seok Hyeon Ha, and Jungwoo Lee
NeurIPS 2018 Workshop [paper]






Contact :


    goldkim92@gmail.com