Jihoon Tack
Contact: jihoontack@{kaist.ac.kr, gmail.com}
I am on the job market for industry roles.
I am a final-year Ph.D. candidate at KAIST, advised by Jinwoo Shin, and a Research Intern at Meta FAIR, advised by Xian Li. Before that, I closely collaborated with (or interned under) Jonathan Richard Schwarz at Harvard University, Yee Whye Teh at the University of Oxford, and Jaeho Lee at POSTECH.
My research centers on developing efficient and robust (or safe) machine learning frameworks to address the emerging challenges of large models.
In particular, I focus on efficient pre-training and adaptation algorithms that leverage prior knowledge extracted from multiple tasks (e.g., meta-learning), combined with efficient training and inference schemes (e.g., data pruning, weight pruning, quantization, and efficient parametrization).
Recently, I have extended my research to Large Language Models (LLMs), working on new pre-training objectives, interpretability, reasoning, and LLM agents.
I am fortunate to be a recipient of the Google Ph.D. Fellowship in machine learning and the Qualcomm Innovation Fellowship (2020, 2024) for two of my papers.
Topics of interest (publications may appear under multiple topics):
* denotes equal contribution