Hello world! 👋
I’m Minwu, a researcher at New York University Abu Dhabi, where I’m fortunate to be advised by Prof. Keith Ross. I also completed my undergraduate degree here, majoring in Computer Science.
My research focuses on LLM Reasoning. Recently, I have been particularly interested in:
- Identifying and addressing the limitations of RLVR (reinforcement learning with verifiable rewards).
- Expanding LLMs’ ability to solve ‘hard’ problems (those with pass@k = 0%) (link).
- Compositional generalization and creativity.
- Synthetic data that better capture the human cognitive process during reasoning.
- Test-time scaling methods that enable more effective reasoning with longer memory.
Before moving into LLM research, I worked in computational finance, focusing on corporate finance and macroeconomics. Earlier still, I worked on a startup project.
For a detailed look at my background and experience, please check out my CV. If you would like to talk about research or a potential collaboration, feel free to ping me at mwk300[at]nyu[dot]edu :)
News
- 10.2025: Two papers accepted at MATH-AI @ NeurIPS 2025!
- 09.2025: One paper accepted at the BlackboxNLP Workshop @ EMNLP 2025!
- 08.2025: One paper accepted at the EMNLP 2025 Main Conference! See you in Suzhou, China!
- 06.2025: Preprint for my new paper Layer Importance for Mathematical Reasoning is Forged in Pre-Training and Invariant after Post-Training is out!
- 05.2025: Preprint for my new paper Reinforcement Learning vs. Distillation: Understanding Accuracy and Capability in LLM Reasoning is out!
- 05.2025: Preprint for my new paper Warm Up Before You Train: Unlocking General Reasoning in Resource-Constrained Settings is out!
- 02.2025: Preprint for my new paper Mathematical Reasoning in Large Language Models: Assessing Logical and Arithmetic Errors across Wide Numerical Ranges is out!
- 10.2024: My paper Interpretable Machine Learning Model for Predicting Activist Investment Targets is published at The Journal of Finance and Data Science!
- 09.2024: Started working as a graduate researcher at NYUAD!
Publications & Preprints
[6] Difficulty-Rebalancing Prefix for Training Reasoning LLM (working)
Minwu Kim and Keith Ross
To be submitted to ICML 2026.
[5] Reinforcement Learning vs. Distillation: Understanding Accuracy and Capability in LLM Reasoning (link)
Minwu Kim*, Anubhav Shrestha*, Safal Shrestha, Aadim Nepal, and Keith Ross
MATH-AI @ NeurIPS 2025.
[4] Warm Up Before You Train: Unlocking General Reasoning in Resource-Constrained Settings (link)
Safal Shrestha, Minwu Kim, Aadim Nepal, Anubhav Shrestha, and Keith Ross
EMNLP 2025 Main Conference.
[3] Layer Importance for Mathematical Reasoning is Forged in Pre-Training and Invariant after Post-Training (link)
Aadim Nepal, Safal Shrestha, Anubhav Shrestha, Minwu Kim, Jalal Naghiyev, Ravid Shwartz-Ziv, and Keith Ross
MATH-AI @ NeurIPS 2025; BlackboxNLP @ EMNLP 2025.
[2] Mathematical Reasoning in Large Language Models: Assessing Logical and Arithmetic Errors across Wide Numerical Ranges (link)
Safal Shrestha*, Minwu Kim*, and Keith Ross
Preprint.
[1] Interpretable Machine Learning Model for Predicting Activist Investment Targets (link)
Minwu Kim*, Sidahmed Benabderrahmane, and Talal Rahwan
The Journal of Finance and Data Science, 2024.
I Like
- Klein Blue
- IU
- Tottenham Hotspur (Spursy who?)
- Tyler, the Creator
- Dijon
- Christopher Nolan
- Bitcoin
To know more about my taste, you are welcome to visit my personal website (Korean alert, though).
