Thoughts on Inference-time Compute
Published:
This snippet is from an Nvidia fireside chat with Ilya Sutskever and Jensen Huang (38:30~).
OpenAI’s o1 announcement might just be the event that triggered the most dopamine for me recently. Over the past two days, I stayed up late into the night researching it and trying it out myself. It might sound somewhat hyperbolic, but I can’t help but feel that I am witnessing one of the most pivotal moments in human history. With both excitement and fear, I’ll take a moment to organize my thoughts about o1.
A single technology can hold vastly different meanings depending on one’s perspective. The more you know—and the more imagination and insight you possess—the deeper that meaning becomes. The more I observe LLMs, the more this thought resonates. Some dismiss LLMs as limited because they simply predict the next token autoregressively. But I disagree. The potential of a machine that has mastered language is immense and will advance much further, depending on how much imagination we’re willing to apply.
What follows are some inconclusive thoughts that nonetheless seem worth writing down.