Nick Oh

Researcher @ socius labs

Hi there! I’m Nick (Seungheon) Oh – a researcher experimenting at the intersection of every field that has ever studied “thinking” and every machine that’s trying to “think”.

Basically, I’m interested in two experiments: whether the sciences that explain humans can improve machines (not just their performance, but how they collaborate with us), and whether machines can test human theories at otherwise impossible scales.

After studying Politics and Economics at LSE, I worked as a Junior NLP Engineer at Numen Capital while sketching out socialscience.ai – a project to bring AI tools to social scientists. But building it revealed the more interesting question was the reverse: instead of AI serving social science, what if the disciplines that studied humans could advance machine intelligence? That flip became socius labs, now an independent research lab supported by LSE and LSE Generate exploring this convergence.

My current research spans metacognitive architectures for machines (Monitor-Generate-Verify; System 1.5; Before you 〈think〉, monitor), philosophical arguments for why AI’s imperfect explanations can still advance knowledge (In Defence of Post-hoc Explainability), and research infrastructure (PETLP, sentibank, RedditHarbor). Increasingly, I’m fascinated by whether machines have subjective “feelings” we can measure, and by how AI agent societies might become laboratories for testing different theories.

selected publications

  1. ICWSM
    sentibank: A Unified Resource of Sentiment Lexicons and Dictionaries
    Nick Oh
    Proceedings of the 18th International AAAI Conference on Web and Social Media (ICWSM), 2024
  2. NeurIPS
    System 1.5: Designing Meta-cognition in Artificial Intelligence
    Nick Oh and Fernand Gobet
    System-2 Reasoning at Scale Workshop at the 38th Annual Conference on Neural Information Processing Systems (NeurIPS), 2024
  3. NeurIPS
    Position: In Defence of Post-hoc Explainability
    Nick Oh
    Interpretable AI: Past, Present and Future Workshop at the 38th Annual Conference on Neural Information Processing Systems (NeurIPS), 2024
  4. COLM
    Before you 〈think〉, monitor: Implementing Flavell’s Metacognitive Framework in LLMs
    Nick Oh
    The First Workshop on the Application of LLM Explainability to Reasoning and Planning at the Conference on Language Modeling (COLM), 2025