Nick Oh
Researcher @ socius
Hi there! I’m Nick (Seungheon) Oh - a researcher working at the intersection of artificial intelligence and human understanding. My work focuses on how we might conceptualise, build, and justify artificial intelligence in ways that advance our understanding at multiple levels - from technical mechanisms to cognitively plausible frameworks.
My journey from Politics and Economics at the London School of Economics and Political Science to NLP engineering at Numen Capital revealed a fundamental tension in AI development: while deep learning prioritised predictive accuracy (y-hat), my social science training emphasised understanding mechanisms and causation (b-hat). This gap inspired me to found socius - to bring social science principles of explanation and understanding to AI development.
Today, as a researcher at socius, I focus on establishing solid theoretical groundwork for artificial intelligence development and deployment - from developing theoretical frameworks that bridge machine and human cognition (e.g., cognitively plausible AI), to justifying how XAI systems can generate scientific insights about real-world phenomena (e.g., philosophy of science), to analysing how individual components interact to create intelligent behaviour (e.g., mechanistic interpretability).
- Theoretical Foundation: Developing System 1.5, a framework that bridges human and machine cognition by regulating the interplay between intuitive (System 1) and analytical (System 2) processing in artificial intelligence. This work draws inspiration from how human experts navigate between quick pattern recognition and deliberate analysis.
- Philosophical Foundation: Proposing Computational Interpretabilism, a philosophical framework that establishes how post-hoc explanations from AI systems can generate justified scientific insights. This work provides epistemological foundations for using explainable AI in scientific discovery.
- Mechanistic Foundation: Exploring how fundamental computational principles emerge in artificial neural networks through the lens of renormalisation theory, drawing parallels between physical systems and deep learning architectures to better understand how AI systems learn and generalise.
Additionally, to support social scientists in their research, I develop open-source Python libraries that make research data more accessible. These include sentibank, which provides theory-driven sentiment analysis capabilities, and RedditHarbor, which streamlines social media data collection - enabling researchers to focus on understanding phenomena rather than technical implementation.