
Hi there! I’m Nick (Seungheon) Oh - a researcher working at the intersection of artificial intelligence and human understanding. My work focuses on how we might conceptualise, build, and justify artificial intelligence in ways that advance our understanding at multiple levels - from technical mechanisms to real-world phenomena.

My journey from Politics and Economics at the London School of Economics and Political Science to NLP engineering at Numen Capital revealed a fundamental tension in AI development: while deep learning prioritised predictive accuracy (ŷ), my social science training emphasised understanding mechanisms and causation (β̂). This gap inspired me to found socius - to bring social science principles of explanation and understanding to AI development.

Today, as a researcher at socius, I examine artificial intelligence across multiple levels of understanding - from how AI systems reflect real-world phenomena, to how we build theoretical frameworks bridging machine and human cognition, to how individual components work together to create intelligent behaviour.

  • Theoretical Understanding: Developing System 1.5, a framework that bridges human and machine cognition by regulating the interplay between intuitive (System 1) and analytical (System 2) processing in artificial intelligence. This work draws inspiration from how human experts navigate between quick pattern recognition and deliberate analysis.

  • Phenomenological Understanding: Proposing Computational Interpretabilism, a philosophical framework that establishes how post-hoc explanations from AI systems can generate valid scientific insights. This work provides epistemological foundations for using explainable AI in scientific discovery.

  • Mechanistic Understanding: Investigating how fundamental computational principles emerge in artificial neural networks through the lens of renormalisation theory, drawing parallels between physical systems and deep learning architectures to better understand how AI systems learn and generalise.

To support social scientists in their research, I develop open-source Python libraries that make data more accessible. These include sentibank, which provides theory-driven sentiment analysis capabilities, and RedditHarbor, which streamlines social media data collection - enabling researchers to focus on understanding phenomena rather than technical implementation.

I believe the future of AI lies not just in improving performance metrics, but in creating systems that think in ways we can naturally understand and trust. My research aims to bridge the gap between technical excellence and human comprehension, working toward AI systems that enhance rather than mystify our understanding of complex phenomena.

Please feel free to reach out if you ever want to chat more about the fascinating intersection of AI and the social sciences!

