From Markets to Minds - My Personal Journey as a Researcher
I’m a researcher with an unconventional background - a BSc in Politics and Economics - who now researches AI systems. Here’s my journey, with the unexpected turns and serendipitous discoveries that shaped my approach to artificial intelligence.
It’s 2020, and I’m nestled in a quiet corner on the second floor of the LSE library, surrounded by papers about the 2008 financial crisis. Prof. Schonhardt-Bailey’s GV309 course, Politics of Money and Finance in Comparative Perspective, has led me down a fascinating rabbit hole. Until now, most of my Economics courses have been a parade of numbers, graphs, and equations. But GV309 tells a different story – one of institutions, ideas, and human interests shaping financial decisions. That contrast sparked something in me. The same market event could look entirely different depending on whether you focused on the mathematical models or the human elements driving them. (In hindsight, perhaps I should have enrolled in behavioural economics - it might have shown me this duality sooner.)
◄◄ MARKET DUALITY ►►
A Tale of Numbers and Human Nature
{
"market_model": {
"left_hemisphere": {
"type": "quantitative analytics",
"components": [
"modeling",
"graphs",
"equations"
]
},
"right_hemisphere": {
"type": "human behaviour",
"components": [
"ideologies",
"interests",
"institutions"
]
},
"central_display": "integrated market visualisation"
}
}
After graduating from LSE, I was offered a role as a Junior NLP Engineer at Numen Capital - a stroke of fortune and timing for someone without a Computer Science degree. Working on the Knowsis research team, I was building NLP pipelines and transformer-based classifiers to analyse market sentiment on ESG. While our classifier boasted an impressive 97% accuracy, what truly fascinated me was a more fundamental question: why did we need sentiment analysis at all? If markets were purely rational, why would sentiment matter? Yet there I was, developing rule-based sentiment analysers and domain-specific dictionaries because market movements weren’t just about numbers – they were about human psychology, emotions, and sometimes deeply irrational behaviour. This curiosity about the foundational elements behind market movements would later become one of the founding principles of socius.
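A rule-based analyser of the kind I was building can be sketched in a few lines. The lexicon entries, negation list, and scoring rule below are illustrative stand-ins, not Numen’s actual dictionaries:

```python
# Hypothetical sketch of a rule-based sentiment analyser with a
# domain-specific dictionary; entries and scores are illustrative only.
FINANCE_LEXICON = {
    "rally": 1.0, "upgrade": 0.8, "beat": 0.6,
    "downgrade": -0.8, "default": -1.0, "miss": -0.6,
}
NEGATORS = {"not", "no", "never"}

def score(text: str) -> float:
    """Average lexicon score over matched tokens, flipping polarity after a negator."""
    tokens = text.lower().split()
    total, hits = 0.0, 0
    for i, tok in enumerate(tokens):
        if tok in FINANCE_LEXICON:
            polarity = FINANCE_LEXICON[tok]
            if i > 0 and tokens[i - 1] in NEGATORS:
                polarity = -polarity  # "not beat" reads as negative
            total += polarity
            hits += 1
    return total / hits if hits else 0.0
```

The appeal of this approach is precisely that every score has a traceable origin: unlike a transformer’s logits, a dictionary hit can be pointed at and argued with.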
During my time at Numen, I began noticing something curious about how we study sentiment in text. Everyone was rushing to build bigger, more complex models, but they were missing something fundamental: What exactly is sentiment? How do we humans understand and process emotions in the first place? This wasn’t just academic curiosity anymore - it was becoming clear that if we’re going to build AI systems that understand human emotions, we should first understand how we ourselves process them.
Building Bridges: From $\hat{Y}$ to $\hat{B}$
This tension between understanding and performance was everywhere in AI development. A percentage-point gain in accuracy was celebrated more than building models from robust theories. Coming from a social science background, I was trained to prioritise understanding above all else. The essence was $\hat{B}$: mechanism, mediation, and moderation - we weren’t satisfied just knowing that X affected Y; we wanted to unravel the how and why, to peek inside the black box of causation. Yet the core of deep learning seemed fixated on $\hat{Y}$: optimisation, regularisation, evaluation. As Rahal, Verhagen and Kirk (2022) put it, “social scientists have previously had a preoccupation with parsimonious explanation and inferential $\hat{B}$, as opposed to predictive $\hat{Y}$ questions”.
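The contrast can be made concrete with a single linear model: the same fit yields both $\hat{B}$ (the coefficients a social scientist interrogates) and $\hat{Y}$ (the predictions a benchmark scores). A minimal sketch on simulated data, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = 2*x1 - 1*x2 + small noise
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Add an intercept column and estimate the coefficients by ordinary least squares.
X1 = np.column_stack([np.ones(len(X)), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

# The B-hat question: what are the effects of x1 and x2 on y?
print(beta_hat)  # approximately [0, 2, -1]

# The Y-hat question: how well do we predict y?
y_hat = X1 @ beta_hat
print(np.mean((y - y_hat) ** 2))  # in-sample MSE, roughly the noise variance
```

The two questions share an estimator but diverge in what counts as success: recovering the data-generating coefficients versus minimising prediction error on whatever data arrives next.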
I wanted to close this gap. I wanted to apply the principles of social science to AI development. So I started my own lab - originally named, rather straightforwardly, ‘socialscience ai’, before becoming socius. In hindsight, this was a bold move. Other than quasi-lab experience in industry, I had neither a PhD in Computer Science nor proper academic lab experience. But looking back, it wasn’t that I was at the peak of confidence on the Dunning-Kruger curve - it was more of an ignorant courage. I knew I wasn’t fully ready, but at the time, I felt I needed to - and wanted to. So in October 2022, I took the first steps toward building socius.
Of course, the first year was the most challenging. It was like doing a PhD with no supervision - teaching myself, guiding myself through the uncharted. I started by diving deep into how researchers had approached sentiment analysis over the years. What fascinated me was how different academic traditions had conceptualised and operationalised sentiment - each dictionary representing years of careful theoretical work and empirical validation. That’s how sentibank was born - not just as another sentiment analysis tool, but as an encyclopedic hub bringing together these diverse approaches to understanding human emotion. While not strictly an AI system itself, sentibank represented something profound: an attempt to assist black-box AI models with theoretically-grounded lexicons. These weren’t just arbitrary scores; we understood their origins, their theoretical foundations, their why. In many ways, sentibank embodied what would become a core principle at socius: creating AI systems that think in more humanly understandable ways.
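To give a flavour of the idea (this is not sentibank’s actual API; the lexicon names, entries, and scores below are hypothetical), an encyclopedic lexicon hub boils down to querying many provenance-tagged dictionaries uniformly:

```python
# Hypothetical sketch of a lexicon hub: several academic lexicons,
# each with documented origins, queried through one interface.
LEXICONS = {
    "finance_2011": {"loss": -0.9, "gain": 0.7},
    "general_2004": {"loss": -0.5, "gain": 0.5, "happy": 0.8},
}

def lookup(word: str) -> dict[str, float]:
    """Return each lexicon's score for a word, keeping provenance attached."""
    return {name: lex[word] for name, lex in LEXICONS.items() if word in lex}
```

Keeping provenance in the return value is the point: the same word can legitimately score differently under different theoretical traditions, and a downstream model can be told which tradition it is leaning on.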
From Market Psychology to Cognitive Science
The deeper I delved into human emotion and decision-making while studying sentiment, the more I found myself drawn into cognitive science. I became fascinated by how experts make decisions – not the careful, analytical choices we often assume, but the quick, intuitive judgments that come from years of experience.
I still remember the day I first read about System 1 and System 2 thinking. It was like finding a missing puzzle piece. The idea that human thinking isn’t just logical or emotional, but a complex dance between intuition and analysis – it explained so much about the questions I had been grappling with.
◄◄ THE TWO MINDS ►►
A Portrait of Dual Process Theory
{
"cognitive_framework": {
"processors": {
"left_unit": "System-1",
"right_unit": "System-2"
},
"theorists": "Daniel Kahneman, Jonathan St B. T. Evans"
}
}
This led to my research collaboration with Prof. Fernand Gobet. Working together on System 1.5, we examined how expert cognition could inform AI development. Just as the social sciences seek to explain phenomena through multiple levels of analysis, we proposed a framework that mirrors how human experts navigate between intuitive and analytical thinking. This wasn’t just about building faster or more accurate systems - it was about creating AI that processes information in ways we can naturally understand and explain, reflecting my personal philosophy that AI development must balance theory-driven approaches with multi-scale analysis.
◄◄ SYSTEM 1.5 ►►
The Dance of Intuition and Analysis
{
"cognitive_interface": {
"left_entity": {
"type": "System 1",
"attributes": "rapid response core",
"signature": "scarlet energetic stance"
},
"mediator": {
"type": "System 1.5",
"function": "metacognitive regulation"
},
"right_entity": {
"type": "System 2",
"attributes": "analytical processing unit",
"signature": "azure contemplative stance"
}
}
}
The development of System 1.5 sparked an intriguing question: if human experts can provide valid explanations after making intuitive decisions, why do we demand complete mechanical transparency from AI systems? After all, experts across domains routinely make effective decisions while their subsequent explanations often involve post-hoc rationalisation rather than complete, factive accounts of their decision-making processes. This insight led me to develop Computational Interpretabilism, a philosophical framework that challenges conventional wisdom about AI interpretability. While many researchers insisted on building inherently transparent systems, I noticed an interesting parallel with human expertise: just as experts often construct valid explanations after making intuitive decisions, perhaps post-hoc interpretability in AI wasn’t a weakness, but rather a natural reflection of how understanding emerges from complex systems. This philosophical stance ultimately defends post-hoc explainability in AI systems, rejecting the assumption that complete mechanistic understanding is necessary for meaningful scientific insight.
Looking back, I can see how each step of this journey was necessary. Understanding market emotions led to studying sentiment, which led to exploring human cognition, which in turn is helping us build more human-centered AI systems. Each piece connected to the next in ways I couldn’t have predicted but now seem almost inevitable.
That undergraduate student in the LSE library, puzzling over market crashes and human behaviour, could never have predicted where this journey would lead. But that’s the beauty of research driven by genuine curiosity – it takes you places you never expected to go. The questions that drive me today have evolved from those early wonderings but maintain the same core focus on human understanding:
- How can we develop AI systems that learn and explain their knowledge more like humans do?
- How can we bridge the gap between mechanistic and holistic understanding in AI?
- What can cognitive science teach us about building AI systems that process information more naturally and effectively?
- How might we implement theory-driven AI development at scale while maintaining the human element?
There’s a peculiar poetry in how life unfolds. What began as a puzzled undergraduate’s musings in the LSE library has bloomed into something I never could have sketched out in advance - rather like how the most fascinating AI behaviours emerge from seemingly simple principles. Each step of this journey, from market sentiment to machine cognition, has felt less like following a map and more like reading a compass: knowing roughly where North is, but discovering the actual terrain as you walk. Perhaps that’s the real gift of an unconventional path - it teaches you to see connections that others might miss, to find new ways through old problems.