Living in a Post-Truth World

A conversation with James Siderius, assistant professor of business administration, on AI, social media, and the misinformation problem.

Long before the Internet was conceived, Mark Twain famously said, “A lie can travel halfway around the world while the truth is putting on its shoes.” You can find the quip attributed to him all over the World Wide Web. Ironically, Twain didn’t say it, and there are myriad claims online about who originally did.

Tuck assistant professor James Siderius, who also serves as the Wei-Chung Bradford Hu T’89 Faculty Fellow, is studying how and why misinformation runs rampant online, particularly via social media. Through rigorous theoretical and empirical research, he demonstrates why misinformation can spread further and faster on social media than the truth, and why it poses an existential threat to online platforms. What are we to do? Siderius is here to help.

What made you think to focus on this area of study?
I come from a math and engineering background and originally studied graphs and networks. My initial interest in social networks stemmed from a technical perspective, as they provide a rich source of data and fascinating mathematical challenges in network analysis. Over time, my focus shifted to broader societal issues, particularly how artificial intelligence (AI) shapes social media algorithms and the wider implications for society.

How much of what you study with social networks and social media is data and statistics versus psychology?
A lot of the research is grounded in qualitative observations about what we see happening on social media. Once we identify a phenomenon to examine, the focus shifts to quantitative methods, including designing models and analyzing new data sets. With that data, we build a model of human decision-making and try to understand what some of the implications of the model are, such as what it means for potential policymaking or regulation in social media.

How do you account for the nonquantitative aspects of what you’re studying?
That’s the hard part about modeling things that have a human component to them. We know that people get some utility from having other people share their content. Why do they receive this utility? We’re not sure. That’s where we really pull on a lot of the psychology and neuroscience literature. Our models rely on foundational assumptions about human behavior and social interactions, which we then translate into objective mathematical frameworks, an essential step in our process.

James Siderius teaches AI-Driven Analytics and Society in the Tuck MBA program, which explores the societal impact of AI and how AI is being used for analytics. He also teaches Optimization Modeling for Prescriptive Analytics. | Photo by Laura DeCapua

You’ve written about studying a particular large community group on Reddit—a “subreddit.” What prompted you to look into that one?
At a macro level, ideas like this often come out of conversations with my colleagues. In this case, some of us had been chatting about online misinformation and how it spreads. At that time, Reddit had recently banned a prominent subreddit notorious for spreading misinformation and hate speech. Reddit assumed that the people in it would scatter to other groups and eventually just disappear. But in reality, they didn’t. They went to a different group and spread misinformation there. We thought that was interesting and wondered whether it would be better to silo these communities instead of banning them—what Reddit calls a quarantine policy, meaning confining dangerous ideas and misinformation to a particular group because the people in that group aren’t going to change their minds anyway about, say, vaccination. So why introduce those ideas to others who may be influenced by them when you can instead let them live in their own echo chamber?

How can you develop quantitative ways to study qualitative observations?
That is the more micro level of this work, where we actually do some of these experiments and test our hypotheses. In this case, we tested them by classifying the kinds of language that were being used on Reddit in two different subreddits—one the more controversial, even radical group that was banned, and the other a related but more mainstream one—and we trained a language classifier to determine which of those two groups a given Reddit post came from.

Is this where artificial intelligence, or AI, comes in?
It was really more a natural language processing algorithm. We had it look at posts from the more controversial group and from the more mainstream group after the more controversial group was banned and predict which of those two groups the posts came from. We saw a huge spike in the number of mispredictions that posts were coming from the banned group, because the language in the more mainstream group had changed after the other group was banned—the entire sentiment of it had sort of converged and become something that was more like the banned group.
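For readers who want a concrete picture of this kind of test, here is a minimal sketch in Python. It assumes you already have three lists of raw post texts (pre-ban posts from the banned subreddit, pre-ban posts from the mainstream subreddit, and post-ban posts from the mainstream subreddit) and uses TF-IDF features with logistic regression as a stand-in classifier; it illustrates the general approach, not the actual classifier or data pipeline used in the research.

```python
# Minimal sketch (not the study's actual pipeline): train a simple text
# classifier on pre-ban posts from two subreddits, then measure how often
# post-ban posts from the mainstream subreddit are attributed to the banned one.
from typing import List

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def banned_like_rate(banned_pre: List[str],
                     mainstream_pre: List[str],
                     mainstream_post: List[str]) -> float:
    """Return the share of post-ban mainstream posts that the classifier
    attributes to the banned community's language."""
    texts = banned_pre + mainstream_pre
    labels = [1] * len(banned_pre) + [0] * len(mainstream_pre)

    # TF-IDF features (unigrams and bigrams) plus logistic regression stand in
    # for the language classifier described in the interview.
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, labels)

    # "Mispredictions": post-ban mainstream posts classified as banned-like.
    preds = clf.predict(mainstream_post)
    return sum(int(p) for p in preds) / len(preds)
```

A rising value of this rate after the ban would correspond to the spike in mispredictions Siderius describes, with the mainstream group's language drifting toward that of the banned community.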

So when people self-select into some of these groups, conditions are ripe for misinformation.
Yes, and it’s dangerous. When you think about the building of social networks in a pre–social media world, you think of forming connections with people who are naturally going to have more diverse opinions than you—coworkers, friends, neighbors. In online social networks, you’re going to be exposed to fewer countervailing viewpoints. And I think that’s hugely problematic because things like misinformation go unchecked: people don’t verify information and they’re more likely to share it without giving it a second thought because the people they’re interacting with happen to be cut from the same cloth.


You talked on Tuck’s Knowledge in Practice podcast about what social-media platforms want versus what users are doing. Is there any incentive for a for-profit entity to break an echo chamber?
It’s a deep question, and one that we’ve gone back and forth on. The initial answer you might think of is no, there isn’t. The idea is that the platform is there to make money. It wants to maximize engagement. Often, echo chambers that fuel misinformation or heighten emotional responses drive engagement, keeping users on the platform longer. But we’ve found several studies recently that show users are not so happy about how platforms are monetizing personal data and how algorithms are doing things like fostering addictive content or promoting sensational misinformation. And people are leaving some platforms and joining others that have a more socially responsible objective. Fairly recently, some social-media platforms have started to care a little bit more about being responsible for what they’re promoting online. And that might come at the cost of breaking up echo chambers that promote more engagement.

Do you see any way to regulate social media in the future?
I think the first thing is deciding whether you want to. My wife’s a lawyer, and she’ll give me the legal perspective on this and talk about section 230 of the Communications Act—how a social-media platform is not responsible for what people post on it, as long as the platform didn’t create the post itself. Revising section 230 to hold platforms accountable for the content they host could compel them to be more mindful of what they promote. From a more academic perspective, there are things you can do so that users are more in control of their own data and more aware of things that can happen on social media. One policy we advocate for is provenance, which would provide a traceable history of shared content, enabling users to verify its origins. For instance, with a quotation, it would tell you “here’s the original news story in which it occurred” and require social-media companies to actually provide that provenance as part of the original post, so people can more easily fact-check.
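To make the provenance idea more concrete, here is a minimal sketch of what a provenance record attached to a shared post might look like. The field names and structure are illustrative assumptions made for this article, not a proposed standard or any platform's actual schema.

```python
# Illustrative sketch of a provenance record attached to a shared post.
# Field names are assumptions for this example, not any platform's real schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProvenanceRecord:
    source_url: str       # the original news story or primary source
    published: str        # ISO 8601 publication date of the original
    quoted_text: str      # the excerpt being shared
    share_chain: List[str] = field(default_factory=list)  # accounts it passed through


@dataclass
class Post:
    author: str
    body: str
    provenance: ProvenanceRecord  # travels with the post so readers can fact-check
```

The key point is that the original source travels with the content, so a reader can trace a quotation back to the story in which it first appeared.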

Given both the excitement and fear about AI now, what would you like people to know about it?
I would say it is important to keep your eyes open to how AI is evolving and how tech companies are using it. It’s easy to overlook the nuances and assume tech companies are using AI solely to empower users. But often that’s not the case. Often we see companies exploiting certain human biases or designing algorithms that harm users by handling their data in ways that are not in their best interest. I think that when people carefully reflect on how they view the online experience and the role that AI is playing, they’re pretty good about holding tech companies responsible.

Can you talk about the Research-to-Practice seminar you’re teaching?
It’s called AI-Driven Analytics and Society, and it’s great! We dive into issues similar to those we’ve talked about here: the societal impact of AI, how AI is being used for analytics. It’s an engaging class to teach, and students bring insightful questions and innovative ideas to every session. It was also particularly illuminating for me to hear from them about how generative AI and other types of AI analytics affected their own work over the previous summer.

Are you planning any new research now?
Yes, I’ve just started a project about how AI usage in organizations is changing the nature of work, and how that could lead to undesirable outcomes, such as people becoming overly reliant on AI and basically unlearning the skills they need to do their job. That’s a big issue. For example, if someone just drops a new data set into ChatGPT and says, “Run this analysis,” and they haven’t learned the foundational skills of how to clean the data, fill in missing values, or critically evaluate the methods they’re using, they won’t be able to recognize the kinds of mistakes generative AI often makes. And if managers are also relying on AI, they won’t be able to recognize that something is incorrect either, and errors will propagate throughout the organization. So if you don’t understand what’s actually going on under the surface, you’re going to run into issues down the road.

This story originally appeared in print in the winter 2025 issue of Tuck Today magazine.