
Tuck Knowledge in Practice Podcast: AI, Social Media, and the Misinformation Problem

Tuck assistant professor James Siderius discusses the ethical challenges of AI and social media and his new elective AI-Driven Analytics and Society.

Social media has been both a blessing and a curse, giving us new ways to connect but also digital addiction and misinformation. How can we redesign the AI in social platforms so they are socially beneficial? That’s one of the main research questions that fascinates Tuck assistant professor James Siderius. In the final episode of season one of the Tuck Knowledge in Practice Podcast, Siderius talks about his interest in Artificial Intelligence and social media, some of the research he has done (and is doing), and his new elective AI-Driven Analytics and Society.
 
Research papers discussed:

Learning in a Post-Truth World. Management Science, 2022.
When Should Platforms Break Echo Chambers? Working paper.
When Is Society Susceptible to Manipulation? Management Science, 2022.


Our Guest

James Siderius is an assistant professor of business administration and the Wei-Chung Bradford Hu T’89 Faculty Fellow at the Tuck School of Business at Dartmouth. He has a broad array of research interests but is generally interested in the impact of artificial intelligence (AI) on business operations, often in strategic settings with multiple agents. His prior work has studied this in the context of social media, online review platforms, and two-sided matching markets. He uses game theory to study how agents incorporate information in these various environments, such as in the presence of strategically injected disinformation (bots) or when there is information overload. He is especially interested in how platform algorithms affect user behavior, learning, and the potential societal implications.

Transcript

[This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of the Tuck Knowledge in Practice Podcast is the audio record.]

Introduction, Media Clips: The man widely seen as the godfather of artificial intelligence has quit his job at Google, warning of the dangers of AI. My worst fears are that we cause significant harm to the world. I think that could happen in a lot of different ways. Some are now raising alarms about how these advances could be used to create and spread misinformation. The risks that are happening with AI right now: discrimination, misinformation and disinformation, interference in our elections. I think if this technology goes wrong, it can go quite wrong. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.

[Podcast introduction and music]

Kirk Kardashian: Hey, this is Kirk Kardashian and you’re listening to Knowledge in Practice, a podcast from the Tuck School of Business at Dartmouth. In this podcast, we talk with Tuck professors about their research and teaching and the story behind their curiosity. How can AI combat disinformation campaigns? How does social media contribute to the spread of fake news? When should platforms break echo chambers? These and other questions are at the heart of research by James Siderius, a professor in the Operations and Management Science group at Tuck. In this episode, I talk with James about his work on AI and how it can be deployed in a socially beneficial way in all kinds of contexts. We also talk about his new course, a Research-to-Practice seminar called AI-Driven Analytics and Society. James joined Tuck in the summer of 2023. He has a bachelor’s degree in mathematics from Princeton University and a master’s degree and PhD in electrical engineering and computer science from MIT. So, James, welcome to Tuck, and thank you for sitting down here with us. You have some really interesting research about artificial intelligence and social media and platforms and echo chambers and manipulation of society via misinformation, all kinds of things like that, which is super pertinent to a lot of challenges the world is facing today. Tell us in your own words what your research interests are and sort of how you got into those things.

James Siderius: So I mostly focus on online platforms, but specifically in sort of the modern world where platforms are run through algorithms. Right. And these algorithms are often, you know, there’s some underlying motive or financial objective that a platform is trying to achieve. And so it does this in a way that maybe matches users to content creators, say on TikTok, or maybe makes certain Amazon reviews more high profile than others. And, you know, for the most part, we’ve seen the positive change that these types of algorithms have given. But often what’s not as clear are the sort of negative societal implications. So there can be these sort of hidden forces that exist, that are maybe less obvious on the surface, that lead to kind of, uh, negative user behavior. For instance, you can think of things like digital addiction or, you know, the platform business models that encourage people to, like, be on Instagram all day long because they’re checking to see if, you know, they had more likes. And so the question is, you know, how do we think about redesigning these systems, and in particular, how to redesign the AI in a way that’s also socially beneficial.

Media Clips: Well, fake news, you know, we heard that term so many times during the campaign. It’s become a household term. And it describes the stories that are inaccurate or, as you know, completely fabricated and made up. So last night, the granddaddy of real news, television magazine 60 Minutes, aired its investigation on fake news and the impact on the presidential election.

James: And I really got into this sort of at the base level of social media in the wake of the 2016 US election, when we saw social media being a kind of breeding ground for misinformation, and specifically how the AI, you know, wasn’t necessarily doing something nefarious. I thought it was just trying to maximize user engagement and how people, uh, kind of use social media, but by doing so it was actually making misinformation spread more. And so because of that, we want to think about, okay, what exactly are these AI systems doing? And can we sort of, you know, change them or shape them in a way that they’re not causing these negative, unintended consequences that we don’t always recognize immediately? So I have several papers that touch on that and offshoots of it, which look at different online media business models and, more generally, online platforms.

Kirk: Yeah, I was looking at some of your papers and they have some provocative titles, and I’ll start with one of them called Learning in a Post-Truth World, which was published in Management Science in 2022. Tell us a little bit about that paper.

James: Yeah, happy to. So in that paper, what we’re really looking at: there’s this huge literature that’s typically in what they call the social learning or social network space, which is about how people learn from each other and form opinions. The typical model is there’s, you know, a bunch of people in a room, say at a cocktail party, and they all have some private information. So something that they read about that maybe others don’t know. And they walk around and they exchange ideas. And kind of what is common in all that literature is that eventually people learn what the truth is. So they exchange opinions with enough people, and the way they process information could be, you know, someone that’s kind of what we call fully Bayesian, which is that they’re doing these complex calculations in their head, updating based on information, making sure not to count redundant information too heavily, not having things like confirmation bias. And these agents, of course, tend to pick up on what the truth is faster than those that use very simple rule-of-thumb kinds of things, like, oh, I just take the average of what everyone happens to think on a given issue, right? But what we show in this paper is that the landscape changes considerably when we inject misinformation. So we inject it; we tell the people in the room at the cocktail party that, oh, 20% of what is being spread is actually just based on lies.

Yeah. What’s interesting is that the revelation that there is misinformation in the system can actually break down these learning mechanisms. And what’s maybe more surprising is that it breaks down faster for the smart people, these people that are doing the complex Bayesian calculations, than it does for the rule-of-thumb learners. What we essentially discover is that there’s this effect where you start to dig in your heels about whatever prior belief you came into the room with, because you can sort of ex-post justify things that go against your opinion as, oh, that is misinformation. Right. So people in the room basically just never see eye to eye because they’re aware of the fact that lies exist. Right. It doesn’t even matter if the lie is ever uncovered. Just the existence of it is enough to break down all sorts of learning in society. And I think we’ve seen a lot of this in kind of this post-truth world where, you know, there’s alternative facts and people are able to justify whatever predisposition they have about a given topic just based on, you know, what their current thoughts are, because they can dismiss everything that disagrees with them as misinformation.
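To make that “digging in” intuition concrete, here is a minimal toy sketch in Python. It is not the model from the paper: a listener who knows that some fraction of reports may be fabricated discounts every contrary report, so a confident prior barely moves even after many of them. The signal accuracy, prior, and bot share below are invented purely for illustration.

```python
# Toy illustration (not the paper's model): a known chance of fabricated
# reports makes a Bayesian listener discount contrary evidence.

def posterior_after_contrary_reports(prior_a, n_reports, accuracy, bot_share):
    """Belief that the state is A after n reports claiming 'B', when a
    fraction `bot_share` of reports are fabricated and always claim 'B'."""
    p_b_given_a = (1 - bot_share) * (1 - accuracy) + bot_share  # honest mistake or bot
    p_b_given_b = (1 - bot_share) * accuracy + bot_share        # honest truth or bot
    belief = prior_a
    for _ in range(n_reports):
        num = belief * p_b_given_a
        belief = num / (num + (1 - belief) * p_b_given_b)       # Bayes' rule per report
    return belief

if __name__ == "__main__":
    prior, accuracy = 0.9, 0.7   # confident listener; honest reports are right 70% of the time
    for bot_share in (0.0, 0.3, 0.6):
        trajectory = [round(posterior_after_contrary_reports(prior, n, accuracy, bot_share), 3)
                      for n in (0, 2, 4, 6, 8, 10)]
        print(f"bot share {bot_share:.0%}: belief in A after 0,2,...,10 contrary reports -> {trajectory}")
```

With no bots, the contrary reports quickly overturn the prior; once the listener believes a large share of reports are lies, the same evidence barely dents it.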

Media Clip: Or you are fake news. Sir, go ahead. Can you state categorically that nobody... No, Mr. President-elect, that’s not appropriate.

Kirk: So it sounds like what you’re saying is that once misinformation is injected into the system, it kind of breaks down the trust that people have for each other. And they sort of stop listening to each other.

James: Precisely. They stop seeing eye to eye. Right. So in the previous version, you have someone that’s coming to you and exposing you to a different perspective. You’re appreciative of this perspective because it’s showing you something that’s against your current belief. Right. This information is actually useful when you do something like Bayesian updating, when you’re updating about a particular belief or topic. But when you know that some of the content in the room, some of the ideas that people are peddling, are actually misinformation, you’re more likely to believe that the misinformation is coming from the opposite perspective. Right. So if you happen to be maybe slightly left leaning, you might look at Fox News and say, oh, everything they’re saying over there is lies. Whereas if you happen to be slightly more conservative, you might look at all mainstream media and say, oh, those are all lies. Right. And so the same exact thing happens here, where it’s not even about what is true and what is not. It’s just the existence of misinformation, and the idea that it’s out there, that is enough to break down communication across lines. And so in the paper we also discuss some solutions to prevent this. Right. Some solutions to open dialogue back up in a productive way and prevent this breakdown in learning that happens when we introduce misinformation.

Kirk: Okay. And so are these solutions that platforms and media companies could implement, theoretically?

James: That’s the hope.

James: Yeah, so the hope would be that... you know, one of the other papers you mentioned is about echo chambers, but there’s been a big push to modify the algorithms in a way that makes you exposed to content that you’re not otherwise likely to see. And there’s been a big push for this. But as this paper shows, if you push this too far, then it actually is counterproductive. Right. If you push ideas that are too far away from someone’s initial opinion, they’re going to see these and just dismiss them as misinformation, whether they are misinformation or not. And so what we suggest is a milder version of this, which is that, okay, it’s good to expose people to opinions that are maybe slightly different from what they typically would expose themselves to, but you can’t do this to such an extreme that it actually breaks down any sort of trust in the platform. And so that balance is very critical to making it effective.

Kirk: That’s really interesting. I’m sitting here trying to kind of picture what my, uh, social media feed would look like in that kind of world.

James: The hope is you don’t even notice it happening. Right? 

Kirk: Right, the hope is I don’t notice it happening. But say I’m a left-leaning person, and I guess a lot of the things that are being fed to me kind of, uh, echo my sensibilities, my philosophies, and things like that. Would I start seeing more stories that are a little bit more critical of, say, the Biden administration? Is that kind of what …

James: … of what I’m thinking. Right. And so instead of things coming out that are, you know, very critical (say, take his stance on relieving student debt, something that happens to be, for some people, on the fringe of what is considered very liberal), they may not show you critiques of that, right? They might show you a critique of something that’s a little bit more centrist, a little bit more welcoming. And as a conservative, it might show you something that the Biden administration is pushing that’s also very bipartisan. Right. So something that’s overall very complimentary of Biden, but they’re not going to pick such a hot-button issue as abortion or something. Right. They’ll pick something that, you know, there is a lot of bipartisan support for, with the hope that you’ll gradually soften your opinions, and your negative opinions in particular, of the Biden administration.

Kirk: Wow. That’s really interesting. I’d be curious to see if some of the platforms implement that eventually. And I guess this brings us up to another paper that you’ve written. I think it’s a working paper: When Should Platforms Break Echo Chambers? This sounds like it’s related to what we were just talking about.

James: Yes, it absolutely is.

James: Yeah. So, I think a lot of the social media platforms all have a similar flavor, in that they’re, you know, a place to exchange opinions, and there are these algorithms that sort of rank the content and rank the feed of what you see on, say, Facebook. The setting for this paper is social media, but I think it’s easiest to think about the context of a platform like Reddit. So you have these different communities, and the algorithm plays maybe some role in this, but a lot of it is self-selection. So, you know, you go on Reddit and there are all these different communities, or subreddits, that usually revolve around some common topic. And these can be just interests, like indie music or Taylor Swift, or they could be political, right? So there are some subreddits like r/Conservative, which is a slightly right-leaning conservative subreddit. And then there’s r/politics, which sounds fairly neutral but typically, you know, has a slightly liberal bias. And so the question in this paper was thinking about, you know, you have these different communities, which kind of represent a social network, right? You have these people that are communicating and exchanging opinions, and they’re choosing where to exchange these opinions. Right. And so we model this as a game where agents have a choice of which rooms to enter, which rooms in which to basically receive opinions but also express their own. And in doing so, you end up with, you know, crazy fringe subreddits that can be very toxic and actually lead to offline action.

James: That is, you know, people storming the Capitol, or the Pizzagate subreddit that ended up causing someone to go to the pizza parlor to expose the Hillary Clinton pedophile ring that didn’t exist. So these things end up in real-world situations; they come to fruition. And so platforms themselves are concerned with this, because they don’t want their platform being used in a way that allows people that are very, very extremist to peddle the same ideas and then take those ideas and do something offline. The problem is that the solutions that have been put forward are kind of in a similar vein to what I was just discussing with post-truth, which is just break the echo chamber, right? So you go in and you say, okay, there’s this far-right subreddit or this far-left subreddit; what we’re going to do is we’re just going to ban it altogether. Say you can’t participate in the subreddit, go elsewhere. And what we show in this paper is that sometimes this can be effective, but at other times it can be counterproductive. And the reason it can be counterproductive is that these people don’t just leave the platform and disappear. Sometimes they do. Sometimes they start Parler or, you know, another third-party site, and that has its own issues, but I won’t talk about those. But sometimes they actually join these other, more moderate subreddits and spread their ideas there.

Media Clip: Sneaking their way along Oxford Street, many thousands of protesters, a march that spanned almost the entire length of the capital’s main shopping street. For us, this was a noisy affair. Real anger here at what they see as the great Covid conspiracy.

James: So a good example of this would be a subreddit, r/NoNewNormal, which was around at the time of the Covid pandemic, where there were ideas that, oh, this was a hoax, you shouldn’t get vaccinated, it’s going to put a microchip in you, all of this stuff. But you can think about it: the people in that community are never going to get vaccinated regardless of what community they participate in, right? They have a very particular belief. And if they just happen to exchange those beliefs with each other, there’s kind of no harm in this happening. Now, the problem is, if you come in and you say, oh, this is misinformation, so we’re going to shut down this subreddit, these people tend to migrate to other subreddits where there’s a bigger group of people that tend to have more moderate beliefs. Right. And the problem is that the loud voices that were previously in the group that was banned are now spreading those ideas to people that might have gotten vaccinated without that particular nudge or that influence. Right. And so certain policies that Reddit puts in place, such as quarantining, uh, funny enough, on the topic of Covid, but quarantining communities means that they just make them less accessible. So if I were to search r/NoNewNormal and it was quarantined, it doesn’t mean that I can’t actually physically go to that page and participate. But it wouldn’t show up in search results. It wouldn’t be algorithmically promoted. So people that are joining Reddit for the first time might not even be aware that it exists. Right. And that does a very nice job of keeping people that have these very extreme beliefs kind of siloed in their own community, where they’re not infecting others with misinformation. It strikes a nice balance. And so we find those interventions to often be quite effective.

Kirk: Wow. That’s interesting. Do you see that solution, or that intervention, being a feasible one for, say, Reddit to implement? Like, could they actually do that?

James: Yeah, they do, actually. So they did this with r/The_Donald, which was one subreddit that, uh, you know, it’s tough to say, but might have been part of the reason that we saw things like the January 6th riot. Originally the subreddit was quarantined, and then at some point they determined that it was too dangerous, or there was too much misinformation circulating on it, so they went ahead and banned it afterwards. So they do these quarantine policies on various different subreddits. But what’s sort of interesting is that what the paper says is that this is actually not the right course of action. It’s not that as the problem gets worse, you should escalate the level of intervention, right? That you should go from doing nothing to quarantining to banning, as they did with r/The_Donald. What we advocate for is actually that, in many cases, quarantining can be the best solution to a very, very serious problem, right? Just increasing the aggression does not necessarily lead to better outcomes. And it’s precisely because when they banned these people from r/The_Donald, what did they do? Oh, they migrated to r/Conservative, where the discussion used to be more balanced. And then what we showed is that the general sentiment of that community shifted very far to the right. And so that’s one reason for keeping people with these beliefs siloed, instead of having them expose their beliefs to the broader platform.
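As a rough, hypothetical illustration of that migration effect, here is a toy Python sketch, not the working paper’s model: when a fringe community is quarantined its members keep talking to each other, but when it is banned they join the moderate community, and a simple averaging step pulls that community’s sentiment toward them. The community sizes, sentiment scale, and influence rule are all invented.

```python
import random

random.seed(0)

# Sentiment on a -1 (strongly against) .. +1 (strongly for) scale; values invented.
moderate = [random.gauss(0.05, 0.2) for _ in range(900)]
fringe = [random.gauss(0.85, 0.1) for _ in range(100)]

def average(xs):
    return sum(xs) / len(xs)

def influence_round(opinions, weight=0.3):
    """One naive averaging step: everyone moves part-way toward the group mean."""
    mean = average(opinions)
    return [(1 - weight) * x + weight * mean for x in opinions]

# Quarantine: fringe users keep talking only to each other, moderates to each other.
quarantined_moderate = influence_round(moderate)

# Ban: fringe users migrate into the moderate community and join its conversation.
merged = influence_round(moderate + fringe)
banned_moderate = merged[:len(moderate)]  # track the original moderate users

print(f"moderate-community mean sentiment under quarantine: {average(quarantined_moderate):+.3f}")
print(f"moderate-community mean sentiment under a ban:      {average(banned_moderate):+.3f}")
```

The point is only directional: the banned community’s loudest members do not vanish, they show up in the moderate rooms.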

Kirk: Wow. I’m just thinking, you have your PhD from MIT, right? Electrical engineering and computer science. That’s right. Okay, which are very technical areas, right? And yet you’re applying that to this cultural phenomenon, which I think is really, really cool. Tell us a little bit, without getting into the weeds too far for those of us who are more poets than quants, how you go about studying this in sort of a mathematical way. Is there an easy way to explain that?

James: Yeah. No, that’s a great question, and one that is sort of the crux of all this research. Usually the way that I do it is I start with the question, right? So I found it interesting in this case that, you know, when I saw this sort of ramping up with r/The_Donald, where they quarantined and then banned it, I was thinking about, oh, what do these people do when they’re no longer able to participate in this forum? And so you start with an interesting hypothesis. It’s like, oh, I’m not sure that this intervention actually makes sense, to ban this community when it’s gotten so big, because now it’s just going to spill over into all these other groups and, you know, lead to the proliferation of more misinformation. And so we start with kind of an interesting idea. And then we often write down a model, right? And the idea is that hopefully this model will be backed by data. And when you write down the model, you think about, okay, what makes people make decisions on platforms like Reddit? How do they choose to join different groups? And then when you do these interventions like banning or quarantining, how does that change the decision that they make to participate in other groups? And what does that mean for the platform? Right.

James: So the platform is maybe worried about its reputation. It’s worried about, uh, you know, it doesn’t want January 6th-type events happening, but it also doesn’t want to unnecessarily intervene and prevent people from communicating. Right? Like, if you have a Scooby-Doo subreddit or something, there’s no reason it should be intervening there, just with people that are fans of Scooby-Doo. Right? So, you know, how does it balance these two objectives? And the key here, though, is that there’s this interplay between what the platform wants and what the users are doing. Right. And these incentives are somewhat aligned, but not perfectly, so the platform has slightly different objectives. So the question is, how can it do these interventions in a way that is most effective? And the results of the paper are all built on solving the mathematical model and then applying it to data and recommending real, you know, concrete interventions. So that’s sort of the typical step, and it’s very similar in the other papers as well. And then the technical part all comes in at the results part. Right. It’s like, once you’ve written down the mathematical model that captures the essence of the problem you want to solve, then you have to actually solve it. And that’s where the technical piece comes in.

Kirk: I see. Yeah, so let’s talk about one more paper. It’s called When Is Society Susceptible to Manipulation? This one looks like it’s also about social networks and how their structure affects and manipulates learning. Tell us about that one a little bit.

James: Yeah, happy to. So in this paper it’s a very similar setting, where you think about a social media platform, but now, instead of the AI being the issue, think about a bad actor, right? Someone who’s trying to spread disinformation. And then think about maybe the AI as being a good force. Right. Something that’s going to try to connect people in a way that the good information sort of outweighs the bad. Right? So you can think about it as, like, disinformation campaigns from Russian bots that are trying to convince you not to get vaccinated, for instance. Right. You know, in a situation where people are sort of isolated, then the task of the disinformation bot is very hard. It has to reach each individual person in order to convince them not to get vaccinated. Right. But the problem is that when you connect the network too much, then all the good information wins out, right? So if you have everyone connected to everyone, then they’re spreading public health campaigns, they’re mentioning, oh, the CDC recommends social distancing, and then it’s very difficult to influence people. So in this paper we characterize what kinds of social networks are most amenable to this type of manipulation, where the bots are most influential. And what we find is that it’s the ones that are sort of, kind of, connected that are the ones where it’s easiest for disinformation to spread.

James: And that’s exactly for the reason I just said: the bots can sort of use the connections to their advantage, but they’re not so well connected that the good information is constantly circulating as well. And so it’s this sort of perfect intermediate ground that lends itself to the most disinformation. And why it’s interesting from kind of an AI perspective is that, you know, we’ve always been pushing, or typically pushing, for more integration, right? People coming together. And so if you think about it from the AI perspective, like in the post-truth paper where I talk about sharing different perspectives, you can think of that as sort of connecting a social network together. This can be effective, but if you don’t connect it enough, it has this counterpoint that the disinformation also spreads more. Right. So if you start with a social network that’s not very well connected, and then you take some kind of intervention that connects it more, if you don’t fully connect it, if you only improve the connectivity by a little bit, then you are actually improving the ability of these disinformation bots to spread misinformation throughout the network. And so this paper says that, yes, full integration would be absolutely fantastic. But if you’re short of that benchmark, you actually may not be able to do one of these interventions effectively.

Kirk: Huh. So what is, like, full interconnection? What would that look like?

James: So you could think of it, I mean, you would maybe think of it as, like, fully exposing people to beliefs from all across the ideological spectrum, right? Or, you know, if there is a public health campaign, being able to reach people very quickly and doing so through the proper channels. Right. A very not-well-connected network would be sort of like, you know, the pre-social-media days, where you think about, oh, I talk to my neighbors or maybe my colleagues or something, but that’s a very isolated little neighborhood that I speak with. And so in that case, we would call the network very sparse. And so in today’s world, it seems like we’re getting more and more connected, and typically the academics say that this is a good thing, right? That we’re getting more connected, and so this is going to help, uh, disinformation fade and good, truthful things come out. And that’s partially true. What we show in the paper is that it’s partially true above a certain level. But when you start from an initial starting point where you’re not very well connected, if you increase connectivity, it actually could reverse and do the opposite.
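Here is a rough percolation-style sketch of that connectivity trade-off, again a hypothetical toy rather than the paper’s actual characterization: disinformation spreads along links among users who have not seen the authoritative information, and users directly connected to an authoritative account are treated as immune. The network model and every parameter below are invented for illustration.

```python
import random
from collections import deque

def misled_fraction(n, avg_degree, anchor_share=0.03, seed_share=0.02, rng=None):
    """Fraction of users who end up believing the false claim in one toy run."""
    rng = rng or random.Random(0)
    # Random graph with roughly the requested average degree.
    p = avg_degree / (n - 1)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].append(j)
                neighbors[j].append(i)

    # A few authoritative accounts; anyone directly connected to one is "immune"
    # because they have already seen the accurate information.
    anchors = set(rng.sample(range(n), int(anchor_share * n)))
    immune = set(anchors)
    for a in anchors:
        immune.update(neighbors[a])

    # Disinformation is seeded at a few bot-targeted users and spreads by contact
    # through users who are not immune.
    seeds = [v for v in rng.sample(range(n), int(seed_share * n)) if v not in immune]
    misled, queue = set(seeds), deque(seeds)
    while queue:
        v = queue.popleft()
        for w in neighbors[v]:
            if w not in immune and w not in misled:
                misled.add(w)
                queue.append(w)
    return len(misled) / n

if __name__ == "__main__":
    rng = random.Random(42)
    for k in (0.5, 1, 2, 5, 10, 25, 50):
        share = misled_fraction(n=1000, avg_degree=k, rng=rng)
        print(f"average degree {k:>4}: roughly {share:.0%} of users end up misled")
```

Run with these made-up numbers, the misled share stays small when the network is sparse, peaks at intermediate connectivity, and falls again as more and more users sit one hop from an authoritative source, which is the qualitative pattern described above.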

Kirk: Was TikTok around when you were writing this paper? 

James: It was not, but that’s a good example. 

Kirk: Okay. Because it seems like that’s a very powerful social network today. Right?

James: Yeah, exactly. So I think it’s exactly the issue that TikTok is struggling with: you know, they have such a large user base that in some ways it can be used for good, but then, you know, the flip side of that coin is that it can be used for bad. So while I’m glossing over it here, we give a very technical sort of characterization of all the different types of networks. It’s more than just connected and not connected. That’s why the title of the paper is When Is Society Susceptible to Manipulation: if I give you a social network, can we answer that question? Is this society more susceptible than this one? And so we provide some conditions on that in the paper. But that’s more on the technical side of things.

Kirk: Yeah, now, presumably, would TikTok be able to kind of turn some knobs and dials using AI to optimize their platform to connect more people?

James: Yeah. So that’s sort of the hope. When we think about these interventions that affect the network structure, we’re really thinking about what the AI is doing, right? We’re thinking about, oh, you’re promoting content. You know, if you think about always promoting the same influencer, right, that would be not a very well-connected network, because you’re essentially accentuating one link at the cost of attenuating a bunch of others. And so, exactly right, you could think of increasing connectivity as changing the algorithm in a way that’s exposing you to different people, different sources, different types of content. And, you know, some cost of that is that you’re maybe getting videos that you like less, or, you know, your overall level of engagement is going down. But I think there’s a lot of positives as well. Right. And we’ve shown through a couple of these papers that there are benefits to exposure across different types of areas.

Media Clip: The major headline tonight involving media power and politics: tonight the world’s richest man has just bought Twitter for $44 billion. What this means tonight, how Twitter could now change, who’s happy, who’s concerned, and what about those who’ve been banned, including former President Trump.

Kirk: What does your research on this tell you about the way Elon Musk has been running X, if anything? Do you have an opinion on that?

James: Yeah. No, it’s interesting. So it’s funny that you mention that, because, you know, as someone that works on this, in some ways I would love to tell you that he’s doing something crazy. But a lot of our examples and our results in these papers point to the fact that we may be in a situation where there’s very little that you can do about the misinformation problem other than allow people to figure it out. Right. So we have a paper that’s built on the post-truth paper, where we look at, you know, if you’re running a social media platform and you do something like censorship or content moderation, how would you do this in a way that reduces the spread of misinformation and improves what people actually believe? And what we find actually supports the Elon Musk approach of kind of not doing censorship as much as you might want to, right? Even if you’re a benevolent actor as the platform and you want to remove things that you know to be objectively false, by doing this you create all sorts of beliefs in the population about, oh, the platform is not allowing me to see X, Y, and Z.

James: And so, because they’re censoring this, they must be censoring that. And then it actually shuts down all these different learning channels that promote learning in the traditional social learning literature, like I mentioned at the beginning with, you know, the cocktail party where people are exchanging ideas. All of that breaks down if you don’t have a sort of Elon Musk-style platform where everything is free and open. So it’s in some ways kind of a, uh, I don’t know, a dark, dark story, because, you know, it’s like, okay, we can’t actually control the misinformation problem by doing very basic things, like just going in and removing it. And the reason is that people don’t necessarily trust what they call third-party fact checkers, right? They’re supposed to be independent, but a lot of right-leaning people think they have a liberal bias. And so because of that, uh, you know, the only thing you can really do is sort of leave it open. And so I’m not sure whether that’s, like, the complete answer. I’m not sure that Elon Musk is sort of the gold standard for what should be happening on social media, but it’s not obvious that what he’s doing is suboptimal.

Kirk: Yeah. I wonder if he has an intuition about that himself. Well, this, I think, leads into your teaching, which is, you know, heavily revolving around AI right now and optimization. And I know you’re going to start teaching your Research-to-Practice seminar very soon. It’s called AI-Driven Analytics and Society. Tell us about that course.

James: Yeah. No, this is a fun class. So it’s based on a course at MIT that my advisor developed called Data-Driven Decision Making in Society. But the whole idea of the class is to think about AI and data-driven analytics from the perspective of what can go wrong. Right. So it’s not supposed to be a doomsday kind of story, but it’s supposed to be, you know, when you’re thinking about being a manager, what should you think about when people are using AI, to make sure you aren’t falling into the classic pitfalls that can befall you? So we start by thinking about things like fairness and bias in machine learning. Right? If you throw machine learning at a problem, you should make sure the training data is representative, make sure that you’re training the model on a diversity of examples, so that it’s not unintentionally biased against a certain group. And then from there we think about, you know, big data and privacy, of course, and ways to extract as much as you can from big data while also making sure that you respect important concerns like privacy. And then we go on to discuss transparency in AI. So that’s one of the big, big topics right now: when you think about traditional analytics tools, uh, you know, they’re much more transparent.

James: When you run a linear regression, you can see exactly what things it’s putting emphasis on, whereas with machine learning it’s much more difficult. Everything is a lot more opaque. You treat it like a black box. And so, you know, kind of understanding how to get more transparency from AI. And specifically, we talk about platform algorithms and about when transparency is good for platforms versus when it could be good to actually withhold some information about how the algorithm works. And then AI incentives. So we think about these platforms having their own objective. Right. Some objective. Maybe it’s reputation. Maybe it’s a financial objective like user engagement. And how they use AI for these in a way that may not necessarily be to the benefit of the user, and why we might push for things like, you know, online safety bills or regulation of AI that makes it more user driven versus company driven. And then, of course, as part of the class, we have to talk about generative AI and large language models, right? So thinking about how those can be biased. You know, I think people rely on them, and they speak with great confidence, which is amazing, but they’re often, you know, wrong. And so being aware of some of the technical details of how it actually works.

James: So you can be kind of cognizant of when it can be wrong, and then also understanding how it’s going to change the nature of work and what we see online. I mean, I think that’s the biggest impact: you know, which jobs is it going to replace versus which jobs is it going to really be complementary to? I just think about things like GitHub Copilot that have made me, you know, a much stronger coder, as someone who doesn’t code all the time and works mostly on sort of mathematical research. It’s made me a phenomenal programmer because it’s able to help me so much with small syntactic, uh, issues and things like that. But also, you know, how are AI and large language models going to change what’s on the internet? Right. The types of content. You know, if you think about disinformation, it’s so much easier to generate a fake news article or story and then post it and try to get it to go viral on social media. So it kind of changes the whole landscape of how we think about interventions when it’s just that much more accessible. And so, yeah, we’ll touch on that a bit too in that seminar. Yeah.
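On the transparency point raised above, here is a small hypothetical sketch of the contrast between a transparent model and a black box, using scikit-learn on synthetic data (this is not course material): a linear regression explains itself through its coefficients, while a boosted-tree model has to be probed after the fact, for example with permutation importance.

```python
# Illustrative sketch on synthetic data: transparent vs. opaque models.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

# Synthetic data: 5 features, only 3 of which actually drive the outcome.
X, y = make_regression(n_samples=500, n_features=5, n_informative=3,
                       noise=10.0, random_state=0)

# Transparent model: the learned coefficients are the explanation.
linear = LinearRegression().fit(X, y)
print("linear coefficients:", [round(c, 1) for c in linear.coef_])

# Opaque model: inspect it indirectly by shuffling one feature at a time
# and measuring how much the score drops (permutation importance).
boosted = GradientBoostingRegressor(random_state=0).fit(X, y)
probe = permutation_importance(boosted, X, y, n_repeats=10, random_state=0)
print("permutation importance:", [round(v, 2) for v in probe.importances_mean])
```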

Kirk: Wow. That sounds like a really interesting course. Uh, it must be challenging to teach a course like this when you know that the technology is changing so quickly. Oh, yeah. How do you deal with that? Yeah.

James: No, I was telling this to some of the students: this class couldn’t really have been taught five years ago. And I think, you know, if it’s still offered in five years, it won’t look anything like what it looks like today. And it feels like it’s moving so quick that I shouldn’t even publish the syllabus until halfway through the class, right? I’m sort of joking there, but, yeah, it’s really tough. And I hope the students know, too, that, you know, it’s very easy to identify these problems, these things where AI can go wrong, and it’s much tougher to identify solutions. And part of the thing I struggle with is that, relative to other classes I’ve taught, where I teach you the problem and then I also teach you the solution, that’s not really what will happen in the seminar. It’s much more of an open discussion about how we can think about solutions. And also, in many cases, there is no obvious solution when I talk about social media interventions. Right? As you could probably tell, I’ve written four or five papers on this and I still don’t have the solution. Right? It’s often about trade-offs, right? It’s like, in certain situations maybe you favor one trade-off or the other. And that’s a lot of what we’ll talk about in the class as well. And I’m hoping, you know, to hear from the students and hear their perspective, because I’ve talked with a lot of academics, mostly at MIT, before coming here, and we all kind of have homogenized our opinion into one idea. But really, I’m curious to hear what the students think.

Kirk: I’ve talked with Dean Matt about RTPs in a previous discussion, but this is the first time that listeners of the podcast will hear about an RTP kind of in context. Tell us a bit about your approach in this, and kind of what you have the students reading, and what you’re hoping to teach them through having them read and analyze research.

James: Yeah, great question. So, you know, I tried to include a mix of papers that are actually using AI-driven analytics and a mix of ones that are talking about sort of the societal issues around AI. Right? And the reason why I don’t just focus on the latter is because I want students to come in and think about these from a critical perspective, without the authors necessarily telling them, like, this is what you should be looking out for, right? So reading a paper where they’re using machine learning to forecast sales, say concert sales, right, and looking at the machine learning tools that they’re using and going into it with a critical eye, and thinking about, you know, robustness, fairness, uh, transparency, these different issues around AI, and really critiquing published papers and thinking about how they can be improved and made, you know, more robust when we identify these various different issues. But then of course, related to my research, there are several papers that explicitly discuss, uh, you know, societal issues around AI. And I think that that’s helpful because, you know, they can see that academics are also navigating this at the same time as the students.

Right? None of us are 100% certain that we know what’s coming down the pike or how to handle it. Uh, and hopefully that’ll give them some confidence in being able to brainstorm ideas like the rest of us, because I think a lot of the time, you know, they might be worried that, oh, like, academics have got this all figured out, and it’s not the case. We’re all still trying to, you know, understand exactly how things are going to change. You know, three years ago, it wasn’t obvious that gen AI and large language models were going to be the next big revolution. Right? And now we’re witnessing this and we’re not sure how to regulate it. We’re not sure, you know, what the societal implications are in terms of creative contributions, artwork, all these different facets of society. And so I think by reading these papers, the students will hopefully gain some confidence in, you know, their ability to offer their opinion and think harder about these topics.

Kirk: Great. Let’s circle back to research, just as we wrap up here. We talked about some of the papers that you’ve written. What are you working on now that you’re excited about? And what are you kind of looking forward to on the research front? 

James: Great question. So I have a couple of papers that are really exciting, but I don’t want to spend too much time, so I’ll talk about one in particular that I am about finished with, that I think is pretty exciting, that I’ll also teach a bit in the RTP, which is thinking about ghosting on online dating platforms. So this has been kind of a big issue, and a lot of platforms have tried to fix it in various ways. But let me start over in case people don’t know what ghosting is. It’s when you’re messaging someone on the dating platform and you’re not getting a response, and you’re sort of unsure if the person is busy or if they’re just kind of blowing you off. And so after a certain amount of time, you become sufficiently convinced: okay, this person can’t possibly be busy for three weeks and traveling to Europe and doing all these things, right? So they’re probably blowing me off. But this creates a real inefficiency in the market, because you could be dating other people or getting serious with someone else, you know, being more active on the platform, if you knew this person was rejecting you. But the problem is that a lot of people just feel uncomfortable saying so. And so there’s this huge inefficiency in terms of mismatch and not communicating with each other. And so what we do in this paper is very similar to the others: we build a mathematical model of how people do this, with the assumption that when they don’t like the other person, they don’t always explicitly communicate that.

And we highlight this inefficiency. But importantly, we think about, you know, what platforms are actually doing in terms of trying to reduce ghosting. So Match.com, what they do is they have, like, a time limit: after 15 days, the match disappears. So you better, you know, figure it out before then. Other platforms do other things. Tame only lets you look at one match at a time, so you can’t explore new profiles while you’re matched with someone. You have to explicitly reject them before you can continue to surf. So these types of interventions are meant to reduce ghosting. But the problem is that they simultaneously reduce exploration. So by doing something like the Tame policy, where you’re only allowed to look at one profile, you know, the love of your life could be out there, but you’re not totally sure about this current person, and so there’s some inefficiency from not actually allowing you to continue to surf profiles. So in that paper, what we do is we come up with what we think to be, well, not globally optimal, but something that’s better than no communication at all. And it’s not something that a lot of platforms are currently implementing, but we believe it could effectively reduce ghosting and improve the matching times between partners so that they can be happily ever after.
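To illustrate the deadline trade-off in the simplest possible terms, here is a hypothetical Monte Carlo sketch, not the paper’s model: an expiry caps the time wasted waiting on ghosters, but a short one also cuts off genuine matches whose reply would have arrived later. The ghosting rate, reply-time distribution, and patience window are all made up.

```python
import random

def simulate(expiry=None, n_matches=10_000, ghost_rate=0.4, patience=21, rng=None):
    """Return (avg days waited per match, share of genuine matches lost to expiry)."""
    rng = rng or random.Random(1)
    waited, genuine, lost = 0.0, 0, 0
    for _ in range(n_matches):
        if rng.random() < ghost_rate:
            reply = None                      # ghoster never replies
            wait = patience                   # user waits until personally convinced
        else:
            genuine += 1
            reply = rng.expovariate(1 / 3.0)  # genuine reply after ~3 days on average
            wait = reply
        if expiry is not None:
            if reply is not None and reply > expiry:
                lost += 1                     # reply would have come after the deadline
            wait = min(wait, expiry)          # platform closes the match at the deadline
        waited += wait
    return waited / n_matches, (lost / genuine if genuine else 0.0)

if __name__ == "__main__":
    for expiry in (None, 15, 5):
        avg_wait, lost_share = simulate(expiry=expiry)
        label = "no expiry" if expiry is None else f"{expiry:>2}-day expiry"
        print(f"{label}: {avg_wait:4.1f} days waited per match, "
              f"{lost_share:.1%} of genuine matches lost")
```

With these invented numbers, a shorter expiry cuts the average wait sharply but loses a growing share of genuine matches, which is the exploration cost described above.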

Kirk: Wow. Well, from, uh, misinformation to helping people find their true love. You’re really covering the gamut there. Well, thank you very much for your time, James. It’s been a pleasure to speak with you. Thank you for explaining your research and some of your teaching, and good luck with the RTP.

James: Awesome. Thank you.

Kirk: I’d like to thank my guest, James Siderius. You’ve been listening to Tuck Knowledge in Practice, a podcast from the Tuck School of Business at Dartmouth. Please like and subscribe to the show. And if you enjoyed it, then please write a review, as it helps people find the show. This show was recorded by me, Kirk Kardashian. It was produced and sound designed by Tom Whalley. See you next time.