Tuck Professor Rob Shumsky has created an AI-generated chatbot to help answer his students’ questions.
Professor Rob Shumsky is certainly no Dr. Frankenstein. That said, Shumsky—faculty codirector of the Master of Health Care Delivery Science (MHCDS) program and professor of operations management—has indeed created his own teaching assistant.
Meet Robota (Rob’s Operations teaching assistant), the chatbot Shumsky built to answer questions from participants in his MHCDS Operations Management course. Shumsky developed Robota with a team: T’23 Penny Chen, a fellow at Tuck’s Center for Health Care; Professor Alva Taylor, faculty director of the Center for Digital Strategies; and the center’s executive director, Patrick Wheeler. First, Shumsky created a knowledge base in Cody, an online chatbot service (cody.ai), by loading in his course materials: readings, lecture slides, and transcripts of videos. Course participants could then type questions to Robota, and Cody would call on GPT-4 to generate an answer based on the course materials Shumsky had provided.
Shumsky explains the basic process behind the bot: Robota reads the question and transforms it into a group of numbers, an embedding, that represents a location in a “linguistic space.” Robota then searches the provided knowledge base for related text, that is, passages located nearby in that linguistic space. Robota (via cody.ai) then combines those retrieved passages with the original question and sends the whole package to ChatGPT, the “engine” that generates a plain-English answer, which Robota returns to the user.
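The retrieval step Shumsky describes can be sketched in a few lines of Python. This is a toy illustration only: the bag-of-words vectors below stand in for the learned neural embeddings a service like cody.ai would actually use, the course snippets are hypothetical, and the final call to the language model is represented by the assembled prompt rather than a real API request.

```python
import math
import re
from collections import Counter

def embed(text):
    """Map text to a sparse word-count vector.
    Toy stand-in for the neural embedding ("linguistic space")
    a real chatbot service would use."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse vectors: passages that sit
    near the question in the linguistic space score close to 1."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, knowledge_base, k=1):
    """Return the k passages most similar to the question."""
    q = embed(question)
    ranked = sorted(knowledge_base,
                    key=lambda p: cosine(q, embed(p)),
                    reverse=True)
    return ranked[:k]

def build_prompt(question, passages):
    """Combine the retrieved course text with the original question:
    this package is what would be sent on to the language model."""
    context = "\n".join(passages)
    return f"Answer using this course material:\n{context}\n\nQuestion: {question}"

# Hypothetical course snippets, not Shumsky's actual materials.
kb = [
    "Little's Law: average inventory equals arrival rate times average wait time.",
    "A bottleneck is the resource with the lowest capacity in a process.",
]

question = "What is Little's Law?"
print(build_prompt(question, retrieve(question, kb)))
```

The key design point is that the language model never searches the course materials itself; the retrieval layer selects the relevant passages first, which is what keeps the bot’s answers anchored to what the instructor actually provided.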
Shumsky, codirector of the MHCDS program since 2015 and at Tuck since 2005, noted that there were three reasons why he wanted to experiment with generative artificial intelligence (GenAI) in his course: “The first is that I’m curious about the technology and its potential applications to education. The second is that I would love for my program participants to have access to a knowledgeable teaching assistant 24/7, since I’m not able to answer emails at all hours. And the third reason is specific to my course in the MHCDS program, which was developed to educate participants who are currently working in health care. Some are frontline providers, some are administrators, some are legislators, and all of them are eventually going to be affected by this technology. The number of GenAI health-care applications being rolled out right now is overwhelming, and so I wanted them to have some interaction with these tools in this context so that they could start to understand what it involves.”
About half of the course’s students tried out Robota, and of those, 88 percent said it helped their learning.
While Shumsky considered Robota a successful experiment, and one he learned a lot from, he noted some limitations with the current technology. One in particular? The chatbot will occasionally hallucinate. “That is, believe it or not, the technical term for making stuff up,” Shumsky laughs. Hallucinations are plausible-sounding responses from GenAI that are not factual. Because Shumsky had designed Robota so that he could monitor the dialogue between the chatbot and his course participants, he was able to intervene when he saw something erratic in a response from Robota.
Shumsky noted that the least successful interactions between Robota and participants were when a participant didn’t know enough to be able to ask a specific question. “As a human being working with students, you will often have a dialogue in which you try to determine where a student’s knowledge gaps are. The chatbot is not yet trained to do that—it was not able to interrogate a student to find out what an imprecise question really meant. So participants would end up asking the same questions in multiple ways and getting answers that didn’t really provide the information that would help with the real problem.”
Despite the math and the quantitative rigor in the research behind the course materials that inform Robota … is the chatbot fun? Per Shumsky, “Yes, it’s great, a lot of fun!” But he cautions, “This is early days for this technology. There are many limitations to it right now, and you have to be very careful not to rely on it too much. It’s definitely not ready to replace humans. At least not yet.”