Welcome to Season 2, Episode 2 of Teach & Learn: A podcast for curious educators, brought to you by D2L. Hosted by Dr. Cristi Ford, VP of Academic Affairs at D2L, the show features candid conversations with some of the sharpest minds in the K-20 education space. We discuss trending educational topics and teaching strategies, and delve into the issues plaguing our schools and higher education institutions today.
Episode Description
In today’s show, we’re getting into the science behind learning. We’re thrilled to welcome the winners of the XPRIZE Digital Learning Challenge, a global competition that finds the most effective learning tools and processes that improve learning outcomes.
To discuss this topic further, we welcomed Norman Bier, Director of the Open Learning Initiative at Carnegie Mellon University; John Stamper, Associate Professor at the Human Computer Interaction Institute at Carnegie Mellon University; and Steven Moore, a PhD student at the Human Computer Interaction Institute. Our guests and Dr. Ford chatted about:
- The science behind learning and the importance of running ongoing experiments in education.
- Developing tools that allow educators to teach more effectively.
- The benefits and challenges of using AI tools, such as ChatGPT, in educational settings.
- The importance of collaboration and community engagement in advancing learning research and improving educational practices.
- The Open Learning Initiative and how educators can get involved.
Show Notes
01:07: An introduction to our guests.
03:00: John Stamper and Norman Bier explain the XPRIZE Challenge.
04:59: The Open Learning Initiative (OLI) at Carnegie Mellon University.
07:57: The challenges with small sample sizes when it comes to classroom research and how our guests are overcoming that.
11:54: How adaptive experimentation and causal modeling can fit in with more traditional double-blind controlled experiments.
14:33: Steven Moore discusses the tools educators can use to conduct experiments that determine the most effective teaching methods.
19:48: Running classroom experiments and building infrastructure at scale.
25:17: Our guests share their individual thoughts on AI tools and their use in education.
37:26: How educators and administrators can connect with the tools the OLI team is developing.
42:25: Final thoughts.
Full Transcript
Cristi Ford (00:00):
Welcome to Teach and Learn, a podcast for curious educators, brought to you by D2L. I’m your host, Dr. Cristi Ford, VP of Academic Affairs at D2L. Every two weeks I get candid with some of the sharpest minds in the K-20 space. We break down trending educational topics, discuss teaching strategies, and have frank conversations about the issues plaguing our schools and higher education institutions today. Whether it’s ed tech, personalized learning, virtual classrooms, or diversity inclusion, we’re going to cover it all. Sharpen your pencils. Class is about to begin.
So, listeners, welcome back to another episode of Teach and Learn. I’ve always believed there is an art and a science to teaching, but today we are leaning into the science side of things and talking to a group from Carnegie Mellon about the science behind learning and running experiments in education. On that note, I’m super excited to have the winners of the XPRIZE Digital Learning Challenge, a competition to find and improve the most effective learning tools, joining us on this episode today.
Before we jump in, I want to take a moment just to introduce our three guests. Yes, colleagues. We have three guests on the episode, and this is a first for Teach and Learn the podcast, so you’re in for a real treat. I’m going to start with a colleague and friend, Norman Bier. Norman is Director of the Open Learning Initiative, or OLI, at Carnegie Mellon University. He has spent his career at the intersection of learning and technology, really working to expand access to and improve the quality of education. Thanks for joining us today, Norm.
Norman Bier (01:39):
So good to see you again. Thanks for having us on.
Cristi Ford (01:41):
Absolutely. Next, John, let’s move over to you, John Stamper. John is an Associate Professor at the Human Computer Interaction Institute at Carnegie Mellon University. I just promoted you, John. He is also the Technical Director of the Pittsburgh Science of Learning Center DataShop, and his primary areas of research include educational data mining and intelligent tutoring systems. Really glad to have you as well, John, joining us today.
John Stamper (02:09):
Thank you, Cristi.
Cristi Ford (02:11):
And finally, we have Steven Moore, a PhD student at the Human Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, who is being advised by John. Steven is passionate about pedagogy, having taught and redesigned multiple courses at Carnegie Mellon University and elsewhere. So a big welcome to all of you. Before I get started, I just want to say congratulations on such an exciting accomplishment.
John Stamper (02:39):
Thank you.
Norman Bier (02:40):
Thanks. Yeah, it’s been a busy few months and this definitely lent to some of the excitement over the past year.
Cristi Ford (02:47):
I mean, this is really phenomenal. I guess I want to just jump right in and ask you to start by explaining to the audience what is the XPRIZE challenge and how the three of you got connected to go down this journey.
John Stamper (03:00):
I can take this. The XPRIZE challenge was about doing experiments in education, and it was sponsored by IES, part of the Department of Education. The goal was to be able to rapidly run and then rapidly replicate large experiments in education. We were actually brought in through a group led by Joseph J. Williams at the University of Toronto, who has a set of algorithms to run adaptive experimentation. What we were able to do is take his work in adaptive experimentation, link it into our platform, the Open Learning Initiative, and be able to scale experiments quickly and then repeat those experiments multiple times.
Cristi Ford (03:57):
That’s fantastic. It’s a great shout out to the work at Toronto. Norm, did you want to jump in here on that? Anything to add there?
Norman Bier (04:03):
I think when Joseph first reached out to us about the project, it was just such a natural fit. When we think about the work that we’re really passionate about, the work that we care about at Carnegie Mellon, a huge piece of this is being able to take a deeper dive into understanding how human beings learn to really continue to build an empirical knowledge base of our understanding of human learning. And so this idea that we were going to be able to extend our platform and really get it out into the world in a way that can effectively invite many more educators to this work of doing those kinds of experiments, of engaging in that kind of empirical observation and contribution into learning research was super exciting and great stuff.
Cristi Ford (04:47):
I want to take a moment. As I’m listening, I am clear about what OLI is, but I think for some of our listeners who may not be clear about OLI, maybe we should just unpack that a bit before we keep moving forward.
Norman Bier (04:59):
Sure. One of my favorite things to talk about. OLI is the Open Learning Initiative, a now 22-year-old research and production project at Carnegie Mellon. OLI sits in this larger tradition that I’ve talked about at CMU of really trying to deeply connect the work that we see in the learning sciences, theories of how humans learn, with the kinds of innovations that we deploy. We want to deploy those innovations in ways that are instrumented, that make the learning observable, so that we can iteratively improve those innovations, but also use them to test new models of learning. We talk about this as a learning engineering approach.
In a lot of ways, OLI is built to be an exemplar of that approach. We build online courses, active and interactive courseware that’s intended to support independent learners and demonstrably enact learning, but that can also wrap around tools that help educators who are adapting and using these courses in their own instruction, often replacing textbooks. On the one hand, we see these as really rich textbook replacements that better inform the learner and provide data-driven feedback loops to instructors, but this also provides a massive experimentation space, so that scientists like John and Steven are able to ask new questions without needing to assemble a large panel of faculty who are going to try to deploy their intervention.
During COVID, we saw a real swell in use of OLI courses, and so at this point we see more than 40 formal courses, plus probably another 30 to 80 smaller courses that folks have built for themselves, being used at hundreds of institutions across the US. There are 70,000 to 80,000 academic enrollments each year, and so this really provides a rich community of educators and students who are interested in helping to advance that work in learning science.
Cristi Ford (06:53):
I just want to give a big shout-out to the work that you collectively have done. I remember, Norm, meeting you back in 2013/2014 and working with you in connection with the OLI. One of the things that I always use as a best practice is the work that you’re doing; we talk a lot about innovation and being able to scale and sustain something, and really seeing the work that you all have done in the Open Learning Initiative is just such a pinnacle of what many institutions are trying to do. It has been really nice to watch your journey. I want to move forward and talk a little bit more about this challenge. I want to fast-forward and understand, John, you talked about how the team created the Adaptive Experimentation Accelerator. Can you talk to us a little bit about what problem you set out to solve and just give us a little context there?
John Stamper (07:47):
One of the biggest issues that I see in educational research is that often ideas and theories get presented from really small classroom-size studies, where you’re talking about maybe a handful of classrooms driving the results, and then these papers get promoted and the theory kind of builds from that. Sometimes that’s good, but sometimes that’s not good. And so, one of the things I was really excited about with the work we were doing is that we are able to run really large experiments with many, many conditions. The artificial intelligence techniques that we use can help line up those conditions and move the number of participants around such that you get results with fewer participants in less time. This will really allow us to run a lot more experiments and validate things a lot better and a lot quicker than has ever been done in education before.
Cristi Ford (08:59):
Yeah, I think you’ve hit the nail on the head in terms of efficacy research. We have a hard time being able to talk about things that are generalizable, and with smaller sample sizes it’s really challenging in education research, even with design-based research or other methodologies, to really get at the heart of this. It seems like you all have cracked the nut a bit in terms of really being able to have these larger data sets in different areas.
John Stamper (09:32):
Yeah, I think again, a lot of my work has been on how we collect and curate data sets from educational data, and it’s actually been a great pleasure to see that there are researchers now who are able to use these data sets that we collect, do additional work and secondary analysis, and make more findings that you can then go back to. It’s not always the best to run a secondary analysis on a dataset that was collected before, but it can give you insights that you can go back and test in an actual experiment in a live, in vivo setting.
Norman Bier (10:13):
Those opportunities to shorten the research circuit are actually so important for us. I think that one of the real challenges that we see in the learning sciences is the timeline for developing a hypothesis, tying it to an intervention, getting that intervention into the classroom, desperately trying to remove confounds, failing to remove confounds and running our experiment anyway, and then finally analyzing it. When you look at the timelines of some of the interventions in the What Works Clearinghouse, you’re sometimes looking at years to get results. And often, results that end up being confused by the confounds anyway.
The kind of secondary analysis that John talks about gives us a chance to tighten that cycle. I think that these adaptive experiments also give us a chance to start to close that loop and make some meaningful progress even faster. There’s really a chance to rapid-cycle some of this work, so that in comparison to a traditional AB study, we’re able to make changes as we go in ways that should be directly improving the experience for the participants, which is pretty important, but which also help us to see what seems to be working a little faster.
Cristi Ford (11:22):
Yeah, no, I actually have a provocation. Do you believe this is the way in which efficacy research should move? I mean, to your point, if a study is run, the time it takes to get the study in a classroom, not to mention publishing that work is sometimes years. And so is this the way that we need to start thinking about efficacy research and the kinds of opportunities we have in educational research going forward?
Norman Bier (11:54):
I’m not willing to completely throw out the scientific method yet. It’s gotten us pretty far, but I think that this creates another set of tools for us, and I think that we really should be treating adaptive experimentation and causal modeling directly alongside the gold standard of the double-blind controlled experiment. I think that we’re seeing really good results from it, and I think that the space that we’re trying to understand is so big and so complex that we need those additional sets of tools, and we need to acknowledge that they’re useful, in many cases as useful as a big experimental approach. It might be a better question for Steven, because ideally Steven will still be doing this research after I’m retired on a beach.
Cristi Ford (12:37):
Fair enough. Steven, what do you think about this?
Steven Moore (12:40):
I definitely agree with Norm on that. I think it’s a good complement to what we currently have in terms of the scientific method and the difficult studies. I think it’s really good that you can get these different cohorts. In the fall semester, let’s test it in a science course, a second language course, a psychology course. I think there is also a ton of value in doing these three-year-long studies. It’s not just looking at the diversity of the cohort in terms of domains, but even just the longevity of the intervention you might be doing. I think they’re nice complements to each other.
Cristi Ford (13:12):
Fair enough, fair enough. I think as I look at the concerns or just the challenges with trying to get really good research out there, it is really sometimes hard to be patient when we’re looking at the evolution of learning and the evolution of just classrooms and students and the ways in which we support students. Sometimes I get a little overzealous in terms of how do we get there faster.
Norman Bier (13:38):
No, our science isn’t keeping up with our technology. We’re innovating faster than we’re able to test. We’re innovating faster than many of our evidence-based practices seem to catch up with. Not all innovations are good ones. I feel you on that impatience to try to say, look, are we headed down the right path? Do we keep doing this thing, or should we pull back and try something else?
Cristi Ford (14:00):
Yeah. I want to talk a little bit more about the project, and maybe, Steven, I’ll throw this one to you. One of the things, as I am understanding the work that you’re doing, is that you ended up developing a tool that allowed educators to really, as John talked about, conduct experiments in classrooms to determine which were the most effective teaching methods. When you think about this work and your project, how does this work? What areas and pedagogical methods did you test? Can you just share a little bit more about how this process went?
Steven Moore (14:33):
Yeah, for sure. I think I like to give one of the main concrete examples for what we did for the competition, and that is one intervention was testing motivational prompts for students. In these different OLI courses, whether it be chemistry, physics, Spanish II, we wanted to see which motivational prompts might engage students the most with these optional activities. In an OLI course, you might have a few multiple choice questions or a drag and drop activity, and these are often optional, just done for low stakes to give students feedback as a good learning opportunity because you learn more by doing. But you want to encourage these students that are maybe on their phones or just doing it quickly to actually engage with these activities.
We had a series of five different motivational prompts that were used in previous studies and supported by different motivational theories of what might engage students in these activities.
One just told them, hey, you learn more by doing as opposed to passively reading and listening, so engage with these activities. Others maybe played to more of a younger, hip crowd and might link a meme or just try to be like, “Hey, you should just do this,” trying to be trendy in a TikTok way.
Then we deployed these in a bunch of different courses and had this kind of adaptive algorithm in the backend. At first, with five of these different motivational prompts, 20% of the students would see a given prompt A, B, C, D or E. But then as we got data on this, it would show, oh, prompts C and D work the most effectively. When students get these motivational prompts, they tend to have higher participation rates on the activities that follow.
This adaptive algorithm would then start favoring the motivational prompts that have been shown to actually encourage more participation. And so instead of each prompt being shown 20% of the time, it would start showing the higher-performing motivational prompts more, like 40 to 50% of the time, until we got enough data to say, “Hey, of these five prompts across these different domains, with all these different students at these different institutions, we can say Prompt C is the most successful at engaging students in these optional activities.”
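For readers who want a concrete picture of the kind of allocation strategy Steven describes, here is a minimal Python sketch of one common approach to adaptive assignment, Thompson sampling over Bernoulli outcomes. The prompt labels, engagement rates, and update rule below are hypothetical illustrations, not the team’s actual Adaptive Experimentation Accelerator code.

```python
import numpy as np

# Hypothetical prompts A-E. alpha/beta track observed successes (the student
# engaged with the follow-up activity) and failures for each prompt.
prompts = ["A", "B", "C", "D", "E"]
alpha = np.ones(len(prompts))  # prior successes + 1
beta = np.ones(len(prompts))   # prior failures + 1

def choose_prompt(rng):
    """Thompson sampling: draw a plausible engagement rate for each prompt
    from its Beta posterior and show the prompt with the highest draw.
    Early on the draws are noisy, so every prompt is shown roughly equally;
    as evidence accumulates, better prompts are shown more often."""
    draws = rng.beta(alpha, beta)
    return int(np.argmax(draws))

def record_outcome(i, engaged):
    """Update the chosen prompt's posterior with the observed outcome."""
    if engaged:
        alpha[i] += 1
    else:
        beta[i] += 1

# Simulated deployment with purely illustrative "true" engagement rates.
rng = np.random.default_rng(0)
true_rates = [0.20, 0.25, 0.40, 0.35, 0.22]
for _ in range(2000):
    i = choose_prompt(rng)
    record_outcome(i, rng.random() < true_rates[i])

for p, a, b in zip(prompts, alpha, beta):
    print(f"Prompt {p}: shown {int(a + b - 2)} times, "
          f"estimated engagement {a / (a + b):.2f}")
```

Run on simulated data like this, the allocation drifts toward the better prompts exactly as Steven describes, while the weaker prompts still get enough exposure to be ruled out with some confidence.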
Cristi Ford (16:40):
Are you doing this across different kinds of courses or is there a focus in the kinds of courses that you are conducting these experiments?
Steven Moore (16:50):
We tried to do it at different institutions. They were all in the OLI platform, but the domains and the actual institutions varied, from courses here at Carnegie Mellon to some community colleges out in California. Spanish II was a popular one, along with chemistry and physics. So we also tried to have good domain coverage there.
Cristi Ford (17:11):
Good to hear.
Norman Bier (17:12):
Yeah and stats down at GSU. It was exciting.
Cristi Ford (17:16):
It’s good to hear that the things that you are talking about and the ways you looked at effective teaching and learning practices were across the continuum, because sometimes we find that there’s a focus in STEM or a focus in the arts, and so it’s really good to hear that from statistics to Spanish, there was great coverage around this work. I’m wondering, as you were working on these projects, and this is a question for any of you: were there surprises or moments as you were working through this challenge that really took you aback, something that really surprised you about the work?
Steven Moore (17:55):
I mean, I definitely have a fun one from our pilot. We had this pilot phase where we just tested the framework and just saw if different motivational prompts might even be worth trying out for our main deployment with hundreds of students. One of them… We had three motivational prompts we were testing. We had two that were already used in previous experiments, set up and supported by the literature, and we needed a third case for this course. And so, we put in an image of a capybara, which is just like a large hamster-like rodent, sitting at a computer with headphones on. It’s just this random kind of funny meme, no text. We just posted this picture of a capybara. The pilot course in particular was really small.
I think it was only around 20-something students, but they were shown it multiple times as they worked through. That image of the capybara, compared to the other two motivational prompts, ended up actually having the highest success rate in terms of students engaging with the material that followed. And so that kind of took me aback a little bit. I know the images are more fun to look at and maybe more engaging, but it’s something we maybe just want to dive into. Is it the humor? Is it just that it’s so odd that it stands out? So that had me spiral down a rabbit hole of, wait, what is this? Should we follow up on this more?
Cristi Ford (19:11):
Hamsters for everyone.
Norman Bier (19:14):
Well, but it’s funny, but it’s also huge, right? Because the underlying learning design theory tells us that we shouldn’t be including images that aren’t directly relevant to the learning outcome or the objective that we’re trying to achieve. It’s just a reminder that there’s so much at that interplay between building an effective learning experience and supporting student motivation and engagement to dive into that experience that we don’t have all the answers on. It’s good stuff.
Cristi Ford (19:42):
Yeah, I really appreciate that. John, what are your thoughts?
John Stamper (19:48):
Well, certainly I guess the main takeaway that I had is that there is a need for an infrastructure to run this kind of experimentation; it is not easy to do and requires a lot of coordination. As any educational researcher knows, running any kind of experimentation in a classroom, whether it’s in person or online, is really difficult. What actually amazed me is that as we got into this and built out this infrastructure, it became increasingly easier to run these experiments, and large experiments.
Towards the end, we were running at a state university that had thousands of students in tens of courses, and we were able to just deploy this as, I won’t say push-button, but that’s where we’re moving towards: that you can set up an experiment and then just push it out. To me, that is very thrilling, and I think that can have a profound effect on the research that we’re doing.
Cristi Ford (21:03):
Totally agree there. Norm, in terms of the work, in terms of the motivational prompting and the pieces that Steven talked about, where do you go next? How are you thinking, to John’s point, now that we have such a great opportunity to run these large experiments, where do you go from here and maybe what are you setting your sights on next?
Norman Bier (21:28):
Sure. One of the efforts that has been keeping us so busy has been moving from what we call the legacy OLI platform to our new next-generation Torus platform, which is an open-source effort. We’ve been joined in this by other universities; most notably, Arizona State has been doing great work with us on really getting this out and making this the future of adaptive learning. A key piece of that work is integrating and improving the tools we saw in the XPRIZE, or continuing to improve them, I should say, since they are integrated, because part of this vision is to continue to make these easier for experimental deployments by learning scientists and researchers.
An equal part of our mission has been to get greater numbers of classroom educators involved in this work of learning research, really treating it as part of the work that they do, as part of the research that’s part of their own academic career. When John talks about moving this towards a push-button state, that covers an awful lot of work that we need to do in refining the interface and in building the kinds of supports and scaffolds that can really help to turn every classroom into a lab. That’s pretty exciting. Another piece that I think is really just an incredibly rich space for us: we’ve been really fortunate to receive funding from the Bill and Melinda Gates Foundation to build out an equity-centered exemplar chemistry course. We’re building out Gen Chem I and Gen Chem II in collaboration with our colleagues at ASU.
Steven was mentioning quickly these possibilities for laying out four or five different prompts, where we start to use algorithms to shift how those prompts are deployed and allocated. This represents a huge shift from your traditional AB split, where we simply divide the class in two and give half of them the treatment and half of them the control. Individual learner needs and responses often just sort of get lost in those larger stats. This is talked about as the tyranny of the mean. This kind of adaptive experimentation gives us the chance to better understand whether there are different kinds of interventions that are meaningful for smaller populations of learners and then give them more of what’s working for them.
We’re looking forward to being able to push this out in the context of this general chemistry course, because we’ve got a lot that we’re investigating around these questions about motivation, ties back to prior knowledge, contextualization and relevance. This is going to give us a chance to test some of those theories pretty quickly and get them directly back into the courseware design.
Cristi Ford (24:12):
I mean, just wow, just the ability… Norm, every time I talk with you, and now you, John and Steven, it’s just amazing to me in terms of where we’re moving the field and how we’re getting the open-access component of that out there. I’ve always been impressed with the opportunity to think about how we reduce the barrier of the cost to serve. How do we think about equitable spaces?
So, hearing the work you’re doing in chemistry with ASU, all of that is super exciting.
Then, as you all were working on this project, we all got hit with this tsunami of ChatGPT last November. John, I want to ask you a little bit more about your work in the learning engineering field. I’ve been talking with a lot of educators who listen to this podcast about prompt engineering, where things are going, and how we can do a better job thinking about learning engineering around this work. I’d love to hear your thoughts and the kind of work that you’re doing in those spaces.
John Stamper (25:17):
Interestingly enough, we had been doing a lot of work in the space of question generation prior to ChatGPT kind of exploding. This work was done with BERT and Google’s T5 transformer, and we did use GPT as well, because one of the issues is that we know that, given good opportunities to learn, meaning good questions and good activities, students will learn. But especially in adult learning, college and above, often these online courses don’t have enough opportunities; they don’t have enough questions. That was what really brought me into this question generation space, where we had been using these large language models for quite some time.
The issue that we had always run into is how do you rank the quality of the questions that get generated and the responses that these models give? What are the metrics you can use to say whether they’re good or not? This flows right into prompt engineering and ChatGPT as well, because the interface of ChatGPT makes it so easy for anyone to approach. However, we’ve seen now, as we’ve gone about a year, maybe a little less, since it really exploded, there have been a lot of issues with people noticing the quality.
Not only the quality of the responses in terms of their written English, but also the information they’re providing is sometimes what we’re calling hallucinations, things where it makes stuff up. In fact, one of the big anecdotes, which I think most people know, is that it does a really good job of making up research papers, taking actual researchers who haven’t worked together on a paper and putting them together. It’s kind of funny, because you’ll see this paper and you’ll be like, “Wow, that’s a really interesting paper,” but it doesn’t exist. It might be good at generating some opportunities there.
I think there is a place for it. I’m super excited about this work and we continue working in this space. I think probably the biggest thing to note is that it’s a tool. It’s like anything else; I’ve heard colleagues refer to it as a calculator. I think from an instructor point of view, we have to recognize how we can use this to improve the learning experience. I know from our side, in terms of how we incorporate this into a course, I’m happy to have my students use ChatGPT if they tell me they’re using it, or one of the code generation tools as well. I’m not teaching basic coding or anything like that, and for the kind of work that I’m doing, I expect students are probably going to Stack Overflow and other places anyway. If they can use one of these models to help improve what they’re communicating to me, that’s where I see ChatGPT really can help, with communication. As long as they let me know they’re using it and recognize where it might go off the rails.
I think that’s one important point. I think from the design side and the learning engineering side, we’ve been mostly excited about how we can get it to help the instructors or the instructional designers improve the courseware that they’re making. I think there are opportunities there. We get into some really fuzzy areas with IP and things, which I think we’re going to have to let shake out and see where it lands. I would encourage instructors to put prompts in and share what prompts they’re using to get information out to improve their courseware, to improve the text in their course, or to create questions and things. I think it’s great for that.
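For instructors curious what that kind of prompt-driven question generation might look like in practice, here is a minimal, hypothetical sketch using the OpenAI Python client. The model name, prompt wording, and helper function are illustrative assumptions, not the OLI team’s pipeline, and as John notes, any generated questions still need human review before reaching students.

```python
# A rough sketch of prompt-driven question generation, assuming the OpenAI
# Python client is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def draft_questions(passage: str, n: int = 3) -> str:
    """Ask the model to draft multiple-choice questions for a course passage.
    The output is a draft only; an instructor reviews it for accuracy,
    hallucinations, and alignment with the learning objectives."""
    prompt = (
        f"Write {n} multiple-choice questions, each with four options and the "
        f"correct answer marked, based only on the following passage:\n\n{passage}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(draft_questions("Ionic bonds form when electrons transfer between atoms..."))
```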
I think just one last little thing. If you’ve looked at the usage, though, the usage of ChatGPT has actually been dropping steadily, at tens of percentage points per week. It’s actually not getting used as much now as when it kind of peaked. It’s possible we hit the peak on that Gartner curve, if you’re into the metrics. But yeah, it’ll be interesting to see what happens over the school year. Does it pick back up, or have people… The novelty of it is gone, and hopefully it’ll just get used in the right way as a tool. We’ll see.
Cristi Ford (30:35):
Norm, it sounded like you want to jump in there.
Norman Bier (30:40):
I was about to just agree with John, although I think that we’re at the end of the summer and I think fall may see things pick back up. But yeah, I think we’re excited to see where the technology may take us, but also very nervous about the urge to just start throwing this stuff at students. I think we’re well positioned at CMU to start to explore these questions, in part because we’ve got a really solid research and technical infrastructure that can help us investigate. We talk about OLI, but OLI ties back to tools like DataShop and LearnSphere that let us investigate these questions, and tools for building cognitive tutors and collaborative experiences.
I know that we’re going to be really investigating that deeply. I think when we’re looking at the big questions around generative AI, I think that there are three big ones that seem to keep bubbling up in the circles that I’m in. The first is the one that almost everyone who’s a classroom teacher faces, which is, “Oh my God, all of my questions are easily answerable. I’ve got all these easy activities that are now trivially solvable, what do I do about that? How does this change the way that I’m assessing my students?” I think that sort of the more interesting question… And we need to address that.
But the more interesting questions are how does this change the curriculum? How does this change what we should be teaching our students writ large? As John said, it’s like a calculator and the advent of the calculator changes what kinds of problems we ask students to keep drilling on. Then the last question is, how does this change our instructional practice? How does it change what I do in the classroom? How does that change how we go about designing these kinds of learning experiences? Are there affordances here for helping faculty to interpret what their students are doing and give them suggestions for classroom activities?
I had a really nice experience going through all three of those, the stages of ChatGPT grief or whatever it is, teaching this summer. Because I did kick off the class by looking at my assignments and going, “Oh my God, these are all really trivially… What do I have to change? How do I talk to my students about it?” And so we did. The second day of class, the assignment was… I had spent the first day really drilling in on the relationship between society and computing and how that plays out. So it was: you know what, throw that question into ChatGPT, ask it to cite its sources, and let’s start comparing. What does it answer well? How does it change what we talked about? What pieces is it surfacing?
During that day, I spend a lot of time talking about the role of women in computing and what this means for larger questions of diversity and inclusion in the discipline, and why this happens in society, and how does that show up in that prompt? So, six weeks later, at the end of the summer, in one of the last classes we dig in on, okay, what’s really happening in generative AI? What do they mean by a transformer? What’s this piece? What’s that piece? We spent part of that class just playing with the tools. We threw Midjourney up on the screen and just started throwing prompts at it to see what kind of images pop up, and song lyrics about surfing El Niño with mermaids got thrown in. And boom, up popped three images of mermaids, all sort of looking like Waterhouse pictures.
We turned to the class and asked them, “Hey, why are all those mermaids white? Who decided that?” This led us right back to that first day of class: why does representation matter, why do we need to have multiple perspectives in the domain, and who’s building these things? I wish I could say I had planned that moment, but it was instruction of opportunity there.
Cristi Ford (34:26):
I really appreciate the three questions, because as I’m talking with faculty and I’m talking with institutions, I think those are the questions that we really have to dig into. As I think about my own self-reflection, when you talked about your own class, it really requires me to do a little self-discovery in terms of my own teaching methodology and how I’m thinking about the ways in which I’m engaging my students. To your point, maybe I have an opportunity in some of these areas. Really insightful to hear that from you, Norm. I wanted to ask you, Steven, along this same line, and then we’re going to shift over, but one of the things I think you’re skilled at is the human-AI partnership. Can you explain what this looks like and maybe provide an example from the higher ed classroom setting?
Steven Moore (35:15):
Yeah, for sure. I’m definitely on the bandwagon of encouraging students to use ChatGPT and just having them be transparent about it. I think, too, depending on your course level and the domain, it can be better or worse. If you have an intro course, all the questions can super easily be answered by ChatGPT, but I’ve actually spent the last 72 hours throwing a bunch of higher-level chemistry questions into ChatGPT, GPT-4, and it’s not great yet at coming up with the correct answer.
We can still guide the students, but if you were just using it to answer a quiz, you would not score very highly. The whole human-AI partnership just involves knowing that if you’re going to use these tools, these large language models or even basic algorithms, they don’t have to be complex, you need to keep the human in the loop. You need to have a human there to make interpretations, to understand what biases might be built into the training set that trained this language model, or the wording that comes out of it, or hallucination detection, because we don’t have a fully sentient, human-capable AI and all this stuff.
No matter how trivial or basic the task may seem, and however much AI, whether it be these algorithms or large language models like ChatGPT, can really make us more efficient at certain tasks, you still need to have some human oversight, even if it’s as minor as having a human quickly skim the output of ChatGPT. That’s kind of what the partnership’s all about.
Cristi Ford (36:44):
Really good to hear that for higher-level thinking, higher-order critical thinking skills and more complex problem solving, you’re not getting back really good answers from GPT-4. So, something to take into account. I want to get back to the XPRIZE, the challenge and the work that you’re doing at OLI. As I think about folks who are maybe hearing and learning about OLI for the first time, Norm, how can these educators and administrators find ways to connect with the tools that you’re developing? How can they get engaged? What are the opportunities for them to get better connected to this work?
Norman Bier (37:26):
Straight off the top, we have a number of active projects running right now that we’re looking for folks to get more involved in. There’s the Gates chemistry course: if you’re an institution or an educator who is offering Gen Chem I or Gen Chem II, throw that into a Google. We’ve got a lot of space to incorporate larger perspectives into this project and see these materials get used for the benefit of students. Similarly, John and Steven and I are working on an NSF project emphasizing two-year institutions in Maryland and in New York, looking for faculty who want to take advantage of these tools, dig in themselves and make some improvements, make some changes that can make it a little more relevant or more contextualized to their students, and then analyze the data that come back. It’s no longer about building the one true course, but instead giving faculty the chance to go in, customize and evaluate, and then continue to iterate.
That’s the CCSS project. That’s another one where, if STEM faculty at two-year institutions are interested, reach out. We are always looking for more folks to participate. More generally, though, if anyone heads to the OLI website, oli.cmu.edu, you’ll find our entire course catalog.
The easiest and lowest-cost way to get involved is to jump on and take advantage of one of these courses, whether in your own learning or by incorporating it into your classroom to displace some more expensive textbook. Once you’re there and you’re seeing how the classes work and how they operate, if you start to see opportunities where this might be able to make a difference in your institution, or where you think some different implementations or some development might make sense, these are places where we’re looking for more active collaboration, whether that’s just partnering up to make some changes to the course or coming together to put in a larger proposal to fund and advance this work.
We’re always looking for more active participation. I think that one of the key pieces of the OLI philosophy and I think a larger Carnegie Mellon philosophy is that we’re looking to transform this work through building a larger community of research and practice. At any point, I’m always asking folks, “Please consider this an invitation to come join that community.” One of the benefits of the community at this point, beyond the content and the well-designed materials and the larger infrastructure, is this chance to take advantage of the kind of tools that we’ve implemented via the XPRIZE, via some other supports for enacting experiments in your own classroom. I think that there’s this chance to actually start using some of the same types of techniques that we’ve been seeing from large web retailers, from Amazon, rapid AB testing, these kinds of adaptive experiments and changes.
On the one hand, if folks are interested in deploying these things themselves and they’ve got some ideas to test, we’ve now got tools that can help. But what we also have is a large cadre of learning scientists and psychologists who have their own theories that they’d like to test and who are looking for partners to bring those innovations to bear and to get them out into the world. And so even if you don’t have your own idea to test, but you know that your students could be doing better, reach out. We’ve got a large community to connect you with. It’s those kinds of partnerships that I think have driven OLI and allowed it to thrive. Frankly, they’re also creating the kinds of relationships that are the reason you and I are still having this conversation 14 years after meeting, right?
Cristi Ford (41:00):
Norm, really great. Colleagues that are listening, what a call to action; many great opportunities for you to engage and get involved in the work. I really appreciate, John, Steven, Norm, the time today to really talk about this, have this conversation, and hear the way in which you’re changing learning and how things are moving forward. I’m just going to ask, any final thoughts from any of you before we close out today?
John Stamper (41:25):
I think one last thing that might be interesting to note is that we have brought the band back together and even added some new partners. We were recently awarded an NSF grant that we’re calling EASI, Experiments as a Service Infrastructure. That grant has Carnegie Mellon leading, along with the University of Toronto, but we’ve also added the University of Alabama and Carnegie Learning in the K12 space, which was one of the runners-up in the XPRIZE competition. So, we actually got them involved as well. That project is also looking to extend this work, get it out there, and build a really robust infrastructure that anyone can use.
Cristi Ford (42:13):
Fantastic. John, thanks for sharing. I really appreciate the emphasis on K12 as well. That there’s not just a focus on higher ed, but really being able to look and not just bring the band back together, but adding some new instruments and new players to the forefront. So thanks for that. Any other final thoughts before we hang up today?
Norman Bier (42:33):
I’d love to give a shout-out to some of my colleagues in the open education community and at the same time acknowledge some work that Steven’s doing that I just continue to be really impressed by. One of the big questions, the big hopes in open education is how we move beyond simply handing resources to students and instead see how those open licensing opportunities really give us a chance to change our pedagogy, and in some cases, more actively involve students as co-creators of materials while they’re creating knowledge.
Steven’s been taking that to an extreme in a lot of his research, where he is engaging in a practice that we talk about as learnersourcing, really actively having students create new materials and help to identify where our learning models are working for them and where they’re not. I think that wrapper of taking open educational practice and open educational materials but tying it very tightly to a rigorous learning science approach, it’s just really satisfying to see that work play out. I think it’s incredibly important that we’re making learners more active participants in this work rather than just expecting them to sit there and be vessels for knowledge to fill. And to see Steven taking this work and building on it and changing it and making it his own has been super, super cool.
Cristi Ford (43:52):
Fantastic. Listen, I hope this is not the last time I will see you on the Teach and Learn podcast. I’m really excited and continue to be invigorated by the work that you’re doing. Thank you, and very grateful for the time today. Look forward to hearing more of what you’re working on and being connected. So thank you, Steven. Norm, John, thanks for the time today.
Steven Moore (44:14):
Thank you.
John Stamper (44:14):
Thank you.
Norman Bier (44:15):
Yeah, thanks so much for having us.
Cristi Ford (44:18):
You’ve been listening to Teach and Learn, a podcast for curious educators. This episode was produced by D2L, a global learning innovation company helping organizations reshape the future of education and work. To learn more about our solutions for both K-20 and corporate institutions, please visit www.d2l.com. You can also find us on LinkedIn, Twitter, and Instagram. And remember to hit that subscribe button so you can stay up to date with all new episodes. Thanks for joining us, and until next time, school’s out.
Speakers
Norman Bier
Director, Open Learning Initiative at Carnegie Mellon University
Norman Bier is Director of the Open Learning Initiative (OLI) at Carnegie Mellon University. His work sits at the intersection of CMU’s internal educational practice, ongoing learning science research and external collaboration.
Norman has spent his career at the intersection of learning and technology, working to expand access to and improve the quality of education. His experience spans the higher educational sector, including 2-year and 4-year; public and private; domestic and international; and commercial institutions. Prior to joining OLI, he was Director of Training and Development at iCarnegie Inc., a CMU subsidiary chartered to deliver software development education through international partner institutions. Using technology and faculty support, iCarnegie reaches thousands of students who would otherwise not have access to a CMU-level education. He has taught computer science courses as an adjunct faculty member at the Community College of Allegheny County, philosophy courses at Carnegie Mellon University and served as a founding committee member of the Cook Honors College at Indiana University of Pennsylvania. He currently serves as board member for the Kaleidoscope Project and the Shady Lane School.
John Stamper
Associate Professor, Human Computer Interaction Institute, Carnegie Mellon University
John Stamper is an Associate Professor at the Human-Computer Interaction Institute at Carnegie Mellon University. He is also the Technical Director of the Pittsburgh Science of Learning Center DataShop. His primary areas of research include Educational Data Mining and Intelligent Tutoring Systems. As Technical Director, John oversees the DataShop, which is the largest open data repository of transactional educational data and set of associated visualization and analysis tools for researchers in the learning sciences. John received his PhD in Information Technology from the University of North Carolina at Charlotte, holds an MBA from the University of Cincinnati, and a BS in Systems Analysis from Miami University. Prior to returning to academia, John spent over ten years in the software industry, including working with several start-ups.
Steven Moore
PhD student, Human Computer Interaction Institute, Carnegie Mellon University
Steven Moore is a fifth-year PhD student at the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University, advised by Dr. John Stamper. He uses his background in learning science, computer science, and applied natural language processing to create and evaluate educational content for online courseware. This has led to advancements in learnersourcing, crowdsourcing, and human-AI partnerships. Steven is passionate about pedagogy, having taught and redesigned multiple courses at Carnegie Mellon University and elsewhere.
Dr. Cristi Ford
Vice President of Academic Affairs, D2L
Dr. Cristi Ford serves as the Vice President of Academic Affairs at D2L. She brings more than 20 years of cumulative experience in higher education, secondary education, project management, program evaluation, training and student services to her role. Dr. Ford holds a PhD in Educational Leadership from the University of Missouri-Columbia and undergraduate and graduate degrees in the field of Psychology from Hampton University and the University of Baltimore, respectively.