Yesterday I attended the first meeting of the Artificial Intelligence Emerging Technologies (AI-ET) Committee at Seneca. I was invited as a representative of the faculty of technology. The committee has ~40 members from across the institution: faculty, chairs, deans, admin, staff from service areas, ITS, etc. The meeting was quite useful, and the group of people in attendance was really thoughtful and intelligent. I enjoyed hearing other perspectives and wanted to share some of what I learned.
I'm not usually keen to get involved in committee work, but I think it's important that we do a good job of evaluating AI, and of including or excluding it from the various aspects of our educational work. To this end, I've spent quite a bit of time writing programs with the OpenAI APIs to build a better understanding of what is and isn't possible. I've also written recently about my own experience encountering AI in my students' assignments. The response to that post was incredible (hello Hacker News!), with people from around the world sharing, responding, and getting in touch to tell me about their own experiences. It's clear that anyone who is teaching students in 2023 is also having to think hard about AI: ChatGPT, DALL·E, and the other new models. I can promise you that if you aren't thinking about it, your students are!
Our first meeting was conducted as a series of break-out discussions. I was seated with four other people, representing faculty and admin in marketing, mathematics, sociology, and Indigenous studies. Other tables were similarly diverse, which meant that we each heard a broad range of opinions and experiences.
For example, I was interested to hear how ChatGPT is being widely adopted in marketing. Faculty in the school of marketing decided to approach the question of AI-and-education by talking directly to industry: are people using AI in their work? The answer was 'yes,' and so it made sense to accelerate the students' development by showing them how to use AI to aid their work. Where professors used to think in terms of final projects (e.g., "submit a marketing plan"), there is now a movement toward a greater focus on process, using more checkpoint assessments. Students are enjoying being able to quickly explore ideas by creating multiple approaches with AI, then refining them.
We heard a counter-argument to this for students in general education courses, which rely more heavily on traditional reading-and-writing-style learning and assessments. Students who need to reflect on specific readings in a course are finding the AI's suggestions limiting, even unhelpful, as a brainstorming tool: once you see an idea, it's hard to unsee it, even (and especially) if it's wrong. Further, ChatGPT's training data causes it to produce all kinds of incorrect, biased, ethically questionable, or otherwise unusable responses. However, students often don't have the background or experience in the literature to be able to sort through what is and isn't useful.
"If you want to see what I'm talking about, have it write your biography," said one professor. I'd never thought to try this, so gave it a shot:
Prompt
"Write a biography of David Humphrey, professor at Seneca College"
ChatGPT Response
David Humphrey is an accomplished educator and professor at Seneca College, located in Toronto, Canada. He has dedicated his career to teaching and inspiring young minds, and his passion for education has made him a respected member of the academic community.
David Humphrey received his undergraduate degree from the University of Toronto in the field of English Literature. He then went on to earn a Master of Education degree from the Ontario Institute for Studies in Education (OISE) at the University of Toronto.
In 1990, David joined the faculty at Seneca College as a professor of English. He quickly established himself as a dedicated and enthusiastic teacher, and his students were drawn to his passion for literature and his engaging teaching style. Over the years, David has taught a variety of courses in the English department, including courses on Canadian Literature, Contemporary Fiction, and Shakespeare.
In addition to his work in the classroom, David has also been involved in a number of initiatives at Seneca College. He has served on various committees, including the Faculty Council, the Curriculum Committee, and the Institutional Research Committee. He has also been involved in the development of new courses and programs at the college, and has been a mentor to many young faculty members.
David's contributions to the field of education have not gone unnoticed. He has received numerous awards and accolades throughout his career, including the prestigious John Polanyi Award for Excellence in Teaching in 2002. He was also named Seneca College's Professor of the Year in 2008.
David is a firm believer in the power of education to change lives. He has inspired countless students over the years, and has helped to instill in them a love of learning that will stay with them for a lifetime. His dedication to teaching and his commitment to excellence make him a true asset to Seneca College, and a role model for educators everywhere.
This is amazing. It's almost completely wrong, but with just enough truth and plausible-sounding facts and figures to seem real. While I do have a "passion for education," and have been "involved in a number of initiatives at Seneca," my favourite part is where I won a prestigious award given to scientists working in theoretical or computational chemistry/physics, all while teaching courses on Shakespeare in our English department! (Fact check: all of that is false.)
Everyone at our table agreed that we're going to need to rethink much of what we do. Our course learning outcomes need to be updated to focus less on "recall" and more on "analysis" and "critical thinking." Students have always needed to develop critical thinking skills, but this is even more important when confronted with hallucinating AIs.
Our modes of assessment will also need to adapt, and along with them the way we schedule and fund that work for faculty. It used to be possible to thwart cheating by updating old assignments, but ChatGPT makes this nearly impossible. We need to think hard about what is fundamental to what we are actually trying to teach and assess vs. nostalgia for "how it's always been done." We also talked about how the current AI models are amazing, but will only get better (for example, GPT can't do math yet, but we know it won't take long until this gets added).
I also got the sense that many faculty don't understand the technology yet. That's to be expected with something that is arriving so quickly. Most of what AI can do right now exists as raw technology rather than finished product. As a programmer, I'm more interested in the former; but educators mostly care about the latter. We're going to need to support people as they start to rethink what we do in light of what's possible with AI. As one of the professors at the table put it, "this is revolution, not evolution."
Here's a practical example of what I mean. As our session began, there was a bunch of discussion about who was going to take notes. Clearly no one wanted to be on the hook for this, but it was also important that we capture what was being said. At the same time, we were given 3 bullet points of ideas/questions to discuss. I've been in hundreds of meetings like this. It's a very familiar way of conducting meetings.
But this is a committee on AI and emerging technologies! Imagine a different approach. This week, OpenAI released APIs for both Whisper and ChatGPT. Everyone at the meeting knew what ChatGPT was, fewer knew about DALL·E, and I never heard anyone mention Whisper. I think models like Whisper have the potential to change our work almost as much as ChatGPT, since so much of what we do is essentially real-time audio (lectures, presentations, etc.). The Whisper API lets you convert audio to text for $0.006 per minute.
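To make this concrete, here's roughly what a Whisper call looks like. This is a minimal sketch using the openai Python package as released this week; the filename table-3.mp3 is just a hypothetical recording from one of our tables.

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Transcribe one table's recording with the hosted Whisper model.
# Note: the API caps upload size, so a two-hour recording would likely
# need to be split into shorter segments first.
with open("table-3.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])  # the plain-text transcription
```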
So imagine if we had microphones at each table, and recorded all of the parallel discussions. That's 5 separate discussions happening over 2 hours = 10 hours of audio. To turn that into text using Whisper would cost $3.60 USD.
That process will produce something on the order of ~100K words (assuming an average of 150 words per minute). Now that we have text, we can start feeding it into ChatGPT for analysis, summarizing, etc. Our 100K words will translate into ~140K tokens, and processing that with the ChatGPT API will cost $0.28 USD ($0.002 per 1K tokens)!
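If you want to check my math, here's the back-of-envelope calculation. The 150 words-per-minute and ~1.4 tokens-per-word figures are rough rules of thumb, not measurements:

```python
# Rough cost estimate using OpenAI's posted prices (March 2023):
# Whisper at $0.006/minute, gpt-3.5-turbo at $0.002 per 1K tokens.
WHISPER_PER_MINUTE = 0.006
CHAT_PER_1K_TOKENS = 0.002

minutes = 5 * 2 * 60     # five tables, two hours each = 600 minutes
words = minutes * 150    # 90,000 words at 150 wpm; call it ~100K
tokens = 100_000 * 1.4   # ~1.4 tokens per English word = ~140K tokens

whisper_cost = minutes * WHISPER_PER_MINUTE      # $3.60
chat_cost = tokens / 1000 * CHAT_PER_1K_TOKENS   # $0.28
print(f"Whisper: ${whisper_cost:.2f}, ChatGPT: ${chat_cost:.2f}")
```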
In other words, we could have captured everything and summarized the text, extracted answers for the specific questions/prompts, etc. for $4. In doing so, we would have done something that was simply outside the scope of what was possible only a few years ago.
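The summarizing step might look something like the sketch below. One caveat: gpt-3.5-turbo has a 4,096-token context window, so a ~140K-token transcript would have to be split into chunks and the per-chunk summaries combined afterward. Here, full_transcript is assumed to hold the Whisper output from above:

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

def summarize(text: str) -> str:
    """Ask ChatGPT to summarize one chunk of a meeting transcript."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You summarize committee meeting transcripts, "
                        "highlighting key points, decisions, and open questions."},
            {"role": "user", "content": text},
        ],
    )
    return response["choices"][0]["message"]["content"]

# Naively split the transcript into ~2,000-word chunks so each request
# (plus its reply) fits within the model's context window.
words = full_transcript.split()
chunks = [" ".join(words[i:i + 2000]) for i in range(0, len(words), 2000)]

# Summarize each chunk, then summarize the combined summaries.
overall = summarize("\n\n".join(summarize(chunk) for chunk in chunks))
print(overall)
```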
Thinking like this, learning to solve problems in completely new ways, takes imagination and experience. I've learned how to think with AI by watching others on GitHub and Twitter share what they are building. If you never see other people use AI to do things in new ways, it's tempting to think of it only in terms of what you know, like any other tool. It's not. AI lets you work in ways that didn't exist previously, and you won't "get it" if all you do is imagine it as a faster version of something you've already seen before. Clearly it wouldn't have made sense to manually transcribe 10 hours of audio in the past. Today, it seems almost crazy not to do it. What other impossible ways of working does this technology enable?
I'm looking forward to learning more from my colleagues. The situation we find ourselves in is both exciting and extremely complicated. There are no easy answers.