I’m not an early adopter. I’m just not. I hate change, and yet I recognize the need and the benefits, so I approach very cautiously and in my own time. I like to be able to read the research and talk to lots of people before making a decision. I like to see that the bugs have been worked out and there have been a few system updates before I get involved. I like to know something will actually be around for a while before I get too attached (there’s almost nothing worse than getting really into a television show only to find out that it wasn’t renewed for a second season…).
And, yet, with all of that, I’m delving into the world of Artificial Intelligence before I’m ready.
While ChatGPT has been around for only a few months (launched late November 2022), the concept predates even its predecessor (InstructGPT). For years now, we’ve interacted with different forms of chatbots on websites (my default was always to type “real person” as soon as I possibly could and see where that got me). My kids have asked Alexa their questions rather than coming to me (Alexa doesn’t ask seven follow-up questions; she just gives an answer, so it’s much easier). And while fictional, who can forget Janet from “The Good Place” (the “not a robot” who did develop feelings of love)?
So, yes, I fully recognize that AI has been around in various forms for quite some time, so even if I jump all-in right now, I’m still not an early adopter. But this is still early for me, and it seems like ChatGPT may be disruptive in ways other forms of AI aren’t.
And I just need to see how likely it is that I can be replaced…as an educator…as a consultant…as a person?
I’ve allocated some time this month to exploring ChatGPT and reading up on the limitations and capabilities. Full disclosure: so far I’ve logged into the site, posed a few questions, been totally impressed by the thoughtfulness and eloquence of the answers (I’ll write more about that later), decided not to dismiss the power of the platform, read the Wikipedia article, read a few listserv chains from educational administrators, and read a few news pieces focused on the reception of ChatGPT so far.
I like to approach new things with curiosity, starting by just asking a lot of questions, and this preliminary research has certainly given me enough to start asking questions.
As an educator, I’ve always hated the “How can we keep kids from cheating?” conversations as I strongly believe we should never be giving a test or an assignment where “cheating” would help them. If Google could give them the answer to a test question, are we asking the right questions? If all they need to do is recall lines from their notes word for word, what is it really assessing? If they can get the one correct answer from looking at their neighbor’s paper, are we truly teaching the higher-level thinking skills and mindsets that they will need to be successful?
The same goes for the conversations I am seeing about ChatGPT. There seem to be two main conversations happening in the education world right now surrounding the technology:
What rules and regulations and limits do we need to impose in order to minimize the impact of this technology?
How can we use this technology to the benefit of our students?
Fun fact: Kodak invented the digital camera decades before it was a thing. But they were so afraid of losing all the business they had from film/print photography that they buried the technology, hoping to buy themselves some time (and money). Eventually others “invented” the same technology. And while Kodak benefitted for some time due to the patents they owned, they never embraced digital, and Kodak filed for bankruptcy in 2012.
When it comes to thinking about AI and the role of AI in our lives and in the space of education, my first question is this:
How do we avoid being Kodak and avoid limiting our students to Kodak?
Whether or not we want to admit it, we live in a world that has ChatGPT. And our first step towards not becoming Kodak is to not try to bury this technology and pretend it doesn’t exist. We must embrace it and try to better understand it. And figure out how we can use it.
And with that comes so much potential…and so many questions.
Here are some of my initial questions (each of which can and should and will be explored in so much more depth in the future):
What are the limitations of this technology?
So far, what I’ve read points to two main limitations: it’s not always accurate, and its knowledge of anything after 2021 is pretty limited. Yes, I’ve also seen comments about it being too liberal or “woke,” but I wonder to what extent that kind of criticism can truly be objective.
How can we use this technology to heighten what we’re doing and/or reallocate resources?
According to Wikipedia, ChatGPT can already do things like debug computer programs, write content, simulate different environments and situations, and so much more.
What do our students need to know and be able to do now? How will that change with access to tools like ChatGPT? What can humans do better than this technology can do?
There are certain places where humans just can’t compete with computers. And there are certain places where computers will never be able to compete with humans. What are those areas, and how do we focus on those?
How will this technology redefine the role of teachers and the way we educate?
In some spaces, we’ve already shifted the role of teacher away from “expert.” How can teachers work in conjunction with artificial intelligence to give students the best possible learning experience?
How can we reconcile using this technology with our need to be authentic?
With the word “artificial” literally in its description, how do we reconcile the two?
How much of the current criticism is legit or based on fear?
I’ve already admitted that I hate change. And I hate the unknown. And ChatGPT (and everything that comes after it) represents so much change and unknown.
I don’t have answers for any of these questions. But I do know that if we approach with curiosity, we’ll get so much further than if we let fear control our responses. OpenAI CEO Sam Altman was quoted in The New York Times as saying that AI's "benefits for humankind could be so unbelievably good that it's hard for me to even imagine,” and in January 2023, ChatGPT reached over 100 million users, making it the fastest-growing consumer application to date…so there is something there. We just don’t know what it is yet.
One final question we should all be asking ourselves – What might be?