It has been interesting to read about the many ways that colleagues of mine and others have put ChatGPT to the test, in addition to my own tests of learning objectives, a biography, and multiple-choice questions. In one email thread, I joked that perhaps ChatGPT is a Rorschach test of what interests people about the use of artificial intelligence.
I was also pointed to an interesting site that bills itself as a ChatGPT Output Detector Demo (actually trained on an earlier version of the OpenAI model, GPT-2). I pasted in my biography from my first post, and the system declared the text had a 99.98% chance of being "fake," i.e., generated by GPT-2. When I pasted in the biographical paragraph from my own Web page, it declared the text to have a 99.97% chance of being real.
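For readers who want to try this kind of check themselves, here is a minimal sketch of running a GPT-2 output detector locally with the Hugging Face transformers library. To be clear, this is not necessarily how the demo site works; the model name is the commonly available RoBERTa-based detector released alongside GPT-2, and the exact label strings and sample texts are assumptions for illustration.

```python
# A minimal sketch of scoring text with a GPT-2 output detector.
# Assumptions: the "roberta-base-openai-detector" model on the
# Hugging Face Hub, and "Fake"/"Real" as its label names.
from transformers import pipeline

detector = pipeline("text-classification",
                    model="roberta-base-openai-detector")

samples = {
    "generated_bio": "Text of the ChatGPT-written biography goes here.",
    "human_bio": "Text of the human-written biography goes here.",
}

for name, text in samples.items():
    result = detector(text)[0]  # e.g. {"label": "Fake", "score": 0.9998}
    print(f"{name}: {result['label']} ({result['score']:.2%})")
```

With a setup like this, the 99.98% "fake" and 99.97% "real" scores reported above would correspond to the detector's classification confidence for each pasted passage.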
Another interesting reflection is to compare ChatGPT with information retrieval (IR, also known as search). Perhaps I am biased as an academic, or as someone with a deep interest in IR since it is my primary focus of research, but when I look for information, I usually want to know not only the information itself but also where it comes from and how trustworthy it is. A big limitation of ChatGPT for me is that it cites no references to back up what it says.
This points to another academic concern about ChatGPT: how it will affect the assessment of learning. Although ChatGPT seems to work best for relatively short passages of text that do not require references, there are fortunately many other ways to assess learning.
There have also been some good overviews of ChatGPT in the news media, including an interview with ChatGPT itself. There is also a nice description from the New York Times.