ChatGPT and the Hallucination Effect
Hi guys! This week, each group presented their chosen topics, and I would like to talk about my group's presentation this time.
We chose ChatGPT and its hallucination effect as our main topic and examined a case study conducted at a Canadian university that used ChatGPT as a teaching tool.
The Rationale
Since its release in November 2022, ChatGPT has drawn sharply conflicting reactions from educational institutions worldwide, ranging from restricting students' access to embracing the technology.
We thought that educators, in the phase of adapting to this new technology, need to use ChatGPT as a tool for exploring new methods of education.
So, I set the target audience as people in the education field, especially university students. We defined their needs as efficiency and accuracy in acquiring information when using ChatGPT. The technology's success or failure was therefore measured along these two dimensions: whether students spend less time getting information, and whether that information is credible.
To structure the evaluation of knowledge acquisition, I introduced the KSA model of competency as the criteria framework.
It is also known as the Knowledge, Skills, and Attitudes model. It is a framework commonly used in human resources to identify the essential competencies needed for successful job performance, and it can be applied to the education field to assess whether a student is well-educated.
Specifically, we set criteria focusing on the Knowledge component, because the KSA model defines knowledge as the "condition of being aware of something."
To meet that condition, people have to acquire information, and I thought ChatGPT could be a tool that helps students do exactly that.
Background information on the technology implementation (case study)
Our group chose a case study from Ming Wang, a professor at University Canada West, who conducted an experiment to observe distinguishing patterns when students used ChatGPT.
In his "Business Analytics" and "Operations Management" courses in the 2023 Winter Term, there were 171 students across six classes (about 30 students per class), and about 90% of them came from overseas.
Critical analysis
In this case, the educator used a quiz and an assignment to evaluate students' knowledge. Some of the findings were:
Quiz
-It was noticeable that many students spent less time answering questions than in previous terms.
Assignment
-Many reports had very similar structures in terms of sections, sub-sections, and headings.
-What stood out was the widespread low-degree matching among students' reports.
Some of the students in the lecture said,
“ChatGPT is a helpful conversational partner for learners to answer questions about science, mathematics, history, language, and other topics. It significantly improves the learning efficiency and saves a lot of time in searching information.”
“It is a good reference in academic and information-focused subjects. ... However, the accuracy of the answers depends on the skill of asking questions, such as keywords and the direction of thinking.”
We could see the same pattern among students in EST 325. To sum up, they had a similar understanding of ChatGPT: it offers efficiency and reduces the time spent acquiring information, but it has limitations in accuracy.
These findings showed that ChatGPT is efficient for getting information, but the information itself may be inaccurate.
Furthermore, this means that if students use ChatGPT without a proper understanding of the subject matter and a clear direction of thinking, hallucinations can occur and go unnoticed.
What is the hallucination effect?
We defined the hallucination effect in ChatGPT as mistakes in the generated text that are semantically or syntactically plausible but in fact incorrect or nonsensical. It appears as a confident response that is not justified by the model's training data.
If you accept such narrow, false information as true without doubt, you can be seen as sharing the model's hallucination. This can lead to dangerous misconceptions and misguided conclusions.
Suggestions
Therefore, we came up with a few suggestions; a small prompt-writing sketch follows the list.
1. Limit the possible outcomes.
2. Pack in relevant data and sources unique to you.
3. Create a data template for the model to follow.
4. Tell it what you want, and what you don’t want.
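To make these four suggestions concrete, here is a minimal sketch of how they could be applied in a prompt, assuming the openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable. The model name, course excerpt, and answer template are my own illustrative choices, not from Professor Wang's case study.

```python
# A minimal sketch applying the four suggestions to one ChatGPT call.
# Assumes: `pip install openai` (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Suggestion 2: pack in relevant data and sources unique to you
# (an invented course excerpt, for illustration only).
course_notes = """
Operations Management, Week 3: A bottleneck is the process step with the
lowest capacity; overall throughput cannot exceed the bottleneck's rate.
"""

# Suggestion 3: create a data template for the model to follow.
answer_template = (
    "Definition: <one sentence>\n"
    "Example: <one sentence>\n"
    "Source: <quote the excerpt, or write 'not in the excerpt'>"
)

# Suggestions 1 and 4: limit the possible outcomes, and state
# what you want and what you don't want.
system_prompt = (
    "Answer ONLY from the course excerpt provided. "
    "If the excerpt does not contain the answer, say 'not in the excerpt' "
    "instead of guessing. Do not invent sources. "
    f"Respond in exactly this format:\n{answer_template}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user",
         "content": f"Excerpt:\n{course_notes}\nQuestion: What is a bottleneck?"},
    ],
    temperature=0,  # a lower temperature also narrows the possible outcomes
)
print(response.choices[0].message.content)
```

Constraining the model to a supplied excerpt and a fixed template does not eliminate hallucinations, but it makes unsupported answers much easier to spot and fact-check.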
Conclusion
With these findings and suggestions, we concluded that contextualization is key to avoiding the hallucination effect, along with using multiple sources and fact-checking the information.
Beyond this, our group plans to go deeper with a more comprehensive analysis and more specific descriptions in the group paper.
Consequently, it was meaningful for me to have discussions with my group members, and with ChatGPT itself, not just to ask for information but as a chance to organize my thoughts and make them clear, conceptualized, and personalized.