The Effective Implementation of ChatGPT in PUB 101 at Simon Fraser University

            A chatbot capable of writing personalized essays and assignments is an unprecedented feat that is changing the world of academia. ChatGPT, conceived by OpenAI, is a chatbot trained on large amounts of data to conduct human-like conversations, compose essays, summarize information, and much more (Ortiz, 2023; Sundar, 2023). With the rise of ChatGPT in the classroom, educators have expressed several concerns involving plagiarism, bias, and the loss of critical thinking. However, in attempting to ban the model in higher education, critics ignore the several ways it can benefit the classroom setting. Through a mutual understanding among professors, students, and teaching assistants (TAs) of ChatGPT’s affordances and limitations, the platform can successfully enhance learning in PUB 101: Publication of Self in Everyday Life at Simon Fraser University (SFU). In this essay, I argue that the course’s three main roles, the professor, the teaching assistant, and the student, can each use the chatbot in different ways to facilitate learning while remaining cognizant of its downfalls.

Laying the Groundwork: Professors

            Professors hold the responsibility of educating students about the implications of using ChatGPT while creating an open space for discourse surrounding the model’s role in the classroom. The use of ChatGPT is becoming increasingly inevitable; therefore, I argue that instructors must foster open conversation about the tool rather than banning it, a measure that would prove ineffective. Instead, as with any other technology, instructors must help students build technological literacy surrounding its use, defined as the ability to “use, manage, evaluate, and understand technology” (ITEA, 2007, p. 7). One method for building technological literacy is devoting a class to the study of artificial intelligence, allowing students to share their knowledge and suggest guidelines for the use of the model. This optimizes transparency and fairness, allowing students to become advocates for their own learning. In this class, professors should discuss the importance of fact-checking in light of misinformation, explaining to students that the model is only as good as the data it was trained on (Mhlanga, 2023). Students should also understand that ChatGPT cannot reliably interpret context or perform common-sense and logical reasoning (Lund & Wang, 2023). With this information, they would understand, for example, that asking the model to write the term essay for them would be a poor decision, because it does not understand the professor’s specific instructions. Professors should additionally ensure that students are aware of the privacy and security implications of the model, including the fact that it can retain highly sensitive information from users’ prompts, such as financial and medical data (Mhlanga, 2023). Other implications involve potential bias in the system and the overarching question of plagiarism, which will be discussed in the following section.

Possibilities: Students and Teaching Assistants

            Students can use ChatGPT to generate ideas or outlines for their blog posts while remaining cognizant of SFU’s guidelines on academic integrity and plagiarism. In PUB 101, a course based on blogging, students publish an average of two to three blog posts each week, including “process posts” which detail their weekly blog maintenance, “content posts” related to their blog themes, “peer reviews,” and “mini assignments.” Therefore, in alignment with Qadir’s (2022) propositions for the uses of ChatGPT, I argue that the predominant purpose of the model in this course is generating ideas or outlines for blog posts. This can help alleviate the time and stress students endure while conceiving their posts. However, Qadir, with the help of ChatGPT, explains that there is a difference between acceptable use and plagiarism, which is a significant concern regarding the model’s use in the classroom. An exploration of SFU’s academic integrity policy is therefore required to understand how the model might be used ethically. The policy’s central tenets state that students must present honest and fair work, among several other requirements (SFU, 2018). These tenets demonstrate that it is crucial for students to be transparent about their use of the model by citing or crediting it, even if they use it only to generate ideas (Qadir, 2022). They should not use the model to write entire paragraphs, blog posts, or essays, as this would constitute “falsification, misrepresentation… [and] deception” (SFU, 2018) because they would be representing a chatbot’s output as their own work.

            ChatGPT offers teaching assistants the opportunity to provide personalized feedback to students, but they must remain aware of the model’s potential biases. Mhlanga (2023) explains that feedback from ChatGPT allows students’ classroom experiences to become more “productive and personalized” (p. 8). There are 59 students and one TA in the Spring 2023 offering of PUB 101, and with students creating an average of two to three blog posts weekly, it becomes nearly impossible for the TA to provide consistent feedback on the 118 to 177 blog posts produced each week. TAs can employ ChatGPT to regularly provide personalized feedback on the writing elements they wish to emphasize, including layout, content, and grammar, which, as Mhlanga describes, simulates hands-on learning. For example, I prompted ChatGPT with the following: “provide feedback on the layout of this blog post,” and proceeded to enter my post, “Piping Hot Take: The Royal Family Needs to Apologize for Racism,” to which ChatGPT responded:

The layout of this blog post is simple and easy to read, with a clear title and an engaging opening sentence. The author provides a brief disclaimer to clarify that the post reflects their opinion. The use of subheadings helps to break up the text and guide the reader through the post. The inclusion of images to illustrate the content adds visual interest and helps to convey the author’s point. Overall, the layout is effective in presenting the author’s argument in a clear and concise manner.

(ChatGPT, personal communication, March 10, 2023)

            Despite ChatGPT’s capacity to offer effective personalized feedback, TAs must recognize that its output could contain biases rooted in its training data (Lund & Wang, 2023). These biases could pose problems when students’ positionalities, including race, gender, or socioeconomic status, are reflected in their writing, as the model can perpetuate existing societal prejudices (Mhlanga, 2023). Because of these potential biases, PUB 101 TAs cannot use the feedback provided by ChatGPT to generate final marks. Instead, Cotton et al. (2023) suggest that TAs employ a combination of artificial intelligence and manual assessment tools to ensure that students’ blog posts and their understanding of the material are assessed appropriately. ChatGPT could generate regular feedback, supplementing the TA’s formal feedback offered twice a semester, with final grades assessed manually by the TA.


Conclusion

            The successful implementation of ChatGPT in PUB 101 requires professors, students, and TAs to uphold certain responsibilities. Professors must foster technological literacy in the classroom by teaching students about ChatGPT’s affordances and limitations, showing them when it is appropriate to use the chatbot. With this foundation in place, students can use the model to generate ideas and outlines while remaining aware of SFU’s academic integrity policy. Additionally, TAs can use ChatGPT to provide personalized feedback for students, as long as they do not rely on the model to assign grades, due to potential biases in its data. With ChatGPT-literate professors, students, and TAs, the model has the potential to revolutionize learning in PUB 101, positioning the course as a leader in AI-assisted education at SFU.


References

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1-12.

International Technology Education Association. (2007). Standards for technological literacy: Content for the study of technology (3rd ed.). International Technology Education Association.

Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News, 16(3), 1-4.

Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. SSRN Electronic Journal, 1-19.

Ortiz, S. (2023, March 10). What is ChatGPT and why does it matter? Here’s everything you need to know. ZDNET.

Qadir, J. (2022). Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. TechRxiv, 1-10.

Simon Fraser University. (2018, November 22). Student academic integrity policy.

Sundar, S. (2023, March 1). If you still aren’t sure what ChatGPT is, this is your guide to the viral chatbot that everyone is talking about. Business Insider.
