Editor’s note: This is the first of an occasional series of stories The Columns will run this academic year on the topic of AI and its role in higher education.
No matter where one stands regarding the use of AI and large language models like ChatGPT in higher education, one thing is clear: These tools probably aren’t going anywhere.
For Rebecca Cepek, now in her fifth year of teaching English at Fairmont State University, AI learning models present real ethical concerns. For example, according to AI detection platform Turnitin’s user agreement, content submitted by students may be used to improve its services.
Cepek finds this problematic if students are unaware.

“If Turnitin is using student work to train its systems, then professors must disclose that,” she said. “In my syllabus, I tell students that anything they submit may be subject to AI detection, and I disclose this in the first week so they can make an informed decision about staying in the class.”
With more than two decades of teaching writing and literature, Cepek believes that students should write full sentences and paragraphs without the assistance of tools such as ChatGPT.
Her expectations are clearly outlined in her syllabi.
“I don’t allow students to use generative AI,” she said. “They may use standard editing tools, like Microsoft Word’s spellcheck, but anything that suggests full sentences or paragraphs, anything that does the work for them, is not permitted.”
Her goal is to foster genuine learning and maintain a fair academic environment where all students have equal opportunities to grow. To support this, she requires students to document their drafting process, recommending platforms like Google Docs (which tracks version history) or Microsoft extensions that record typing activity.
When asked about AI misuse, Cepek emphasized the importance of early and open communication.
“AI misuse can be an issue if we don’t discuss it with our students from the start,” she said. “AI is being hyped as a solution to problems that I don’t think are real, and generative AI isn’t the solution.”
That said, she believes most students are genuinely invested in their education and want to learn, not just earn a degree. To ensure academic integrity, Cepek uses Copylinks, an AI detection platform, but only as a final step.
“Copylinks is not my sole method of detection,” she noted. “I rely on rubrics and my own analysis to identify signs of AI-generated content—fabricated quotations or citations, surface-level analysis, or language that doesn’t match the assignment or the student’s skill level.”
She acknowledges the limitations of detection tools and stresses the importance of human judgment in evaluating student work.
Cepek cautioned against overreliance on AI detection systems.
“We need to have conversations with students about what AI is, why they might use it, and why we discourage its use,” she said. “It’s also about how professors design assignments and rubrics to encourage authentic student engagement.”
Several universities, including the University of Texas at Austin, Vanderbilt, Northwestern, Michigan State, and the University of Cape Town, have moved away from AI detection tools, citing concerns about accuracy and ethics.
Cepek said she does not support an outright university-wide ban of AI detection tools, but added that faculty and staff should not “rely on AI detection tools as their only method of detecting AI use.”
In closing, Cepek reflected on the current state of AI in education: “As it stands right now, AI is not up to the task of being useful in first-year writing classes. I don’t know if, when, or how that will change, but for now, this is where we are.”