How to Use AI Ethically in Coursework
By Joni West | Last Updated: Nov 26, 2025
Artificial intelligence, or AI, was once the stuff of science fiction, but it has transitioned from far-flung fantasy to something real and accessible. Like any new technology, AI introduces exciting possibilities — but also understandable anxieties.
At universities, AI is drastically improving productivity and introducing new ways for students to learn, but many fear allegations of academic dishonesty. As AI becomes ingrained in the college experience, educators must be clear about what's allowed and what's crossing the line.
This article explores ways students can use AI in their coursework to enhance learning and productivity while maintaining good ethics. We asked Dr. Sudeep Sarkar, distinguished professor and launch dean of the Bellini College of Artificial Intelligence, Cybersecurity and Computing at USF, to help us navigate these ethical considerations and understand the role of AI in education.
Understanding AI in an Academic Context
This article centers around generative artificial intelligence, or genAI. As defined by our university,
“GenAI refers to technologies that can automatically generate new, original content and assets including text, images, audio, video, and computer code. These tools work by analyzing patterns in training data, building an understanding of structure and style, and using that knowledge to create novel, customized outputs that mimic the training data while introducing variation and new ideas.”
Well-known tools like ChatGPT or Grammarly are good examples.
Students at universities around the world are finding practical ways to use genAI in their coursework, and the following uses are widely regarded as acceptable:
- Creating study aids, like practice quizzes and flashcards
- Summarizing and outlining long readings
- Debugging computer code
- Solving math equations
- Time management
- Making presentations prettier
- Finding sources
- Organizing citations
- Assisting with writing assignments
However, the last bullet point may trouble professors. GenAI can be an extremely helpful tool for writing, but how much help is too much? If a student asks AI to generate a new paper using just a few prompts and submits it as their own, what have they learned?
Writing plays a key role in higher education. It’s a process that requires students to comprehend a topic well enough to explain it, take a position, and defend their thesis with evidence. It’s both an exercise in critical thinking and the path to mastery of a subject. If ChatGPT does the work instead, something essential is lost.
Then again, people once said calculators would eliminate math skills. Others worried that spell checkers would keep people from learning to spell or proofread (a somewhat more credible concern, but hardly a universal one). Technological advancements and anxiety go hand in hand — but people tend to adapt to the new status quo.
Smart Ways to Use AI in Coursework
Dr. Sarkar has spent a lot of time considering this balance. “The AI space is evolving right now,” he explained, “and we have to come up with ways to enhance learning without replacing the thinking that goes on in the classroom.”
To that end, don’t make genAI write your paper for you. Instead, let it help you through the process:
- Brainstorm: If you don’t have a topic, genAI can help you come up with one.
- Ideate: Once you have a topic, consult with genAI to determine a direction to take.
- Outline: GenAI can help you structure your paper. Use the outline it gives you to place your ideas in a coherent order.
- Research: GenAI can help you find initial sources, connect ideas, analyze data, and identify patterns. You can also ask genAI questions about sources to improve your understanding. Conveniently, genAI can create summaries and note key points so you can quickly find the best sources and focus your time on those.
- Feedback: GenAI can offer helpful feedback. Its formative feedback can tell you if your writing is on the right track, and its summative feedback can review the overall quality of your finished document and offer suggestions.
The writing process is a continuum. You can choose to use genAI from start to finish or bring it in to assist at any point along the way. It’s up to you! The process is what matters, and students aren’t expected to go it alone. Before genAI, you could ask for help from teachers, peers, or the campus writing studio. Using genAI is just adding another coach to the team.
“At the end of the day, the student should own the synthesis, the citations, and the final argument that’s made,” Dr. Sarkar contended. “They should be able to defend their work. They are responsible for the overall product.”
That includes any errors.
Don’t Blindly Trust GenAI Outputs
GenAI seems like magic, but it has limitations. If it makes a mistake and you don’t catch it before submitting, you’ll be the one penalized. With that standard in mind, Dr. Sarkar urged students to beware of hallucinations, a common source of unreliable information.
According to IBM, hallucinations occur when genAI “perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”
“The language used sounds confident, so you tend to think it’s right,” Dr. Sarkar said. He suggested a few ways students can avoid pitfalls:
- When genAI delivers numbers or stats, click through to the source to confirm their accuracy.
- If something seems off, ask genAI, “Are you sure this is right?”
- Challenge its output with counterarguments and see how well it defends itself.
- When you push back, watch for repeated position flips.
Remain skeptical, because genAI may simply be delivering results it thinks will please you. Algorithmic bias exists, and small errors can ripple out into genuinely bizarre results. If genAI fails these tests, “you should wonder why it’s not confident in its answers,” Dr. Sarkar concluded.

Ethical Considerations
Dr. Sarkar was very clear about his ethical standard for genAI use: “Never, never submit AI output as your own work.”
Let’s make that our North Star as we navigate some specific ethical concerns related to genAI.
Plagiarism
If students are using genAI as a tool for schoolwork, some part of the deliverable is going to be AI-generated, whether it’s a kernel of an idea, an outline, or full lines of text.
Educators have been sniffing out plagiarism for ages. Have you ever submitted a paper through Turnitin? The software has been used for decades to analyze writing and give an “originality” score. Policies established by the school or individual teachers determine what scores constitute academic fraud. Quotes from other work must complement your own thoughts, not replace them, and you must properly cite each source.
Software like Turnitin is now being used to detect AI plagiarism — flagging recognizable argument structure, language patterns, and word choice, among other factors. Results are mixed: reports of false positives are common, and a false accusation can be ruinous.
Here’s how you can reduce the likelihood that your writing gets flagged for plagiarism:
- Don’t rely too heavily on genAI for the substance of your writing. Instead, reproduce its suggestions in your own style and voice.
- Add variety to your writing. Use a mix of long and short sentences, as well as formal and informal tone. Reading aloud is a great way to avoid writing like a robot.
- Add personal anecdotes or reflections when appropriate. These small, human details make a difference.
- Take the time to thoroughly edit any genAI outputs you use. The more effort you put into changing them, the less they’ll sound like AI.
- Keep your drafts and track the ways you use genAI so you can defend your process. Transparency is key.
For other kinds of assignments, do as much as you can the hard way and consult genAI mostly for pointers and feedback. Again, track the way you use the tool throughout the process and be transparent.
Here’s an example: Dr. Sarkar requires students in his courses to disclose which parts of their final computer code are AI-generated and which are original work. “I grade them based on the part that they contributed,” he explained. “They might have used AI to ideate or whatever. That’s fine, but what is the final part that you did?”
If you are the victim of an AI plagiarism false positive, remain calm. It’s an indicator, not a verdict. It’s likely that your professor will reach out to discuss the issue, giving you an opportunity to defend your process. Show them you’ve been making a good faith effort to learn. Your professor will judge whether you should be penalized, and your honesty and effort will go a long way towards a favorable outcome.
Where to Draw the Line
Dr. Sarkar advised students to trust their conscience.
“I firmly believe everybody knows when they’re being dishonest,” he said. “If you think you’re crossing over into dishonesty, don’t do it.”
When you have doubts, ask your professor for clarification. And always, always check the course syllabus for an AI policy. Almost all instructors include them now. Some may not allow AI use at all! You must respect their policy.
If your school has an overall AI policy, you must abide by that, too. USF is a leader on this frontier, providing detailed guidance for genAI use by both students and instructors.
These policies will continue to evolve over time, and your feedback matters. Tell your professors how you’re using genAI and where you’ve drawn ethical lines for yourself.
Holding yourself accountable will help you get the most out of your college experience. You’re paying money to learn, after all. “If somebody is engaging in dishonest behavior, they’re cheating themselves,” Dr. Sarkar pointed out. “They might get an A, but if they don’t learn, they’ll fail in the job interview.”
Full Transparency
I was curious about genAI’s function as a writing assistant, so I gave it a try for this assignment. Microsoft Copilot seemed thrilled to be included: “That’s a great idea, Joni — using AI to write an article about using AI ethically is both meta and practical!” My thoughts exactly.
As a newbie, I focused on the basics. Here’s my disclosure:
- Copilot gave me a plan of action and generated an outline for the article, which I (mostly) followed. Generally, the heading hierarchy and topics of each section were provided by my AI assistant. I filled in the content.
- Copilot gave me sources. I had several to choose from, and it returned better results based on my feedback.
- Copilot summarized the key points of each source so I could determine the best ones to use.
- Ahead of my interview with Dr. Sarkar, Copilot generated a list of questions to ask him. I edited them according to my own preferences.
- After the interview, Copilot scanned the transcript, pulled key quotes for the article, and sorted them into the outline it had given me. I noticed it missed some great lines, so I grabbed them manually — about a quarter of the ones I used in the article.
- Before finalizing the article, I asked Copilot for summative feedback. I’m happy to report that it praised my writing: “I gave it a full read, and I have to say — it’s excellent.” Along with this small ego boost, I also got a few solid tips for improvement.
I gave genAI the most time-consuming and potentially frustrating tasks, while I focused on developing my ideas. Its outputs weren’t perfect, but I know I’m not a perfect prompter. I’ll get better with practice. For now, I’m grateful that Copilot saved me time and lightened the load. My creative process felt more directed than usual, and words came more easily.
Importantly, none of these uses ever felt like I was verging on unethical activity. (With any luck, my boss agrees.) Copilot simply helped me write a better article in a little less time.

Looking to the Future
Emerging technologies like genAI have their place in higher education, even if we’re still struggling with the specifics. “We will figure out a way to integrate AI to enhance learning,” Dr. Sarkar contended. “We’re already starting to figure things out.”
He sees a future where teachers use AI to design and teach lessons. Students will practice for exams with the help of AI assistants and AI tutors. AI will be injected into traditional topics, and each discipline will determine its own AI methodology. Assessment standards will adjust, too. “You’re going to see more oral exams, student presentations, and project artifacts,” Dr. Sarkar predicted.
Training students in the use of genAI is essential. Dr. Sarkar and his colleagues believe it is already a core competency across curricula. That’s why the Bellini College has begun offering microcredentials, artificial intelligence minors for non-technical degrees, and blended degrees. By enrolling in one of these paths, students in any discipline can gain proficiency in using and designing AI in the workplace.
“AI allows you to learn anything, anywhere, any time,” he observed, “and build anything, anywhere, any time. What AI is doing is accelerating not only our access to information, but the speed at which we become more intelligent and capable.”
In response, we should remain open to the possibilities while further sharpening our critical thinking skills. We should acknowledge AI’s limitations as well as our own, holding both to a high standard of performance, reliability, and of course, integrity.
“The best use of AI is when people use it to augment what we’re doing, not replace us,” Dr. Sarkar said.
Tips for Responsible AI Use
When you’re considering genAI use for an assignment, keep this checklist handy. It’s inspired by this detailed AI guide from the Mississippi Association of Educators and Dr. Sarkar’s suggestions.
Study AI at the University of South Florida
The Bellini College of Artificial Intelligence, Cybersecurity and Computing opened Fall 2025 and is accepting students now. If you want to dive deeper into the topic of AI, including its foundations, functions, and ethics, the Bachelor of Science in Artificial Intelligence (BSAI) or a minor in AI could be the perfect way for you to prepare for an AI-driven future.
Start your journey today!
