The use of AI has become increasingly prevalent in higher education, particularly in fields like Computer Science and software engineering. In today’s world of learning, problem-solving, and productivity, tools such as coding assistants, automated debugging systems, and large language models are frequently relied upon. In many ways, AI can act as a tutor, reference, partner, or even a second set of eyes for design and documentation.
For Experience WODs, I usually tried to finish on my own first. Rather than using AI as a solution generator, I used it as a clarification tool when I was absolutely stuck or unclear about expectations. There were times, though, when I was ultimately stuck and needed AI to generate code for me, and in those cases I didn’t really benefit from the exercise. I refrained from requesting complete solutions so I could keep practicing under a time limit; even so, ChatGPT was helpful in clarifying what a WOD was testing, and tools such as ChatGPT, Google Gemini, and GitHub Copilot lessened the fear and confusion before beginning.
For practice WODs, I sometimes used AI after class to review, especially the ones I struggled with, and then tried the practice WOD again on my own time without using AI.
There have been quite a few times when I used AI to sketch a good layout of what I should do first, to keep track of what to keep in mind, and to help with debugging when I was almost there. Sometimes I also used it for a boost when I was stuck and didn’t know what I should be doing.
My use of AI when writing essays has been pretty rare; most of the time, I like to write what comes to mind, and I’m of the opinion that you shouldn’t have AI write essays for you when you are perfectly capable of doing so yourself. The rare times I did use AI for an essay, it was for organization and tone rather than the content I wrote about.
In the RIBows final project, AI played a supportive role in debugging, UI work, the back end, and documentation. I used GitHub Copilot, which would autocomplete some of the code I was about to write, and ChatGPT mainly for debugging and other purposes, such as asking what the next step should be after an issue.
AI was extremely useful when learning new concepts such as design patterns, Next.js app structure, or Prisma schemas. Most of the time, I used Google, YouTube, Stack Overflow, and sometimes ChatGPT for things I needed more help with, usually topics I was unfamiliar with.
I have not used AI to answer a question in class or on Discord, partly because I knew someone would have a better way of answering the question.
This relates to the last section: although I haven’t asked AI to write a smart question for me, it has been there to clarify or refine my questions into clearer, more focused forms.
For small, isolated syntax questions, AI was one of the most efficient tools I used. For example, “Give an example of using _.filter and _.map together in Underscore to process an array of objects, and explain the output.”
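To illustrate the kind of answer that prompt produces, here is a minimal sketch of chaining `_.filter` and `_.map` over an array of objects. The sample data and the stand-in `_` object are my own illustrations: if the `underscore` package isn’t installed, these two helpers mirror Underscore’s array behavior (`_.filter(list, predicate)` and `_.map(list, iteratee)`) using the native array methods.

```javascript
// Stand-in for Underscore's _.filter and _.map on arrays.
// With the real library you would instead do: const _ = require('underscore');
const _ = {
  filter: (list, predicate) => list.filter(predicate),
  map: (list, iteratee) => list.map(iteratee),
};

// Hypothetical sample data for the example.
const users = [
  { name: 'Ana', gpa: 3.8 },
  { name: 'Ben', gpa: 2.9 },
  { name: 'Cam', gpa: 3.5 },
];

// First keep only users with gpa >= 3.0, then pull out just their names.
const honorNames = _.map(
  _.filter(users, (u) => u.gpa >= 3.0),
  (u) => u.name
);

console.log(honorNames); // → [ 'Ana', 'Cam' ]
```

The output is `[ 'Ana', 'Cam' ]`: `_.filter` drops Ben, and `_.map` transforms the remaining objects into a plain array of name strings.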
I found AI especially helpful when reviewing code I had written earlier or code written by teammates. Though I enjoy working out what each set of lines does for the program, I sometimes like to have AI clarify what I’m looking at before I change or break anything, so I know what to do when implementing something.
I avoided using AI to write entire solutions, but I did use it to generate starting points or small snippets. There were times, though, when I had to have AI generate larger pieces of code for me.
I have used AI for debugging and ESLint errors, such as having GitHub Copilot fix a problem or pasting my code into ChatGPT to find a better answer.
Other than the uses I’ve described above, I’ve used AI for direct coding tasks, and rarely for design and decision-making.
AI in ICS 314 changed how I learned more than how quickly I learned. My understanding and problem-solving skills improved when I used it consciously, primarily for debugging, explanations, and review. However, I couldn’t help feeling a little guilty at times, because I used AI for things I knew I could do by myself. Though AI is great and gives you instant gratification when you’re done, who’s actually doing the work?
Outside of ICS 314, I honestly didn’t use AI that much. Most of my AI use stayed inside the class context—WOD prep, debugging, or getting explanations—because that’s where it felt directly helpful. The main place I used AI beyond the course was in personal projects, where I was experimenting and didn’t have a team depending on me.
The largest issue I encountered with AI in this course was that it can be blatantly incorrect in ways that are difficult to recognize when you’re rushing or exhausted. Sometimes it would provide an answer that seemed right, but it didn’t match our precise stack, the library’s current version, or the ICS 314 requirements, such as ESLint. I also observed how simple it is to get a false sense of advancement: if AI provides you with a neat-looking solution, you may believe you understand it when, in reality, you have simply copied something that appears to work.
Additionally, I believe it would be beneficial if the course promoted AI in areas that are not detrimental to core learning, such as creating test ideas, assisting with the explanation of error messages, or enhancing documentation, while maintaining assessments like in-class WODs that concentrate on what students can accomplish independently.
In ICS 314, traditional teaching methods, such as in-class WODs and practice under time constraints, forced me to trust my gut and made clear what I truly understood and what I still needed to practice. AI-enhanced learning, meanwhile, allowed me to receive explanations when needed and troubleshoot without losing momentum. Retention was the primary trade-off: things were completed more quickly when I fully understood what was happening from the modules or readings and then used AI to clarify the rest.
I believe that AI will always be a part of software engineering education. The real question is how to use it without letting it take away from the parts of learning that are most important. As applications get better, they will probably be able to understand bigger codebases and suggest fixes or tests that are in line with the style of a project; however, the risks aren’t going away either, as there may be over-reliance, privacy concerns, and more.
For me, the future role of AI in classes like ICS 314 should focus on helping students learn and improve, while also ensuring students can explain, reason, and code independently when it matters.
Using AI in ICS 314 did not replace learning for me, but it did alter how I learned. At its best, it helped me overcome confusion and understand why things behaved the way they did. At worst, it tempted me to accept an answer too quickly, mistaking a working solution for understanding a subject. The main takeaway from this is that software engineering is still about responsibility. Even if AI helps you write code or explain an error, you are responsible for ensuring that the solution is correct, readable, and maintainable.
If I had to make a recommendation for how to improve AI use, I’d say keep in-class WODs focused on what students can do independently. You shouldn’t be dependent on AI to give you every answer; instead, learn from it, and thoughtful AI use should be encouraged for learning and improvement outside of those assessments. If students are taught to verify, reflect, and be transparent, AI can be a valuable tool without diminishing the core goal of ICS 314.