Artificial Intelligence

4 Things to Know About AI’s ‘Murky’ Ethics

By Alyson Klein — June 12, 2024 4 min read

Overworked teachers and stressed-out high schoolers are turning to artificial intelligence to lighten their workloads.

But they aren’t sure just how much they can trust the technology—and they see plenty of ethical gray areas and potential for long-term problems with AI.

How are both groups navigating the ethics of this new technology—and what can school districts do to help them make the most of it, responsibly?

That’s what Jennifer Rubin, a senior researcher at foundry10, an organization focused on improving learning, set out to find last year. She and her team conducted small focus groups on AI ethics with a total of 15 teachers nationwide as well as 33 high school students.

Rubin’s research is scheduled to be presented at the International Society for Technology in Education’s annual conference later this month in Denver.

Here are four big takeaways from her team’s extensive interviews with students and teachers:

1. Teachers see potential for generative AI tools to lighten their workload, but they also see big problems

Teachers said they dabble with using AI tools like ChatGPT to help with tasks such as lesson planning or creating quizzes. But many educators aren’t sure how much they can trust the information AI generates, or were unhappy with the quality of the responses they received, Rubin said.

The teachers “raised a lot of concerns [about] information credibility,” Rubin said. “They also found that some of the information from ChatGPT was really antiquated, or wasn’t aligned with learning standards,” and therefore wasn’t particularly useful.

Teachers are also worried that students might become overly reliant on AI tools to complete their writing assignments and would “therefore not develop the critical thinking skills that will be important” in their future careers, Rubin said.

2. Teachers and students need to understand the technology’s strengths and weaknesses

There’s a perception that adults understand how AI works and know how to use the tech responsibly.

But that’s “not the case,” Rubin said. That’s why school and district leaders “should also think about ethical-use guidelines for teachers” as well as students.

Teachers have big ethical questions about which tasks can be outsourced to AI, Rubin added. For instance, most teachers interviewed by the researcher saw using AI to grade student work or even offer feedback as an “ethically murky area because of the importance of human connection in how we deliver feedback to students in regards to their written work,” Rubin said.

And some teachers reverted to using pen and paper rather than digital technologies so that students couldn’t use AI tools to cheat. That frustrated students who are accustomed to taking notes on a digital device—and runs contrary to what many experts recommend.

“AI might have this unintended backlash where some teachers within our focus groups were actually taking away the use of technology within the classroom altogether, in order to get around the potential for academic dishonesty,” Rubin said.

3. Students have a more nuanced perspective on AI than you might expect

The high schoolers Rubin and her team talked to don’t see AI as the technological equivalent of a classmate who can write their papers for them.

Instead, they use AI tools for the same reason adults do: to cope with a stressful, overwhelming workload.

Teenagers talked about “having an extremely busy schedule with schoolwork, extracurriculars, working after school,” Rubin said. Any conversation about student use of AI needs to be grounded in how students use these tools to “help alleviate some of that pressure,” she said.

For the most part, high schoolers use AI for help in research and writing for their humanities classes, as opposed to math and science, Rubin said. They might use it to brainstorm essay topics, to get feedback on a thesis statement for a paper, or to help smooth out grammar and word choices. Most said they were not using it for wholesale plagiarism.

Students were more likely to rely on AI if they felt that they were doing the same assignment over and over and had already “mastered that skill or have done it enough repeatedly,” Rubin said.

4. Students need to be part of the process in crafting ethical use guidelines for their schools

Students have their own ethical concerns about AI, Rubin said. For instance, “they’re really worried about the murkiness and unfairness that some students are using it and others aren’t and they’re receiving grades on something that can impact their future,” Rubin said.

Students told researchers they wanted guidance on how to use AI ethically and responsibly but weren’t getting that advice from their teachers or schools.

“There’s a lot of policing” for plagiarism, Rubin said, “but not a lot of productive conversation in classrooms with teachers and adults.”

Students “want to understand what the ethical boundaries of using ChatGPT and other generative AI tools are,” Rubin said. “They want to have guidelines and policies around what this could look like for them. And yet they were not, at the time these focus groups [happened], receiving that from their teachers or their districts, and even their parents.”
