
CheatGPT: Will Artificial Intelligence Make Students Smarter or Dumber?

AI is here, and it's in schools. Many educators in North Texas question whether it's a welcome visitor or something that will doom our students.
With the advent of AI, will students still learn the way they used to? Alicia Claytor; Made with assistance from Dall•E
Maya Bodnick, an undergraduate student at Harvard, received a report card over the summer that would make most parents proud. She’d brought back mainly A’s and B’s from the prestigious school: a solid 3.57 GPA.

But Bodnick didn’t complete any of the assignments herself. The artificial intelligence bot ChatGPT did.

Don’t worry: Bodnick was conducting an experiment, one that’s gone viral in the weeks since she published the results. The Harvard student had asked seven professors and teaching assistants to score her essays as they usually would, telling them — in an effort to reduce grading bias — that they’d either been penned by her or by ChatGPT.

In reality, the AI had written each response.

“Right now, ChatGPT enables students to pass college classes — and eventually, it’ll help them excel — without learning, developing critical thinking skills, or working hard,” wrote Bodnick in a piece for the Slow Boring Substack, where she also interns. “The tool risks intellectually impoverishing the next generation of Americans.”

ChatGPT is a large language model chatbot that allows users to request detailed responses to specific questions and prompts — anything from, “What is the first letter of the supporting actor’s name in There Will Be Blood?” to, “Write a 500-word op-ed about recent developments in quantum physics.” The seemingly omniscient AI spits out the desired product in seconds.

The chatbot isn’t even a year old, but it has already grown far more advanced.

A product of the latest era of the information age, ChatGPT, made by the company OpenAI, is evoking both wonder and dread among educators. It’s turned into something of a Rorschach test for teachers in Texas and across the country.

Where proponents see progress and an effective tool for learning, naysayers warn that ChatGPT will cause students’ critical thinking to atrophy. It arrived on the heels of widespread pandemic-induced learning loss. Now, critics are warning of a budding “cheating epidemic.”

Certain districts nationwide, including schools in Los Angeles and New York City, have chosen to ban the technology. Reactions among North Texas districts vary.


Fort Worth ISD indicated earlier this year that it had no plans to bar the AI. In July, Plano ISD’s digital learning team held a virtual class on how ChatGPT can help teachers and students alike. School officials in Denton, meanwhile, have warned that asking the technology to assist with homework amounts to plagiarism. Districts further south, like Austin and Eanes ISDs, also opted for bans.

Dallas ISD told the Observer via email that it has “decided to embrace responsible use” of ChatGPT. Chief Academic Officer Shannon Trejo said the district would devise ethical application guidelines for students and staff to combat concerns about plagiarism. “Professional learning for teachers and staff, as well as training opportunities for students will be incorporated into a Digital Citizenship focus,” she continued.

Rather than shunning the tech, some educators are working with ChatGPT as a teaching aid. It can compose handouts, syllabi, lesson plans and grading rubrics, leaving teachers more time for higher-level planning.

Others aren’t nearly as optimistic about its potential.

Rena Honea, president of Dallas’ Alliance/AFT teachers union, fears that ChatGPT allows students to avoid putting in the hard work. “We need them to be able to be strong thinkers, independent thinkers, critical thinkers — take what they're seeing and ask questions,” she said. “Don't just take it as, ‘That's the gospel. That's the word.’”

The way Honea sees it, kids need to explore being creative and develop their own ideas. Otherwise, they’ll become too dependent on others.

“Because when those people are gone or when those resources are gone,” she added, “what will they be left with?”


Journalism professor Tracy Everbach wonders how students will keep integrity in their reporting with new AI tools.
Alicia Claytor
Tracy Everbach, a journalism professor at the University of North Texas, vividly remembers coming across an alarming social media post late last year. The video was from a former student who’d asked ChatGPT to respond to a specific prompt — something related to political and economic systems — and to cite sources only from within a certain time frame.

The AI pumped out the essay in a matter of seconds.

“When I saw that,” Everbach said, “I thought, ‘Oh, no. We are doomed.’”

The clip alone seemed concerning enough to Everbach, who is also an academic integrity officer at UNT. Then she read the post’s replies.

“The responses to it scared me, too,” she said, “because students … were asking for tips on how they can use this without it being found out.”

There is evidence indicating that students are flocking to ChatGPT-like tech to help them cheat. In a March exclusive for The Daily Mail, one expert noted that “well over half of students are likely using AI tools” to do just that, but that the real number could be even higher.

Some students may even consult one AI tool to write an assignment and another to reword it, making detection that much harder.

Bodnick’s Harvard report card isn’t the only proof of ChatGPT’s proficiency. The bot has passed difficult graduate-level and professional exams, including the bar.

But ChatGPT isn’t always reliable. Large language models are known to “drift,” meaning their behavior can swerve from its initial parameters in unanticipated ways. One recent study from Stanford University and the University of California, Berkeley, for instance, suggests that the latest version got significantly worse at solving certain math problems over a three-month span.

ChatGPT isn’t exactly dependable when it comes to using legitimate references, either. It has at times concocted scholarly citations thanks to a phenomenon dubbed “hallucination.”

As an academic integrity officer, Everbach has encouraged professors to examine students’ sources: Do they seem credible? There are detectors available online, she added, but they, too, are flawed. UNT has purchased its own detection software, which is fairly accurate. Still, Everbach urges faculty to read submissions closely if there’s a question of authenticity. Sometimes the chatbot will spew out nonsense or repetitive information, which students then turn in without a second glance.

Cases of ChatGPT use soared at many colleges during the spring semester. But trying to confirm that a suspect assignment was aided by AI can eat up time, Everbach said: “It does take labor on the part of the faculty member to be able to figure these things out.”

Rudi Thompson, associate vice president of digital strategy and innovation at UNT, said the university hasn’t issued a blanket ChatGPT ban. Each college and its departments can decide how to proceed based on what’s best for them. Professors can specify in their syllabi whether students are allowed to consult the AI during the course.

Thompson cited two examples of how academics are responding to ChatGPT. One professor at another school asked students to use AI to help pen bios of historical figures before presenting them to the class.

On the other end of the spectrum, some faculty are turning back the clock, such as by giving tests on paper. “They think that will stop all cheating,” Thompson said. “I think students are creative. Students will find ways if that's what they want to do, no matter what.”

All the chatter surrounding ChatGPT reminds Thompson of when Google first arrived. The fear on campus was palpable. Some worried that if a student could consult a search engine for answers, it might make learning in a classroom setting pretty much pointless.

Things are different today.

“Now, nobody talks about using Google; it’s just what we do,” Thompson said.

She thinks the same could likely happen when it comes to ChatGPT: “A year from now, we'll be talking about something else.”

***

Rudi Thompson, associate vice president of digital strategy and innovation at UNT, says ChatGPT is just new tech like Google.
Alicia Claytor
The ancient Greek god Zeus forbade humans to access fire. He wanted his creations to depend on the gods for warmth and sustenance. Left to fend for themselves in the cold, humans were rendered weak, but the Titan Prometheus couldn’t stand to see them that way. So, he stole the divine fire and brought it to Earth.

Fire helped spark the birth of civilizations, according to the Greek legend. Humans became self-reliant and could chop wood for lodging and build ships to travel the world. They wielded this powerful tool to help humankind evolve — but also to kill their fellow man. They could now forge weapons, raze enemy villages and incite wars.

Prometheus the fire-bearer came to represent progress and knowledge in lore.

Today, artificial intelligence is the flame.

Lubbock educator David Ring equates the ChatGPT dilemma to calculators in the ’90s. Few would have guessed that someday, smartphones would become nearly ubiquitous; now, everyone carries a calculator with them at all times.

“Looking at AI, the question is: Is this going to be a technology that in 20 years, humanity is just intertwined — we don't even think about it as novel anymore?” he said. “Kind of like we don't think about having a supercomputer in our pocket.

“But like fire,” he continued, “it can be used for good [or it] can be used to destroy.”

Some worry that AI has the potential to replace living, breathing educators, Ring said. Faced with budget concerns, school districts might try to streamline spending by buying an AI program and executing sweeping teacher cuts.

For critics, ChatGPT represents an existential threat to academia. But those with rosier lenses have argued that AI will save education, not destroy it.

Sal Khan, founder and CEO of the education nonprofit Khan Academy, recently delivered a TED Talk in which he argued that AI can offer “every student on the planet an artificially intelligent — but amazing — personal tutor.” Every educator, he said, could likewise receive an AI teaching assistant. He then demonstrated how the technology can help solve math problems without giving the answer away.

“We’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen,” Khan said during the talk.

Advocates argue that ChatGPT will improve education’s accessibility. It assists learners who have disabilities, such as by verbalizing answers for blind students. It aids non-English speakers by translating content into their native tongues. It saves students time on homework via examples and explanations, and it helps them prepare for upcoming exams.

Students who learn to work with generative AI, a type of artificial intelligence like ChatGPT that can create text and other content, will be better equipped for jobs in an increasingly technocentric economy. Yet how the AI is used down the line — for good or to destroy — is a question without an answer.

Edward Tian, a student at Princeton University who created an AI-detection program, referenced a different Greek myth during a January interview with MSNBC. “I think ChatGPT as a technology is incredible and exciting,” he said. “But at the same time, it’s like opening a Pandora’s Box.”

Education professor Daniel G. Krutka is skeptical of the new technology.
Alicia Claytor
Daniel G. Krutka supports taking a skeptical approach to new technologies. The department chair for teacher education and administration in UNT’s College of Education is wary of speculating about generative AI’s potential benefits this early on.

“Ten years ago, everyone's like, ‘Social media is going to create democracy,’” Krutka said. “Ten years later, everyone’s like, ‘Social media is going to destroy democracy.’”

Krutka points out that ChatGPT isn’t the only generative AI out there. It’s arguably just the most recognizable name, much as one might ask for a Kleenex when in need of a tissue.

All the buzz surrounding ChatGPT has turned into free advertising for the tool, which is ultimately trying to get people to sign up and hand over the keys to their data, Krutka said. It isn’t an educational technology designed for schools. He wouldn’t advise his own students to create personal accounts because many companies are scraping data to use for their own purposes.

Generative AI can turbocharge the brainstorming process thanks to its instantaneous, tailored responses. But Krutka wonders: What, then, is lost along the way?


Efficiency in learning may not always be the best path forward. Thoroughly digesting a range of learning materials is an integral part of knowledge creation. Synthesis drives home valuable insights in a way that surface-level learning does not.

“Things like generative AI, they traffic in information — not knowledge, and certainly not wisdom,” Krutka said. “When you have to synthesize material, it makes you have to think through it in a way that the thinking is deep and requires a deep understanding.”

Relying on ChatGPT to get through school will likely have real-world implications.

Everbach said if students cheat their way to a degree using ChatGPT, they may carry that tendency over into their careers. Cheating in the workplace can, in turn, damage companies and entire industries — something easily seen in journalism, for instance. (Think: the fabrications of former reporter Stephen Glass and the embellished tales of news anchor Brian Williams, and the ammo those scandals lent to the “fake news” narrative.)

ChatGPT’s ripple effects could also turn out to be quite tangible.

“You don't want to have an engineer who cheated their way through engineering school and cheated their way through the industry, because you're going to have roads and bridges that fall down,” Everbach said.

Of course, kids have long sought to cut corners in college, including through methods like SparkNotes’ book summaries. But Everbach hopes that students will be dissuaded from using ChatGPT as a crutch. Do young people really want to fake their way through school? Don’t they want to learn how to be an authentic, productive citizen in the workplace? In the world?

This early on in the AI’s lifespan, Everbach said, it’s sort of like the Wild West. It will likely offer real benefits, but no one can predict with certainty all of its far-reaching repercussions.

The ChatGPT panic may soon start to wane, but it’s clear that the technology isn’t going anywhere.

“We can't really run away from it. It’s here,” Everbach said. “So, we're just going to have to figure out how it can be used responsibly, and what is fair and honest.”