The last time you had a problem, needed an answer to a question or were in search of relationship advice, did you seek out human insight or did you open a new tab on your computer to ChatGPT?
It feels as if every conversation, lecture and email these days mentions AI, which makes sense given how frequently it's used. With each assignment, it seems everyone turns to ChatGPT, often bookmarked in their browsers, for answers, particularly if the assignment involves writing.
At some point, we need to see that although artificial intelligence can string words together, it can’t truly write — not in the raw, beautiful way humans can. It’s never labored over a draft until its ideas came alive.
It’s tiring to hear students and university administrations fixate on AI and what it can do. By letting AI loom over us, we give it more power than it deserves, when really it should be used as a tool, not a crutch.
Top universities, including the Massachusetts Institute of Technology, Yale and Princeton, have established formal AI policies, most of which either leave AI use to the professor's discretion or require students to explicitly disclose any use of AI in their work.
Lehigh is no exception, with an email sent to the campus community on Sept. 12 from Provost Nathan Urban discussing how to properly use generative AI. In the email, Urban clarified the importance of using AI effectively, ethically and as a tool to support learning as opposed to a replacement.
But the email's sign-off felt contradictory. It encouraged students to use Google Gemini, to which everyone has access through Lehigh's licensing partnership with Google, and asked students to share “ideas for how Lehigh can better prepare you for an AI ready future,” almost as if we should be preparing ourselves for an AI-filled future of learning.
With this year’s Compelling Perspectives series being about AI, and it feeling like every other event on Lehigh’s events calendar is a seminar about how to use Gemini, it’s hard to escape the robot talk.
And while we know it's important to discuss it, since it's clearly not going away anytime soon, it's hypocritical for Lehigh to say AI will never replace learning when the same institution constantly pushes informational sessions and lectures about how best to use the tool.
As writers who pour time and energy into crafting captions, headlines and perfecting every sentence we publish each week, we’ve particularly felt the impact of AI taking the world by storm in recent years.
We’re also no strangers to the narrative seen, ironically, in the media that the next generation of journalists could just be robots writing stories. The U.N. even recently noted that “AI presents both powerful tools and significant threats to press freedom, integrity and public trust.”
A Pew Research Center survey found that 41% of U.S. adults think AI would do a poorer job writing than a journalist, while 19% think AI would do better and 20% said it would do about the same.
In other words, a plurality of survey participants saw AI writing as inferior to human work, reaffirming that trust in human judgement, context and nuance still matters. Yet roughly one in five thought AI would do the better job, and another fifth saw no difference at all.
This reality demands careful consideration of AI’s benefits and risks.
On one hand, it’s frustrating to see our peers rely on ChatGPT and other large language models for writing when the goal of academia and journalism is to preserve human voices.
We know what human writing looks like, what it sounds like and how it reads on paper. As student reporters, we take AI seriously: our publication’s policy prohibits AI in story and art creation and requires transparency if generative AI is used at all.
Still, it would be ignorant for us to overlook AI’s valuable contributions to things like data analysis or idea generation. We’ve all experimented with it in one way or another, and it’s easy to see its appeal. Even in journalism, it can help with transcribing interviews or translating.
But for now, people still prefer to get their news from human journalists. Recent controversies have proved this, with Sports Illustrated and J.Crew facing intense backlash from readers after publishing AI-generated content.
News outlets rely heavily on trust, and that trust can be shaken by AI, which is why it’s so important to ensure AI never grows powerful enough to overshadow human judgement and storytelling.
If Lehigh really cared so much about making sure AI doesn’t replace learning, it should put more effort into celebrating the accomplishments of those who create things with their own judgement.
Instead of sending university-wide emails about Gemini tools and advertising seminars and lectures about robots, let’s better publicize the accomplishments of the writers, artists and creators who pour hours into their craft, no matter what they produce.
Our job isn’t to worship AI, but to understand it and use it responsibly, because when we walk across the stage at graduation, we’ll be celebrating what we’ve accomplished — not what AI has generated.