This article, which was co-written by Katie Axon and Becca Orjala, is the first in a series on AI at Transy. In the coming weeks, The Rambler will explore student perspectives, environmental impacts, accessibility, ethics, and other issues.
Disclosure: Katie Axon is a member of the Human Intelligence (Hi!) faculty learning community, a working group taking a critical look at AI in higher education; David Ramsey, the advisor of The Rambler, is the co-founder of the Hi! FLC.
~
All Transy students, faculty, and staff found something new when they logged in to their MyTransy portals this fall: links to ChatGPT 5, Google Gemini, and Google NotebookLM. The university spent $120,000 to purchase this suite of “AI” consumer products for everyone on campus this year.
“I think of AI as a tool,” said Amanda Sarratore, the university’s vice president for infrastructure and chief information officer, who directs IT on campus. “It is just like any other technology tool that we make available to our faculty, staff and students.”
But many on campus have questions and concerns about whether these “tools” undercut the values and mission of a liberal arts education: Do “generative AI” chatbots like ChatGPT enable shortcuts that undermine the learning process? Should an educational institution encourage the use of products prone to misinformation and plagiarism? What about the environmental impacts or broader harms to society and culture?
“A university should be a place that values learning and community more than efficiency, profit, or the next trendy hot topic,” said history professor Hannah Alms. “I feel that higher education as a whole has been too quick to embrace generative AI without protecting core values such as independent thought, individual development, and the exploration of what it means to be human.”
Some professors believe that the consumer products purchased by the university are actively harmful to students and to Transy’s mission.
“I think it was a capitulation to the technocrats, none of whom have our students’ interests at heart,” said Spanish professor and Humanities Division chair Jeremy Paden. Learning can be transformative, he said, but it takes time, practice, and attention.
“AI threatens everything we value in higher education,” English professor Kremena Todorova said. “It’s anti-creative and anti-critical thinking. It’s anti-meaning and anti-learning. It is anti-human. It’s terrible for the environment. Billionaires love it, and love the idea of marketing it to young people, because they want to crush labor costs.”
Health and exercise science professor JJ Wallace has a different perspective, seeing a need to embrace these products. “It’s everywhere, and it’s going to continue to be everywhere, so we can’t keep our head in the sand,” she said. “We need to be able to use it effectively, appropriately, and ethically, because it’s not going to go away. I’m sorry, it’s just not.”
Wallace, the co-director of Transy’s Digital Liberal Arts initiative, said that AI represented an opportunity for faculty to take a leadership role in guiding students on how to (and how not to) use systems like ChatGPT. “I would love to see them actively engaging in how and if this technology is appropriate for what they do in the classroom,” Wallace said.
But not everyone shares her enthusiasm for the chatbots now a click away in every MyTransy portal. “It makes me want to retire,” said physics professor Jamie Day.

How AI came to Transy
“Artificial intelligence” can refer to any computational system that performs tasks normally done by human beings. The current wave of hype centers on “large language models,” or LLMs.
LLMs use vast quantities of data, typically gathered from the Internet, to generate text, images, or other content (hence the term “generative AI”). An LLM is a prediction system: ask it a question, and it produces an answer by predicting, piece by piece, the language that a human being—or a human being with instantaneous Internet searching—might use. If a student asks an LLM chatbot to write an essay or a short homework response, it can do a passable job of producing something that sounds at least moderately like student writing. It can do the same with a complicated physics problem.
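What does predicting language “piece by piece” look like in practice? The companies don’t publish their code, but a toy word-level predictor, sketched below in Python purely as an illustration, shows the idea at its smallest scale: count which words follow which in a pile of training text, then generate new text by sampling likely continuations. Commercial LLMs work on fragments of words and use billions of learned parameters rather than simple counts, but the guess-the-next-piece mechanic is the same.

    import random
    from collections import defaultdict, Counter

    # Toy "training data" -- real models ingest a large share of the public Internet.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        follows[word][nxt] += 1

    def generate(start, length=5):
        """Extend `start` one predicted word at a time."""
        words = [start]
        for _ in range(length):
            options = follows[words[-1]]
            if not options:  # no known continuation; stop
                break
            # Sample the next word in proportion to how often it followed in training.
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat on the mat"

The sketch understands nothing about cats or mats; it only records which words tended to come next. At the scale of an LLM, the same mechanic produces remarkably fluent output.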
During the winter term last year, a Board-level ad-hoc working group began reviewing a “framework” document—general guidelines for AI at Transy. The bulk of the work on this document was done by Sarratore and then-Dean Rebecca Thomas, with all cabinet members providing input.
In addition to Sarratore and Thomas, the working group charged with evaluating the framework included Board members Michael Finley and Prakash Maggan, JJ Wallace (who served at the request of Thomas), and two professors nominated as faculty representatives: history professor Gregg Bocketti and computer science professor Jack Bandy. (Thomas and Bandy have since left Transy for academic jobs elsewhere.)
Bocketti and Bandy, according to a letter sent to the Faculty Concerns Committee last May, were alarmed by the scope of AI implementation that the working group had been planning, including in ways that would impact pedagogy, research, and curriculum.
According to their letter, Bocketti and Bandy were able to convince Thomas and Sarratore to revise the framework in substantive ways that they believed would make it more acceptable to faculty.
However, their concern that the university was moving on an “unnecessarily condensed calendar for the development of such a wide-ranging policy framework” went unaddressed.
While Bocketti and Bandy had input on the framework, they were not involved in the university’s biggest policy decision: contracting with for-profit companies to purchase access to AI systems for everyone on campus this year.
Sarratore said that determination was made by evaluating existing AI products already in use on campus, reviewing current spending with the university’s controller, and incorporating feedback from an IT survey she conducted. “The Bingham Center for Teaching Excellence also served as a key partner and provided valuable input,” she said.
But the decision was otherwise made without direct faculty input.
Faculty concerns

The $120,000 price tag of the new AI products had not been publicly reported before The Rambler obtained the figure last semester. Paden said it raised questions about the university’s priorities.
“Ten years ago, we had a cafeteria in the student center, a sandwich and fry shop in the basement of MFA, a coffee shop where Gratz Perk is, and an honest-to-goodness late night food option in Thompson Hall,” Paden said. “Currently, the only real place to get food is the Cafeteria. It is open for dinner only for two hours and the late option closes at 9 pm. The constraints are economic, but we are choosing to spend our money on things like AI rather than food availability.”
Todorova said she was appalled when she learned of the cost of the AI suite from The Rambler.
“There has been little to no input from faculty or students,” she said. “Many of us believe they are antithetical to learning and have no place in the classroom. Why was this decision made without a discussion within the campus community? What about students who aren’t ethically comfortable using AI for legitimate reasons? It seems like we’re rushing in to latch on to the hype without any kind of plan or safeguards in place.”
Julie Perino, co-director of the Bingham Center for Teaching Excellence, said she thought “the school’s response has been pretty measured.”
Transy’s first action, Perino said, was establishing an AI integrity policy in 2022. “We made sure to put that into place pretty quickly. And we’ve been bringing in speakers to talk to faculty about teaching with AI since April of 2022.”
For some faculty, however, the workshops and training offered on AI have appeared to be one-sided, pushing for the implementation of AI in the classroom whether faculty (or students) like it or not.
David Ramsey, an adjunct professor of English and WRC and faculty advisor for The Rambler, said that the AI workshop he attended as part of the training for FYS this fall featured sixty slides on how to implement AI. Just one slide had any information for faculty who wished not to use AI in their classrooms at all, Ramsey said. It was a sample policy statement disallowing AI. The statement itself had been written by AI.
Kurt Gohde, professor of studio and digital arts, said that presenting only one side in such trainings was a problem, given the number of faculty eager for a very different approach.
“I think we should have equal resources and equal training and equal workshops for people who want to find ways to live without AI as we do for people who want to find ways to use AI,” he said.
Though he never found it useful himself, Gohde previously experimented with assignments that required AI because he thought students would need those skills after they graduated. The idea that students need AI training (or “AI literacy”) to compete in the job market is likely the most common argument advanced by proponents of AI in higher education. But Gohde ultimately concluded that products like ChatGPT were so easy to use that no training was necessary. Gohde said he was confident that someone with a good liberal arts education from Transy would have no trouble figuring out how to use an AI chatbot if a job required it.
For critics of AI in higher education, the problem is not just that products like ChatGPT typically don’t actually require specialized skills, but that reliance on AI may weaken other skills that Transy promises to offer students.
“The argument that we need to teach them to use this tech responsibly is malarkey,” Paden said. At least in the Humanities, he said, “the only way to use it responsibly is to know how to do the very things you are asking it to do.” But students have not yet read or written enough to have that mastery, he said. “Language is a life-long endeavor that changes over time, due to context and use,” he said. “We lose it, if we don’t use it.”
The faculty members most critical of AI said they aren’t avoiding the topic or denying its widespread use; they are actively developing new pedagogical approaches to respond to the problems these products create.
“Of course we know ChatGPT is everywhere,” Todorova said. “That’s not an argument for paying to invite something harmful onto campus.”
“The process is the point”
University leaders said that their aim was not to replace human thinking or judgment, or to undercut deep learning experiences.
“The goal isn’t to take away the human element or creativity,” Sarratore said. “It’s about removing the repetitive, time-consuming tasks so we can focus on the work that really matters.”

But for art professor Grace Ramsey, LLMs are “the worst possible thing for any creative process.”
She expressed frustration with LLM marketing that promises to let students skip the slog and get the product. The value of going through the process is the whole point, she said. Even if the final product doesn’t turn out well, she added, the experience of making it helps the student grow: “The mistakes, the epiphanies, the slog, the hours of labor, the unexpected results, the hard-won revisions—you bring all that forward to the next thing you make and the next thing and the next.”
At a certain point, she said, students find pleasure and meaning in the process—including the parts that are challenging or may feel tedious at first.
The fact that this process can be difficult means that teaching it is difficult, she said. But she believes that labor pays off: “Students need the challenge of doing the thing not just to learn the subject, but most importantly to learn who they are and how they might face future challenges.”
AI in the classroom at Transy
One challenge for students with the advent of products like ChatGPT is that rules and expectations about the use of AI vary widely by professor.
Perino said she was “conflicted” on its use: “I think it is something that should be used sparingly because of the environmental impacts. I tend to think it should really only be used by people who know how to get to the end result they want and understand how to get there.”
Wallace said her approach was to establish “an overall course policy.”
“If you use AI, I have a really robust citation policy about how to treat it as if it is an expert source,” she said. “So when using quotations, if you’re taking it word for word, or if you’re doing paraphrasing—you still cite it, just like you would any other source.”
Wallace also sees the potential for much deeper integration of AI at Transy. In a recent Academic Affairs presentation, she shared research on creating a “custom study buddy with ChatGPT,” expressed an interest in incorporating AI products in the Writing Center, and suggested the possibility of a chatbot programmed to offer students the virtual experience of communicating with William Shakespeare.
Other professors have experimented with AI-based assignments that limit what students may use it for; some have had students critique content generated by AI.
Alms said that her approach this semester was to involve her students in crafting their own classroom policy.
“From that point, they are accountable to themselves, each other, and to me,” she said. “The responsibility is on students, in those courses, to create their own community guidelines and to hold themselves responsible to them.”
Alms said that she was pleasantly surprised to learn many of her students were quite skeptical of AI and concerned that it would harm their learning experience and ability to think independently.
Other professors have completely shifted the way their classrooms function in part as a response to AI.
After realizing that some of his students’ self-reflection assignments were generated with AI, Gohde decided this semester to have his students handwrite them instead. “The worst thing to steal is the act of self reflection,” Gohde said.
Eventually he decided that all writing assignments in his courses should be handwritten. “I’m going to suffer through reading bad handwriting as a result because I think it’s better for everyone,” Gohde said.
Todorova has taken a similar approach, one initially adopted as a way to avoid generative AI. But it has become key to her teaching philosophy, she said.
“By the time last semester began, I saw these assignments as central to my pedagogy, which stresses learning as embodied, relational, and communal,” she said.
Her students seem to enjoy the change, she said. In one reflection, a student stated, “I barely write on paper anymore because of technology and it was nice to be able to do that again. I would recommend that you keep asking students to do this because I feel like it also leaves less room for distractions.”
Physics professor Mostafa Tanhayi Ahari allows his students to use AI, but he recognized there can be problems in an academic setting. “Mathematically AI is doing a good job, but physically, sometimes it gives you the wrong answers,” he said. Students without the experience to catch those mistakes should avoid using it, Ahari said.
Among professors who do not allow any use of AI, few are taking disciplinary action against students who break the rules.
“I don’t have the interest or bandwidth to play cop,” David Ramsey said. “Even if I did, there’s too much potential for false positives and false negatives.”
If someone appears to have used AI, Ramsey said, he has a conversation with the student.
“These are addictive products that harm society and young people,” Ramsey said. “Students who use so-called AI are responding to societal and institutional pressures and incentives, including the fact that their university provided the products to them. So for me, if someone violates the rules to use AI in my class, it’s an opportunity. I want to know what led them to make that choice, and I want to share with them why I believe that reliance on these products is damaging to their education and their future. And for those willing to try a different approach—or nervous about their ability to do college work themselves—I’m there to help.”

“Appropriate guardrails”
Asked about the concerns from faculty and students about AI on campus, President Brien Lewis responded in a statement compiled for him by key cabinet members: “Those concerns are both valid and shared by the university. Transylvania’s framework begins from the premise that AI is a tool, one that must never replace human judgment, creativity, or the deep learning experiences that define a Transy education.” (For critics of AI, even the word “tool” can be controversial.)
Transy’s leadership does not see the AI products they purchased as something new. Many faculty, staff, and students were already using various AI products, Sarratore said. “Our goal was not to introduce new technology, but to put appropriate guardrails in place to protect institutional and student data,” she said.
These “guardrails” seem to be the university’s primary explanation for the purchase. Instead of the standard AI products, the university acquired custom versions that OpenAI and Google promise will protect users’ data and privacy.
“We were able to make sure that when using the tools, all this safety is in place,” Wallace said. “That ensures that when we are using the tool as a campus, we’re doing so with as many safeguards as we possibly can.”
It’s hard to know precisely how these additional privacy or security measures function, or how reliable they are. But Bocketti and Bandy got a look at two relevant agreements that Transy signed with OpenAI. These documents state that Transy can make a written request, no more than once per year, for OpenAI’s most recent independent audit report regarding privacy and security, as well as summary details of certain other audits or security reports, “upon reasonable request.”
When asked about privacy concerns at one AI workshop last August, Wallace said that Transy could sue if a company like OpenAI didn’t follow the agreement, but acknowledged that the university was ultimately trusting the companies to hold up their end of the deal.
But AI skeptics worry about the long history of tech companies breaking promises about privacy. Companies like Facebook, YouTube, and Google have paid out large settlements and fines for breaking privacy agreements with customers (none of which threatened their business models).
“Given that history, I wouldn’t trust their promises of privacy, even if they offer a third party audit,” Gohde said. “I couldn’t advise students to trust them either.”
David Ramsey said he viewed these promises as a marketing ploy to infiltrate college campuses with a large potential user base. “Even if I trusted these companies,” he added, “why do AI apps get a special deal? Many of our students use TikTok or Instagram or Snapchat or any number of other apps that collect their information.”
Transy’s licenses with OpenAI and Google are purchased on an annual basis, so university officials will have to decide whether to renew the current suite of products for the next academic year.
Sarratore said that the $120,000 investment in these products didn’t mean the university was pushing AI on campus. “What we’re doing is making it available for those who are interested,” she said. “For people who want to use it, it can be a real benefit.”
What about the students?
The workshops provided to faculty have emphasized the need for safeguards for students using AI and careful instruction about how to use it in safe, ethical, and academically appropriate ways. But if the goal is to protect students, what did the university offer to help them navigate these new AI consumer products when they suddenly popped up in their MyTransy portals? Based on The Rambler’s reporting, there appears to have been little plan in place last semester to guide students through the transition, other than relying on faculty to do so in their classrooms.
AI-critical faculty have generally disparaged the resources offered by Transy as unhelpful, but there have been workshops for faculty, with more offerings to come. The Bingham Center for Teaching Excellence and the Digital Liberal Arts Initiative, for example, are both expanding programming for faculty who choose to engage with AI. Last summer, Transy librarians Katrina Salley and Lori Bird created a microcourse that “provides information about what AI is, the capabilities and limitations of AI, and ethical ways to use AI,” Salley said. The microcourse was presented at a faculty retreat prior to the fall semester, with the hope that professors would incorporate some of its lessons into their courses.
The microcourse was designed for a student audience, but it fits the pattern of relying on faculty to deliver the message. The university offered no similar direct outreach for students, but Salley said the library plans to launch a website on the topic for students this summer, covering much the same ground as the microcourse.
There was also little to no opportunity for students to share their thoughts (or raise concerns or objections) before the AI products suddenly appeared.
According to the statement provided by President Lewis, “the [AI] framework and related investments were developed…with input from…SGA representatives.”
But SGA President Sean Gannon said that during their only meeting on the topic with Sarratore in February of 2025, the dialogue was vague—focusing on what tech they used as students and overall opinions about AI. “We, in general, opposed AI because of ethical and accuracy concerns,” Gannon said. “Nothing else much came of our conversation, and she never followed up about the specific topic.”
Last semester, Gohde and David Ramsey founded a faculty learning community (FLC), Human Intelligence (Hi!), in part to fill a gap: resources for AI-free pedagogy, support for students who want to pursue an AI-free education, and opportunities for students to express their opinions about AI at Transy. The group chose its name because, although most members are staunchly “anti-AI,” they wanted to emphasize promoting the liberal arts values they believe are under threat. Gohde and Ramsey said they wanted to start small (the FLC includes seven faculty members and eight students), but they have been inspired to see students taking the initiative on plans to expand Human Intelligence into a campus-wide effort in the coming weeks.
During the FLC’s preliminary discussions, students have expressed frustration that the $120,000 purchase was made without regard for their input even though the products are in large part for their own use.
For her part, Wallace maintains that giving students these programs helps them with “critical thinking and thinking about the ability to identify misinformation and the ability to verify information. If we can still continue to teach these foundational skills, we can apply that to anything, including AI.”
What about students who want no part of AI in their education? Ramsey said that anti-AI sentiment—including some students more radical in their opposition than their professors—was common among his students; one student in his current FYRS class voiced that opposition in a recent assignment.
According to Perino, such students have options. “I know very few [professors] who are like, you have to use AI to do this,” she said.
If a student is uncomfortable with an assignment that requires AI, she said, they can opt out by raising their concerns with their professor, who should then offer an alternative assignment.
But students should think carefully about opting out, Perino said. “Faculty who use AI in an assignment aren’t just doing it as an exercise. There’s some deeper goal there. So I’d ask the student to think about the goals of the assignment and make sure they’re not missing out on anything by skipping the assignment or by asking to do it in a different way.”
But there is no official university policy guaranteeing this option, Perino acknowledged. In practice, it’s hard to believe students would feel comfortable telling a professor they are refusing to do an assignment on principle.
While the debate over AI on campus continues at Transy, students in the Hi! FLC have begun planning workshops and activities to promote human intelligence and embodied learning, including a series of “Hi! DIY” events teaching handmade skills such as embroidery.
Students in the FLC said they are also planning to start a petition asking the university to end its investment in the suite of AI products next year and reinvest those funds in human-centered education.