I promise AI didn’t write this column, and if it’s coming after my job, it’ll be over my dead body.

For some time now, something has been sitting inside my computer, writing my emails for me.
I don’t remember signing up for this artificial intelligence feature; it’s like having a valet I never hired. It’s on my phone, too, which offers three usable but impersonal responses I can fire off to someone who just emailed me with a story idea or asked if I wanted to meet for coffee.
“I’d love to get coffee,” was one of the suggested responses to a recent email. “Let me get back to you soon about a time.”
One argument for these features is that they save time and free me up for more important tasks. But it takes me longer to read the three suggested replies than it does to write my own.
I find this really annoying for about 150 reasons, one of which is that in an increasingly automated world, it’s another nail in the coffin of human interaction. And yes, there are at least 150 reasons. I know because I asked the AI and it spat them out in about three seconds. No. 148: “It sounds like it was written by a committee.”
A fair share of negative feedback arrives in my inbox, so I wondered if an autoresponder tool would be helpful. But the robot isn’t nearly salty enough for the job. “Thanks for reading” was the suggested response to someone who called me a hopeless loon, and to another guy who wondered why anyone would read my “dumb column.”
On second thought, maybe a consistently dismissive reply is the way to go. Of greater concern, though, is what happens to human intelligence as artificial intelligence does more of our writing, research, communication and thinking.
If a middle school, high school or college student can easily use a computer tool to churn out a book report or essay, what’s the impact on vocabulary, grammar, reading, critical thinking, originality, intellectual curiosity?
On learning itself?
“There’s no nose like an English teacher’s nose,” said Mike Finn, a recently retired LA Unified coach, who said teachers can usually tell whether a student’s work is original and try to steer students away from cutting corners and cheating.
But it’s easier than ever for a student to get lazy. In a New Yorker article last year by a college professor, students described AI-enabled cheating as widespread, and as a clever way to avoid wasting time on things they didn’t like. “I try to do as little work as I can,” one student said.
My son, who works in a college library, has seen that firsthand, along with a general erosion of research and decision-making skills among some students.
“They can’t choose a book from thousands of research books, and they don’t even want to, because they think they can get the information easily on the computer,” he said.
Jenn Wolfe, a professor of education at Cal State Northridge, said the use of AI is “a very hot topic right now,” and that in high schools and middle schools, some teachers are “going back to paper and pen, from what I see and hear.”
I met Wolfe in 2013, when she was a high school teacher in LA Unified adapting to the introduction of iPads in the classroom.
“This is not a teacher, and it’s not a student, either,” she said sagely of the iPad at the time. “It’s a tool.”
Professor Sarah W. Beck, chair of NYU’s department of teaching and learning, spoke to that idea of adapting to evolving technology.
“I think that denial of AI, or rejection of AI, is not a useful stance, because it’s here to stay,” said Beck; the important thing is to understand the benefits and mitigate the risks.
She told me she had just come from an education class where future teachers “are mostly AI skeptics. They’re not AI rejecters, but they’re very clear about its limitations and really value human interaction through writing.”
There is no denying that AI can be useful as a research tool, exploring themes and helping writers organize their thoughts. It’s also useful in ways that aren’t limited to writing. It helped me replace a toilet tank valve a few weeks ago, for example. And when I recently had a tooth extracted and wondered about the pros and cons of getting an implant, AI fed me oodles of information on both.
AI, Beck said, can organize your notes or handle “formulaic writing” tasks.
“We have to learn to use these tools in a way that gives us more time to devote to the important parts of writing,” she said.
We need to be careful, too.
If we’re handed driving directions, reviews, prewritten emails, ready-made text and unsolicited offers of help, where does it all come from? Who is feeding in the information? Do the creators have an agenda? Are students being taught to recognize which information is reliable?
A Cornell University study released this month suggests that AI writing assistants can influence not only the way we write but the way we think.
The researchers observed about 2,500 participants who wrote on a range of controversial topics, including the death penalty and voting rights. Some were given AI autocomplete writing tools skewed toward one side of an issue, and based on pre- and post-exercise surveys, their opinions shifted toward that bias even when they were informed of it.
“We know that these systems are controlled by large and powerful organizations, and they may or may not have opinions that they want to embed or promote, and they can be abused,” said Mor Naaman, professor of information science at Cornell Tech and lead author of the study.
The information we’re given comes “wrapped in persuasive AI language,” Naaman said, and the potential benefits of the technology are evident. “The bad news is that there are hundreds of billions of dollars of investment and interest in trying to push AI into every corner of our lives.”
It will take time, Naaman said, to uncover all the dangers and learn how to manage them.
AI will create jobs, for sure. It will also eliminate jobs, and mine may be among them. So I asked AI for an ending to this column, and here’s what it came up with:
“And that is the tension of this world: the promise of efficiency versus the irreplaceable process of being human.”
I think my job is safe – for now.
steve.lopez@latimes.com



