In an Age of Impeccable AI-Generated Content, to Err has Become Even More Human

There is something deeply satisfying about finding typos and grammatical errors in articles from reputable newspapers like the New York Times or the Economist. The reader knows that these organizations employ dozens of dedicated editors to review, restructure, and polish pieces written by veteran journalists who themselves have years of experience checking on other people's work, not to mention degrees from prestigious academic programs specializing in exactly the types of writing they are paid to do. When readers can pick up what slips through these eyes and minds, they feel just a little bit like the pros' equals.

That satisfaction is all the sweeter given how rare it has become. This is not to suggest that top news organizations have embedded artificial intelligence (AI) in their everyday editorial process to save man-hours, but AI has certainly become normalized among millions of citizen-journalists seeking to break the big names' monopoly on providing information to the masses. Whether the motivation is to highlight niche issues that the big boys deem too insignificant to cover or to present non-mainstream ideologies through unique analyses of the same events, AI has become the guarantor of quality.

These citizen-journalists certainly got the mechanics right. AI today is good enough not to hallucinate typos and grammatical errors, reducing the need for a professional editor to check first drafts. The result is social media feeds full of lengthy analyses, seemingly logical at first glance and professionally composed, despite the sheer variety of viewpoints on offer. By prompting chatbots to take on a professional journalist's persona, even the most amateur writers have at their disposal the ability to turn their brain dumps into narrative copy. As more people jump on the bandwagon, machine-generated standardization prevails.

Never mind the extremism and falsehoods often overlooked and hidden under the flowy language. The bigger issue is trust. If the New York Times or the Economist can't even properly diagnose their (human) writing, aren't there writers out there even more professional than they are? It sounds optimistic for those who believe in the equality and democratization of the loudspeaker, especially the digital sort. By dismissing the traditional media gatekeepers as latecomers to the tech-centered world, many seek to take their business and chip away at their credibility for fair and rigorous reporting.

Yet, in their flaws, the reputable organizations may actually be winning. As AI becomes prevalent, many express appreciation for the human touch. On LinkedIn, many complain about employers posting AI-generated job descriptions, using AI to filter resumes, evaluating candidates with AI interviews, and even sending out AI-written rejection letters. Not having a human even look at the application packages that many so carefully put together, understandably, is anger-inducing. The same can be said of the news. The value of information is at least partly reflected in the care of humans that went into it.

And what better way to show the presence of that human effort than showcasing a few errors that only humans can make? Not only do they stand out from the crowd of polished AI content, they also show that the writers and editors are relatable, perhaps rushing through copy on their way to the next assignment, or hunched over their desks, their overly caffeinated minds shuffling through file after file on their to-do lists. These relatable moments, familiar from any office and not just the newsroom, create the vulnerability that underpins trust, something AI can never replicate.

One day, when AI developers become less fixated on the goal of more efficient, more accurate, more perfect, they will reflect on what makes their models more loved than all the competitors that produce equally accurate and eloquent output. It would be funny if the conclusion they reach, after product and market research involving many focus groups, is that what attracts users is not perfection but excusable flaws. The models may have small mistakes deliberately baked in, expressed with uncertainty rather than current AI's uninhibited (over)confidence, to become more likable.

Humans, even in the world of accurate digital databases, enjoy conversing with humans for information. And there is nothing more true than the fact that humans are fundamentally vulnerable to errors and emotional disruptions, but also open to sympathy. Writing has always been the medium to channel those qualities, overtaken only in the most recent years by the advent of video-sharing. AI has disrupted just how effortlessly and quickly content can be generated through these mediums. But when our digital forums become inundated with machine output, that painstakingly slow but mistake-laden human-made content will shine even brighter.
