When Everything Sounds Like AI: The Real Problem with Writing Today


There’s a new kind of literary panic happening right now—and it’s not about banned books or declining readership.

It’s about AI witch-hunts.

I’ve spent some time experimenting with the so-called AI detection tools that are being used to “prove” whether something is human-written or machine-generated. Out of curiosity (and maybe a little skepticism), I ran the same pieces of writing through multiple detectors.

The results? Completely inconsistent.

One tool confidently labeled something as “likely human,” while another flagged the exact same text as “AI-generated.” No nuance. No explanation that held up under scrutiny. Just a verdict.

I’ll admit that the Pangram AI detector is probably one of the better ones I’ve tested. It at least attempts to explain why something is being flagged. But here’s the issue I can’t ignore:

The very things it identifies as “AI signals” are often the same traits you’d expect from someone who has spent years learning how to write well.

Clear structure. Consistent tone. Logical flow. Polished grammar.

In other words… competence.

I’ve also seen people run works by Jane Austen and Mary Shelley through these tools, and even foundational documents like the United States Constitution, only to have them flagged as AI-generated.

Let that sink in for a moment.

If our tools can’t reliably distinguish between classic literature and AI output, what exactly are we measuring?

Now, to be clear—I’m not arguing that AI-generated writing doesn’t exist. It absolutely does. We see it every day in the library world, especially in the explosion of self-published content. Sorting through that volume to find quality materials is already a challenge, and AI has added another layer of noise.

According to industry reporting, over 3 million of the roughly 4 million books published annually are self-published, largely driven by platforms like Amazon KDP. That sheer volume makes discoverability—and quality control—an ongoing issue for libraries and readers alike.

But here’s the part that often gets lost in the conversation.

Because this topic comes up constantly between a colleague and me, I’ve gone down the rabbit hole of reading writer “how-to” books that talk about using AI in the writing process.

And here’s the thing—none of them actually tell you to have AI write your book.

Not one.

They all position it the same way: as a tool.

Use it to gather background information.

Use it to brainstorm ideas.

Use it to get unstuck when you’re staring at a blank page.

But when it comes to the actual writing—the part that makes a book good—they all come back to the same point:

That part has to be human.

Because what makes writing resonate isn’t just structure or clarity. It’s voice. It’s perspective. It’s the slightly imperfect, deeply personal way a person makes meaning out of their experiences.

That’s not something you can outsource to an algorithm.

And yet, we’re now in a moment where writers are being asked to defend their work against tools that flag polish as suspicious.

But polish isn’t the only thing people are pointing to.

There’s also this growing suspicion around output.

If an author is publishing frequently—every few months, or releasing multiple books in a year—the assumption starts to creep in: “That has to be AI.”

Because clearly, no human could possibly write that fast… right?

Except… we’ve seen this before.

Take James Patterson. He releases multiple books a year. And yes—people do question that level of output.

But notice how they question it.

They don’t jump to “this must be AI.”

They assume collaboration. Co-authors. Ghostwriters. A team behind the scenes.

In other words—they still assume humans.

But when indie authors hit a similar pace?

The assumption shifts.

Not “they must have help.”

Not “they’ve built a system.”

But “this must be AI.”

Same behavior. Different conclusion.

And that difference matters.

Because it reveals something deeper than just concern about technology. It shows how quickly trust erodes when the author doesn’t already have established credibility—or when their process doesn’t fit what we expect writing to look like.

We question the process—but we don’t question the humanity.

Add that to the mix, and this starts to feel even less like a thoughtful response to new technology—and more like a moving target where the rules aren’t applied evenly.

But if I’m being completely honest?

My bigger concern right now isn’t even AI.

It’s the overwhelming flood of books that all feel… suspiciously familiar.

Spend a little time browsing recent releases and you start to notice a pattern. And then another. And then suddenly you’re knee-deep in dragon riders, shadow-wielding love interests, and morally gray men who all seem to have the exact same personality… just with slightly different names and cloaks.

Plus they all have very similar covers! Seriously!

At some point, it stops feeling like a genre and starts feeling like a copy-paste situation.

And I say this with love—but some of these are starting to read like Star Wars fanfiction… with dragons added in for flavor.

New setting, same energy.

Now, trends in publishing are nothing new. We’ve had vampires. We’ve had dystopias. We’ve had magical schools for kids who discover they’re special on page three. We’ve had hockey-flavored romance… wait, has anyone written a romance with a vampire hockey player at a magical boarding school yet? Please say we’re not that far gone yet!

Readers latch onto something they love, and the market responds. That’s normal.

But when everything starts to blur together, the question shifts.

It’s no longer: “Was this written by AI?”

It becomes: “Was this written with anything new to say?”

Because originality isn’t just about proving something is human.

It’s about voice. Perspective. Taking a risk instead of following a formula that already worked for someone else.

And here’s the irony—no detector is flagging that.

No tool is popping up to say: “Warning: this book may contain excessive amounts of brooding shadow energy and plot points you’ve already seen five times this month.”

Maybe that’s the detector we actually need.

If anything, this moment should push us to ask better questions:

What makes writing meaningful to us as readers? How do we evaluate quality in an age of abundance? And how do we support human creativity without policing it into silence?

Because the real risk here isn’t that AI will replace human writers.

It’s that we’ll stop trusting human voices altogether.

What are you seeing in your own reading or professional work—are AI detection tools helping, hurting, or just adding confusion? I’d love to hear your thoughts.

