AI is useful for publishing, but only when it removes drudgery

by Toni

Most AI advice for publishing is too broad to be useful. One camp talks as if refusing AI means choosing inefficiency on purpose. Another talks as if using it at all is a creative surrender or an ethical stain. Neither position helps much when you are actually trying to run a publishing workflow.

If you are staring at a rough transcript, a half-shaped draft, a pile of notes, and the annoying last-mile work between "basically written" and actually published, the question is not whether AI is good or bad in the abstract.

The useful question is smaller and harsher: what part of the workflow are you asking the tool to carry?

That matters because publishing contains different kinds of work. Some of it is repetitive operational overhead around material that already exists. Some of it is judgment: deciding what the piece is trying to say, which examples are fair, which claims hold up, what to cut, what to sharpen, what tone the piece has earned, and whether it should be published under your name at all.

AI helps most when the work is real and the overhead is annoying.

This is not a purity argument. It is a workflow argument. The problem is not assistance. The problem is substitution in the wrong layer.

A lot of publishing work really is tedious. A rough interview transcript has to become searchable. Voice notes have to become usable text. Scattered notes, links, quotes, and scraps have to become a map someone can inspect. Titles, descriptions, excerpts, alt text, categories, and small support assets all have to get made before a post can go live. None of that work is beneath the craft. It matters because unfinished operational work is one reason good ideas die in draft folders. But it is still the kind of work where assistance can be genuinely useful.

If you recorded an interview or dictated a pile of notes, using AI to clean the transcript, separate speakers, and make the material easier to search is an easy case. The raw material already exists. The tool is helping you move through it. Newsrooms already use AI in that lane for transcription, translation, search, clip finding, and other production support. That makes sense. The editorial call still belongs to humans. The tool is carrying repetitive motion, not deciding what the story means.
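To make "carrying repetitive motion" concrete, here is a minimal sketch of that lane in Python. It assumes raw transcript lines shaped like [00:14:02] SPEAKER_1: ..., which is a guess at your transcriber's output, and the filler-word list is a placeholder you would tune.

```python
import re
from collections import defaultdict

# Crude filler stripper; the word list is a placeholder to tune per interview.
FILLERS = re.compile(r"\b(um+|uh+|you know)\b,?\s*", flags=re.IGNORECASE)

# Assumed raw line shape from the transcription tool:
#   [00:14:02] SPEAKER_1: um, so the launch, uh, slipped a week
LINE = re.compile(r"\[(\d{2}:\d{2}:\d{2})\]\s+(\S+):\s+(.*)")

def clean_transcript(raw: str):
    """Merge consecutive turns by the same speaker and strip filler words."""
    turns = []
    for line in raw.splitlines():
        m = LINE.match(line.strip())
        if not m:
            continue  # skip lines that are not speech (noise, blanks, headers)
        timestamp, speaker, text = m.groups()
        text = FILLERS.sub("", text).strip()
        if turns and turns[-1][1] == speaker:
            # Same speaker kept talking: extend the previous turn.
            turns[-1] = (turns[-1][0], speaker, turns[-1][2] + " " + text)
        else:
            turns.append((timestamp, speaker, text))
    return turns

def index_turns(turns):
    """Build a tiny word -> turn-number index so the material is searchable."""
    index = defaultdict(set)
    for i, (_, _, text) in enumerate(turns):
        for word in re.findall(r"[a-z']+", text.lower()):
            index[word].add(i)
    return index
```

Nothing in that script decides what the interview means. It turns a wall of raw text into merged speaker turns and a crude index you can search while you do the actual reading.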

The same goes for source retrieval and clustering. A small publication often has a messy pile of issue comments, links, old notes, saved passages, half-formed examples, and one sentence you know mattered because past-you bothered to keep it. AI can help group that material, surface repeated themes, or turn a heap into something you can review without swearing at your own folder structure. Useful. But a neat cluster is not an argument, and a generated summary is not editorial synthesis. It is support work.
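If you want to see how mechanical that grouping step really is, here is a minimal sketch using TF-IDF vectors and k-means from scikit-learn. The notes are stand-ins for the real pile, and the cluster count is an assumption you revise after looking, not something the data hands you.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in notes; in practice this is the messy pile of comments, links, scraps.
notes = [
    "reader email: the pricing section confused two of the plans",
    "old draft note: pricing table needs a plain-language row",
    "saved quote about onboarding friction from the June interview",
    "half example: the onboarding call where the demo stalled",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(notes)

k = 2  # a guess you revise after looking, not a number the pile hands you
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)

for cluster in range(k):
    print(f"--- theme candidate {cluster} ---")
    for note, label in zip(notes, labels):
        if label == cluster:
            print("  ", note)
```

What comes out is a grouped pile with candidate themes, nothing more. Deciding whether a candidate is actually a theme is the synthesis the tool does not do.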

Headline and metadata work sits in the same category when the piece already has a spine. If I already know what an article is trying to say, asking for a batch of title variants or description directions can save time. It gives me something to cut against. Some options will overpromise, some will flatten the voice, some will make the post sound like generic internet sludge, but judging that is exactly the job. The tool can offer variations. It should not be discovering the point retroactively because I never found one.
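The only part of that judging I would automate is the mechanical first pass. Here is a minimal sketch that thins a batch of generated variants against house rules before a human reads the survivors; the banned phrases and length bounds are placeholder preferences, not editorial judgment.

```python
# Mechanical first pass over generated title variants. These rules are house
# preferences (placeholders here), not editorial judgment.
BANNED = ("you won't believe", "ultimate guide", "game-changer")
MIN_LEN, MAX_LEN = 20, 70  # rough display bounds; adjust to your theme

def cull(variants):
    """Drop variants that break house rules or duplicate a survivor."""
    survivors = []
    for title in variants:
        t = title.strip()
        if not (MIN_LEN <= len(t) <= MAX_LEN):
            continue
        if any(phrase in t.lower() for phrase in BANNED):
            continue
        if t.lower() in (s.lower() for s in survivors):
            continue  # near-verbatim duplicate of an earlier variant
        survivors.append(t)
    return survivors

print(cull([
    "You Won't Believe What AI Does to Publishing",
    "AI in publishing: useful when it removes drudgery",
    "AI in publishing: useful when it removes drudgery  ",
]))
```

Everything that survives the cull still gets read against the point of the piece, which the script knows nothing about.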

That difference sounds obvious when stated plainly, but it is exactly where a lot of workflows start rotting.

The dangerous use of AI in publishing is not always the loud obvious one. Sometimes it looks responsible. Sometimes it looks efficient. Sometimes it even looks polished.

A writer asks the model what the article is really about because the draft still feels blurry. The model produces a clean framing paragraph. Suddenly the piece feels further along. But what happened there? The tool did not remove drudgery. It stepped into the part of the work where the writer was supposed to think.

That is fake progress, and publishing offers endless chances to mistake it for the real thing.

The same problem shows up when people ask AI to supply examples, claims, or source-like confidence. The Associated Press has a blunt rule here that I like: model output should be treated as unvetted source material. That is better than vague calls for oversight. The text may be fluent, but it has not arrived with earned trust attached. If you let it choose the supporting example, summarize the evidence, or provide the historical comparison you were too tired to verify, you are not saving time in some neutral way. You are moving risk into the middle of the editorial process and hiding it under a competent tone.

That tone is part of the trap. A hallucinated example rarely arrives looking confused. It arrives looking finished.

There is another failure mode that matters just as much for small blogs and indie publications: AI can make a draft smoother while making it worse.

Decent writing usually has some pressure in it. A sharper sentence than you expected. A choice to leave a little irritation visible. A lived example that makes the point feel owned instead of borrowed. A rhythm that sounds like a person thinking rather than a machine ironing everything flat. When people hand that layer over to AI for polishing, they often get back something tidier and less alive.

This is one reason I do not trust blanket claims that AI just helps you get to the final version faster. A faster workflow is not automatically a better editorial process. I would rather read a sentence with a little grit in it than a smoother one that could have come from anyone. If the tool removes the pressure, taste, memory, and selectiveness that gave the piece its shape, it did not simply improve the prose. It replaced the reasons to care about it.

And then there is the most quietly corrosive use of all: outsourcing editorial confidence.

This is when the workflow looks healthy from the outside. There is an outline, a draft, clean headings, transitions that mostly work, and a conclusion that sounds plausible. Maybe the piece even feels balanced and professional. What is missing is harder to see. The writer has not actually decided what they believe, tested whether the central example is fair, earned the emphasis, or asked whether the article should exist in this form at all.

The system looks productive because the words arrived. That does not mean the judgment did.

This is why the drudgery-versus-judgment distinction matters more than the louder cultural argument around AI. The practical boundary is not mysterious. It just requires honesty.

Before using AI for a publishing step, ask three things. Does the underlying material already exist? Would I still know what this piece is trying to say if the tool disappeared? Am I asking for help moving material, or for help making the editorial call?

Those questions are not anti-tool. They are anti-self-deception.

If the material already exists, if the point is already yours, and if the tool is helping with motion rather than meaning, there is a good chance the assistance is legitimate. Clean the transcript. Cluster the notes. Generate title directions to judge against. Draft a rough description you will rewrite with your own standards intact. Fine.

If the material does not exist yet, if the point goes missing the moment the tool does, or if what you really want is borrowed certainty, then the workflow is no longer being supported. It is being quietly hollowed out.

People do not need one universal position on AI in publishing. They need a better way to decide where it belongs. A good default is simple enough to carry into real work: let the tool handle repetitive operational weight, but do not let it take the parts of publishing that require taste, verification, responsibility, and a real point of view.

The annoying parts can be assisted.

The editorial call is still yours.