It seems to me there’s a deliberate misunderstanding of what people mean when discussing the use of AI in written content. So let’s clear things up.
Using AI in Written Content
I’ve been meaning to write about this for a while, as I find it annoying. Not a day goes by without me seeing someone on LinkedIn talking nonsense about writers and the use of AI.
Take this clown, for example.

Do you see the sleight of hand at work here? Because “AI-driven systems” include search engines and grammar tools, apparently we should class spell-check, and computers themselves, as AI too.
Playing dumb. Pure engagement farming.
Missing the Point
A similar post in this genre goes along the lines of “AI isn’t bad. In fact, you’re stupid if you’re not using it in your writing process”.
They’re saying it’s okay – good even – to use LLMs for research, brainstorming and other simple tasks. But that’s not what the anti-AI brigade are complaining about. And I think these posters know it, which is the bit that winds me up.
Deliberately Disingenuous
I’ll confess, I’m part of the problem. That’s right – Dominic Field just can’t help shitting on the use of LLMs to spit out hot garbage. I post about it on social media all the time, fuelling the discourse a tiny little bit more.
But for all my hatred of artificial “intelligence”, I’ve never had an issue with writers using it as a search engine, for example.
I mean, it’s always wrong, obviously. But if you have time to waste double- and triple-checking everything the AI says, go for it. Knock yourself out. I don’t care.
Similarly, nobody actually minds if you’ve asked an LLM to draw up an awful blog post outline, on which to hang your own words. Nor is it an issue to have the AI summarise large chunks of text, to speed up research.
I honestly believe no one, not even AI’s harshest critics, gives a toss about this kind of thing. If you claim otherwise, you’re either missing the point or being deliberately disingenuous.
So What’s the Real Problem?
It’s screamingly obvious to me that, when people complain about AI in written content, they mean using it for generation. Prompting the model to “write an 800-word blog post about gambling in style X mentioning points A, B and C” – that’s the issue.
People don’t want to read the bland output of a predictive text engine, which is essentially what a Large Language Model is. There’s also evidence that Google penalises sites for posting such trash.
That’s what people are talking about. Producing reams of AI-generated drivel that says the square root of fuck all. Then attempting to polish this steaming turd, delivered freshly from the bowels of ShatPGTips. Or even worse, using it as is.
Get in the Bin
So let’s have less of the engagement baiting on social media, yeah?
Trying to imply that using AI as a search engine is exactly the same as spinning LLM-generated content doesn’t wash. And pretending spell-checkers are the same as Perplexity and Claude is just about the dumbest take I’ve ever read.
Still, I suppose it’s preferable to the “not takes” offered by AI. So many words generated, but not a single interesting point made.
My AI Usage Policy
If you’d like to know how I use such tools, feel free to check the AI usage policy I recently added to my website.
And of course, if you need quality iGaming content that’s free of tawdry LLM output, drop me a line today.