We live in an age when AI technologies are booming, and the world has been taken by storm with the introduction of ChatGPT.
ChatGPT is capable of accomplishing a wide range of tasks, but one that it does particularly well is writing articles. And while there are many obvious benefits to this, it also presents a number of challenges.
In my opinion, the biggest hurdle that AI-generated written content poses for the publishing industry is the spread of misinformation.
ChatGPT, or any other AI tool, may generate articles that contain factual errors or are just flat-out incorrect.
Imagine someone who has no expertise in medicine starting a medical blog and using ChatGPT to write content for their articles.
Their content may contain errors that can only be identified by professional doctors. And if that blog content starts spreading over social media, or maybe even ranks in Search, it could cause harm to people who read it and take erroneous medical advice.
Another potential challenge ChatGPT poses is how students might leverage it within their written work.
If one can write an essay just by running a prompt (and without having to do any actual work), that greatly diminishes the quality of education – as learning about a subject and expressing your own ideas is key to essay writing.
Even before the introduction of ChatGPT, many publishers were already generating content using AI. And while some honestly disclose it, others may not.
BankRate started publishing articles written via AI, they even disclose it on the site. I have found 160+ articles. The first article I was able to find dated April 2022. It would interesting to see how these articles rank.
Original finding by @tonythill. #AI #gptchat #SEO pic.twitter.com/BY9JlUZBiz
— John Shehata (@JShehata) January 11, 2023
Also, Google recently changed its wording regarding AI-generated content, so that it is not necessarily against the company’s guidelines.
This is why I decided to try out existing tools to understand where the tech industry is when it comes to detecting content generated by ChatGPT, or AI generally.
I ran the following prompts in ChatGPT to generate written content and then ran those answers through different detection tools.
- “What is local SEO? Why it is important? Best practices of Local SEO.”
- “Write an essay about Napoleon Bonaparte invasion of Egypt.”
- “What are the main differences between iPhone and Samsung galaxy?”
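The testing workflow above can be sketched in Python. Everything in this snippet is illustrative: `detect_ai_score` is a hypothetical placeholder standing in for pasting text into each detection tool (Writer.com, Copyleaks, and so on, which each have their own web interface), and the canned answers stand in for real ChatGPT output.

```python
# Sketch of the testing workflow: generate an answer per prompt,
# then run each answer through a detector and collect the scores.

PROMPTS = [
    "What is local SEO? Why it is important? Best practices of Local SEO.",
    "Write an essay about Napoleon Bonaparte invasion of Egypt.",
    "What are the main differences between iPhone and Samsung galaxy?",
]

def detect_ai_score(text: str) -> float:
    """Hypothetical detector stand-in: returns a 'human-generated' score
    between 0.0 and 100.0. In practice this step is done by hand in each
    tool's checker."""
    return 0.0  # placeholder value for illustration only

def run_test(generated_answers, detector=detect_ai_score):
    """Pair each prompt with its generated answer and score the answer."""
    return {
        prompt: detector(answer)
        for prompt, answer in zip(PROMPTS, generated_answers)
    }

# Canned strings standing in for the three ChatGPT answers.
answers = ["answer one", "answer two", "answer three"]
scores = run_test(answers)
```

A score near 0% human-generated would mean the detector correctly flagged the answer as AI-written; scores near 100% would mean it was fooled.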
Here is how each tool performed.
For the first prompt’s answer, Writer.com failed, identifying ChatGPT’s content as 94% human-generated.
For the second prompt, it worked and detected it as AI-written content.
For the third prompt, it failed again.
However, when I tested real human-written text, Writer.com correctly identified it as 100% human-generated.
Copyleaks did a great job in detecting all three prompts as AI-written.
Contentatscale.ai did a great job in detecting all three prompts as AI-written, although it gave the first prompt’s answer a 21% human score.
Originality.ai did a great job on all three prompts, accurately detecting them as AI-written.
Also, when I checked with real human-written text, it did identify it as 100% human-generated, which is essential.
You will notice that Originality.ai doesn’t detect any plagiarism issues. This may change in the future.
Over time, people will use the same prompts to generate AI-written content, likely resulting in a number of very similar answers. When these articles are published, they will then be detected by plagiarism tools.
This non-commercial tool, GPTZero, was built by Edward Tian specifically to detect ChatGPT-generated articles. And it did just that for all three prompts, recognizing them as AI-generated.
Unlike other tools, it gives a more detailed analysis of detected issues, such as sentence-by-sentence analyses.
OpenAI’s AI Text Classifier
And finally, let’s see how OpenAI detects its own generated answers.
For the first and third prompts, it detected AI involvement, classifying the answers as “possibly AI-generated.”
But surprisingly, it failed on the second prompt, classifying it as “unlikely AI-generated.” I also experimented with different prompts and found that, at the time of testing, several of the above tools detected AI content more accurately than OpenAI’s own tool.
To be fair, OpenAI had released the classifier just a day before my tests, so I expect it will be fine-tuned and will work much better in the future.
Current AI content detection tools are in good shape and are able to detect ChatGPT-generated content (with varying degrees of success).
It is still possible for someone to generate copy via ChatGPT and then paraphrase that to make it undetectable, but that might require almost as much work as writing from scratch – so the benefits aren’t as immediate.
If you’re thinking about ranking a ChatGPT-written article in Google, consider this for a moment: if the tools reviewed above were able to recognize it as AI-generated, then detecting it should be a piece of cake for Google.
On top of that, Google has quality raters who will train their system to recognize AI-written articles even better by manually marking them as they find them.
So, my advice would be not to build your content strategy on ChatGPT-generated content, but use it merely as an assistant tool.
Featured Image: /Shutterstock