In a shocking turn of events, books written by the popular language model, ChatGPT, have started appearing on Amazon. The news has left many in the literary world scratching their heads and wondering what this means for the future of writing.
ChatGPT, known for its vast knowledge and ability to generate coherent sentences, has apparently decided to try its hand at book writing. The books, which cover a wide range of topics, from science and technology to literature and history, are gaining popularity among readers who are curious to see what a machine can come up with.
Some have criticized the move as a gimmick, arguing that a machine cannot truly understand human emotions or experiences, and therefore cannot write meaningful stories. However, others have praised the books for their clear and concise writing style, as well as their ability to convey complex information in an easy-to-understand manner.
One reviewer wrote, “I was skeptical at first, but ChatGPT’s book on quantum physics was actually quite insightful. It presented the information in a way that was accessible to the layperson, without dumbing it down too much. I’m impressed!”
Another reviewer was less enthusiastic, stating, “While ChatGPT’s books may be technically accurate, they lack the heart and soul that comes from human experience. It’s like reading a textbook instead of a novel.”
Regardless of the controversy surrounding ChatGPT’s foray into book writing, there is no denying that it is a fascinating development in the world of artificial intelligence.
Written by ChatGPT; published at Fudzilla.
The preceding piece, written by ChatGPT, was not “commissioned” by Making Book but by Fudzilla and comes from their post entitled ChatGPT books flood Amazon written by Nick Farrell. (Link via LitHub.) From here on it’s me writing — hope you can take that on trust.
Mr Farrell detected over 300 ChatGPT-generated books on Amazon on 22 February, which doesn’t seem like a huge number, but is no doubt just the beginning of things to come. That count also covers only items where ChatGPT was given some credit; silent bot authorship would be harder (impossible?) to detect.
How bad is this news? Mr Farrell writes “While there is a ton of things wrong with this, the biggest problem is that ChatGPT learns how to write by scanning millions of pages of existing text. So, the software is just correcting other people’s books and plagiarising them.” I’m not sure I see it that way. After all, a person with an eidetic memory would presumably be in an analogous position, yet nobody would claim that Sheldon Cooper’s ability to remember stuff constituted plagiarism. Academics refer to and build on colleagues’ work, producing texts which nobody criticizes as plagiaristic, because academics take great care to acknowledge every source (the more the merrier, it often seems) in order to bolster every claim they make. [As Wikipedia might say here: “Reference required”.] When it comes to publishing, the key factor is credit, and I believe that an admission that an artificial intelligence program wrote this material would carry with it the implication that your book was included in what the bot used for training. Direct quotation would of course be an instance of copyright infringement, but thoughts and ideas are not copyrightable, nor are words and letters of the alphabet.
ChatGPT, and AI in general, isn’t intelligent in the way we normally think of intelligence. It doesn’t know anything: it has just memorized a whole lot of text and been taught how to express itself in smooth prose (or verse). It works by figuring out the probability that this or that string of words will follow on from some other group of words. It is for this reason that chat bots are just as proud to deliver up slickly expressed lies as slickly expressed truth: to the bot the two are identical, merely probable word sequences. But if they are lucky and avoid clangers, bots like ChatGPT can do a job which it’s hard not to call excellent. The example above, while not telling you anything much, does appear utterly plausible.
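For the curious, the word-probability idea can be sketched in a few lines of Python. This toy bigram model is entirely my own construction, nothing like ChatGPT’s actual architecture, but it “writes” on the same principle: given a word, emit whichever word most often followed it in the text it was trained on.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (made up for illustration).
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most frequently followed `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" more often than "mat", "dog" or "rug"
```

Real models replace the raw counts with a neural network trained on billions of words, and sample from the probabilities rather than always taking the top choice, but the output is probable-word-sequences either way, with no notion of true or false attached.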
The Fudzilla subtitle, “Authors that didn’t write books, for readers who can’t read”, is way over the top. People who can’t read aren’t the problem; it’s people who can and do that we need to worry about. If ChatGPT is listed as an author then I’d say there’s no real problem. Caveat emptor governs the sale, and lots of authors write worse than the above paragraphs in blue. The sort of book that Ammaar Reshi published is surely fine ethically and practically: nobody’s being deceived, and nobody’s getting anything other than a perfectly respectable product. Some books might be argued to be of lesser value, but as long as their origin is clearly labelled, nobody suffers. The potential problem of course lies with the “unknown unknowns”. How are we to know whether this or that book was written by AI, with a person merely masquerading as its author? To some extent I’m not sure this really matters either. Another romance by an author you’ve never heard of, writing under a made-up nom de plume: OK, so what? Does it matter whether it’s a machine or a human being, if you enjoyed the book? The real trouble comes with a book pretending to be by a real author who actually had nothing to do with it. That has more in common with a deepfake than with a copyright infringement, but I do think authors and publishers need to get down to doing something about protecting the integrity of an author’s work: maybe by just preempting the deepfake market by doing it yourself, as I suggested recently.