Archives for category: Copyright

Todd A. Carpenter writes at The Scholarly Kitchen about how unfortunate it is that in the conclusion of the Internet Archive lawsuit these wicked publishers have managed effectively to quash the delightful idea of controlled digital lending. Basically controlled digital lending means that if the publisher doesn’t publish an ebook edition of a work, a library should have the right to scan the book, create a digital copy and lend it out. This plan for expropriation comes with a virtuous-sounding claim that while this ebook is out on loan the library will of course hold back the physical copy it just ripped off and not lend it out as well. Thus only one copy will be on loan at any time. What could be better? — a patron unable to come in and get the print book will be enabled to overcome the publisher’s negligent failure to provide an ebook, and will thus be able to enjoy what otherwise would have remained inaccessible.
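For what it’s worth, here is a minimal sketch in Python (my own illustration, not anything the Internet Archive actually runs) of the “owned-to-loaned ratio” bookkeeping that CDL advocates describe: a loan of the scanned copy is permitted only while the physical copy sits sequestered on the shelf, so the number of copies out on loan never exceeds the number of copies owned.

```python
# Hypothetical illustration of controlled digital lending's one-copy-one-loan rule.
# Nothing here reflects the Internet Archive's actual software.
class CDLTitle:
    def __init__(self, physical_copies_owned: int):
        self.owned = physical_copies_owned
        self.on_loan = 0  # physical and digital loans counted together

    def checkout(self) -> None:
        # A loan (print or scan) is only allowed while a copy remains "on the shelf".
        if self.on_loan >= self.owned:
            raise RuntimeError("No copy available: owned-to-loaned ratio would be exceeded")
        self.on_loan += 1

    def checkin(self) -> None:
        self.on_loan = max(0, self.on_loan - 1)

book = CDLTitle(physical_copies_owned=1)
book.checkout()    # the single permitted loan, digital, while the print copy is sequestered
# book.checkout()  # a second simultaneous loan would raise RuntimeError
```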

Trouble is the right to make copies is a right belonging to the copyright owner, the author (occasionally, it’s true, subleased or even assigned to the publisher), and not vested in any random purchaser who may have paid money to own a single copy of the work. No problem, say the CDL supporters, it’s actually OK because what we would be doing is fair use. Mr Carpenter tells us that the Internet Archive “was seeking to extend the boundaries of Fair Use exemptions in copyright in an increasingly digital world.” Their argument would be based on the idea of transformative use. The trouble with this, and indeed with the whole concept of fair use, is that its meaning isn’t defined in the law; it all has to be hammered out in law courts. In a piece at The Author’s Alliance we are told “a use can be transformative when it ‘utilizes technology to achieve the transformative purpose of improving delivery of content without unreasonably encroaching on the commercial entitlements of the rights holder.’” CDL advocates claim no harm is done because they are allowed to lend the physical copy, so what difference is there in their lending a digital copy, which may enable some disabled readers actually to borrow the book?

However, a copy will have been made, and once a digital copy exists it is impossible to stop its proliferation into other copies. Besides, isn’t it perfectly reasonable that the publisher or the author should be allowed to decide whether this book is available in this way or that way? It’s only because digital reproduction is cheap that the idea has come up of plugging gaps in the supply by creating editions of our own. Nobody ever, I think, took it into their own hands to print a paperback edition of a book because the publisher refused to offer anything other than a hardback. Sorry, Internet Archive, that’s essentially no different from your deciding to “publish” a digital edition.

Good intentions don’t represent a legal defense. It’s like arguing “I only stole the wallet because I wanted to give some money to this beggar.” It is not a legal argument to claim “Copy a book to make money: bad; copy a book to lend it to a nice person: good”. The result (unfortunately) in both cases is that the author gets no remuneration for “granting” the right to copy. Why do you think it’s called copy right? — it controls the right to make copies.

Now it could well be you’d be right to argue that Congress should take issues like this into consideration when next they revise the copyright law. I expect they will: it certainly seems that a solution to this narrow problem could fairly easily be found when the law’s revised. I just wonder when that’ll be. In the meantime we have to live with the laws we have.

An open letter, hosted by the Future of Life Institute, calling for a pause in the development of Artificial Intelligence has now been signed by over 1,850 people, including Steve Wozniak and Elon Musk* (who funds the Future of Life Institute). That number may be less impressive than it might at first seem — if you log on to the FOLI site (link above) you will be invited to sign the letter yourself. OK, warnings about AI may be needed — would that they were heeded. I seem to remember Stephen Hawking giving us a similar warning a few years ago — turns out it was in 2014 when he told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Sigal Samuel has an article at Vox, discussing the idea that for all our sakes we should slow down research into AI. She herself is in favor of a pause in research but identifies three problems which people may have with this idea:

  • Objection 1: “Technological progress is inevitable, and trying to slow it down is futile”
  • Objection 2: “We don’t want to lose an AI arms race with China”
  • Objection 3: “We need to play with advanced AI to figure out how to make advanced AI safe”

Not much of an argument against “the end of the human race” I fear. On an idiot level the objection to Ms Samuel’s objections is that her whole discussion is too USA-focussed. Let’s imagine that the US Congress did pass a law mandating a halt in AI research (which obviously isn’t going to happen) and the State Department were to persuade China to do likewise (which I suspect would be less unlikely): what would that achieve? Is such a thing at all enforceable, even within the USA? And last time I looked there were other nations in the world, and many of them seem to have access to computers! And the unfortunate reality is that research is carried out by people, not nations, and undercover computer activities may be assumed to continue. Of course funding might be sparser, and there would no doubt be a sort of shaming influence, or, the other way round, the urge to follow a good example, but I expect lots of people in lots of places would be perfectly happy, nay eager, to forge ahead, now without competition. Ms Samuel refers to technologies we have managed to “halt”, including human cloning. OK, we know that the guy who did it was arrested, but to the ambitious human engineer that is probably more of a lesson on how to be more discreet than a warning to stop.

A pause would however be delightful. We hear too much about all this these days. But we do need time to think about what we want the copyright law to look like now that computers can do the things they now can. Just saying “No” doesn’t really amount to a policy. This may be a somewhat secondary issue for most of the world, but copyright is the bread and butter of media businesses. I hope someone in Washington is thinking about copyright reform, or at least thinking that thinking about it might be something they should get round to soon.

Once the genie is out of the bottle, we know, don’t we, that it can’t be put back. Every new technology has been greeted with foreboding — think of Socrates and the supposed baleful influence of writing stuff down in books. This is not to disagree with Stephen Hawking — after all, he does say “could spell the end” not “will”. As far as I can see we really have no alternative to stumbling forward and keeping our fingers crossed. Maybe we’ll get lucky, and just as the breakneck speed of a fifty-mile-an-hour railway train turned out not to kill us, drive us insane, or make uteruses drop out of bodies, we’ll be able to turn AI into a force for good. Or maybe as Ms Samuel suggests it will kill us all off because it needs us out of the way so that it can have unfettered access to every computer in the world.

_________________

* Maybe one should suggest that just because Elon Musk has a pretty universally bad reputation doesn’t mean that every idea he comes up with has to be immediately scoffed at — which has tended to be the effect of the media’s insistence on prefacing all their reporting on this appeal with Musk’s moniker. God knows he’s been struggling to develop a self-driving car for long enough to think that persuading everyone else to pause their AI research can’t be a bad idea.

I never thought of it this way, but are AI text-generators like ChatGPT really just plagiarism machines? They memorize everything ever written about a subject and then regurgitate it on request with perhaps a bit of rearrangement and a few words changed to make the thing read more slickly. (Let us leave aside their eager-to-please tendency to make up convincing-sounding evidence so as to answer a question as fully as possible.) If there were a human being who could memorize everything ever written about, say, coronaviruses and they were then to write out selections in response to enquiries about particular aspects of the subject, we would presumably judge that to be nothing more than copying. Of course such a phenomenon of memory cannot exist (well, it can, but as an AI bot, just not as a human).

Reflection on this topic is provoked by Plagiarism Today’s post Is Plagiarism a Feature of AI?

The Copyright Office signals an unwillingness to see as copyrightable any work created by AI. (An author must be human.) Authors’ organizations push against AI by claiming that by memorizing copyright works these machines are violating copyright, or to put it another way, that before they consume these works the bots should get permission from the copyright holder (which their organizations propose should not be forthcoming). Of course for a computer “memorization” actually consists in storing an accessible copy. But, using an anthropocentric definition of “memorize”, as we tend to in these discussions, objecting to the reading of your work does on the face of it seem a bit illogical. A human reader doesn’t need permission from the copyright holder (or the permission grant is assumed in the purchase of access to the writing) in order to read a book, or even to memorize it — as of course a few have managed to do. Homer certainly remembered well, and lots of his audiences must have kept large chunks in their memory, as no doubt do a few moderns. No doubt there are lots of people who can recite from memory the entirety of T. S. Eliot’s The Waste Land, but the vigilant Eliot estate is not knocking on their door whenever they break silence. A professor of ancient philosophy has no doubt read everything written by the Greeks, and while they may not be able to recite it all, can point to this, that, or the other location for support for an argument they are making. Of course you’ve got to want to do it, but I often wonder whether the difference between the professor and the person sweeping out the lecture hall is anything more than differences in the efficiency of their memory.

If an academic draws on a couple of hundred sources in writing their work, the “problem” is dissipated by their citing their sources. Indeed the very idea of an academic work which referenced no sources at all is a contradiction in terms: academic work is of necessity a development from previous academic work. Shoulders are always being stood on. An academic treatise with no references would count as a polemic, not as an academic monograph. So, might AI be able to get away with it and become copyrightable by citing all the works it looked at in order to come up with its text? Of course there would be millions of citations, so they’d have to be available only in response to enquiry — a book which was 99% bibliographical references would be at least an unwieldy proposition. Of course if this sort of policy were to be adopted, we’d next run into the difficulty of permissions from works used more than tangentially, which just takes you further down a rather stultifying rabbit hole.

What I go on to wonder is whether the inability to remember everything is, in some fairly fundamental way, a requirement for the kind of originality that makes an author eligible for copyright protection. Actually it goes further than that — the fact that we all forget things could be regarded as a requirement for the very existence of the job of writer: after all, if we had all memorized all that had ever been written about coronaviruses, or ancient Greek philosophy, why would we need anyone to regurgitate it for us? To ascend to even higher meta levels: might we not have to think of books as extensions of our memory? Certainly I can “remember” a lot more nowadays, when I am able to store most of it in Wikipedia.

See also AI and copyright.

In November last year, the major pirate ebook site Z-Library was seized by the FBI. The website, which hosted free ebooks it didn’t own and even charged them for it, was popular among students, even widely gaining traction on TikTok. On Twitter, many users also bemoaned its death, while many authors celebrated this illegal distribution site being shut down. “This website has been seized,” the defaced site now says. Shortly thereafter, the two people allegedly behind the site’s operation were arrested.

Most of the ebooks hosted on Z-Library were fiction and nonfiction bestsellers. The site also offered these ebooks in different formats, but ePub was a popular choice.

But days after Z-Library’s disappearance from the wider internet, it came back on an encrypted browser and network that one uses when they aim to be anonymous. It’s where the dark web lies as well.

Thus begins the BookRiot piece by Arvyn Cerézo explaining how easy it is to do book piracy. Despite what he claims, I do think that publishers are quite aware that a lot of their books are being pirated overseas. No doubt stopping book piracy in, say, China might be expected to make your overseas sales look better, but it’d cost you something, and might not result in sales rather than thefts. Some markets just can’t afford to buy our books.

However one has to wonder whether book piracy really matters all that much. “A consumer survey conducted by Nielsen in 2017 suggested that U.S. publishers lose up to $315 million each year” from book piracy. Hard numbers are hard to be sure of, but if book publishers’ revenue in 2020 was really $25,930,000,000, a mere $315 million might not seem of overwhelming concern. Still, 1.2% is noticeable and it’s obviously better to have it on your side of the ledger than not. Part of the trouble of course is that nobody on their own is losing $315 million to pirates — lots of different companies and authors are losing a fraction of that amount, and it’s hard (impossible?) to figure out exactly how much you might be losing.
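Just to make the arithmetic explicit, here is a back-of-the-envelope check in Python using the two reported figures above (neither number is my own estimate):

```python
# Back-of-the-envelope check of the figures quoted above.
piracy_losses = 315_000_000          # Nielsen's 2017 estimate of annual US losses to book piracy
publisher_revenue = 25_930_000_000   # reported US book publishers' revenue for 2020

share = piracy_losses / publisher_revenue
print(f"{share:.1%}")                # about 1.2% of revenue
```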

Stopping this stuff costs money, and if it’s one or two individual publishers who have to pursue the thieves then you rapidly fall into the pit of calculation where you balance “It’ll cost me $X to take action” against “and it’ll maybe benefit me $Y”. Y might often look like it’s likely to be less than X, so action is not taken. This does look like an issue on which it might well be sensible for publishers to collaborate. The joint suit against Georgia State University over e-reserves* was a recent instance of publisher cooperation — but it failed in 2020: Publishing Perspectives delivered the sobering news. Of course one bit of the education system (as university presses are) suing another doesn’t look great. Society must be free to determine that the financial benefit for students outweighs the rights of authors to payment for use of their work: we publishers may not like it, but there is a coherence to the attitude. Fly-by-night merchants selling stuff which belongs to other people are less ambiguous targets, but hard to draw a bead upon.

The argument is often made that posting free stuff online usually leads to an increase in the sale of the legit version. I suppose it’s thought to be a bit like chumming the waters so that the little fishes will rush to feed. While free ebooks may or may not increase legit sales, it is obviously an untestable proposition — you can’t rerun the game; and it does have the embarrassing smell of compensatory fiction. Still, if it makes publishers and authors happy to think this thought, thinking this thought isn’t altogether a bad plan. And it might even be true. Let us not forget that it has always been possible (and relatively easy) to steal books — should we be offended that over the years thieves have been able to identify better targets?

__________________

* E-reserves are digital copies made for class study. Educational institutions have taken to making copies of prescribed readings from materials held in their library and making them available free of charge to students. They maintain that their avoidance of payment to the author under the copyright law is justified by the fair use clause which permits the making of “multiple copies for classroom use”. Of course that law was drafted before anyone had dreamt of digital copying, but it does seem hard to argue against, even though it facilitates much more widespread and permanent distribution than the old Xeroxing of a few pages entailed.

I rather think the right compromise for our Congress to aim at — if they ever get round to legislating anything again — would be to set up some sort of fund that would compensate authors (and dare one say, publishers too) for the free use of their materials by students.

In a shocking turn of events, books written by the popular language model, ChatGPT, have started appearing on Amazon. The news has left many in the literary world scratching their heads and wondering what this means for the future of writing.

ChatGPT, known for its vast knowledge and ability to generate coherent sentences, has apparently decided to try its hand at book writing. The books, which cover a wide range of topics, from science and technology to literature and history, are gaining popularity among readers who are curious to see what a machine can come up with.

Some have criticized the move as a gimmick, arguing that a machine cannot truly understand human emotions or experiences, and therefore cannot write meaningful stories. However, others have praised the books for their clear and concise writing style, as well as their ability to convey complex information in an easy-to-understand manner.

One reviewer wrote, “I was skeptical at first, but ChatGPT’s book on quantum physics was actually quite insightful. It presented the information in a way that was accessible to the layperson, without dumbing it down too much. I’m impressed!”

Another reviewer was less enthusiastic, stating, “While ChatGPT’s books may be technically accurate, they lack the heart and soul that comes from human experience. It’s like reading a textbook instead of a novel.”

Regardless of the controversy surrounding ChatGPT’s foray into book writing, there is no denying that it is a fascinating development in the world of artificial intelligence.

Written by ChatGPT; published at Fudzilla.

The preceding piece, written by ChatGPT, was not “commissioned” by Making Book but by Fudzilla and comes from their post entitled ChatGPT books flood Amazon written by Nick Farrell. (Link via LitHub.) From here on it’s me writing — hope you can take that on trust.

Mr Farrell detected over 300 ChatGPT-generated books on Amazon on 22 February, which doesn’t seem like a huge number, but is no doubt just the beginning of things to come. That count also covers only items where ChatGPT was given some credit — silent bot authorship would be harder (impossible?) to detect.

How bad is this news? Mr Farrell writes “While there is a ton of things wrong with this, the biggest problem is that ChatGPT learns how to write by scanning millions of pages of existing text. So, the software is just correcting other people’s books and plagiarising them.” Not sure I see it that way. After all, a person with an eidetic memory would presumably be in an analogous position, yet nobody would claim that Sheldon Cooper’s ability to remember stuff constituted plagiarism. Academics refer to and build on colleagues’ work, producing texts which nobody criticizes as plagiaristic — because academics devote much care and attention to making sure they acknowledge every source (the more the merrier it often seems) in order to bolster every claim they make. [As Wikipedia might say here: “Reference required”.] When it comes to publishing, the key factor is credit, and I believe that an admission that an artificial intelligence program wrote this material would carry with it the implication that your book was included in what the bot used for training. Direct quotation would of course be an instance of copyright infringement, but thoughts and ideas are not copyrightable, nor are words and letters of the alphabet.

ChatGPT, and AI in general, isn’t intelligent in the way we normally think of intelligence. It doesn’t know anything: it has just memorized a whole lot of text and been taught how to express itself in smooth prose (or verse). It works by figuring out the probability that this or that string of words should/might follow on from some other group of words. It is for this reason that chat bots are just as proud to deliver up slickly expressed lies as they are to give you slickly expressed truth. For them, both are identical: probable/possible word sequences. But, if they are lucky and avoid clangers, bots like ChatGPT can do a job which it’s hard not to call excellent. The example above, while not telling you anything much, does appear utterly plausible.
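Here is a toy sketch in Python of that idea (a bigram word-counter of my own devising, nothing remotely like ChatGPT’s actual scale or architecture), just to show what “figuring out the probability of the next word” means in practice:

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny "memorized" text, which words follow it and how often.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(prev: str) -> dict:
    # Turn raw counts into probabilities for the word that comes after `prev`.
    total = sum(following[prev].values())
    return {word: count / total for word, count in following[prev].items()}

print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.5}
```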

The Fudzilla subtitle, “Authors that didn’t write books, for readers who can’t read”, is way over the top. People who can’t read aren’t the problem; it’s people who can and do that we need to worry about. If ChatGPT is listed as an author then I’d say there’s no real problem. Caveat emptor governs the sale: and lots of authors write worse than the above paragraphs in blue. The sort of book that Ammaar Reshi published is surely fine ethically and practically — nobody’s being deceived, and nobody’s getting anything other than a perfectly respectable product. Some books might be argued to be of lesser value, but as long as their origin is clearly labelled, nobody suffers. The potential problem of course lies with the “unknown unknowns”. How are we to know this or that book is or isn’t written by AI rather than by a person who may be masquerading as the author? Now, to some extent I’m not sure this really matters either. Another romance by an author you’ve never heard of, a made-up nom de plume — OK, so what? Does it matter whether it’s a machine or a human being, if you enjoyed the book? The real trouble comes with a book pretending to be by a real author who actually had nothing to do with it. This maybe has more in common with a deepfake than with a copyright infringement, but I do think authors and publishers need to get down to doing something about protecting the integrity of an author’s work: maybe by just preempting the deepfake market by doing it yourself, as I suggested recently.

On February 8th a Manhattan jury “awarded Hermès $133,000 in damages for trademark infringement, dilution, and cybersquatting.”* Mason Rothschild had been sued by Hermès for selling NFTs showing Birkin bags (an exclusive and wildly expensive brand) of various extravagant designs. Hermès claimed this diluted their brand. Reuters has the story. Mr Rothschild’s lawyer called the result a “terrible day for artists and the First Amendment”.

So, do legit artists really suffer from this judgement? Described by his lawyer as “a conceptual artist”, Rothschild, “whose legal name is Sonny Estival, argued that the works are an absurdist statement on luxury goods and immune from the lawsuit based on First Amendment protections for art.” Rothschild had hired people to execute the “art” under his instruction — he can’t actually do the work of creating an NFT of a Birkin bag by himself. This may be irrelevant legally, but seems important to me. Rothschild was getting $450 a pop for his NFTs — which forces upon us the question of to whom such a thing might appear to be worth $450. Answer, I dare say, nobody, but once people start buying things it seems there’s no stopping them. Does the use of cryptocurrencies affect how people judge their expenditures?

I suppose anyone can easily make an “absurdist statement on luxury goods” and show it around to family and friends. The trouble comes if they try to sell that statement. It’s probably impossible to sell anything in any way connected to a luxury good without the reputation of that brand affecting the sale. So keep your absurdist statements to non-branded items please. Artists worrying that this judgement represents a “terrible day for artists and the First Amendment” need only reflect that the most artfully and beautifully expressed “advocacy of the use of force” will not be viewed as protected speech — the First Amendment is not a license to say whatever you want, however you want, whenever you want. I suspect that artists, conceptual or otherwise, will find it easy enough to deal with this judgement.

Of course we can’t get rid of this sort of thing by just laughing it off. This type of business is here to stay, and will affect businesses beyond bags. Maybe the Supreme Court and Congress are really going to be able to come to grips with the fact that social media, in the first instance, need regulation of some kind; and this may open the floodgates to regulation of other aspects of the internet. Once we dreamt that crazy stuff would be drowned out online by all the informed and responsible dialog — that dream is, I fear, dead. Care will need to be taken — babies and bathwater and all that!

____________________

* Cybersquatting is the practice of registering, trying to sell, or using an internet domain name to profit from the goodwill of someone else’s trademark.

Plagiarism Today told us in 2015 that we cannot copyright the alphabet or a typeface. We can however copyright computer code, and for any digital typeface there is of course an underlying computer code which enables the design to be expressed. This sounds back-to-front, but of course so does lots of stuff regarding copyright and computers since our law largely predates the computer’s domination of our world.
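By way of illustration only, here is a hypothetical, hugely simplified Python picture of what “the code behind a typeface” amounts to: the visible letterform is the rendering of a list of outline instructions like these. (Real font formats such as TrueType or OpenType are far more elaborate; the command names and coordinates below are invented for the example.)

```python
# Invented, simplified outline "code" for a capital L, as a list of drawing commands.
# The design you see on screen is an expression of data like this; the copyright
# question is whether this underlying description is protectable even though the
# letter shape itself is not.
capital_L = [
    ("moveTo", (100, 0)),
    ("lineTo", (100, 700)),
    ("lineTo", (180, 700)),
    ("lineTo", (180, 80)),
    ("lineTo", (520, 80)),
    ("lineTo", (520, 0)),
    ("closePath", None),
]

for command, point in capital_L:
    print(command, point if point is not None else "")
```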

Now Plagiarism Today brings us an account of a lawsuit brought by type designer Nancy Laatz attempting to secure protection against unauthorized and unpaid-for use of three typefaces, Blooming Elegant Regular, Blooming Elegant Sans, and Blooming Elegant Hand, by Zazzle, a site enabling users to design and sell a variety of products including cards, invitations, T-shirts, mugs and more. Well, you can see how Blooming Elegant’d be popular for this sort of thing:

Zazzle, who can only point to a single-user license for $20 acquired by someone who worked for them, argues that, regardless of whether font code is protectable in principle, Laatz likely used a font-design program that automatically generated the code. Because of this, they argue, the code doesn’t qualify for copyright protection. They have in the meantime withdrawn the typefaces.

Plagiarism Today is excited about the implications of this case for the world of AI. OK. “The underlying creative work doesn’t enjoy copyright protection. That means, for a typographer to have any practical copyright protection, the code must be protectable. However, most typographers aren’t programmers, so, they rely on tools to generate or help generate that code.” Bated breath — just wait for the resolution of this exciting saga . . . Though I kind of suspect it’ll all end in a financial settlement rather than a Supreme Court judgement.

I think part of the problem with lots of analysis of copyright matters is the lazy use of the term intellectual property. Yes, George Orwell’s 1984 is in copyright, and yes, Orwell’s 1984 is intellectual property. But you cannot complete the syllogism and conclude that intellectual property is copyrightable. Some of it is: the form in which the ideas in 1984 are expressed is copyrightable — the ideas themselves (though they are obviously intellectual property) are NOT copyrightable. You can write as many novels as you like featuring a Big Brother (maybe you’d better choose a different name though) who’s watching over us, and they’ll all be copyrightable as long as they don’t just copy George Orwell’s words (or anyone else’s). I rather think that typeface designs are intellectual property, but ought not to be protectable by copyright. Trademarks, patents maybe, but not copyright, unless we change copyright to create a new version for this sort of item. Do we open the road up to massive truckloads of intellectual property protection when we allow that the computer code behind a typeface is copyrightable? I suppose it does get by in that computer languages are languages in more than just a metaphorical sense. Still, so much today is being expressed in computer code that there might be a risk that by allowing protection to all we end up protecting none.

We (Congress) really do need to sort all this out. Copyright reform is overdue. A few years ago I suggested that copyright might need to be split into three, or four, different strands. Still seems right to me; righter all the time.

Wow!

Todd A. Carpenter at The Scholarly Kitchen has conducted an interview with ChatGPT. For me this is certainly Turing-test-busting stuff. Cynics might want to claim that it’s been faked — and of course I cannot know, though what motive could be at work? (I suppose one could run a test of one’s own: I have not.)

Just see for instance the program’s answer to a question about copyright (but do look at the whole “interview”):

TAC: Should AI-driven text be copyrightable?

ChatGPT Response:

Under most copyright laws, original works of authorship, including literary works, are eligible for copyright protection. This includes works that are produced using artificial intelligence (AI). In general, if a work is original and has been fixed in a tangible form (such as a written or recorded work), it is eligible for copyright protection, regardless of whether it was produced by a person or by AI.

However, there may be certain legal issues that arise when it comes to AI-generated works and copyright. For example, if an AI program is used to generate a work, it is not clear who the “author” of the work is. Some countries have specific laws addressing this issue, while others do not.

It is also possible that a work produced by AI could be considered a “derivative work,” which is a work based on one or more preexisting works. In this case, the copyright in the derivative work would be held by the person or entity that created the work, but they would need to obtain permission from the copyright holder of the preexisting work in order to use it as a basis for the derivative work.

Overall, the question of whether AI-generated works are eligible for copyright protection is a complex legal issue that may vary depending on the specific circumstances and the laws of the country where the work was created. It is important to consult with a qualified attorney if you have questions about copyright and AI-generated works. 

OK maybe a human writer would have qualified that second sentence by adding “under the law as currently written” or something like that, because ChatGPT is right, damn it, even though we have been warned that it can lie with a totally straight face! The law doesn’t know anything about AI and its ability to create copy — for the simple reason that it was written long before such whizz-bangery could be imagined. But I’d never put this confident omission down to the piece’s having been written by a machine.

The US Copyright Office seems to be swithering. It has challenged the copyright registration for a comic book which uses AI-generated art, although it had previously allowed such stuff to be registered.

Nate Hoffelder, from whose Monday morning email this link comes, argues “To understand the problem with the Copyright Office’s decision, let’s consider a book cover on a novel (see the Tor Books story). If you write a novel and use an AI-generated image as part of the book cover, according to the Copyright Office your novel cannot be copyrighted. That is simply wrong.” And that obviously is wrong: the text is already in copyright by virtue of being available in fixed and tangible form, whatever cover it might come decked out with. (Copyright registration is different from copyright itself.) What’s wrong in today’s law, and what’s wrong in an ultimate, ethical sense, are no doubt two different things. Congress needs to legislate, but before that happens we need to know what it is that we the people want. Stories are going to be generated by AI. Should new stories derived from a massive database of preexisting material actually be viewed as original, or do we rather want them to be seen as derivative works, as ChatGPT suggests in one option? This is not a question that can be dodged. For myself the derivative bit just seems too diffuse — if a book’s derivative of a million different sources, can we really think of it as truly derivative? To do so gets dangerously close to claiming that I’m infringing copyright here by failing to get permission from a hundred earlier authors because they all used the word “diffuse” in juxtaposition with “derivative”.

Do we want to declare copyright as only available to a human being? The monkey judgement tended in this direction, it’s true. This might be a good idea, but there are likely to be objections and cases where one thing shades into another. Does the NFT art world need to weigh in?

The New York Times obliged us in this week’s Science Times with a trio of articles about AI. The main one, by Oliver Whang, reports on research in the Creative Machines Lab at Columbia University, and speculates on consciousness in robots, hitherto a taboo topic. Whether you think robots will become conscious or not sort of depends on your definition of consciousness. Self-aware might be a better term, and indeed a robot has already been built which recognizes itself in a mirror — a common test for consciousness in animals. There is, I guess, a step from gorillas to humans, but we keep on being reminded how blinkered we humans are when we persist in considering ourselves unique. The article quotes philosopher Eric Schwitzgebel as claiming “at the rate things are progressing, humankind would probably develop a robot that many people think is conscious before we agree on the criteria of consciousness”. A comment on the relative efficiency of engineering and philosophy?

You can stick your head in the sand, mumbling “it’ll never happen”, but I think progress is unstoppable. Hopes that we’ll all be responsible enough to create robots which follow Asimov’s rules and would never decide to annihilate humankind just seem like cloud-cuckoo-land wishful thinking to me. Remember how we all assured ourselves that no scientist would be irresponsible enough to clone a human being. The 2005 cloning claim by Hwang Woo-suk appears to have been withdrawn, but quite a few human embryos have already been cloned. Who’d be overconfident that no researcher will ever take the trouble to allow one of these to survive? Indeed in 2018 He Jiankui announced the birth of CRISPR-Cas9-edited babies who were meant to have genetic resistance to HIV. It all went awry, and Dr He was sentenced to three years in prison, and a sizable fine.

Serious thinking about all this is obviously essential. Of course maybe the world will be better off if “we” are all gone and all that remains is our mental product bouncing around inside durable metal bodies. In the meantime, let’s just get the less apocalyptic problem out of the way, and figure out what the copyright status of AI-generated text should be.

These portraits were apparently created by Artificial Intelligence using Stable Diffusion software. Send in a photo of yourself, and AI-enabled software can turn your image into a Renaissance portrait. Just a bit of fun, eh? No problem if you are Michelangelo: in fact a bit of an ego boost as you look down from your cloud, but a serious threat to the likes of Greg Rutkowski or Sarah Andersen, who make a living off their art. Clearly if thieves misappropriate the ability to create your art — and if it’s already online that’s easy to do — your chances of making money off it will be impacted. What can you do?

As The Guardian tells us, “The legal recourse for artists who feel these tools are infringing on their copyright is knotty and unclear. In the EU, lawyers are contesting the legality of using images under copyright for training AI models but as the UK bids to become an industry leader, it has already proposed a bill to allow carte blanche AI training for commercial purposes. Meanwhile it remains unclear if traditional copyright even applies here, as it is difficult to copyright a visual style.”

I cannot put much weight on the comfort summoned up by so many that AI’ll never match a human writer. Indeed I’d say that it was a certainty that it will — just look at those pictures. The AI book I wrote about recently appears to show the welds and screws still, but I’ve no doubt that AI will be perfectly capable of writing a Turing-convincing story any minute now. Should we care? A romance is a romance after all, and what does it matter if it’s written by your next-door neighbor, by a writer you’ve already read and love, by an unfamiliar author from China, or by a computer? Even if it’s not fiction — should you care that this engineering textbook never felt the touch of human hand until you opened it, as long as the facts are right, and the robot can write? I’m uneasy about all this, but can’t really see a real problem here. If the machine can do something we value, why shouldn’t it?

Well, as Sarah Andersen, a cartoonist, tells us in an article in Sunday’s New York Times (no doubt paywall protected, but partially available elsewhere if you do a search on its title “The Dark Possibilities of A.I. and Art”) it can facilitate the capture of your “style” and enable people to distort your message — in this case putting out fascist content which looks just like it came from Ms Andersen’s basically liberal pen. Of course this can be, and in Ms Andersen’s case previously was, done by real humans, but AI just makes it quicker and easier for them. We can sympathize with her claim that “Art is deeply personal, and A.I. had just erased the humanity from it by reducing my life’s work to an algorithm”, but it is a bit off target. The algorithm’s not the problem. It was not created by a machine. People coded it up and people use it. Might as well object to the pen as the tool used to copy your work! There will surely, sometime soon, be a legal method for preventing exploitation of this kind.

Copyright may not at the end of the day be the appropriate protection against artificial intelligence which writers need. After all, copyright protects the way in which your idea is expressed, not the idea. And not, presumably, something as intangible as “style”, the way you tend to express your ideas. Maybe some clever lawyer can succeed where academics and literary types have for centuries failed, and come up with an unambiguous definition of style, but breath should not be held. The US Copyright Office, in a policy reversal, now seems to be intent on preventing the product of AI from getting copyright protection.

Still, you’d think something needs to be done to protect authors and artists. The world might benefit from a fictional account of the Truss tragedy in the style of Anthony Trollope, but no doubt J. K. Rowling might prefer to write her own version if thus motivated. A living writer might reasonably be pissed off by an AI-generated text identifying the Truss administration as the high point of modern conservative philosophy. Let them do that themselves, please. The problem would seem to reside in a machine’s, or the machine’s owner’s, masquerading as a particular living author or artist whose livelihood might be expected to suffer from such competition. Maybe “passing off”, a common law tort, is the way to go for author protection.

Protection for authors, and maybe at a pinch readers; but not publishers. Indeed life being what it is, I think we can anticipate that, far from raging against the machine, book publishers will be vying with one another to get their hands on the best novel-writing-robot there is — after all when we publish books we are acting not fundamentally as a benevolent society for authors, but as a means of making money. While human writers are the source of our product we are happy, nay proud, to support them, but a cheaper alternative . . . ? — No brainer. Just so long as it doesn’t read like anyone alive might have written it.

Thanks to Annabel Hollick for the Guardian link.

See also AI and copyright.

Nate Hoffelder’s weekly links include this one to ArtNews telling us about a lawsuit in which a photographer is suing a tattoo artist for using his photo of Miles Davis as a tattoo.

The journey from photo to tattoo

The tattooist, Kat Von D, argues that the use is transformational, thus fair, and that there are bodily integrity and personal expression rights supporting her use of the photo. “Von D insists that her version is transformative of the original image. She created the tattoo by tracing Sedlik’s photograph on a lightbox and affixing the image to her client’s arm using a thermal copy machine and tattoo transfer fluid, before inking the design by hand.” Romano Law gives a balanced discussion of the issues. A jury will have to decide who’s right. To me it seems obvious that a tattoo cannot take the place of a photograph or damage demand for it — quite the opposite if anything. Still the law moves in mysterious ways.

Plagiarism Today reports on the verdict in another case which is concerned with the opposite direction of copying — here the tattooist sued a game company for reproducing her tattoo characters in their game. She won but “the jury opted not to award Alexander a share of the profits. Instead, they awarded her actual damages, totaling just $3,750 for the five tattoos at issue.” Not sure whether this has any relevance in the Miles Davis case.

This odd corner of the copyright world reminds me of my reproduction concerns in the case of the Gentileschi St Catherine picture, printed from a digital copy owned by The National Gallery in London. Obviously Gentileschi’s not protected by copyright, but if we were dealing with a modern work, would that make a difference? Surely it would, and here I’d want to believe that a freehand copy painted by a copy-artist would be more likely to qualify as fair use, whereas a print from a digital file would be a clear infringement. Would the hand-written copy of the Gutenberg Bible I speculated about (if we were dealing with a copyright work) not also be a copyright infringement? Presumably copying it out without any attempt to make your version look like the original, say for example by typing it out, even including all its typos and following its page layout, would not be something copyright law would pay any attention to in the case of a Bible*, but for any other copyright work it would be a clear infringement as soon as you tried to sell it. The extreme case of copying here would of course be Xeroxing the book, which I think we all realize is not a good thing to do.

When I reported seven years ago on Tim Youd’s performance art, retyping novels, I didn’t consider the copyright position at all — I guess until he tries to sell the product it remains OK, as a copy made for his own personal use.

_______________

* Though the Authorized Version of the Bible does remain in copyright in England, where it is covered by Crown copyright, and is published under license.