AI Art Law, (Very) Briefly

The funky pictures up there

Before I started writing this, I spent probably too long trying my hand at generating some AI art of the site's unofficial mascot to use as an image for this post. While the results were mixed, they were nonetheless very impressive considering I was working with seven-year-old hardware and was able to get the software up and running in about 20 minutes. As machine learning continues to develop, the law will continue to lag behind it. Now seems as good a time as any to look at what the law is, at least with regards to AI art.

How does AI art get made?

To grossly oversimplify (and probably misrepresent) the process of machine learning:
  1. A set of training data is collected
  2. An AI looks at that data and tries to figure out patterns
  3. The AI checks its pattern-figuring-out method (a model, if we're using the correct terminology) against the data to see how close it is
  4. The AI revises its pattern-figuring-out method and checks to see if it's any better
  5. Repeat the last two steps until the AI can identify the pattern reliably enough
  6. At some point the AI will be well-trained enough that you can make it work backwards

For a longer, better explanation see this or this.
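To make steps 2 to 5 a little more concrete, here's a toy sketch in Python. It's purely illustrative and not how any real image generator is built: it "learns" a straight line from made-up data by repeatedly checking its guess against the data and nudging it, which is very roughly the same guess-check-revise loop, just with two parameters instead of the billions a real model has.

```python
# A toy version of steps 2-5: guess a rule, measure how wrong it is,
# nudge the guess, and repeat until the error is small enough.
# (Purely illustrative; real image models are vastly larger and more complex.)

data = [(x, 2 * x + 1) for x in range(10)]  # "training data": inputs paired with answers

slope, intercept = 0.0, 0.0  # the model's current guess at the pattern

for step in range(5000):
    # Step 3: check the current guess against the data (mean squared error)
    error = sum((slope * x + intercept - y) ** 2 for x, y in data) / len(data)
    if error < 1e-6:  # Step 5: stop once the pattern is identified reliably enough
        break

    # Step 4: revise the guess slightly in the direction that reduces the error
    grad_slope = sum(2 * (slope * x + intercept - y) * x for x, y in data) / len(data)
    grad_intercept = sum(2 * (slope * x + intercept - y) for x, y in data) / len(data)
    slope -= 0.01 * grad_slope
    intercept -= 0.01 * grad_intercept

print(f"learned pattern: y = {slope:.2f}x + {intercept:.2f}")  # roughly y = 2x + 1
```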

In the case of AI art, this means something like pairing images with keywords so an AI can figure out what a keyword means, or making an AI look at lots of faces so it can figure out what a face looks like. Once an AI is trained well enough, you can give it a keyword and it will produce an image that corresponds to it, or produce a face based on what it thinks faces look like.
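For a sense of what that "working backwards" step looks like in practice, here's a minimal sketch using the open-source diffusers library. The model name, prompt, and settings are placeholder assumptions for illustration, not necessarily what I used for the pictures above.

```python
# Rough illustration of prompt-to-image generation with the open-source
# `diffusers` library. Model name and settings are illustrative assumptions only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released text-to-image model
    torch_dtype=torch.float16,         # half precision helps on older GPUs
).to("cuda")

# The trained model "works backwards": keywords in, an image out.
image = pipe("a watercolour painting of a fox reading a law book").images[0]
image.save("mascot_attempt.png")
```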

Legal Issues

We have some potential copyright issues before the AI has done anything. The training data has to come from somewhere, and a large enough dataset will invariably contain copyrighted material scraped from the web. While the training data is used to create a model, the model doesn't reproduce that data; one might accurately call it a derivative work. Creating a derivative work typically requires permission from the original author(s) if it is not to amount to copyright infringement, but there are some exceptions (helpfully covered on this very site).

The trained model would likely be considered a fair use on the basis that it has a different purpose (human consumption vs training a computer), is completely unrecognisable compared to the training data, and has zero effect on the value of the training data (i.e., no one is going to look at a model as a substitute for the images it's derived from). The reasoning here is not dissimilar to that used for search engines, which have repeatedly been held to be non-infringing when indexing images (see Kelly v Arriba Soft Corp 336 F3d 811 (9th Cir 2003)), and is possibly even stronger. Outside of the US, the EU has a copyright exception for data mining, and the UK is planning to adopt a similar exception.

The outputs of machine learning are another issue. For starters, are they copyrightable? The UK, bizarrely, had the foresight to legislate for computer-generated works in 1988 with the Copyright, Designs and Patents Act. They don't receive moral rights (Copyright, Designs and Patents Act 1988, s 78), but do receive 50 years of copyright protection (ibid s 12(7)). Outside the UK, the general consensus appears to be that AI-generated works would not be considered copyrightable because they lack a human author. See Eva-Maria Painer v Standard Verlags GmbH and Others (Case C-145/10, EU:C:2011:239, Opinion of AG Trstenjak, [121]) for the EU, and Naruto v Slater (888 F3d 418, 426, 432 (9th Cir 2018)) for the US. This would put AI-generated works in the public domain, free for anyone to use.

One aspect that remains unclear is the extent to which human intervention (e.g., redrawing parts of the art) affects copyright in AI art. The straightforward answer may be that each human intervention receives copyright protection, rather than the work as a whole, although it would be difficult to ascertain which bit is which, creating a de facto copyright in the entire work.

Like the model, the outputs should be so different from the dataset as to not amount to infringement, at least under fair use. Likewise, the EU copyright case of Pelham GmbH and Others v Ralf Hütter and Florian Schneider-Esleben (Case C-476/17, ECLI:EU:C:2019:624) held that "where a user ... takes a sound sample from a phonogram in order to use it, in a modified form unrecognisable to the ear, in a new work, it must be held that such use does not constitute 'reproduction'" (ibid [31]). I see no reason why images would be materially different. One concern that I've seen artists voice is that an AI could produce images very similar to its training data. I do not see why (legally) this would be any different from a person making a very similar image. The fact that it was made by an AI is neither here nor there; it would simply be treated the same way as infringement by a human.

The Stunning Conclusion

Having looked at the law, I think people might be making a mountain out of a molehill. Most of the law seems to have been settled years ago, and there aren't any particularly novel questions being raised. Neither training data nor models would be considered infringing, AI generated works don't attract copyright (except in the UK for some reason), and if an AI is infringing on copyright it's no different than if a human were.

If I had to guess, future developments will probably come in the form of regulation.

I was expecting this to be longer. Perhaps I was a little too brief with this one.