Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology, politics, and science fiction.
Spent many years on Reddit and is now exploring new vistas in social media.
ChatGPT is actually able to translate information it learns in one language into other languages, so if it’s having trouble speaking Bengali it must simply not know the language very well. I recall a study where an LLM was trained on some new information using English training data and was then able to discuss what it had learned when asked about it in French.
No, that’s not remotely what I said and I have no idea how you were able to derive it from that.
If it “rewrites” it in the sense of literally making a copy-and-paste duplicate, then that’s covered by the existing copyright. It would also be a failure as an AI, because there are far easier ways to copy a text file.
If it “rewrites” it as in it makes a distinct book that tells the same basic story but is different in the details, then that’s a new work and either gets a new copyright or is in the public domain (depending on how various lawsuits pan out and what jurisdiction you’re in).
But AI training often does not qualify as fair use.
[citation needed]
Otherwise, intellectual property is treated similarly to other types of property.
Ha!
Intellectual property is nothing like physical property. If it were, why isn’t copyright violation literally “stealing”? People love to throw the word “stealing” around in the context of copyright violation, but the two are completely different areas of law and work completely differently.
It’s no wonder that people get weird about AI training if they are laboring under this basic misunderstanding.
You give servers a license to display what you wrote
That’s all an AI needs in order to get trained on something. They just need to see it.
Oh noes, someone is making money out there off of something I did that I can’t actually make money off of myself.
I have no love for Facebook or any other big giant corporation, but IMO people have really become overly sensitive about this stuff. They think they can send me ads that are more relevant to me now that they’ve seen a few of my posts. That doesn’t harm me at all, I don’t see their ads regardless because I’ve got ad blockers up the wazoo.
Copyright holders can prohibit all use of their work.
No, as I said, copyright holders aren’t kings, and you’re not well versed in the details of copyright law. There are a lot of things a copyright holder can’t prohibit you from doing with their work once it’s been published; the only way they can prohibit all use of their work is to never publish it in the first place.
Again, you’re getting lost in irrelevancies. We’re talking about information people have posted to the Fediverse. That’s a system that’s inherently designed to display that information to any computer that asks for it and to mirror it around to other computers to store and likewise display on request. If you didn’t want all that to happen you should never have posted it in the first place.
Copyright holders are not kings, there are limits to the sorts of things they can prohibit. And in the specific case we’re discussing they have already given permission for their posts to be viewed by the public. You’re getting lost in irrelevancies. If you want to get pedantic, set up a camera facing a browser and let the AI train that way.
If the use of AI is banned within the US, I don’t think Hollywood will be happy about that, nor will all the other big content producers America is known for. The business will simply move elsewhere.
The only question is whether you had permission to use your copy in the manner that you did.
The only permission needed is to look at it.
So for instance suppose you made a copy of a Disney movie in any fashion (by torrent, by videotaping a screening, by screen-capturing Disney+, etc), then showed it to a classroom in its entirety, and then deleted it immediately thereafter.
That’s a public performance, which is a form of redistribution. That’s not relevant to AI training.
Note that it would also make no difference if there were actually no students in the classroom.
[citation needed]
They can’t just “scrape” your medical records without your consent in order to study a particular disease.
The goalposts just swung wildly. Who’s posting medical records on the Fediverse?
I am confident future AI researchers in America can be both ethical and successful.
Except for being banned from using public data that non-American AIs are able to use.
Also, the undefined term “ethical” is a new goalpost just brought into this discussion as well. I’ve found its use to be unhelpful; it always boils down to meaning whatever the person using it wants it to mean.
Yes, and that copy is provided with restrictions. You can view your copy in a browser, but not use it for other purposes.
No, it’s not. I can use it for other purposes. I can’t distribute copies, that’s all that copyright restricts.
Those cases have delineated what Google is and is not allowed to do. It can only store a short snippet of the page as a summary.
Which is way more than what an AI model retains. Fair use is not even required since nothing copyrighted remains in the first place. You’ll first have to show that copyright is being violated before fair use even enters the picture.
Disney does put its work online for people to see. So does the New York Times. That doesn’t mean you can make an unrestricted copy of what you see.
Again, that has nothing to do with all this. AI training doesn’t require “making an unrestricted copy.” Once the AI has learned from a particular image or piece of text, that image or text can be deleted; it’s gone, no longer needed. No copy is distributed under any level of restrictiveness.
Both are illegal in the US
I am Canadian. America’s laws are not global laws. If they wish to ban AI training this will become starkly apparent.
all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs.
I’m rather confused by this. My point is that having the collective’s religious prohibitions and moral scares imposed upon the minority is a bad thing, and that it’s a flaw in “majority rule” that a rights-based legal system is supposed to attempt to counter. It doesn’t always work but that’s the idea. So simply having a large number of people pull out pitchforks and demand that the rights of AI trainers be restricted should not automatically result in that actually happening.
With regard to your scenario about furry art: You’re simply describing a specific example of the general scenario I already talked about. You’re saying that furry artists should have a right to copyright their “style”, which is emphatically not the case. Style cannot be copyrighted (and as a furry-adjacent who’s seen plenty of furry art over the years, I would also very much disagree that every furry artist has a unique style. They copy off each other all the time). You’re also saying that furry artists should have a right to their livelihood, which is also not the case. Civilization changes over time, new technologies and new social movements come along and result in jobs coming and going. Nobody has the right to make a living at some particular career.
You say “A core furry experience is getting art commissioned of your character from other artists.” Well, maybe that was a core furry experience. But the times they are a-changing. My avatar image here on the Fediverse was generated in large part with AI art generators, and I got a much better experience and a much more accurate reflection of what I was going for than I would have got via a commission, and I got it for free. That sucks for the artists but it’s great for everyone else.
And while there are some comparisons you could draw from that situation, photography didn’t fundamentally replace their work verbatim, it merely provided an alternative that filled a similar role.
Does AI art actually replace an artist’s work verbatim? When I made my avatar image I still did a lot of intermediate fiddling steps in the Gimp; AI is just part of my workflow. An artist could also make use of it. Or they could continue making art the old-fashioned way if they want; the mere existence of AI art generators doesn’t affect that ability one whit. All it does is change the market, possibly making it so that they can no longer make a living at their old job.
There are still plenty of painters. But when photography came along there were probably a lot of portrait painters who were put out of work. Over the years I’ve had several family photographs taken in photography studios, but I’ve never even considered commissioning a painter to paint a portrait of myself.
Ultimately the models themselves don’t contain any copyrighted content
And that’s that for basically all the anti-AI legal arguments.
but they (by design) combine related ideas and patterns found in the training data, in a way that will always approximate it, depending on the depth of training data
And there’s absolutely nothing wrong with this. People do it all the time, why is it suddenly a huge moral problem when a machine does? Should it be illegal for someone to go to a furry artist and ask for something “in the style of Dark Natasha”, or for an artist to pick up some of his personal style from Jay Naylor’s work?
I want to publicly express the notion that it’s not a silver bullet, and we need to develop legal frameworks for protecting people now, rather than later.
I actually agree, but the people I think are most in need of protecting are the people who train and use AI models. There are tons of news stories and personal accounts being posted these days about such people being persecuted in various ways: deplatformed, lied about, and so forth. They’re the ones whose rights people are proposing should be restricted.
Making something publicly available does not automatically give everyone unrestricted rights to it.
Of course not. But that’s not what’s happening here. Only very specific rights are needed, such as the right to learn concepts and styles from what you can see.
In the case of AI, if training requires making a local copy of a protected work then that may be copyright infringement even if the local copy is later deleted.
That’s the case for literally everything you view online. Putting it up on your screen requires copying it into your computer’s memory and then analyzing it in various ways. Every search engine ever has done this way more flagrantly than any AI trainer has. There have been plenty of lawsuits over this general concept already and it’s not a problem.
It’s no different than torrenting a Disney movie and deleting your copy after you watched it.
Except that in this case it’s not torrenting a copy that Disney didn’t want to have online for you to see. It’s looking at stuff that you have deliberately put up online for people to see. That’s rather different.
Besides, in many jurisdictions it’s not actually illegal to download a pirated movie, only to upload one. A distinction that people often overlook.
The problem I have with this is that the argument seems to boil down to “I don’t like this so it should be illegal.” It puts me in mind of the classic courtroom joke: “Objection, your honor, on the grounds that it’s devastating to my case.” Laws should have a rationale beyond simply being what “collective morality” decides; otherwise all sorts of religious prohibitions and moral scares end up embedded in the legal system too.
Generally speaking, laws are based on the much simpler and more generic foundation of rights. Laws exist to protect rights, and get complicated because those rights can end up conflicting with each other. So what rights do the two “sides” of this conflict bring to the table? On the pro-AI side people are arguing that they have the right to learn concepts and styles from publicly available data, to analyze that data and record that analysis, and to make use of the products of that analysis. It all seems quite reasonable and foundational to me. On the anti-AI side - arguments based on complete misunderstandings of how the technology works aside - I generally see “because it’s devastating to my future career, your honor.”
Anti-AI artists are simply being selfish, IMO, demanding that society must continue to provide them with their current niche of employment and “specialness” by restricting other people’s rights through new legal restrictions. Sure, if you can convince enough people to go along with that idea those laws will be passed. That doesn’t make them right. There have been many laws over the years that were both popular and wrong on many levels.
Fortunately there are many different jurisdictions in the world. There isn’t just one “The Law.” So even if some places do end up banning AI I don’t think that’s going to slow it down much on a global scale, it’ll just help determine which places get a lead and which places fall behind in developing this new technology. There’s too much benefit for everyone to forego it everywhere.
You’re conflating a bunch of different areas here. Trademark is an entirely different category of IP. As you say, “style” cannot be copyrighted. And the sorts of models that social-media chatter is being used to train are quite different from code generators.
Sure, there is going to be a bunch of lawsuits and new legislation coming down the pipe to clarify this stuff. But it’s important to bear in mind that none of that has happened yet. Things are not illegal by default, you need to have a law or precedent that makes them illegal. And there’s none of that now, and no guarantee that things are going to pan out that way in the end.
People are acting incensed at AI trainers using public data to train AI as if they’re doing something illegal. Maybe they want it to be illegal, but it isn’t yet and may never be. Until that happens people should keep in mind that they have to debate, not dictate.
which pretty irreparably embeds identifiable aspects of it into their model.
No, it doesn’t. The model doesn’t contain any copyright-significant amount of the original training data; it physically can’t, because it isn’t large enough. The model only contains concepts learned from the training data: ideas and patterns, not literal snippets of the data.
The only time you can dredge a significant snippet of training data out of a model is when a particular item appeared hundreds or thousands of times in the training set, a condition called “overfitting” that is considered a flaw and that AI trainers work hard to prevent by de-duplicating the data before training. Nobody wants overfitting; it defeats the whole point of generative AI to use it as a hugely inefficient “copy and paste” function. It’s very hard to find any actual examples of overfitting in modern models.
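To put some rough numbers on the “not large enough” point, here’s a back-of-the-envelope sketch. The figures are approximate public estimates for a Stable-Diffusion-class image model, used purely for scale, not exact specs of any particular release:

```python
# Rough arithmetic: how much weight capacity exists per training item?
# Numbers are approximate public estimates for a Stable-Diffusion-class
# model, used here only to illustrate the scale argument.

params = 860_000_000            # ~860 million parameters
bytes_per_param = 4             # 32-bit floating point weights
model_bytes = params * bytes_per_param

training_items = 2_000_000_000  # ~2 billion image-text pairs

capacity_per_item = model_bytes / training_items
print(f"~{capacity_per_item:.1f} bytes of weight capacity per training image")
# ~1.7 bytes of weight capacity per training image
```

A couple of bytes per image obviously can’t store the image itself; what the weights encode are patterns shared across the whole dataset.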
It’s not literally embedded verbatim anywhere
And that’s all that you need to make this copyright-kosher.
Think of it this way. Draw a picture of an apple. When you’re done drawing it, think to yourself: which apple did I just draw? You’ve probably seen thousands of apples in your life, but you didn’t draw any specific one, or piece together the picture from various specific bits of apple images you memorized. Instead you learned what the concept of an apple is like from all those examples, and drew a new thing that represents that concept of “appleness.” It’s the same with these AIs; they don’t have a repository of training data that they copy from whenever they’re generating new output.
You think Meta can’t pick up some random new IP address just for this?
A better solution would be to either stop fretting about trivialities like this or, if you can’t do that, stop putting your data up on an open protocol that is specifically designed to spread it around and show it to anyone who wants to see it.
It really annoys me how people react with such shock and alarm at how companies are “stealing” their data, when they put said data up in a public venue explicitly for the purpose of everyone seeing it. And particularly in the case of AI training there isn’t even any need for them to save a copy of that data or redistribute it to anyone once the AI has been trained.
And stuff like “I know there’s a library out there that does the thing I’m trying to do, what’s it named and how do I call it?”
I haven’t been using ChatGPT for the “meat” of my programming, but there are so many things that little one-off scrappy Python scripts make so much easier in my line of work.
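For a sense of what I mean by a one-off scrappy script, here’s a made-up example of the five-minute kind of task ChatGPT is handy for (not an actual ChatGPT output):

```python
# A typical throwaway script: tally the most common words in a chunk
# of text, e.g. for a quick look at what's filling up a log file.
import collections
import re

def top_words(text, n=3):
    """Return the n most frequent lowercase words in text."""
    words = re.findall(r"[a-z']+", text.lower())
    return collections.Counter(words).most_common(n)

print(top_words("error error warning info error warning"))
# [('error', 3), ('warning', 2), ('info', 1)]
```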
Ah, I had interpreted your comment to mean that you thought ChatGPT wouldn’t know how to answer a question in Bengali unless the information it needed to solve the problem had been part of its Bengali training set. My bad.