I've been ripping PS3 games for a little while now. I've recently bought some Dynasty Warriors Gundam games from Japan which I'm dumping at the moment.
Armored Core has been a particular thorn in my side. I've got 4 and 5 but I'm still trying to acquire For Answer and Verdict Day; they're extremely hard to come across. I haven't looked at the PS2 games, but I'm sure I could find those for cheap, and they're super easy to rip with a standard disc drive.
Copyright law doesn't talk about who can consume the work. ChatGPT's theft is no different to piracy. Companies have gotten very pissy about their shit being pirated, but when ChatGPT does it (because the piracy is hidden behind its training), it's fine. The individual authors and artists get shafted in the end because their work has been weaponised against them.
LLMs have been caught plagiarising works, by the simple nature of how they function. They predict the next word based on an assumed context of the previous words. They're very good at constructing sentences, but the issue is often "where is it getting its information from?" Authors never consented to their works being fed into an optimisation algorithm, and neither did artists when DALL-E was created.
For authors, you buy the book and thus the author is paid, but that's not what happened with ChatGPT.
My office is a 4m by 4m corner of my bedroom. I'm lucky in that I can devote that much space to it. But it's all about having a place you can dedicate to being your workspace. If that's on a couch, then let it be on the couch. At the end of the day, if you're fulfilling the tasks outlined in your job description then there's really nothing to complain about.
When the pandemic started, my sisters and I would work from the dinner table. Then gradually we all drifted into different rooms, buying desks to work on. Pretty soon we each had our own office in the house. These people don't know, or care to find out, what normal people are like; they make decisions based on their own assumptions, and that's why their employees hate them.
Treat them like humans, take the time to ask them what they think. Have some goddamn empathy, for fuck's sake.
Sure, but the training isn't an algorithm deciding probabilities. Children don't express themselves based purely on their environment; on one side you have nature, and on the other you have nurture.
An example:
The FBI's studies into serial killers uncovered that these people, even though they have been influenced by their environment to become what they are, respond to external stimuli in an abnormal way, which is what leads them down that path to begin with.
A child learns how language and creativity are expressed before attempting to express themselves. These bots aren't built to deal with this expression because, at their core, they are statistical models. A bot looks at a sentence as a series of variables to determine what comes next. The sentence itself could be nonsensical, but the bot doesn't know that; it's using the probabilities it's been trained on to construct the sentence.
You might say bots have their own way of expressing themselves, but I would say that's something we're projecting onto the bot rather than something it's demonstrating itself. I'm sure it's very cute when it apologises for making a mistake, but that apology isn't sincere; it's been programmed to respond that way when it thinks you're pointing out its mistakes. It's merely imitating a sense of remorse rather than displaying actual remorse.
Here's a Substack article I found on Mastodon which does a good job of outlining the issues with ChatGPT: https://karawynn.substack.com/p/language-is-a-poor-heuristic-for
An LLM or art-generation tool is barely equatable to one person. The difference between a child and an art-generation tool is that you could show a child a single picture of a bunny, a bike and a carrot, then ask them to draw an orange bunny riding a bike, and they could draw something resembling that. An art bot would require hundreds to thousands of images of each object to understand what it is before it can even make a reasonable attempt. The level of training required isn't even comparable.
At least the child's drawing will have some personality in it; every output from an art bot ends up looking soulless. The reason is simple: an art bot only imitates what it's been trained on, while an artist draws on inspiration before applying the two things an art bot will never have: intent and purpose.
Once again, being reductive about artists' work. Jackson Pollock's entire career was smashing colours on a canvas. If you want to argue that Pollock had to look at thousands of paintings before making his, I honestly can't take you seriously at that point.
A computer could easily generate such "work" as well with no training data at all.
Yes, and in the eyes of its creators that was deemed a failure, which is why Midjourney and DALL-E are the way they are. These bots don't want to create art; they want to imitate it.
Children have barely any experiences and can still create something. You might not deem it worthy of being called art, but they created something despite their limited knowledge and life experience.
Of course, you'd need books to learn to read and write. The words have to be written, and you need to see the words in written form if you also want to write them. But one thing you don't take into account is handwriting, another thing that is unique to every individual. Some have worse handwriting than others, and with practice (like any muscle) it can be improved, but you don't need to have seen handwritten text before producing it yourself. You only need to be taught how to hold a pen and you can write.
Novels are complex structures of language, just like poetry. In order to write novels you have to consume novels, because it's well understood that to find your own narrative voice you must see how others express theirs. Stories are told in unique ways, and it's crucial as a writer to understand and break these concepts down. Intention and purpose form a core part of storytelling, and an LLM cannot and will not be able to express those things.
They're written in certain ways because the author intended them to be that way, such as Cormac McCarthy deciding to be very minimalist with his punctuation.
I would love to see you argue that an LLM, without being specifically prompted to do so, would make that stylistic decision. An LLM can't make that decision because, unless you specify a style it is aware of, it won't do it organically.
I am also a writer; I've written a short story. One of my stylistic choices is that I don't use dialogue tags like "said". An LLM won't make that choice because it isn't designed to do so; it won't decide to minimise its use of dialogue tags to improve the flow of the narrative unless you tell it to.
It's also completely ignoring the fact that you had to previously learn the spoken language as well (which is a vast quantity of information that takes a human decades to acquire proficiency in, even with daily practice).
Yes, in order to learn a spoken language you have to have heard it. However, languages evolve over time. You develop regional accents and dialects. All of the UK speaks English, but no two towns speak the same way.
I wasn't talking about copyright law in regards to the model itself.
I was talking about what is and isn't grounds for plagiarism. I strongly disagree with the idea that artists and art bots go through the same process; they don't, and it's reductive to claim otherwise. Asserting that these models can automate a creative process negatively impacts the perception of artists' work, because that process might not even involve looking at other artists' work at all; humans are able to create on their own.
A person who has never looked upon a single painting in their life can still produce a piece, but the same cannot be said for an art bot. A model must be trained on the work you want it to be able to imitate.
This is why ChatGPT required the internet to do what it does (the privacy violation is another big concern there). The model needed vast quantities of information to be sufficiently trained because language is difficult to decipher. Languages evolved by coming into contact with other languages and organically making new words. ChatGPT will never invent a new word because it's not intelligent; it is merely imitating intelligence.
This is stupid, and I'll tell you why.
As humans, we have a perception filter. This filter is unique to every individual because it's fed by our experiences and emotions. Artists make great use of this by producing art which leverages their view of the world; it's why Van Gogh and Picasso are interesting: they had a unique view of the world that shows through their work.
These bots do not have perception filters. They're designed to break down whatever they're trained on into numbers and decipher how the style is constructed so they can replicate it. There is no intention or purpose behind any of their decisions beyond straight replication.
You would be correct if a human's only goal was to replicate Van Gogh's style, but that's not every artist. With these art bots, that's the only goal they will ever have.
I have to repeat this every time there's a discussion on LLMs or art bots:
The imitation of intelligence does not equate to actual intelligence.
I've read that you can use a Blu-ray drive; the RPCS3 website has a proper guide on it, though I haven't tried it. I jailbroke my PS3 (not fully, because it's one of the 320GB models) and use that with FileZilla to dump games over the network onto my PC.