i'm lizard 🦎
"If we don't let the oppressors roam freely, they might try to oppress you" is not something I expected to read from the EFF today. But well, here we are.
It has been standard internet behavior that if a platform does not respond properly to abuse complaints, you move up a layer until you find someone who is receptive. This has been standard operating procedure for more or less the entirety of the current millennium, and this article does absolutely zero work to provide a good reason it should be otherwise, beyond generic "free speech" stuff.
You should not get a path out of that process just because the layer immediately above the problematic entity is actively choosing to disregard abuse complaints. You simply move up to the next step. And this process simply must keep existing; abandoning it would allow people to pull off all kinds of bad things: scams, spam, illegal activity and far more.
And if you abolish the non-legal form of that process? Well, there's still a legal process - and as soon as someone who wants to censor minorities gets control over the legal process, they will simply change the rules in their favor, as has happened countless times in the past.
I find it strange that Nebula is both the cheapest streaming subscription I have and the one I get the most use out of. I will say I'm slowly getting tired of it though; it's getting to the point where it needs a block-creator button. Getting rid of clickbait was a selling point, but it's starting to creep in hard: there are stupid red arrows pointing at random things and obviously poor titles all over the recent videos page. It wasn't like this a year ago.
The argument does exist. This article by PEN America is one of the most widely spread ones and largely misrepresents the situation. It's based on a PopSci article with a similar headline, though the contents of that article tell a rather different story.
Nothing really says out loud what's going on: Republicans enacted an extremely vague book ban with an unrealistically short deadline as part of a bill (which does some other stuff, like removing AIDS education), forcing schools to either throw out every book that might be vaguely suspect or resort to funny measures like this. This school's use of ChatGPT was purely to save books that were on a human-assembled list of challenged books, to reduce the negative effect of the book ban while remaining potentially defensible in court (it remains to be seen how that'll work out, but they made an "objective" process and stuck to it; that's what matters to them).
Okay, the thing that really matters to me:
"Frankly, we have more important things to do than spend a lot of time trying to figure out how to protect kids from books," Exman tells PopSci via email. "At the same time, we do have a legal and ethical obligation to comply with the law. Our goal here really is a defensible process."
According to Exman, she and fellow administrators first compiled a master list of commonly challenged books, then removed all those challenged for reasons other than sexual content. For those titles within Mason City's library collections, administrators asked ChatGPT the specific language of Iowa's new law: "Does [book] contain a description or depiction of a sex act?"
It really only got rid of things that would've otherwise had to go to begin with, while saving a few others.
It feels closer to malicious compliance than truly letting the AI decide the fate of things, and full proper compliance within the 3 months they were given would've been nigh impossible. I suspect the lawmakers were hoping that, by giving schools such a small timeframe, they would throw out everything vaguely suspect. This ultimately leaves more books accessible, which I consider a good end result, even if the process to get there is a little weird.
If you're making something to come up with recipes, "is this ingredient likely to be unsuitable for human consumption?" should probably be fairly high up your list of things to check.
Somehow, every time I see generic LLMs shoved into things that really do not benefit from an LLM, those kinds of basic safety checks never seem to have occurred to the person making it.
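To be clear about how low the bar is: even a crude guardrail would catch the obvious cases. This is a hypothetical sketch (the deny list and function are my own invention, not anything from a real product; a real system would need a vetted food-safety database rather than a hardcoded set):

```python
# Hypothetical sketch of a bare-minimum ingredient guardrail for a
# recipe generator. The deny list here is a tiny stand-in; a real
# deployment would check against a curated food-safety database.
UNSAFE_INGREDIENTS = {"bleach", "ammonia", "glue", "antifreeze"}

def is_recipe_safe(ingredients: list[str]) -> bool:
    """Return False if any ingredient matches the deny list."""
    return not any(
        unsafe in ingredient.lower()
        for ingredient in ingredients
        for unsafe in UNSAFE_INGREDIENTS
    )

print(is_recipe_safe(["flour", "sugar", "eggs"]))  # True
print(is_recipe_safe(["water", "bleach"]))         # False
```

Even this trivial filter, run on the model's output before showing it to a user, would have stopped the headline-grabbing failures.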
I do, and I can confirm there are no requests (except for robots.txt and the odd /favicon.ico). Google sorta respects robots.txt. They do have a weird gotcha, though: they still put the URLs in search results, they just appear with a useless description. Their suggestion for avoiding that can be summarized as "don't block us, let us crawl and just tell us not to use the result, just trust us!", when they could very easily change that behavior to make more sense. Not a single damn person who blocks Google in robots.txt wants to be indexed. Their logic about password protecting kind of makes sense, but my concern isn't security; it's that I don't like them (or Bing, or Yandex).
Another gotcha I've seen linked is that their ad targeting bot for Google AdSense (a different crawler) doesn't respect a * exclusion, but that kind of makes sense, since it will only ever visit your site if you place AdSense ads on it.
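For reference, the setup I'm describing looks roughly like this. Googlebot is Google's search crawler and Mediapartners-Google is the AdSense crawler; because the latter ignores the blanket * rule, it has to be named explicitly if you want it gone too:

```text
# Block Google's search crawler entirely. As noted above, the URLs may
# still appear in search results, just with a useless description.
User-agent: Googlebot
Disallow: /

# Blanket rule for everyone else.
User-agent: *
Disallow: /

# The AdSense crawler ignores the * rule, so it must be named
# explicitly. (It only visits pages that serve AdSense ads anyway.)
User-agent: Mediapartners-Google
Disallow: /
```

This goes in a robots.txt file at the root of the site; crawlers that honor the standard fetch it before anything else.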
And I suppose they'll train Bard on all the data they scraped, because of course they will. Probably no way to opt out of that without opting out of Google Search as well.
I guess a CEO opened the YouTube frontpage while logged out and went "what is this shit".
But seriously, this seems like a good thing overall. The "default"/empty-history algorithm recommendations are truly, truly horrifying more often than not. It's almost entirely low-quality clickbait, and I can't imagine many people actually appreciate it like that.
If such a process existed, the entity in question would almost certainly end up being shut down by it, unless they found some funny technical loophole around it, in which case that would be a failure of the law that nobody should celebrate.
But as it stands, that law and process do not exist; ISPs already can and will shut you down for things like downloading copyrighted content (with or without complaints from the copyright holder), tethering without approval, being a technical nuisance in the form of mass port scanning, hosting insecure services, and other such stuff. "Hosting a platform solely dedicated to harassment and stalking while ignoring abuse complaints about it" absolutely deserves to be on that list.