From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models https://aclanthology.org/2023.acl-long.656.pdf

davehtaylor

Technology is not apolitical, because humans are not apolitical. Anyone who says they are or claims to be “neutral” or “centrist” simply means their ideals align with the status quo.

This is a problem with all sectors of tech, but especially in places where algorithms have to be trained. For example, facial recognition systems are notoriously biased against anyone who isn’t cis and white. Fitness trackers/smart watches/etc. have trouble reading accurately through darker skin tones. Developers encode implicit biases because they are oblivious to the fact that their experiences aren’t universal. If your dev team and your company at large aren’t diverse, that lack of diversity is going to show through in your product, intentional or not. How you shape the algorithms, what data you feed them to train them, and so on are all affected by those things.

Anyone who refuses to accept this and insists on holding on to the idea that somehow “computer” means “neutral and objective” is generally not worth engaging in any discussion about LLMs/AI/etc. Their partisan blinders are impenetrable.

I’d add the caveat that some technologies are more political than others, too.

> Anyone who says they are or claims to be “neutral” or “centrist” simply means their ideals align with the status quo.

Or frequently “I actually find politics too boring and complicated but don’t want to admit it”.

Garbage In, Garbage Out. One of the oldest and most immutable laws of computer science.

It’s the end result of training your AI on mountains of biased human thoughts.

Not trying to be a smartass, but what’s the alternative?

FaceDeer

Does there have to be one? It’d be nice if there were, of course, but this is currently the only way we know of to make these AIs.

Well, you can focus on rule-based/expert system style AI, a la WolframAlpha. Actually build algorithms to answer questions that are based on scientific fact and theory, rather than an approximated consensus of many sources of dubious origin.
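To make the contrast concrete, here is a toy sketch of the rule-based/expert-system style being described: answers come from curated facts and explicit derivation rules rather than statistical consensus over text. The facts and rules here are illustrative stand-ins, not a real knowledge base.

```python
# Minimal expert-system-style answerer: curated facts plus explicit
# rules, so every answer has auditable provenance.

FACTS = {
    ("water", "boiling_point_c"): 100,  # at 1 atm
    ("water", "freezing_point_c"): 0,
}

# Rules derive new answers from vetted facts; extending the system
# means adding a reviewed fact or rule, not scraping more text.
RULES = {
    "boiling_point_f": lambda s: FACTS[(s, "boiling_point_c")] * 9 / 5 + 32,
}

def answer(substance: str, question: str):
    """Look up a fact, derive via a rule, or admit ignorance."""
    if (substance, question) in FACTS:
        return FACTS[(substance, question)]
    if question in RULES:
        return RULES[question](substance)
    return None  # unlike an LLM, it can simply say "I don't know"

print(answer("water", "boiling_point_c"))  # 100
print(answer("water", "boiling_point_f"))  # 212.0
```

The trade-off, of course, is coverage: every fact and rule has to be hand-vetted, which is exactly why this approach fell out of fashion.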

Ooo, old school AI 😍

In our current cultural consciousness, I’m not sure that even qualifies as AI anymore. It’s all about neural networks and machine learning nowadays.

Stefen Auris

I guess shoving an encyclopedia into it. I’m not sure really, it is a good point. Perhaps AI bias is as inevitable as human bias…

interolivary

Despite what you might assume, an encyclopedia wouldn’t be free from bias. It might not be as biased as, say, getting your training data from a dump of 4chan, but it’d absolutely still have bias. As an on-the-nose example, think about the definition of homosexuality; training on an older encyclopedia would mean the AI now thinks homosexuality is a crime.

And imagine how badly most encyclopedias would reflect on languages and cultures other than the one that made them.

radix

The alternative is being extremely careful about what data you allow the LLM to learn from. Then it would have your bias, but hopefully that’ll be a less flagrantly racist bias.
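A toy sketch of what that careful curation might look like, assuming a hypothetical `toxicity_score` heuristic (real pipelines use trained classifiers, which of course import their own biases, as this thread keeps pointing out):

```python
# Illustrative training-data curation: drop documents whose toxicity
# score exceeds a threshold. The scoring function is a crude stand-in.

def toxicity_score(text: str) -> float:
    """Fraction of words on an illustrative blocklist (placeholder terms)."""
    blocklist = {"slur1", "slur2"}  # not a real list
    words = text.lower().split()
    return sum(w in blocklist for w in words) / max(len(words), 1)

def curate(corpus, threshold=0.0):
    """Keep only documents at or below the toxicity threshold."""
    return [doc for doc in corpus if toxicity_score(doc) <= threshold]

docs = ["a perfectly fine sentence", "this contains slur1"]
print(curate(docs))  # ['a perfectly fine sentence']
```

Whoever writes the blocklist or trains the filter decides what counts as flagrant, which is the bias-with-extra-steps problem in miniature.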

Rikudou_Sage

> The models that were trained with left-wing data were more sensitive to hate speech targeting ethnic, religious, and sexual minorities in the US, such as Black and LGBTQ+ people. The models that were trained on right-wing data were more sensitive to hate speech against white Christian men.

White Christian men is an awfully specific group for the model to be sensitive towards, IMO.

Right-wing media is perceived to be funded by white Christian men, so if that is the source of the data, I’m not too surprised their writing and articles would protect that group. It’s still intriguing that the model picked this up from online discussions and news data, and became sensitive to hate speech aimed at that group specifically, compared with the left-leaning data, which appears more inclusive. Although that is probably indicative of the very bias they’re studying in the article.

radix

I mean, hate speech aimed at left-wing people is more diverse generally than hate speech aimed at right-wing people because the left simply is more diverse in gender, orientation, ethnicity, religion, etc. Isn’t that universally accepted?

(Please correct me if I’m wrong, I approach in good faith!)

I don’t think you’re wrong at all tbh - from my perspective the left is always going to be more diverse, whereas the right isn’t very inclusive by default unless you “fit in” IMO

FaceDeer

They become more human every day.

