Yes, but only on motorcycles. That’s because there’s no such thing as an automatic motorcycle[1][2][3][4][5], so you have to learn manual if you want to ride one. Unfortunately this skill doesn’t transfer well to manual driving because on bikes you operate the clutch with your hand and the shifter with your foot. I’m not terribly worried about that, though… I’ve literally never even been inside a manual-drive car before!

For context: I’m mid-20s from the American south.


  1. No, electrics don’t count. ↩︎

  2. No, semi-autos don’t count. ↩︎

  3. No, three-wheelers don’t count. ↩︎

  4. No, the 2006 Yamaha bikes don’t count because that line was a sales failure. ↩︎

  5. Ok, fine. Honda’s DCT bikes do count, but holy shit are they expensive! ↩︎


Yeah that happens to me too, but I’m often happy with that because it fits well with my personal tendency to hyperfixate on new topics for a while.

What I like to do is ride out a trend until I lose interest and then start “Not Interested”-ing them. It doesn’t take long for them to fall out of my feed, and from that point on they only surface every once in a while when a particularly strong video is making the rounds.

Don’t hesitate to throw out negative feedback even for stuff you feel lukewarm about. Youtube can take the hint without going overboard and forgetting that interest entirely.


I always keep watch history turned on, because the recommendation system has always sucked with it turned off. At least it’s more honest to the user now that Youtube gives up outright instead of intentionally sucking – “we can’t give recommendations if we don’t know what you tend to watch”. That basically makes sense to me, and I accept the tradeoff this poses.

I know a lot of people think Youtube recommendations always suck and are therefore not even worth trying, but I beg to differ. You can cultivate good recommendations, even if your interests have no overlap with the default front page. It comes down to two basic ingredients:

  1. Use the “Not Interested” button on bad recommendations
  2. Click on the like/dislike buttons after watching videos

By default Youtube is going to try feeding you lowest common denominator junk. This is because it starts out knowing very little about you besides broad demographics. The more feedback you give it the less it falls back on this crutch until eventually you get solid recommendations. Every single bad recommendation is a hidden opportunity to tell Youtube to get that garbage out of your face.

And, yeah… in my experience this really works. If you click the buttons and make it a habit, you can get some really great stuff! As encouragement, I’ll share a selection from my home feed full of fresh videos relevant to my tastes. Even the topic bar is on point:

[Image: a mobile screenshot of the Youtube home page showing three videos: “Islamic Denominations Explained” by Useful Charts, “Popular Misconceptions About Mythbusters” by Adam Savage’s Tested, and “Ranking Anime Denny’s” by hazel]

I’ll probably watch all 3 of these videos at some point, which I think indicates a pretty successful outcome. In fact, over the years, I’ve found hundreds of channels almost exclusively through the recommendation system. Even if you primarily stick to your subscription box, improving your recommendations can help you build that out little by little.

(Note: I am deliberately avoiding the question of whether or not one should want an algorithm to intimately understand their interests because that’s a hard conversation and my soul has already long since been sold)


Claude doesn’t believe in you.



Table 3 and Table 4 aren’t combined because they assessed different regions of interest (ROIs). The tables don’t contradict each other, because they don’t even include the same ROIs:

Presented in Table 4 are ROIs that were assessed in only two studies and show significant SMDs between the ADHD and control subjects.

As for the heterogeneity, the paper notes which ROIs failed to remain statistically significant after correction. The prefrontal region is not included in this list:

Although frontal gray and white matter and premotor ROIs show substantial SMDs ranging from .59 to .75, they also show statistically significant levels of heterogeneity, indicating rather variable results across the two studies in each meta-analysis. Due to the lack of power for the meta-analyses in this table, we need to interpret these results with caution. For example, the measures of intracranial volume, frontal lobe, right amygdala, and the splenium using the O’Kusky et al. (1988) method failed to remain significant after correction for multiple comparisons.

As you say, the studies aren’t golden, but that’s why I picked a meta-analysis. To be honest, if I knew I was going to be held to such a high standard, I would have just kept my mouth shut!


I’m posting a source for my original claim: “we can consistently visually identify an ADHD prefrontal cortex in brain scans”. This is not the same as a source which proves that brain scanning is able to identify/diagnose ADHD itself. The order of operations is reversed, because the only way to diagnose a disorder like ADHD is through observation of symptoms, not physiology.

To be clear: what I claim is that you can compare brain imaging of an average individual (oxymoron notwithstanding) diagnosed with ADHD against the brain imaging of an average individual not diagnosed with ADHD and visually see a difference.

Source for this claim, w/ attention to table 4: https://sci-hub.se/https://doi.org/10.1016/j.biopsych.2006.06.011


“All of the scientific literature that I have ever read on the topic has strongly stated that there is no way to identify ADHD from brain scans or anything like that”

Identify ≠ diagnose. You also cannot diagnose ADHD with a genetic test, despite genetics being a strong indicator. I alluded to this by following up with “when reduced volume is observed”, but you’re right in saying that it would have been less misleading to state directly that brain scans are never in and of themselves used to diagnose ADHD.

“Also, no, any situation you describe wouldn’t be diagnosable as ADHD, one of the requirements of an ADHD diagnosis is that the condition is present from birth.”

If we’re talking DSM-5, the criterion is actually that the onset of symptoms occurs by 12 years of age. Even if you take the DSM-5 as gospel, it’s entirely possible for a 6-year-old to experience a traumatic brain injury to the prefrontal cortex, heal from the initial trauma, continue to demonstrate symptoms, and then receive an ADHD diagnosis. You might call that a misdiagnosis, but I don’t see much of a difference if the symptoms and treatment are the same. There are also recent studies exploring the development of ADHD secondary to traumatic brain injury in adults, which I think could eventually warrant further broadening of the diagnostic criteria.


Well, technically it’s a disorder which can emerge from any number of different causes. Yes, generally ADHD emerges as a developmental issue, but you could arrive at the same physiology through sufficiently specific neurodegeneration or brain trauma and these things would still be diagnosable as ADHD and even effectively treated using ADHD medication.

Saying that might seem like a stretch, but consider the fact that we can consistently visually identify an ADHD prefrontal cortex in brain scans. When reduced volume is observed, it’s even possible to predict to some extent the symptom severity by how much appears to be missing. For several decades of research, the precursor to the modern ADHD diagnosis was even called “minimal brain damage”.


So, this is a question with a cultural and legal element. Legally speaking, it is possible in many U.S. states to be fired for no reason – the employer does not need to explain themselves when asked for a cause[1]. This is to say that it’s perfectly legal in (many) U.S. states to be fired for a reason as petty as a customer complaint – whether or not that was the official cause[2].

With that being said, employers aren’t compelled to fire their own employees in response to a customer complaint. From a management perspective, it’s generally very inefficient to fire someone because you’ll then have to cover their hours and find/train a replacement. For that reason alone, it’s already rare in most industries for truly petty firings to happen. Unfortunately, this rule of thumb gets totally flipped in low-training industries whenever there’s a surplus of bodies in the labor pool. As a manager, if you’re able to replace a burnt-out and/or below-average worker by the end of the week, why wouldn’t you roll those dice?

Even then, it’s not exactly a daily occurrence even in settings where these conditions are common… with one big exception. When it comes to businesses which serve “regulars” (e.g.: hotels, restaurants, grocery stores), there exists a certain type of individual who expects that their complaints will have the power to get people fired. This variety of power-starved person tends to exclusively patronize establishments where they feel taken seriously. Such establishments deliberately choose to indulge these sleazebags because they’re potential “whales” – people who, if handled correctly, will be worth much more money than the replacement cost of the staff they cause to be fired. These firings are basically performative in nature and have nothing at all to do with anything the employee could have controlled.


  1. Protected classes are a whole other can of worms. For the purposes of this explainer, please just trust me when I say that the legal system is still able to protect protected classes without directly requiring paperwork from the employers themselves. The system would be significantly better at this job with a paper-trail requirement, but the fact that it manages to work at all when employers can basically ghost employees is worth noting. ↩︎

  2. Another can of worms! As you may imagine, when giving a reason is optional, it is often (but not always) legally advantageous for employers to report petty firings as no-cause firings. It’s all about CYA. For example, if they’re doing something dicey like racial discrimination or retaliation against union organizers, an employer might go in the opposite direction and meticulously document dozens of petty reasons in excruciating detail. This is usually what’s happening when a service-worker employee is “written up” – that information goes in a file to be used against them if they ever sue. ↩︎


Well sure, we can take it as a given that sex basically exists in its own special category. Biologically speaking, it’s an impulse older than almost any other. I think that’s self-evident enough without any need to tap into mysticism.

(Content warning: sexual violence in human history, abstract)

With that being said, it could be argued that r-word is also deeply ingrained within human biology, particularly in the context of warfare. Even if we discount the (extensive) evidence within the anthropological record demonstrating this, there are clues baked into human physiology which seem to indicate that the human species is uniquely adapted to perpetrating r-word compared with other hominid species.

(Content warning concluded)

I apologize for bringing such a nasty subject up at all, but it’s useful to weigh such things when talking about the deep biological roots of sex and how it makes us think/feel. I personally believe that it’s too limiting to describe sex as an implicitly pure thing which only becomes wrong when certain impure people corrupt it. Please don’t take that as a doomer statement! I personally see it as a triumph that, through culture, we can collectively transform an act as ambiguous as sex into an idealized and pure expression of interpersonal love. Nevertheless, I do still try to be mindful of the capacity for sex to exist outside the box we’ve crafted for it.


I can tell I’ve struck a nerve here. I apologize for the harm that has caused. I am sorry.

And, yes. I do have a concept of personal space. I do think that forced sex is worse than a forced haircut. I understand the point you’re getting at, but I would appreciate it if you didn’t try to make it in such a forceful way next time. Thank you for responding.


I also just really hate the idea of applying financial logic to something like this. Like, are we just gonna go and label the entire human race as a Ponzi scheme? Grandpa’s not able to pull his weight lately so fuck him? That’s a rhetorical question, obviously. The reality is that old people are going to cost what they cost and everyone else just has to suck it up because the alternative is way uglier.


Rest assured, it is not necessary to explain the concept to me. I just like exploring the underlying why that leads to the how. My intention was to provide food for thought, not provoke the internet into explaining for me the joys of sharing romantic sex.


Now that you mention it, isn’t it odd that it feels weird? I wonder exactly where the line starts to come into focus between something as innocuous as paying for a meal and something as taboo as paying for sex? Obviously that’s a question of culture, but it’s entertaining to think about nonetheless…

Like, there’s definitely something kind of unusual about this specific taboo. Speaking from the perspective of modern western culture, I’d say that the following things, which share some characteristics with prostitution, all individually qualify as relatively socially acceptable:

  • Paying for therapy (i.e.: buying the service of social comfort)
  • Paying for a massage (i.e.: buying the service of physical comfort)
  • Having a one night stand (i.e.: receiving the service of sexual comfort without buying it)
  • Buying a sex toy (i.e.: buying sexual comfort without involving a service worker)

I posit that there’s something specific about the direct intersection of service, money, and sexual pleasure which makes prostitution uniquely uncomfortable for (modern western) people to think about. I might be overthinking it, though. Perhaps these three things are already uncomfortable topics to really think about, so we naturally want to resist the idea of combining them?


We can do better.

I’m guessing “wrong-sider” would be a step in the wrong direction?



Ah, never mind – sorry about the trouble. He’s a cofounder of the company whose logo you’re using as an avatar (“Ronimo”, i.e.: “Robot Ninja Monkey”).


This is a long-shot question, feel free to disregard… but I have to ask: is that you, Joost?


Spite them even harder: just tar it without compression.
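
For anyone who wants a concrete starting point, here’s a minimal Python sketch of exactly that – a plain tar archive with zero compression. The file and directory names are placeholders I made up:

    import tarfile

    # "w" mode writes an uncompressed tar archive
    # (as opposed to "w:gz", "w:bz2", or "w:xz").
    # "spite.tar" and "stuff" are hypothetical names.
    with tarfile.open("spite.tar", "w") as tar:
        tar.add("stuff")  # recursively adds the directory, bytes untouched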