Formerly u/CanadaPlus101 on Reddit.

  • 1 Post
  • 506 Comments
Joined 2Y ago
Cake day: Jun 12, 2023


The entire field isn’t therapy.


“Must be at least this rich to ride”



Yeah, similar weather relatively speaking.

I’ve never been to Toronto, so I can’t talk too much trash, but I have been to Vancouver many times and experienced how awesome it is. And, they both cost a similar amount!


Privacy, food safety and environmental regulation basically mean Europe, but then Europe has crazy anti-migrant sentiment at this point. So, maybe one of the Scandinavian countries that’s still relatively welcoming? Portugal might also track, if you don’t mind a country that’s economically moribund.


Honestly I don’t get what the hype with Toronto is. It costs like Vancouver but with Calgary’s weather and general vibes.


It’s hard to imagine a world with no freedom of thought being better, somehow.

In practice, I doubt we’ll ever have to sacrifice much more than we already have. (Which is actually a significant amount. For example, until recently, living on a schedule was for ascetics and flagellants.)


Okay, so it looks like nobody read your text. Sorry about that.

Edit: I suppose I should actually answer. The main thing is that you’re going to have to communicate with people who can taste. They’re going to notice things you don’t, and that can even be safety things if there’s an ingredient that has spoiled.


Looking at the responses, I’m guessing Lemmy isn’t a representative sample.


A lot of them don’t even go into it to teach, it seems. More just to be the smartest person in the room.


I’m guessing the derivation from first principles. I too learned the rules years before I was shown it, and it was just so cool to see where they came from.


To be fair, expressions tend to be way, way smaller than a codebase. The math community was never forced to improve in the same way. Actually, the symbols were themselves an innovation; in ancient Greece they just had to try and explain that shit in long, tortured natural language sentences.

I really, really hope nobody feels like I’m trying to be unclear with them. I know I sometimes am, though.


It sounds like you’d know better than me, haha. Since they’re talking about being capital-lean I’m guessing they must outsource the frame pressing. Having a rare, super-specialty injection molding machine would not be lean.

IIRC they mentioned fibre reinforcement, but it couldn’t possibly be the aerospace-style precision product, exactly because that would cost a lot.

Edit: And I’m guessing cold-setting resin would be too expensive?


Hmm. Well, plastic can have a pretty good strength-to-weight ratio, though it takes up more volume in the process. If sheet metal can do it, maybe they went all-plastic.

If they’re including fibres too, that famously exceeds metal’s rigidity, depending on how precisely it’s done.
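For a rough sense of why the layup matters so much, here are ballpark specific-stiffness figures (stiffness per unit of weight). These are generic textbook-ish values I'm assuming for illustration, nothing specific to their frame:

```python
# Ballpark specific stiffness (Young's modulus divided by density).
# All values are rough, generic numbers assumed for illustration only.
materials = {
    # name: (E in GPa, density in kg/m^3)
    "mild steel":            (200.0, 7850.0),
    "aluminium alloy":       (69.0, 2700.0),
    "nylon (unreinforced)":  (3.0, 1140.0),
    "CFRP, quasi-isotropic": (55.0, 1600.0),
    "CFRP, unidirectional":  (135.0, 1600.0),
}

for name, (e_gpa, rho) in materials.items():
    specific = e_gpa * 1e9 / rho / 1e6  # MJ/kg
    print(f"{name:<24} E/rho = {specific:5.1f} MJ/kg")
```

On those numbers, a unidirectional carbon layup beats steel per kilogram by a wide margin, a quasi-isotropic one only modestly, and plain unreinforced plastic doesn't come close.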


Yeah, friction losses scale with angular velocity and not torque, and moving a ton of metal takes torque. Don’t forget the braking losses, though, unless it’s a hybrid of some kind. There’s no turning movement back into fuel the way you can turn it back into electricity.

The point is, if you’re looking for good range, there are several dials that can be adjusted on an ICE car, related to the prime mover. On an EV, drag is the start and finish of the considerations (unless you’re going to move it onto rails, maybe). And of course range is a huge deal, because a liter of secondary cell can’t come close to the energy density of a liter of petrol and 38 liters of ambient air.
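To put rough numbers on that last point (ballpark figures of my own, not from any spec sheet), even after accounting for how wasteful an ICE is, petrol still comes out well ahead per litre:

```python
# Back-of-envelope: usable energy per litre, petrol in an ICE vs. a Li-ion cell.
# All numbers are rough ballpark figures I'm assuming, not measurements.

PETROL_MJ_PER_L = 34.0   # lower heating value of gasoline, roughly 34 MJ/L
ICE_EFFICIENCY = 0.30    # optimistic tank-to-wheel efficiency for an ICE
LIION_WH_PER_L = 600.0   # volumetric density of a decent Li-ion cell, ~600 Wh/L
EV_EFFICIENCY = 0.90     # battery-to-wheel efficiency for an EV drivetrain

liion_mj_per_l = LIION_WH_PER_L * 3600 / 1e6  # convert Wh/L to MJ/L

petrol_usable = PETROL_MJ_PER_L * ICE_EFFICIENCY   # ~10 MJ/L at the wheels
liion_usable = liion_mj_per_l * EV_EFFICIENCY      # ~2 MJ/L at the wheels

print(f"Petrol, usable per litre: {petrol_usable:.1f} MJ")
print(f"Li-ion, usable per litre: {liion_usable:.1f} MJ")
print(f"Ratio: {petrol_usable / liion_usable:.1f}x")
```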


Are truck chassis usually stamped? I had assumed they were made from cast components.


Dope. I wonder if there’s a way to customise it into a sedan. I can speak less to the mechanical aspects of having a super-bespoke super-integrated manufacturing process, but I’m confident the electronics part needs to go back to basics like this.


Quite possibly. They’re gambling on a market for a no-frills car existing, but it might just be too small. That’s what killed economy cars the first time.


Volt or Bolt? Volt is a hybrid.

If Bolt, I’m guessing that was a very old one that will get like 50km of range.


As I understand it, the aerodynamics can be no joke on EVs. The acceleration is very efficient, the regenerative braking is too, and an object in motion just continues in motion until there’s a force acting on it. That means drag is pretty much where your whole battery charge goes. (I’m not sure exactly how much tire flexing accounts for.)

For an example off the top of my head, the Arrow concept car manages 500km partly by not having side mirrors. Compare that to an ICE, which wastes most of the fuel’s energy as heat, though to a widely varying degree depending on design and whatever energy recovery features are implemented.
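If it helps make that concrete, here's a toy steady-cruise power budget. The vehicle parameters (drag coefficient, frontal area, mass, rolling resistance) are values I'm assuming for a generic sedan, nothing measured; the takeaway is just that drag overtakes rolling resistance as speed climbs and then dominates:

```python
# Rough highway-cruise power budget for an EV: aerodynamic drag vs. rolling
# resistance. All parameters are assumed ballpark values for a mid-size sedan.

RHO = 1.2        # air density, kg/m^3
CD = 0.25        # drag coefficient (assumed; sleek EVs are roughly 0.22-0.28)
AREA = 2.3       # frontal area, m^2 (assumed)
MASS = 1800.0    # vehicle mass, kg (assumed)
CRR = 0.010      # rolling resistance coefficient (assumed)
G = 9.81         # gravitational acceleration, m/s^2

def power_kw(speed_kmh: float) -> tuple[float, float]:
    """Return (drag, rolling) power in kW at a steady cruise speed."""
    v = speed_kmh / 3.6
    p_drag = 0.5 * RHO * CD * AREA * v ** 3  # drag power grows with v cubed
    p_roll = CRR * MASS * G * v              # rolling power grows linearly
    return p_drag / 1000, p_roll / 1000

for kmh in (50, 100, 130):
    drag, roll = power_kw(kmh)
    share = drag / (drag + roll)
    print(f"{kmh:>3} km/h: drag {drag:4.1f} kW, rolling {roll:4.1f} kW, "
          f"drag share {share:.0%}")
```

With these made-up numbers, drag is about a quarter of the load at 50 km/h but roughly three quarters at 130 km/h, which is why mirrors, panel gaps, and body shape matter so much for EV range.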


“Will AI Destroy Us?”: Roundtable with Coleman Hughes, Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson
cross-posted from: https://lemmy.sdf.org/post/2617125

> A written out transcript on Scott Aaronson's blog: https://scottaaronson.blog/?p=7431
>
> -------------------------
>
> ::: spoiler My takes:
>
> > ELIEZER: What strategy can a like 70 IQ honest person come up with and invent themselves by which they will outwit and defeat a 130 IQ sociopath?
>
> Physically attack them. That might seem like a non-sequitur, but what I'm getting at is that Yudkowsky seems to underestimate how powerful and unpredictable meatspace can be over the short-to-medium term. I really don't think you could conquer the world over wifi either, unless maybe you can break encryption.
>
> > SCOTT: Look, I can imagine a world where we only got one try, and if we failed, then it destroys all life on Earth. And so, let me agree to the conditional statement that if we are in that world, then I think that we’re screwed.
>
> Also agreed, with the caveat that there's wide differences between failure scenarios, although we're probably getting a random one at this rate.
>
> > ELIEZER: I mean, it’s not presently ruled out that you have some like, relatively smart in some ways, dumb in some other ways, or at least not smarter than human in other ways, AI that makes an early shot at taking over the world, maybe because it expects future AIs to not share its goals and not cooperate with it, and it fails. And the appropriate lesson to learn there is to, like, shut the whole thing down. And, I’d be like, “Yeah, sure, like wouldn’t it be good to live in that world?”
> >
> > And the way you live in that world is that when you get that warning sign, you shut it all down.
>
> I suspect little but reversible incidents are going to happen more and more, if we keep being careful and talking about risks the way we have been. I honestly have no clue where things go from there, but I imagine the tenor and consistency of response will be pandemic-ish.
>
> > GARY: I’m not real thrilled with that. I mean, I don’t think we want to leave what their objective functions are, what their desires are to them, working them out with no consultation from us, with no human in the loop, right?
>
> Gary has a far better impression of human leadership than me. Like, we're not on track for a benevolent AI if such a thing makes sense (see his next paragraph), but if we had that it would blow human governments out of the water.
>
> > ELIEZER: Part of the reason why I’m worried about the focus on short-term problems is that I suspect that the short-term problems might very well be solvable, and we will be left with the long-term problems after that. Like, it wouldn’t surprise me very much if, in 2025, there are large language models that just don’t make stuff up anymore.
> >
> > GARY: It would surprise me.
>
> Hey, so there's a prediction to watch!
>
> > SCOTT: We just need to figure out how to delay the apocalypse by at least one year per year of research invested.
>
> That's a good way of looking at it. Maybe that will be part of whatever the response to smaller incidents is.
>
> > GARY: Yeah, I mean, I think we should stop spending all this time on LLMs. I don’t think the answer to alignment is going to come from through LLMs. I really don’t. I think they’re too much of a black box. You can’t put explicit, symbolic constraints in the way that you need to. I think they’re actually, with respect to alignment, a blind alley. I think with respect to writing code, they’re a great tool. But with alignment, I don’t think the answer is there.
>
> Yes, agreed. I don't think we can un-invent them at this point, though.
>
> > ELIEZER: I was going to name the smaller problem. The problem was having an agent that could switch between two utility functions depending on a button, or a switch, or a bit of information, or something. Such that it wouldn’t try to make you press the button; it wouldn’t try to make you avoid pressing the button. And if it built a copy of itself, it would want to build a dependency on the switch into the copy.
> >
> > So, that’s an example of a very basic problem in alignment theory that is still open.
>
> Neat. I suspect it's impossible with a reasonable cost function, if the thing actually sees all the way ahead.
>
> > So, before GPT-4 was released, [the Alignment Research Center] did a bunch of evaluations of, you know, could GPT-4 make copies of itself? Could it figure out how to deceive people? Could it figure out how to make money? Open up its own bank account?
> >
> > ELIEZER: Could it hire a TaskRabbit?
> >
> > SCOTT: Yes. So, the most notable success that they had was that it could figure out how to hire a TaskRabbit to help it pass a CAPTCHA. And when the person asked, ‘Well, why do you need me to help you with this?’–
> >
> > ELIEZER: When the person asked, ‘Are you a robot, LOL?’
> >
> > SCOTT: Well, yes, it said, ‘No, I am visually impaired.’
>
> I wonder who got the next-gen AI cold call, haha!
>
> :::