• 0 Posts
  • 75 Comments
Joined 2Y ago
Cake day: Jun 19, 2023


The video industry is perfectly capable of good standards. SDI, for example, was invented in 1989 and it’s still the best way to transmit video today. DisplayPort has advantages, but it’s worse than SDI in most ways.


But Word is available.

Not for me. It’s just too expensive for a task I very rarely need, and there are good free alternatives (like WordPad - though that’s not the one I use personally).


They don’t literally mean no batteries. They just mean small batteries. The 50Wh battery in my (modern, efficient) laptop lasts about 18 hours for example.

You’d also have battery-powered lighting.

The real challenge is heating and cooling. If you want to keep your house at a comfortable temperature, your food cool in the fridge, and your food hot when you eat it… that’s not easy to do with small batteries. But it can be done, e.g. with good insulation and by changing your habits a little (cook during the day, etc).

You can also, as it says in the article, use “non-battery” storage. We already do that. For example, lots of people keep hundreds of litres of hot water next to their house. That hot water can be used, for example, to keep warm overnight. You can also fill the empty air space in your fridge with containers of water - unlike air, which is instantly replaced with warm air every time you open the door, the cold water stays in the fridge and helps it stay cold much, much longer. Easily overnight.
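To put some rough numbers on that, here’s a back-of-the-envelope sketch. Every figure in it (how much water is in the fridge, how warm the tank is, how much drift you tolerate) is an assumption for illustration, not a measurement:

```python
# Back-of-the-envelope energy estimates (illustrative assumptions, not measured values).

# Laptop example from above: 50 Wh lasting ~18 hours is under 3 W average draw.
laptop_avg_watts = 50 / 18                     # ~2.8 W

# Water as thermal storage: it takes ~1.16 Wh to change 1 litre of water by 1 degree C.
WH_PER_LITRE_PER_DEGREE = 1.16

# A few containers of water in the fridge act as a cold buffer.
fridge_water_litres = 5                        # assumed
allowed_warming_c = 4                          # degrees C of drift tolerated overnight
fridge_buffer_wh = fridge_water_litres * WH_PER_LITRE_PER_DEGREE * allowed_warming_c

# A hot water cylinder stores far more energy than a small battery.
tank_litres = 200                              # assumed
tank_temp_above_ambient_c = 45                 # assumed
tank_storage_kwh = tank_litres * WH_PER_LITRE_PER_DEGREE * tank_temp_above_ambient_c / 1000

print(f"Laptop average draw: {laptop_avg_watts:.1f} W")
print(f"Cold buffered by fridge water: {fridge_buffer_wh:.0f} Wh")          # ~23 Wh
print(f"Energy held in the hot water tank: {tank_storage_kwh:.1f} kWh")     # ~10 kWh
```

The point of that last number is that a full tank of hot water holds energy on the order of a sizeable home battery, which is why it counts as “non-battery” storage.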

Of course, you could also just use gas for all of that… but if one of your motivations is to avoid carbon emissions then that’s off the table.


While zero incidents is naturally what they should be aiming for, it’s more of a goal for continuous improvement, like it is for air travel.

As far as I know, proper self-driving (not “autopilot”) AVs are pretty close to zero incidents if you only count crashes where they are at fault.

When another car runs a red light and smashes into the side of an autonomous vehicle at 40mph… it wasn’t the AV’s fault. Those crashes should not be counted against the AV, yet as far as I know they currently are in most stats.

What liability can/should we place on companies that provide autonomous drivers that will ultimately lead to safer travel for everyone?

I’m fine with exactly the same liability as human drivers have. Unlike humans, who are motivated to drive dangerously for fun, to get home when they’re high on drugs, or to keep driving through the night without sleep to avoid paying for a hotel, autonomous vehicles have zero motivation to take risks.

In the absence of that motivation, the simple fact that insurance against accidents is expensive is more than enough to encourage these companies to continue to invest in making their cars safer. Because the safer the cars, the lower their insurance premiums will be.

Globally insurance against car accidents is approaching half a trillion dollars per year and increasing over time. With money like that on the line, why not spend a lazy hundred billion dollars or so on better safety? It won’t actually cost anything - it will save money.


nearly 1 year ago, ChatGPT was released to the world. It was the first time most people had any experience with a LLM. And everything you sent to the bot was given to a proprietary, for profit algorithm to further their corporate interests

You might want to pick another example, because OpenAI was originally founded as a non-profit organisation, and in order to avoid going bankrupt they became a “limited” profit organisation, which allowed them to raise funding from more sources… but doesn’t really allow them to ever become a big greedy tech company. All they’re able to do is offer some potential return to the people who are giving them hundreds of billions of dollars with no guarantee they’ll ever get it back.


Avoiding dangerous scenarios is the definition of driving safely.

This technology is still under active development and nobody (not even Elon!) is claiming it’s ready to replace a human in every possible scenario. Are you actually suggesting they should be testing the cars in scenarios they know wouldn’t be safe with the current technology? Why the fuck would they do that?

So no, I would absolutely not say they are “less prone to accidents than human drivers”.

OK… if you won’t accept the company’s reported data - whose data will you accept? Do you have a more reliable source that contradicts what the companies themselves have published?

to say nothing about the legality that will come up

No, that’s a non-issue. When a human driver runs over a pedestrian/etc and causes a serious injury, if it’s a civilised country and a sensible driver, then an insurance company will pay the bill. This happens about a million times a week worldwide, and insurance is a well-established system that people are, for the most part, happy with.

Autonomous vehicles are also covered by insurance. In fact it’s another area where they’re better than humans - because humans frequently fail to pay their insurance bill or even deliberately drive after they have been ordered by a judge not to drive (which obviously voids their insurance policy).

There have been debates over who will pay the insurance premium, but that seems pretty silly to me. Obviously the human who ordered the car to drive them somewhere will have to pay for all costs involved in the drive. And part of that will be insurance.


I don’t expect them to never fail, I just want to know when they fail and how badly.

“Over 6.1 million miles (21 months of driving) in Arizona, Waymo’s vehicles were involved in 47 collisions and near-misses, none of which resulted in injuries”

How many human drivers have done millions of miles of driving before they were allowed to drive unsupervised? Your assertion that these systems are untested is just wrong.

“These crashes included rear-enders, vehicle swipes, and even one incident when a Waymo vehicle was T-boned at an intersection by another car at nearly 40 mph. The company said that no one was seriously injured and “nearly all” of the collisions were the fault of the other driver.”

According to insurance companies, human-driven cars cause 1.24 injuries per million miles travelled. So if Waymo were only “as good as a typical human driver”, there would have been several injuries over those 6.1 million miles. They had zero serious injuries.
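For reference, the quick arithmetic behind “several injuries” (both figures come straight from the numbers quoted above and are approximate):

```python
# Expected injuries if Waymo's 6.1 million miles had been driven at the human rate.
miles_driven = 6.1e6                 # Waymo miles in Arizona, per the quote above
human_injury_rate = 1.24 / 1e6       # injuries per mile for human-driven cars (insurance figure)

expected_injuries = miles_driven * human_injury_rate
print(f"Expected injuries at the human rate: {expected_injuries:.1f}")   # about 7.6
```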

The data (at least from reputable companies like Waymo) is absolutely available and in excruciating detail. Go look it up.


And a thing blocking the road isn’t exactly unforeseen either.

Tesla’s system intentionally assumes “a thing blocking the road” is a sensor error.

They have said that if they didn’t do that, about every hour or so you’d drive past a building, the system would mistake it for an obstacle, slam on the brakes, and stop in the middle of the road for no reason (and then, probably, a car would crash into you from behind).

The good sensors used by companies like Waymo don’t have that problem. They are very accurate.


Just because Tesla is worse than others doesn’t make it not self-driving.

The fact that Tesla requires a human driver to take over constantly makes it not self-driving.

so they can take over instantly.

Humans fundamentally can’t do that. If you sit a human in a self-driving car doing nothing for hours, they won’t be able to react in a split second when it is needed.

The human isn’t supposed to be “doing nothing”. The human is supposed to be driving the car. Autopilot simply keeps the car in the correct lane for you, and adjusts the speed to match the car ahead.

Tesla’s system won’t even stop at an intersection where you need to give way (for example, at a stop sign or a red traffic light). There’s plenty the human needs to be doing other than turning the steering wheel. If there is a vehicle stopped in the middle of the road, Tesla’s system will drive straight into it at full speed without even touching the brakes. That’s not something that “might happen”; it’s something that will happen, and has happened, any time a stationary vehicle is parked on the road. It can detect the car ahead of you slowing down. It cannot detect a stopped vehicle.

They’ve promised to ship a more capable system “soon” for over a decade. I don’t see any evidence that it’s actually close to shipping though. The autonomous systems by other manufacturers are significantly more advanced. They shouldn’t be compared to Tesla at all.

Is anybody actively testing them in bad weather conditions?

Yes. Tens of millions of miles of testing, and they pay especially close attention to any situation where the sensors could potentially fail. Waymo says their biggest challenge is mud (splashed up from other cars) covering the sensors. But the cars are able to detect this, and the mud can be wiped off. It’s a solvable problem.

Unlike Tesla, most of the other manufacturers consider this a research project and are focusing all of their efforts on making the technology better/safer/etc. They’re not making empty promises and they’re being cautious.

On top of the millions of miles of actual testing, they also record all the sensor data for those miles and use it to run updated versions of the algorithm in exactly the same scenario. So the millions of miles have, in fact, been driven thousands and thousands of times over for each iteration of their software.
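As a rough illustration of what that replay process looks like in practice - all the names here are hypothetical, this is just a sketch of the idea, not anyone’s actual tooling:

```python
# Minimal sketch of log-replay regression testing for a driving stack.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class SensorFrame:
    timestamp: float
    lidar: object       # recorded point cloud for this instant
    cameras: object     # recorded camera images for this instant

def replay(frames: Iterable[SensorFrame], planner) -> List[str]:
    """Feed previously recorded sensor frames to a new planner build and
    collect the decisions it would have made in the same situations."""
    return [planner.decide(frame) for frame in frames]

def count_divergences(old_decisions: List[str], new_decisions: List[str]) -> int:
    """Count frames where the new software behaves differently from the old one,
    so every divergence can be reviewed before the update ships."""
    return sum(a != b for a, b in zip(old_decisions, new_decisions))
```

Every recorded mile can be re-run like this against each new software build, which is how the same miles end up being “driven” thousands of times over.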


Let them drive in fog and suddenly they can’t even see clearly visible emergency vehicles.

That article you linked isn’t about a self-driving car. It’s about Tesla “autopilot”, which constantly checks whether a human is actively holding the steering wheel and depends on the human watching the road ahead for hazards so they can take over instantly. If the human sees flashing lights, they are supposed to take over.

The fully autonomous cars that don’t need a human behind the wheel have much better sensors which can see through fog.


Drive to the right edge of the road and stop until the emergency vehicle(s) have passed

That is a direct quote from the California DMV, and from the sounds of it, that’s exactly what the autonomous car did.

The right answer, in my opinion, is to allow the first responders to take control of the car. This wasn’t just a lone ambulance that happened upon a stationary car. It was a major crash (where a human driven car ran over a pedestrian) with a road that was blocked by emergency vehicles. A whole bunch of cars, not just autonomous ones, were stopped in the middle of the road waiting for the emergency to be over so they could continue on their way. Not sure why only this one car is getting all the blame.


We should ban police cars too - because allegedly an empty police car was also blocking the ambulance.

The AV spokesperson said they reviewed the footage and found there was room to pass their vehicle safely - and that another ambulance and other cars did so.


A strawman argument is when you ignore what the other person said and instead respond to a distorted version of it. That’s not what I did - the core premise of Drew’s argument is that AI will not “make the world better”, and I provided a crystal clear example of how it makes the world better.

It was just one example, and obviously not the complete picture, but what choice do I have? It’s such a broad topic I couldn’t possibly list everything AI will impact without writing an entire book.

I think we all understand that capitalism is mostly bad for humans, and really good for corporations and their owners.

No, I disagree. Corporations exist exclusively to benefit their human owners. Which means anything that’s “good for corporations” is good for a select small number of humans.

Don’t blame “capitalism” for wealth inequality. Blame the actual humans (e.g. Donald Trump, Elon Musk) who have made it their life’s work to drive the global economy even harder into a world that benefits the few and ignores the struggles of the many.

Also - not all corporations are bad. Some of them do great work that truly benefits the world and I would personally put OpenAI in that category. Their mandate is not to make a profit - and in fact the amount of profit they can legally make has been limited. Their mission is literally “to ensure that artificial general intelligence benefits all of humanity”. I hope they succeed, and I think they will. Drew is wrong.


the method of enforcement should be to arrest the perpetrators

To do that, you have to know who the perpetrators are, which is routinely impossible.

This isn’t a hypothetical situation, we are living in a world where servers are kicked off the internet, SSL certificates are revoked, vast quantities of emails are deleted without even sending them to a spam folder, lemmy communities are closed down, etc.

In a perfect world, none of that would be necessary and we could simply send the perpetrators to jail. But we don’t live in a perfect world. We live in one where censorship is the only option.


The key difference is that you’re an experienced cyclist. You’re capable of recognising that it’s safe to go 60mph down that particular hill, and if it wasn’t, you’d be on the brakes. Also, you probably know how hard you can pull that front brake lever without going over the handlebars.

Inexperienced cyclists and high speeds are a really bad combination.

Most parents wouldn’t let their teenager ride a YZF-R1, and they shouldn’t be letting them ride a high-powered eBike either.


You think teenagers care about insurance? Even if they did, they certainly can’t buy any.

I’m pretty sure the teens in my neighbourhood who go as fast as they can at night on the wrong side of the road around blind corners with their lights turned off are uninsured. I love my eBike. Not a fan of how I see other people riding them every day though (and not just kids).


Could have just typed the script in the first place.

Sure - but ChatGPT can type faster than me. And for simple tasks, CoPilot is even faster.

Also - it doesn’t just speed up typing, it also speeds up basics like “what did Bob name that function?”


Like the article itself mentions, it has immense potential for advertising, scams and political propaganda. I haven’t seen AI proponents offering meaningful rebuttals to that.

You won’t get a direct rebuttal because, obviously, an AI can be used to write ads, scams and political propaganda.

But every day millions of people are cut by knives. It hurts. A lot. Sometimes the injuries are fatal. Does that mean knives are evil and ruining the world? I’d argue not. I love my kitchen knives and couldn’t imagine doing without them.

I’d also argue LLMs can be used to fact-check and uncover scams/political propaganda/etc, and can lower the cost of content production to the point where you don’t need awful advertisements to cover the production costs.


I don’t have anything against you or your colleagues. You’ve got every right to strike if that’s what you want to do.

But there are millions of people being harmed by the strike. That’s a simple fact.

Journalists/etc need to do their job and provide good, balanced information on critical issues like this one. FUD like Drew DeVault posted inflames the debate and makes it nearly impossible for reasonable people to figure out what to do about Large Language Models… because like it or not, they exist, and they’re not going away.

PS: while I’m not a film writer, I am paid to spend my day typing creative works and my industry is also facing upheaval. I also have friends who work in the film industry, so I’m very aware and sympathetic to the issues.


What are those people doing to you?

There are definitely people who are harmed by FUD like this. For example, the current writers’ strike, which has 11,000 people putting down tools… indefinitely shutting down global movie productions that employ millions of people and leaving them unemployed for who knows how long.