I have a vague recollection that my little cousin was nearly killed when he managed to destabilize the stand that a CRT was sitting on, and it fell just behind him, but I may be entirely hallucinating that memory.
Regardless, there are multiple ways old CRTs can cause great harm.
It’s a trend, and it’s slowing down, not plummeting. And even if it is, there are already more of us than we know how to sustain.
The problem at hand is not the growth rate slowing down; it’s humans divided into tiny pockets of countries, burning through what little we have left of natural resources.
People who have kids today do so knowing that their children will most certainly be displaced by natural disasters.
> there are already more of us than we know how to sustain
What is the evidence for that? If that were true, then we would have lots of people going hungry, but that's simply not the case. Poverty is being reduced worldwide. If we could not sustain the current population, we should have lots of people dying from hunger, and the population should stop growing. But the reason the population is growing, especially in Africa, is exactly because the growth is still sustainable. If it weren't, it could not be growing.
In 100 years, "us" is going to be Elon Musk's grandchildren, people from Niger, etc., and none of them are going to think like you about whether they have to move or not.
I live in the affected neighborhood. There were hundreds of drivers who did not know how to handle a power outage... it was a minority of drivers, but a nontrivial number. I even saw a Muni bus blow through a blacked-out intersection. The difference is that the Waymos failed in a way that prevented potential injury, whereas the humans who failed all failed in a way that would create potential injury.
I wish the Waymos handled it better, yes, but I think that the failure state they took is preferable to the alternative.
Locking down the roads creates a lot of potential injuries too.
And "don't blow through an intersection with dead lights" is super easy to program. That's not enough for me to forgive them of all that much misbehavior.
I wouldn't say "super easy", but if an autonomous vehicle isn't programmed to handle:
1. a traffic light with no lights
2. a traffic light blinking red
3. a traffic light blinking yellow
then it is 100% not qualified to be on the road. Those are basic situations and incredibly easy to replicate, simulate, and incorporate into the training data.
That is to say, they are not edge cases.
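To make "not an edge case" concrete, here's a minimal sketch of what a fallback rule for those situations could look like. The state names and actions are invented for illustration; this is not anything from a real AV stack:

```python
from enum import Enum, auto

class LightState(Enum):
    GREEN = auto()
    YELLOW = auto()
    RED = auto()
    BLINKING_RED = auto()
    BLINKING_YELLOW = auto()
    DARK = auto()  # e.g. power outage: signal present but unlit

class Action(Enum):
    PROCEED = auto()
    PROCEED_WITH_CAUTION = auto()
    STOP = auto()
    STOP_THEN_YIELD = auto()  # stop-sign treatment: all-way stop

def signal_fallback(state: LightState) -> Action:
    """Map a traffic-light state to a conservative default action."""
    if state is LightState.GREEN:
        return Action.PROCEED
    if state in (LightState.YELLOW, LightState.BLINKING_YELLOW):
        return Action.PROCEED_WITH_CAUTION
    if state is LightState.RED:
        return Action.STOP
    # DARK or BLINKING_RED: US traffic code says treat it as a stop sign
    return Action.STOP_THEN_YIELD
```

The hard part is reliably getting `LightState` out of camera frames, not this table.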
Dealing with other drivers in those settings is much harder, but that's a different problem, and you should be simulating your car under a wide variety of other-driver dynamics, from everyone being very nice to everyone being hyper-aggressive and the full spectrum in between.
If you are just arguing that they're not qualified to be on the road, then I agree with you. I've been an autonomous vehicle skeptic for a long time, mainly because I think our automobile transportation system is inherently dangerous. It's going to be a tough sell though, considering that they are already -- generally -- better drivers than a nontrivial number of human beings.
It's a tough question. The entire reason I'm defending this shortcoming is exactly that I prefer the fail-safe shutdown to any attempt to navigate bizarre, blacked-out intersections that barely conform to the traffic code and are inherently dangerous.
Specifically identifying road signs, traffic lights, and dead traffic lights is a narrow problem that has feasible solutions. To the point where we can reasonably say “yeah, this sub-component basically works perfectly.”
Compared to the overall self-driving problem, which is very much not super easy.
The cars already know those are intersections with lights. I'm not talking about that part. Just the basic logic that you don't go through at speed unless there is a green (or yellow) light.
>The cars already know those are intersections with lights.
That's not how any of this works. You can anthropomorphize all you like, but they don't "know" things. They're only able to predictably respond to their training data. A blackout scenario is not in the training data.
Even ignoring the observations we can make, the computers have maps programmed in. Yes, they do know the locations of intersections; no training necessary.
And the usual setup of an autonomous car is an object recognition system feeding into a rules system. If the object recognition system says an object is there, and that object is there, that's good enough to call "knowing" for the purpose of talking about what the cars should do.
Or to phrase things entirely differently: Finding lights is one of the easy parts. It's basically a solved problem. Cutting your speed when there isn't a green or yellow light is table stakes. These cars earn 2 good boy points for that, and lose 30 for blocking the road.
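To caricature that two-stage split in code (the types and thresholds here are made up; it's just the shape of the idea):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "traffic_light"
    state: str         # e.g. "green", "yellow", "red", "dark"
    distance_m: float

def target_speed(detections: list[Detection], cruise_mps: float) -> float:
    """Rules layer consuming perception output: if a nearby signal
    isn't showing green or yellow, shed speed instead of sailing through."""
    for d in detections:
        if d.label == "traffic_light" and d.distance_m < 50.0:
            if d.state not in ("green", "yellow"):
                return min(cruise_mps, 2.0)  # creep: prepare to stop and yield
    return cruise_mps
```

The perception half is the hard ML part; the rule itself is the kind of thing you can write down in an afternoon.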
>They're only able to predictably respond to their training data. A blackout scenario is not in the training data.
Is there any way to read more about this? I'm skeptical that there aren't any human-coded traffic laws in the Waymo software stack, and that it just infers everything from "training data".
Yes, it does lead to blocking traffic, but that is the only safe action at such an intersection; if an intersection has traffic lights, there's enough traffic that stop-and-give-way is not a viable operation.
Usually in that case you would make it priority-to-the-right (or left) so that everyone only has to look at one side (besides the pedestrians), and at a very busy intersection people with common sense and education naturally alternate, giving way to every other car.
I don't know if Waymos are programmed for that, and it could very well be that there were so many pedestrians crossing that it wouldn't have applied anyway.
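For what it's worth, the take-turns convention is easy to state as a round-robin over the approaches. A toy model, with invented approach names and queues:

```python
from collections import deque

def alternate(queues: dict[str, deque]) -> list[str]:
    """Release one car per approach in rotation, modeling the
    give-way-to-every-other-car convention at a dead intersection."""
    released = []
    while any(queues.values()):
        for approach in ("N", "E", "S", "W"):
            if queues[approach]:
                released.append(queues[approach].popleft())
    return released

qs = {"N": deque(["n1", "n2"]), "E": deque(["e1"]),
      "S": deque(), "W": deque(["w1"])}
print(alternate(qs))  # ['n1', 'e1', 'w1', 'n2']
```

The hard part on a real street isn't the rotation, it's agreeing (wordlessly, with strangers) whose turn it is.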
I mean, yes, if the Waymos could safely pull over, or even knew how to handle every emergency situation, I think that would be better. I'd say that's a big ask though. Training autonomous vehicles for blackouts, fires, earthquakes, tornadoes, hail storms, landslides, sinkholes, tsunamis, floods, or even just fog is not really feasible, given that most humans won't even navigate them properly. I'll keep saying it: I'm glad the cars were set to fail safely when they encountered a situation they couldn't understand.
I honestly wish the human drivers blowing through intersections that night had done the same. It's a miracle no one was killed.
Right. You know there are humans somewhere in the city who got confused or scared and messed up too. Maybe a young driver on a temporary permit who is barely confident in the first place, or just someone who doesn't remember what to do and was already over-stressed.
Whatever, it happens.
This was a (totally unintentional) coordinated screw-up causing problems all over, as opposed to in one small spot.
Definitely. The question then becomes: how do they respond to the stimulus of other, more experienced drivers?
E.g., if they see 5 cars going around them and "solving" the intersection, do they get empowered to do the same? Or do some annoying honkers behind them make them bite the bullet and try their hand at passing it (and not to worry, other drivers will also make sure no harm comes to anyone even if you make a small mistake)? Human drivers, no matter how inexperienced, will learn on the spot. Self-driving vehicles can only "learn" back in the SW department.
Yes, driving is a collaborative activity which requires that we all partner on finding the most efficient patterns of traffic when traffic lights fail. Self-driving cars cannot learn on the spot, and this is the main difference between them and humans: either you have them trained on every situation, or they go into weird failure modes like this.
Was it unintentional? These systems were programmed to fall back into "terrified 16-year-old/elderly lady" behavior because that's what's most legally defensible.
Yeah, the correlated risk with AVs is a pretty serious concern. And not just in emergencies, where they can easily DDoS the roads; even things like widespread weaknesses or edge cases in their perception models can cause really weird and disturbing outcomes.
Imagine a model that works really well for detecting cars and adults but routinely misses children; you could end up with cars that are 1/10th as deadly to adults but 2x as deadly to children. Yes, in this hypothetical it saves lives overall, but is it actually a societal good? In some ways yes; in some ways it should never be allowed on any roads at all. It’s one of the reasons aggregated safety metrics are so important to scrutinize.
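To put invented numbers on that hypothetical (the baselines are made up purely to show the arithmetic):

```python
# Invented baseline: fatalities per billion miles with human drivers
adult_deaths_human = 100
child_deaths_human = 10

# The hypothetical AV: 1/10th as deadly to adults, 2x as deadly to children
adult_deaths_av = adult_deaths_human / 10   # 10.0
child_deaths_av = child_deaths_human * 2    # 20

print(adult_deaths_human + child_deaths_human)  # 110 total, human drivers
print(adult_deaths_av + child_deaths_av)        # 30.0 total, AVs
```

Aggregate deaths fall almost 4x while child deaths double, which is exactly the kind of shift a single headline number hides.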
In my state (South Carolina) this is exactly how they handled it. If a parent or activist wishes to see a book banned, it goes through a review based on school-level appropriateness. A book like The Kite Runner, with its depictions of Bacha Bazi, is a bit rough for a 5th grader but considered acceptable for a high schooler given the cultural significance of the work.
Florida bill 1069 allows parents to challenge the inclusion of books in the library, but only explicitly identifies books related to sexual preferences/conduct/etc.
Often that reason is "too poor to afford proper representation" or "looked vaguely like the actual criminal" or "took a plea bargain because the justice system was threatening them with an immorally-long wait for a trial and a likely worse outcome".
Non-violent marijuana users haven't ever materialized as a large cohort of the prison population. Sorry, I too used to believe that prisons were overflowing with them.
I mean, if this were the '90s, yes, it was true, but you are also correct that it's very rare for anyone to be in prison for marijuana alone in the US. Even in states where it's "illegal."
Not really? I mean, when you compare the number of people who have committed a "horrific violent" crime to the total number of people caught up in the US prison system, I expect it's not "often".
The numbers are fuzzy, but they indicate that at least a simple majority (and possibly an extreme majority) of prisoners have committed violent crimes.
That really depends on what you classify as “violent”. There are a lot of crimes labeled “violent” that don’t involve direct physical harm to another person. E.g., burglary is labeled “violent” in many places when the actual act was “smashed a window, grabbed a TV, and ran away”. Drug manufacturing is also typically considered “violent” even without any kind of assault/murder/turf war/etc.
The numbers I saw said 47% of inmates had a violent crime under federal or state classifications.
You didn't, but I'm taking your stance to its logical conclusion.
GP:
> they shouldnt be paid at all. they're in prison for a reason. they have a debt to society.
Your response:
> Often that reason is "too poor to afford proper representation" or "looked vaguely like the actual criminal" or "took a plea bargain because the justice system was threatening them with an immorally-long wait for a trial and a likely worse outcome".
Be that as it may, this is our system. Through a series of laws we have defined due process for our people, and the people who end up in prison are a result of that due process. Like it or not, this is the best we have been able to do.
If we are going to say prisoners should be given more privileges because some prisoners don't deserve to be there, then why are we holding them in prison to begin with? Being confined to prison is a thousand times more punitive than not receiving pay for making a license plate.
A better reason for arguing that prisoners should be paid for their work is that it is more humane. That's a stronger argument than that some people are in prison unjustly.
I'm actually in favor of prison reforms. Prisons' number one goal should be to reduce recidivism. I see that as the entire point of the prison system: reducing crime. If a person leaves prison and re-offends, we have failed to do our job.
> The naturally curious will remain naturally curious and be rewarded for it
Maybe. The naturally curious will also typically be slower to arrive at a solution due to their curiosity and interest in making certain they have all the facts.
If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?
> If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?
It's always possible to go slower (with diminishing benefits).
Or, putting it in terms of benefits and risks/costs: I think it's fair to treat "fast with shallow understanding" and "slower but deeper understanding" as different ends of a continuum.
What's preferable somewhat depends on context and on the question "what's the cost of making a mistake?". If mistakes are expensive, surely it's better to take the approach that builds a more comprehensive understanding. If mistakes are cheap, surely faster iteration is better.
The impact of LLM tools? They amplify both ends. It's quicker to build a comprehensive understanding with LLM tools, similar to how autocompletion or high-level programming languages can speed up development.