Hacker News | top_sigrid's comments


> If you can train a policy that drives well on cameras, you can get self-driving. If you can't, you're fucked, and no amount of extra sensors will save you.

Source: trust me, bro? This statement has no factual basis. And calling the approach used by virtually every self-driving developer except Tesla "a wank" is not an argument, it's just hate.




Yes, that’s why having both makes sense.

This is so dumb, I don't even know if you are serious. Nobody ever said lidar should replace cameras; it is an additional sensor alongside them. And everybody except Tesla seems to agree that that extra sensor information is valuable.

I'm able to drive without lidar, with just my eyeball feeds.

I agree that lidar is very valuable right now, but I think in the endgame, yeah, it can drive with just cameras.

The logic follows, because I drive with just "cameras."


Yeah, but your "cameras" also have a bunch of capabilities that hardware cameras don't, plus they're mounted on a flexible stalk in the cockpit that can move in any direction to update the view in real-time.

Also, humans kinda suck at driving. I suspect that in the endgame, even if AI can drive with cameras only, we won't want it to. If we could upgrade our eyeballs and brains to have real-time 3D depth mapping information as well as the visual streams, we would.


What "a bunch of capabilities"?

A complete inability to get true 360° coverage, which the neck has to swivel wildly across windows and mirrors to somewhat compensate for? The ability to get high FoV or high resolution, but never both? An IPD so low that stereo depth estimation unravels beyond 5 m, which, in self-driving terms, is point-blank range?

Human vision is a mediocre sensor kit, and the data it gets has to be salvaged in post. The human brain was doing computational photography before it was cool.
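
(To make the IPD/disparity point concrete, here is a rough back-of-envelope sketch in Python; the ~65 mm baseline and ~20 arcsecond stereoacuity threshold are assumed round numbers, not measurements:)

    # Back-of-envelope: how stereo depth error scales with distance for a
    # human-like baseline. Baseline and disparity threshold are assumed
    # round numbers (~65 mm IPD, ~20 arcsec stereoacuity), not measured values.
    BASELINE_M = 0.065
    DISPARITY_THRESHOLD_RAD = 1.0e-4

    def depth_error_m(distance_m: float) -> float:
        # Small-angle disparity relation: delta_Z ~= Z**2 * delta_theta / B
        return distance_m ** 2 * DISPARITY_THRESHOLD_RAD / BASELINE_M

    for z in (5, 20, 50, 100):
        print(f"{z:4d} m -> +/- {depth_error_m(z):6.2f} m")
    # ~4 cm at 5 m, but ~15 m at 100 m: the error grows with the square of
    # distance, so binocular disparity tells you little at traffic distances.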


What do you believe the frame rate and resolution of Tesla's cameras are? If a human can tell the difference between two virtual reality displays, one with a frame rate of 36 Hz and a per-eye resolution of 1448x1876, and another display with numerically greater values, then the cameras that Tesla uses for self-driving are inferior to human eyes. The human eye typically has a resolution of 5 to 15 megapixels in the fovea, and the current highest-definition automotive cameras that Tesla uses just about clear 5 megapixels across the entire field of view. By your criterion, the cameras Tesla uses today are never high resolution. I can physically saccade my eyes by a millimeter here or there and see something their cameras would never be able to resolve.
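
(As a companion sketch, with assumed illustrative numbers only; the 2896-pixel sensor width and 120° lens below are not published Tesla specs:)

    # Rough pixels-per-degree comparison. The 2896-px sensor width and the
    # 120-degree lens are illustrative assumptions, not published specs.
    def px_per_degree(horizontal_px: int, fov_deg: float) -> float:
        # Crude average over the lens, ignoring distortion and foveation.
        return horizontal_px / fov_deg

    wide_camera = px_per_degree(2896, 120)   # a ~5 MP sensor behind a wide lens
    human_fovea = 60.0                       # ~1 arcmin acuity => ~60 px/deg

    print(f"camera ~{wide_camera:.0f} px/deg vs fovea ~{human_fovea:.0f} px/deg")
    # Roughly 24 px/deg vs 60 px/deg: a fixed wide-angle camera trades
    # angular resolution for coverage, the FoV-vs-resolution trade-off
    # mentioned upthread.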

Yep, Tesla's approach is 4% "let's build a better sensor system than what humans have" and 96% "let's salvage it in post".

They didn't go for the easy problem, that's for sure. I respect the grind.


I can't figure out your position, then. You were saying that human eyes suck and are inferior to camera sensors because they require interpretation by a human brain. You're also saying that if self-driving isn't possible with only camera sensors, then no amount of extra sensors will make up for the deficiency.

This came from a side conversation where one person noted that driving is possible with only human eyes, another said that human eyes are superior to cameras, and you disagreed. Then, when told that the only company approaching self-driving with cameras alone uses cameras with worse spatial and temporal resolution than human eyes, you said you respect the grind because the cameras require processing by a computer.

If I understand correctly, you believe:

1. Driving should be possible with vision alone, because human eyes can do it, and human eyes are inferior to camera sensors and require post-processing, so obviously with superior sensors it must be possible.

2. Even if one knows that current automotive camera sensors are not actually superior to human eyes and also require post-processing, then that just means that camera-only approaches are the only way forward and you "respect the grind" of a single company trying to make it work.

Is that correct? Okay, maybe that's understandable, but I'm confused because 1 and 2 contradict each other. Help me out here.


My position is: sensors aren't the blocker, AI is the blocker.

Tesla put together a sensor suite that's amenable to AI techniques and gives them good enough performance. Then they moved on to getting better FSD hardware and rolling out newer versions of AI models.

Tesla gets it. They located the hard problem and put themselves on the hard problem. LIDAR wankers don't get it. They point at the easy problem and say "THIS IS WHY TESLA IS BAD, SEE?"

Outperforming humans in the sensing department hasn't been "hard" for over a decade now. You can play with sensors all day long and watch real-world driving performance vary within measurement error, because sensing was never where the issue was.


Yeah, Tesla gets it, except they’ve been promising actual FSD for a decade now and have yet to deliver. Their “robotaxi” service has something like 30 cars, all with humans on board, and still crashes all the time. They’re a total fucking joke.

Meanwhile Waymo (the LiDAR wankers) are doing hundreds of thousands of paid rides every week.


There is no evidence of unsupervised robotaxis actually rolling out. These are just the same promises Elon has been making, and breaking, for literally 10 years, plus some publicity stunts.

People have taken rides in unsupervised Teslas. Please check the news.

Yes, hand-picked influencers. And it was supervised from a car following behind it. No one else was able to get such a ride. Tesla cars have also supposedly delivered themselves autonomously, which likewise turned out to be a one-off publicity stunt. So far, nothing suggests that this time is any different.

So first it was "it needs attention". Then "it has a navigator". Then "it has a following car". You see?

Yes, and nothing points to them having solved unsupervised driving, which you seem to conclude from publicity stunts.

Like I said. No amount of proof is enough.

Yes, and like I said, nothing about this is proof that they are close to actual unsupervised driving.

Why the denial? Go check the news, man. And they are expanding.


Do you have ANY data points or arguments to support the claim that renewables "destroy all wilderness"? Or, even more, that they are worse than fossil fuels? This claim - especially in your harsh tone - needs at least some reasoning behind it.


Can you elaborate on what exactly you mean by that and what you encountered?


I can’t be exact and thorough in an HN comment. I will just say that the happy path is too narrow. The ecosystem needs a big push toward better DX and a better direction.

Ruby should look at how the PHP ecosystem was modernized. Sure, the syntax has always been awful and has degraded even further, but the ecosystem as a whole is in a much better place.


Is anything actually known about his wellbeing?



He actually made a blog post on December 6th regarding Warner Bros & Netflix, but it's kinda weird: http://blog.fefe.de/?ts=97cd29cd Seems like he still needs a lot of time to recover.


Thanks for these insights! Do you have any material or sources for a layman to learn more about this and where these numbers come from?



