Hacker News | verelo's comments

Microslop*

Had a meeting with a friend the other day, discussing the 'times' and all that is happening around us.

I sit here thinking how wonderful and terrible of a time it is. If you can afford to sit in the stands and watch, it's exciting. There's never been so much change in such a short period of time. But if you're in the arena, or expecting to end up in the arena at some point, what terrifying moments lie ahead of you.

I never thought I'd say this, but I expect the arena is where I'll end up...I've enjoyed my time in the stands, but I'm running low on energy, capital and the will to keep trying.


Wait what does the arena stand for?

I see it as a metaphor for those who have to battle to survive versus those who are already retired or wealthy enough not to care how things turn out. So I guess it could be the job market; it could also be...a literal battlefield lol

Job market.

How absurd that this is an option, but I'll be using this config too.


Not only what files, but what part of the files. Seeing 1-6 lines of a file that's being read is extremely frustrating; the UX of Claude Code is average at best. Cursor, on the other hand, is slow and memory intensive, but at least I can really get a sense of what's going on and how I can work with it better.


I am not a Claude user, but a similar problem I see on opencode is accessing links. More than once I've seen Kimi, GLM or GPT go to the wrong place and waste tokens until I interrupt them and tell them a correct place to start looking for documentation or whatever they were doing.

If I got messages like "Accessed 6 websites" I'd flip and go spam a couple github issues with as much "I want names" as I could.


> FWIW I think LLMs are a dead end for software development

Thanks for that, and it's worth nothing FYI.

LLMs are probably the most impressive machine made in recorded human existence. Will there be a better machine? I'm 100% confident there will be, but this is without a doubt extremely valuable for a wide array of fields, including software development. Anyone claiming otherwise is just pretending at this point, maybe out of fear and/or hope, but it's a distorted view of reality.


I'd love to learn more about your process of the sale. And in your graphs, is that attributed to profit/revenue in 2024?

I'm on a super similar journey. Started in 2022, did about 400k in revenue in 2025 at 79% margin; we'll see how this year goes. There's a world where I'd love to add a lot of scale, but that'll rely on some experiments (underway) panning out. It's 'failure' in most people's books, but 2x'ing would be great too?!

How did you find a buyer? How did you come to a sale price? Why didn't you keep going?


> I'm on a super similar journey. Started in 2022, did about 400k in revenue in 2025 at 79% margin; we'll see how this year goes. There's a world where I'd love to add a lot of scale, but that'll rely on some experiments (underway) panning out. It's 'failure' in most people's books, but 2x'ing would be great too?!

Nice, congrats!

Yeah, I think depending on what bubble you're in, bootstrapping to 400k and 2x'ing every few years is either failure or amazing. The VC/hypergrowth path doesn't appeal to me, so I think something that gives you $100k+/yr in profit is a huge success.

> I'd love to learn more about your process of the sale.

Sure, I wrote a couple of posts with the details.[0, 1]

[0] https://mtlynch.io/i-sold-tinypilot/

[1] https://mtlynch.io/lessons-from-my-first-exit/


Thanks for the reply. Just to clarify the picture for me, was the 2024 jump in profit attributable to the sale, or was that just a solid year?

Agreed on the 2x every year or two being a weird failure state. I've done both, and I'm at a junction point right now, but I really think a huge part of going all in on the VC route is finding the right money to work with. I've been mostly technical, hidden in dark places all my life. I love having the chance to be customer facing, I love the business side of my work, but doing that with the wrong backers is so unappealing I'd rather not have their money.

Edit: just skimmed your prior post (https://mtlynch.io/i-sold-tinypilot/). Great stuff, love the transparency!


> Thanks for the reply. Just to clarify the picture for me, was the 2024 jump in profit attributable to the sale, or was that just a solid year?

Mostly the sale. The deal closed in April, but Jan-March was an especially profitable quarter.[0]

[0] https://mtlynch.io/retrospectives/2024/04/


Do you do podcast interviews? Would love to dig into the no-VC/hypergrowth path as that's what I'm all about.


Yep, the place where I've talked about it the most has been The Software Misadventures Podcast.[0, 1]

[0] https://softwaremisadventures.com/p/michael-lynch-on-quittin... (2022, two years into the business)

[1] https://softwaremisadventures.com/p/michael-lynch-indie-hack... (2024, just after I sold)


I think scalemaxx is offering to interview you.


Oh, I'm open to that!


Yes indeed! What's the best place to reach out?


You can email me through the address here: https://mtlynch.io/about/


Thanks! Sent your way.


Love this. I have to say, after 20 years of working in tech, I'm keen to retire to a world of chopping wood and gardening...but who knows, I guess winter will still keep me indoors when that happens.

Congrats on making it to retirement and keeping busy, hope you have a great time!


I am 4 months into a planned 6-month break. The plan was to learn Spanish and do some traveling. After a month I started a side project and haven't focused on much else.

Years of the reward cycle being around shipping code is hard to override I guess.


Oh, I am doing a lot more than just hacking code. I am also cooking, gardening, reading, writing, travelling, brewing and distilling, swimming, volunteering, …


Yeah, it's absurd. As a Tesla driver, I have to say the autopilot model really does feel like what someone who's never driven a car before thinks driving is like.

Using vision only is so ignorant of what driving is all about: sound, vibration, vision, heat, cold...these are all clues about road conditions. If the car isn't sensing all these things as part of the model, you're handicapping it. Lidar brilliantly supplies the missing information a car needs without relying on multiple sensors; it's probably superior to what a human can do, whereas vision only is clearly inferior.


The inputs to FSD are:

    7 cameras x 36fps x 5Mpx x 30s
    48kHz audio
    Nav maps and route for next few miles
    100Hz kinematics (speed, IMU, odometry, etc)
Source: https://youtu.be/LFh9GAzHg1c?t=571
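A back-of-envelope sense of what the camera line in that list amounts to (a rough sketch; the 1-byte-per-pixel figure is my assumption, the talk doesn't specify encoding):

```python
# Rough data-rate estimate for the camera inputs listed above.
# Assumes 1 byte/pixel, which is an assumption, not from the source talk.
cameras = 7
fps = 36
pixels_per_frame = 5_000_000  # 5 Mpx
window_s = 30                 # 30-second context window

pixels_per_second = cameras * fps * pixels_per_frame
pixels_per_window = pixels_per_second * window_s

print(f"{pixels_per_second / 1e9:.2f} Gpx/s")               # 1.26 Gpx/s
print(f"{pixels_per_window / 1e9:.1f} Gpx per 30 s window")  # 37.8 Gpx
```

Over a gigapixel per second of raw camera data, before audio, maps, and kinematics are even counted.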


So if they’re already “fusioning” all these things, why would LIDAR be any different?


Tesla went nothing-but-nets (making fusion easy) and Chinese LIDAR became cheap around 2023, but monocular depth estimation was spectacularly good by 2021. By the time unit cost and integration effort came down, LIDAR had very little to offer a vision stack that no longer struggled to perceive the 3D world around it.

Also, integration effort went down but it never disappeared. Meanwhile, opportunity cost skyrocketed when vision started working. Which layers would you carve resources away from to make room? How far back would you be willing to send the training + validation schedule to accommodate the change? If you saw your vision-only stack take off and blow past human performance on the march of 9s, would you land the plane just because red paint became available and you wanted to paint it red?

I wouldn't completely discount ego either, but IMO there's more ego in the "LIDAR is necessary" case than the "LIDAR isn't necessary" case at this point. FWIW, I used to be an outspoken LIDAR-head before 2021, when monocular depth estimation became a solved problem. It was funny watching everyone around me convert in the opposite direction at around the same time, probably driven by politics. I get it, I hate Elon's politics too, I just try very hard to keep his shitty behavior from influencing my opinions on machine learning.


> but monocular depth estimation was spectacularly good by 2021

It's still rather weak, and true monocular depth estimation really wasn't spectacularly anything in 2021. It's fundamentally ill-posed, and any priors you use to get around that will come back to bite you in the long tail of things some driver will encounter on the road.

The way it got good is by using camera overlap in space and over time while in motion to figure out metric depth over the entire image. Which is, humorously enough, sensor fusion.
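The mechanism in that last paragraph, recovering metric depth from camera overlap in space or over time, boils down to triangulation. A minimal sketch with illustrative numbers (not from any real camera rig):

```python
# Metric depth from two overlapping views (classic triangulation):
#   depth = focal_length_px * baseline_m / disparity_px
# The camera parameters below are made up for illustration.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity (or a bad match)")
    return focal_px * baseline_m / disparity_px

# Two views 0.5 m apart (two cameras, or one camera that moved 0.5 m between
# frames), 1200 px focal length: a feature shifted 30 px between views is 20 m away.
print(depth_from_disparity(1200, 0.5, 30))  # 20.0
```

The "over time while in motion" case is the same math with the vehicle's own movement supplying the baseline, which is why it's fair to call it sensor fusion.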


It was spectacularly good before 2021, 2021 is just when I noticed that it had become spectacularly good. 7.5 billion miles later, this appears to have been the correct call.


What are the techniques (and the papers thereof) that you consider to be spectacularly good before 2021 for depth estimation, monocular or not?

I do some tangent work from this field for applications in robotics, and I would consider (metric) depth estimation (and 3D reconstruction) starting to be solved only by 2025 thanks to a few select labs.

Car vision has some domain specificity (high similarity images from adjacent timestamps, relatively simpler priors, etc) that helps, indeed.


Depth estimation is but one part of the problem: atmospheric and other conditions that blind optical visible-spectrum sensors, lack of ambient light (sunlight), and more. Lidar simply outperforms (performs at all?) in these conditions, and provides hardware-backed distance maps, not software-calculated estimates.


Lidar fails worse than cameras in nearly all those conditions. There are plenty of videos of Tesla's vision-only approach seeing obstacles far before a human possibly could in all those conditions on real customer cars. Many are on the old hardware with far worse cameras


Interesting, got any links? Sounds completely unbelievable, eyes are far superior to the shitty cameras Tesla has on their cars.


There's a misconception that what people see and what the camera sees are similar. Not true at all. One day when it's raining or foggy, have someone record the driving through the windshield. You'll be very surprised. Even what the camera displays on the screen isn't what it's actually "seeing".


Yea.. not holding my breath on links to superman tesla cameras performing better than eyes


Monocular depth estimation can be fooled by adversarial images, or just scenes outside of its distribution. It's a validation nightmare and a joke for high reliability.


It isn't monocular though. A Tesla has 2 front-facing cameras, narrow and wide-angle. Beyond that, it is only neural nets at this point, so depth estimation isn't directly used; it is likely part of the neural net, but only the useful distilled elements.


I never said it was. I was using it as a lower bound for what was possible.


Always thought the case was for sensor redundancy and data variety - the stuff that throws off monocular depth estimation might not throw off a lidar or radar.


It doesn't solve the "Coyote paints tunnel on rock" problem though.


IIRC, that was only ever a problem for the coyote, though.

Source: not a computer vision engineer, but a childhood consumer of Looney Tunes cartoons.


Time for a car company to call itself "ACME" and the first model the "Road Runner".


Fog, heavy rain, heavy snow, people running between cars or from an obstructed view…

None of these technologies can ever be 100%, so we’re basically accepting a level of needless death.

Musk has even shrugged off FSD related deaths as, “progress”.


Humans: 70 deaths in 7 billion miles

FSD: 2 deaths in 7 billion miles

Looks like FSD saves lives by a margin so fat it can probably survive most statistical games.
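Spelling out the arithmetic behind those figures (taking both counts at face value; whether the two populations are comparable is a separate question raised below):

```python
# Deaths per billion miles, using the figures quoted above at face value.
human_deaths, fsd_deaths = 70, 2
miles_billions = 7.0

human_rate = human_deaths / miles_billions  # deaths per billion miles
fsd_rate = fsd_deaths / miles_billions

print(human_rate)             # 10.0
print(round(fsd_rate, 3))     # 0.286
print(human_rate / fsd_rate)  # ratio of roughly 35x
```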


How many of the 70 human accidents would be adequately explained by controlling for speed, alcohol, wanton inattention, etc? (The first two alone reduce it by 70%)

No customer would turn on FSD on an icy road, or on country lanes in the UK which are one lane but run in both directions; it's much harder to have a passenger fatality in stop-start traffic jams in downtown US cities.

Even if those numbers are genuine (2 vs 70) I wouldn't consider it apples-for-apples.

Public information campaigns and proper policing have a role to play in car safety, if that's the stated goal we don't necessarily need to sink billions into researching self driving


Is that the official Tesla stat? I've heard of way more Tesla fatalities than that..


There are a sizeable number of deaths associated with the abuse of Tesla's adaptive cruise control with lane centering (publicly marketed as "Autopilot"). Such features are commonplace on many new cars, and it is unclear whether Tesla is an outlier, because no one is interested in obsessively researching cruise control abuse among other brands.

There are two deaths associated with FSD.


This is absolutely a Musk defender. FSD and Tesla related deaths are much higher.

https://www.tesladeaths.com/index-amp.html


Autopilot is the shitty lane assist. FSD is the SOTA neural net.

Your link agrees with me:

> 2 fatalities involving the use of FSD


Tesla sales are dead across the world. Cybertruck is a failure. Chinese EVs are demonstrably better.

No one wants these crappy cars anymore.


I don't know what he's on about. Here's a better list:

https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashe...


Good ole Autopilot vs FSD post. You would think people on Hacker News would be better informed. Autopilot is just lane keep and adaptive cruise control. Basically what every other car has at this point.

"MacOS Tahoe has these cool features". "Yea but what about this wikipedia article on System 1. Look it has these issues."

That's how you come across


Autopilot is the shitty lane assist. FSD is the SOTA neural net.

Your link agrees with me:

> two that NHTSA's Office of Defect Investigations determined as happening during the engagement of Full Self-Driving (FSD) after 2022.


Isn't there a great deal of gaming going on with the car disengaging FSD milliseconds before crashing? Voila, no "full" "self" driving accident; just another human failing [*]!

[*] Failing to solve the impossible situation FSD dropped them into, that is.


Nope. NHTSA's criteria for reporting is active-within-30-seconds.

https://www.nhtsa.gov/laws-regulations/standing-general-orde...

If there's gamesmanship going on, I'd expect the antifan site linked below to have different numbers, but it agrees with the 2 deaths figure for FSD.


Better than I expected. So this was 3 days ago; is this for all previous models, or is there a cut-off date here?


I quickly googled Lidar limitations, and this article came up:

https://www.yellowscan.com/knowledge/how-weather-really-affe...

Seeing how it's by a lidar vendor, I don't think they're biased against it. It seems lidar is not a panacea: it struggles with heavy rain and snow much more than cameras do, and is affected by cold weather or any contamination on the sensor.

So lidar will only get you so far. I'm far more interested in mmWave radar, which, while much worse in spatial resolution, isn't affected by light conditions or weather, and can directly measure properties of the thing it's illuminating: material properties, the speed it's moving, its thickness.

Fun fact: mmWave-based presence sensors can measure your heartbeat, as the micro-movements show up as a frequency component. So I'd guess it would have a very good chance of detecting a human.

I'm pretty sure that even with much more rudimentary processing, it'll be able to tell if it's looking at a living being.
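The heartbeat trick works because chest micro-motion phase-modulates the reflected signal, so an FFT of the phase over time shows a peak at the heart rate. A toy simulation of that spectral idea (synthetic signal, not real radar I/Q data; all amplitudes and frequencies are illustrative):

```python
import numpy as np

# Toy model: a radar phase signal containing breathing (~0.25 Hz) and
# heartbeat (1.2 Hz = 72 bpm) micro-motions, plus noise. Real mmWave
# pipelines work on demodulated I/Q phase, but the spectral idea is the same.
fs = 50.0                     # samples per second
t = np.arange(0, 20, 1 / fs)  # 20 s observation window
rng = np.random.default_rng(0)
phase = (1.0 * np.sin(2 * np.pi * 0.25 * t)   # breathing: large, slow
         + 0.1 * np.sin(2 * np.pi * 1.2 * t)  # heartbeat: tiny, faster
         + 0.02 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(phase))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Strongest component in a plausible heart-rate band (0.8-3 Hz).
band = (freqs > 0.8) & (freqs < 3.0)
heart_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {heart_hz * 60:.0f} bpm")  # 72 bpm
```

The breathing component dominates the raw signal, but band-limiting the search pulls the heartbeat line out cleanly, which is why presence sensors can do this with fairly modest processing.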

By the way: what happened to the idea that self-driving cars would be able to talk to each other and combine each other's sensor data, so that if multiple cars are looking at the same spot, you'd get a much better chance of not making a mistake?


Lidar is a moot point: you can't drive with just lidar, no matter what. That's what people don't understand. The most common objection I hear is "What if the camera gets mud on it?" OK, then you have to get out and clean it, or it needs an auto-cleaning system.


Maybe vision-only can work with much better cameras with a wider spectrum (so they can see through fog, for example) and self-cleaning/zero upkeep (so you don't have to pull over to wipe a speck of mud off them). Nevertheless, LIDAR still seems like the best choice overall.


Autopilot hasn’t been updated in years and is nothing like FSD. FSD does use all of those cues.


I misspoke; I'm using Hardware 3 FSD.


Did you skip Anthropic models? I honestly can't take this seriously if you're not looking at all the leading providers but you did look at some obscure ones.


There are 151 models there right now (including all the latest Anthropic models), and it's all randomized; it's just that there aren't enough annotations for the Anthropic models to be elicited right now.


This just looks like a hard bag…wouldn’t you need to open it to use the device? I feel the goal of the suggestion was a way to hide the device while keeping it physically accessible.


The signals would still be able to escape out the front of the screen, so a proper Faraday cage effect would require full enclosure, since that's one of the core principles behind Faraday cages.
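There's a quantitative side to the "full enclosure" point: a cage leaks through any opening comparable to the wavelength it's supposed to block. A common rule of thumb (a heuristic, not a full shielding-effectiveness model) is that apertures need to be well under λ/10:

```python
# Rule-of-thumb check: does an opening of a given size leak a given signal?
# Apertures much smaller than wavelength/10 are commonly treated as sealed.
# This is a heuristic sketch, not a rigorous shielding calculation.
C = 299_792_458  # speed of light, m/s

def leaks(aperture_m: float, freq_hz: float) -> bool:
    wavelength = C / freq_hz
    return aperture_m >= wavelength / 10

# A phone-screen-sized opening (~15 cm) vs common radio bands:
print(leaks(0.15, 2.4e9))   # True: Wi-Fi/Bluetooth (wavelength ~12.5 cm) escapes
print(leaks(0.15, 900e6))   # True: 900 MHz cellular (wavelength ~33 cm) still leaks
print(leaks(0.001, 2.4e9))  # False: a 1 mm gap is effectively sealed at 2.4 GHz
```

Which is why a bag that exposes the screen doesn't block a phone: the opening is far larger than a tenth of the wavelength of anything the phone transmits.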


Transparent aluminium! Or..eeeerrr...copper!? Make it happen Mr Scott!

