misterdabb's comments

Distorting the economy is a real issue and should be talked about.

There is way too much front-loading of hiring and infrastructure build-out based on future (over-)promised capabilities, TAM, and profitability projections.

If we are lucky, it fizzles out into reasonable valuations and sound investments. If not, we will have had a giant misallocation of capital on an unprecedented scale, setting us back many, many years, plus a diamond-shaped workforce in some sectors causing labour shortages and reduced growth in the long run.


They don't. That's why you'll only hear growth metrics like ARR and not "sustainability" metrics like user retention.


It's a weird graph... It's specifically tokens per GPU, but the x-axis is "interactivity per second", so the y-axis includes Blackwell being twice the size as well as the gain from FP8 -> FP4. Note that the FP4 gain ends up counted multiple times, since only half as much data needs to go through the network as well.
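A rough back-of-the-envelope decomposition of how such a headline number could stack up (the factors below are my own assumptions, not NVIDIA's):

    # Hypothetical factors behind a headline "tokens per GPU" gain:
    die_scale   = 2.0  # Blackwell package is roughly two dies vs. one
    fp4_compute = 2.0  # FP4 throughput vs. FP8
    bandwidth   = 2.0  # half as many bytes per token through the network
    print(die_scale * fp4_compute * bandwidth)  # 8.0x before real architectural gains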


I'd think it's just a fine-tuned Llama model with Twitter content in a vectordb...
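In rough Python, such a pipeline could look like this minimal sketch (everything here is a stand-in: fake embeddings instead of a real embedding model or vector DB):

    import numpy as np

    tweets = ["tweet one ...", "tweet two ..."]
    # Stand-in for embeddings a real model would compute offline.
    tweet_vecs = np.random.rand(len(tweets), 384)

    def embed(text: str) -> np.ndarray:
        # Stand-in for a real embedding-model call.
        return np.random.rand(384)

    def retrieve(query: str, k: int = 1) -> list[str]:
        q = embed(query)
        sims = tweet_vecs @ q
        sims /= np.linalg.norm(tweet_vecs, axis=1) * np.linalg.norm(q)
        return [tweets[i] for i in np.argsort(-sims)[:k]]

    # Retrieved tweets get stuffed into the prompt for the fine-tuned model.
    prompt = "Context:\n" + "\n".join(retrieve("some question")) + "\n\nAnswer:"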


Doesn't llama's license forbid companies with over 700 million users from using it?


The paying users of Twitter are probably less than that.


Then Twitter is in the clear, most of those users are bots.


Is this itself Psyops? ;)


the real fun starts when you consider that everything is!

but in all seriousness, no. don't join the Army unless you truly want to dedicate a chunk of your life to an organization like that.

be a good neighbor, stay politically informed, vote, take care of your friends, help others in need, create art and support artists, etc.


Idk why people have the preconception that two (or more) sensors each decide on an outcome independently...

Suppose you have a color sensor and a form sensor and you have to identify an orange (the fruit). Clearly the two sensors in conjunction will do way better than labelling everything orange-colored an orange and everything round an orange.
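A toy version of that conjunction, naive-Bayes style (all probabilities invented for illustration):

    # Toy numbers: how likely each reading is for oranges vs. everything else.
    prior          = {"orange": 0.05, "other": 0.95}
    p_color_orange = {"orange": 0.9,  "other": 0.1}   # P(color reads "orange" | class)
    p_shape_round  = {"orange": 0.9,  "other": 0.2}   # P(shape reads "round"  | class)

    def posterior(likelihoods) -> float:
        scores = dict(prior)
        for lk in likelihoods:
            for c in scores:
                scores[c] *= lk[c]
        return scores["orange"] / sum(scores.values())

    print(posterior([p_color_orange]))                 # color alone: ~0.32
    print(posterior([p_color_orange, p_shape_round]))  # conjunction:  ~0.68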


It's for safety reasons so that the system doesn't enter an undefined state if one of those sensors fails, which it eventually will.

If the signal goes from "black nothing" to "orange round", you know you went from no object to an orange, but what if your form sensor breaks and it goes from "black nothing" to "orange nothing"?


You just made a point for redundancy, i.e. more sensors...

Sensor failures are independent events and need to be correctly identified no matter what.

Take lidar and vision: both say something about the distance and form of objects. Together they can achieve better performance. If one of them fails (and the failure is identified), the system falls back to the remaining sensor and will be somewhat worse, but should at least be able to pull over and warn you about the sensor failure.

Also, failures are much easier to identify when you have one or more other sensors as a baseline reference; in isolation it's much harder.
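A sketch of that cross-check-and-fall-back idea (readings and thresholds made up):

    # Both sensors estimate distance to the same object, in meters.
    def fuse(lidar_m: float, vision_m: float, max_disagree_m: float = 2.0):
        if abs(lidar_m - vision_m) <= max_disagree_m:
            return (lidar_m + vision_m) / 2, "ok"   # they agree: average them
        # Large disagreement: one sensor is suspect. Take the conservative
        # (closer) estimate and degrade instead of trusting either blindly.
        return min(lidar_m, vision_m), "degraded: pull over, warn driver"

    print(fuse(10.1, 10.4))  # (10.25, 'ok')
    print(fuse(10.1, 55.0))  # (10.1, 'degraded: pull over, warn driver')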


And there is ground infrastructure in place for autoland, aiding localization, etc. Much different from roads.


It's not really slower. It's comparable, with some clear benefits. But yeah, reality is catching up.

Apple is matching their competitors performance-wise (it was clear from how they positioned the M1 products that this would be the case).

Now one can say that it's lower power and low end, but then we can also say that when AMD in particular catches up to 5nm (within a year) and adds on-chip memory (within 2 years), they'd be ahead, with a chip of the same die size...


It is a general theme: people say they are switching from their specced-out 2019 16-inch MBP to these new M1 MacBooks, then get all upset when people point out that they are probably not the professionals the top-end MBPs were aimed at.

A lot of people run around with way more powerful laptops than they actually need for whatever they are doing, because they're bought through a business or are deductible. But news flash: buying a MacBook Pro doesn't make you a pro.

A question: if ALL pros were fine with 16GB of RAM, why does Apple offer 4x as much? Answer: because a lot of people will actually need it.

I am happy for the people who will get these new devices and be happy with them; I might get one too. But truth be told, most of us getting these devices could make do with the latest iPad Pro + Magic Keyboard just as well. (OK, I do need to code occasionally, but even for that there are pretty OK apps for iPad I could use.)

The expectations have to come the fuck down from where they are today, because the expectations put on these devices are just crazy. It's so overhyped that I think many will be disappointed when compatibility issues surface and when people realise that the 3x, 5x, 7x performance figures are mainly down to fixed-function hardware and accelerators, while the general performance increase is only slightly above the generational leap we are used to, with a bigger increase in efficiency.


Given that the M1 is a full node ahead of Zen 3 and two nodes ahead of whatever Intel has to offer, one would think that, on the same node, Intel and AMD will be just as capable.

But the truth is, comparing to future offerings is bullshit, and we have to stick to what's available today. Impressive power/performance and all that, I have to say. We will see what sustained load looks like and how it runs non-optimized software. But to put it in perspective, one CCX of Zen 3 on 7nm performs better (though it draws up to 65W), with approximately the same die size (although without the GPU and other things the M1 has).

