Hacker News | sofixa's comments

Gradium (https://gradium.ai/), a commercial offshoot of Kyutai (an open-source lab), is focusing on emotion (both being able to recognise emotion and understanding what emotion to use depending on context). I don't think any of their existing public models does that yet, but they demoed it pretty impressively at the ai-Pulse conference.

No? It's still the same Astro that you can move to any other provider that supports it - and it's just JavaScript, so pretty much everyone supports it.

For now.

While this is generally good advice, it only works if you already have women you're close with at that level. If the only women you know are work colleagues, you can't go around asking them for advice on dating (depends on your relationship with them of course, but usually, not work appropriate).

Perhaps that is part of the problem. Talking to women outside of a romantic context might be a good first step.

Yes, but that's not useful advice to someone who currently has none.

Boeing themselves, including their CEOs, kept repeating that bullshit. Even after the FAA finally recognised the issue, and rejected Boeing's first attempted fix, which relied on pilots being able to identify the situation and enact the procedure within 10 seconds (in various tests in a Southwest training center, it took around 30s on average). Then the FAA mandated a full redesign of the MCAS system to actually rely on two sensors and handle disagreements. And Calhoun kept repeating that "this wouldn't have happened with American pilots".

> they are tiresome to read about, and it doesn't lead to productive interesting discussion (which is supposed to be what the vote buttons are for here). Politics isn't 100% off topic for HN but mostly I come here to get away from it and I'm sure others do too.

I don't agree. Crypto scams get discussed at length here for days, but when it's a Trump crypto scam, it gets flagged and disappears.


> I swear every other one leaks right away, and those that don't can only be refilled once or twice before they do. So you end up going through like 10 of those a day

Yeah, if you're using that many, the solution is, and always has been, to get a proper reusable cup (ceramic, glass, whatever).


Right, but this just shows why these policies don't work in practice. People will just use 10 paper cups which are free, rather than cart around a big ceramic one.

Especially in situations where people don't even have an assigned spot in the office anymore, it's not exactly shocking that many will choose the easier route.


Wildly country-dependent, e.g. check the stats for the EU: https://ec.europa.eu/eurostat/web/products-eurostat-news/w/d...

> Absolute nightmare.

Yes, but still probably a million times easier for both the building management and the software vendor to have a SaaS for that, than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.), and have someone deploy, install, manage, update, etc. all of that.


Easier, maybe. But significantly worse. Parts of these systems have been built and engineered to be entirely reliable, with automatic hand-overs when some component fails or alternative routings when some connection is lost.

>than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.), and have someone deploy, install, manage, update, etc. all of that.

You don't need any of that. You need one more box in the electrical closet and one password-protected Wi-Fi network for all the crap in the building (the actual door locks and the like) to connect to.


And when that box fails, you're looking at how long with no access? Longer than any AWS outage.

The IT guy walks in and replaces/restarts the box instead of waiting for the gods of AWS to descend to earth and restart theirs. They have direct control vs. waiting for something magic to happen.

You also have real-time ETAs from an actual human local to the issue. Plenty of domains where your clients won't care if AWS is down for everyone.

The building has an onsite IT guy with enough spares to fix anything that could go wrong with the box?

Have you ever actually seen these systems in person? It's usually a microcontroller, which already rules out a ton of the stuff you're talking about. Serious places will buy 2-3 of them at the time of installation to have spares. The ones here are "user-replaceable" as well (unplug these three cables, replace the box, plug them back in). It's not some mysterious bunch-of-wires-on-arduino-pins magic box that nobody dares to touch.

The one at my previous office even had centralized management through an RS232 connection to a PC. No internet and related downtime at all. And I don't recall us ever being locked out because of that.


If you buy hardware from HID Global / Assa Abloy the box never breaks.

It's absolutely possible to have both a SaaS-based control plane and continued functioning if the internet connection/control plane becomes unavailable for a period. There's presumably hardware on site anyway to forward requests to the servers doing access control; it wouldn't be difficult to have that hardware keep a local cache of the current configuration. Done that way, you might find you can't change who's authorised while the connection is unavailable, but you can still let people who were already authorised into their rooms.

Yes, but your average webdev doesn't know how to program that SaaS, so the market is saturated with bad software.

> with redundant power, cooling, etc

The doors the system controls don't have any of this. Hell, the whole building doesn't have any of this. And it definitely doesn't have redundant internet connections to the cloud-based control plane.

This is fear-mongering when a passively cooled PC running a container image on boot will suffice plenty. For updates, a script that runs on boot and at regular intervals can pull down the latest image, with a 30s timeout if it can't reach the server.
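The update loop described above could look something like this, sketched in Python. The registry URL and image name are made-up placeholders, and using `docker pull` is just one way to do it:

```python
# Hypothetical boot/interval updater: pull the latest container image with a
# hard 30s timeout, so the controller never blocks on a dead network and just
# keeps running the image it already has.
import subprocess
import time

IMAGE = "registry.example.invalid/door-controller:latest"  # assumed image
CHECK_INTERVAL = 6 * 60 * 60  # re-check every six hours

def try_update() -> bool:
    """Attempt a pull; on timeout or failure, keep the current image."""
    try:
        subprocess.run(["docker", "pull", IMAGE], timeout=30, check=True)
        return True
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError, OSError):
        return False  # offline or registry down: stale image keeps running

if __name__ == "__main__":
    while True:
        try_update()
        time.sleep(CHECK_INTERVAL)
```

The key property is that a failed update is a no-op: the doors keep working on the last image, which is exactly the offline behaviour being argued for.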


What updates? That would be on a local network and have no internet connection, if done right.

I am guessing the main attraction of such a system is that owners can set the cards remotely and get data about it (ie: who accessed and when)

And? That doesn't mean, especially for a system with security impact (like door access), that it should never be updated.

Those devices can be trivially power cycled, and won’t have as many issues with dodgy power. Some PC somewhere with storage is a bigger problem.

> Some PC somewhere with storage is a bigger problem

Both an embedded microcontroller and a PC have storage. The reason you can power-cycle a microcontroller at will is because that storage is read-only and only a specific portion dedicated to state is writable (and the device can be reset if that ever gets corrupted).

Use a Buildroot/Yocto image on the PC with read-only partitions and a separate state partition that the system can rebuild on boot if it gets corrupted, and you'll have something that can be power-cycled with no issues. Network hardware is often internally Linux-based and manages fine for exactly this reason.
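The rebuild-state-on-boot step could be sketched like this in Python; the paths and the default-state contents are invented for illustration:

```python
# Hypothetical boot-time state check for a read-only-root appliance: validate
# the small writable state area and re-seed it from defaults if it's corrupt,
# so a hard power cycle can never leave the box unbootable.
import json
import os
import shutil

STATE_DIR = "/var/state"                              # assumed mount point
STATE_FILE = os.path.join(STATE_DIR, "controller.json")
DEFAULTS = {"schedule": "always", "badges": []}       # assumed default state

def ensure_state() -> dict:
    """Return valid state, wiping and re-seeding the partition if unreadable."""
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        # Corrupt or missing: rebuild from known-good defaults.
        shutil.rmtree(STATE_DIR, ignore_errors=True)
        os.makedirs(STATE_DIR, exist_ok=True)
        with open(STATE_FILE, "w") as f:
            json.dump(DEFAULTS, f)
        return dict(DEFAULTS)
```

Because the root filesystem is read-only and the state is disposable, the worst case after an unclean power cut is a reset to defaults rather than a bricked controller.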


PCs are orders of magnitude more complex, with a lot more to break. Sounds like a whole lot of work for… what?

Assuming the internet connection and AWS work of course. Which they won’t always, then oops.


A large number of embedded microcontrollers are just PCs running Yocto Linux configured as GP said. You can save money with a $0.05 microcontroller, but in most cases the development costs to make that entire system work are more than just buying an off-the-shelf Raspberry Pi.

If you're relying on AWS you either way have a "PC" to relay communication between AWS and the keycard readers & door latches.

There are IoT libraries that don’t require that.

You know what else would suffice plenty? Physical keys and mechanical locks. They worked (and still work) without electricity. The tech is mature and well-understood.

The reason for moving away from physical keys is that key management becomes a nightmare; you can't "revoke" a key without changing all the locks which is an expensive operation and requires distributing new keys to everyone else. Electronic access control solves that.

You might find Matt Blaze's paper on vulnerabilities in master-keyed physical locks interesting:

https://eprint.iacr.org/2002/160.pdf


It's also easier to keep all the water for fighting fires in trucks that are remote, than to run high pressure water pipes to every room's ceilings, with special valves that only open when exposed to high heat. Imagine the overhead costs!

Cooling for a card access system?

A card access system requires zero cooling, it’s a DC power supply or AC transformer and a microcontroller that fits in a small unvented metal enclosure. It requires no management other than activating and deactivating badges.

There is no reason to have any of the lock and unlock functionality tied to the cloud, it’s just shitty engineering by a company who wants to extract rent from their customers.


The server running that system needs cooling, yes. You can't just shove it in a closet with zero thought and expect it to not overheat/shut down/catch fire, unless you live in the Arctic.

There are card access systems that don’t require a computer, just a microcontroller. Perhaps if you need to integrate with multiple sites or a backend system for access control rules you can add computers, but card access systems are dead ass simple for a reason; they need to be reliable. The good systems that have computers still allow access in the event of a network failure.

Any access control system that fails in the event that it loses internet connectivity is poorly designed.


>You can't just shove it in a closet with zero thought and expect it to not overheat/shut down/catch fire

Actually in almost all products meant for real companies doing real work, this is an explicit design requirement.

Every cash register runs off of a computer that sits in a tiny metal oven with no cooling and is expected to run 24/7 without fail.

The difference between a tech gadget and a real world, real purpose appliance.


You must be young. We used to have handhelds and computers with no cooling at all.

I have a little fanless mini PC that runs various stuff around my house, including homeassistant. The case is basically a big heat sink.

It started crashing during backups.

The solution was to stick a fan on it. :( This is literally a box _designed to not need a fan_. And yet. It now has a fan and has been stable for months. And it's not even in a closet - it's wall-mounted with lots of available air around it.


I'm guessing it's the HDD that's failing. Had such mysterious failures with my NVR (the Cloud Key thingie) from UniFi. Turns out, HDDs don't like operating in 60+ degree Celsius heat all the time - but SSDs don't mind, so fortunately the fix was just to swap the drive for a solid state one.

I think it was the DRAM on mine, oddly. It already uses an nvme ssd. Could have been the CPU, of course - the error was manifesting as memory corruption but that could well have been happening during read or write.

That is, in fact, exactly what we typically see in reality with local access control system head-ends.

At the doors, there might be keycards, biometrics and PINs (oh my!) happening.

But there's usually just not much going on, centrally. It doesn't take much to keep track of an index of IDs and the classes of things those IDs are allowed to access.


You're saying that as if we never had Z80-based microcontrollers doing all this without problems. Complete with centralized control and all.

The system was not built with resiliency in mind, with no consideration for the shit-show that will unfold once the system or the link goes down. I wonder if exit is regulated (you can still fully exit the building from any point using the green buttons, and I think those are supposed to work even if electricity is down).

> Yes, but still probably a million times easier for both the building management and the software vendor to have a SaaS for that, than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.)

An isolated building somewhere in the middle of the jungle depending for its operation on some American data center hundreds of miles away is simply negligence. I am usually against regulations, but clearly for certain things we can't trust that all humans will be reasonable.


In the US, the answer is that exit would have to work in the event that AWS is down or power is out. Some exceptions exist for special cases.

You can run vLLM with AMD GPUs supported by ROCm: https://rocm.docs.amd.com/en/latest/how-to/rocm-for-ai/infer...

However from experience with an AMD Strix Halo, a couple of caveats: it's drastically slower than Ollama (tested over a few weeks, always using the official AMD vLLM nightly releases), and not all GPUs were supported for all models (but that has been fixed).


vLLM usually only plays out its strengths when serving multiple users in parallel, in contrast to llama.cpp (Ollama is a wrapper around llama.cpp).

If you want more performance, you could try running llama.cpp directly or use the prebuilt lemonade nightlies.


But vLLM was half the t/s of Ollama, so something was obviously not ok.

Apple hardware has "only" a 36% margin, while their software and services have a 75% margin. They definitely want to make more money on software with absurd margins.

A huge portion of that margin is from the 33% App Store cut which is infinite margin for them, effectively.

"software and services" really should be broken out from the App Store cut.


Is margin profit/revenue or profit/costs? I think it is the former, so it should be “effectively 100%” right?

Anyway, this isn’t really a meaningful quibble argument-wise, it is obvious what you mean!

