Hacker News | montereynack's comments

Cool to see the principles behind this, although I think it’s definitely geared towards the consumer space. Shameless self plug, but related: we’re doing this for industrial assets/industrial data currently (www.sentineldevices.com), where the entire training, analysis and decision-making process happens on customer equipment. We don’t even have any servers they could send data to; our model explicitly keeps everything on-device (so I found the network principle the article discussed really interesting). This supports use cases in SCADA/industrial automation where you just can’t bring data to the outside world. There’s imo a huge customer base and set of use cases that are just casually ignored by data/AI companies, because actually providing a service where the customer/user is turns out to be too hard, and they’d prefer to have the data come to them while keeping vendor lock-in.

The funny part is, in discussions with customers we actually have to lean in and be very explicit about the “no, this is local, there’s no external connectivity” piece, because they don’t hear that anywhere else, and sometimes we have to walk them through it step by step to help them understand that everything really is happening locally. It also tends to break the brains of software vendors. I hope local-first software starts taking hold more in the consumer space, so people start getting used to it in the industrial space too.


It doesn't help that all the SCADA vendors are jumping on the cloud bandwagon and trying to push us all in that direction. "Run your factory from your smartphone!" Great, now I'm one zero-day away from some script kiddie playing around with my pumps.


An exciting space and I'm glad you and your team are working in it.

I looked over your careers page and see that all of your positions are non-remote. Is this because the limitations of working on local-first software require you to be in-person? Or is it primarily a management decision?


Gonna throw in my hat and say that if you’re working on industrial applications (like energy or manufacturing) give us a holler at www.sentineldevices.com! Plug-and-play time series monitoring for industrial applications is exactly what we do.


Gonna throw in my hat here, time series anomaly detection for industrial machinery is the problem my startup is working on! Specifically we’re making it work offline-by-default (we integrate the AI with the equipment, and don’t send data to any third party servers - even ours) because we feel there’s a ton of customer opportunities that get left in the dust because they can’t be online. If you or someone you know is looking for a monitoring solution for industrial machinery, or are passionate about security-conscious industrial software (we also are developing a data historian) let’s talk! www.sentineldevices.com
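
To give a flavor of what “offline-by-default” can mean in practice, here’s a toy sketch (illustrative only, not our actual product code; the class name and thresholds are made up) of a streaming anomaly detector that runs entirely in-process, with no network calls:

```python
from math import sqrt

class StreamingZScore:
    """Online mean/variance (Welford's algorithm) that flags samples
    whose z-score exceeds a threshold. Everything stays in-process:
    no network calls, no external storage."""

    def __init__(self, threshold=4.0, warmup=30):
        self.threshold = threshold
        self.warmup = warmup  # samples to observe before flagging anything
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0         # running sum of squared deviations

    def update(self, x):
        """Ingest one sample; return True if it looks anomalous."""
        anomalous = False
        if self.n >= self.warmup:
            std = sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        # Welford update (a real system might hold out flagged points
        # instead of learning from them)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# A steady signal, then one spike
detector = StreamingZScore()
readings = [10.0 + 0.1 * ((i * 7) % 5) for i in range(100)] + [25.0]
flags = [detector.update(r) for r in readings]
print(flags[-1])  # prints True
```

Real industrial signals need far more care (seasonality, regime changes, multivariate sensors), but the core point stands: nothing here requires data to leave the device.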


We’ve finally managed to give our AI models existential dread, imposter syndrome and stress-driven personality quirks. The Singularity truly is here. Look on our works, ye Mighty, and despair!


Great... Our AI overlords are going to be even more toxic than the leaders we have now.


Just what we need in our current timeline. /s


Seconding the other question, would be curious to know


‘"An ML engineer at Slack says they don’t use messages to train LLM models," Orosz wrote. "My response is that the current terms allow them to do so. I’ll believe this is the policy when it’s in the policy. A blog post is not the privacy policy: every serious company knows this."’

I really wish this advice were followed to the letter. I’m sick and tired of trying to read into policies on AI training (or AI anything these days) that are pure blog posts about how the service works and what the data protections are. Even from Microsoft, no less! Even their “documentation” has a bad habit of referring to half-decisions alluded to in a blog post, saying this is how it works, just trust us, interpret this policy vaguely because we interpret it vaguely. None of the cloud vendors will just sit down with you and sign a contract saying what they will or will not do; they all default to as minimal responsibility as possible. And then they have the guts to jump in on regulation - there are zero components of the current way these giants do business in AI that are amenable to regulation, at least not in a way that helps you unless you’re an uber-enterprise.


Sentinel Devices | https://www.sentineldevices.com/ | Atlanta, GA | Full-time | Onsite

Sentinel Devices develops "zero-cloud" anomaly detection devices for industrial machinery. Our platform, OTAware, reduces the time users have to spend identifying and tracking down issues in their machines by analyzing machine signals and giving engineers useful pointers on what's going on, as well as identifying signs of issues that would otherwise go unnoticed. Our unique approach is that we do EVERYTHING - including data storage, ML training and decision-making - entirely on embedded devices; we never send data outside of the customer facility. This makes us an instant buy-in for security-sensitive customers or customers with remote operations, such as defense or oil & gas.

We are ACTIVELY hiring for founding software engineer and AI/ML engineer roles: https://sentinel-devices.breezy.hr/

Also, if you're curious about the roles or the tech feel free to reach out at [email protected]


I sympathize a lot with the headline statement; it boggles my mind how many data residency/integrity/confidentiality measures get taken around massive data silos (and how many infra teams companies bring to bear to manage and scale them, and then inevitably publish gospel articles on the web about) when companies could just opt… NOT to collect that data. I really like the model of “It stays on your device, we never see it. At most we get bare-minimum location statistics.” Although I question the assertion that their metrics system won’t be turned against them; it seems obvious that anything programmed can be reprogrammed or updated, especially in the modern update-focused age. I don’t think they addressed that beyond a general statement that they took pains to ensure their users won’t ever be spied on. I'd be interested in a technical article on that.

Side note, we at Sentinel Devices are taking exactly this “we don’t hold your data” approach for industrial machinery. Think automated AI pipelines that are air-gapped. And we’re hiring! If you’re interested, reach out to [email protected]


Since this is a topic that’s near and dear to my heart, I'm going to quickly plug my own startup that’s doing exclusively this: www.sentineldevices.com. We make industrial equipment smart enough to self-monitor and self-report issues using AI, but we specialize in making AI that does EVERYTHING (including training) on embedded devices (think something only marginally more powerful than an RPi 4), so we never call out to a server. As the article notes, a big challenge has been making the AI and all supporting software resilient and able to “bounce back” from random occurrences in industrial environments (which can be extremely unpredictable and unique). Also as the article notes, this pretty handily opens up the Defense market, since we don’t need to clear a cloud component.
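
For a feel of what “training on the device itself” means, here’s a deliberately tiny sketch (not our actual models or pipeline; everything below is made up for illustration) of one-pass incremental training with constant memory - the kind of workload that fits comfortably on RPi-class hardware:

```python
# Toy on-device training: a one-pass SGD fit of a linear model.
# Constant memory, each sample is seen once and discarded, and
# nothing ever leaves the process.

def sgd_fit(stream, lr=0.05):
    """Fit y ~ w*x + b incrementally from an iterable of (x, y)."""
    w, b = 0.0, 0.0
    for x, y in stream:
        err = (w * x + b) - y
        w -= lr * err * x  # per-sample gradient step
        b -= lr * err
    return w, b

# Synthetic "sensor" stream following y = 2x + 1
stream = (((i % 100) / 100.0, 2 * ((i % 100) / 100.0) + 1)
          for i in range(5000))
w, b = sgd_fit(stream)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Real on-device training is of course much more involved, but the shape is the same: the model updates in place as samples stream past, and the raw data never has to be stored or shipped anywhere.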

Feel free to reach out at [email protected] if you’d like to chat. We’re building and trying to take people on part-time if anyone’s interested in this space! Also if anyone just wants to talk about challenges in the space I’m happy to, HN usually gives really great conversation partners.


Sorry, but this is untrue; action potentials in neurons have extremely complex interplay with each other, including refractory periods and chemically induced changes in how they fire. Neurons don’t just “fire” or “not fire”; they adaptively change the strength of their firing constantly, unpredictably and continuously.
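
As a toy illustration (a cartoon, nothing like real biophysics), here’s a leaky integrate-and-fire neuron with a simple adaptation variable - even this crude model shows continuous membrane dynamics and firing that changes over time, rather than a fixed binary “fire / don’t fire”:

```python
# Toy leaky integrate-and-fire neuron with spike-frequency adaptation.
# All constants are arbitrary; the point is only that spikes emerge
# from continuous dynamics and the firing rate adapts on its own.

def simulate(current=1.6, steps=2000, dt=0.1):
    tau_v, tau_w = 10.0, 100.0   # membrane / adaptation time constants
    threshold, reset = 1.0, 0.0
    v = 0.0                      # membrane potential (arbitrary units)
    w = 0.0                      # adaptation: builds up with each spike
    spike_times = []
    for step in range(steps):
        v += dt * (-v / tau_v + current - w)  # continuous leaky integration
        w += dt * (-w / tau_w)                # adaptation slowly decays
        if v >= threshold:       # threshold crossing = "spike"
            v = reset
            w += 0.3             # each spike strengthens adaptation
            spike_times.append(step * dt)
    return spike_times

spikes = simulate()
gaps = [b - a for a, b in zip(spikes, spikes[1:])]
# the inter-spike intervals stretch out as adaptation accumulates
print(len(spikes), gaps[0] < gaps[-1])
```

Even here, whether and when the neuron fires depends on its recent history, which is a long way from a simple binary gate.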


Yes, and there are things like feedback / reflection, self-modifying-like behaviour, lossy memory, emotional state, tiredness, aging, loads of drugs & hormones to influence the process, etc, etc.

Buuuttt... it's possible that few if any of those things are needed to capture the essence of a brain's functionality.

Maybe it's simply a matter of size. Maybe some configuration tweaks. Perhaps a different architecture.

We simply don't know - yet.


> few if any of those things are needed to capture the essence of a brain's functionality

I'm pretty sure all those things being dismissed are the essence of a brain.

