Hacker News | wooptoo's comments

The postcode doesn't tell the whole story. But what you can do is use an IP geolocation service, which should narrow down your location enough that typing in the entire address is no longer necessary.

E.g. using something like https://ipinfo.io/json and then typing in a full postcode and street name + number should work well in most cases.
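
A minimal sketch of that approach, assuming a browser form (the element IDs are hypothetical; `country`, `city`, and `postal` are fields that ipinfo.io's JSON response does include):

```typescript
// Prefill only the coarse address fields from IP geolocation,
// leaving street name + number for the user to type.
async function prefillAddressForm(): Promise<void> {
  const res = await fetch("https://ipinfo.io/json");
  if (!res.ok) return; // on failure, just leave the form empty

  const info: { country?: string; city?: string; postal?: string } = await res.json();

  for (const [id, value] of Object.entries({
    country: info.country,
    city: info.city,
    postcode: info.postal,
  })) {
    const input = document.getElementById(id) as HTMLInputElement | null;
    if (input) input.value = value ?? "";
  }
}
```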


IP geolocation is increasingly useless, especially for mobile users. The best it can do is give you the correct country and maybe get you into the right region.

I work for IPinfo. Has our data been inconsistent for you? We actually invest heavily and continuously in data accuracy. I think for hosting IP addresses we are nearing the highest level of accuracy possible, especially with data center addresses. We are investing in novel, cutting-edge research for carrier IP geolocation.

I am curious about your experience with us so far.


That link nailed me perfectly. I'm on my phone. Connected to wifi, like most people probably are. Chilling in bed or on the toilet.

If you're on cell service... yeah, probably less accurate. Not sure if it makes the form harder to fill out if you have to change some of the fields.

What I've started doing for my personal app, though, is adding a "guess" button. It fills in the form using heuristics, but it's opt-in. It fills out around 10 fields automatically, and I've tuned it so it's usually right; when it isn't, correcting a few fields is still quicker.
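
Something like this opt-in "guess" button, as a rough sketch (the field IDs and heuristics here are made up for illustration):

```typescript
// Heuristics fill the form only when the user clicks "guess",
// and never overwrite something they already typed.
const guesses: Record<string, () => string> = {
  country: () => navigator.language.split("-")[1] ?? "", // e.g. "en-GB" -> "GB"
  city: () => localStorage.getItem("lastCity") ?? "",    // remembered from a prior order
};

document.getElementById("guess-button")?.addEventListener("click", () => {
  for (const [id, guess] of Object.entries(guesses)) {
    const input = document.getElementById(id) as HTMLInputElement | null;
    if (input && input.value === "") input.value = guess();
  }
});
```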


I work for IPinfo. The accuracy you see is actually inferred data. Our IP address location should not pinpoint anyone perfectly unless that IP address belongs to a data center of some sort. The highest accuracy for a non-data-center IP address is usually at the ZIP code level. For carrier IP addresses, we currently do one data update per day. If we did more, I guess the accuracy of mobile IP addresses would improve, but on an overall scale the gain would be quite minuscule.

Our country-level data (which is free) is 10-15 times larger than the free/paid country-level data out there. We constantly hear that the size of the database is an issue. The size is a consequence of accuracy in the first place. So, it is a balancing act.


> Our IP address location should not perfectly pinpoint anyone, unless that IP address is a data center of some sort.

By perfectly, I meant it got my city and ZIP correct, but I looked up the lat/lng and it's a 5-minute drive away. So pretty dang close!

Not sure how you got it that close if it's only supposed to point to the nearest data center.


What if I order something on the road and want it delivered to my home? Or what if I want to order something over mobile? My mobile IP is often 1500km away from where I live.

Autofill solves all of that with an implementation cost that approaches zero.


Amazing how an entire profession that until yesterday would pride itself on precision, clarity (in thought and in writing), efficiency, and formality has now descended into complete quackery.


I can understand the benefit of XML if there is at least a three-level variable structure to share with the LLM. If there is strong consistency in a repeated structure of three or more levels, then JSON ought to be sufficient. If there is just a one- or two-level structure, it feels like unnecessary quackery, and possibly reflective of a poorly trained model if the structure is a genuine necessity.
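
For illustration, here's the same hypothetical three-level structure (document -> section -> paragraph) both ways; with a regular, repeated shape, the JSON version carries the same information with less ceremony:

```typescript
// A consistent, repeated three-level structure. Payload is hypothetical.
const doc = {
  sections: [
    { heading: "Intro", paragraphs: ["First paragraph."] },
    { heading: "Usage", paragraphs: ["Second.", "Third."] },
  ],
};

// Regular, repeated shape: JSON is enough.
const jsonPrompt = JSON.stringify(doc, null, 2);

// XML tags mainly buy you explicit boundaries, at the cost of verbosity.
const xmlPrompt = `
<document>
  <section heading="Intro"><p>First paragraph.</p></section>
  <section heading="Usage"><p>Second.</p><p>Third.</p></section>
</document>`;

console.log(jsonPrompt.length, xmlPrompt.length);
```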


Are you talking about the office of the President of the United States?

This vague posting is kind of dumb.


It's a simple observation. I'm not here to win internet points. I've never before seen so much cargo-culting and mystic belief among engineers.


A comment on libxml2, not on your work: Funny how so many companies use this library in production and not one steps in to maintain the project and patch the issues. What a sad state of affairs we are in.


About a day after I resigned as maintainer, SUSE stepped in and is now maintaining the project. As announced here [1], I'm currently trying a different funding model and started a GPL-licensed fork with many security and performance improvements [2].

It should also be noted that the remaining security issues in the core parser have to do with algorithmic complexity, not memory safety. Many other parts of libxml2 aren't security-critical at all.

[1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/976

[2] https://codeberg.org/nwellnhof/libxml2-ee


Hi Nick, first of all thank you for your work and dedication through the years.

Second, I found this entirely by accident just now: https://www.sovereign.tech/programs/fellowship

> For the duration of the fellowship, one “maintainer-in-residence” will be employed up to full-time (32-40 hours per week) as part of the Sovereign Tech Agency team.
>
> This option offers the maintainer the personal and professional advantages of being part of a team, as well as the stability of being employed to continue working on critical FOSS infrastructure.
>
> This position is only available for maintainers located in Germany,


Yeah I agree, maintaining OSS projects has been a weird thing for a long time.

I know a few companies have programs where engineers can designate specific projects as important and give them funds. But it doesn't happen enough to support all the projects that currently need work; maybe AI coding tools will lower the cost of maintenance enough to improve this.

I do think there are two possible approaches that policy makers could consider.

1) There could probably be tax credits or deductions for SWEs who 'volunteer' their time to work on these projects.

2) Many governments have tried to create cyber reserve corps. I bet they could designate people as maintainers of key projects they rely on, which would maintain both the projects and a pool of people skilled with the tools they deem important.


There should be public works grants to maintain them, or else a foundation specifically to maintain them funded with donations, grants, etc.

The alternative is another XZ backdoor.


> 1) There could probably be tax credits or deductions for SWEs who 'volunteer' their time to work on these projects.

Why exclusive to SWEs? They tend to be more time-constrained than financially constrained (assuming the "SWE" comes from a job description). I'd be more interested in making sure that those with less well-paying jobs are able to access such benefits rather than stacking them onto those already (probably) making six figures.

Of course, the problems arise in the details. Define "volunteer": if $DAYJOB also uses it (in a way related to my role), is it actually, instead, wage theft? Also, quantifying the benefit is a sticky question. Is maintaining 10k emoji packages on NPM equivalent to volunteer work on libcurl? Could it ever be? Is it volunteer work if it ends up with a bug bounty payday? Google's fuzzing grant incentives?


Funny how this myth won't die. Checking the commit history, plenty of companies are contributing:

Red Hat, Apple, Samsung, Huawei, Google, etc...


We need a tax on companies using or selling anything OSS, with the funds going back into OSS. The wealth it has generated is insane, and nearly all of it is built on the donated time of experts.


Which is approximately all companies, because all companies use software and, depending on what the researchers look at, 90% to 98% of codebases depend on OSS.

Conclusion: support OSS from general taxation, like the Sovereign Tech Fund in Germany does. It's a public good!


That's a bit unclear on the concept. It's not open source if you have to pay for it. How about charging money for your code instead?


Well that's not strictly true.

OSS is allowed to make money, and there are projects that require paid licenses for commercial use.

The source is available and collaborative.

Qt states this on their site: "Simply put, this is how it works: In return for the value you receive from using Qt to create your application, you are expected to give back by contributing to Qt or buying Qt."


There is nothing in the open source licenses that prevents charging money; in fact, non-commercial clauses are seen as incompatible with the Debian Free Software Guidelines.

And there are a lot of companies out there that make their money based on open source software; Red Hat is maybe the biggest and most well known.


I meant in the sense that someone else can redistribute the source for free, not that the company has to do it.

> The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.

https://opensource.org/osd


Feels like tragedy of the commons.


Feels more like you don’t understand the concept of the tragedy of the commons.

EDIT: Sorry, I’ve had a shitty day and that wasn’t a helpful comment at all. I should’ve said that as I understand it TOTC primarily relates to finite resources, so I don’t think it applies here. Sorry again for being a dick.


The finite resource here is unpaid developer time. Everyone takes advantage of it until the developer burns out.


Have you tried Micro? https://micro-editor.github.io/



Hmm. Yes. Does look like vibe work. Other comments here point to some technical inconsistencies that may support this claim further.


The linked article (with the original incident) was really good:

https://www.mh4ckt3mh4ckt1c4s.xyz/blog/aur-chaos-malware-ana...


What's the deal with `Devboxes`? Is this a common thing? Sounds very clunky for regular (human-driven) development.


Seems like a compliance thing? I too run my LLMs inside some sort of containment and do "manual" development inside the same environment, but it wouldn't make sense to have that containment be remote, so I'm guessing they need to have some sort of strict control over it?


While there are compliance/security benefits, it is not the primary motivation.

If you have fairly complicated infrastructure, it can be way more efficient to have a pool of ready-to-go beefy EC2 instances on a recent commit of your multi-GB git repo instead of having to run everything on a laptop.


Amazon developers use similar devboxes. I think it is mostly so that developers can use a production-like Linux environment with integrated Amazon dev tooling. You're not required to use a devbox, but it can be easier and more convenient than running stuff on your laptop.


It's not uncommon. It's more common at large companies. For example, Google calls theirs "Clients in the Cloud".


The collective AI hysteria is now in full swing.


Hi, I'm trying to find Lennart Poettering's presentations, but they don't seem to be available yet? https://fosdem.org/2026/schedule/speaker/lennart_poettering/ Thanks


The FOSDEM speakers are sent emails to review and approve the video recordings (this involves rudimentary stuff like checking the start and end times if the automated system didn't get them right, choosing one of the three audio channels, etc.). The recordings that have been reviewed and approved are the ones online by now.


Look forward to ye olde uncle Lennart's old-timey sales pitch.

I'm gonna summarize the Varlink talk: DBus is, and I quote, "very very very complex" and his system with JSON for low-level IPC is, in fact, the best thing since sliced bread and has no significant flaws. It works basically just like HTTP so the web people will love it. Kernel support for more great shit pending! I'm not sure where the hardon for a new IPC system with lernel (keeping that typo) support is from, but he's been trying for 15 years now. AFAICT, the service discovery problem could be solved by a user space service without much trouble. I mean if the whole thing wasn't an exercise in bad technological taste.


I think you are misrepresenting this:

Varlink is based on much more conventional UNIX technology than D-Bus, technology that is decades old: you connect to a named UNIX socket through its socket file in the filesystem (man page: unix(7)).

This is an old mechanism and it is known to work well. It does not require a broker service, it works right at system startup, and it does not require a working user database for permission checks (which would be a circular dependency for systemd in some configurations). If at all, I am surprised that systemd didn't use that earlier.

The main thing that Varlink standardizes on top of that is a JSON-based serialization format for a series of request/response pairs. But that seems like a lightweight addition.

It also does not require new kernel support to work; the kernel support is already there. He mentioned in the talk that he'd like to be able to "tag" UNIX sockets that speak varlink as such, with kernel support, but that is not a prerequisite to use this at all. The service discovery, as he also said in the talk, is simply done by listing socket files in the file system, and by having a convention for where they are created.
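
A minimal sketch of such a call from Node.js: a JSON object terminated by a NUL byte over a pathname UNIX socket, answered the same way. The socket path is illustrative (real ones live under /run by convention); org.varlink.service.GetInfo is the introspection method from the varlink base interface.

```typescript
import * as net from "node:net";

const sock = net.createConnection({ path: "/run/io.example.service" });

sock.on("connect", () => {
  // Each varlink message is a JSON object followed by a NUL byte.
  sock.write(JSON.stringify({ method: "org.varlink.service.GetInfo" }) + "\0");
});

let buf = "";
sock.on("data", (chunk) => {
  buf += chunk.toString("utf8");
  const end = buf.indexOf("\0");
  if (end !== -1) {
    console.log(JSON.parse(buf.slice(0, end))); // reply parameters
    sock.end();
  }
});
```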


What do you connect to, when you connect to varlink, if there is no broker service?


It's kinda weird to present this as a DBus alternative when it doesn't even offer the same facilities, particularly many-to-many communication.

Though I have a pretty good idea of where a Varlink broker would turn up and which init system it would be tied to.


Very typical systemd-developer style.


Those are pathname UNIX domain sockets, so you address them through the socket file, which is conventionally stored somewhere under /run.

You can run "netstat --listening --unix" to list the UNIX domain servers on your system, to get an impression.

See https://man7.org/linux/man-pages/man7/unix.7.html


And what does the socket connect to?


I do not share your view of an old-timey sales pitch, at least for the talk about systemd-nspawn OCI container support.

If anything, that talk was a tad low-effort, with even dismissive answers ("Yes" and "No?" as full answers to audience questions, with no follow-up?!). Still very informative though!


The Varlink talk really was very salesy for a FOSDEM presentation. Shouldn't be long until the recording becomes available; feel free to tell me I was wrong after watching it.


It's mainly rehashed. I think I've seen the same talk twice before? At least once.

It's a very "I've made a cool thing. This is what I think is cool about it" type of talk. Which I don't think is uncommon for FOSDEM. Maybe a bit uncommon for a higher profile figure like Lennart.


> It's mainly re-hashed. I think I've seen the same talk twice before? At least once.

He held a similar talk at All Systems Go I think (I missed the talk here at FOSDEM).

> It's a very "I've made a cool thing. This is what I think is cool about it" type of talk.

Varlink isn't something he just made up; he merely "adopted" it (started making use of it). It existed before, but I don't know of anything that really made use of it before.


Who made it up, then?

The official-looking website at https://varlink.org doesn't give any information about who the authors are, as far as I can tell, but the screenshots show the username "kay". There's a git repo for libvarlink [1] where the first commits (from 2017) are by Kay Sievers, who is one of the systemd developers.

An announcement post [2] from later in 2017, by Harald Hoyer, says that the varlink protocol was created by Kay Sievers and Lars Karlitski in "our team", presumably referring to the systemd team.

So the systemd developers "adopted" their own thing from themselves?

[1] https://github.com/varlink/libvarlink

[2] https://harald.hoyer.xyz/2017/12/18/varlink/


While I guess you aren't wrong, I also wouldn't say you are entirely correct that Kay is a systemd developer. He used to work on udev, but hadn't been active on it in any meaningful way for the 2 years before varlink's release [1]. What it was made for I can't really say, but Lennart didn't start integrating Varlink until a while after its release (I thought it was around 2021 when he started making use of it, but after another check it seems the varlink work in systemd started in 2019 [2]).

[1]: https://github.com/systemd/systemd/commits/main/?author=kays...

[2]: https://github.com/search?q=repo%3Asystemd%2Fsystemd+varlink...


Kay Sievers' Wikipedia page cites a blog post by Lennart Poettering [1] which says that systemd was designed in "close cooperation" with Kay Sievers and that Harald Hoyer was also involved, so it seems pretty clear that he's on the team that develops systemd, the team that Harald Hoyer referred to as "our team". All three of them gave a talk [2] together in 2013 about what they were developing.

If Lennart Poettering "adopted" varlink, he seems to have done so from members of his own team ("our team") who created varlink and who are also fellow co-creators of systemd.

[1] https://0pointer.de/blog/projects/systemd.html#faqs

[2] https://www.youtube.com/watch?v=_rrpjYD373A


It takes time before all videos have been edited and reviewed for publishing.

You can see the progress schedule here: https://review.video.fosdem.org/overview


Hehe, I'm eagerly waiting for this one as well, as I'd be extremely happy to replace the hack I use to run Docker images with `systemd-nspawn` containers served from the Nix store.


> The models we have in mind for attestation are very much based on users having full control of their keys.

FOR NOW. Policies and laws always change. Corporations and governments somehow always find ways to work against their people, in ways which are not immediately obvious to the masses. Once they have a taste of this, there's no going back.

Please have a hard and honest think on whether you should actually build this thing. Because once you do, the genie is out and there's no going back.

This WILL be used to infringe on individual freedoms.

The only question is WHEN? And your answer to that appears to be 'Not for the time being'.

