
Sure.

In practice, in federated networks bad actors end up being blacklisted. It does not provide any "formal" guarantee, but… it tends to work well enough. This specific "deletion request" feature should of course always be seen as a convenience, and absolutely not as a security measure.

As with many engineering things, it's tradeoffs all the way down. For instant messaging, a federated approach, using open protocols, offers what I value most: decentralisation, hackability, autonomy, open source. My options in this space are Matrix or XMPP. I have not attempted to self-host a Matrix server, but have been very happy with my [prosody](https://prosody.im/) instance for almost a decade now.


I don't know what's wrong with XMPP, other than that the network effect collapsed when the Gmail chat thing was killed, and the mobile client options were poor for a very long time.

Matrix has the appearance of being a drop-in replacement for Slack or Discord, but the design decisions seem so compromised that the only explanation is that they did manage to establish a (somewhat weak) network effect? It certainly is not a good look for an open source project to be running on Slack or Discord (free/cheap plans rugpulled, or soon to be). Then that leaves IRC, which has a network effect collapsing at a much slower pace.

I never got far enough to try hosting a Matrix server, but reading the linked post -- Matrix definitely is not GDPR compliant. The combination of whatever end form of ChatControl the EU gets, along with possibly hundreds of other laws across the world and individual US states, makes me think the days of a public-facing non-profit or small startup running a project like this are over. (Or maybe the future of open source is funding lawyers while the development is all done for pennies by AI?)


In what way do you think it's not compliant with GDPR?


The GDPR is being neutered anyway because the EU caved in to Trump.

Not being ChatControl compliant? That's a feature, not a bug. Nobody wants that anyway. Just another stupid US lobby (Thorn).

A big organisation won't be able to run Matrix for everyone, no, but that's the cool thing about it. People can run their own for smaller groups of people.


An open protocol can indeed mandate it, but that is still in the realm of pinky promise security. A better design for a privacy-friendly chat protocol is to not write a lot of stuff on a lot of different remote servers when that's not necessary, IMHO. One of Matrix's selling points is being censorship-proof though; in that case, copying stuff as much as possible makes a lot more sense.


>pinky promise security

You are right, though I still prefer "weak feature" as a term :) There's enough value in such things. The cryptography crowd is focused on an omnipotent Eve breaking ciphers, and that wrench from xkcd, but I dare to claim that the majority of both commercial and private leaks happen just because well-intentioned users don't have enough capacity to keep track of all the things and, proverbially, think twice. Features like "unsend" or timed deletion are indeed laughable on their purely technical merits, but they do wonders saving users from grave mistakes anyway.


It's hard to explain to a non-technical user. Something like "We tried to delete the message, but some of the people who received it might still have a copy" does not sound great, is going to be hard for a non-technical user to understand, and is hard to implement in a way that a non-technical user will find satisfying.

So if I were a dev on Matrix/Element and this feature came across my plate, I would have to weigh it against features that I know can be implemented in a way which makes both technical and non-technical people feel satisfied and better about the application.


That is exactly what happens in WhatsApp though. Maybe the message isn't there anymore but it used to say pretty much exactly that.


uv pip is a full reimplementation of pip. Way faster, better caching, less disk usage. What's not to like about it?
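
In case it helps, a rough sketch of typical usage (assuming uv is already installed; the subcommands mirror pip's, so it's mostly a prefix change):

    # create a virtual environment (.venv by default)
    uv venv
    # drop-in replacements for the usual pip invocations
    uv pip install -r requirements.txt
    uv pip install requests
    # rough equivalents of pip-compile / pip-sync
    uv pip compile requirements.in -o requirements.txt
    uv pip sync requirements.txt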


I have been running a family and friends XMPP server on a cheap VPS for almost 10 years and the only downtime we had was when the datacenter burned down (OVH, true story).


Movim and Dino do multi-party Jingle, aka voice and video group calls. Maybe you can contribute to improving it ;)


And Libervia (disclaimer: I'm the lead dev). It also implements SFU support, including a component (based on Galène), but I'm reworking the design of that part.

Also note that Libervia uses a backend/frontends architecture with a D-Bus API; you can use it to make your own frontend in any language you like.


I went looking for more info on Libervia and noticed your site is down.


Yes, I'm under a DDoS attack these days; no idea why somebody would do that to my small server. I've deployed countermeasures so the site is more or less usable, but the attack is still going on.


Jingle is P2P with an optional server middleman for firewalled connections only, no? I haven't seen any support for actually hosting a voice server with XMPP, only letting clients figure it out themselves. I'll give it a look either way.


Indeed, Jingle is for establishing connections, P2P when possible, but there are a lot of extensions around it.

I've proposed a specification for SFU hosting (check https://bloggeek.me/webrtcglossary/sfu/ if you don't know what an SFU is), and wrote a component based on the excellent Galène SFU, as well as a client implementation (in Libervia), as part of an NLnet/NGI grant (https://nlnet.nl/project/Libervia-AV/).

The XMPP Council (disclaimer: I'm a council member for the current term) asked me to make some modifications and re-propose it, which I'm about to do. I couldn't find the time so far (because I'm working on a ton of stuff), but I will get back to it very soon.

To sum up: this is very much being worked on.


For (repliable) notifications on iOS, you need mod_cloud_notify server-side.
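
On a Prosody server this is roughly just enabling the community module, something like the following (a sketch, assuming mod_cloud_notify is installed from the prosody-modules repository and the path below is adapted to your setup; the iOS client registers its own push gateway):

    -- prosody.cfg.lua
    plugin_paths = { "/usr/lib/prosody-modules" }  -- wherever prosody-modules is checked out
    modules_enabled = {
        -- ...the modules you already have...
        "cloud_notify";  -- XEP-0357 push notifications
    }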


That article loses its credibility because of this; my thoughts too. The Facebook and Instagram websites are among the worst offenders when it comes to "time-to-content" or whatever metric the cool kids use these days. Maybe the apps are faster, but I'd rather avoid spyware on my pocket computers. Probably the author is running a $3k+ laptop and renews it every year?


Anarchists generally don't want to run things; they usually aim for that thing with freedom, um, what's it called again... oh yes, democracy.


I am not an AI booster at all, but the fact that negative results are not published and that everyone is overselling their stuff in research papers is unfortunately not limited to AI. This is just a consequence of the way scientists are evaluated and of the scientific publishing industry, which basically suffers from the same shit as traditional media does (craving for audience).

Anyway, winter is coming, innit?


Sure, it's not. But often in AI papers one sees remarks that actually mean: "...and if you throw in one zillion GPUs and make them run until the end of time, you get {magic_benchmark}". Or "if you evaluate this very smart algo on our super-secret, real-life dataset that we claim is available on request, but we'd ghost you if you dared to ask, then you will see this chart that shows how smart we are".

Sure, it is often flag-planting, but when these papers come from big corps, you cannot "just ignore them and keep on" even when there are obvious flaws/issues.

It's a race over resources; as a (former) researcher at a low-budget university, we just cannot compete. We are coerced into believing whatever figure is passed off in the literature as a "benchmark", without any possibility of replication.


> It's a race over resources; as a (former) researcher at a low-budget university, we just cannot compete. We are coerced into believing whatever figure is passed off in the literature as a "benchmark", without any possibility of replication.

The central purpose of university research has basically always been for researchers to work on hard, foundational, longer-term topics that industry is hardly willing to take on. On the other hand, these topics are very important, which is why the respective country is willing to finance this foundational research.

Thus, if your research topic at a university becomes an arms race with industry, you are simply working either in the wrong place (university instead of industry) or on a "wrong" topic in the respective research area (look for much more long-term, experimental topics that, if you are right, might change the whole research area in, say, 15 years, instead of resource-intensive, minor improvements to existing models).


I agree with that. Classically used "AI benchmarks" need to be questioned. In my field, these guys have dropped a bomb, and no one seems to care: https://hal.science/hal-04715638/document


Can you give a brief summary of why this paper is a breakthrough, for an outsider to the field?


Having checked it briefly (I hadn't seen the paper before), this seems to be a very good analysis of how results are reported, specifically for medical imaging benchmarks.

As is often the case with statistics, selecting just a single number to report (whatever that number is) will hide a lot of different behaviours. Here, they show that just using the mean is a bad way to report data, as the confidence intervals (reconstructed in most cases by the methods in the paper) show that the models can't really be distinguished based on their means.
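
To make that concrete, here is a minimal sketch (illustrative numbers, not from the paper) of reporting a bootstrap confidence interval for per-image Dice scores instead of a bare mean:

    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_ci(scores, n_boot=10_000, alpha=0.05):
        """Percentile bootstrap CI for the mean of per-image scores."""
        scores = np.asarray(scores)
        means = np.array([
            rng.choice(scores, size=len(scores), replace=True).mean()
            for _ in range(n_boot)
        ])
        return scores.mean(), np.quantile(means, [alpha / 2, 1 - alpha / 2])

    # Hypothetical per-image Dice scores for two models on the same test set.
    model_a = rng.normal(0.84, 0.08, size=50).clip(0, 1)
    model_b = rng.normal(0.85, 0.08, size=50).clip(0, 1)

    for name, scores in [("A", model_a), ("B", model_b)]:
        mean, (lo, hi) = bootstrap_ci(scores)
        print(f"model {name}: mean Dice {mean:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

If the two intervals overlap heavily, "model B beats model A by 0.01 mean Dice" is not a distinguishable claim on that test set, which is exactly the kind of situation the paper documents.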


Hell, I was asked to use confidence intervals as well as average values for my BSc thesis when doing ML benchmarks, and scientists publishing results in medical fields aren't doing it...

How can something like that happen? I mean, I had a supervisor tell me "add the confidence interval to the results as well" and explain why. I guess nobody ever told them? Or they didn't care? Or it's just an honest mistake.


Is it because it’s word-of-mouth and not written down in some NSF (or other organization) guidance? This seems to be the issue.


That might be, but couldn't a paper be required to include that in order to be published? It looks like important information.


I don't think it qualifies as a breakthrough. In short:

1. Segmentation is a very classical task in medical image processing.

2. Every day there are papers claiming that they beat the state of the art.

3. This paper says that most of the time, the state of the art has not actually been beaten, because the improvements are within the margin of error.


I published my first papers a little over fifteen years ago on practical applications for AI before switching domains. Recently I've been sucked back in.

I agree it's a problem across all of science, but AI seems to attract more than its fair share of researchers seeking fame and fortune. Exaggerated claims and cherry-picked data seem even more extreme in my limited experience, and even responsible researchers end up exaggerating a bit to try and compete.


AI just happens to be the current hype magnet, so the cracks show more clearly


But AI makes it easier to write convincing-looking papers.


I'll argue that algorithmic music recommendation on these platforms is a bad thing anyway.

First, the algorithm is opaque, so it can push stuff to you because the platform decides it has to get the spotlight. Maybe the label/producer/musician paid for it, or whatever even worse thing you want to imagine. It is a well-known phenomenon that if some music is pushed to your ears, you'll end up appreciating it more often than not. This is how hits have been and are still made.

But even if the algorithm was not gamed at all, I still think it is a bad thing. It is not going to push you out of your comfort zone. Listening to new stuff is usually not pleasant at first. You will only "discover" things that are very similar to what you know and already enjoy.

If these recommendation algorithms were about food, they would "reason" like this: "Hey, you've really enjoyed this whole pack of M&M's, I'm sure you'll like this Kit-Kat bar now! Oh and you've had a glass of wine, what about trying out meth, it's pretty good too.". Do we really want our computers to reinforce such behavior?

Go to concerts, buy merch, buy albums on Bandcamp (it has not enshittified too much yet, apparently), donate money to artists; discover music through your friends and other humans recommending it. Recommend what you like to your friends. Cancel your Spotify subscription, none of that money is going to artists anyway. And use Soulseek.


What are the musical equivalents to Kit-Kat and meth?


Equivalence is too strong a word, but content produced by Spotify, where musicians (or AI prompters) are mere contractors, comes to mind.

Getting back to "I don't even want virtuous algorithmic recommendation"… I like jazz rock/fusion, especially when it has a touch of bluesy/blues rock influence. There is probably a lifetime's worth of listening in that genre, and it takes no effort for me to appreciate anything that resembles it. Long guitar solos by a jazz-educated guitarist who happens to like Jimi Hendrix? Sign me up.

But I do think there is value in getting out of my comfort zone and listening to something drastically new from time to time. It requires effort though. My first reflex when I hear synthetic drums or autotune, for instance, is to press "next". But it is through other human beings' recommendations that I sometimes make that effort, and actually learn to appreciate something else.

Call me an elitist prick, but I hate to think of music as a commodity for us consumers to consume. It is art. Art is not always pleasant. It sometimes becomes pleasant after overcoming an initial disgust.

