Yes, they can analyze it in the app on the user's device before it is encrypted and transmitted. The app on the user's device needs access to the clear text in order to encrypt the message. Same on the receiving end: the app on the user's device can analyze the message once it is decrypted there.
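A toy sketch of why this is so (this is not real cryptography — a one-time-pad XOR stands in for a proper cipher, and the scanning function is made up): the plaintext necessarily exists at both ends, so either client can inspect it, while anything in between only ever sees ciphertext.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR one-time pad: a toy stand-in for a real cipher (AES, NaCl box, ...)
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

def client_side_scan(plaintext: bytes) -> bool:
    # Hypothetical on-device analysis: runs on clear text BEFORE encryption
    return b"forbidden" not in plaintext

message = b"hello, world"
key = secrets.token_bytes(len(message))  # shared out of band in this toy

# Sending end: analysis happens here, on the clear text...
assert client_side_scan(message)
ciphertext = encrypt(key, message)

# ...the network and servers only ever relay ciphertext...
assert ciphertext != message

# ...and the receiving end can analyze again after decrypting.
received = decrypt(key, ciphertext)
assert received == message
```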
Wow, didn't I just read something similar in the last couple of days about on-device machine learning being better than in-the-cloud machine learning, and how Apple got that right? That's where this may also be going, then, if we follow your train of thought.
On device is where you want it if it's going to analyse really private data, or at least somewhere effectively your own (such as under homomorphic encryption, or over a link to your own computers elsewhere).
You're likely to feel so much happier, freer and easier sharing your most personal life datastream with an AI assistant, if you can be sure its most intimate analysis is just between the two of you.
Coincidence or not, the dystopian AIs are somewhere in the cloud and work for someone else, while the utopian AIs are intimately personal to each user and work just for the user.
Sure why not? As long as no data ever leaves the device unencrypted and the encrypted data can only be decrypted by the client at the other end. Of course you'd probably have to take the app's word for it that that's actually what it's doing if you don't have the source, but that's no different from current E2E encryption offerings from WhatsApp etc.
The part I'm not sure about is whether the on-device certification that the message is "clean" couldn't be (easily) spoofed. But it would probably help curb distribution of illegal material anyway.
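To illustrate the spoofing concern with a made-up scheme (names and design here are hypothetical, not any real product's): if the "clean" certification is just a tag computed on-device with key material the app holds, then a modified client can skip the check and still produce a tag the server accepts.

```python
import hashlib
import hmac
import secrets

# Hypothetical scheme: the app holds a key and attaches an HMAC tag
# asserting "this message passed the on-device check".
DEVICE_KEY = secrets.token_bytes(32)  # in practice, extractable from the app

def certify_clean(message: bytes) -> bytes:
    # An honest client would only call this after a real content check
    return hmac.new(DEVICE_KEY, message, hashlib.sha256).digest()

def server_verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(certify_clean(message), tag)

# A modified client skips the check entirely but still has the key,
# so its forged certification is indistinguishable to the server.
bad_message = b"definitely not clean"
forged_tag = hmac.new(DEVICE_KEY, bad_message, hashlib.sha256).digest()

assert server_verify(bad_message, forged_tag)  # the spoof succeeds
```

The server can't tell an honest tag from a forged one, which is why such schemes tend to lean on hardware attestation rather than keys the app can reach — and even then, it mostly raises the bar rather than eliminating spoofing.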
No, obviously not. The mental gymnastics involved here are impressive: the point of E2E encryption is to stop the service provider seeing or tampering with your messages. If they do that anyway it doesn't really matter how it's implemented. They could also just use a broken random number generator, or many other ways to implement the policies whilst still having encryption code in the product. It's the end result that matters, not the precise means of implementing it.
Phew, agreed. I mean of course the company "can" read the message. If it does, I would love to see that shown by the app upfront, so I can avoid using it.
Analysis happens on either end, not on the network or servers. Of course if both ends are "cracked" this doesn't work, but the goal is to stop the mass spread of disinformation. Most people won't modify their client.
Is the implication of violence/aggression in said speech a justified consequence?
> Said commenter has _completely_ missed the point
Am I understanding this correctly: this is because it is against the principle of free speech, and people might conflate it with 1A? Isn't it preconditioned on everyone being on the same page about free speech? We've seen people with extreme opinions being shunned by the rest of the cohort. How does this group then maintain cohesion? Rather, is it even possible to do so?
> Is the implication of violence/aggression in said speech is a justified consequence?
I'm not quite sure what you're asking here, but note that I was speaking to definitions (ie I wasn't debating the merits of any particular situation). Employing relevant terms in a mutually understood manner is a prerequisite for productive conversation about a topic.
> this is because it is against the principle of free speech, and people might conflate it with 1A?
You misunderstand. In the hypothetical situation, the merits (or extent, or mechanics, etc) of free speech (ie the principle) in some specific social context (ex at work) are being discussed. Someone shows up to the party and unhelpfully points out what the current legal realities are. But the legal status isn't what's under discussion - in context, it's an off topic comment that serves only to derail the conversation.
IP theft would be one reason. If a computer that has source code on it is stolen, that could be a problem. That said, most companies just deal with that risk, or their business depends not on source code but on services and support instead.
> That said, most companies just deal with that risk
They actually don't, sadly. Most companies limit the risk using non-competes.
It's very hard to secure code. If someone wants to steal it, they can usually figure it out.
It's much easier to limit their legal ability to profit from stolen code. Most companies don't have IP that's very valuable on the black market. If you stole Zoom's compression algo, for example, it would be hard to profit without openly starting a new company and violating your non-compete.
I just Ctrl+F'd Google Meet and no one seems to be really talking about it. We've been using it for our meetings for a long time and it works really well. I'm wondering why it doesn't have widespread adoption. You can call in via phone, it can log the minutes of the meeting, and it seems to "just work", too.
Requires a G Suite enterprise account. It also doesn't help that Google Hangouts Meet and Google Hangouts are two similarly named and looking but incompatible products.
My impression is that to use it you'd need to sign up the organization for G Suite. Whereas with Zoom you can just start for free, and then individual users can upgrade their accounts to paid ones if they want. That helps with grass-roots adoption in companies. It's also a clear "we pay for video conferencing", not "we pay for video conferencing and all this other stuff we don't want to use because we already have solutions for it".
I just want to take a moment to thank the folks over at memfault for bringing us in-depth content from the world of embedded systems. Be sure to check out their articles on ARM, RTOS etc.
Thanks! We've been writing all the content we wish had existed when we started out as embedded software engineers. It's fantastic to hear from folks who enjoy reading it as much as we do writing it.
If you aren't wedded to R, then pickling an sklearn Pipeline and loading it in a Flask app can be nice. The advantage of this is that data pre-processing can also be included in the sklearn Pipeline.
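A minimal sketch of that pattern (the dataset and model choice are arbitrary stand-ins here; in a Flask app you'd unpickle once at startup and call `predict()` inside a route):

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Pre-processing and model travel together in one Pipeline object
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

X, y = load_iris(return_X_y=True)
pipe.fit(X, y)

# Serialize once at training time...
blob = pickle.dumps(pipe)

# ...then at serving time (e.g. in a Flask view) just unpickle and predict;
# no separate pre-processing code needed, the scaler is inside the pipeline.
served = pickle.loads(blob)
preds = served.predict(X[:5])
```

One caveat worth knowing: pickles are generally only safe to load across the same sklearn version they were written with, so pin your versions between training and serving.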
One way is obviously to go all-out Java, which definitely makes things streamlined. But not all team members are familiar with Java, especially not ones formally trained in data science, who tend to work with R/Python etc. At least that has been my experience.