I am a fairly young consultant and initially advocated against Zoom due to the glaring privacy issues.
Obviously to no avail. Nobody cares. People just want to open a service and have it work, and Zoom excels at that. Since Teams was already there for my large enterprise client, we tried that first, but no luck: it has issues with screen sharing and causes problems even with fewer than a dozen users.
I have never had any Zoom connection issues, and between my consulting company and my client I have participated in Zoom meetings with 200-300 people (we ran some events digitally) with absolutely zero issues. Grid view is amazing as well.
Jitsi exists but I couldn't even convince a single person to switch for more than one session. Zoom works and nothing else counts. In Europe I don't know any company that would get Google licenses for Meet due to ... Google being Google.
It's all Zoom (professional & private) and Houseparty (private).
I have had plenty of issues with Zoom: quality being bad, people not understanding the interface, and my university had people zoom-bombing, or whatever it is called. I recently accidentally logged into a session with the wrong credentials (whatever Zoom defaulted to, not my official uni login) and everyone thought I was zoom-bombing them. I couldn't understand why I was being cross-questioned by people in the session.
Unfortunately the media have pushed Zoom so hard that everyone assumes it is the best option and doesn't want to hear of anything else.
> Quality being bad, people not understanding the interface
Honestly, I'm hearing this complaint about Zoom for the first time. I have never seen anyone complain about the quality of Meet or the interface (among more than 100 people from different teams).
> Obviously to no avail. Nobody cares. People just want to open a service and have it work and Zoom excels at that.
Not true in my experience. Some people definitely care: the CIO at the mid-sized company I work at has restricted Zoom usage, and my wife's company also cares (they started with Zoom and have moved to Teams, partly because of Zoom issues).
Keep trying: not everyone cares, indeed, but plenty do, especially when it impacts data regulations and security.
I have used it privately in small groups and it had a few issues: people dropping out, constant sound issues, and duplicate people entering. I have no experience with it at 80-300 people (mostly interactive events, sometimes division-wide meetings/webinars...).
I would definitely put it right behind Zoom though and really love what they are doing.
Everyone seems to have their preferred style of coding, and it is an easy defence mechanism, when presented with anyone who tries it and finds it wanting, to say, "Well, they didn't do it properly."
You find that with Microservices vs Monolith, Strong types vs Weak types, Exception Handling vs Results, Agile vs Waterfall.
People fragment into camps which turn into echo chambers, and it's easy to dismiss anyone who doesn't commit to that cult as being impure and not worthy of being in the cult anyway.
If you want a serious answer, it's because peer-to-peer WebRTC doesn't scale beyond a handful of people.
If you have 4 people, every client needs to maintain 3 streams, a total of 6 streams between all participants; the count grows quadratically with the number of participants.
To get any kind of scalability you need a proxy in the middle that presents a single stream to each participant.
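The arithmetic behind that is easy to sketch (plain counting, not a real WebRTC API; the function names are mine):

```javascript
// Stream counts for a full-mesh call vs. a central proxy (SFU-style).
// In a mesh, every pair of participants holds a direct stream.
const perClientMesh = (n) => n - 1;           // streams each client maintains
const meshStreams = (n) => (n * (n - 1)) / 2; // total pairwise streams

// With a proxy in the middle, each client keeps one stream to the server,
// and the server carries one stream per client.
const perClientProxy = () => 1;
const proxyStreams = (n) => n;

console.log(meshStreams(4));   // 6, the 4-person example above
console.log(meshStreams(50));  // 1225, hopeless as a mesh
console.log(proxyStreams(50)); // 50
```

The mesh total is just the number of pairs, n(n-1)/2, which is why even modest meetings are out of reach without a middlebox.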
This middlebox can also handle normalisation, interpolation and other useful features you might want to smooth things out when clients have connection difficulties.
Why don't we have a free, open-source WebRTC proxy server implementation? Because these days just publishing a protocol isn't enough for adoption; not to mention that proxying large amounts of data incurs a significant cost.
And that doesn't even cover the need for authentication, which is yet another required service.
And if the proxy box is interpolating and gracefully handling frame drops, does that mean it will be handling (gasp) decrypted video traffic? Yes, it will, unless you want to move all of that to the client, and then have a key exchange happen not just at the start but a renegotiation every time someone connects or disconnects.
So you see it's not as simple as, "everyone just opens this URL, webRTC is a thing duh".
In theory one could implement the "supernode" model using WebRTC, turning the client with the best connection into a middlebox. In practice, I suspect many meetings don't have any client with a connection that could support those bandwidth requirements.
You can design all the declarative things you want, but someone will eventually come up with a requirement that doesn't appear to fit a declarative structure, such as an event that fires 2 seconds after something else happens.
And so to resolve that you can construct state machines with transition states to get back to a declarative model, but those state machines can quickly become a bigger headache to maintain (yes, even with Redux) than if you'd just been able to setTimeout from the component in the first place.
And you start with the best of intentions of reusable components, but find more and more things that ought to be owned by the component pushed up to the store, and suddenly there isn't a neatly reusable component but an empty shell that still needs the other half plumbed in manually every time.
I love the idea that you can abstract everything happening in an application into state and render reproducibly from that, but the reality is that it's really hard and time-expensive to do compared to jQuery.
But that would be fine if the tooling were there. If React had a CLI like the Angular CLI, it wouldn't be such a problem to add new actions, new pieces of state, or new components.
But it doesn't, so each new component ends up adding a little more state to the state machine, which ends up breaking types in far-off places.
Now there's probably an assumption about how state is built up, or how reducers are typed, or how actions are defined, that I'm missing, some reason why this shouldn't be happening. But it is happening; it's the reality of the project I've been working on, and it's a real headache.
I'd use an event handler to work out when something happened. This would set the timeout, and the handle would be stored as a ref. I'd have a useEffect hook to clean up the timer, so when the component is unmounted the timeout is cleared.
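That pattern (schedule the timer, hold the handle, clear it in the cleanup) can be sketched framework-free; the fake clock below is a hypothetical stand-in for setTimeout/clearTimeout so the contract is visible synchronously. In React, the schedule/cleanup pair would live inside useEffect, with the cleanup as its return value.

```javascript
// A tiny fake clock: stand-in for setTimeout/clearTimeout, so the
// schedule/cleanup contract can be exercised without waiting.
function createClock() {
  const pending = new Map();
  let nextId = 1;
  return {
    setTimeout(fn, _ms) { const id = nextId++; pending.set(id, fn); return id; },
    clearTimeout(id) { pending.delete(id); },
    flush() { for (const fn of [...pending.values()]) fn(); pending.clear(); },
  };
}

// Mirrors the effect body: schedule the timer, keep the handle,
// and return a cleanup that clears it (what useEffect runs on unmount).
function timerEffect(clock, callback, ms) {
  const handle = clock.setTimeout(callback, ms);
  return () => clock.clearTimeout(handle);
}

const clock = createClock();
let fired = false;
const cleanup = timerEffect(clock, () => { fired = true; }, 2000);
cleanup();     // "unmount" before the timer fires
clock.flush(); // time passes
console.log(fired); // false: no stale callback after unmount
```

The point is just that the cleanup owns the handle, so an unmounted component can never have its callback fire against stale state.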
I remember ASP.NET WebForms, and the reason most of the community moved towards MVC-ish frameworks is exactly that most of the time, you ended up shoehorning functionality into different lifecycle methods, which made them much harder to reason about. Most of the time, for anything more complex than just performing the initial databinding, you'd be looking at the lifecycle chart and trying to figure out which place would be best to put which part of your code. Page_Init? Page_Load? Page_PreRender? I remember the pain of trying to add dynamic controls to a site at just the right time: before ViewState was hydrated but AFTER something else had happened.
I agree that for the easiest use cases, lifecycle methods make sense. But you very quickly leave that comfort zone, and then it just becomes painful and confusing.
Don't worry, I'm not espousing lifecycles as a panacea or a solution for the modern programmer; I just wanted to share that my experience with lifecycle methods meant I was less tripped up by the pitfalls of React lifecycles.
Certainly less than I've been tripped up by React hooks, so it's been a surprise that many here describe them as much easier and simpler than classes.
Classes had limits and some pitfalls, but it's been frustrating that lifecycle methods have been retired and deprecated, because sometimes they felt like the natural place to do something.
Then I agree completely, and must have misunderstood what was proposed by a handler. Typically a handler will launch an external application such as mailto, ftp, magnet, etc.
If we want to run code in browser there is WASM.
So is the proposal that it would be beneficial to have a DOS-like OS or x86 emulator in WASM for running COM files?
Yes, that would be better and more sandboxed than dosbox running outside the browser.
Something that comprises many people is treated as singular unless it's in a plural form.
-Arsenal is a team.
-We are Arsenal players.
-There are many arsenals in Britain, but there is only one Arsenal.
If it's unclear, you have to add words to make sure the subject is singular. With Arsenal, the singular nature of the name makes that unnecessary.
-There are many patriots in the USA, but there is only one Patriots football team.
In your example, Arsenal is the singular team. "Arsenal players are winning" is grammatically correct but not really used because players are understood to be part of a team.
I think in most jurisdictions it's possible to get legally married, just not to celebrate in the traditional way.
So wedding celebrations have been delayed by 6 months or a year, well so what?
It's such a small thing to focus on and worry about when almost the whole global economy has been put on pause. There's almost nothing that hasn't been affected; the inability to immediately celebrate in a traditional, over-the-top way is of minor consequence in comparison.
This reminds me of a prediction-game experiment I heard about, described roughly as follows.\*
The researchers presented the following to people.
f(1) = true
f(2) = true
f(4) = true
f(8) = true
And asked, what is f?
And people immediately jump in, test 16 and 32, and then proudly declare that
f = x -> x = 2^n for some integer n
Forgetting to test f(3), f(5), etc.
With more examination it turns out that
f = x -> true.
\* I wish I could remember more of the details, such as whether it was a real experiment or just an illustration of one, but it's not an easy thing to search for and I'm relying too much on memory and search.
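The asymmetry between the two testing strategies is easy to demonstrate in a toy sketch (the rule f and the 2^n hypothesis are as in the comment above; the helper names are mine):

```javascript
// The hidden rule accepts every input...
const f = (_x) => true;

// ...but the guesser's hypothesis is "powers of two".
const isPowerOfTwo = (x) => Number.isInteger(Math.log2(x));

// Confirmatory tests: only values the hypothesis already predicts are "in".
// Every one passes, so the guesser gains unwarranted confidence.
const confirmations = [16, 32, 64].every((x) => f(x) === isPowerOfTwo(x));

// Disconfirming tests: values the hypothesis predicts are "out".
// A single hit here falsifies the hypothesis immediately.
const counterexampleFound = [3, 5, 6].some((x) => f(x) && !isPowerOfTwo(x));

console.log(confirmations);       // true: all confirming tests "succeed"
console.log(counterexampleFound); // true: f(3) holds, so the 2^n guess is wrong
```

No number of confirming tests can distinguish the two hypotheses; one disconfirming test does it instantly.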
He uses the game to show that people do something akin to Bayesian updating over possible concepts and have certain intuitive priors (e.g., ‘even numbers’ is a priori more likely than {2, 7, 9, 31}).
This is briefly mentioned at the beginning of Kevin Murphy's Machine Learning: A Probabilistic Perspective, so you might also have encountered it there.
A similar thing happens when folks are debugging. They assume something, test their assumption, and happily declare victory. Instead, they should be testing _against_ their assumptions, trying to prove their theory wrong.
> Instead, they should be testing _against_ their assumptions to prove their theory wrong.
I think this mindset should be taught explicitly starting in grade school.
If you have an idea, think how you'd disprove it, and test it. If you can't think of how you'd disprove it, that's a strike against the idea. If you can't test it, that should at least make you suspicious.
"There's no recipe chain that takes in Green Circuits and produces either Iron or Copper Wire. So once produced, you need to find a higher function for your products. You can't 'unmake' through a cyclic recipe."
There are Factorio mods that add recycling machines, which give you back the ingredients that went into making whatever you put into them.
So Factorio's recipes can be fully cyclic, if you want.