Hacker News | 7777fps's comments

They apparently have 300M daily meeting participants which is actually more impressive than this headline.


It's 3x what Google Meet reported in their recent blog post, just for context.


I am a fairly young consultant and initially advocated against Zoom due to its glaring privacy issues.

Obviously to no avail. Nobody cares. People just want to open a service and have it work, and Zoom excels at that. As Teams was already there for my large enterprise client, we tried that first, but nope: it has issues with screen sharing and causes problems even with fewer than a dozen users.

I have never had any Zoom connection issues, and between my consulting company and my client I have participated in Zoom meetings with 200-300 people (we ran some events digitally) with absolutely zero issues. Grid view is amazing as well.

Jitsi exists, but I couldn't convince a single person to switch for more than one session. Zoom works and nothing else counts. In Europe I don't know any company that would get Google licenses for Meet, due to ... Google being Google.

It's all Zoom (professional & private) and Houseparty (private).


I have had plenty of issues with Zoom: quality being bad, people not understanding the interface, and my university had people zoom-bombing, or whatever it is called. I recently accidentally logged into a session with the wrong credentials (whatever Zoom defaulted to, not my official uni login) and everyone thought I was zoom-bombing them. I couldn't understand why I was being cross-questioned by people in the session.

Unfortunately the media have pushed Zoom so hard that everyone assumes it is the best option and doesn't want to hear of anything else.


> Quality being bad, people not understanding the interface

Honestly, this is the first time I'm hearing this complaint about Zoom. I have never seen anyone complain about the quality of meetings or the interface (more than 100 people from different teams).


I am in New Zealand so distance may have been a factor. I haven't had problems in a couple of weeks.


> Obviously to no avail. Nobody cares. People just want to open a service and have it work and Zoom excels at that.

Not true in my experience. Some people definitely care: the CIO at the mid-sized company I work at has restricted Zoom usage, and my wife's company also cares (they started with Zoom and have moved to Teams partly because of Zoom issues).

Keep trying; not everyone cares, but plenty do, especially when it impacts data regulations/security.


Jitsi is easier and better in many ways, in my subjective opinion. The barrier to getting a meeting going or joining one is even lower than with Zoom.


I have used it privately in small groups and it had a few issues: people dropping out, constant sound issues, and duplicate participants entering. I have no experience with 80-300 people (interactive events mostly, sometimes division-wide meetings/webinars...).

I would definitely put it right behind Zoom though and really love what they are doing.


In general, yes.

Everyone seems to have their preferred style of coding, and it is an easy defence mechanism, when presented with anyone who tries it and finds it wanting, to say, "Well, they didn't do it properly."

You find that with microservices vs monoliths, strong types vs weak types, exception handling vs result types, agile vs waterfall.

People fragment into camps which turn into echo chambers, and it's easy to dismiss anyone who doesn't commit to the cult as impure and not worthy of being in the cult anyway.


If you want a serious answer: it's because peer-to-peer WebRTC doesn't scale beyond two people.

If you have 4 people, every client needs to maintain 3 streams, for a total of 6 peer-to-peer connections between all participants.

To have any kind of scalability you need a proxy in the middle that presents a single stream to each participant.
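The arithmetic above generalises: a full mesh needs n(n-1)/2 pairwise connections, while a middlebox (an SFU, in WebRTC terms) needs only n. A quick sketch (function names are mine, purely illustrative):

```typescript
// Full-mesh P2P: every pair of participants holds a direct connection.
function meshConnections(participants: number): number {
  return (participants * (participants - 1)) / 2;
}

// Star topology through a middlebox: one connection per participant.
function starConnections(participants: number): number {
  return participants;
}

console.log(meshConnections(4));   // 6, matching the 4-person example
console.log(meshConnections(100)); // 4950: hopeless for a browser
console.log(starConnections(100)); // 100: one uplink per client
```

The quadratic growth is the whole problem: each extra participant costs every existing client another encode/upload.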

This middlebox can also handle normalisation, interpolation, and other useful features to smooth things out when clients have connection difficulties.

Why don't we have a free open-source WebRTC proxy server implementation? Because these days just publishing a protocol isn't enough for adoption; not to mention that proxying large amounts of data incurs a significant cost.

And that hasn't covered the need for authentication, which is yet another required service.

And if the proxy box is interpolating and gracefully handling frame drops, does that mean it will be handling, *gasp*, decrypted video traffic? Yes, it will, unless you want to move all of that to the client and then have a key exchange happen not just at the start, but a renegotiation every time someone connects or disconnects.

So you see, it's not as simple as "everyone just opens this URL, WebRTC is a thing, duh".


> Why don't we have a Free Open Source webRTC proxy server implementation?

Isn't that what Jitsi is doing? https://github.com/jitsi, meet.jit.si


In theory one could implement the "supernode" model using WebRTC, turning the client with the best connection into a middlebox as well. In practice I suppose many meetings don't have any client with a connection that could support that kind of bandwidth requirement.


Valid points, but challenges of this kind have never stopped open source developers before.

By the way, one of the clients could serve as the middlebox, I suppose, to be determined by a consensus algorithm.


But that's great until it isn't.

You can design all the declarative things you want, but someone will eventually come up with a requirement that doesn't appear to fit a declarative structure, such as an event that fires 2 seconds after something else happens.

And so to resolve that you can construct state machines with transition states to get back to a declarative model, but those state machines can quickly become a bigger headache to maintain (yes, even with Redux) than if you'd just been able to setTimeout from the component in the first place.
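As an illustration of what that workaround looks like (names and shape are my own invention, no Redux involved): the "fires 2 seconds after something else" requirement becomes a pending state plus transitions driven by an external clock, instead of a one-line setTimeout.

```typescript
type State = { status: "idle" | "waiting" | "fired"; firesAt?: number };
type Event =
  | { type: "SOMETHING_HAPPENED"; now: number }
  | { type: "TICK"; now: number };

// Pure reducer: the 2-second delay is encoded as state, not as a setTimeout.
function reduce(state: State, event: Event): State {
  switch (event.type) {
    case "SOMETHING_HAPPENED":
      return { status: "waiting", firesAt: event.now + 2000 };
    case "TICK":
      return state.status === "waiting" &&
        event.now >= (state.firesAt ?? Infinity)
        ? { status: "fired" }
        : state;
  }
}

let s: State = { status: "idle" };
s = reduce(s, { type: "SOMETHING_HAPPENED", now: 0 });
s = reduce(s, { type: "TICK", now: 1000 }); // still waiting
s = reduce(s, { type: "TICK", now: 2500 }); // fired
console.log(s.status); // "fired"
```

The imperative bit didn't disappear, either: something still has to dispatch TICKs (a setInterval, say); it just moved out of the component.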

And you start with the best of intentions of reusable components, but find more and more things that ought to be owned by the component pushed up to the store, and suddenly there isn't a neatly reusable component but an empty shell of something that still needs the other half plumbed in manually every time.

I love the idea that you can abstract everything happening in an application to state and render reproducibly from that, but the reality is that it's really hard and time-expensive to do compared to jQuery.

But that would be fine if the tooling were there. If React had a CLI like the Angular CLI, it wouldn't be such a problem to add new actions, new pieces of state, or new components.

But it doesn't, so each new component ends up adding a little more state into the state machine which ends up breaking types in far off places.

Now, there's probably some assumption about how state is built up, how reducers are typed, or how actions are defined that I'm missing, some reason why this shouldn't be happening. But it is happening; it's the reality for the project I've been working on, and it's a real headache.


I can call ‘setTimeout’ from the component?

I honestly don’t see the issue with your example.

I'd use an event handler to work out when something happened. This would set the timeout, and the handle would be stored as a ref. I'd have a useEffect hook to clean up the timer, so when the component is unmounted the timeout is cleared.

What’s the issue here?


It's no longer declarative as soon as you've done that though; it's still a break in the declarative paradigm.


The issue is it's not as simple as calling `setTimeout` as soon as your use case becomes more complex. See this great post from Dan Abramov about using setInterval with hooks: https://overreacted.io/making-setinterval-declarative-with-r...


I liked that model because, as an ASP.NET WebForms developer, the class lifecycle is easy to reason about, like a page lifecycle.

In ASP you wouldn't put initialisation code in the page constructor either; it would go in Page_Init, Page_Load, etc.

Of course the tooling there made it both easier to put in the right place and harder to edit the page constructor to stick it in the wrong place.


I remember ASP.NET WebForms, and the reason most of the community moved towards MVC-ish frameworks is exactly that, most of the time, you ended up shoehorning functionality into different lifecycle methods, which made it much harder to reason about. For anything more complex than performing the initial databinding, you'd just be looking at the lifecycle chart and trying to figure out which place would be best to put which part of your code. Page_Init? Page_Load? Page_PreRender? I remember the pain of trying to add dynamic controls to a site just in time, before ViewState was hydrated but after something else had happened.

I agree that for the easiest of use cases, lifecycle methods make sense. But you very quickly leave that comfortable zone and then it just becomes painful and confusing.


Don't worry, I'm not espousing lifecycles as a panacea or a solution for the modern programmer; I just wanted to share that prior experience with lifecycle methods meant I was less tripped up by the pitfalls of React lifecycles.

Certainly less than I've been tripped up by React hooks, so it's been a surprise that many here describe them as much easier and simpler than classes.

Classes had limits and some pitfalls but it's been frustrating that lifecycle methods have been retired and deprecated because sometimes they felt like the natural place to do something.


A handler for executing arbitrary code. What could possibly go wrong?


We already allow arbitrary code to execute by clicking a link, in the form of JavaScript.

You may argue that JS is sandboxed, but so is DOSBox. At least DOSBox can’t easily connect to remote servers over the internet.


Correct me if I'm wrong, as I haven't used DOSBox in a decade, but doesn't it have the ability to access hard drives and mount them?

Given that, it's not much of a sandbox.

Or does that require intervention from the host system rather than auto-mounting home and similar?


I would clearly prefer a web browser with a DOSBox inside it to the "real" DOSBox when it comes to safety...


Then I agree completely, and must have misunderstood what was proposed by a handler. Typically a handler launches an external application, as with mailto, ftp, magnet, etc.

If we want to run code in browser there is WASM.

So is the proposal that it would be beneficial to have a DOS-like OS or x86 emulator in WASM for running COM files?

Yes, that would be better and more sandboxed than DOSBox running outside the browser.


Run the code in an emulator, with the emulator implemented in WASM running in a web browser. That's enough sandboxing to be "reasonably secure".


Exactly, I think grammatically this would be correct:

Arsenal are winning

Arsenal is the best team in the league

It's contradictory (as well as plainly false ;) ), but that's English.


Lots of people who speak English as a first language would say "Arsenal are the best team" or "Arsenal are the best club".

https://twitter.com/search?q=%22arsenal%20are%20the%20best%2...


Well yeah, either is fine.


No, this is incorrect.

Something that comprises many people is treated as singular unless it's in a plural form.

-Arsenal is a team.

-We are Arsenal players.

-There are many arsenals in Britain, but there is only one Arsenal.

If it's unclear you have to add words. You have to make sure the subject is singular. With Arsenal the singular nature of the word doesn't require it.

-There are many patriots in the USA, but there is only one Patriots football team.

In your example, Arsenal is the singular team. "Arsenal players are winning" is grammatically correct but not really used because players are understood to be part of a team.


Not much?

I think in most jurisdictions it's still possible to get legally married, just not to celebrate in the traditional way.

So wedding celebrations have been delayed by 6 months or a year, well so what?

It's such a small thing to focus on and worry about when almost the whole global economy has been put on pause. There's almost nothing that hasn't been affected, the inability to immediately celebrate in a traditional over-the-top way is of such a minor consequence in comparison.


So what about all the wedding workers who cannot pay their mortgages now?


I'm not saying there's no effect; I'm saying that weddings aren't special in that regard.

That's just as easily (or not!) answered the same as:

> So what about all the workers that cannot pay their mortgages now?


This reminds me of a prediction-game experiment I heard about, described like the following. \*

The researchers presented the following to people.

   f(1) = true
   f(2) = true
   f(4) = true
   f(8) = true
And asked, what is f?

And people will immediately jump in and test 16 and 32, and then proudly declare that

   f = x -> x = 2^n for some integer n
Forgetting to test f(3), f(5), etc.

With more examination it turns out that

   f = x -> true.
\* I wish I could remember more of the details, such as whether it was an experiment or just an illustration of one, but it's not an easy thing to search for, and I rely too much on memory and search.
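The trap is that confirming queries can't separate the guessed rule from the hidden one; only a query the hypothesis predicts to be false can. A small sketch of the game as described (the rules and names here are my own illustration):

```typescript
// The hidden rule: it accepts every number.
const hidden = (_x: number): boolean => true;

// The guessed hypothesis: powers of two.
const isPowerOfTwo = (x: number): boolean =>
  Number.isInteger(x) && x > 0 && (x & (x - 1)) === 0;

// Confirmation only: every query the hypothesis expects to pass, passes...
const confirming = [1, 2, 4, 8, 16, 32].every(
  (x) => hidden(x) && isPowerOfTwo(x),
); // true -- and yet the hypothesis is wrong

// Falsification: query a value the hypothesis predicts should FAIL.
const refuted = hidden(3) && !isPowerOfTwo(3); // true: one counterexample
console.log(confirming, refuted);
```

No number of confirming examples can promote the 2^n guess to a fact; f(3) settles it in one query.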


Something like this is in Josh Tenenbaum's PhD thesis, where he calls it "The Numbers Game." https://dspace.mit.edu/handle/1721.1/16714

He uses the game to show that people do something akin to Bayesian updating over possible concepts and have certain intuitive priors (e.g., ‘even numbers’ is a priori more likely than {2, 7, 9, 31}).

This is briefly mentioned at the beginning of Kevin Murphy's Machine Learning: A Probabilistic Perspective, so you might also have encountered it there.


Sounds a lot like Derek Muller’s (Veritasium) video:

https://youtube.com/watch?v=vKA4w2O61Xo


Thanks, that's it!

A good video and better than I explained it too.

The video says it's inspired by Taleb's Black Swan so I suspect I read it in that too.


A similar thing happens when folks are debugging. They assume something, test their assumption, and happily declare victory. Instead, they should be testing _against_ their assumptions, trying to prove their theory wrong.


> Instead, they should be testing _against_ their assumptions to prove their theory wrong.

I think this mindset should be taught explicitly starting in grade school.

If you have an idea, think how you'd disprove it, and test it. If you can't think of how you'd disprove it, that's a strike against the idea. If you can't test it, that should at least make you suspicious.


In theory, that's the scientific method, but somehow we've evolved this weird grade-school science-project "scientificky" method in its place.


I think "idempotent" wasn't the right word; the idea being described is that crafting is acyclic.

For example Copper Wire + Iron = Green Circuit.

There's no recipe chain that takes in Green Circuits and produces either Iron or Copper Wire.

So once produced, you need to find a higher function for your products. You can't 'unmake' through a cyclic recipe.

Contrast that with ONI's water cycle, where clean water gets made dirty and then cleaned again.

So ultimately in Factorio everything leads toward the only resources that get destroyed: Science, Rockets, or Coal (and other energy types).
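Another way to state the "crafting is acyclic" observation: the product-to-ingredient graph is a DAG, so no chain of recipes ever leads back to its own output. A sketch with invented recipe data and a simple DFS cycle check:

```typescript
// Each recipe maps a product to the ingredients it consumes.
const recipes: Record<string, string[]> = {
  "copper-wire": ["copper-plate"],
  "green-circuit": ["iron-plate", "copper-wire"],
  "science-pack": ["green-circuit", "iron-plate"],
};

// DFS over product -> ingredient edges; a back edge means some chain
// of recipes could "unmake" a product into its own inputs.
function hasCycle(graph: Record<string, string[]>): boolean {
  const state = new Map<string, "visiting" | "done">();
  const visit = (node: string): boolean => {
    if (state.get(node) === "visiting") return true; // back edge: cycle
    if (state.get(node) === "done") return false;
    state.set(node, "visiting");
    for (const dep of graph[node] ?? []) if (visit(dep)) return true;
    state.set(node, "done");
    return false;
  };
  return Object.keys(graph).some((node) => visit(node));
}

// ONI-style loop for contrast: water can be dirtied and cleaned again.
const waterCycle: Record<string, string[]> = {
  "clean-water": ["polluted-water"],
  "polluted-water": ["clean-water"],
};

console.log(hasCycle(recipes));    // false: a pure DAG, nothing gets unmade
console.log(hasCycle(waterCycle)); // true: resources circulate
```

With an acyclic graph, every crafted item can only flow "upward" until it reaches a sink that destroys it.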


"There's no recipe chain that takes in Green Circuits and produces either Iron or Copper Wire. So once produced, you need to find a higher function for your products. You can't 'unmake' through a cyclic recipe."

There are Factorio mods that add recycling machines, which let you get back the ingredients that went into making whatever you put into them.

So Factorio's recipes can be fully cyclic, if you want.

