
Yup, here are Node's docs for it (still a WIP): https://nodejs.org/api/permissions.html
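
A minimal sketch of how that's meant to work, going by the WIP docs (the flag and the runtime API names may still change, so treat this as illustrative):

    // Sketch based on the WIP permission docs; flag and API names may change.
    // Start Node with the permission model enabled and a read grant, e.g.:
    //   node --experimental-permission --allow-fs-read=/app/data index.js

    // process.permission only exists when the model is enabled, hence the guard.
    const permission = (process as any).permission;

    if (permission?.has('fs.read', '/app/data')) {
      console.log('fs.read granted for /app/data');
    } else {
      console.log('fs.read not granted (or the permission model is disabled)');
    }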


in Israel, Friday is generally not a workday (or "business day"), a la Sunday in the western world


For comparison, what percentage of human-generated code is secure?


It seems reasonable to want Copilot to help you produce code of a reasonable quality.

If it’s just helping you crank out the same bad code more quickly, without learning anything in the process, that’s useful to know. Some people might still want a tool like that; I wouldn’t.


Sure. But in order to know if it's 'of reasonable quality' you need some sort of baseline to compare it to. What is reasonable quality? I think what your average human does is probably reasonable.

Like, if your average dev produces insecure code in 80% of samples, then Copilot starts to look really good! But if it's closer to 0.01% of code samples, then Copilot looks more like an intriguing novelty, not to be brought too near serious work. Much like Dippin' Dots in this regard.


That's basically where my gut went when I read the headline: so is the code of a junior engineer, or really of any engineer who hasn't had to think about it, and we don't promote their code directly to prod either (if we can avoid it).

Copilot shouldn't be able to generate code destined for prod without review any more than any line of code written by a human should.


> For comparison, what percentage of human-generated code is secure?

Yeah how did they measure? Did static and dynamic analysis find design bugs too?

Maybe, as part of a Copilot-assisted DevSecOps workflow involving static and dynamic analysis run by GitHub Actions CI, create Issues with CWE (Common Weakness Enumeration) URLs from e.g. the CWE Top 25 in order to train the team, and Pull Requests to fix each issue? https://cwe.mitre.org/top25/

Which bots send PRs?


Are events and webhooks mutually exclusive? How about a combination of both: events for consuming at leisure, webhooks for notification of new events. This gives instant notification of new events while keeping the benefits outlined in the article.


What about supporting fast lookup of the event endpoint, so it can be queried more frequently?

I think that a combo of webhooks / events is nice, but "what scope do we cut?" is an important question. Unfortunately, it feels like the events part is what gets cut, when I'd argue that events are significantly more important.

Webhooks are flashier from a PM perspective because they are perceived as more real-time, but polling is just as good in practice.

Polling is also completely in your control: you will get an event within X seconds of it going live. That isn't true for webhooks, where a vendor may have delays on their outbound pipeline.


The article advocates for long-polling.


Yea, you're right. I am reading the advocacy as "if you need real-time, then support long-polling."

I see the value in this, but I actually disagree with the article in terms of that being the best solution. Long-polling is significantly different from polling with a cursor offset and returning data, so you wouldn't shoehorn it into an existing endpoint.


Couldn't keeping a request open indefinitely open the system up to the potential of DoS attacks though? Correct me if I'm wrong, but isn't it kind of expensive to keep HTTP requests open for an indeterminate amount of time, especially if the system in question is servicing many of these requests concurrently?


I think that's what the author was getting at, after reading through the whole article. The idea isn't to get rid of webhooks, but provide an endpoint that can be used when webhooks won't necessarily work.


Very similar to how I built my previous application.

1) /events as the source of truth (i.e. cursor-based logs)

2) websockets for "nice to have" real-time updates, as a way to hint clients to refetch what's new
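
A minimal sketch of that shape, in TypeScript (the /events path, the cursor parameter, and the websocket URL are all made up for illustration):

    // Events endpoint is the source of truth; the push channel is only a hint.
    type Ev = { id: number; type: string; payload: unknown };

    let cursor = 0; // last event id we have processed (persist this in real use)

    async function drainEvents(): Promise<void> {
      // Poll /events with a cursor until caught up; this is the reliable path.
      while (true) {
        const res = await fetch(`https://api.example.com/events?after=${cursor}&limit=100`);
        const events: Ev[] = await res.json();
        if (events.length === 0) break;
        for (const e of events) {
          handle(e);     // idempotent handler: safe to see the same event twice
          cursor = e.id; // advance (and persist) the cursor only after handling
        }
      }
    }

    function handle(e: Ev): void {
      console.log(`event ${e.id}: ${e.type}`);
    }

    // The websocket only says "something new exists"; data always comes from /events,
    // so a dropped push is harmless.
    const ws = new WebSocket('wss://api.example.com/updates');
    ws.onmessage = () => { void drainEvents(); };

    // A periodic fallback poll covers websocket outages entirely.
    setInterval(() => { void drainEvents(); }, 30_000);
    void drainEvents();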


Yeah... I'd go so far as to argue that this is the only architecture that should ever be considered, as having only one half of the solution is clearly wrong.


This is the way to go, and I'd love to see more APIs with a robust events endpoint for polling & reconciliation. Deletes are especially hard to reconcile with many APIs, since they aren't queryable and you need to individually check whether every ID still exists. Shopify, I'm looking at you.


Yes to the combination of both. I worked on architecture and was responsible for large-scale systems at Google. Reliable giant-scale systems do both event subscription and polling, often at the same time, with idempotency guarantees.


Sorry if I'm daft, could you/someone explain why one would want to use both at the same time for the same system?

One thing that makes sense: if you go down, you can use polling to catch up at your own pace. But that isn't really at the same time. When/why does it make sense to do both simultaneously?


There is an inherent speed / reliability tradeoff that is extremely difficult to solve inside one message bus. When you get to truly large systems with a lot of nines of reliability, it starts to make sense to use two systems:

1. Fast system that delivers messages very quickly but is not always partition-tolerant or available

2. Slower, partition-tolerant system with high availability but also higher latency (i.e. a database)

The author goes through this in the very first section. Webhook events will eventually start getting lost often enough for the developer to think about a backup mechanism.

Long-polling works if you have a lot of memory on your database frontend. Most shared databases want none of your long-running requests to occupy their memory which is better used for caches.

Even if your message bus has the ability to store and re-deliver events, you might want to limit this ability (by assigning a low TTL). Consider that the consumer microservice enters and recovers from an outage. In the meantime, the producer's events will accumulate in the message service. At the same time, the consumer often doesn't need to consume each individual event but rather some "end state" of an entity or document. If all lost events were to get re-delivered, the consumers wouldn't be able to handle them, and would enter an outage again. This is where deliberately decreasing the reliability of the message bus and relying on polling would automatically recover the service.
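
To make the end-state point concrete, a rough sketch of coalescing a backlog (the event shape and the source-of-truth helpers are made up):

    // Coalesce a backlog per entity so that, after an outage, the consumer fetches
    // one current state per document instead of replaying every lost event.
    type BusEvent = { entityId: string; seq: number };

    function coalesce(backlog: BusEvent[]): string[] {
      // Only the set of affected entity ids matters, not the individual events.
      return [...new Set(backlog.map(e => e.entityId))];
    }

    async function recover(backlog: BusEvent[]): Promise<void> {
      for (const id of coalesce(backlog)) {
        const current = await fetchCurrentState(id); // poll the source of truth
        applyState(id, current);                     // idempotent: overwrite local copy
      }
    }

    // Hypothetical stand-ins for the real source-of-truth API.
    async function fetchCurrentState(id: string): Promise<unknown> { return { id }; }
    function applyState(id: string, state: unknown): void { console.log(id, state); }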

There are other reasons, of course. The author is absolutely correct in their statement, though: whenever a system is implemented using hooks / messages, its developers always end up supplementing it with polling.


What's the point of implementing webhooks once you've implemented long polling for the /events endpoint?


I'd argue against long/persistent polling. Webhooks allow for zero resource usage until a message needs to be delivered.


> Webhooks allow for zero resource usage until a message needs to be delivered.

Doesn't that only work in the case where the server treats each webhook delivery as ephemeral? If you're keeping a queue to allow reliable / repeatable delivery, that's definitely not "zero resource usage", right?


On the sender side, sure. On the receiver side? You have to have a service listening 24/7.


I don't think the original comment meant long polling (i.e. keeping the connection alive), they meant periodically call the endpoint to check for events.


The article advocates for long polling of endpoints.


That's technically correct, but ultimately useless, information


All information is useless until it's useful


Speed is variable, capacity can be improved. The bigger question re Starlink is latency, which this test doesn't show.


I saw these Starlink speed test results 2 weeks ago: https://tweakers.net/i/xDBb34kfMAju3YPPMsC3iOc2gXc=/656x/fil...

Seems like pretty solid latency :)


30-100ms is better than I feared, and should be generally reasonable for most uses.


Consistent 30ms would be pretty excellent, and make it useful for many things. Consistent 50ms, similarly. It starts to become a bit more of an issue at 80ms or 100ms, but my worry is more that jitter may be huge, and 30-100ms is a huge jitter window that could limit usefulness not just for games, but also many other things such as voice calls.


A 100ms ping is perfectly playable in everything except twitch-based shooters. In voice calls you will notice it, but it won't get in the way like say a 1-2s delay would (like you get if you phone from one end of the world to the other). It's really a very good result bearing in mind that this can work absolutely anywhere.


Fighting games, too. Most gamers know to avoid wifi, much less a wireless ISP.


I don't think that's still true with 802.11ac. My Wi-Fi adds only 1 or 2 ms latency. There is way more jitter, but it doesn't really matter if the total is always under 10ms.


Wifi doesn't add any latency, inherently. Nor in practice with actual equipment.


The shared medium (frequency spectrum) is what can add latency. If a device wants to talk over Wifi but another device is transmitting it has to wait. This introduces (variable) latency, aka jitter.

Here's an anecdotal example for you, in practice with actual equipment:

1) Mac pro via ethernet to router:

    # ping -c 5 -S 192.168.1.88 192.168.1.1
    PING 192.168.1.1 (192.168.1.1) from 192.168.1.88: 56 data bytes
    64 bytes from 192.168.1.1: icmp_seq=0 ttl=64 time=0.413 ms
    64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.396 ms
    64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.417 ms
    64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.553 ms
    64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=0.514 ms

    5 packets transmitted, 5 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 0.396/0.459/0.553/0.063 ms
2) Same machine via wifi over UniFi AP to router:

    # ping -c 5 -S 192.168.1.72 192.168.1.1
    PING 192.168.1.1 (192.168.1.1) from 192.168.1.72: 56 data bytes
    64 bytes from 192.168.1.1: icmp_seq=0 ttl=64 time=2.992 ms
    64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=4.136 ms
    64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.873 ms
    64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=2.293 ms
    64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=2.552 ms

    5 packets transmitted, 5 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 1.873/2.769/4.136/0.774 ms
That's an average of 2.3ms extra latency, or 6x higher.


I speak a lot with customers on satellite phones more or less every day. When they are calling from a BGAN terminal the latency can be a problem. Latency is around 1000-1500ms. But when speaking on a VSAT Ka-band terminal the latency is less of a problem. Data latency is around 600-800ms on VSAT. So I doubt latency of 100ms would be a problem at all. Most VOIP solutions have some mechanisms to reduce the perceived latency.


I'm struggling to imagine a network path that could induce a 1-2 second delay from one side of the world to the other. Even at just 50% of c-in-vacuum that's only a tenth of a second.


There's a lot of active gear introducing delays between those ends. Pinging www.govt.nz, which is about 17000km from me and as close to the antipode as I can find in a quick search, pings at around 300 to 400ms, so only at about 15 to 20% of c.


Huh. My nearest land antipode (from Boston, MA, USA) is Perth, and I get about 150ms ping to there. Regardless, even 400 ms is a far cry from 2000 ms.


A ping is a round trip, so you have to double the distance.


Even in an FPS, unless you are trying to play at a pro level or something, 100ms will be barely noticeable. Your reaction time is probably already over 300ms.


One thing that may be a game changer: inter-satellite relaying. With the whole network, Starlink client to Starlink client latency might actually drop.


30 is really good actually. At least for what I’m used to.


On my home fiber, pretty much all sites I visit are like 80 ms away; Starlink would be an upgrade for me, latency-wise.


anything above 100 ms is starting to get into unusable territory for quite a few applications


Count yourself lucky you don't live in South America, Africa or Asia, where 100ms+ latency is more common than not. Fortunately I can count on one hand the applications that don't work with 100ms+ latency (not counting gaming). Usually it's the fault of the application developers (or rather the infrastructure team), who set request timeouts too low rather than letting the requests run a bit longer.

Sub 100ms for satellite internet is incredible, hopefully it'll be cheap enough for people to actually get, compared to the current satellite internet we have.

Can't wait for humanity to become a multi-planet species, as then application developers would have to start taking multi-minute latencies into account, and hopefully that'll help me as someone with ~500ms latency to most services.


> Fortunately I can count on one hand the applications that don't work with +100ms latency (not counting gaming).

Pardon my ignorance, but what are some examples of applications that wouldn't work with 100ms+ latency?


Depends on where in the 100ms+ range we are. Once you start hitting 1s latency, lots of applications (or rather, their servers) have a hard 1s limit for every request. So when loading data from the backend, you have to keep retrying the request until it completes below 1s, and then you will finally get the data.

I think Adobe has been one of the worst companies I've dealt with personally, as many of their endpoints have ridiculously low timeouts (for someone with really shitty latency).


Anything that requires real-time interaction between a client and a server and other clients, e.g., gaming, stadia/geforce now, videoconferencing, ...

< 100ms is usually the "minimum", > 150ms is often "unusable", and for a smooth experience you might need < 30ms depending on the application (e.g. depending on the game you might need < 90ms or <60ms or <30ms).


Videoconferencing does not need imperceptible latency.

The other two things on your list are both gaming.


Depends how much you care about people talking over one another. If your call is a presentation/lecture/class with few switches between speakers, latency's no problem.

But if your calls normally have lively discussion where someone different jumps in any time there's a pause, the higher the latency the more likely people will say "meeting in person is much better"

Likewise, with things like remote desktop, 100ms of latency isn't a dealbreaker but it'll certainly leave some of your users saying "things that run locally just feel snappier"


For mobile phone networks, >20ms latency in audio is "unacceptable" from the point-of-view of standards conformance and a client "accepting" the hardware of some vendor.

Up to 100ms is kind of OK-ish, barely sluggish, but over 100ms of latency it becomes extremely annoying to maintain a conversation.

Video conferencing often makes this worse, because it is what people use for meetings, etc. and that involves more than 2 people maintaining a conversation, so latency becomes even more important there.

Otherwise 3-4 people start talking over each other, and none of them notices until they receive what the others are saying. Which is extremely annoying.


Gaming and video conferencing come to mind. Videoconferences need bandwidth for obvious reasons, but it's also nice if you don't have any delay between when you say something and when the other side hears it. I've been in some calls lately with very noticeable delays. Especially people joining from mobile phones tend to be affected (shit latency, variable bandwidth).


Voice chat is obnoxious with more than a few tens of ms latency.

At around 100ms things stop appearing instantaneous; beyond that, everything starts to suck if you're not a patient person.


> Can't wait for humanity to become a multi-planet species, as then application developers would have to start taking multi-minute latencies into account, and hopefully that'll help me as someone with ~500ms latency to most services.

Stuff that requires realtime (or near realtime) communication simply won't be possible.

What remains is bulk data transfer; here I guess the only viable way is something like the following (a toy sketch follows the list):

1) on both ends in both directions, massive buffers (at least bandwidth x 4)

2) massive FEC (of course it will reduce the net bandwidth, but there's no real other way to avoid lots of retransmissions)

3) sender station transmits the data in blocks, with each object of data having a specified number of blocks

4) receiver station checks all the data blocks for integrity, places it in buffers, and transmits back a list of broken blocks and a list of successfully received blocks

5) sender station receives the list of broken/successful blocks, deletes successful blocks from its buffer and retransmits those marked as broken

6) receiver station waits until all the blocks for an object of data have been successfully transmitted, and delivers the message to the recipient system
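
A toy sketch of steps 3-5 in TypeScript, with a trivial checksum standing in for real FEC (all names and sizes are made up):

    // Block-based transfer with per-block integrity checks and NACK-driven retransmission.
    type Block = { objectId: string; index: number; data: Uint8Array; checksum: number };

    function checksum(data: Uint8Array): number {
      return data.reduce((a, b) => (a + b) % 65521, 0); // trivial stand-in, not real FEC
    }

    // Sender: split an object into fixed-size blocks; keep them buffered until acknowledged.
    function toBlocks(objectId: string, payload: Uint8Array, blockSize = 1024): Block[] {
      const blocks: Block[] = [];
      for (let i = 0; i * blockSize < payload.length; i++) {
        const data = payload.slice(i * blockSize, (i + 1) * blockSize);
        blocks.push({ objectId, index: i, data, checksum: checksum(data) });
      }
      return blocks;
    }

    // Receiver: buffer good blocks and report broken/received indexes back to the
    // sender in one (high-latency) round trip; the sender retransmits only `broken`.
    function receive(blocks: Block[], buffer: Map<number, Uint8Array>) {
      const ok: number[] = [];
      const broken: number[] = [];
      for (const b of blocks) {
        if (checksum(b.data) === b.checksum) { buffer.set(b.index, b.data); ok.push(b.index); }
        else broken.push(b.index);
      }
      return { ok, broken };
    }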



Yeah, in short: content-addressable systems are needed if we're ever to send data between planets on a larger scale. Systems like IPFS and the like solve this problem nicely, at least in my tests with high-latency scenarios.


What sort of areas is low latency important in outside of video games?


Finance, stock trading. Theoretically Starlink should eventually have lower latency than the optical cables under the Atlantic.


I wonder when the first CDNs will be put into space, once this becomes broadly used.


This got me thinking... people _will_ start putting infra into space sooner or later, just for the better latency. Imagine the new availability zones in aws/gcp/azure :)

Could definitely lead to more investment into space, more money to SpaceX / Blue Origin / etc for their new-gen lift vehicles.


Would that not be impossible because of the lack of effective cooling in space? The only way to get rid of heat in space is by radiation, which is a very inefficient process. Assuming temperatures cannot rise above 100°C, every square meter of radiator fin emits at most a kilowatt. And because half the orbit is spent in direct sunlight, a significant part of the surface area will have to be reflective, making it useless for radiating heat.

Also, where'd you get the power? Solar panels will only yield a kilowatt per square meter at most, for half of the orbit. Beam it up from Earth?
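
For what it's worth, the "roughly a kilowatt per square meter at 100°C" figure checks out against the Stefan-Boltzmann law (a back-of-envelope check that ignores emissivity below 1 and any absorbed sunlight):

    // Back-of-envelope radiator check using the Stefan-Boltzmann law.
    const sigma = 5.670e-8;      // Stefan-Boltzmann constant, W / (m^2 K^4)
    const tempK = 100 + 273.15;  // 100 °C in kelvin
    const wPerM2 = sigma * tempK ** 4;
    console.log(wPerM2.toFixed(0)); // ~1100 W/m^2, i.e. about a kilowatt per square meter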


That's actually better than what most 3rd world countries got, oh god.


It’s better than what most of the US has (by land not people)


I would guess the same for the UK - certainly for Scotland :-(

Edit: Surprisingly little of the UK is actually built up areas: https://www.sheffield.ac.uk/news/nr/land-cover-atlas-uk-1.74...


That really is surprising considering it's just an island (or a few islands? I can never remember whether Ireland/the Isle of Man is included). It's weird how expensive housing can get when there's so much empty space.


Part of the island of Ireland, Northern Ireland, is part of the UK, the rest being the Republic of Ireland (which everyone not in NI refers to as just Ireland).

The Isle of Man isn't part of the UK, though it is a Crown Dependency.

This is all just as confusing as it sounds...


is that real? how are they beating the speed of light?


Latency to where? One of the promises of Starlink is reducing latency over mid distances - say transatlantic, or even across the States - due to using vacuum rather than fibres. It will be interesting to see how HFT firms use it.

The Starlink network is nowhere near complete, so I'd expect things to only get better (until customers start piling on).


If I understood it correctly, Starlink doesn't send between satellites yet, only sat<->ground. Sat<->sat is a big point of Starlink, and when they roll that out the latency should go down, especially for Starlink<->Starlink comms, I imagine.


I don't understand how this is meant to lower the latency. Currently:

    ground <--> satellite <--> ground
With satellite links:

    ground <--> satellite <--> satellite <--> ground
How can the latter possibly be faster?

Edit: Thanks for all the responses. I'd been assuming it was a test of Starlink latency only, but if it's Starlink -> ground station -> open internet -> ISP then it would make sense how that would be slower than a pure Starlink connection.


Currently:

ground <--> satellite <--> ground <------------> ground

With satellite links:

ground <--> satellite <--> satellite <--> ground

i.e. the sat-to-sat link should be faster than the ground-to-ground link, on the basis that light transmitted in vacuum goes faster than light transmitted in glass. That's the theory at least.


Plus they're transmitting in a straight line without a lot of switches in between.


More like this:

Currently:

  ground <--> satellite <--> ground <--> ground farther away via legacy infrastructure
With satellite links:

  ground <--> satellite <--> satellite <--> ground farther away directly


Light travels through glass (fiber optics) at 2/3 of the speed in vacuum. So as long as you are skipping some ground links by doing similar length links in space it is faster.

Low altitude orbits mean that the hops up and down can be compensated by faster hops across.

That all of course is not there yet and depends on Starlink implementing the cross-satellite links.
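
Rough numbers for, say, London to New York, assuming a ~5,600 km great-circle distance, fiber at about 2/3 c, and a ~550 km shell with ideal inter-satellite routing (all figures are ballpark):

    // Back-of-envelope one-way latency, London to New York.
    const c = 299_792;            // km/s, speed of light in vacuum
    const groundKm = 5_600;       // approximate great-circle distance
    const fiberMs = (groundKm / (c * 2 / 3)) * 1000;      // ~28 ms in fiber
    const satPathKm = groundKm + 2 * 550;                 // up + across + down
    const starlinkMs = (satPathKm / c) * 1000;            // ~22 ms via an ideal LEO path
    console.log(fiberMs.toFixed(1), starlinkMs.toFixed(1));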


Current satellites are in a much higher orbit


This is a good question. I see three main reasons, but I may be missing something:

1. Hop count: on the ground it takes more than one hop to cover the same distance.

2. Speed of light is significantly faster in vacuum.

3. The path is potentially straighter for longer distances.


There are some places where there is no ground, say the middle of the Atlantic. Satellite to satellite is important for that.


Yes, that's a few more years away, and the more sats, the more capacity and the lower the latency.


What is the lifetime of a SpaceX satellite?

Judging from their high launch cadence, it seems satellite-satellite communication was just a means to excite the fanboys and motivate their employees, the same way they peddle the Mars stuff.


Why would HFT even consider using it? They are located as close as possible to the exchange they operate on, not across states or halfway across the world.


The same stock is listed in more than one place, and also, e.g., stock movements in NY affect stocks in London. There are undersea cables built for this purpose: https://www.popularmechanics.com/technology/infrastructure/a...


What a waste of human ingenuity. And I thought ad-tech was bad enough


I think you can argue that on the whole it is a waste, but I do believe it has some advantages. E.g. efficient HFTs can reduce bid-ask spreads, which saves a lot of money for retail traders.


I think the premise was that such investments are wasteful if it's only for a trading arms race.


> E.g. efficient HFTs can reduce bid-ask spreads, which saves a lot of money for retail traders.

Berkshire Hathaway has $1000+ spreads, yet you don't see a lot of people complaining.


Stock price is probably a pretty big factor in that along with volume.


Exactly, that is the point.


But being able to do that 50 milliseconds faster really doesn't.


That is true for much of the finance field.


Actually it's not. There are objective benefits to society that you simply ignore.


As there are objective downsides that we, collectively, are ignoring right now.

Financial capitalism shouldn't be praised; at the very best it's a lesson - we got useful tools out of it, and that is all.


I am not praising anything, I am correcting your statement that much of the financial industry is a waste of human ingenuity. That "waste of human ingenuity" enabled us to build the modern world.


It wasn't my statement.


Don't forget using the equivalent of the entire power consumption of Austria to mine bitcoin?


HFT firms have installed microwave relays between Chicago and New York and between London and Berlin to arbitrage on the 47% fiber-optic delay between the exchanges. A LEO satellite relay serves the same purpose. I can see London-New York and New York-Tokyo fiber connections being superseded by LEO satellites.


If you mean Deutsche Börse AG, then that would be Frankfurt am Main and not Berlin, which is a slight difference of about 400 km. Frankfurt to London is actually a shorter distance. https://en.wikipedia.org/wiki/List_of_stock_exchanges


Quite surprised that LEO for cross-exchange arbitrage wasn't already done. Microwaves were not great in bad weather conditions, the last time I took an interest.


LEO sat constellations that are low enough to beat existing solutions need to be almost global to provide reliable communication anywhere.


Arbitrage between multiple exchanges requires data to go from exchange A to exchange B as fast as possible.

Light in vacuum is faster than light in fiber.


Light in vacuum is only faster if it goes in a straight line from point A to B, not if it zig-zags across 50 satellites.


How could zigzagging through fiber be faster than zigzagging through vacuum? You still need to cross nearly the same distance with fiber.


The fiber is a fixed geometry.

The satellite mesh is not, a straight line from point A to point B is not possible most of the time, given the number of satellites available and range of laser communication in space.


https://www.youtube.com/watch?v=QEIUdMiColU is an animation of how it's supposed to work, including latencies

There are enough satellites, and little enough routing overhead on the lasers, that packets will arrive from London to NY faster than even over a great-circle fibre.

In reality how much bandwidth is available is a function of money, and HFTs tend to have a lot of money.


Light travels about 47% faster in the vacuum of space than it does in a fiber optic cable. [1]

[1] https://youtu.be/giQ8xEWjnBs?t=288 (the whole video is worth watching, but that timestamp answers your specific question.)


Too lazy to watch it -- does it take into account the multiple criss-crossing satellite hops?

I watched it -- no, it doesn't :) It compares fiber-optic latency with straight-line light propagation. So the worst-case scenario of non-existent inter-satellite communication could easily be worse than fiber optics.

But I guess if the hedge funds knew exactly which packets travel in a straight line, they could send one packet via Starlink and the others via fiber optics.


Too lazy to answer


yes


They make trades at one exchange based on prices at another. For that reason there have been a lot of microwave relays set up between New York and Chicago, for example. Starlink could reduce latency from New York to London, another important center of trade.


But if you can get access to information from an exchange halfway across the world faster than others, you can definitely trade on that.


And if you can break the speed of light, you don't need a second exchange.


It’s quite easy to “break” the speed of light in fiberoptic glass.


But it's not that useful even if you pour a glass cube over the whole exchange to slow it down.


> It’s quite easy to “break” the speed of light in fiberoptic glass.

I don’t understand this, why? Does light travel faster than the speed of light in fibre optic glass?


"the speed of light in fiberoptic glass" is the thing being broken.


Geographically distributed arbitrage?


arbitrage


They need sat-sat links for that. I think the current generation of Starlink doesn't have that, so initial customers will not see that kind of benefit; their data will be bounced back to a ground station from the same satellite.


I'd assume latency from endpoint to satellite back to terrestrial base station.

I can get satellite Internet right now with 40Mbps down / 5 up, but the latency is 800 to 1000 ms, so it's slow.


He obfuscated the title!


That guy gets it


How do you "shut down" a PWA? At the end of the day, it's just a website?!


Many of the functionalities of a true PWA simply aren’t available in Safari on iOS. And since all browsers on iOS must use the Safari engine, there is no way to get around this limitation.


PWAs installed locally still use many APIs that depend on the OS. A good example is what Apple recently did: remove local storage after a certain period, which effectively destroys a whole array of possible apps. Can you imagine if you e.g. stopped using Duolingo for a week and BAM, all your progress is gone.


Don't let people pin it to their home screen, and push an alternative that is distributed through the App Store instead. That will effectively kill the business.


A PWA can be installable onto your phone, work offline (via a service-worker cache), and receive web push notifications. All three of these could be removed from Chrome.
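
The offline part, for instance, is just a service worker caching the app shell; a minimal sketch (file names and the cache name are placeholders):

    // sw.ts - minimal offline-first service worker sketch.
    const CACHE = 'app-shell-v1';
    const ASSETS = ['/', '/index.html', '/app.js', '/styles.css'];

    self.addEventListener('install', (event: any) => {
      // Pre-cache the app shell so the PWA opens with no network.
      event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
    });

    self.addEventListener('fetch', (event: any) => {
      // Cache-first: serve from cache, fall back to the network.
      event.respondWith(
        caches.match(event.request).then(cached => cached ?? fetch(event.request))
      );
    });

    // In the page: navigator.serviceWorker.register('/sw.js');
    // Web push additionally needs registration.pushManager.subscribe(...) and a push server.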


> How do you "shut down" a PWA? At the end of the day, it's just a website?!

You limit the capabilities of the web apps, like not allowing notifications from websites.


They depend on OS support. Apple/Google would just remove support.


I don't know how shutting down one specific PWA would work but shutting down all PWAs could be done by removing relevant APIs from popular browser(s).

PWAs make use of in-browser APIs that enable native-app-like features. For example, a service worker (i.e. the Service Worker API [0]) enables us to build offline-first web apps. Other examples include the Payment Request [1], Web NFC [2], Notifications [3], Web Speech [4], Contact Picker proposal [5] and more. Google and Apple can choose what they support and could remove APIs from their popular browsers. I think removing APIs is rare because browsers generally care about backwards compatibility but one notable exception is the sunsetting of Flash.

I'm no expert, but I think Google is betting on PWAs succeeding (or at least sees them as inevitable) and is trying to put itself in the best position for when/if that happens. For example, ChromeOS supports PWAs, and Chrome shipped APIs that could be useful when building PWAs and that aren't supported in other browsers (e.g. the Web OTP API might be coming to Chrome soon [6] and makes inputting 2FA codes from SMS real quick). I'd be surprised if Google tried to "shut down" PWAs. However, if PWAs were to succeed, we wouldn't rely on native app stores as much as we do today, and that could mean the end of the Google/Apple duopoly on mobile operating systems. I wrote a short post about this on my personal website [7].

[0] https://developer.mozilla.org/en-US/docs/Web/API/Service_Wor...

[1] https://developer.mozilla.org/en-US/docs/Web/API/Payment_Req...

[2] https://developer.mozilla.org/en-US/docs/Web/API/Web_NFC_API

[3] https://developer.mozilla.org/en-US/docs/Web/API/Notificatio...

[4] https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_...

[5] https://wicg.github.io/contact-api/spec/

[6] https://blog.chromium.org/2020/05/chrome-84-beta-web-otp-web...

[7] https://konaraddi.com/writing/2019/2019-01-06-pwas-could-hel...


Node.js is a server-side implementation of JS, used for running apps on the server. It's like PHP/Ruby/Go/Python/etc, only the syntax is JS.


They are considering a conflict (which they don't feel would really affect anyone) as an option: https://github.com/nodejs/node/pull/20876#issuecomment-39091...


That's really ugly.

