Hacker News | less_less's comments

Yeah. It's especially relevant for the author's focus on shipbuilding. The old shipyard at Hunter's Point in San Francisco is horribly polluted, and they've been working to decontaminate it for more than three decades in order to reclaim the land for other uses (in particular, housing). Treasure Island and Yerba Buena Island also have a lot of pollution from the former naval base there. There is a cost to overregulation, and there is a cost to underregulation.

And OK, sure, there's a lot of industry that ought to happen somewhere. Someone has to build ships and electronics and whatever, and if California's code is too strict then it just becomes NIMBYism. But if some company moves their gigafactory to Reno for easier permitting, I don't know whether (or more likely, by how much) CA is too strict, or NV is too lax. And I know that CA has NIMBYish and overregulatory tendencies, but given the clear bullshit on this website, I'm not inclined to give it the benefit of the doubt either.

I'm especially doubtful when it says "THE classic example of what you can't do in CA" is auto paint shops ("Impossible"!) ... but then the detail it gives is that they're "effectively impossible" to permit in the Bay Area AQMD, that being only one of the state's 35 AQMDs (albeit one of the larger ones).


Hunter’s Point was where they cleaned the boats that were used for open air nuclear weapons testing.

It’s not a great example. I’d be more convinced if you picked a decommissioned shipyard with more conventional problems, like Marinship.


I picked Hunter's Point because I used to live near it. The problems from decommissioning radioactive ships are bad, but they're far from the only pollution that was there. Lots of VOCs, solvents, oils, radiation from other stuff (eg, glow-in-the-dark equipment made with radium), heavy metals, pesticides, PCBs, and what have you.

But sure, there are other shipyards they cleaned up in less than three decades.


In addition to what others have pointed out, many of these aren't actually missing from traditional dictionaries: they're just inflected differently. So your example lists phrases like "operating systems", "immune systems" and "solar systems" as missing from traditional dictionaries, but at least the online OED and M-W have "operating system", "immune system" and "solar system" in them. It's just that your script is apparently listing the plural as a separate phrase.

On languages other than English: in general, different languages do word division very differently. At least in German and Dutch, many of those phrasal verbs are separable, meaning that they are one word in the infinitive but are multiple words in the present tense. So for example, where in English you would say "I log in to the website", in Dutch it would be "Ik log in op de website". "Log in" is two words in both cases, but in Dutch it's the separated form of the single-word separable verb inloggen ("I must log in now" = "Ik moet nu inloggen"). The verb is indeed separable in that the two words often don't end up next to each other: "I log in quickly" = "Ik log snel in".

Dutch, like German, has lots of compounds. But there are also agglutinative languages, which have even more complex compound words, perhaps comprising a whole sentence in another language. Eg (from Wikipedia) Turkish "evlerinizdenmiş" = "(he/she/it) was (apparently/said to be) from your houses" or Plains Cree "paehtāwāēwesew" = "he is heard by higher powers"; and these aren't corner cases, that's how the language works.


Glove80 is super nice, though rather expensive.

I got one with low-force switches. It's very comfortable to type on, but between the low-force switches and slightly different layout from a regular keyboard (column-staggered, concave, symbols in different locations) I make more mistakes. So I usually type on my laptop instead, especially while coding.


If you are comfortable using the laptop keyboard, what was your motivation to buy the Glove80? I use that as my daily driver, but if I didn't get tendonitis from normal keyboards then I wouldn't have bought it.


The laptop keyboard is good enough, but I'd enjoyed using the Kinesis before. I moved long distance and the Kinesis was bulky and didn't make the cut for things to haul. Once I was settled I started looking to set up a proper office again, and that included a keyboard. But I didn't find one that was enough better than a laptop keyboard to get regular use.


It's not supposed to be some red line absolute max price, but rather "how much is this item worth to you?" You set that as your max bid price. If you get it at auction for less than that, you got a good deal, but if you buy it for more, you got a bad deal. If someone outbids you, then maybe it was worth it to them, but you (supposedly) would not have wanted to buy the item for that much, and would rather use your money for something else.

For tricky-to-price items like unique art pieces, the idea that you can pin this down might be a fantasy, but for commodity items it's pretty reasonable. If you can buy the same thing at costco dot com for $500, then it's probably not worth more than $500 to you, and if at auction you get outbid and it sells for $500.01 then you'll shrug and go order the same thing for a cent less, having wasted only a few minutes of your time. If the item you're bidding on is discontinued (e.g. it's last year's model) but you can buy a slightly better one for $550, and you can spare that extra $50, then again you won't be too sad about getting outbid. Online auctions are more popular for used items, but again in that case you usually still have an idea of what a used item is worth to you.
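The mechanics behind this are eBay-style proxy bidding. Here's a simplified sketch (my own, in integer cents to avoid float issues, ignoring tie-breaking and reserve prices): the highest max bid wins at the runner-up's max plus one increment, so bidding your true value never commits you to paying more than the item is worth to you.

```python
def proxy_auction(max_bids_cents, increment=1):
    """Simplified eBay-style proxy auction (prices in cents): the
    highest max bid wins and pays the runner-up's max plus one
    increment, capped at the winner's own max."""
    ranked = sorted(max_bids_cents.items(), key=lambda kv: kv[1], reverse=True)
    winner, top = ranked[0]
    runner_up = ranked[1][1]
    return winner, min(top, runner_up + increment)

# Bid your true value: you never pay more than your max, and a
# last-second snipe below your max cannot displace you.
winner, price = proxy_auction({"alice": 50000, "bob": 45000})
assert winner == "alice" and price == 45001
```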


The logic of "you shouldn't ever need to snipe, just bid your max price" only works if we assume that the max price is a red line though. If I "value something" at $5000 (as in I want to buy it at $5000), and I bid $5000, and someone bids $5000.01, I would probably be happy if I sniped them and got the item for $5000.02.


I'm not defending "you shouldn't ever need to snipe, just bid your max price" as a hard principle, just trying to explain where the idea comes from. Sniping can be strategic for lots of reasons: you don't have to commit to a bid until the last second (in case you find a similar item for cheaper elsewhere), you deny other people information, you might avoid anxiety from wondering whether your bid will win, etc.

That said, the max price is supposed to be a price where you are not especially happy to get the item at that price, but not really sad either, a price where you would say "well, I hoped for better but I guess that's a fair deal". That's not realistically pinned down to the cent. But if you set a max price at $5000 and would be happy to get the item at $5000.02 (for some reason other than satisfaction from sniping), then you set your max price wrong, or at least differently from how economists expect you to set it.


> But if you set a max price at $5000 and would be happy to get the item at $5000.02 (for some reason other than satisfaction from sniping), then you set your max price wrong, or at least differently from how economists expect you to set it.

I think this is the problem. When most sciences observe reality diverge from the model, they see that as a flaw in the model. When economists (at least you HN "economists") observe reality diverge from the model, they seem to see that as a flaw in reality.

The model is wrong.


According to you, a bidder should always be willing to extend the bid to infinity since each increment is only one cent more.


No, I am willing to pay $0.01 more, but not infinitely more. There's a difference.


"I made my max bid $500.00, but I'd have paid $500.01!"

"I made my max bid $500.01, but I'd have paid $500.02!"

"I made my max bid $500.02, but I'd have paid $500.03!"

…where does this process end?


The way you've put it, always exactly 1¢ above your max bid.


I can't give you a precise number, but it's somewhere above $0 and below $∞


This thread is pretty weird.

My phrase "how economists expect you to set it" is probably wrong here, since I'm not an economist, I've just read the most basic theory about how to use this tool, and also used it myself (on eBay, you know, years ago when the site was mostly auctions). So I don't really know what "economists expect", but rather the basic guidelines for using this tool. You got me there.

> I think this is the problem. When most sciences observe reality diverge from the model, they see that as a flaw in the model. When economists (at least you HN "economists") observe reality diverge from the model, they seem to see that as a flaw in reality.

But like, to double-check here: "reality" means your imagined use of a tool that you do not in fact use, right? Like you say you "don't do auctions" and I'm trying to explain what that option is for, and you're countering that the basic "how to use this tool" explanation is a wrong model of reality?


By your logic, there is no such thing as a limit or maximum bid. It's like you don't understand the concept of a maximum.

If you're always willing to add one more cent then that wasn't your maximum.


See also the paper Ribbon filter: practically smaller than Bloom and Xor: https://arxiv.org/abs/2103.02515, which is a similar idea though not by the same authors.

IIRC, binary fuse filters are faster to construct than ribbon filters, but typically not quite as space-efficient. There are also frayed ribbon filters (by me) which are slower and more complex to construct but more space-efficient. There's no paper for those, just a Rust implementation.

Ribbon filters are deployed in Mozilla's Clubcard for distributing compressed certificate revocation lists: https://github.com/mozilla/clubcard and https://jmschanck.info/papers/20250327-clubcard.pdf. CRLs are an almost ideal application of this sort of compressed set tech, since the aggregator runs batch jobs and needs to distribute the set to very many clients. It's not perfectly ideal because CRLs require frequent updates and none of these methods support delta updates. There is a straightforward but inelegant workaround, which is to send a compressed set that represents the delta, and query both on the client.
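That workaround can be sketched with plain Python sets standing in for the compressed filters (the real deployment uses ribbon filters; sets here just make the query-both-and-XOR logic visible):

```python
# Sets stand in for the compressed filters. base covers the last full
# CRL snapshot; delta is the symmetric difference since then. Client
# membership is the XOR of the two lookups, so any change only
# requires shipping the (small) delta filter.
base = {"cert-a", "cert-b", "cert-c"}   # revoked at snapshot time
delta = {"cert-b", "cert-d"}            # changes since the snapshot

def is_revoked(cert):
    return (cert in base) != (cert in delta)   # XOR of the two queries

assert is_revoked("cert-a")       # still revoked
assert not is_revoked("cert-b")   # removed by the delta
assert is_revoked("cert-d")       # newly revoked
assert not is_revoked("cert-x")   # never seen
```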


Another answer to this: https://en.wikipedia.org/wiki/Cayley–Bacharach_theorem

A second special case of this theorem is Pascal's theorem, which says (roughly) that a variant of the elliptic curve group law also works on the union of a conic C and a line L (this union, like an elliptic curve, is cubic), where the group elements are on the conic. One point O on the conic is marked as the identity. To add points A+B, you draw a line AB between them, intersect that with the fixed line L in a point C, draw a second line CO back through the marked identity point, and intersect again with the conic in D:=A+B. This procedure obviously commutes and satisfies the identity law, and according to Pascal's theorem it associates.

Under a projective transformation, if the conic and line don't intersect, you can send the line to infinity and the conic to the units in (IIRC) a quadratic extension of F (e.g. the complex unit circle, if -1 isn't a square in F). Since the group structure is defined by intersections of lines and conics, projective transformations don't change it. So the group is isomorphic to the group of units in an extension of F. If they do intersect ... not sure, but I would guess it instead becomes the multiplicative group in F itself.
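As a concrete numerical check (my own sketch): with the conic as the real unit circle, the line L sent to infinity, and identity O = (1, 0), the chord construction reduces to angle addition, i.e. the circle group:

```python
import math

def add_points(A, B):
    """Chord construction on the unit circle with L = the line at
    infinity and identity O = (1, 0): the line AB meets L in a
    direction, and the parallel line through O meets the circle
    again in A + B. (Sketch only: assumes chord AB is not vertical.)"""
    m = (B[1] - A[1]) / (B[0] - A[0])   # slope of chord AB
    # line through O with slope m: y = m(x - 1); substitute into
    # x^2 + y^2 = 1 and divide out the known root x = 1
    x = (m * m - 1) / (m * m + 1)
    y = m * (x - 1)
    return (x, y)

def on_circle(t):
    return (math.cos(t), math.sin(t))

a, b = 0.7, 1.1
D = add_points(on_circle(a), on_circle(b))
# the resulting group law is angle addition
assert abs(D[0] - math.cos(a + b)) < 1e-9
assert abs(D[1] - math.sin(a + b)) < 1e-9
```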

The multiplicative group of F can be used for cryptography (this is classic Diffie-Hellman), as can the group of units in an extension field (this is LUCDIF, or in the 6th-degree case it's called XTR). These methods are slightly simpler than elliptic curves, but there are subexponential "index calculus" attacks against them, just like the ones against the original Diffie-Hellman. The attack on extension fields got a lot stronger with Joux's 2013 improvements. Since no such attack is known against properly chosen elliptic curves, those are used instead.


Annoyingly, while that d = e^-1 usually isn't used in practice (except in cases where you care about side-channel / fault resistance more than the 4x speedup), the Carmichael totient itself still is used in practice. At least if you want to conform to FIPS 186-5 / SP800-56B, which says that the private key includes d = e^-1 mod the Carmichael totient LCM(p-1,q-1), even if you're going to use the CRT. And that means you have to compute LCM(p-1,q-1), which also has side-channel considerations.
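For reference, computing that FIPS-style private exponent is a one-liner in modern Python (toy primes for illustration; real RSA primes are of course vastly larger, and a real implementation worries about the side channels mentioned above):

```python
import math

# Toy primes for illustration only.
p, q, e = 61, 53, 17

# FIPS 186-5 / SP 800-56B define the private exponent against the
# Carmichael totient lcm(p-1, q-1), not Euler's totient (p-1)(q-1).
lam = math.lcm(p - 1, q - 1)   # Python 3.9+
d = pow(e, -1, lam)            # modular inverse, Python 3.8+

# sanity check: the RSA round-trip still works with this d
n = p * q
m = 42
assert pow(pow(m, e, n), d, n) == m
```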


Do the standards require strong primes for RSA? I think FIPS doesn't ... it gives you that option, either for the legacy reasons or to get a proof with Pocklington's theorem that (p,q) really are prime, but just choosing a random (p,q) and running enough rounds of Miller-Rabin on them is considered acceptable IIRC.
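A sketch of that accept-random-primes approach (my own minimal Miller-Rabin, not the exact FIPS procedure, which also specifies round counts and an optional Lucas test):

```python
import random

def miller_rabin(n, rounds=40):
    """Probabilistic primality test: error probability for a
    composite n is at most 4**-rounds."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # write n - 1 = d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a witnesses that n is composite
    return True            # probably prime
```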


Yeah see https://en.wikipedia.org/wiki/Strong_prime#Factoring-based_c...

There is probably a newer standard superseding that, but it is there in the ANSI standards.


Internally, most signature algorithms use hash functions. RSA-PSS, EdDSA and ML-DSA use them to provide something like randomness, and the security analysis of those signature schemes includes arguments assuming (in some very particular, technical ways) that the hash function outputs "look random".

Classical DSA and ECDSA do not use hash functions this way, but in my opinion they aren't stronger for it: they're basically assuming instead that some other mathematical function "looks random", which seems riskier than assuming that about a hash function. I've heard that the reason for this is to get around Schnorr's patent on doing it with hash functions, which has since expired.

The SHA3 and SHAKE hash functions (underlying e.g. ML-DSA) are explicitly designed to "look random" as well.
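For what it's worth, the extendable-output interface is directly available in Python's hashlib; this just shows the API shape (variable-length, prefix-consistent output), not any security property:

```python
import hashlib

# SHAKE128 is an extendable-output function (XOF): the caller picks
# the output length, and the output is designed to be computationally
# indistinguishable from random bits.
xof = hashlib.shake_128(b"some message")
out16 = xof.digest(16)   # 16 bytes of output
out64 = xof.digest(64)   # a longer read: the first 16 bytes agree
assert out64[:16] == out16
```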

There are some signature schemes that try not to make such strong assumptions: in particular SLH-DSA targets properties more like first- and second-preimage resistance, target-collision-resistance, and so on.


All the algorithms you mention are PKI. RSA uses two large prime numbers. I don't see what hash sequences have to do with this at all.

PKI isn't even really about randomness. RSA does use a kind of randomness to generate its large primes, but that is beneficial and not required. The primary consideration is the math to reverse guess a factor of two primes or the square root of a large number, or something else computers currently find cheap to compute in one way but extremely expensive to reverse.


The intro textbook descriptions of cryptographic systems omit a lot of very important details.

When using RSA to sign a message m, in practice you don't send m^d mod N. That would generally be insecure, depending on what kinds of messages your system sends and/or accepts. In practical systems, instead you hash m, and then adjust the hash through a (possibly randomized) process called "padding" to be a value in [0,N). There are different standards for padding, and the better designs use additional hashing.
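A stripped-down sketch of that hash-then-sign shape, in full-domain-hash style (a textbook construction rather than a deployed padding standard like PKCS#1 v1.5 or PSS, with absurdly small and insecure parameters):

```python
import hashlib
import math

# Toy RSA key; real keys use primes of ~1024 bits or more.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, math.lcm(p - 1, q - 1))

def fdh(msg: bytes) -> int:
    # "full domain hash": an XOF output reduced into [0, n)
    return int.from_bytes(hashlib.shake_128(msg).digest(16), "big") % n

def sign(msg: bytes) -> int:
    # apply the private RSA operation to the hashed message,
    # never to the raw message itself
    return pow(fdh(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == fdh(msg)

assert verify(b"hello", sign(b"hello"))
```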

The security of the system depends in part on the hashed-then-padded message "looking random", i.e. not having structure that can be exploited by an attacker. It turns out to be tricky to formalize what exact randomness property you need, so cryptosystems are often analyzed in the "random oracle model" (ROM) in which the hash function has impossibly strong randomness properties.

It seems that usually, if you use a strong hash function, a scheme that's proved secure in the ROM is secure in real life (or at least it's not the ROM part that breaks); the counterexamples are usually really contrived. This article is about a somewhat-less-contrived, but still not quite realistic, example where something that's secure in the ROM would break due to the ROM being an unrealistic model.


As I understand the paper, the point is that Fiat-Shamir does *not* give a correct proof of the program's output.

They gave a (maliciously constructed) program whose outputs are pairs (a,b) where certainly a != b (instead the program is constructed such that a = b+1 always). But you can get the corresponding Fiat-Shamir protocol to accept the statement "I know a secret x such that Program(x) = (0,0)", which is clearly a false statement.

