The whole point of the bill is to create a cause of action for the Attorney General to sue companies. The bill sets damages at up to $2,500 per negligently affected child ($7,500 if intentional), so it doesn't matter how many non-children it affects. E.g., if the OS/app store/accounts/application is used only in a workplace that employs adults, none of this matters.
The bill doesn't define "accounts", so it's entirely possible local users that a human signs into would count.
The saving grace is that obviously they have no idea what a Linux distribution is, and only the Attorney General can bring action, so there isn't much risk of the AG suing Debian.
The crazy people depend a lot on routes, the part of the city, and the time of day. E.g. the 1 (Sacramento St/California St) is basically fine all the time. The 38 (Geary) and 14 (Mission) are OK during the commute rush since they are packed full of commuters, but outside of those times, you will eventually see all kinds of antisocial behavior (shouting, fights, defecation, etc.), especially closer to civic center/tenderloin/mission.
I've previously struggled to get LLMs to manipulate tscn/tres files since they like to generate non-unique uids. Despite being text files, Godot tscn/tres files are normally meant to be manipulated by the editor and need to define and reference unique ids. The editor always generates completely random alphanumeric strings, but LLMs like to use names or placeholders (e.g. "aaaaa1", "example", or "foobar") for the ids.
The linter in the article that detects duplicate uids is interesting. The article is obviously about creating a bunch of harnesses so the LLM can be productive. I wonder how many problems can be transformed like this from something LLMs just can't do reliably into something they just need to burn credits on for a while. The LLM probably can't tell if the games are fun, especially with its rudimentary playtesting, but who knows.
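For illustration, a duplicate-uid check along the lines of the one described could be sketched in a few lines. This is my own minimal sketch, not the article's linter, and it assumes Godot 4's convention of storing uids as `uid="uid://..."` attributes of lowercase letters and digits:

```python
import re
from collections import defaultdict
from pathlib import Path

# Assumption: Godot 4 scene/resource files carry uids as uid="uid://..."
# attributes made of lowercase letters and digits.
UID_RE = re.compile(r'uid="(uid://[0-9a-z]+)"')

def find_duplicate_uids(paths):
    """Return {uid: [files]} for every uid that appears more than once."""
    seen = defaultdict(list)
    for path in paths:
        for uid in UID_RE.findall(Path(path).read_text()):
            seen[uid].append(str(path))
    return {uid: files for uid, files in seen.items() if len(files) > 1}
```

Run it over every .tscn/.tres in the project and fail the build if the returned dict is non-empty; that turns "the LLM invented a duplicate id" from a silent corruption into a loop the agent can self-correct in.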
regarding the non-random ids: I had this issue with uuids. Now I have "Never write your own ids. Always use uuidgen to generate real ones" in my AGENTS.md and haven't had this issue for a long time now.
I'm playing around with a tool to generate the IDs for me. I'm honestly not sure if it'll be an improvement since it likely means more tokens/context than just letting it YOLO IDs.
The model makes a huge difference. I tried this about a year ago and Claude occasionally got it right. These days, it seems to get it right on the first try most times and then always self-corrects afterward. Codex 5.2 (I haven't played with 5.3 enough yet) gets it wrong more often than not, and frequently doesn't call the linter; I'm willing to accept that my bloated CLAUDE.md might be a bad fit for Codex and causing this to fail.
Generally yes, but if you use e.g. the parallel consumer, you can potentially keep processing in that partition to avoid head-of-line blocking. There are downsides to having a very old unprocessed record: the consumer group's offset can't advance past it, and the consumer instead has to track the individual offsets it has completed beyond it. You don't want to stay in that state indefinitely, but the hope is that your DLQ eventually succeeds.
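The offset bookkeeping described above can be modeled roughly like this. This is a simplified sketch of the idea (one stuck record pins the committable offset while later completions accumulate), not the parallel consumer's actual implementation:

```python
class OffsetTracker:
    """Per-partition tracker for out-of-order record completion.

    The committable offset only advances past a record once every earlier
    record is done, so one stuck record holds it in place while later
    completions pile up in `done`.
    """

    def __init__(self, start=0):
        self.committable = start  # next offset that is safe to commit
        self.done = set()         # completed offsets beyond the gap

    def complete(self, offset):
        self.done.add(offset)
        # Advance past any contiguous run of completed offsets.
        while self.committable in self.done:
            self.done.remove(self.committable)
            self.committable += 1
```

If offset 0 is stuck in DLQ retries while 1..3 finish, `committable` stays at 0 (and `done` grows, which is the memory cost of staying in that state); once 0 finally succeeds, the commit jumps straight to 4.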
But if your DLQ is overloaded, you probably want to slow down or stop, since sending a large fraction of your traffic to the DLQ is counterproductive. E.g. if you are sending 100% of messages to the DLQ due to a bug, you should stop processing, fix the bug, and then resume from your normal queue.
Their options should be priced lower, but the common stock isn't valued according to the $5.15B. They raised $300M at $12B and $425M at $7.4B, both of which are underwater, so those shareholders will use their liquidation preference to get paid at least 1x. Assuming those rounds owned 7% of the company, there is at most $4.4B left for the remaining 93% of shareholders, which is about 8% less than a pro-rata share. If they deducted fees, legal services, or retention packages, or had worse liquidation preferences or more underwater rounds, it gets even lower.
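Back-of-the-envelope version of that math (the 7% combined stake of those two rounds is an assumption, not a disclosed figure):

```python
valuation = 5.15e9               # sale/valuation figure
preference = 300e6 + 425e6       # 1x preference on the two underwater rounds
pref_stake = 0.07                # assumed combined ownership of those rounds

left_for_common = valuation - preference        # ~$4.425B for everyone else
pro_rata = (1 - pref_stake) * valuation         # what 93% would get with no preference
haircut = 1 - left_for_common / pro_rata        # ~7.6%, i.e. "about 8% less"
```

Any deal fees or additional preferred stack ahead of common would come out of `left_for_common` first, which is why the haircut only gets worse from here.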
You have to exercise the options or let them expire. You normally have 10 years, not 7, but when a company comes up on 10 years after it issued its first options, it might try a tender offer to buy some employee shares. If your 10-year-old "start up" shares can't be sold anywhere, they probably aren't worth exercising. A company that can't provide liquidity to employees for 10 years will probably never do it.
ISO options have to expire within 10 years of when they are granted. Sometimes companies make them expire earlier than that, so OP might be thinking of options they were granted. E.g. I once had options that expired 30 days after ending employment even though the ISO requirement allows up to 90 days.
PG does reuse plans, but only if you prepare a query and run it more than 5 times on that connection. See plan_cache_mode [0] and the PREPARE docs it links to. This works great for simple queries that run all the time.
It sometimes really stinks on some queries since the generic plan can't "see" the parameter values anymore. E.g. if you have an index on (customer_id, item_id) and run a query where `customer_id = $1 AND item_id = ANY($2)` ($2 is an array parameter), the generic plan doesn't know how many elements are in the array and can decide on an elaborate plan like a bitmap index scan instead of a nested loop join. I've seen the generic plan flip-flop in a situation like this and cause a >100x load difference.
The plan cache is also per-connection, so the same query still gets planned separately on every connection. This is another reason why consolidating connections (e.g. behind a pooler) is important in PG.
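For intuition, PostgreSQL's choice between a custom and a generic plan for a prepared statement can be modeled roughly like this. This is a simplified sketch of the heuristic (the real logic in plancache.c also adds a replanning-cost fudge factor to the custom-plan side, which this ignores):

```python
def choose_plan(num_custom_plans, avg_custom_cost, generic_cost, mode="auto"):
    """Simplified model of how PG picks a plan for a prepared statement.

    mode mirrors plan_cache_mode: "auto", "force_custom_plan",
    or "force_generic_plan".
    """
    if mode == "force_custom_plan":
        return "custom"
    if mode == "force_generic_plan":
        return "generic"
    # auto: the first five executions always plan with the actual
    # parameter values, while tracking the average custom-plan cost.
    if num_custom_plans < 5:
        return "custom"
    # After that, use the generic plan if it doesn't look more
    # expensive than the custom plans have been on average.
    return "generic" if generic_cost <= avg_custom_cost else "custom"
```

The flip-flop case above is exactly when `generic_cost` and `avg_custom_cost` sit close together: small shifts in observed costs push the choice back and forth between very different plan shapes.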
Yes, manual query preparation by the client [1] is what you did in MSSQL Server up until v7.0 I believe, which was 1998, when it started doing automatic caching based on statement text. I believe it also cached stored procedure plans before v7.0, which is one reason stored procedures were recommended for all application access to the database back then.
MSSQL Server also does parameter sniffing nowadays and can cache multiple plans based on parameter values. It also has hints to guide or disable sniffing, since many times a generic plan is actually better. Again, something else PG doesn't have: HINTS [2].
PG being process-based per connection instead of thread-based makes it much more difficult to share plans between connections, and it also has no plan serialization ability. MSSQL, by contrast, can save plans to XML; they can be loaded on other servers and "frozen" so the optimizer uses that plan if desired, and they can also be loaded into plan inspection tools that way [3].
In MSSQL Server, the various session/connection options are part of the plan cache key; if they differ, separate plans are cached.
I believe PG's plan data structure is intimately tied to process-address-space memory addresses, since it was never designed to be shared between processes, and it can even contain generated executable code. This makes it difficult to share plans between processes without a heavy redesign, but it would be a good change IMO.