Hacker News | 9x39's comments

I’m more curious about the genesis of these laws: whether their sponsors received written suggestions or ghostwritten bills, etc., as a form of parallel construction.

It seems that, all at once and everywhere, many groups with a vested interest in forcing precedent and compliance for non-anonymous access across the computing world have appeared. It smacks of something less-than-organic.


I was curious about your question and googled. Here's the legislative history of the law: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm....

Reading the first analysis PDF:

> This bill, sponsored by the International Centre for Missing and Exploited Children and Children Now, seeks to require device and operating systems manufacturers to develop an age assurance signal that will be sent to application developers informing them of the age-bracket of the user who is downloading their application or entering their website. Depending on the age range of the user, a parent or guardian will have to consent prior to the user being allowed access to the platform. The bill presents a potentially elegant solution to a vexing problem underpinning many efforts to protect children online. However, there are several details to be worked out on the bill to ensure technical feasibility and that it strikes the appropriate balance between parental control and the autonomy of children, particularly older teens. The bill is supported by several parents’ organizations, including Parents for School Options, Protect our Kids, and Parents Support for Online Learning. In addition, the TransLatin Coalition and The Source LGBT+ Center are in support. The bill is opposed by Oakland Privacy, TechNet, and Chamber of Progress.


This law doesn't do anything that prevents non-anonymous access. Here's how you would access things anonymously if you bought a new computer that implemented this.

1. When you set up your account and it asks for your birthdate, make up any date you want that is at least far enough in the past to indicate an age older than what any site you might use that checks age requires.

2. Access things the way you've always done. All that has changed is that things that care about age checks find out you claim to be old enough.

The only people it actually materially affects on your new computer are people who cannot set up their own accounts, such as children if you have set up permissions so they have to get you to make their accounts.

Then, if you want, you can enter a birthdate that indicates a non-adult age, so sites that check age will block them.

From a privacy and anonymity perspective this is essentially equivalent to sites that ask "Are you 18+?" and let you in if you click "yes" and block you if you click "no". It is just doing the asking locally and caching the result.
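The equivalence being drawn here can be sketched in a few lines. This is purely illustrative (the class and method names are made up): the OS asks once at account setup and caches the answer, and sites consume the cached claim instead of showing their own "Are you 18+?" prompt.

```python
class AgeGate:
    """Sketch of the commenter's point: the OS asks the age question once at
    account setup and caches the answer; sites then read the cached claim
    instead of prompting the user themselves."""

    def __init__(self):
        self._claimed_adult = None  # cached result of the one-time prompt

    def setup_prompt(self, claims_adult):
        # Equivalent to clicking "yes"/"no" on an "Are you 18+?" banner,
        # except it happens once, locally, at account creation.
        self._claimed_adult = claims_adult

    def site_allows_entry(self):
        # A site that checks age sees only the cached, self-reported claim.
        return self._claimed_adult is True
```

Nothing here verifies anything; the cached claim is exactly as trustworthy as a clicked checkbox.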


As with all age verification bills, the fact that developers are opened up to liability if children access content they're not "supposed to" means that facial scans and ID checks will be implemented as they currently are everywhere.

From the bill:

> (3) (A) Except as provided in subparagraph (B), a developer shall treat a signal received pursuant to this title as the primary indicator of a user’s age range for purposes of determining the user’s age.

> (B) If a developer has internal clear and convincing information that a user’s age is different than the age indicated by a signal received pursuant to this title, the developer shall use that information as the primary indicator of the user’s age.

It's not enough to just accept the age signal, you can still be liable if you have reason to believe someone is underage based on other information.

The cheapest and easiest way to minimize that liability is with face scans and ID checks. That way you, as a developer, know that your users won't bankrupt you.


Sounds like if the OS doesn't track anything else about the user, it won't receive any other signals and will just use whatever was typed in at account creation.

If websites accept this as age verification it could provide a very easy way to bypass it.

In fact, looking at it again, point B specifically says if the "developer" has information, rather than the "system". So it really sounds like, if the developer isn't collecting logs they can access themselves, this wouldn't apply to them.


The cheapest and easiest way to minimize liability is not to collect any information not needed to actually provide the service you are providing.

I agree. I feel the flow of having browsers send some flag to sites is the most privacy-preserving approach to this whole topic. The system owner creates a “child” account that has the flag set by the OS and prevents the execution of unsanctioned software.

This puts the responsibility back on parents to do the bare minimum required in moderating their child’s activities.


What would be even more privacy preserving would be to mandate that sites send age-appropriateness headers (mainstream porn sites already do this voluntarily).

Possibly it could be further mandated that the OS collect relevant rating information for each account and provide APIs with which browsers and other software could implement filtering.

And possibly it could be further mandated that web browsers adopt support for this filtering standard.

And if you want a really crazy idea you could pass a law mandating that parents configure parental controls on devices of children under (say) 12 and attach civil penalties for repeated failure to do so.

There's never any need for information about the user to be sent off to third parties, nor should we adopt schemes that will inevitably provide ammo for those advocating attested digital platforms.


I think you would find widespread support from the various websites out there for this. Most porn websites today voluntarily implement some type of mechanism that advertises them as not for children.

The issue is: how does the browser know the age bracket of the user in question, so it knows not to load content with those headers? The API this bill mandates is the missing half that would make those headers actually useful without specialized browsers or third-party plugins.

So does Google send a header for each search result when you look up "Ron Jeremy" so that some results get hidden, or does the browser just block the whole page?

Sending all the "bad" data to the client and hoping the client does the right thing puts a lot of complexity on the client. It's a lot easier to know things are working if the bad data never gets sent to the client - it can't display what it didn't get.


Google would send a header saying it is appropriate for all ages (I'm not sure how the safe search toggle would interact with this; the idea is just a rough sketch, after all).

When you click on a search result, you load a new page on a different website. The new page would once again come with a header indicating the content rating. This header would be attached to all pages by law. It would be sent every time you load any page.

Assuming that the actual problem here is the difficulty of implementing reliable content filtering (à la parental controls), then the minimally invasive solution is to institute an open standard that enables any piece of software to easily implement the desired functionality. You can then further pass legislation requiring (for example) that certain classes of website (e.g. social media) include an indication of this as part of the header.

Concretely, an example header might look like "X-Content-Filter: 13,social-media". If it were legally mandated that all websites send such a header, it would become trivially easy to implement filtering on device, since you could simply block any site that failed to send it.
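A minimal sketch of how a client-side filter might consume such a header. The header name, the "min-age,tags" value format, and the block-on-absence rule are all taken from the comment's hypothetical, not from any real standard:

```python
def parse_content_filter(header_value):
    """Parse a hypothetical 'X-Content-Filter' value like '13,social-media'
    into (minimum_age, category_tags)."""
    parts = [p.strip() for p in header_value.split(",")]
    return int(parts[0]), parts[1:]

def should_block(header_value, user_age, blocked_tags=()):
    """Block if the page's minimum age exceeds the user's age, if the page
    carries a tag the account has blocked, or if the mandated header is
    missing entirely (unrated sites get blocked by default)."""
    if header_value is None:
        return True
    min_age, tags = parse_content_filter(header_value)
    if user_age < min_age:
        return True
    return any(t in blocked_tags for t in tags)
```

The block-when-absent default is what makes the mandate enforceable on-device: a filtering client never has to classify content itself, only check for the label.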

> A lot easier to know things are working if ...

Which is followed by wanting an attested OS (to make sure the value is reliably reported), followed by a process for a third party to verify a government issued ID (since the user might have lied), followed by ...

It's entirely the wrong mentality. It isn't necessary for solving the actual problem, it mandates the leaking of personal data, and it opens an entire can of worms regarding verification of reported fact.


Yes, this is a really simple fix. The first line of your post says it all. If they really wanted to protect children, they would put the responsibility on the services at the other end. This is about mass surveillance, or about disadvantaging open source solutions.

If browsers are going to send flags, they should only send a flag if the user is a minor. Otherwise it's another point of tracking data that can be used for fingerprinting.

If you send a flag ever, then absence of a flag is also fingerprinting surface.

If you imagine a world where you have a header, Accepts-Adult-Content, which takes a boolean value, you essentially have three possibilities: ?0, ?1, and absent.

How useful a tracking signal those three options provide depends on what else is being sent:

For example, if someone is stuffing a huge amount of fingerprinting data into the User-Agent string, then this header probably doesn’t actually change anything of the posture.

As another example, if you’re in a regular browser with much of the UA string frozen, and ignoring all other headers for now, then it depends on how likely users with that UA string are to have each option: if all users of that browser always send ?0 (if they indicate themselves to be a minor) or ?1 (if they indicate themselves to be an adult or decline to indicate anything), then a request with that UA and the header absent is significantly more noteworthy — because the browser wouldn’t send it — and more likely to be meaningful fingerprinting surface.
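To put rough numbers on this intuition (the distribution below is entirely made up for illustration), the identifying information carried by a three-state header can be estimated with Shannon entropy and surprisal:

```python
import math

def entropy_bits(probs):
    """Average identifying information, in bits, an observer gains per
    request from seeing one value of the header."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative, made-up split among users of one frozen-UA browser:
# most never touch the setting, so the header is simply absent.
p_absent, p_adult, p_minor = 0.90, 0.08, 0.02

avg_bits = entropy_bits([p_absent, p_adult, p_minor])  # average leakage
rare_bits = -math.log2(p_minor)  # surprisal of seeing the rare value
```

Under these assumed numbers the average leakage is only about half a bit per request, but seeing the rare 2% value narrows the user down by over 5 bits — which is the commenter's point that the unusual states are where the fingerprinting surface lives.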

That said, adding any of this as passive fingerprinting surface seems like an idea unlikely to be worthwhile.

If you want even a weak signal, it would be much better to require user interaction for it.


I'm not sure it's worth entertaining these hypotheticals. Just another absurd CA law that's impossible to comply with.

"When you set up your account and it asks for your birthdate." What does this mean? "Set up" what account? "It" what? Some graphical installer? What if I don't want to use one? How would this protocol be implemented in such a way that it's not trivially easy for the user to alter the "age signal" before sending a request? The "signal" is signed with some secret that you attest to but can't write? So it's in some enclave? What if my smart toaster doesn't have an enclave? Does my toaster now have to implement a software enclave?

I'm not aware of a standard, an industry standards body, a standard specification, or an implementation of a specification around this "age signal" thing. Is this some proprietary technology that some company has a patent on, and has been lobbying to get legally mandated? If so, that's very concerning and probably has antitrust implications (it is ironic that ever-tightening surveillance of people is a downstream consequence of all this deregulation of corporate persons; fine for me but not for thee, I guess).

I would love to know the full story here, since this is being shopped around in several states, but I haven't seen any sort of investigative journalism about it, which is disappointing. This whole thing is really curious.

Most of these questions are actually answered in the law itself. You could be your own investigator in seconds.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...

Your toaster is not impacted. You’re turning a law that, yes, has some open questions around implementation into a much bigger scare and conspiracy theory.

> operating system provider, as defined, to provide an accessible interface at account setup that requires an account holder, as defined, to indicate the birth date, age, or both, of the user of that device for the purpose of providing a signal regarding the user’s age bracket to applications available in a covered application store and to provide a developer, as defined, who has requested a signal with respect to a particular user with a digital signal via a reasonably consistent real-time application programming interface regarding whether a user is in any of several age brackets, as prescribed. The bill would require a developer to request a signal with respect to a particular user from an operating system provider or a covered application store when the application is downloaded and launched.

Let’s be honest here: 99% of general-purpose computing devices targeted at consumers make an “account” when you set them up for the first time. Even Linux, if only to name a home directory. It’s pretty obvious what an account is, especially when the law only applies to bundled app stores. What app store has no account anyway?

It allows the operating system to define the interface. No patent or proprietary system. No surveillance. The law says user interface, not graphical interface; do with that as you will. An OS producer who has an app store probably has a graphical interface, but if not, they’ve surely figured out how to interface with users already.

It actually requires operating systems and developers to not abuse this data or use it for anticompetitive purposes.

There is no attestation. It’s entirely self reported and unverified.
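As a rough sketch of the mechanism the bill describes: the OS stores a self-reported birthdate once at setup, and developers only ever see a coarse bracket. The cutoffs and labels below are illustrative assumptions, not the bill's exact definitions:

```python
from datetime import date

# Illustrative bracket cutoffs; the bill's actual brackets may differ.
BRACKETS = [(13, "under_13"), (16, "13_to_15"), (18, "16_to_17")]

def age_bracket(birthdate, today):
    """Return only a coarse age bracket from a self-reported, unverified
    birthdate -- the developer never sees the date itself."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    for cutoff, label in BRACKETS:
        if age < cutoff:
            return label
    return "18_plus"
```

The privacy property is that the signal is lossy by construction: an application asking the API learns one of four values, nothing more, and the value is only as accurate as whatever the account holder typed in.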


You should follow your own advice.

Their definition of "app store" is a mile wide: "(e) (1) “Covered application store” means a publicly available internet website, software application, online service, or platform that distributes and facilitates the download of applications from third-party developers to users of a computer, a mobile device, or any other general purpose computing that can access a covered application store or can download an application."

Grats, GitHub is an app store. apt-get is an app store. You posting software on your own website is an app store.


GitHub isn’t an app store associated with an operating system, though. Your personal website is most likely not in scope. You have to put all the pieces together.

Apt… yes is an App Store run by an operating system organization (Debian org). That feels pretty unsurprising. Debian’s parent organization (headquartered in the US) probably needs to comply with this.


> Apt… yes is an App Store run by an operating system organization (Debian org). That feels pretty unsurprising. Debian’s parent organization (headquartered in the US) probably needs to comply with this.

And that right there is exactly the fucking problem. A zero profit collective “store” that publishes zero profit hobbyist “apps” is now going to have to invest in some kind of harebrained compliance scheme that will only grow from here.

In a couple of years is my “app” in Debian’s store going to require some goddamn TPS report and certification to tell California that everything is above board? It’s incredibly likely! By itself this law does nothing but lay the groundwork for regulation of “apps”, which by itself might be acceptable, but including FOSS distribution channels and hobby apps in the scope of this law is nothing short of evil. It’s laying the groundwork for a frontal assault on FOSS, and if you don’t see that then I don’t know what to tell you.

My guess is that Linux wasn’t extensively considered in the writing of this law, but when the next stage comes along and people start complaining, legislators will shrug and say “oh well, they need to comply”—and lobbyists for the big 3 proprietary software firms will back that position up. This is setting up a killshot for consumer Linux.


Where in that definition does it say the app store must be associated with an OS?

> It seems that, all at once and everywhere, many groups with a vested interest in forcing precedent and compliance for non-anonymous access across the computing world have appeared. It smacks of something less-than-organic.

I think you’ve nailed it here. How many of these people campaigned on this issue? Where were the grassroots to push this? Where did this even come from?

Somebody, somewhere - with a heck of a lot of money - wants to see this happen. And I don’t think they have good intentions with it.


Conservatives discovered a cheat code to get: (a) people to have to identify on the computer everywhere and (b) control what they can do with and without this identification.

Of course they are copying the play everywhere.


Death threats, mainly. Personally I think it would be easier if platforms just ran a tiny LLM against content about to be posted, determined whether it is a death threat, and then required the poster to be identified before it's posted. That would solve a lot of these problems.

TLDR: evil people get doxxed internally, not everyone.


That turns jokes into contracts that nobody wants. Bad idea.

Maybe just don’t make “jokes” like that.

I don't make such "jokes". Idiots do.

And when the idiots do, the proposed system locks the fire door for them. That's just dangerous. We'd want them to have a bunch of confusing options and better-illuminated de-escalation paths.


a "tiny large language model"? lol

See https://tinyllm.org

These days the name "LLM" refers more to the architecture & usage patterns than it does to the size of model (though to be fair, even the "tiny" LLMs are huge compared to any models from 10+ years ago, so it's all relative).


Yeah, a small one, which is cheaper given that they'll be processing billions of messages per year.

Good thing all the kind people making death threats won’t just bypass it?

I'm totally lost here. If you don't identify, you don't post.

Good thing no one ever breaks any rules!

If a platform decides to require an account to post, or requires your message to pass an LLM sniff test before publishing it, you can break all the rules you want but your message won't be visible to others on said platform.

The example given was a ‘lightweight LLM’ by the poster, which sounded an awful lot like client side?

If server side, you already have the heavyweight stuff going on, and yes there is no need to do all the bypassable shenanigans.


Since that client-side LLM would be processing billions of messages each year on each person's laptop, lol

Actually, that’s what the data and the preponderance of victims allege: an intersection of immigration and policing which interlocked to systematically deprioritize investigation of the abuse of working-class white girls by an overrepresented ethnic group.

In the local data that the audit examined from three police forces, they identified clear evidence of “over-representation among suspects of Asian and Pakistani-heritage men”.

It’s unfortunate to watch people and entire countries twist themselves into logic pretzels to avoid ever suggesting that immigration has any ills, and we’re just being polite about it here.

https://www.aljazeera.com/news/2025/6/17/what-is-the-casey-r...

https://celina101.substack.com/p/the-uks-rape-gang-inquiry


Odds-vs-stakes argument, kinda. Is it perfect? No. Should you do something? Probably.

In personal protective gear, you have ballistic helmets. They don't cover the face. They often have cutouts around your ears. They don't cover your neck. They can generally stop a low velocity handgun round, and anything more energetic except a glancing rifle round is usually going right through. Even if it doesn't penetrate, backface deformation may be lethal. They're still generally worn as the only game in town.


You can see the same mechanisms - albeit with less available cash - in South African farms, such as general property defenses and safe rooms.

Spending a little as a hedge against anarcho-tyranny and its collateral damage showing up in your (gated) neighborhood seems rational for those who can afford it.


> I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.

Advancing a case for a precedent-creating decision is a well-known tactic for creating the environment of success you want for a separate goal.

It's possible you can find a genuine belief in the people who advance the cause. Charitably, they're perhaps naive or coincidentally aligned, and uncharitably sometimes useful idiots who are brought in-line directly or indirectly with various powerful donors' causes.


There was a meme going around that said the fall of Rome was an unannounced, anticlimactic event: one day someone went out and the bridge just wasn't ever repaired.

Maybe AGI's arrival is when one day someone is given an AI to supervise instead of a new employee.

Just a user who's followed the whole mess, not a researcher. I wonder if scaffolding and bolt-ons like reasoning will turn out to be an asymptote short of 'true AGI'. I kept reading about the limits of transformers around the GPT-4 and Opus 3 era, and now those seem basic compared to today.

I gave up trying to guess when the diminishing returns will truly hit, if ever, but I do think some threshold has been passed where the frontier models are doing "white collar work as an API" and basic reasoning better than the humans in many cases, and once capital familiarizes itself with this idea more, it's going to get interesting.


But it's already like that; models are better than many workers, and I'm supervising agents. I'd rather have the model than numerous juniors, especially the kind that can't identify the model's mistakes.


This is my greatest cause for alarm regarding LLM adoption. I am not yet sure AI will ever be good enough to use without experts watching it carefully; but it is certainly good enough that non-experts cannot tell the difference.


My dad is retired and enamored with ChatGPT. He’s been teaching classes to seniors and evangelizing the use to all his friends. Every time he calls he gives me an update on who he’s converted into a ChatGPT user. He seems disappointed with anyone who doesn’t use it for everything after he tells them about it.

A couple days ago he was telling me one lady he was trying to sell on it wouldn’t use it. She took the position that if she can’t trust the answers all the time, she isn’t going to trust or use it for anything. My dad almost seemed offended by this idea, he couldn’t understand why someone wouldn’t want the benefits it could offer, even if it wasn’t perfect.

I think her position was very sound. We see how much misinformation spreads online and how vulnerable people are to it. Wanting a trusted source of information is not a bad thing. Getting information more quickly is of little value if it isn’t reliable data.

If I prod my dad enough about it, he will admit that ChatGPT has made some mistakes that he caught. He knew enough to question it more when it was wrong. The problem is, if he already knew the answer, why was he asking in the first place… and if it was something he wasn’t well versed on, how does he know it’s giving him good data?

People are defaulting to trust, unless they catch the LLM in a lie. How many times does someone have to lie to a person before they are labeled a liar and no longer trusted at face value? For me, these LLMs have been labeled a liar and I don’t trust them. Trust takes a long time to rebuild once it’s broken.

I mostly use LLMs to augment search, not replace it. If it gives me an answer, I’ll click through to the sourced reference, see what it says there, and evaluate whether it’s a source worth trusting. In many cases the LLM will get me to the right page, but it will jumble up the details and get them wrong, like a bad game of telephone.


How do you know that it’s a source worth trusting?

I think the expectation of AI being perfect all the time is probably driven by the hype and marketing of “1 million PhDs in your pocket”.

If you compare AI to an average person, or to a random website you’d come across on Google, I would wager that AI is more likely to be accurate in almost every scenario.

Hyper-specific areas, niche domains, and rapidly evolving data that is not being published - a lot less so.


Thanks for sharing that anecdote. I think everyone is susceptible to misinformation, and seniors might be especially unprepared to adapt to an LLM's tricks.


The problem becomes your retirement. Sure, you've earned "expert" status, but all the junior developers won't be hired, so they'll never learn from junior mistakes. They'll blindly trust agents and not know deeper techniques.


We are currently at a point where the master furniture craftsmen are doing quality assurance at the new automated furniture factory. Eventually, everyone working at the factory will have never made any furniture by hand and will have grown up sitting on janky chairs, and they will be the ones supervising.


This is a great example...

Designing and building chairs (good chairs, that is) is actually a skill that takes a lot of time and effort to develop. It's easy to whip up a design in CAD, but something comfortable? Takes lots of iterations, user tests etc. The building part would be easy once the design is hammered out, but the design is the tough part.


The majority can be like that but the few can set the tone for many.


You can get experience without an actual job.


Can I rephrase this as "you can get experience without any experience"? Certainly, there's stuff you can learn that's adjacent to doing the thing; that's what happens when juniors graduate with CS degrees. But the lack of doing the thing is what makes them juniors.


>that's what happens when juniors graduate with CS degrees

A CS degree is going to give you much less experience than building projects and businesses yourself.


How much time will someone realistically dedicate to this if they need to have a separate day job? How good will they get without mentors? How much complexity will they really need to manage without the bureaucracy of an organization?

Are senior software engineers of the future going to be waiting tables alongside actors for the first 10+ years of their adult life, working on side projects on nights and weekends, hoping to one day jump straight to a senior position in a large company?

The questions I instinctively ask myself when looking at a new problem, having worked in an enterprise environment for 20 years, are much different than what I’d be asking having just worked on personal projects. Most of the technology I’ve had access to isn’t something a solo hobbyist dev will ever touch. Most of the questions I’m asking are influenced by having that access, along with some of the personalities I’ve had to deal with.

How will people get that kind of experience?

There is also the big issue of people not knowing what to build. When a person gets a job, they no longer need to come up with their own ideas. Or they generate ideas based on the needs of the environment they’re in. In the context of my job, I have no shortage of ideas. For solo projects, I often draw a blank. The world doesn’t need a hundred more todo apps.


>How much time will someone realistically dedicate to this if they need to have a separate day job?

Typically parents subsidize the living of their children while they are still learning.

>Most of the technology I’ve had access to isn’t something a solo hobbyist dev will ever touch

That's already true today. Most developers are React developers; if hired for something else, they will have to pick it up on the job. When you have niche tech stacks, you already have to compromise on the kind of experience people have. With AI, having exact experience with the technology is not that necessary, since AI can handle most of it.


Parents can only subsidize children if they are doing well themselves, most aren’t.

That “learning” phase used to end in the 18-25 range. Getting rid of juniors and making someone get enough experience on side projects to be considered a senior would take considerably longer. Exactly how long are parents supposed to be subsidizing their children’s living expenses? How can the parents afford to retire when they still have dependents? And all of this is built on the hope that the kid will actually land that job in 10 years? That feels like a bad bet. What happens if they fail? Not a big deal when the kid is 27, but a pretty big deal at 40 when they have no other marketable skills and have been living off their parents.

The difference is there are juniors getting familiar with those enterprise products today. If they go away, they will step into it as senior people and be unprepared. It’s not just about the syntax of a different language, I’m talking more about dealing with things like Active Directory, leveraging ITSM systems effectively, reporting, metrics, how to communicate with leadership, how to deal with audits. AI might help with some of this, but not all of it. For someone without experience with it, they don’t know what they don’t know… in which case the AI won’t help at all.

I even see this when dealing with people from a small company being acquired by a larger company. They don’t know what is available to them or the systems that are in place, and they don’t even know enough to ask. Someone from another large company knows to ask about these things, because they have that experience.


>Not a big deal when the kid is 27, but a pretty big deal at 40 when they have no other marketable skills

Let's say someone started building products since 10. By the time they were 27 they would have 17 years of experience. By 40 they would have 30 years of experience. That is more than enough time for one to gain a marketable skill that people are looking for.

>they don’t know what they don’t know… in which case the AI won’t help at all.

I think you are underestimating AI's ability to suss out such unknown unknowns.


You’re expecting kids in 5th grade to pick a career and start building focused projects on par with the experience one would get in a full time position at a company?

This can’t be serious?

How does AI solve the unknown unknowns problem?

Even if someone may hear about potential problems or good ideas from AI, without experience very few of those things are incorporated into how a person operates. They have never felt the pain of missing those steps.

There are plenty of signs at the pool that say not to run, but kids still try to run… until they fall and hurt themselves. That’s how they learn to respect the sign.


>You’re expecting kids in 5th grade to pick a career and start building focused projects on par with the experience one would get in a full time position at a company?

Yes, I am. Do not underestimate how smart 5th graders are and what they can do with all of the free time they have.

>How does AI solve the unknown unknowns problem?

You can ask it what it thinks you should know. You can ask it for what pitfalls to look out for. You can ask it to roleplay to play out scenarios and get practice with them. I think such practice is enough to get them to a state of being hirable.


I’m sure there are some exceptional 5th graders doing amazing things. The number that will keep that same interest into adulthood is exceptionally low. Kids also need a chance to be kids. Expecting them to be heads down working on their career ambitions at 10 is dystopian.

It’s not about just getting hired. It’s about being effective once hired. I expect a senior to have preferences and opinions, informed by experience, on how things can and should run… while also being able to adapt to the local culture. We should be able to debate ideas in real time without having to run to the LLM to read the next reply. If that’s all someone is bringing to the table, just tell the team to use an LLM during brainstorming sessions.


From my experience, if you think AI is better than most workers, you're probably just generating a whole bunch of semi-working garbage, accepting that output as good enough, and will likely learn the hardware your software is full of bugs and incorrect logic.


hardware / hard way, auto-correct is a thing of beauty sometimes :)


I'd always imagined that AGI meant an AI was given other AIs to manage.


I don't think this is how it'll play out, and I'm generally a bit skeptical of the 'agent' paradigm per se.

There doesn't seem to be a reason why AIs should act as these distinct entities that manage each other or form teams or whatever.

It seems to me way more likely that everything will just be done internally in one monolithic model. The AIs just don't have the constraints that humans have in terms of time management, priority management, social order, all the rest of it that makes teams of individuals the only workable system.

AI simply scales with the compute resources made available, so it seems like you'd just size those resources appropriately for a problem, maybe even on demand, and have a singular AI entity (if it's even meaningful to think of it as such, even that's kind of an anthropomorphisation) just do the thing. No real need for any organisational structure beyond that.

So I'd think maybe the opposite, seems like what agents really means is a way to use fundamentally narrow/limited AI inside our existing human organisations and workflows, directed by humans. Maybe AGI is when all that goes away because it's just obviously not necessary any more.


>These models are demonstrating an incredible capacity for logical abstract reasoning of a level far greater than 99.9% of the world's population.

This is the key I think that Altman and Amodei see, but get buried in hype accusations. The frontier models absolutely blow away the majority of people on simple general tasks and reasoning. Run the last 50 decisions I've seen locally through Opus 4.6 or ChatGPT 5.2 and I might conclude I'd rather work with an AI than the human intelligence.

It's a soft threshold where I think people saw it spit out some answers during the chat-to-LLM first hype wave and missed that the majority of white collar work (I mean it all, not just the top software industry architects and senior SWEs) seems to come out better when a human is pushed further out of the loop. Humans are useful for spreading out responsibility and accountability, for now, thankfully.


LLMs are very good at logical reasoning in bounded systems. They lack the wisdom to deal with unbounded systems efficiently, because they don't have a good sense of what they don't know or good priors on the distribution of the unexpected. I expect this will be very difficult to RL in.


Why the super-high bar? What's unsatisfying is this: aren't even the 'dumbest' humans still a general intelligence, one we're arguably nearly past, depending on how you squint and measure?

It feels like an arbitrary bar, perhaps set to make sure we aren't putting AIs above humans, even though they already perform at a superhuman level on a rapidly growing number of tasks.


Doesn't seem like a very good clone. I wonder if he's hoping he's in their training data for a payout, if he can force that to be disclosed.

I think a few random samples trivially shows NotebookLM is higher pitched, although if you generalize to "deep male voice with vocal fry" you could lump them together with half the radio and podcast voices.


This view reduces countries to nothing more than oversized hotels or economic zones, as if they don’t have communities that go back many generations and who would fight or die to defend the borders.

Think this through: If the world likes your real estate, they can just come in and take it over overnight? Borders suddenly don’t matter?

Pop caps can easily be understood as visa or naturalization buffers. Hysteria doesn’t help.


Do you own that land? If you don't, then it's not your land and not for you to decide what to do with it. Where has anyone proposed taking over someone's land?


The Swiss own Switzerland, to clear this up.


Which Swiss owns Switzerland? Switzerland is 42,000 sq km. Can you show me land deeds by owners that cover the entire area?


Well, you can start with Wikipedia: https://en.wikipedia.org/wiki/Cantons_of_Switzerland#Constit...

Then, you could reach out to the Cantons and ask about individual parcels or titles.

We can measure whether they have ownership by testing if they respond to trespass, maybe by constructing a building and seeing whether they mobilize a response.

Where do you want to move the goalposts next?


You are the one who made this really odd claim, so you do the homework and show me; otherwise retract your nonsense. Who is "they"? Unless they physically hold title to the land the building is being made on, why should they get any say? You are the one who said the Swiss "own" Switzerland, so you need to show me that the sum total of all private land deeds covers the entirety of Switzerland. Not your land, not your decision. And unless you can produce an actual land deed, "trespass" is complete bullshit.


Nationalism and borders have done very little for humanity.

The world would be a better place if we defined our communities by how we welcomed people and not by who we excluded.


If that was the case, there would be no need to worry about migrating to the area formerly-known-as-Switzerland to be among its people, would there?

The people who clamor for moving there now could simply remake what they imagine liking about it in another area - careful not to erect borders or engender any kind of pride or loyalty to what they build, naturally.


I support anti-nationalism all over the world. The rest of the countries are shitty too. I have yet to see what good nationalism and religion have given us in balance against the huge negatives they have bred.

