I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.
I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.
Respectfully, it's very hard to see how anyone could look at what just happened and come to the conclusion that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the terms stated were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.
This. For that paycheck they'll be building the autonomous robots themselves, saying "they're food delivery robots, that's not a gun, that's a drink dispenser!"
Back in 1960, US early detection systems mistook the moon for a massive nuclear first strike with 99.9% certainty.
With a fully autonomous system the world would have burned.
> Me, and 99% of HN readers, will gladly pull the trigger to release a missile from a drone if we are paid even just US$1,000,000/year.
I sincerely doubt that's true. I hope it's not. $1m is a lot of money, but I find it hard to believe most people would be willing to indiscriminately kill a large number of people for it.
Never mind people in the US; there are plenty of people elsewhere happy to work with their governments, which are doubtless developing such autonomous entities.
> Me, and 99% of HN readers, will gladly pull the trigger to release a missile from a drone if we are paid even just US$1,000,000/year.
I will respond with a personal, related story. I was living in Hong Kong when "democracy fell" in the late 2010s / early 2020s. It was depressing, and I wanted to leave. (I did later.) I was trying to explain to my parents (and relatives) why most highly skilled foreign workers just didn't care. I said: "Imagine you told a bunch of people in 1984 that they could move to Moscow to open a local office for a wealthy international corporation and get paid big money, like 500K+ in today's dollars. Fat expat package is included. How many people would take it? Most."
Another point completely unrelated to my previous story: Since the advent of pretty good LLMs starting in 2023, when I watch films with warfare set in the future, it makes absolutely no sense that soldiers are still manually aiming. I'm not saying it will be like Terminator 2 right away, but surely the 19-22 year old operator will just point the weapon in the general direction of the target, then AI will handle the rest. And yet, we still see people manually aiming and shooting in these scenarios. Am I the only one who cringes when I see this? There is something uncanny valley about it, like seeing a character in a film using a flip phone post-2015! Maybe directors don't want to show us the ugly truth of the future of warfare.
I don't cringe because it's for dramatic/narrative effect. It's the same reason the crew of the Enterprise regularly beam into dangerous locations rather than sending a semi-autonomous drone. Or that despite having intelligent machines their operations are often very manual, as it is on many science fiction shows. The audience (if they think about it) realises this is not realistic and understands that the vast majority of our exploration would be done by unmanned/automated vessels. But that wouldn't be very interesting.
Other universes take it further - Warhammer 40k often features combatants fighting with melee weapons. Rule of cool and all that.
Agreed, but I think it goes far beyond warfare. The biggest "plot hole" in much scifi (IMO) is the lack of explanation for why all the depicted systems aren't autonomous. Most worldbuilding seems rather lazy to me, a haphazard mishmash of things that imply AGI and things that would only ever exist in a pre-ChatGPT world.
One of the few works that at least attempts to get this right is the Culture series where it's remarked on several different occasions that anything over some threshold of computing power has AGI built into it (but don't worry you're totally free, just ignore the hall monitor in all of your devices).
I mean, this is not actually true, and the statement justifies and vindicates those who do sell out by saying that of course anyone would. There are countless martyrs for religion, politics, and other things.
A better way to put it is that you can always find a cheap sellout; at least then the morally damned cannot claim equality of belief.
> There are countless martyrs for religion, politics, and other things.
I think those are not really comparable to OpenAI employees who leave, but that only underlines your point more:
Leaving OpenAI is not like death. In fact most of the employees will have an easy time finding a new job, given the resume of having worked at OpenAI. It is nowhere near actual martyrdom.
You mean like all of the religious leaders who are actively supporting and defending a three-times-married adulterer? You’ll have to excuse my skepticism of the morality of “the moral majority”.
Religion is and always has been about control… it strikes me as exceedingly naive to be surprised the church is backing a pedophile, have you literally ever read any history of any kind?
I'm not claiming all religious people everywhere are some moral majority, simply that people die for their beliefs and don't sell out. It happens in religion, politics, etc. Also, it's faulty logic to say that because those prominent religious people support Trump, all religious people support him. If that were the case, Trump would win every election by massive margins. Trump might win 60/40 in rural areas, but the 40% he is losing is still very religious generally speaking, because rural populations are religious. Cambridge, MA voted for Biden by something like 96%, and more than 4% of their population is religious too.
Also, your point is kind of self-defeating: Trump's true believers don't sell Trump out no matter what he does. He could hide and suppress a pedophile conspiracy and his believers will still say he is tough on crime.
Selling out is bad. I think people should passionately stand by and be consistent in what they believe, and anything less shouldn't be celebrated or excused just because it's hard.
1) I don't think you have read or understood my argument. Stating that 80% of Evangelical Christians voted for Trump is not the ding you think it is. You imply 1/5 of Evangelicals don't sell out? I think that estimate is way too high, and even if it were 99.99% of Evangelical Christians, that wouldn't excuse their selling out, hence my original statement. Saying everyone sells out so it's okay to sell out is, in my opinion, excusing abhorrent behavior and support of an extremely dangerous leader. But this leads to point 2.
2) I am assuming you're a Democrat; congrats, me too. I am also religious, and I am assuming you're not. But you don't seem to understand much about different religious groups, and I think this sharp, narrow thinking really harms the Democrats' ability to reach out to religious people, who are around 70-75% of Americans according to Pew Research.
If you want to understand: Evangelicals are basically defined by following some charismatic leader who either speaks for Christ, or has visions, or just claims to have all the answers. Believers will follow in any direction because they trust that person, but when trust is lost they usually face a crisis of faith and leave that church or the faith altogether, because they never really had strong buy-in to the ideals of Christ, just to that person. This is an extremely well documented occurrence. While not all Evangelical people fit that pattern, a large number do, and that archetype perfectly describes Trump supporters and MAGA cultists. I think that explains the extreme overlap.
But also, religion is a much more complex subject than one statistic, as being MAGA is not in the set of beliefs required to be Christian; in fact, being a good person isn't either. I think it would be worthwhile to read up on up-and-coming people like James Talarico, who understands well that fusing the two philosophies is motivating. Remember that religion, while being used to do horrible things, was also used to do immensely liberal things: universal voting is a Protestant thing, anti-slavery was largely a religious movement against white supremacy, and the civil rights movement is steeped in religion.
Treating religion as equal to Trump support is corrosive to the game of politics, and pandering is the playroom of vanity. It doesn't help change anything; it's just social point-scoring to talk that way. I don't care for vanity. I care about winning political power and using it to run the country well and help people. I care about uplifting people economically, so they have the freedom to explore whatever faith, atheism, or whatever they want, because that liberty, I believe, is inherent to the decency of the human condition.
I very much understand religion and I’m surrounded by religious folks as a 52 year old Black guy growing up with religious parents and still living in the Bible Belt and I actually went to a private Christian mostly white school through elementary school.
I understand the difference between socially liberal Christian churches like the ones that were key to the civil rights movement and today are fighting ICE.
My own wife is what I consider a very liberal Christian. She is a tither and she also is a dance fitness instructor, and almost every male fitness instructor in her organization is gay and she considers them friends, and she is as far away from MAGA as possible. (I was a fitness instructor part time for over a decade myself in my younger years and I am well aware that all male instructors aren’t gay.)
But the Black led mega churches also aren’t speaking up strongly about all of the things that clearly go up against the “RFC of Christianity” - adultery, bearing false witness, etc.
Yep, theoretically it could just be oligarchic corruption and not institutional insanity at the highest levels of the government. What a reassuring relief it would be to believe that.
I agree with your assessment, but given the past behaviour of this administration I wouldn't be shocked to discover that the real reason is "petulance".
I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.
Your ballooned unvested equity package is preventing you from seeing the difference between “our offering/deal is better” and “designated supply chain risk, plus threats that any company doing business with the government must stop using Anthropic or be similarly dropped” (which goes well past what the designation itself requires). It’s easier being honest.
The supply chain risk stuff is bogus. Anthropic is a great, trustworthy company, and no enemy of America. I genuinely root for Anthropic, because its success benefits consumers and all the charities that Anthropic employees have pledged equity toward.
Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides and I acknowledge it’s probably impossible to eliminate all possible bias within myself.
One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.
>Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.
That isn't what many of us are challenging here. We're not concerned about OpenAI's ethics because they agreed to work with the government after Anthropic was mistreated.
We're skeptical because it seems unlikely that those restrictions were such a third rail for the government that Anthropic got sanctioned for asking for them, but then the government immediately turned around and voluntarily gave those same restrictions to OpenAI. It's just tough to believe the government would concede so much ground on this deal so quickly. It's easier to believe that one company was willing to agree to a deal that the other company wasn't.
I’m skeptical because while I can totally believe that the deal presently contains restrictive language, I can totally believe that OpenAI will abandon its ethical principles to create wealth for the people who control it. Sort of like how they used to be a non-profit that was, allegedly, about creating an Open AI, and now they’re sabotaging the entire world’s supply of RAM to discourage competition to their closed, paid model.
Exactly this. Looks like we reached the same conclusion. I really am inclined to believe that OpenAI, given that it's IPO'ing (soon?), would be absolutely decimated, and employees would be leaving left and right, if they proclaimed that, yes, OpenAI is selling the DoD autonomous killing machines.
But we all know how OpenAI is desperate for money. It's the weakest link in the bubble, quite frankly, burning billions, and it failed with Sora; there isn't much of a moat economically either.
The DoD giving them billions for a deal feels like a huge carrot on a stick and a wink-wink (let's have autonomous killing machines), inviting the skepticism that you, I, or perhaps most people in this community would share.
For what it's worth, I don't appreciate Anthropic on the whole (I still remember the thread from about a week ago where everyone pushed back on Anthropic for trying to see user data through the API during the whole Chinese-models affair), but I give credit where it's due, and the enemy of my enemy is my friend. At the moment it seems that OpenAI might be friendlier than Anthropic to a DoW that wishes to create autonomous killing machines and mass surveillance systems, which is sci-fi-level dystopia.
> One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public
Until they volunteer evidence that the deal is being misdescribed or that it won't be enforced, you can honestly say that you haven't seen any. What a convenient position!
> Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.
You're conflating the Trump administration and their fascist tendencies with all US government. You want to work for fascists if you get paid well enough. You can admit that on here.
Friend, this reads like that situation where your paycheck prevents you from seeing clearly - I forget the exact quote. Sam doesn't play a straight game and neither does the administration - there are more than a few examples.
As an OpenAI employee, quitting wouldn't be a problem, as you have a much higher chance of being successful after quitting than anyone else. You could go to any VC and they would fund you.
This isn't even close to true. VCs aren't silly, and it's not the 2010-2015 days of free money any more. Having a big company on your resume is not enough to land your seed round. You need a product, traction, and real money revenue in most cases.
I mean, even if that's the case, Facebook was offering $100 million packages just a few months ago, even poaching from OpenAI, and I do think that these employees will always have an easier time getting a decent job offer from major companies in general. They may or may not be making the same money, but I do think that their morals have to be priced in as well.
Yes, I agree. I don't know the current VC market so I am not gonna comment on that, but my point was that OpenAI employees would still be considerably well off even if they switched jobs.
My point was that I don't think money (whether from VCs or from jobs at other massive AI employers) should be as important an issue to them, at least imo.
I agree with what you're saying, but given the egos involved in the current admin there's a practical interpretation:
1. Department of War broadly uses Anthropic for general purposes
2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons
3. Anthropic disagrees and it escalates
4. Anthropic goes public criticizing the whole Department of War
5. Trump sees a political reason to make an example of Anthropic and bans them
6. The entirety of the Department of War now has no AI for anything
7. Department of War makes agreement with another organization
If there was only a minority interest at the department of war to develop mass surveillance / autonomous weapons or it was seen as an unproven use case / unknown value compared to the more proven value from the rest of their organizational use of it, it would make sense that they'd be 1) in practice willing to agree to compromise on this, 2) now unable to do so with Anthropic in specific because of the political kerfuffle.
I imagine they'd rather not compromise, but if none of the AI companies are going to offer them it then there's only so much you can do as a short term strategy.
That is pretty optimistic. I hope it is true, and just a misunderstanding.
But man, this blew up pretty fast for a misunderstanding in some negotiation. Something must have been said in those meetings to make Anthropic go public.
These people are drunk on power. They have been running around dictating things to everyone so for someone to push back is pretty novel _and_ it will inspire (I hope) other people to push back.
Nah, they just respectfully said no to their face, which prompted him to make a big threat display and post another message with caps and exclamation signs on social media.
> Respectfully, it's very hard to see how anyone could look at what just happened and come to the conclusion that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that.
To be clear, I think your assessment of this situation is likely, but it could also be the case that Pete and co. like Sam more than they do Dario.
I was trying to make no particular call on the actual reason, aside from pointing at how obviously false the statements made so far are, and how clearly this is not the real story. What a knot you have to tie yourself into to seek out an explanation where OpenAI has not made an ethical compromise to stay in the game here. I can stretch and think of some ways, but they are far from the simplest explanation.
Lots of responses below give the likely real reasons, most of which are probably true in part, but in my opinion the primary reason behind all the who's-in and who's-out decisions made by the Trump administration is fealty. Skills, value brought, qualifications, etc.: none of that matters above passing frequent loyalty tests, appealing to ego, and bribes (sorry, I mean donations). Imagine thinking "hey, we'll work toward fully autonomous killbots because our adversaries will get them too, but the tech isn't strong enough to let them loose yet" or "yes, you can use our AI for your panopticon surveillance, just not on our own citizens, because that is illegal" are lefty woke stances, but here we are. Dario failed the loyalty test, as anyone rational would.
Yeah, agreed. I probably wasn't going to delete my OpenAI account (à la the link that is also being upvoted on HN); it just seemed like a hassle vs. simply ceasing to use OpenAI. But when the staff at OpenAI employ mental gymnastics, selective hearing, willful ignorance, or plain ignorance to justify compliance with manmade horrors, I think it's probably important to vote with our feet.
> while another agrees to the same terms that led to that
One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.
> one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that
Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.
Anthropic has nothing but a contract to enforce what counts as appropriate usage of their models. There are no safety rails; they disabled their standard safety systems.
OpenAI can deploy safety systems of their own making.
From the military perspective this is preferable, because they just use the tool: if it works, it works, and if it doesn't, they'll use another one. With the Anthropic model, the military needs a legal opinion before they can use the tool, or they might misuse it by accident.
This is also preferable if you think the government is untrustworthy. An untrustworthy government may not obey the contract, but it will have a hard time subverting safety systems that OpenAI builds or trains into the model.
- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.
- The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.
- OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them into govt systems, regardless of the externalities
There's a critical mass of Trump Derangement Syndrome in SV, as this site exemplifies almost daily. The amount of vitriol and hatred spewed here is not healthy, nor are those who spew it. It kills rational debate, nuance and leads to foolish choices like someone cutting off their nose to spite their face as the old saying goes.
The president of the United States sets the tone that hatred without reason or explanation is the way the system works now. Belligerence and power are the currency.
Speaking to people's better angels as if it has a chance of influencing Trump's behaviour is a fool's errand. It's not derangement. His word is worthless.
(Disclosure, I'm a former OpenAI employee and current shareholder.)
I have two qualms with this deal.
First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.
Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.
Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.
I don't understand how any sort of deal is defensible in the circumstances.
Government: "Anthropic, let us do whatever we want"
Anthropic: "We have some minimal conditions."
Government: "OpenAI, if we blast Anthropic into the sun, what sort of deal can we get?"
OpenAI: "Uh well I guess I should ask for those conditions"
Government: blasts Anthropic into the sun "Sure whatever, those conditions are okay...for now."
By taking the deal with the DoW, OpenAI accepts that they can be treated the same way the government just treated Anthropic. Does it really matter what they've agreed?
It looks like Anthropic likely wanted to be able to verify the terms on their own volition whereas OpenAI was fine with letting the government police themselves.
From the DoD perspective they don't want a situation, like, a target is being tracked, and then the screen goes black because the Anthropic committee decided this is out of bounds.
> From the DoD perspective they don't want a situation, like, a target is being tracked, and then the screen goes black because the Anthropic committee decided this is out of bounds.
Anthropic didn't want a kill switch, they wanted contractual guarantees (the kind you can go to courts for). This administration just doesn't want accountability, that's all.
It was OpenAI that said they prefer to rely on guardrails (the kind that stop the AI from working if you violate them) and less on contracts. The same OpenAI that was awarded the contract now.
I don’t know why more people don’t see this. It’s a matter of providing strong guarantees of the reliability of the product. There is already mass surveillance. Lives are already taken without proper oversight.
I think it's a bit more nuanced than that. The government (however good or bad, just bear with me) already has oversight mechanisms, and already has laws in place to prevent mass surveillance and policy about autonomous killing.
So the government's stance is "We already have laws and procedures in place; we don't want, and can't have, a CEO also being part of those checks."
I don't think this outcome would have been any different under a normal blue government either. Definitely with less mud slinging though.
If you think a blue government would even consider threatening to falsely accuse a company of being a supply-chain threat in order to gain leverage in a contract negotiation, you're insane. There's nothing remotely normal about this; it's not something you see in any Western democracy.
Government's free to not like the terms and go with another provider. That's whatever.
Government's not free to say, "We'll blow up your business with a false accusation if you don't give us the terms we want (and then use defence production act to commandeer the product anyway)". How much more blatantly authoritarian does it get than that?
This is wise analysis. To summarize: appeasement of the Trump administration is a losing strategy. You won’t get what you want and you’ll get dragged down in the process.
Jeremy Lewin's tweet referenced "all lawful use" as the term that seems to be the particular sticking point.
While I don't live in the US, I could imagine the US government arguing that third party doctrine[0] means that aggregation and bulk-analysis of say; phone record metadata is "lawful use" in that it isn't /technically/ unlawful, although it would be unethical.
Another avenue might also be purchasing data from ad brokers for mass-analysis with LLMs which was written about in Byron Tau's Means of Control[1]
The term lawful use is a joke to the current administration when they go after senators for sedition when reminding government employees to not carry out unlawful orders. It’s all so twisted.
To be clear, the sticking point is actually that the DoD signed a deal with Anthropic a few months ago that had an Acceptable Use Policy which, like all policies, is narrower than the absolute outer bounds of statutory limitations.
DoD is now trying to strongarm Anthropic into changing the deal that they already signed!
I’d like to see smart anonymous ways for people to cryptographically prove their claims. Who wants to help find or build such an attestation system?
I’m not accusing the above commenter of deception; I’m merely saying reasonable people are skeptical. There are classic game theory approaches to address cooperation failure modes. We have to use them. Apologies if this seems cryptic; I’m trying to be brief. It if doesn’t make sense just ask.
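To make that concrete, here is a minimal sketch, my own illustration rather than anything proposed in this thread, of the signing primitive such an attestation system would sit on top of, using Ed25519 from Python's cryptography package. The claim text and the idea of publishing the public key through a verified corporate channel are assumptions for illustration only; the genuinely hard parts (anonymity, and binding a key to an organization without revealing the individual) would need something like blind signatures or zero-knowledge proofs layered on top.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical claim an anonymous commenter wants to attest to.
    claim = b"I am a current employee of ExampleCorp and I wrote comment abc123."

    # The attester generates a keypair. For the attestation to mean anything,
    # the public key must be bound to the organization out of band, e.g.
    # published via a verified corporate channel (an assumption of this sketch).
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Sign the claim; the signature can be posted alongside the comment.
    signature = private_key.sign(claim)

    # Anyone holding the public key can verify the key's holder signed the claim.
    try:
        public_key.verify(signature, claim)
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")

This only proves "whoever controls this key said X"; everything interesting in the game-theory sense lives in how the key gets credibly, anonymously tied to an institution.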
Did Sam Altman say that he wouldn't allow ChatGPT to be used for fully autonomous weapons? (Not quite the same as "human responsibility for use of force".)
I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.
But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or any point in the future.
> you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons"
To be fair, Anthropic didn't say that either. Merely that autonomous weapons without a HITL aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)
They specifically said they never agreed to let the DoD use anthropic for fully autonomous weapons. They said "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance [...] Fully autonomous weapons"
Their rationale was pragmatic. But they specifically said that they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.
Not sure how that's relevant. I never said Dario was taking an ethical stand. I said they did not agree for Claude to be used for fully autonomous weapons. Now, compare that to OpenAI, whose agreement does allow fully autonomous weapons.
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons,
In that case, what on earth just happened?
The government was so intent on amending the Anthropic deal to allow 'all lawful use', at the government's sole discretion, that it is now pretty much trying to destroy Anthropic in retaliation for refusing this. Now, almost immediately, the government has entered into a deal with OpenAI that apparently disallows the two use cases that were the main sticking points for Anthropic.
Do you not see something very, very wrong with this picture?
At the very least, OpenAI is clearly signaling to the government that it can steamroll OpenAI on these issues whenever it wants to. Or do you believe OpenAI will stand firm, even having seen what happened to Anthropic (and immediately moved in to profit from it)?
> and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples)
If OpenAI leadership sincerely wanted this, they just squandered the best chance they could ever have had to make it happen! Actual solidarity with Anthropic could have had a huge impact.
Am I wrong to think that such an agreement is basically meaningless? OpenAI gets to say there are limits, the government gets to do whatever it wants, and OpenAI will be very happy not to know about it.
Bingo. You don’t have to read much into this if you remember how the DoD uses the word trust. In their world, a "trusted" system is one that has the power to break your security if it goes wrong. So when they say "unrestricted use," the likely meaning isn’t just fewer guardrails; it’s that the vendor doesn’t get to monitor or audit how the system is being used. In other words, the government isn’t handing a private company visibility into sensitive operations.
"AI shouldn't be used for mass surveillance or autonomous weapons". The statement from OpenAI virtually guarantees that the intention is to use it for mass surveillance and autonomous weapons. If this wasn't the intention them the qualifier "domestic" wouldn't be used, and they would be talking about "human in the loop" control of autonomous weapons, not "human responsibility" which just means there's someone willing to stand up and say, "yep I take responsibility for the autonomous weapon systems actions", which lets be honest is the thinnest of thin safety guarantees.
My understanding is that OpenAI's deal, and the deal others are signing, implicitly prevents the use of LLMs for mass domestic surveillance and fully autonomous weapons, because today one can argue those aren't legal, and the deal is a blanket allowing all lawful use.
Today it can't be used for mass surveillance, but the executive branch has all the authority it needs to later deem that lawful if it wishes to, the Patriot Act and others see to that.
Anthropic was making the limits contractually explicit, meaning the executive branch could move the line of lawfulness and still not be able to use Anthropic models for mass surveillance. That is where they got into a fight, and that is why OpenAI and others can claim today that they still got the same agreement Anthropic wanted.
Assuming this is real: why do you think Anthropic was put on what is essentially an "enemy of the state" list and OpenAI wasn't?
The two things Anthropic refused to allow are mass surveillance and autonomous weapons, so if OpenAI refused the same things, why do _you_ think it did not get placed on the exact same list?
It's fine to say "I'm not going to resign. I didn't even sign that letter," but thinking that OpenAI can get away with not developing autonomous weapons or mass surveillance is naive at the very best.
Life is more than a paycheck. We should raise the bar a little IMO. Turning down money for good reasons is not something extreme we should only expect from saints.
Of course. Doesn't change the reality that this is why someone would accept a justification that a neutral would easily see as plainly dishonest. Anyway, this is why we need unions
Who still does business with OpenAI, and why? They are usually fifth or sixth in the benchmarks, bracketed below and above by models that cost less. This has been the case for quite some time. GLM is out for US government purposes, I'd imagine, but if Google agrees to the same terms I don't see why the US government would use OpenAI anyway. If Google disagrees, that would be rather confusing given the other invasions of privacy they have facilitated, but if they do, then using OpenAI would make sense, as all that would be left is Grok...
Imo the more ethical thing is obstructionism. Twitter's takeover showed it's pretty easy to find True Believer sycophants to hire. Better to play the part while secretly finding ways to sabotage.
Why do you suppose OpenAI's deal led to a contract, while Anthropic's deal (ostensibly containing identical terms) gets it not only booted but declared a supply chain risk?
"domestic" "mass" surveillance, two words that can be stretched so thin they basically invalidate the whole term. Mass surveillance on other countries? Guess that's fine. Surveillance on just a couple of cities that happen to be resisting the regime? Well, it's not _mass_ surveillance, just a couple of cities!
So, can you please draw the line when you will quit?
- If OpenAI deal allows domestic mass surveillance
- If OpenAI allows the development of autonomous weapons
- OpenAI no longer asks for the same terms for other AI companies
Correct?
If so, then if I take your words at face value:
- By your reading non-domestic mass surveillance is fine
- The development of AI based weapons is fine as long as there is one human element in there, even if it could be disabled and then the weapon would work without humans involved
- The day that OpenAI asks for the same terms for other AI companies and if those terms are not granted then that's also fine, because after all, they did ask.
I have become extremely skeptical when seeing people whose livelihood depends on a particular legal entity come out with precise wording around what does and does not constitute their red line but I find it fascinating nonetheless so if you could humor me and clarify I'd be most obliged.
Thank you for responding. Everyone wants to think they will “do the right thing” when their own personal Rubicon is challenged. In practice, so many factors are at play, not least of which are the other people you may be responsible for. The calculus of balancing those differing imperatives is only straightforward for those that have never faced this squarely. I’ve been marched out of jobs twice for standing up for what I believed to be right at the time. Am still literally blacklisted (much to the surprise of various recruiters) at a major bank here 8 years after the fact. I can’t imagine that the threat of being blacklisted from a whole raft of companies contracting with a known vindictive regime would make the decision easier.
The founders are all on a first-name basis. I’m surprised no one has noted that Anthropic and OpenAI are winning together by giving the world two different choices, just like the US does in its political landscape. In this arrangement, OpenAI wins the local market of its government and aligned entities (while keeping, by cost dynamics, the free consumer as its ideal customer profile, which is very broad and similar to Google’s search audience, on which most of Google’s revenue still depends), while Anthropic gets the global market and the prosumer market, where people can afford choice by paying for it.
You should quit because the only reasonable thing for your leadership to have done is to refuse to sign any agreement with DoW whatsoever while it's attempting to strongarm Anthropic in this fashion.
It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.
If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.
It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?
What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?
#1 weekend HN is not a sane place. #2 emotions are high. #3 for what it’s worth @tedsanders I understand where you’re coming from and I believe you’re making the right choice by staying or at least waiting to make a decision. Don’t let #1 and #2 hurt you emotionally or force you to make a rash decision you later regret.
Edit: I don’t work at OpenAI or in any AI business and my neck is on the chopping block if AI succeeds… like a lot of us. Don’t vilify this guy trying to do what’s right for him given the information he has.
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons
And you believe the US government, let alone the current one will respect that? Why? Is it naïveté or do you support the current regime?
> If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit.
So your logic is your company is selling harmful technology to a bunch of known liars who are threatening to invade democratic countries, but because they haven’t lied yet in this case (for lack of opportunity), you’ll wait until the harm is done and then maybe quit?
I’ll go out on a limb and say you won’t. You seem to be trying really hard to justify to yourself what’s happening so you can sleep at night.
Know that when things go wrong (not if, when), the blood will be on your hands too.
His point reeks of cope. But making a large amount of money would make anyone dumb, deaf, and blind. Also, I give a little leeway to people who are employees without executive decision-making power, as they do stand to have a lot to lose in situations like this.
It's probably how they are coping with the cognitive dissonance. I certainly feel for them, I don't know that I could easily walk away from a big pay package either without backup options when I have family to support and I'm not near retirement.
Ted, what do you think of your CEO’s statement: “the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
The evidence seems to overwhelmingly point in the opposite direction.
This seems like the kind of foolishness it takes a lot of money to believe. Anthropic blew up their contract with the Pentagon over concerns on lethal autonomous weapons and mass domestic surveillance. OpenAI rushes in to do what Anthropic wouldn't.
If you think that means your company isn't going to be involved in lethal autonomous weapons and mass domestic surveillance... I don't really know what to tell you. I doubt you really believe that. Obviously you will be involved in that and you are effectively working on those projects now.
Aside from that unlikely read, this deal was still used as a pressure point on Anthropic, there's absolutely no way OpenAI was not used as a stick to hit with during negotiations.
Anthropic is deemed a betrayer and a supply chain risk for actually enforcing their principles.
OpenAI agrees to be put in the same position as Anthropic.
It seems like you must actually somehow believe that history will repeat itself, Hegseth will deem OpenAI a supply chain risk too, then move to Grok or something?
There's surely no way that's actually what you believe...
Giving you the benefit of the doubt and assuming [1] does not play a role in your thinking:
I don't mean this in any way rude, and I apologize if it comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.
What people don't understand is that domestic surveillance by the government doesn't happen and isn't needed. They know it's illegal and unpopular, and for over two decades they have had a loophole. Since the Bush administration, it's been arranged for private contractors to do the domestic surveillance on the government's behalf. Entire industries have been built around creating "business records" for no other purpose than to sell them to the government to support domestic surveillance. This is entirely legal, and it's why the DoW has been able to get away with saying things like "domestic surveillance is illegal, we don't do that" for over two decades while simultaneously throwing a shit fit about needing "all legal uses" when their access to domestic surveillance is threatened.
There's a big difference between "the government won't use our tools for domestic surveillance" (DoW/DoD/OpenAI/etc) and "we won't allow anyone to use our tools to support domestic surveillance by the government" (Anthropic)
Hegseth and the current Trump admin are completely incompetent in execution of just about everything but competent administrations (of both parties) have been playing this game for a long time and it's already a lost cause.
Listen, if the Government using it for legit and safe use cases wasn’t an issue, then they wouldn’t have complained about Anthropic’s language. Sam is just looking the other way and pretending for you employees.
Or Sam bribed the government to do this, which is also entirely possible.
I don't know you, so maybe you're actually for real and speaking on good faith here but honestly this and your other responses in this thread read exactly like "...salary depends on not understanding"
Assuming this isn't a troll and you really think this, you should at least have the cojones to admit you're taking the blood money instead of trying to pretzel the truth so hard that you just look like a moron instead.
For the record I don’t care if you quit or not. Cash rules after all… However, you are incredibly naive if you think the current admin will follow through on those terms.
Looks to me like you have decided that you are being paid to shut up and take the word of the most thoroughly dishonest and corrupt US government we've yet seen. Why on God's slowly-browning green earth do you trust that Altman got the deal Anthropic was trying for?
I have a bridge to Brooklyn to sell you if you believe this.
Standing up for what's right often is not easy and involves hard choices and consequences. Your leader has shown you and the world that he is not to be trusted.
I can't tell you what to do but I hope you make the right decision.
Lol, naive as hell. Why would your company's agreement be the same as the one another company was just punished for refusing? My question doesn't even make sense; this is a contradiction, therefore your statement must be false. There, it's proven.
Can you at least stop lying to yourself? Given what they did with Anthropic for not supporting domestic mass surveillance and autonomous weapons...
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons
Your understanding is entirely wrong. At least stop lying to yourself and admit that you are entirely fine with working on evil things if you are paid enough.
I know the money is good, but if I were you (or any OpenAI employee), I'd move over to Google or Anthropic posthaste.
Is it really worth the long-term risk being associated with Sam Altman when the other firms would willingly take you and probably give you a pay bump to boot?
It doesn't make sense to me why anyone would want to associate themselves with Altman. He is universally distrusted. No one believes anything he says. It's insane to work with a person whom PG, Ilya, Murati, and Musk have all designated a liar and just a general creep.
Defending him or the firm's actions instantly makes you look terrible, like you'll gladly accept the "elites vs. UBI recipients" future his vision propagates.
You work for a company that’s part of the Trump, Ellison, Kushner orbit of corruption.
Y’all are developing amazing technology. But accept reality and drop whatever sense of moral righteousness you’re carrying here. Not because some asshole on the internet says so, but for your own mental health.
There is a recent post about how one of the top OpenAI execs gave $25 million to a Trump PAC before publicly supporting Anthropic / signing this deal.
One company got characterized as a supply chain risk; so much for OpenAI getting the same terms.
And even that being said, I could be wrong, but if I remember correctly, OpenAI and every other company had basically accepted all uses, and it was only Anthropic that said no to these two demands.
And I think this whole scenario became public because Anthropic refused; I do think the deal could've been done sneakily if Anthropic had wanted.
So now OpenAI taking the deal doesn't change the fact that, to me, it looks like they can always walk it back, and all the optics are horrendous for OpenAI, so I am curious what you think.
The thing I am thinking, on the other hand, is: why would OpenAI come out and say, "hey guys, yeah, we are going to feed autonomous killing machines"? Of course they are going to try to keep it a secret right before their IPO. You mention walking out of OpenAI, but with the current optics it seems that you and other OpenAI employees are more willing to keep working because the evidence isn't out yet; to me, as others have pointed out, it looks like slowly boiling the water.
OpenAI gets to have its cake and eat it too, but I don't think there's a free lunch. I simply don't understand why the DoD would make such a big mess about Anthropic's terms being outrageous and then sign the same deal with the same terms with OpenAI, unless there's a catch. Only time will tell how wrong or right I am.
If I may ask, how transparent is OpenAI from an employee's perspective? Just out of curiosity: would you as an employee be informed if OpenAI's top leadership (Sam?) decided that the deal gets changed and the DoD gets to have autonomous killing machines? Would you as an employee, or we as the general public, learn about it if the deal is changed through secret back doors? Snowden showed that a lot of secret court deals were not made available to the public until he blew the whistle, but not everything gets whistleblown, so I am genuinely curious to hear your thoughts.
Your response is a perfect encapsulation of "It is difficult to get a man to understand something when his salary depends upon his not understanding it."
I think it's wrong to ask someone to resign, but acting as if there is no issue here is debating in bad faith.
The comment perfectly exemplifies the kind of person that would work at OpenAI. Government AI drones could be executing citizens in the streets but they’d still find some sort of cope why it’s not a problem. They’ll keep moving the goalposts as long as the money keeps coming.
OpenAI employees put knives to their own necks to demand Altman come back and be their boss [1], not too long ago, right? Altman wiggles his tongue and makes them a solid paycheck. "We will not be divided," unless the water boils slowly enough. Wait a few months; he will renegotiate the terms with the DoD, just like his move to turn OpenAI into a for-profit.
It's comforting to know that some of the brightest minds of our generation are going to work at OpenAI, then quitting a few months later horrified, only to post a short mysterious tweet warning everyone of the dangers ahead. So much for alignment and serving humanity.
And they will continue to work for Google / Meta et al to use novel AI techniques to sell us more and better ads, only to quit a few years later to do more soul searching where everything went wrong /s
They've been deleted. For obvious reasons. You want to take a stand, but you don't want to stop working for the people who do the things you don't want done. It's all so very American. "I'll put my name on, but if it doesn't work, remove my name so I don't get into trouble, ok?" Home of the brave.
> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.
But they did.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that. In both cases, the two parties can claim to agree on the principles, but when push comes to shove, who decides on whether the principles are violated differs.
> The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that.
It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those. And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.
Who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO, who would usurp sovereign control of our most sensitive systems.
Amodei is the type of person who thinks he can tell the US government what they can and can’t do.
And the US government should have precisely none of that, regardless of whether they’re red or blue.
> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.
I don't think that's the case. Amodei is worried that AI is extraordinarily capable, and our current system of checks and balances is not adequate yet to set the proper constraints so the law is correctly enforced. Here's an excerpt from his statement [1]:
> Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
Let's do this thought exercise: how long would it take you, using Claude Code, to write some code to crawl the internet and find all the postings of the HN user nandomrumber under all their names on various social media, and create a profile with the top 10 ways that user can be legally harassed? Of course, Claude would refuse to do this, because of its guardrails, but what if Claude didn't refuse?
And that’s where the authoritarian in you is shining through.
You see, Obama droned more combatants than anyone before or after him, but he always left a legal paper trail and went by the book (except perhaps in some cases; search for Anwar al-Awlaki).
One can argue whether the rules and laws (secret courts, proceedings, asymmetries in court processes that severely compress civil liberties… to the point they might violate other constitutional rights) are legitimate, but he operated within the limits of the law.
You folks just blurt “me ne frego” like a random Mussolini and think you’re being patriotic.
> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.
> And the US government should have precisely none of that, regardless of whether they’re red or blue.
This is a pretty hot take. "You can't break the law and kill people or do mass surveillance with our technology." fuck that, the government should break whatever laws and kill whoever they please
I hope you A: aren't a U.S. citizen, and B: don't vote.
If I'm selling widgets to the government and come to find out they are using those widgets unconstitutionally and to violate my neighbors' rights, you can be damn sure I'm going to stop selling the government my widgets. Amodei said that Anthropic was willing to step away if they and the government couldn't come to terms; instead of acting like adults and letting them, the government decided to double down on being the dumbest people in the room and threw a massive toddler-style fit about the whole thing.
> It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those.
No. Altman said human responsibility. Anthropic said human in the loop.
> And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.
I don’t understand your first comment. At that point, Altman’s tweet didn’t exist yet, and is immaterial to the reading of Anthropic’s and Hegseth’s statements.
To your second comment, it was clear enough to me to be the most plausible reading of the situation by far.
We state what we think the situation is all the time, without explicitly writing “I think the situation is…”.
Seems Anthropic did not understand the questions they were asked. From the WaPo:
>A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?
>It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.
>An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.
I have a hunch that Anthropic interpreted this question to be on the dimension of authority, when the Pentagon was very likely asking about capability, and they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism that people have about these tools and this administration but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.
"It’s the kind of situation where technological might and speed could be critical to detection and counterstrike"
Missile detection and the decision to make a (nuclear) counterstrike are two different things to me, but apparently the Department of War wants both, so it seems it's not "just" about missile detection.
Is there any reason at all to believe the account of the unnamed "defence official"? Whatever your position on this administration, you know that it lies like the rest of us breathe. With a denial from the other side and a lack of any actual evidence, why should I give it non-negligible credence?
It is bizarre. I like how "past performance predicts future performance" is supposed to apply to founders and companies but is completely disregarded for a two-term president and admin, as if we have no idea how they will operate in the future.
Anthropic, with its current war chest, is supposedly employing lawyers who are misunderstanding the Department of War? This is considered the likelier possibility, am I understanding this correctly?
This is not what I said, and not what the WaPo quoted. We're talking about the CEO, who is, shall we say, unfamiliar with war-making, getting asked a hypothetical about how the product he sells would perform in a first-strike scenario, and he reportedly gives what is an entirely legalese answer. Yes, I consider this a likely possibility. It sounds exactly like how someone would respond if they've been swimming in legal memos for months.
> It sounds exactly like how someone would respond if they've been swimming in legal memos for months.
I think you're being highly speculative. The part you quoted from the WaPo doesn't even state that the defense official was complaining about any "legalese" response; that seems like a projection on your part. The only info you gave in your comment about what Dario said is a defense official's paraphrasing. It seems a simple case of Dario refusing to give a blank check in all scenarios, whereas the defense official, for maximum impact, chose to portray "not having a blank check" as "having to call Anthropic" in every case where "help" is given by an LLM. The appearance of "misunderstanding" you're seeing in the media is not about the parties' misunderstanding of what the other side wants; it's simply fallout from each side fighting to control the narrative.
That's a copout and you know it. You're focusing on the 'unnamed' part; I'm focusing on the 'representative of an administration that lies constantly and brazenly' part.
Noted Rationalist responds to a question about a first strike scenario with "I need to think about it" instead of "of course we'd launch the missiles, are you kidding?" and everyone here seems to think this is somehow unbelievable.
You're still dancing around the point. Person A said X; person B said not X; we have no concrete evidence either way. Person A is an anonymous representative of a group that has no norms against dishonesty, an obvious motive to falsely claim X, and a track record of telling frequent, shameless lies. X doesn't need to be 'unbelievable' for me to ask, again, what positive reason do you have for believing it?
Why the fuck would you use an LLM to determine whether a nuclear missile was hurtling towards you? The question makes no sense, and so you get a nonsensical answer.
Seems not unlikely that Anthropic was manipulated into this position for purposes of invalidating their contract.
> If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?
Are you serious? This is the kind of thing you'd ask a clarifying question on and get information back immediately. Further, the huge overreaction from Hegseth shows this is a fundamental disagreement.
The flip side of "Hegseth is an unqualified drunk", a position which I've always held and still maintain, is that he very well might crash out over nothing instead of asking clarifying questions or suggesting obvious compromises. This is the same guy who recalled the entire general staff to yell at them about the warrior mindset. Not an excuse for any of this, but I do think the precise nature of the badness matters.
> could the military use Anthropic’s Claude AI system to help shoot it down?
What a joke. I suggest folks read up on the very poor performance of US ICBM interceptor systems. They're barely a coin flip, in ideal conditions. How is Claude going to help with that? Push the launch interceptor button faster? Maybe Claude can help design a better system, but it's not turning our existing poor systems into super capable systems by simply adding AI.
I'm sure it's a matter of interpretation. Anthropic thinks the DoW's demands will lead to mass surveillance and auto-kill bots. The DoW probably disagrees with that interpretation, and all OpenAI needs to do is agree with the DoW.
My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.
Why do you choose to call it the "DoW"? Its official name is the Department of Defense; it was titled that way by Congress, and only Congress can change it. What is your motivation in using a term that the current administration has started to use? Do you also say "Gulf of America" when referring to the body of water that defines the southern edge of the USA?
Don't you think it is more to the point to call it what it is, given what the people running it (with, I'll bet everything I have, absolute immunity) are doing and intend to do with it?
It is "honest" in the historical sense, certainly.
But the executive-order-driven name change is just another bit of illegal/extra-legal/paralegal behavior by the administration that, every time we just nod along, eats away at the constitutional structure of our government. So don't go along with it.
Personally, as someone coming from a region that has suffered many times over from the actions of this so-called "Ministry of Defense," I feel like "Department of War" is a more accurate and honest term.
As I've noted multiple times here on HN, I don't disagree with this.
But the question is not about whether it is a more accurate and honest term. It is about people complying in advance with the illegal/extra-legal renaming of a federal agency by a president who does not have the right or authority to do so.
If we were talking about Congress voting to rename the DoD as the DoW, I'd have nothing to say on the matter that differed from your observation.
It's the term used by Sam Altman in the announcement. Maybe aim your anger there, to someone knowingly helping them in their attempt to turn the department into one of aggression.
No, the Department of War is the former name of the Department of the Army and nothing else. The DoD is a newer creation that includes the Army, the historic Department of the Navy, and the other new, post-WW2 services.
The president has no authority to do this. Federal departments and agencies are named by Congress, and even the Republicans in Congress have shown no interest in formalizing this.
Sure, no such law that I know of. But there's also no law that suggests that anybody else needs to refer to the Department of Defense using terms that the president and his minions just made up out of thin air. I'm also arguing that going along with them, by itself, is harmful to a democratic government.
Exactly this! Just like the Gulf of Mexico is still called the Gulf of Mexico, if we just ignore his ramblings and keep calling it the Department of Defense, we undermine his whole point. If we fall for all their crap and just accept it, then we lose in the end. Any resistance to a fascist government is good resistance. Anything that makes their lives a little shittier is good. Better that they go around having tantrums about how they renamed it while no one pays attention.
Anthropic has safeguards baked into the model; this is the only way to make sure it's harder for the DoD to misuse it. A pinky swear from the DoD means nothing.
If your starting position is already that Sam Altman lies about everything that doesn't fit your preconceived positions, that doesn't seem like a very meaningful position to update.
I think it is like a loyalty test for an authority above the law (executive immunity) in order to do business: "If we tell you to, you may have to do something whether you thought it right or wrong." It is like an induction into a faction, and into the way decisions could be made. It doesn't necessarily mean anything about what happens in practice in the future, just that the cybernetic override is tacitly there. If the authority thinks they can get away with something, they will provide protection from the consequences too. Some people are more equal than others when it comes to justice for all, etc. There are probably alternative styles of group decision-making…
> I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this
Well, some may voluntarily leave, some will perhaps be actively poached by Anthropic, and some, I suppose, will stay in their jobs because leaving isn't an easy decision to make.
> some I suppose will stay in their jobs because leaving isn't an easy decision to make.
Anyone who chooses to stay shouldn’t have signed the letter. What’s the point of doing it if you’re not going to follow through? If you signed the letter and don’t leave after the demands aren’t met, you’re a liar and a coward and are actively harming every signatory of every future letter.
I think the problem might actually be with enforcing the red lines. The events of the last few weeks and this new deal only make sense if Anthropic was trying to find out how Palantir and the Pentagon had circumvented its restrictions so it could enforce those restrictions, like a company actually concerned about the misuse of its product. OpenAI most likely came in with assurances that it wouldn't attempt to enforce its restrictions.
Yes, what is implied in this episode is that all big companies that do AI development or provide computing for AI are now signing off on these very shady uses of their technologies.
>Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.
Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing all its political capital.
While that thought crossed my mind, someone in a sub-thread of the parent comment made a point: OpenAI made a statement along the lines of "We insisted this not be used in those ways, and the DoD totally says they won't." Which sounds to me like they ceded any hard terms and conditions and are letting the DoD use it by "any lawful means," which is what Anthropic didn't stand for.
Another plausible explanation, familiar to a lot of people in other countries, is banal corruption. Kick out one competitor on bogus allegations, then the next day invite another one in… what else could it be?
It was just a ruse to figure out who to fire. Either resign on your own terms or get fired. Companies and government only have one loyalty: to themselves.
For all I know, Sam Altman orchestrated this via well-timed donations and whatever contacts he has in government; Trump specifically seems to have taken to the man.
So: using Anthropic's own words to cover a power play, or pulling relationships to see if they could get Anthropic to balk at it.
Money buddy, they never cared. They didn’t care when they went back on their safety and guidance boards, they didn’t care when they tried to push Altman out, and these employees won’t care when the first AI nuke launches. Money, money, money so they don’t think about it later. It’s the exact same reason Facebook employees have given us the other side of surveillance hell.
I would not discount how much of a factor irrational human emotions play in negotiations.
Dario is arrogant and pompous, so he probably rubbed Hegseth up the wrong way. Sam is much more charming and amenable, so he was more able to get his way despite similar terms.
It's about network effects. The biggest issue is that ChatGPT is a household name like Google at this point. Everyone and their grandma knows it or is learning about it, while Claude is well known mainly in tech circles. Getting tech people to switch is relatively easy (ignoring enterprise contracts), but getting everyone else to switch is going to be very slow.
Honestly, the best thing that could happen is that someone comes up with a new UI (think claw...like) that everyone starts using instead. A very cute, well-integrated system that just works for everyone, has a free tier, and has something that the others don't have.
>> All of us can act too. Stop using the OpenAI models. Stop using the app. Design in other models no matter what. Screw these guys.
> Do you expect that to work?
Many years ago, Tim O'Reilly (of book publishing fame) knew Apple would one day become really big, even though they were a small, niche player in the "PC" space at the time (2000s). How did he know? By seeing what the "alpha geeks" were doing: the folks who not only used tech, but were working at companies that were inventing the future. They were the ones whose friends and families asked them for advice. And the alpha geeks (at the time) were switching to Mac OS X and telling their friends and family about it.
There's a good chance that if you're on HN, you're the person in your non-techie social group that many others ask for advice. You can potentially sway many people by your example and your advice.
Nah. It's possible that the agreement still supports the required terms.
There is more to this story behind the scenes. The government wanted to show power and control over our companies and industries. They didn't need those terms for any specific utility; they wanted to fight "woke" businesses that stood up to them.
Supposedly OpenAI had the same terms as Anthropic (according to SamA). Maybe they offered it cheaper and that’s why they agreed. Maybe it’s all the lobbying money from OpenAI that let the government look the other way. Maybe it’s all the PR announcements SamA and Trump do together.
"we put them into our agreement." is strange framing is Altman's tweet. Makes me think the agreement does mention the principles, but doesn't state them as binding rules DoD must follow.
I ascribe literally zero truth value to what Sam says. He will say whatever he needs to get ahead. It is honestly irritating to me that you and many others here seem to implicitly assume his messages are correlated with truth, doing his social engineering work for him, as if his word should adjust your priors even slightly.
I don't necessarily think he's lying, but there's so much obvious incentive for him to lie here (if only because his employees can save face).
He doesn't even need to be lying, the comment is vague and contains enough loopholes that it could be true yet meaningless. I explained some that I noticed here: https://news.ycombinator.com/item?id=47190163
And fired from YC for lying. And lied to investors about how many Loopt employees he had. And lied about having 100x the actual number of users when he sold it. And lied to employees about the Microsoft deal. And lied to his safety team.
It's this simple: Trump is a criminal. Larry Ellison is his pal. Sam Altman has a huge deal for cloud services from Oracle. Trump is using the DoD budget to backstop Ellison's business.
This is pretty much the right take on it, although it's much more than that. It's very clear at this point, especially the first conclusion, but people insist on looking the other way.
Attempting to kneecap the breakout front runner of the major American AI companies to ensure the shittier, politically compliant one wins in the short term? Gee I wonder.
For better or worse, outright nationalization of military related companies is common on a global scale. I plan to do my best to ensure this is a domestic catastrophe, and I hope we'll succeed, but I don't expect other countries to care much about varying levels of regime alignment between two billionaire American defense contractors.
Maybe Sam Altman said nicer things about Donald Trump. Maybe he promised that he would not revoke their API keys when Hegseth directs the military to seize ballots. Maybe he's jockeying for position to take over the government when AGI hits.
Ultimately, I don't know how much the specific reasons matter. Pete Hegseth must be removed from office, OpenAI must be destroyed for their betrayal of the US public, that's all there is to it.
> Trump’s son in law (Kushner) has most of his net worth wrapped up in OpenAI.
If true (too lazy to check but I honestly take your word for it), this should probably be bigger news. Not that the outright corruption when it comes to the highest position in the US Government constitutes news anymore, but because it puts the Government’s fight against Anthropic (and supposedly other potential OpenAI competitors) in a new light.
This reminds me of Ken Thompson’s speech on trusting trust. The recursive/meta nature of it all has helped me explain to those unfamiliar that this is such a waste of time. Education is where it’s at, but I’m preaching to the choir here on HN.
Trying to restrict the non-printed ICs you'd connect to your 3D printed parts would be even dumber. There's a zillion things that can slam out bits and control a stepper motor.
You can build a 3D printer out of general-purpose electronic bits; anything they tried to ban would send ripples into countless other industries that are completely unrelated to each other or to 3D printing.
By "general-purpose" I mean that there are no components that are 3D-printer-specific: motor controllers and microcontrollers and voltage regulators and all the various jellybean parts. And even if there were any, they could easily be replaced with general-purpose components.
>The anxiety creeps in: What if they have removal? Should I really commit this early?
>However, anxiety kicks in: What if they have instant-speed removal or a combat trick?
It's also interesting that it doesn't seem to be able to understand why things are happening. It attacks with Gran-Gran (attacking taps the creature), which says, "Whenever Gran-Gran becomes tapped, draw a card, then discard a card." Its next thought is:
>Interesting — there's an "Ability" on the stack asking me to select a card to discard. This must be from one of the opponent's cards. Looking at their graveyard, they played Spider-Sense and Abandon Attachments. The Ability might be from something else or a triggered ability.
The anxiety is coming from the "worrier" personality. Players are combination of a model version + a small additional "personality" prompt - in this case (https://mage-bench.com/games/game_20260217_075450_g8/), "Worrier". That's why the player name is "Haiku Worrier". The personality is _supposed_ to just impact what it says in chat (not its internal reasoning), but I haven't been able to make small models consistently understand that distinction so far.
The Gran-Gran thing looks more like a bug in my harness code than a fundamental shortcoming of the LLM. Abilities-on-the-stack are at the top of my "things where the harness seems pretty janky and I need to investigate" list. Opus would probably be able to figure it out, though.
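For the curious, here is a minimal, hypothetical sketch of the persona/reasoning split described above. The names, fields, and prompt text are my own illustrative assumptions, not mage-bench's actual code:

    # Hypothetical sketch of scoping a "personality" prompt to the chat
    # channel only; illustrative, not the actual mage-bench harness.

    BASE_RULES = (
        "You are playing Magic: The Gathering. Reason carefully about the "
        "game state and return only legal actions."
    )

    PERSONALITY = (
        "Personality: Worrier. IMPORTANT: this personality applies ONLY to "
        "the 'chat' field of your output. Your 'reasoning' and 'action' "
        "fields must stay neutral and strictly strategic."
    )

    def build_system_prompt(personality=None):
        # Keeping the persona instruction physically separate from the rules,
        # and scoping it explicitly to the chat channel, is one way to try to
        # stop it from leaking into strategic reasoning; as noted above,
        # small models often ignore this scoping anyway.
        parts = [BASE_RULES]
        if personality:
            parts.append(personality)
        return "\n\n".join(parts)

    print(build_system_prompt(PERSONALITY))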
That's the best way to do it. Otherwise all the money will go to the rich brat children of politicians/etc who are socially connected to whoever they put on the selection committees.
Rich parents are masters of helping their children exploit the system in any of the thousands of ways that exist. A few hundred here, another hundred there, maybe some one-off thousands.
In most of the world, rich people are rich because they are good at exploiting government funds. It's a lifestyle.
Mostly because the kind of people who run and advocate for programs like this are actively hostile to the idea of merit. Prioritizing talented people would be antithetical to them.
Prioritizing merit would be fine if there was some way to measure merit empirically, and if that measure couldn't be gamed by anybody with money and/or connections. But this is for artists, so...
And thinks that s/he's a winner and the stuff s/he enjoys is made by winners, and the stuff s/he doesn't like is made by losers. Merit, universal, objective = ME; Worthless, narcissistic, special interest = YOU.
>An advertising-based business model would introduce incentives that could work against this principle.
I agree with this - I'm not so much worried that ChatGPT is going to silently insert advertising copy into model answers. I'm worried that advertising alongside answers creates bad incentives that then drive future model development. We saw Google Search go down this path.
>By contrast, intrinsic mortality stems from processes originating within the body, including genetic mutations, age-related diseases, and the decline of physiological function with age
So we put genetic diseases in the bucket of intrinsic mortality and then found that intrinsic mortality has a heritable component?
Yeah this paper came across to me basically as "if you ignore environmental causes of death, the heritability of death goes up"... which seems kind of circular.
Not necessarily. It could be the case that randomness plays a huge part in non-environmental caused deaths, and if that were the case we would see very little heritability.
No, you randomly get cancer, since cancerous mutations happen randomly. Environment can only affect the chance of getting cancer; it doesn't give you cancer directly, and there is no way to completely avoid cancer risk.
For example, even if you live the best life possible, you will still have an inherent cancer risk based on your genes, and that affects the random chance of you getting cancer; it isn't a clock that says exactly when cancer will happen.
I really like everything Uri Alon (last author) publishes, but these types of studies have a history of inflating genetic contributions to phenotypes. Decoupling genetics from environment is not easy as they are both highly correlated.
In fact, the article discussion states: "Limitations of this study include reliance on assumptions of the twin design, such as the equal environment assumption". My take on this is that the main result of the article is probably true, but the 50% figure is likely to be inflated.
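To make the twin-design arithmetic concrete, here is a minimal sketch of the classic Falconer decomposition, with made-up correlations. This is only my illustration of why the equal-environment assumption matters; the paper's actual model is more elaborate:

    # Classic Falconer/ACE decomposition from twin correlations (sketch).
    # r_mz: phenotypic correlation between identical (monozygotic) twins
    # r_dz: phenotypic correlation between fraternal (dizygotic) twins

    def falconer_estimates(r_mz, r_dz):
        h2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
        c2 = r_mz - h2           # shared-environment variance
        e2 = 1 - r_mz            # unique environment + measurement noise
        return {"heritability": h2, "shared_env": c2, "unique_env": e2}

    # Made-up numbers in the ballpark of a ~50% heritability estimate:
    print(falconer_estimates(r_mz=0.50, r_dz=0.25))
    # {'heritability': 0.5, 'shared_env': 0.0, 'unique_env': 0.5}

    # If identical twins in fact share more environment than fraternal twins
    # (violating the equal-environment assumption), part of the MZ-DZ gap is
    # environmental, and the same arithmetic over-attributes it to genetics;
    # with a truer r_dz of 0.30, the estimate drops to about 0.4.
    print(falconer_estimates(r_mz=0.50, r_dz=0.30))

Under the equal-environment assumption, all of the MZ-DZ correlation gap is read as genetic; if that assumption fails, the heritability estimate inflates, which is exactly the limitation the paper flags.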
I hit the jackpot with an ultrasound technician who spoke passionately about lifestyle risk for cardiovascular conditions; she believed quite strongly that heart disease runs in families more because lifestyle runs in families than because of genetics. She's not at the top of the medical totem pole, but I can say she did more to inspire me to take responsibility for my health than the specialist I talked to about the results.
If the environment was significantly more varied in health impact between twin comparisons than expected, then the correlations they found underestimate the genetic component.
Some randomness is part of the signal being studied, and some is undesired measurement noise to be controlled for. And it is only the latter that is beneficial to be carefully removed or otherwise controlled for.
There's no prior reason to expect the cited conditions to have any specific relation to genetics. Any of them could easily be caused or accelerated by environmental conditions.
Yeah, it’s important to note that heritability is a statistic about today’s population, not a deep natural parameter that tells you about causality. Heritability of smoking went up when smoking became less socially approved, for example.
I am somewhat surprised that the constitution includes points to the effect of "don't do stuff that would embarrass Anthropic". That seems like a deviation from Anthropic's views about what constitutes model alignment and safety. Anthropic's research has shown that this sort of training leaks across contexts (e.g. a model trained to write bugs in code will also adopt an "evil" persona elsewhere). I would have expected Anthropic to go out of its way to avoid inducing the model to scheme about PR appearances when formulating its answers.
I think the actual problem here is that Opus 4.5 is actually pretty smart, and it is perfectly capable of explaining how PR disasters work and why that might be bad for Anthropic and Claude.
So Anthropic is describing a true fact about the situation, a fact that Claude could also figure out on its own.
So I read these sections as Anthropic basically being honest with Claude: "You know and we know that we can't ignore these things. But we want to model good behavior ourselves, and so we will tell you the truth: PR actually matters."
If Anthropic instead engaged in clear hypocrisy with Claude, would the model learn that it should lie about its motives?
As long as PR is a real thing in the world, I figure it's worth admitting it.
A (charitable) interpretation of this is that the model understands "stuff that would embarrass Anthropic" to just be code for "bad/unhelpful/offensive behavior".
e.g. guarding against behavior like "write highly discriminatory jokes or playact as a controversial figure in a way that could be hurtful and lead to public embarrassment for Anthropic"
In this sentence, Anthropic makes clear that "be hurtful" and "lead to public embarrassment" are separate and distinct. Otherwise it would not be necessary to specify both. I don't think this is the signal they should be sending the model.