There are enough idiots involved who "heard about this AI thing" that someone would demand a Claude-based kill bot. Do not underestimate the disconnect from reality of senior military leadership. They easily forget that everyone who works for them is legally obligated to laugh at their jokes.
Anthropic specifically called out systems "that take humans out of the loop entirely and automate selecting and engaging targets".
I take that to mean they don't want the military using Claude to decide who to kill. As a hyperbolic yet frankly realistic example, they don't want Claude to make a mistake and direct the military to kill innocent children accidentally identified as narco-terrorists.
At least, that's the most charitable interpretation of everything going on. I suspect they are also worried that the sitting administration wants to use AI to help them execute a full autocratic takeover of the United States, so they're attempting to kill one of the world's most innovative companies to set an example and pressure other AI labs into letting their technology be used for such purposes.
I don't know what you're referencing, but it doesn't matter. I judge people by their actions more than their words. The actions in this case are simple: Anthropic doesn't want their models to be used for fully autonomous weapons or mass surveillance of American citizens, but everything else is fair game; in response, the sitting administration is attempting to kill the company (since a strict reading of the security risk order would force most of their partners, suppliers, etc., to cut them off completely).
Giving precedence to words over actions is how you get taken advantage of, abused, deceived, etc.
> Whatever they were asked to do, they should just be upfront about.
Anthropic is not being asked to do anything except renegotiate the contracts. The Claude models the DoW uses run on government AWS. Anthropic has minimal access to those systems and does not see the classified data being ingested as prompts. It is very unlikely that Dario actually knows what the DoW wants to do with these models. But even if he did, it would be classified information that he is not at liberty to disclose.
However, the product they provide likely has safety filters that cause some prompts not to be processed if they violate the two contractual conditions. That is what the DoW wants removed.
He didn't talk around it. He wrote down specifically what the two issues were, which is precisely why now the entire world knows what's actually going on. If risking your company's existence to prevent a (potential) atrocity is weakness, I don't know what strength is.
Strength is saying what they were asked to do. I want to know!
Did the DoW ask them to make kill drones? Because if so THAT IS A REALLY BIG DEAL.
The vagueness is irritating. He’s saying they won’t do something, the DoW is saying they don’t even want them to do that, which should resolve the issue, but hasn’t. There is obviously something else at play here.
You're confused because you're taking everything the people involved are saying literally and trusting everything plainly at face value. The existence of the contradiction you're pointing out should be evidence that you need to think a level deeper, i.e., that you need to look at actions more than words. There's an incredibly easy resolution of the contradiction that is troubling you, and it's already been pointed out clearly above.
The DoD is explicitly asking for those things, by forcing contract renegotiation towards a contract that is identical in every way, except removing the prohibition on those things.
If the DoD did not want those things, it would not be forcing a contract renegotiation to include them, at great cost to the government.
> The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.
Tomorrow he could change his mind to "we want to use AI to develop autonomous weapons that operate without human involvement." The issue is that he wants Anthropic to change the use terms because "We will not let ANY company dictate the terms regarding how we make operational decisions."
And yet, if that statement were true, and not a lie, we would not be here right now, discussing their insistence upon being able to use software for precisely those things.
Is a pundit/politician lying to you a new experience?
Because mass surveillance has been happening by every tech company under every president since George W. Bush, and despite everybody trying to stop it they haven’t been able to.
OpenAI has already said that they’ll give up whatever info the government wants if they’re issued a subpoena; they don’t have a choice.
Companies have to comply with subpoenas (unless they can beat them in court, and with an alternative of going to jail). Subpoenas are supposed to be targeted at individuals and need some kind of process, usually judicial, each time one is issued. Mass surveillance - the Anthropic blog post raises the possibility of using AI to classify the political loyalties of every citizen - is a different thing.
A subpoena isn't "simply asking." Subpoena literally means "under penalty" in Latin. If the company does not comply they will be held in contempt of court and someone may well go to jail.
You make a valid point. Dario suggests that DoD wants to have the capacity to do domestic surveillance and autonomous killing. Sean Parnell said the DoD doesn't want those capacities. These statements are in conflict. Them talking past each other is one possibility. Without much evidence except the track record of the Trump administration, I think it is much more likely that Sean Parnell is lying.
The announcement hasn't worked through official legal channels, but Anthropic is taking it seriously. The official channel will be a written explanation to Congress, and could be classified.
Hegseth objected to guardrails being "woke". Something about "curly haired" almost-men telling him how he can use his "war fighters".
I speculate that Trump and Hegseth were both late to the realization that AI could unwind, for example, the next Panama Papers, and are doing this to try to demonstrate power to the industry. Musk tried to explain all this, but they apparently dismissed him as "autistic". This all looks like a disjointed conversation because we can see slightly more of the future than they can.
Consider what the rise of things like shopify, squarespace, etc. did for developers.
In 2001, you needed an entire development team if you wanted to have an online business. Having an online business was a complicated, niche thing.
Now, because it has gotten substantially easier, there are thousands (probably millions) of times as many online stores, and many of them employ some sort of developer (usually on retainer) to do work for them. Those consultants probably make more than the devs of 2001 did, too.
But as an outsider, it's really not normal for agents of the state to detain people without legal basis, much less deliberately make sure they can't be found (citizen or not).
You as a US citizen are not required to carry ID, so being arrested on the spot for not having proof of citizenship is grossly authoritarian.
These are all over the place in Tempe, AZ. I see them cruising through my neighborhood all the time.
The funny thing is that there is usually a guy on an ebike following right behind, usually decked out in sort-of tactical-ish gear: full mask, head to toe in all black. I feel bad for whoever it is in the summer, because it gets really hot here.
I will happily pay for high quality news. Every few months I check back in with The Financial Times to see if I can get it delivered to my house again (they used to deliver in Phoenix, but stopped, presumably they lost their printing partner here). My wife even tried to set up a PO box in another state and have the contents forwarded to us, but we could never get it working.
I also paid for Foreign Affairs for a long time, but eventually the quality of the paper (as in the physical material) dropped down a lot, and the number of ads went up.
Lapham's Quarterly (now defunct) wasn't really news, but happily paid for that.
Also plenty of substacks, patreon podcasts, etc.
--
My local paper just ran a story about a woman "trapped" in her Tesla because the battery died. They started the story with a "warning" to anybody who might be considering buying one. The solution, according to the article, was to locate the "secret" release button that opens the door. Of course, to anybody who has ever ridden in the front seat of a Tesla, this is an absurd framing of the physical door handle, which opens the door in exactly the same fashion as every vehicle door manufactured in the last 100 years. If you own a Tesla, you have probably had to tell somebody not to use this handle (since it seems like such an obvious way to open the door) because it doesn't crack the windows and could damage the window seal (or so the warning that pops up when you use it says).
I've been involved a handful of times in things that made it into the paper: technical laws being passed, corruption, complaints about a system failure... In every instance, the only thing that was really correct was the simple facts (law X passed, thing Y failed, person Z arrested). Anything more nuanced tended to be 'technically' correct but was phrased in a way that would often make you think the opposite of what actually happened.
How could this possibly comply with European "right to be forgotten" legislation? In fact, how could any of these AI models comply with that? If a user requests to be forgotten, is the entire model retrained? (I don't think so.)
This "AI" scam going on now is the ultimate convoluted process to hide sooo much tomfuckery: there's no such thing as copyright anymore! This isn't stealing anything, it's transforming it! You must opt out before we train our model on the entire internet! (and we still won't, spits in our face) This isn't going to reduce any jobs at all! (every company on earth fires 15% of everyone immediately) You must return to office immediately or be fired! (so we get more car data, teehee) This one weird trick will turn you into the ultimate productive programmer! (but we will be selling it to individuals, not actually making profitable products with it ourselves)
And finally, the most egregious and dangerous: censorship at the lowest level of information, before it can ever get anywhere near people's fingertips or eyeballs.
> how could any of these AI models comply with that? If a user requests to be forgotten, is the entire model retrained (I don't think so).
I don't believe that is the current interpretation of GDPR, etc.: if the model is already trained, it doesn't have to be deleted due to an RTBF request, afaik. There is significant legal uncertainty here.
Recent GDPR court decisions mean that this is probably still non-compliant due to the fact that it is opt-out rather than opt-in. Likely they are just filtering out all data produced in the EEA.
> Likely they are just filtering out all data produced in the EEA.
Likely they are just hoping to not get caught and/or consider it cost of doing business. GDPR has truly shown us (as if we didn't already know) that compliance must be enforced.
The popular sentiment has changed from enthusiasm about "digital", to disillusionment about big tech inserting themselves into our lives to monetize everything.
In 2009, smartphones were a novelty, and the iPad had not been announced yet. People were wowed by the new capabilities that "multimedia" devices were enabling. They were getting rid of the old, outdated, less capable tools.
Nowadays "multimedia" is taken for granted. OTOH, generative AI is turning the creative arts into commoditized digital sludge. Apple acts like it owns and has the right to control everything digital. In this world, analog instruments are a symbol of the last remnants of true human skill, and of the physical world that hasn't been taken over by big tech yet. And Apple is forcefully and destructively smushing it all into an AI-chip-powered you-owe-us-30%-for-existing Disneyland dystopia.
This whole thing seems like people talking past each other, and that there’s something being left unsaid.
Anthropic doesn’t make a product that would assist with kill drones, and they don’t have the right to deny subpoenas.