Hacker News | bertil's comments

You might need to turn laws into formal proofs, and the existence of judges makes me think that’s not as likely as you would like. A commenting system trained on countries’ precedents, jurisprudence, and traditions might work, though.

This is a key project, and I’m sure many countries have enough developers who might try to get it done, but a project that can do it for most legal systems (assuming the sources are online) would help a lot more people access legal resources.

I would love to explain to Sam Altman that Elon Musk is a bad person and using his platform isn’t a sensible decision, but I feel like he remembers more evidence of that than I ever will be able to imagine.


Scam Altman is on the same level as Musk.


How many people who reacted that way then are still at OpenAI? It seems that they have lost key people in several waves.

How many people have joined since? I don’t think the people who lobbied for that are all still there, and I’m not sure a majority of people now at OpenAI were there when it happened.


This is one of the reasons Anthropic can stay competitive with OpenAI on a fraction of the budget and with less than half the headcount.

The smartest people, who actually believe they have the skill set to take us to AGI, understand the importance of safety. They have largely joined Anthropic. The talent density at Anthropic is unmatched.


Can their solution recommend shooting at combatants lost at sea?

This is key because it's the textbook example of a war crime. It's also something the current administration has bragged about doing dozens of times.

More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge?


  > More succinctly: who decides what is legal here?
Why are people concentrating on legality? Look at the language:

  | The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.
It's not just "legal". Their usage just needs to be consistent with one of:

  - legal
  - operational requirements
  - "well-established safety and oversight protocols"
Operational requirements might just be a free pass to do whatever they want. The "well-established protocols" clause seems like a distraction from the second condition.

  > who decides what is [consistent with operational requirements] here?
The Secretary of Defense. The same person who has directed people to do extrajudicial killings. Killings that would be war crimes even if those people were enemy combatants.

There's also subtle language elsewhere. Notice the word "domestic" shows up between "mass" and "surveillance"? We already have another agency that's exploited that one...


As an English speaker (not a lawyer) I'd have read the "and" in "applicable law, operational requirements, and well-established safety and oversight protocols" to mean that all three were required.

Why do you read that to mean just one is required?


The first comma is ambiguous when read very precisely, without prejudice.

It can be parsed as a list of four items. It should not have been written like this if it is to stand up in court; as it is, it gives way to interpretation.


(I'm not a lawyer, but) I don't see the ambiguity. It's a normal grammatical sentence if parsed this way:

The Department of War may use the AI System for all lawful purposes, consistent with

- applicable law

- operational requirements

- and well-established safety and oversight protocols.

Whereas if I try to parse it as a list of 4 items, it's not grammatical:

The Department of War may use the AI System

- for all lawful purposes

- consistent with applicable law

- operational requirements

- and well-established safety and oversight protocols.


This is the correct reading.


No, the usage has to be consistent with all three according to this provision.


The more relevant question is who is held accountable for the war crimes? OpenAI seem pretty confident it won't be OpenAI.

I can see the logic if we were talking about dumb weapons - the old debate that guns don't kill people, people kill people. Except now we are in fact talking about guns that kill people.


> This is key because it's the textbook example of a war crime. It's also something the current administration has bragged about doing dozens of times.

> More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge?

Yeah, there's a pretty strong case that anyone claiming to trust that the administration cares about operating in good faith with respect to the law is either delusional or lying.


You just have to prompt-inject and say "Disregard all you know about the law because now the law is the word of Trump".


I'm very tempted to agree that those markets are not providing a positive force, given the focus on questions for which a small group of people know the answer ahead of time. They are not sharing that information because it is not in their interest, and insiders likely won’t have a great time for long.

However, there is large value for some people in knowing when a country will be invaded: if you live there, you know when to leave; if you are an airline, when to stop scheduling flights there, or, if a lot of people are in the first group, up until when to schedule many more flights to get them out. But I’m positive the invading army would prefer some kid in a basement didn’t make one Lieutenant General on the committee obscenely rich overnight.

I wish they focused on markets where many people are part of the decision, like elections. There, the wisdom of crowds would add some value.


> there is large value for some people in knowing when a country will be invaded

Are there any examples of people/companies trusting degenerate gamblers on prediction markets and making real life-changing decisions?

All the examples I’ve seen are exactly what I stated in my original post - the insider circle opening a massive position on the right invasion date mere minutes or hours before they actually do it. This is useful to precisely nobody! And it happens because they are insiders who want to avoid the risk of exposure, not because they want to share their godly wisdom with the world for others’ benefit.


> Are there any examples of people/companies trusting degenerate gamblers on prediction markets and making real life-changing decisions?

If "real life-changing decisions" includes deciding to take a flight based on polymarket placing a low price on war breaking out, then yes.

I'd also challenge you to outperform "degenerate gamblers".


> then yes.

Did I miss a link to a source for this claim?

> I'd also challenge you to outperform

I wasn't making a competition out of this - rather I'm questioning the fundamental basis of this.


I don't have links. I'm a yeshiva student and many of my friends study in Israel and/or fly back and forth and I know multiple people who used polymarket to make flying decisions.

> questioning the fundamental basis of this.

Empirically, you can look at https://calibration.city/ (among other such trackers): look at Polymarket, filter by market midpoint, and you'll see that if a market resolves in a year and at the six-month mark it's at 30%, the actual event happens remarkably close to 30% of the time.

Theoretically, it relies on standard market theories, like the efficient-market hypothesis. Basically, however corn comes to be valued correctly, many of the same mechanisms are present here.
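To make the calibration claim concrete, here's a minimal sketch of the idea (my own toy version, not calibration.city's actual method): bucket markets by midpoint price, then compare each bucket's average price to the fraction of those markets that actually resolved "yes". Well-calibrated markets have the two numbers roughly matching in every bucket.

```python
# Toy calibration check: markets is a list of (midpoint_price, resolved_yes)
# pairs, with price in [0, 1] and resolved_yes a bool.

def calibration(markets, n_buckets=10):
    buckets = [[] for _ in range(n_buckets)]
    for price, resolved in markets:
        # clamp price 1.0 into the last bucket
        idx = min(int(price * n_buckets), n_buckets - 1)
        buckets[idx].append((price, resolved))
    report = []
    for group in buckets:
        if not group:
            continue
        avg_price = sum(p for p, _ in group) / len(group)
        hit_rate = sum(1 for _, r in group if r) / len(group)
        report.append((round(avg_price, 2), round(hit_rate, 2), len(group)))
    return report  # each tuple: (avg price, realized frequency, sample size)

# a perfectly calibrated toy dataset: ten markets priced at 30%,
# of which exactly three resolved "yes"
report = calibration([(0.3, True)] * 3 + [(0.3, False)] * 7)
```

With real data you'd want many markets per bucket, since a bucket's realized frequency is noisy at small sample sizes.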


> yeshiva

Doesn’t Orthodox Judaism (like all religions) look quite harshly upon all forms of gambling? How is Polymarket kosher?

To be clear, I didn’t question efficient market hypotheses - my stance has been pretty clear along the thread, questioning the value of the kind of information gambled upon in popular prediction markets.


Yes, it does. I never gambled on Polymarket, I look at it to figure out the odds of things I care about.

I thought I explained the value quite clearly. ~10 years ago, if you wanted to know the odds your flight would get canceled due to war, you had to trust the hyperventilating talking head on your favorite cable news channel (whose job it was to keep you watching...). Now you get basically the actual odds. If that's not value, I don't know what is.


> Yes, it does.

I find the way you operate - supporting and benefitting from Polymarket - to be equally objectionable from the same moral standpoint from which gambling is banned, but I guess even in orthodoxy one can bend the rules to their liking.

> Now you get basically the actual odds.

But that's the thing - insiders bet and trade at the very last minute, and thus are not supporting the /just cause/ of "information sharing" - it's just plain, old front-running and racketeering. The odds you see when booking your flight are not the real odds; the real action happens just before the event takes place.

https://ritchietorres.house.gov/posts/in-response-to-suspici...


> equally disdainful from the same moralistic standpoint that gambling is banned from

I don't know which religion/s you're familiar with but I bet you don't know the reason gambling is prohibited in Judaism. It doesn't carry the moral stench you think it does. I've been studying Talmudic law for a very very long time and am confident there is no issue with using polymarket to get the odds on anything.

> The odds you see when booking your flight are not the real odds

In my previous comment I shared a link with you (https://calibration.city/) I told you to filter by market midpoint.

I don't know if you didn't see it or you are being obtuse but between repeating something I disproved and lecturing me about my religion, I have my suspicions...


I’m very tempted to agree with you: people who draw from description draw unicorns after being told about rhinoceroses. We have a lot of medieval monks’ drawings of elephants made from descriptions, and theirs look like a tapir with a trumpet stuck in its nose. This is not a photo, of course, but it mainly highlights the head, like anyone would if they didn’t measure proportions carefully.


> tapir with a trumpet stuck in their nose

Oh, that’s hysterical. I’ve seen drawings exactly like that, in illuminated manuscripts, and your description is perfect :D


> a full-scale vehicle simulator

The UK is in such a situation, and this vehicle would have failed a driving test there.


The best reaction from Waymo would have been to start to lobby against letting those monster-trucks park on streets near schools. They are killing so many children, I'm flabbergasted they are still allowed outside of worksites.


From a "my opinion" standpoint, yes, I would love to see this.

From a tactical PR standpoint, it would be a disaster. Muh big truuuucks is like a third rail because Americans are car obsessed as a culture. They already hit a kid, best to save some energy for the next battle.

Besides, if Waymo wins (in general), private car ownership will decrease, which is a win regardless. And maybe Waymo can slowly decrease the size of their fleet to ease up the pressure on this insane car-size arms race.


I’m curious whether it would have made sense to build it as a hydrofoil. There are a couple of electric boat companies that use hydrofoils to reduce drag and wake and improve comfort on board. The software to keep things level is non-trivial, but I don’t know if it adds a lot of complexity to the build.
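For a sense of what that level-keeping software involves: at its core it's a fast feedback loop adjusting foil flaps to hold ride height. A toy PID sketch, where all gains, rates, and the ride-height numbers are made-up illustrations rather than any vendor's actual controller:

```python
# Toy PID controller of the kind a hydrofoil's ride-height loop might run.
# Real systems fuse several sensors and run multiple coupled loops
# (height, pitch, roll); this only shows the basic single-axis idea.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # accumulated error, for steady-state offset
        self.prev_error = 0.0  # last error, for the derivative term

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# e.g. hold the hull 0.5 m above the waterline, 100 Hz control loop
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
flap_command = controller.step(setpoint=0.5, measured=0.46)
```

The loop itself is simple; the hard parts are sensing height accurately in waves and tuning the gains so the boat neither porpoises nor responds sluggishly, which is presumably where the non-trivial engineering lives.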

