Hacker News | thepasch's comments

> 95% of which will go towards paying your legal fees

laughs in European


I laughed. No: in Europe, when you win a case like this, the judge usually orders the losing party to pay the winner's legal expenses, especially if the losing party is a big corporation.

It's the same in the US

It is not. Legal fees are rarely awarded in the U.S.

I should have said you can recover them as part of your damages, which every competent attorney will push for.

Legal fees are not something you are usually legally entitled to.

Your attorney can push for whatever illegal thing they can think of; it doesn't mean you will get it.


> Your attorney can push for whatever illegal thing they can think of, it doesn't mean you will get it.

It is not illegal to include legal fees in damages.


By illegal I mean contrary to American law.

Legal fees are literally not damages. A court granting legal fees would be doing that in addition to damages.

In most cases the jury will never even be told what your attorneys fees are, and they are not permitted to award them:

https://en.wikipedia.org/wiki/American_rule_(attorney%27s_fe...


Under what statute is it illegal to request legal fees?

Requesting and being granted legal fees are two different things.

The default "American rule" is that each party pays their own legal fees, unless there is a relevant fee shifting rule.


> Under what statute is it illegal to request legal fees?

You can request anything you want; granting it would be illegal.

An attorney asking the judge to break the law and award attorney fees is literally asking for something illegal in most circumstances. There are exceptions. (By illegal I mean contrary to law.)

It's funny that 4 people downvoted me instead of bothering to check Wikipedia.

https://en.wikipedia.org/wiki/American_rule_(attorney%27s_fe...


Stop advertising pi, people. It _somehow_ continued to fly somewhat under the radar after that whole OpenClaw nonsense. Don’t make Anthropic sic their bloodhounds on it like they did on OpenCode.

People deserve to know it exists. I got tired of even OpenCode's workflows/agents and installed OpenSpec, but all those wrapped to-dos still weren't what I wanted. I needed more control but didn't want to write my own tool. Then I found out about pi, and this got me interested at first read:

> No plan mode. Write plans to files, or build it with extensions, or install a package. No built-in to-dos. Use a TODO.md file, or build your own with extensions. No background bash. Use tmux. Full observability, direct interaction.

This is very important for control and ownership.

Pi is not for everyone, but it is for those who eventually want tools like read, bash, edit, write, grep, find, and ls as building blocks.


Interestingly, since OpenClaw, there has been roughly one post about Pi every week, but practically no one upvoted any of them except this one.

pi is an officially accepted harness of either Anthropic or OpenAI. I forgot which.

I feel like this misses the point of pi somewhat. The allure of pi is that it allows you to start from scratch and make it entirely your own; that it’s lightweight and uses only what you need. I go through the list of features in this and I think, okay, cool, but why should I use this over OpenCode if I just want a feature-packed (and honestly -bloated) ready-made harness?

It's just a better OpenCode while still being lightweight; I don't know what else to say.

It's just an opinionated fork, either you like it or you don't. I personally really like it.


The people who they’re going to piss off the most with this are the exact people who are the least susceptible to their walled garden play. If you’re using OpenCode, you’re not going to stop using it because Anthropic tells you to; you’re just going to think ‘fuck Anthropic’, press whatever you’ve bound “switch model” to, and just continue using OpenCode. I think most power users have realized by now that Claude Code is sub-par software and probably actively holding back the models because Anthropic thinks they can’t work right without 20,000 tokens worth of system prompt (my own system prompt has around 1,000 and outperforms CC at every test I throw it at).

They’re losing the exact crowd that they want in their corner because it’s the crowd that’s far more likely to be making the decisions when companies start pivoting their workflows en-masse. Keep pissing on them and they’ll remember the wet when the time comes to decide whom to give a share from the potentially massive company’s potentially massive coffers.


> I’m only waiting for OpenAI to provide an equivalent ~100 USD subscription to entirely ditch Claude.

I have a feeling Anthropic might be in for an extremely rude awakening when that happens, and I don’t think it’s a matter of “if” anymore.


> pi with Claude is as good as (even better! given the obvious care to context management in pi) as Claude Code with Claude

And that’s out of the box. With how comically extensible pi is and how much control it gives you over every aspect of the pipeline, as soon as you start building extensions for your own, personal workflow, Claude Code legitimately feels like a trash app in comparison.

I don’t care what Anthropic does - I’ll keep using pi. If they think they need to ban me for that, then, oh well. I’ll just continue to keep using pi. Just no longer with Claude models.


As a Claude Code user looking for alternatives, I am very intrigued by this statement.

Can you please share good resources I can learn from to extend pi?


Pi has specific instructions to extend itself.

You can just tell it to create an extension to connect to any AI API provider and it'll most likely one or two-shot it for you.

IMO it's the most self-aware of all of the current harnesses.


I have an irrational anger for people who can't keep their agent's antics confined. Do to your _own_ machine and data whatever the heck you want, and read/scrape/pull as much stuff as you want - just leave the public alone with this nonsense. Stop your spawn from mucking around in (F)OSS projects. Nobody wants your slop (which is what an unsupervised LLM with no guardrails _will_ inevitably produce), you're not original, and you're not special.


Irrational?


It's not going to "trigger" mass layoffs; it'll be used as a convenient scapegoat for mass layoffs that were always going to happen anyway to make room for more stock buybacks. Business as usual. Same shit, different hat.


Sometimes it feels like the advent of LLMs is hyperboosting the undoing of decades of slow societal technical literacy that wasn't even close to truly taking hold yet. Though LLMs aren't the reason; they're just the latest symptom.

For a while it felt like people were getting more comfortable with and knowledgeable about tech, but in recent years, the exact opposite has been the case.


I think it’s generally (at least from what I read) thought that the advent of smartphones reversed the tech literacy trend.


I think the real reason is that computers and technology shifted from being a tool (which would work symbiotically with the user’s tech literacy) to an advertising and scam delivery device (where tech literacy is seen as a problem as you’d be more wise to scams and less likely to “engage”).


They’re definitely what started it, but LLMs seem to be accelerating it at a terrifying rate.


This is a tool that is basically vibecoded alpha software published on GitHub and uses API keys. It’s technical people taking risks on their own machines or VMs/servers using experimental software because the idea is interesting to them.

I remember when Android was new it was full of apps that were spam and malware. Then it went through a long period of maturity with a focus on security.


> Is it a security risk? I hope not. (It's not.)

It very probably is, but if it's a personal project you're not planning on releasing anywhere, it doesn't matter much.

You should still be very cognizant that, at present, LLMs will fairly reliably introduce serious security vulnerabilities once a project grows beyond a certain size, though.


They can also identify and fix vulnerabilities when prompted. AI is being used heavily by security researchers for this purpose.

It’s really just a case of knowing how to use the tools. Said another way, the risk is being unaware of what the risks are. And awareness can help one get out of the bad habits that create real world issues.

