davidm888's comments | Hacker News

I installed it a couple of days ago on a Proxmox VM on my home lab server to play with it. The key features are that it has local memory, generates cron jobs on its own and can be the one to initiate a conversation with you based on things that it does. Here are a few simple things I tried:

1. The weather has been bad here, like in much of the country, and I was supposed to go to an outdoor event last night. Two days ago, I messaged my Clawdbot on Telegram and told it to check the event website every hour on the day of the event and to message me if they posted anything about the event being canceled or rescheduled. It worked great (they did in fact post an update, and it was a JPG image that it recognized as the announcement and parsed on its own); I got a message that it was still happening. It also pulled an hourly weather forecast and told me about street closure times (these two were without prompting, because it already knew enough about my plans from an earlier conversation to predict that this would be useful).

2. I have a Plex server that I can use as a DVR for live broadcasts via a connected HDHomeRun tuner. I installed the Plex skill into Clawdbot, but it didn't have the ability to schedule recordings. It tried researching the API and couldn't find anything published. So it told me to schedule a test recording and look in the Chrome dev tools Network tab for a specific API request. Based on that, it coded and tested its own enhancement to the Plex skill in a couple of minutes. On Telegram, I messaged it and said "record the NFL playoff games this weekend," and without any further prompting, it looked up the guide, found the days, times, and channels, and scheduled the recordings from that single, simple prompt.
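The capture-and-replay trick described above (grab the request from the dev tools Network tab, then re-issue it from code) can be sketched roughly like this. The path and parameter names are placeholders of my own invention, since Plex's DVR scheduling API is unpublished:

```python
import urllib.parse
import urllib.request

# Hypothetical sketch of replaying a DVR-scheduling request captured from the
# Chrome dev tools Network tab. The path and parameter names below are
# placeholders, NOT Plex's actual (unpublished) scheduling API.
def build_recording_url(server: str, token: str, channel: int,
                        start_ts: int, duration_s: int) -> str:
    """Reassemble the URL observed in the captured request."""
    params = urllib.parse.urlencode({
        "X-Plex-Token": token,   # auth token, visible in the captured request
        "channel": channel,
        "startTime": start_ts,   # epoch seconds
        "duration": duration_s,
    })
    return f"{server}/media/subscriptions?{params}"  # placeholder path

def schedule_recording(server: str, token: str, channel: int,
                       start_ts: int, duration_s: int) -> int:
    """POST the reconstructed request, as the captured one did."""
    url = build_recording_url(server, token, channel, start_ts, duration_s)
    req = urllib.request.Request(url, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

Presumably the agent generated something equivalent after seeing one captured request, then confirmed it against a test recording.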

3. I set up the GA4 skill and asked it questions about my web traffic. I asked it to follow up in a couple of days and look for some specific patterns that I expect to change.

4. I installed the Resend skill so it could send email via their API. To test it, I sent it a message that said, "Find a PDF copy of Immanuel Kant's Prolegomena and email it to me," and less than a minute later, I had a full (public domain) copy of the book in my inbox. Notably, the free version of Resend limits sending to your own email address, which might be a feature, not a flaw, at least until I grow to trust it.

So right now it's on a fairly locked-down VM, and it doesn't have access to any of my personal or business accounts or computers, at least not anything more than read-only access on a couple of non-critical things. Mostly just for fun. But I could see many uses where you want to keep an eye on something and have it proactively reach out when a condition is met (or just with periodic updates), and schedule all of this just by messaging it. That's the cool part for me; I'm not as interested in having it organize and interact with things on my computer that I'm already sitting in front of, or using it as a general LLM chat app, because those things are already solved. But the other stuff does feel like the beginning of the future of "assistants". Texting it on my phone and telling it to do something at a later date and reach out to ME if anything changes just feels different in the experience, and in how simple and seamless it can be when it's dialed in. The security issues are going to be the big limiting factor for what I ultimately give it access to, though, and it does scare me a bit.


> ... it doesn't have access to any of my personal or business accounts or computers, at least not anything more than read-only access on a couple of non-critical things

How have you set up read-only access? Network shares mounted as a guest/read-only user? Custom IMAP login with read-only access?


I agree completely, and the quote you cited more eloquently describes what I was trying to convey in some of my earlier comments. Coming from a software-development mindset, I started practicing law twenty years ago, and the hardest adjustment to make was to understand that even though the law seemed analogous to deterministic code on the surface, in reality it is much different. Today, both writing code and practicing law, I still have to remind myself of the difference sometimes. It is a completely different mindset when dealing with ambiguity, especially in the cases where ambiguity is sometimes desired, welcomed, or necessary. It is also eye-opening to realize that the parties never consider all of the future possibilities, and as such, it's not even possible for a document, however syntactically correct, to convey the intent of the parties 100% -- they don't even know their full intent as to infinite possibilities. I think many of the commenters, just like I did before, struggle to understand this. They don't realize that the "flaw" they're trying to correct is actually a feature -- a messy, sometimes annoying, irrational, subjective, human, necessary feature that's actually an important part of the functioning of society. It's intoxicating to think of a purely logical society driven by immutable, consistent, and brutally efficient laws for everyone, but upon further thought, it's a bit terrifying. Thankfully, it's also impossible.


I am both a transactional (contracts) attorney and a software developer. I've observed a number of fundamental parallels between contract drafting and coding. I can see how for certain "boilerplate" documents, computer-generated document assembly could produce decent results. But I honestly don't think that (human) lawyers can be eliminated, for several important reasons. First, consider how laws come about in the first place. It's an often messy political process with a lot of disagreement and compromises. What ends up passing might have deliberate ambiguities that purposely don't create certainty in a number of fringe cases. Second, most laws are either "over-inclusive" or "under-inclusive," meaning that the text can't possibly enumerate every situation, so there has to be interpretation around the edges. Third, some aspects of law involve subjective (not objective) standards and rely on such things as "intent," "reasonableness," "community standards," and "equity." These involve very human judgment on the part of a judge or jury and vary according to time, place, culture, ethics, community, etc. Although some cases can be resolved with mathematical efficiency, the legal process is usually inextricably intertwined with humanity and all of its subjective flaws, and as long as it's human beings writing the laws and humans applying interpretations and judgments, it will be a messy, emotional, occasionally illogical process.


That's exactly what I wanted to comment. The project frankly seems quite naive. At first, I thought it was a joke project, only to realise, reading the comments, that it was a serious one.

I think the idea that laws are strict rules enforced in a robot-like fashion is a common misconception, especially in engineering circles. That's not true in civil law and certainly not true in common law. There are very simple and straightforward cases that can be thought of more or less like that, but almost all criminal cases, and many civil cases too, are very much a game of convincing other people, not satisfying a set of rules. Jury equity is a thing; technically a jury can acquit your client without your ever having convinced them that your client didn't break the law.

The law does not have the last say, humans do. Loopholes are only loopholes insofar as the court considers them valid.


Yes, laws can’t be implemented in a straightforward and neutral way. That’s not a benefit of laws or something we need to preserve. Exactly the opposite.

The fact that the process sits on a case for YEARS because they can't decide how to interpret the facts, or which laws apply, or what they mean, or simply because the procedure is enormously inefficient for everyone involved, usually means that whether you are found guilty or not in the end... you lose.

And humans can always have the last say. Computers don’t take that away from anyone. Having computable law doesn't mean 100% of it is computed by dry algorithms.


The goal of the justice system is not to be efficient or fast, it is to be just. It is a last resort for complicated cases that can't be easily settled between the parties. You don't want to have every single dispute going through the justice system. That sounds like a dystopian authoritarian future. You want people in society to self-organize and settle their matters privately as much as possible.

It is very much a benefit that it's interpretative and slow. We want it to reflect the culture and people's common sense. We want it to be as fail-proof as possible (even if it takes ages to come up with all the evidence and arguments). And fail-proof here means interpreting the law not in the most pedantic of ways, but in the way that is the most just. The reason for lawyers and courts is exactly the edge cases that are difficult to agree upon.


And when we look at the actual state of smart contracts, we realize how far actual programming is from their own stated goals... They call the incidents "hacks," although what we actually see is incomplete contracts being used exactly the way they were written, in some edge case.


That gets to the heart of the matter. I have studied computer engineering and law as well. Statements like "Verilog allows programmers to draft circuits, Legalese's L4 will allow programmers to express law" demonstrate naivety about the real challenge. The authors should take a closer look at the disillusionment that arose after fifty years of rule-based knowledge representation and inference.


Consider also the disillusionment that a significant portion of the population feels when confronted with the nearly 1000 years of rule-based knowledge representation that is common law.

Maybe a good law would be that if you can't be arsed to write the law in a formal language, or can't figure out how, then it shouldn't be a law in the first place :)


Human language, and therefore laws and court decisions written in human language, work because members of society share an enormous amount of tacit and background knowledge and share similar value standards based on their cultural history. The fact that all this information can be assumed and does not have to be explicitly specified is what enables us to communicate efficiently in the first place. If you had to specify the actual knowledge content in order to fully understand facts and draw computer-aided conclusions, the effort to write or read these texts would be almost infinite. The idea of writing laws and contracts in a "programming language" thus misses (once more) the real problem. And beyond that, works like those of Gödel and Wittgenstein showed the limits of formal systems long ago.


The limits of formal systems are nowhere near being approached by law, which is generally pretty simple but written in obtuse language or full of exceptions that exist purely because it's very hard to change laws. Legal communication is precisely the opposite of efficient. That being said, some types of law are obviously less amenable to being described in formal language (like laws concerning murder, or situations involving complicated human-human interaction). But that is absolutely not true for the average contract, which constitutes most of the legal industry's revenues and time. In that case the only problem is that laws are pointlessly complex. As another example, we'd all be a lot better off if taxes were formulated as a program.
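That last idea is easy to sketch: the schedule becomes data and the liability becomes a pure function of income. The brackets and rates below are invented for illustration, not from any real tax code:

```python
# Illustrative "taxes as a program": the schedule is data, the liability is a
# pure function of income. Brackets and rates are made up for illustration.
BRACKETS = [            # (upper bound of bracket, marginal rate)
    (10_000, 0.10),
    (40_000, 0.20),
    (float("inf"), 0.30),
]

def tax_owed(income: float) -> float:
    """Progressive tax: each slice of income is taxed at its bracket's rate."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed
```

Under this toy schedule, an income of 50,000 owes 1,000 + 6,000 + 3,000 = 10,000. A published, executable schedule would at least make disputes about the arithmetic impossible, whatever one thinks of the rates.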


> we'd all be a lot better off if taxes were formulated as a program.

They are. It's called TurboTax, and it's built and maintained by Intuit, who also has powerful lobbyists.

I think what you want is a non-spaghetti open-source software program maintained by a governance structure which is both

A. Competent and communicative.

B. Accountable to the same public which is in charge of doing performance reviews for the current legislators.


Laws are neither complete nor free of contradictions, and they don't have to be. They are the result of a consensus among members of society that has grown over centuries. Democracy is significantly based on trust, transparency and comprehensibility. Legislation is subject to the permanent challenge of finding a balance between regulatory density and manageability. And even if it were socially feasible to rewrite all existing laws, to formalize them, and to massively increase the density of rules: the set of rules will always remain discrete, and it is a naïve assumption to want to fully capture the continuity of human action with a discrete set of rules; there will always remain an unspecified residue that requires human judgment. Striving for a perfect system fails merely because there is no universally accepted definition of a perfect system; and societies in which an attempt was made to realize the utopia of a perfect system were usually totalitarian or became totalitarian in a short time.


> Human language

You mean natural languages. Formal languages (of which programming code and math are examples) are also human languages, but more well defined and usually designed by few (opposed to emerged from usage by many).



How do you feel about building codes?


Roughly the same way I feel about HIPAA certification. Except that there is an additional interaction of building codes with zoning laws that leads to inflated housing prices in the US. Also, if you want to know if the house was built to code, you have to inspect it yourself, or at least be on site frequently enough to keep the trash from being stashed in the dead space.

There is of course a separation between safety and standards that is hard for laws and codes to grasp. The separation between intention and results is one of the reasons why you want these things explicitly defined as part of a formal specification for a law: it makes it possible to determine whether it is having the desired effect, and if it is not, it could, e.g., trigger a clause removing the code from being in effect. Rent controls would be a perfect example of this, though the whole point of my argument in my top-level response is that trying to measure whether rent control is effective is the hard part (every serious study of rent controls shows that they are not).
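A "clause removing the code from being in effect" could look something like a sunset rule tied to a measured outcome. A toy sketch, with an invented metric and threshold:

```python
from dataclasses import dataclass

# Toy sketch of a rule with a built-in effectiveness test: it stays in force
# only while a measured outcome meets its stated goal. The metric name and
# threshold are invented for illustration.
@dataclass
class SunsetClause:
    goal_metric: str       # e.g. "median_rent_growth"
    threshold: float       # rule lapses if the measurement exceeds this
    in_effect: bool = True

    def review(self, measured_value: float) -> bool:
        """Re-evaluate the clause against the latest measurement."""
        if measured_value > self.threshold:
            self.in_effect = False  # the rule repeals itself
        return self.in_effect

rent_control = SunsetClause("median_rent_growth", threshold=0.05)
```

The hard part, as argued above, is not the trigger mechanism but agreeing on what to measure and what counts as "working."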


These caveats make sense. I think that the role of computing and automation in law will be similar to their role in many other fields where the new technology does not necessarily eliminate humans completely, but instead increases their productivity by broadening the scope of their work, eliminating the bulk of repetitive and time consuming activities and allowing them to focus their efforts on areas that require nuance and judgement.

An interesting aspect of law-related automation concerns cases that could "be resolved with mathematical efficiency" but, for whatever unfortunate reason, end up slipping through the cracks in enforcement. Unequal, unpredictable, and discretionary enforcement is often a source of corruption and inequity, so filling those gaps would be welcome.


I'm not sure it always both "increases their productivity by broadening the scope of their work" and "allow[s] them to focus their efforts on areas that require nuance and judgement." It might increase productivity, but it could actually decrease critical thought. For example, I've met some (disclaimer: some, not all) accountants who have become so reliant on software that they've essentially become data entry personnel and haven't nurtured the ability to think strategically about how or why to do certain things in light of the ever-changing tax code. It's easier to answer a software prompt. This isn't to say it has to be this way, and I agree that freeing up time doing mundane things should allow greater focus on the difficult and more important things. But oftentimes it promotes a "good enough" mentality that enables higher volume, more competitive prices, more free time, or some combination of all three. So for every professional who is now able to do their work better, there are also probably some whose inability was disguised by the fact that they were able to do much of their job without a full understanding of the intricacies. This is especially dangerous in law, because a judge isn't going to care if you were spoon-fed language that "looked good" without fully understanding the overall legal context involved. It's often said that in law school, the most important thing you learn is how to "think like a lawyer." While that might be exaggerated a bit, it underscores the importance of critical thought and why it's a "practice." Automation is good if used as a tool to assist that process, but there are definite downsides if it's allowed to diminish ongoing competent understanding of a rapidly changing landscape. Or worse, if people without a legal background rely on it, thinking that they are getting the right "advice," which in some cases is very difficult to give even for seasoned professionals and is certainly not an exact science.


In practice, separating the subjective/objective elements is harder than it seems at first glance (and often impossible). The ideal, "perfect" contract would reflect the parties' intent with complete accuracy and would be clearly and fully interpreted by a judge/jury the exact same way. In reality, Party A and Party B want to "get the deal done." In many of the larger tech contracts I handle, the actual persons negotiating (sometimes even dedicated "contracts managers") are many steps removed from the stakeholders. Sometimes, it's a whole separate onboarding company. But even if it's the CEOs in the room, and they know fairly clearly what they want to put in a Statement of Work, they usually don't consider (or want to think about) all of the intricacies of the thousand ways something might go wrong. And standardized MSAs often fall very short (and sometimes lead to absurd results or contradictions); they are far from one size fits all and can't really be standardized to a menu of options either. 99% of what I do is dealing with retained background IP, warranties, indemnification, and liability limits. What is ultimately agreed upon is not usually something that can be determined with algorithmic certainty, nor would it matter, because both parties would have to agree to use the same algorithm. In reality, it depends upon the particular type, severity, and likelihood of the risk to each party, each party's risk aversion, relative bargaining power, and -- much more than one might expect -- the entrenched corporate culture. A lot of the "art" in negotiating is presenting something to the other side that they can "sell" to their own superiors to get the deal done. And, very, very frequently, there's something irrational about it. For example, someone might not budge on narrowing an indemnification clause, but might bend on a blanket liability limit that would weaken it to practically nothing.
Also, sometimes you might let something slide, because given the current state of the law with regard to how such a clause might be interpreted, you'd feel comfortable enough that it would be interpreted in your favor -- i.e., in your risk assessment, you're considering how human beings (judge/jury) would interpret it. I've heard arguments before about how humans built laws and also built bridges, therefore laws can be reduced to scientific/mathematical/computational constructs. But unlike bridges, laws are based on this concept of "jurisprudence" which involves a foundation not built on mathematics, but instead built on ethics, morality, and philosophy. They're the thread of society. To fully "computerize" law, you'd have to do the same to society and humanity itself.


> In practice, separating the subjective/objective elements is harder than it seems at first glance (and often impossible).

Because, and this is where law just messes with my brain, the field deals as much in the NORMATIVE as it does in the objective and subjective. In my limited understanding... I'm left feeling like the normative is treated as objective by those in the field, but looks subjective to those outside of it.


It's the concept of "no code software development" applied to law, and it'll have the same problems. Edge cases by the million.

I wouldn't trust a computer to be my doctor, and I wouldn't trust one to be my lawyer either. As assistants to my doctor and lawyer, sure, but to replace them? Never.


Basically it’s good at solving trivial disputes between parties that probably would never have bothered to sue each other. Anything else will probably be more expensive.

This legalese reminds me of UML. The only truly comprehensive way to capture the essence of a program is to write the program.


> I honestly don't think that (human) lawyers can be eliminated

That sounds very plausible. But do you think that human lawyers might gradually become a profession of people whose job partly involves interfacing with a software implementation of (large) parts of the legal code? And do you think there is any hope for defining (large) parts of the legal code in a formal language with well-defined semantics?


The challenge is that there are often many choices to make; the choices often benefit one party over the other; and either the choices themselves or the possible ramifications of those choices are not well understood by the parties. No two (separate groups of) parties are identical, and to an extent, no two projects are identical. So there's no single "right" answer that can be defined that works in every case, and it very often comes down to bargaining power and subjective risk aversion. I think there's more room for standardized or even automated contract provisions for less complicated transactions, but there are still some dangers. For example, a lot of real estate purchase agreements have been standardized by various state real estate associations that publish forms with various check-boxes for choices. The real estate agents in this case would be the ones "whose job partly involves interfacing with" software (or in this example, paper forms with physical checkboxes in many cases). From experience (I did real estate litigation for a few years), I can say that this didn't always work out so well... because the agents and the parties have to both understand the meaning of what they're choosing and the potential impact of those choices, and that requires an understanding of the law which is complex and not static. Relying on the form is nowhere near enough. This difficulty is vastly greater in areas of the law that are shifting more rapidly, like IP.


This is like how some people think philosophy is beautiful and pure logic and boils down to modus ponens but a lot of philosophy is making emotional and intuitive arguments for each proposition.


This is exactly like the "how can I use static types when I talk to other untyped things".

Computational law does not, nor should it, mean limiting ourselves to purely objective concepts and automatically-resolvable disputes. We can still introduce all the fuzzy humanistic things we want as abstract parameters. This just forces us to separate those from the "boring parts," which will make everything more productive. This is a lot like how, with fancy dependent types, you can pass around proofs of undecidable/non-computable things -- "undecidable" and "subjective" are equally bad at run-time.

Done right, this is also good for fairness, because it's exactly to the extent that the objective and subjective stuff is all mixed together that "the party with the most expensive lawyers wins." All the drudgery keeps everyone but the rich out.
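That separation can be sketched minimally, with invented names: the objective arithmetic is computed, while the subjective question stays an abstract parameter supplied by a human (or a court), never resolved by the program:

```python
from typing import Callable

# Sketch of separating a contract term's "boring" objective part from its
# subjective part: late-fee arithmetic is mechanical, while the fuzzy question
# ("was the delay excusable?") is passed in as an abstract parameter.
# All names here are illustrative.
def late_fee(
    days_late: int,
    daily_rate: float,
    delay_was_excusable: Callable[[], bool],  # subjective input, human-supplied
) -> float:
    if days_late <= 0:
        return 0.0
    if delay_was_excusable():        # judgment call: not computed, only consumed
        return 0.0
    return days_late * daily_rate    # objective part: pure arithmetic
```

Nothing here decides the subjective question; it only makes explicit where the human judgment plugs in, which is roughly what passing around a proof of an undecidable proposition buys you in a dependently-typed program.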

-------

All that said, there are still immense challenges to pulling this off. Doctors and lawyers are insanely protected classes in this country --- the last, most powerful guilds --- and everything is designed against this. In "physical small repeatable goods" capitalism, you can always try to compete end-to-end to the final consumer and slowly eke out market share. But court cases and ex-lawyer judges make for a relatively rare, high-risk proving ground, and foxes guarding the hen-house!

Also, I am skeptical of this beginning with contracts / the private sector rather than with law itself / governments. The way we write programs today is like a Gustafson's law vs. Amdahl's law situation, in which rather than reducing complexity/mental drudgery using machines, we simply fill up our expanded capacity with more --- from hand-calculating rocket trajectories to debugging garbage bloated software stacks. If the increased capacity went to longer EULAs and other paperwork, this could unleash a torrent of nonsense orders of magnitude greater: please sign these 50 MiBs!


You are taking this page too literally - the fonts are telling - it is startup-talk! There is a serious endeavour underneath, but at this stage they prefer to advertise the fun side of their project. I am sure they have well internalized critiques like yours - because they're kind of obvious - and when they write "without spending time or money on lawyers," they don't mean it in an absolute sense - it's just that maybe in some cases you can generate a legal document by yourself, and in enough cases you'll need 10 times less lawyer help.


Unrelated to what you're saying, but how are you both an attorney and a dev? I think both professions on their own are demanding, let alone both.


Lots of coffee. Started coding in the 90s, put it aside for a while to go to law school, and then did litigation full time for a few years; then transitioned to being a dev and eventually to managing Dev/QA at a software company; then after the 2009 recession I formed a company for which I do all the dev, as well as went back to practicing law (almost exclusively in the IP transactional context). I split my time about 50/50. I'd never be able to do it if I were still doing litigation and going to trial; fortunately, contract drafting/review/negotiation is a lot more sane/flexible scheduling-wise, and in both cases I have an understanding boss (me). But I could definitely use a vacation. :)


I don't believe the project has the goal of eliminating lawyers in the world. The goal, I think, is to produce semi customized contracts from modular and parameterized components without lawyers. Lawyers today build up a cache of legal documents that they stingily dole out to clients who pay. Most often these documents are poorly written, often being copied from older documents passed down to them from older wiza... I mean lawyers, and then extended primarily by slapping on provisions and enumerations as they come up or are thought of.

If instead we had a standard library of contract components that could be composed in verifiably compatible ways, that would be a huge achievement. I think that's what legalese is doing.


There are a number of comments saying things like, "get them to accept the check on the spot," "entice them to sign the severance agreement quickly," "make sure they sign it fast before they change their minds and get a lawyer," etc. This is bad advice.

It is true that you should get them to sign an agreement that contains, among other things, a "general release" that basically says that they promise not to sue you for any reason. It is also true that to be enforceable, you must give something of value (i.e., "legal consideration"), and the severance pay is usually the principal consideration for such an agreement.

I would be extremely cautious, however, about getting someone to sign quickly. It seems to make sense on the face of it (get them to sign away their right to sue before they contemplate getting an attorney). But there is a flip side that no one is mentioning. If you shock someone by calling them into your office and telling them that they're losing their job, put a legal document in front of them that they probably won't completely understand (it's going to have a laundry list of statutory provisions that they need to expressly waive), put a check in front of them, and say "here, sign this now!", you're going to unnecessarily jeopardize the enforceability of the release. The ex-employee's lawyer will argue that it was signed under duress and without sufficient knowledge, understanding, and consent to constitute a valid waiver of the right to pursue certain claims.

Also, depending on the worker's age, there might be additional complications. The Older Workers Benefit Protection Act applies to employees over 40, and requires that the terminated employee be given 21 days to consider the release and 7 days afterward to change their mind. (The 21 days increases to 45 if there are two or more employees being terminated.) Although this particular statutory requirement doesn't apply to those under 40, it underscores the notion that allowing someone to consider a severance agreement for some amount of time and giving them the opportunity to discuss it with their own attorney, if desired, are both good things when it comes to enforcing the agreement. How long you give them is a matter of balancing numerous statutory and common law considerations (both state and federal) with the goal of getting them to sign quickly, and there are avenues, regardless of age, for their attorney to attack the validity of the release if they sign it without properly understanding and consenting to its terms.

I should also point out that severance agreements should always be prepared by an attorney. There are numerous considerations that vary state-to-state; there are specific waivers that need to be contained in them; there are different rules depending on the number of employees; possible notice requirements (in which case the severance might be designated as compensation in lieu thereof); rules about timing of final payments; whether or not benefits are paid out; etc.

