Maybe a percentage-chance-of-solving-the-puzzle tracker that updates a bit slowly and randomly, so you don't necessarily know right away that you made a mistake. It would have to be a bit weird, though; for example, when you start you are not at a 100% chance of solving the puzzle.
The reason it is better is that with search you have to narrow each query down to one specific part of what you are trying to do. For example, if you need a unique-ID-generating function as part of a larger task, you first search for that; then, if whatever gets output needs to be laid out in three responsive columns, you search for that separately; and then you write code to glue the pieces together into what you need. With AI you can ask for all of this together, get something roughly equivalent to what the search results would have been, and then do the same glue code and fixes you would normally have done.
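To make that concrete, the separately searched-for pieces and the glue might look something like this (purely illustrative, not any particular real-world task):

    // Piece 1, from one search: a unique-id generating function.
    function uniqueId(): string {
      return crypto.randomUUID();
    }

    // Piece 2, from another search: a responsive three-column layout.
    const threeColumnCss = `
      .grid { display: grid; grid-template-columns: repeat(3, 1fr); gap: 1rem; }
      @media (max-width: 600px) { .grid { grid-template-columns: 1fr; } }
    `;

    // The glue code you write either way: render items into the grid with stable ids.
    function renderItems(items: string[]): string {
      const cells = items
        .map((text) => `<div class="cell" id="${uniqueId()}">${text}</div>`)
        .join('');
      return `<style>${threeColumnCss}</style><div class="grid">${cells}</div>`;
    }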
For a bit of functionality that might otherwise have taken four separate searches, it trims the time requirement down by roughly three of those searches.
It does, however, remove one benefit of having done the search: seeing the various results, you sometimes find that a secondary result is better. You no longer get that benefit. Tradeoffs.
Take the mole example as referring to any physical characteristic hidden by clothing that people want to remain hidden. It's an example to demonstrate that the AI is not "undressing" anybody. It is filling in an extrapolation of pixels which have no clear relationship to the underlying reality. If you have a hidden tattoo, that tattoo is still not visible.
This gets fuzzy because literally everything is correlated -- it may be possible to infer that you are the type of person who might have a tattoo there? But Grok doesn't have access to anything that hasn't already been shared. Grok is not undressing anybody, and the people using it to generate these images aren't undressing anybody; they are generating fake nudes which have no more relationship to reality than someone taking your public blog posts and attempting to write a post in your voice.
Sure, but if I make a fake picture of someone having sex with a horse, and someone else confirms "my gosh, that's really them! I recognize that mole", then I suppose the damage is the same.
At any rate, where some of this stuff is concerned, fake CSAM for example, it doesn't matter that it is "fake", as fakes of the material are also against the law in some places at least.
If the problem is just the use of the word "undressing", I suppose the usage of the word is entirely figurative, as nobody expects that Grok is actually going out and undressing anyone; the robots are not ready for that task yet.
> To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
Copyright is not “you own this forever because you deserve it”, copyright is “we’ll give you a temporary monopoly on copying to give you an incentive to create”. It’s transactional in nature. You create for society, society rewards you by giving you commercial leverage for a while.
Repeatedly extending copyright durations from the original 14+14 years to durations that outlast everybody alive today might technically be “limited times”, but it obviously violates the spirit of the law and undermines its goal. The goal was to incentivise people to create, and being able to have one hit that you can live off for the rest of your life is the opposite of that. Copyright durations need to be shorter than a typical career so that the incentive for creators to keep creating for a living remains and the purpose of copyright is fulfilled.
In the context of large language models, if anybody successfully uses copyright to stop large language models from learning from books, that seems like a clear subversion of the law – it’s stopping “the progress of science and useful arts” not promoting it.
(To be clear, I’m not referring to memorisation and regurgitation like the examples in this paper, but rather the more commonplace “we trained on a zillion books and now it knows how language works and facts about the world”.)
Duration of copyright is one way it was perverted, but the other direction was scope. In 1930, Judge Hand said in relation to Nichols v. Universal Pictures:
> Upon any work...a great number of patterns of increasing generality will fit equally well. At the one end is the most concrete possible expression...at the other, a title...Nobody has ever been able to fix that boundary, and nobody ever can...As respects plays, plagiarism may be found in the 'sequence of events'...these trivial points of expression come to be included.
And since then, a litany of judges and tests has expanded the notion of infringement towards vibes and away from expression:
- Hand's Abstractions / The "Patterns" Test (Nichols v. Universal Pictures)
- Total Concept and Feel (Roth Greeting Cards v. United Card Co.)
- The Krofft Test / Extrinsic and Intrinsic Analysis
- Structure, Sequence, and Organization (SSO) (Whelan Associates v. Jaslow Dental Laboratory)
- Abstraction-Filtration-Comparison (AFC) Test (Computer Associates v. Altai)
The trend has been to make infringement more and more abstract over time, but this makes testing it an impossible burden. How do you ensure you are not infringing any protected abstraction on any level in any prior work? Due diligence has become too difficult now.
Actually, plenty of activists, for example Cory Doctorow, have spent a significant amount of effort discussing why the DMCA, modern copyright law, DRM, etc. are all anti-consumer and how they encroach on our rights.
It's late so I don't feel like repeating it all here, but I definitely recommend searching for Doctorow's thoughts on the DMCA, DRM and copyright law in general as a good starting point.
But generally, the idea that people are not allowed to freely manipulate and share data that belongs to them is patently absurd and has been a large topic of discussion for decades.
You've probably at least been exposed to how copyright law benefits corporations such as Disney, and private equity, much more than it benefits you or me. And how copyright terms have been extended over and over by entities like Disney just so they could keep their beloved golden geese from entering the public domain for as long as possible; far, far longer than intended by the original spirit of the copyright act.
As I understand it, Lean is not a general-purpose programming language; it is a DSL focused on formal logic verification. Bugs in a DSL are generally easier to identify and fix.
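To give a flavour of what that kind of verification looks like, here is a toy Lean snippet (purely illustrative, nothing to do with the actual proofs under discussion); if the proof were wrong, the checker would simply reject it:

    -- State a property and let Lean check the proof mechanically.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b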
It seems one side of this argument desperately needs AI to have failed, and the other side is just saying that it probably worked but is not as important as presented; that it is really just a very cool working methodology going forward.
Obviously any observation I make is limited by my experience, but my experience of the last few years is that hardly anyone uses 'this' anymore in JavaScript, because everyone uses arrow functions, generally mandated by leadership. This observation is of course just limited to the places I've worked.
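For context, the style I mostly see looks something like this (purely illustrative), where 'this' is taken from the enclosing scope rather than from how the callback happens to be invoked:

    // The arrow-function style that avoids most 'this' surprises.
    const counter = {
      count: 0,
      start() {
        // The arrow function inherits `this` from start(), so this.count
        // is the counter object's property.
        setInterval(() => {
          this.count++;
        }, 1000);
        // A classic `function () { this.count++; }` callback here would not
        // see the counter object as `this`, which is the usual foot-gun.
      },
    };
    counter.start();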
So the example

    @Get('/')
    index() {
      return { message: this.greetingService.getGreeting() };
    }

was weird for me to see, although not unwelcome.
I am, however, not that great at TypeScript; again, in my experience most of the developers I encounter aren't either, and just use it as a lightweight structuring tool for JavaScript.
The @ decorator is thus always hard for me to reason about; it's never obvious what is actually going on.
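My rough mental model (which may well be off) is that the @ line is just a function that gets handed the method's details when the class is defined, something like this made-up sketch (assuming the legacy experimentalDecorators setting):

    // A hypothetical decorator that only logs which method it was applied to.
    function logWhenDefined(target: object, propertyKey: string | symbol) {
      console.log(`decorating ${String(propertyKey)}`);
    }

    class Example {
      @logWhenDefined
      index() {
        return 'hello';
      }
    }
    // Writing @logWhenDefined above index() amounts to calling
    // logWhenDefined(Example.prototype, 'index') when the class is evaluated.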
On the other hand I might be using this in the new year [despite my relative incompetence in TypeScript], if I decide to build my next project on Node (I am considering Elixir so I can use Phoenix, hence the "might"). All of which is a long-winded way of saying: looks pretty interesting, and nice work.
Thanks for the feedback, Bryan! I really appreciate the honesty, and you’ve touched on a very real shift in the JS ecosystem.
Regarding 'this' and the class-based approach: you’re absolutely right that the industry has leaned heavily towards functional patterns and arrow functions recently. The choice to use classes in Rikta is specifically to support Dependency Injection (DI).
In modular backend systems, DI makes it much easier to manage services (like the GreetingService in the example) and, more importantly, makes unit testing significantly simpler because you can easily swap real services for mocks.
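To give a rough sense of what that buys you, here is a heavily simplified sketch of the general pattern (not Rikta's literal API):

    class GreetingService {
      getGreeting() {
        return 'Hello from the real service';
      }
    }

    class GreetingController {
      // The service arrives through the constructor instead of being created inside.
      constructor(private greetingService: GreetingService) {}

      index() {
        return { message: this.greetingService.getGreeting() };
      }
    }

    // In a unit test you can hand the controller a stand-in instead of the real thing:
    const fakeService: GreetingService = { getGreeting: () => 'Hello from a mock' };
    const controller = new GreetingController(fakeService);
    console.log(controller.index()); // { message: 'Hello from a mock' }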
As for the decorators (@): I completely understand why they can feel like "magic" or be hard to reason about. In Rikta, we use them to declaratively attach metadata to your code (e.g., "this method should handle GET requests"). It keeps the boilerplate out of your logic, but I realize it requires a bit of a mental shift if you're used to a more literal, functional style.
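To demystify that a little, here is a heavily simplified illustration of what "attaching metadata" means in general (not Rikta's actual source; it assumes the legacy experimentalDecorators setting and the reflect-metadata package):

    import 'reflect-metadata';

    // A decorator factory: @Get('/') records "GET /" against the decorated method.
    function Get(path: string): MethodDecorator {
      return (target, propertyKey) => {
        Reflect.defineMetadata('route', { method: 'GET', path }, target, propertyKey);
      };
    }

    class GreetingController {
      @Get('/')
      index() {
        return { message: 'hello' };
      }
    }

    // At startup, a router can read the metadata back and wire up the handler.
    const route = Reflect.getMetadata('route', GreetingController.prototype, 'index');
    console.log(route); // { method: 'GET', path: '/' }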
Don't worry about the "TypeScript competence" part! One of my goals with Rikta is actually to provide enough structure so that you don't have to be a TS wizard to build something solid. The framework handles the "heavy lifting" of the types, leaving you to focus on your business logic.
Elixir and Phoenix are fantastic choices (concurrency there is hard to beat!), but if you do decide to stick with Node for your next project, I’d love to have you try Rikta. Feel free to reach out if you hit any walls—I’m always looking to make the "getting started" experience smoother for everyone, regardless of their TS experience level.
When I say "not unwelcome", I mean I have generally not been as traumatized by 'this' usage as some people have; its tricky nature is somehow enjoyable to my mind.