+1 for Tauri, I've been using it for my recent vibe-coded experimental apps. Making rust the "center of gravity" for the app lets me use the best of all worlds:
- declarative-ish UI in typescript with react
- rust backend for performance-sensitive operations
- I can run a python sidecar, bundled with the app, that lets me use python libraries if I need it
If I can and it makes sense to, I'll pull functionality into Rust progressively, but this gives me a ton of flexibility and lets me use the best parts of each language/platform.
It's fast too, and it doesn't use a ton of memory like Electron apps do.
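To make the "Rust as the center of gravity" idea concrete, here's a minimal sketch of the kind of performance-sensitive function you'd move to the Rust side. The function and its name are hypothetical; the Tauri-specific wiring is described in comments so the snippet stands alone without the crate:

```rust
use std::collections::HashMap;

// In an actual Tauri app this function would carry the `#[tauri::command]`
// attribute and be registered with the app builder's `invoke_handler`, so the
// TypeScript frontend could call it via `invoke("word_frequencies", { text })`.
// Here it's plain Rust so the sketch is self-contained.
pub fn word_frequencies(text: &str) -> HashMap<String, usize> {
    let mut counts = HashMap::new();
    for word in text.split_whitespace() {
        *counts.entry(word.to_lowercase()).or_insert(0) += 1;
    }
    counts
}
```

The frontend stays declarative React/TypeScript; only the hot path crosses the bridge.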
Also, Rust's strong and strict type system keeps Claude honest. It seems as if the big LLMs have trained on a lot of poorly written TypeScript, because they tend to reach for type assertions such as `as any` and eslint-disable comments.
I had to add strict ESLint and TypeScript rules to keep guardrails on the coding agents.
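As an example of the kind of guardrails that help, rules like these (from the typescript-eslint plugin; the exact config shape depends on your setup) block the escape hatches agents tend to reach for:

```json
{
  "rules": {
    "@typescript-eslint/no-explicit-any": "error",
    "@typescript-eslint/ban-ts-comment": "error",
    "@typescript-eslint/no-unsafe-assignment": "error"
  }
}
```

Combined with `"strict": true` in tsconfig, the agent gets an immediate, detailed error instead of a silently weakened type.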
I've been cruising on Rust too, not just because it works great with LLMs but also because of the great interop:
- I can build SPAs with typescript and offload expensive operations to a rust implementation that targets wasm
- I can build a multi-platform bundled app with Tauri that uses TS for the frontend, rust for the main parts of the backend, and it can load a python sidecar for anything I need python for (ML stuff mainly)
- Haven't dived too much into games but bevy seems promising for making performant games without the overhead of using one of the big engines (first-class ECS is a big plus too)
It ended up solving the problem of wanting to use the best parts of all of these different languages without being stuck with the worst parts.
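For the SPA case above, the offloaded Rust function is often just a tight numeric loop. A sketch, assuming the wasm path (function name hypothetical; the wasm-bindgen wiring is noted in comments so the snippet stands alone):

```rust
// Compiled for the browser, this function would carry a `#[wasm_bindgen]`
// attribute and be built for the wasm32 target (e.g. via wasm-pack), making it
// importable from TypeScript like any other module. The body is plain Rust so
// the sketch is self-contained.
pub fn dot_product(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}
```

The TypeScript side keeps the UI logic and hands the arrays across the boundary only when the work is expensive enough to justify the copy.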
Maybe have it build some toy apps just for fun! My wife and I were talking once about typing speed and challenged each other to a typing competition. The existing ones I found weren't very good and were riddled with ads, so I had Claude build one for us to use.
Or maybe ask yourself what you like to do outside of work, then build an app or Claude skill to help with that.
If you like to cook, maybe try building a recipe manager for yourself. I set up a repo to store all of my recipes in cooklang (similar to markdown), and set up claude skills to find/create/evaluate new recipes.
Building the toy apps might help you come up with ideas for larger things too.
It's additional context that can be loaded by the agent as needed. Generally it decides to load based on the skill's description, or you can tell it to load a specific skill if you want to.
So for your example, yes, you might tell the agent "write a fantasy story" and you might have a "storytelling skill" that explains things like character arcs, tropes, etc. You might have a separate "fiction writing" skill that defines writing styles, editing, consistency, etc.
All of this stuff is just 'prompt management' tooling though and isn't super complicated. You could just paste the skill content into your context and go from there; this just provides a standardized spec for how to structure these on-demand context blocks.
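A sketch of what such a skill file might look like, using the SKILL.md frontmatter convention (the name, description, and body here are made up for illustration):

```markdown
---
name: storytelling
description: Guidance on character arcs, tropes, and plot structure.
  Load when the user asks for fiction or story writing.
---

# Storytelling

When writing fiction, give each major character a clear arc: a want,
an obstacle, and a change. Prefer subverting a trope to playing it
straight...
```

The frontmatter description is what the agent reads to decide whether the body is worth loading into context.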
I've been coming around to the view that the time spent code-reviewing LLM output is better spent creating evaluation/testing rigs for the product you are building. If you're able to highlight errors in tests (unit, e2e, etc.) and send the detailed error back to the LLM, it will generally do a pretty good job of correcting itself. It's a hill-climbing system; you just have to build the hill.
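The loop being described can be sketched like this, with a closure standing in for the model call and a test runner standing in for the hill (all names hypothetical):

```rust
// Sketch of a test-driven correction loop: the "hill" is the test suite, and
// the model climbs it by reacting to detailed failure messages. `generate`
// stands in for an LLM call; `run_tests` returns either success or a detailed
// error string that becomes the next prompt.
pub fn climb<F>(
    mut generate: F,
    run_tests: impl Fn(&str) -> Result<(), String>,
    max_rounds: usize,
) -> Option<String>
where
    F: FnMut(&str) -> String,
{
    let mut feedback = String::from("initial task");
    for _ in 0..max_rounds {
        let candidate = generate(&feedback);
        match run_tests(&candidate) {
            Ok(()) => return Some(candidate),     // test suite passes: done
            Err(detail) => feedback = detail,     // send the full error back
        }
    }
    None // ran out of rounds without a passing candidate
}
```

The quality of `run_tests`'s error detail is what determines how well the loop converges, which is why building the rig is the high-leverage work.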
Nah, I think the implementation is just off. Graphics need HW acceleration for modern resolutions, but the whole thing should be fine in vanilla JS. Afaik wasm is just an abstraction on top of a jsvm
> Afaik wasm is just an abstraction on top of a jsvm
it is, but as a compiler target there's tons of opportunity for automatic optimization -- in my experience wasm (from Rust) tends to be faster than hand-written JS for the same function (although, I'll admit, JavaScript is far from my strongest language, so take that with a grain of salt)
Wasm gives you the memory model of C or another low-level language: a linear heap and zero metadata. That alone makes it vastly faster and easier to JIT than JavaScript.
Seconded, I would be interested in knowing people's workflows and experiences developing macOS and iOS apps with Claude, etc.
From the repo here, it looks like it's just using Swift command line tools, which might work well enough with Cursor/VS Code/etc. for small projects. You won't have Xcode's other features, but maybe that's fine for an agentic-first development workflow.
Since LLMs were introduced, I've been of the belief that this technology actually makes writing a *more* important skill to develop, not less. So far that belief has held. No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.
There may be a future AI-based system that can retain so much context it can kind of just "get what you mean" when you say off-the-cuff things, but I believe that a user that can think, speak, and write clearly will still have a skill advantage over one that does not.
FWIW, I've heard many people say that with voice dictation they ramble to LLMs and by speaking more words can convey their meaning well, even if their writing quality is low. I don't do this regularly, but when I have tried it, it seemed to work just as well as my purposefully-written prompts. I can imagine a non-technical person rambling enough that the AI gets what they mean.
That's a fair counterpoint, and it has helped translate my random thoughts into more coherent text. I haven't taken advantage of dictation much either, so maybe I'll give it a try. I still think the baseline skill that writing gives you translates to an LLM-use skill, which is thinking clearly and knowing how to structure your thoughts. Maybe folks can get that skill in other ways (oration, art, etc.). I don't need to give it essays, but I do need to give it clear instructions. Every time it spins off and does something I don't want, it's because I didn't clarify my thoughts correctly.
Setting up SpeechNote with Kokoro is one of the best things I've ever done.
I can speak faster than I type, and the flow state is much smoother when you can just dump a stream of consciousness into the context window in a matter of seconds. And the quality of the model is insane for something that runs locally, on reasonable hardware no less.
Swearing at an LLM is also much more fun when done verbally.
The prompt the user enters is actually not the prompt. Most agents will have an additional background step to use the user's prompt to generate the actual, detailed instructions, which is then used as the actual prompt for code generation. That's how the ability to build a website from "create a website that looks like twitter" is achieved.
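The expansion step being described can be sketched as a simple transform. In a real agent the expansion is itself another model call; here a template stands in for it, and the wording is purely illustrative:

```rust
// Sketch of the "prompt expansion" step: the user's short request is first
// turned into a detailed meta-prompt, and *that* is what drives code
// generation. The template below is a stand-in for what would actually be a
// planning call to the model.
pub fn expand_prompt(user_prompt: &str) -> String {
    format!(
        "Plan the following request before writing any code.\n\
         Request: {user_prompt}\n\
         Produce: target stack, file layout, and step-by-step implementation notes."
    )
}
```

The user only ever sees their one-liner; the detailed intermediate prompt is internal to the agent.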
My 85 year-old father could probably resolve 90% of his personal technology problems using an LLM. But for the same reason every phone call on these subjects ends with me saying "can it wait until I come over for lunch next week to take a look?", an LLM isn't a viable solution when he can't adequately describe the problem and its context.
Yeah, we've already seen that over the past few decades. It's both a limitation and a benefit, but until recently it was the only thing we had (well that, and just hiring another person to act as an LLM for us). LLMs are an upgrade.
> No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.
This definitely agrees with my experience. But a corollary is that written human language is very cumbersome to encode some complex concepts. More and more I give up on LLM-assisted programming because it is easier to express my desires in code than using English to describe what forms I want to see in the produced code. Perhaps once LLMs get something akin to judgement and wisdom I can express my desires in the terms I can use with other experienced humans and take for granted certain obvious quality aspects I want in the results.
> So far that belief has held. No matter how advanced the model gets, you'll get better results if you can clarify your thoughts well in written language.
I've heard it well described as a K-shaped curve. Individuals who already know things will use this tool to learn and do many more things. Individuals who don't know a whole lot aren't going to learn or do a whole lot with this tool.