Hacker News | quintu5's comments

It’s all available in their GitHub repo.

One major downside of native rendering is the lack of layout consistency if you’re editing natively and then sharing anywhere else where the diagram will be rendered by mermaid.js.


Yes, that's true. For my use case, though, I want to render the diagram out to a PNG and embed it in a Confluence page.


This is a perfect use case! The v0.3.0 crate will have:

- parse() → AST
- layout() → positioned elements
- render_svg() → SVG string
- render_png() → via resvg (no browser needed)

CLI usage would be something like:

mermaid-rs diagram.mmd -o diagram.png
# or pipe from stdin
cat diagram.mmd | mermaid-rs --format svg > output.svg

For your mark integration, you'd be able to call it as a subprocess or use it as a Rust library directly if you're building in Rust.
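A minimal sketch of that subprocess route, assuming the binary name and `-o` flag from the CLI example above (both still hypothetical until v0.3.0 ships), with a guard so a publish script can degrade gracefully when the tool isn't installed:

```python
import shutil
import subprocess
from pathlib import Path

def render_diagram(src: Path, out: Path) -> bool:
    """Render a Mermaid file to PNG by shelling out to mermaid-rs.

    Returns False when the binary isn't on PATH, so the caller can
    skip diagram rendering instead of crashing mid-publish.
    """
    exe = shutil.which("mermaid-rs")
    if exe is None:
        return False
    subprocess.run([exe, str(src), "-o", str(out)], check=True)
    return True
```

The `which` check up front keeps the error mode explicit rather than letting `subprocess.run` raise `FileNotFoundError`.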

If you want to follow progress or have input on the API, feel free to open an issue on the repo!


Markdown viewing is one of the core use-cases I had in mind when building the Tachi Code browser extension (https://tachicode.com/).

Open a raw .md file in your browser and it'll automatically open in a side-by-side editor/preview. If viewing is all you want, you can set the default preview mode for markdown files to be fullscreen.


Thanks. That's a start, I guess!


Well, you see, he ran it through an LLM, but LLMs are lossy, so who can say whether the output was a direct result of the copyrighted code, or whether the model focused on his unique prompting words and conjured the output from its own latent space without referencing copyrighted input at all? /s

Alternatively, we could take the model makers’ view and say that if they didn’t want their code reused, they wouldn’t have made it publicly accessible on the internet.


The model makers' opinion is no different from "if she didn't want to be ogled/cat-called, she shouldn't have been wearing [insert literally any type of clothing here] when travelling from A to B in a public place"


You'll get no argument from me on that interpretation.


Maybe it's time to start dusting off the ol' Jenkins-fu?

Charging per minute for self-hosted runners seems absolutely bananas!


I’ve used this library on a couple of projects with great results: one a drag-and-drop IaC builder, the other a GitHub Actions-like task execution graph viewer.


There’s a pattern to emoji use in docs, especially when combined with one or more other common LLM-generated documentation patterns, that makes it plainly obvious that you’re about to read slop.

Even when I create the first draft of a project’s README with an LLM, part of the final pass is removing those slop-associated patterns to clarify to the reader that they’re not reading unfiltered LLM output.


For larger tasks that I know are parallelizable, I just tell Claude to figure out which steps can be parallelized and then have it go nuts with sub-agents. I’ve had pretty good success with that.


I need to try this because I've never deliberately told it to, but I've had it do it on its own before. Now I'm wondering if that project had instructions somewhere about that, which could explain why it happened.


It sometimes does it on its own, but to get it to do so consistently, it needs to be told. Doubly so if you want it to split off more than one sub-agent.

This works great for refactors that touch a large number of files. A refactor that might otherwise take 30 minutes, a persistent checklist, and possibly multiple conversations can be one-shot in two minutes with a single prompt.
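The fan-out pattern is the same one you'd use in ordinary code: split the work into independent per-file steps and run them concurrently. A rough plain-Python analogy (the old_name → new_name rename is a made-up example, not anything Claude-specific):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def refactor_file(path: Path) -> str:
    # One independent unit of work: edit a single file in place.
    text = path.read_text()
    path.write_text(text.replace("old_name", "new_name"))
    return path.name

def refactor_all(paths: list[Path]) -> list[str]:
    # Fan the per-file edits out across workers, like sub-agents
    # each taking a slice of the refactor.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(refactor_file, paths))
```

The key property, for code or for sub-agents, is that the per-file steps don't depend on each other's output.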


More like they can better react to user input within their context window. With older models, the value of that additional user input would have been much more limited.


Having never owned a tablet, finding out now that iPad didn’t have a native calculator until 2024 is shocking!


yeah, but can't you use the iPad's built-in ssh to just use bc on a Linux box that you remote into?


Is this a parody of the Dropbox comment, or is this sincere? I don’t think iPads have built-in ssh… and even if they do, this is a far cry from an app. It assumes you have a Linux machine on your local network and are willing and able to set up ssh to connect to it, as well as learn command-line tooling for making calculations.

