Hacker News | knlb's comments

https://knlb.dev -- new digital garden; https://explog.in -- previous blog, leaving it as is for now though I expect I'll slowly absorb it into the garden.


> Do you debug JVM bytecode? V8's internals? No. You debug at your abstraction layer

In the fullness of time, you end up having to. Or at least I have. Which is why I always dislike additional layers and transforms at this point.

(e.g. when I think about React Native on Android, I hear "now I'll have to be excellent at React/JavaScript and Android/Java/Kotlin and C++ to be able to debug the bridge", not "I can get away with just JavaScript".)


Exactly, yes, that's what I was going to comment. You sometimes need to debug at every layer. All abstractions end up leaking in some way. It's often worth it, but it does not save us from the extra cognitive load or from learning the layers underneath.

I'm not necessarily against the approach shown here, reducing tokens for more efficient LLM generation; but if this catches on, humans will read and write it, will write debuggers and tooling for it, etc. It will definitely not be a perfectly hidden layer underneath.

But why not, for programming models, just select tokens that map concisely onto existing programming languages? Wouldn't that be just as effective?


I've never really had the patience to fiddle a lot with the hardware, but have always wanted to use e-ink screens for working, especially on the move. (I tried the hacks for the reMarkable Pro on a friend's recommendation, but ultimately never kept using it.)

The Boox Palma with Android + (Obsidian | Termux + Tailscale when I need it) has actually worked out well for me for writing | programming with a portable keyboard (NuPhy). I even did this year's Advent of Code on it. (https://knlb.dev/logs/aoc25 has some photos.)


I used to be scared of Awk, and then I read through the appendix / chapters of "More Programming Pearls" (https://www.amazon.com/More-Programming-Pearls-Confessions-C...) and it became a much easier language to reason about.

The structure can be a bit confusing if you've only seen one-liners, because it has a lot of defaults that kick in when not specified.
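To make those defaults concrete, here's a tiny shell sketch (the data is made up by me): an omitted pattern means "every line", an omitted action means "print the line", and BEGIN/END blocks run before and after the input.

```shell
# awk programs are a list of `pattern { action }` pairs; either half is optional.
printf 'a 1\nb 2\nc 3\n' | awk '
  BEGIN  { total = 0 }      # runs once, before any input
  $2 > 1 { total += $2 }    # action runs only on lines whose 2nd field > 1
  END    { print total }    # runs once, after the last line
'
# prints 5
```

And with the action omitted, the default kicks in: `awk '$2 > 1'` just prints the matching lines.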

The pleasant surprise from learning to use awk was that bpftrace suddenly became much more understandable and easier to write as well, because it's partially inspired by awk.


I learned the basics of AWK in a few minutes from here: https://learnxinyminutes.com/awk/ — and I agree with you, it was worth it!


Even shorter is this comic from Julia Evans: https://wizardzines.com/comics/awk/


The whole post feels like it was edited/modified by ChatGPT; `What we opened — in English, not a changelog`, `Why it matters (no fluff):`, `We are big believers in notebooks — full stop` are patterns that always make me feel like an LLM wrote it (sentence followed by a marketing qualifier).

I really liked Deepnote the product when I last used it, but the post definitely feels off.


Been thinking about this lately, because I find my writing style is a little bit like that of annoying, slightly sycophantic, overly-hyphenated-with-an-emdash-here-and-there LLMs. Since LLMs are trained on the Internet, wouldn't some portion of posts that fall in the middle of the "voice bell curve" always sound like LLMs, and thus be open to this critique even when they are 100% human-written?


This is just generic LinkedIn marketing prose.


Well, you are probably right. Pasting the announcement into GPTZero yields: 16% AI-generated, 84% mixed, 0% human.


"We are big believers in notebooks — full stop" could be more succinctly written as "We are big believers in notebooks."


I don't think an LLM wrote it; this has been their brand voice for a long time...


Perfetto is definitely one of my favorite tools to use ever, thank you for working on it!

My personal favorite tool I've built this year dynamically generates a trace from a SQL query and allows quickly combining queries -- something like `SELECT timestamp, track, name, ` etc., where column names get transformed to packets automatically.

That way I can overlay multiple py-spy traces and instrumentation into a dynamically generated Perfetto trace, loaded into a Perfetto iframe using the ping/pong mechanism at https://perfetto.dev/docs/visualization/deep-linking-to-perf....


Thanks for the nice words! Your tool sounds super neat!

We're looking at integrating something similar into Perfetto itself where, for a synthetically generated trace, you can say "run this SQL query and add a debug track for it on trace load". See the discussion on https://github.com/google/perfetto/issues/1342 :)


Thanks for the post, this is pretty cool!

I feel like I've seen CUPTI have fairly high overhead depending on the CUDA version, but I'm not very confident -- did you happen to benchmark different workloads with CUPTI on/off?

---

If you're taking feature requests: a way to subscribe to -- and get tracebacks for -- CUDA context creation would be very useful. I've definitely been surprised to find processes on the wrong GPU, and being able to easily figure out where they came from would be great.

I did a hack using LD_PRELOAD to subscribe to and publish the event, but never really followed through on getting the Python stack trace.


CUPTI is kind of a choose-your-own-adventure thing: as you subscribe to more stuff, the overhead goes up. This is a fairly minimalist profiler that just subscribes to the kernel launches and nothing else. Still, to your point, depending on kernel launch frequency/granularity it may be higher overhead than some would want in production. We have plans to address that with probabilistic sampling instead of profiling everything, but wanted to get this into folks' hands and get some real-world feedback first.


Another vote for the Glove80: I used the Kinesis Advantage2 for 10 years (after a few initial signs of finger pain developed), then tried the new Advantage360, and recently got the Glove80 so I could easily travel with it, and fell in love with the keyboard.

It definitely doesn't feel as solid as the Kinesis or ErgoDox (which I used intermittently as well), but it is the most comfortable keyboard I've used, the LEDs are actually useful (for showing battery life and bluetooth connections), and there are enough keys (including function keys). I don't like having to reason about layers at all; I want to be able to smoothly transition to my laptop's keyboard in a pinch.


My current personal site (https://knlb.dev) is built with a single 500-line Python file that starts with

  #!/usr/bin/env -S uv run --script
  # -*- mode: python -*-
  #
  # /// script
  # requires-python = ">=3.12"
  # dependencies = [
  #    "pyyaml", "flask", "markdown-it-py",
  #    "linkify-it-py", "mdit-py-plugins"
  # ]
  # ///
The HTML templates & CSS are baked into the file, which is why it's so long. Flask is in there so that I can have a live view locally while writing new notes.

uv's easy dependency definition really made it much easier to manage these. My previous site was org exported to html and took much more effort.
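For anyone unfamiliar, that `# /// script` header is PEP 723 inline script metadata; uv reads it and runs the file in a throwaway environment with the listed dependencies. A minimal stdlib-only demo of my own (not the actual site file):

```shell
cat > /tmp/pep723_demo.py <<'EOF'
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
print("hello from a self-contained script")
EOF
chmod +x /tmp/pep723_demo.py
# With deps listed, you'd run it via uv; with none, plain python works too:
python3 /tmp/pep723_demo.py
```

With real dependencies in the list, `uv run --script /tmp/pep723_demo.py` (or just executing the file, thanks to the shebang) fetches them on first run.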

(With the conceit that the website is a "notebook", I call this file "bind".)


Do you have the whole thing available? I love SSGs, especially personal ones.


Not the latest one just yet, I still want to play with it a lot more / bake in some more features -- but it's also terribly simple and doesn't do much at the moment.

The org mode one is at https://explog.in/config.html.


My similar trick is to rely on the tmux scrollback and pipe tokenized output into fzf, so I can easily autocomplete in zsh against anything on the visible tmux screen:

https://www.threads.com/@kunalb_/post/C6ZQIOVpwMd
https://gist.github.com/kunalb/abfe5757e89ffba1cf3959c9543d9...

This has been really useful.
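For anyone wanting to try it, a rough zsh sketch of the idea (the widget name and keybinding are my own guesses, not from the gist; assumes tmux and fzf are installed):

```shell
# Capture the visible tmux pane, split it into unique tokens, pick one with
# fzf, and append it to the current zsh command line.
_tmux_words() {
  tmux capture-pane -p | tr -cs '[:alnum:]_./-' '\n' | sort -u | grep -v '^$'
}
tmux-word-complete() {
  local word
  word=$(_tmux_words | fzf --height 40%) || return
  LBUFFER+=$word
  zle redisplay
}
zle -N tmux-word-complete
bindkey '^X^T' tmux-word-complete   # assumption: Ctrl-X Ctrl-T is free
```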


I have been using xterm's default dabbrev-expand to do the same via "Alt-/" ( https://github.com/ttsiodras/dotfiles/blob/master/.Xresource... ), which works regardless of what shell you're in.

But I was curious about your approach... so I asked Claude to convert it to bash: https://claude.ai/public/artifacts/01a49347-1617-4afe-8476-0...

Works like a charm; I pinned it to Ctrl-k, which was free in my setup. I guess I don't have to depend on XTerm for this any more :-)

Thanks!


Fantastic. Thanks for sharing!


love it!

