I've had great experiences being managed twice by very humble engineers who've made the transition to EM. Both were sacked within the year by their boss because they didn't play the corporate politics game.
It's so disheartening to learn that one works for a manager who doesn't care about having the most skilled team or the best product, but has instead selected for "Who will kiss up to me no matter what? Who will never tell me anything I don't want to hear?"
If you're following a pipe (such as `kubectl logs | less +F`), <C-c> is sent to every process in the pipeline (they're all in the foreground process group), so it stops less from following and kills the other process outright. Then you can't start following again with F, or load more data in with G.
Less provides an alternative of <C-x> to stop following, but that is intercepted by most shells.
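Roughly, the difference looks like this (pod name made up; the file-based variant is just one workaround, assuming you can spare a temp file):

```sh
# Following through a pipe: less reads the stream on stdin
kubectl logs -f my-pod | less +F
# <C-c> sends SIGINT to the whole foreground process group, so it
# interrupts less's follow *and* kills kubectl; F and G then have
# nothing left to read from.

# Following a file instead: <C-c> only interrupts the follow, and F
# resumes it; the producer is a background job in its own process group
kubectl logs -f my-pod > /tmp/pod.log &
less +F /tmp/pod.log
```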
Funnily enough, it literally tells you right there on the bottom line: “Waiting for data... (^X or interrupt to abort)”. No shame in not noticing; just another case of blindness to long-familiar messages, I guess.
By the shell, or by the kernel’s tty line discipline, or by the terminal emulator? AFAIU the shell is basically out of the picture while `less` is running.
> I can <C-z> while less is running to background that process using the shell, so the shell is clearly not completely gone.
The shell isn’t gone, but it isn’t active either, from what I understand.

The function of converting the user’s typing ^Z on a terminal (or a ^Z arriving on the master end of a pseudoterminal) into a SIGTSTP signal to the terminal’s foreground process group is[1] a built-in function of the kernel, much like for ^C and SIGINT or ^\ and SIGQUIT. (The use of ^Z resp. ^C or ^\ specifically, as well as the function being active at all, is configurable via a TTY ioctl, wrapped by termios, wrapped in turn by `stty susp` resp. `stty intr` or `stty quit`.) So is the default signal action of stopping (i.e. suspending) the process in response to that signal.

The shell just sees its waitpid() syscall return and handles the possibility of that having happened due to the process stopping rather than dying (by updating its job bookkeeping, making itself the foreground process group again, and reëntering the REPL).
I am not saying that doing job control by filtering the child’s input would be a bad design in the abstract; it is how terminal multiplexers work, for instance. I admit the idea of kernel-side support for shell job control is pretty silly; it’s just how it’s traditionally done in a Unix system.
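For what it's worth, the configurability part is easy to poke at from a terminal (typical Linux/termios behaviour; output format varies a bit between systems):

```sh
# The ^C / ^Z / ^\ bindings live in the tty driver, not the shell.
# stty is the userspace wrapper around the termios ioctls:
stty -a | tr ';' '\n' | grep -E 'intr|quit|susp'
#   intr = ^C   quit = ^\   susp = ^Z

# Rebind suspend to ^Y; this affects whatever program is in the
# foreground on this terminal, shell or not:
stty susp ^Y

# Or turn signal generation off entirely, after which ^C/^Z/^\ arrive
# at the foreground program as ordinary input bytes:
stty -isig
stty isig    # restore
```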
Whew! Advanced Unix system programming level stuff. I've dabbled a bit in that field, in C, on Unix, some older versions on PCs. It was fun. Any recommendation for a tutorial-style book, site, or blog on the subject, other than man pages and the Kerrisk book (TLPI, which is more of a reference), for Linux?
It’s not. It’s been through several editing rounds. (I was one of the editors.) In theory, we don’t have a problem with AI generated content if it meets our high editorial requirements, but all Tweag technical blogs go through a rigorous, manual review and editing process to keep standards high.
Reading through the post, phrases like "Why this matters for performance", the em-dashes, and the lists/bullet points screamed AI-written to me. I appreciate you saying it wasn't, but such is the fate of whoever wrote this that their writing now reads like an LLM's. I also used to like em-dashes and bullet lists, but I'm consciously avoiding them now.
My current company started on AWS/GCP for the credits. Right now we're on Lambda for the GPU prices and GKE for some webservers that we cba to move. We still dual-upload data to S3 and GCS (which isn't too expensive; it's effectively write-only and the auto-archive features work for us). Cloud SQL for the database, but pgBackRest to the other cloud.
We're not HA across clouds; we decided to chase RPO over RTO.
About once a week I see someone cut in even though the person is literally tailgating. The driver at the back has to brake+swerve to not cause a high speed collision. There's actually nothing you can do to prevent these people from getting ahead of you. Don't worry about what they'll do, it's insane anyways. Just try not to die.
I'm there right now at my current job. It's always the same engineer, and they always get a pass because (for some reason) they don't have to do design reviews for anything they do, but they go concern-troll everyone else's designs.
Last week, after three near-misses, from a corner this engineer cut, that would have brought down our service for hours if not days, I chaired a meeting to decide how we were going to improve this particular component. This engineer got invited, and spent the entire allocated meeting time spreading FUD about all the options we gathered. Management decided on inaction.
People think management sucks at hiring good talent (which is sometimes true, but I have worked with some truly incredible people), but one of the most consistent and costly mistakes I’ve observed over my career has been management's poor ability to identify and fire nuisance employees.
I don’t mean people who “aren't rockstars” or people for whom some things take too long, or people who get things wrong occasionally (we all do).
I mean people who, like you describe, systematically undermine the rest of the team’s work.
I’ve been on teams where a single person managed to derail an entire team’s delivery for the better part of a year, despite the rest of the team screaming at management that this person was taking huge shortcuts, trying to undermine other people’s designs in bad faith, bypassing agreed-upon practices and rules and then lying about it, pushing stuff to production without understanding it, etc.
Management continued to deflect and defer until the team lead and another senior engineer ragequit over management’s inaction and we missed multiple deadlines, at which point they started to realize we weren’t just making this up for fun.
Google's monorepo is in fact terabytes with no binaries. It does stretch the definition of source code though - a lot of that is configuration files (at worst, text protos) which are automatically generated.
Great advice! Personally, I got immense value from writing notes, but never when I wrote them during the lecture. Thirty minutes after the lecture has ended is the perfect time to sit down in the library and write notes on what the lecture was about. That gives enough time to reflect on the big picture, but not so much time that the details are lost.
My experience is different: TOML isn't obvious if there's an array that's far from the leaf data. Maybe that's what you experienced with the hierarchical data?
In my usage of it (where we use base and override config layers), arrays are the enemy. Overrides can only delete the array, not merge data in. TOML merely makes this code smell more smelly, so it's perfect for us.
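To make the array problem concrete, a toy example (file names and keys are invented, and I'm assuming the usual "override layer wins per key" merge):

```sh
cat > base.toml <<'EOF'
[service]
replicas = 2
allowed_hosts = ["a.example.com", "b.example.com"]
EOF

cat > override.toml <<'EOF'
[service]
# There's no way to say "append c.example.com" here; the override can
# only supply a whole new array (or drop the key entirely), so the
# base entries are lost on merge.
allowed_hosts = ["c.example.com"]
EOF
```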
Git worktrees are global mutable state; all containers on your laptop are contending for the same git database. This has a couple of rough edges, but you can work around them.
I prefer instead to make shallow clones for my LXC containers; then my main repo can just pull from those. This works just like you'd expect, without weird worktree issues. The container here actually provides a security boundary: with a worktree you'd need to mount the main repo's .git directory, and a malicious process could easily install a git hook to escape.
Cool. Operationally, are you using some host-resident non-shallow repo as your point of centralization for the containers, or are you using a central network-hosted repo (like github)?
If the former, how are you getting the shallow clones to the container/mount, before you start the containerized agent? And when the agent is done, are you then adding its updated shallow clones as remotes to that “central” local repository clone and then fetching/merging?
If the latter, I guess you are just shallow-cloning into each container from the network remote and then pushing completed branches back up that way.
I just use the host-side file path into my LXC container's filesystem. If you're using Docker you can just mount it. I only need the path twice (for clone, and for adding a git remote); after that I just use git to reference the remote for everything.
I probably don't have the perfect workflow here. Especially if you're spinning up/down Docker containers constantly. I'm basically performing a Torvalds role play, where I have lieutenant AI agents asking me to pull their trees.
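Roughly, the moving parts look like this (paths and names are made up; adjust for wherever your container's filesystem is visible from the host):

```sh
# One-time, per container: a cheap, self-contained shallow clone into
# the container's filesystem (file:// so --depth actually applies)
git clone --depth 1 file:///home/me/project \
    /var/lib/lxc/agent1/rootfs/work/project

# In the main repo on the host: treat that clone as an ordinary remote
cd /home/me/project
git remote add agent1 /var/lib/lxc/agent1/rootfs/work/project

# When the agent has committed something, pull it back like any remote
git fetch agent1
git merge agent1/some-feature-branch
```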