> Can people keep a good mental model of the repo without writing code?
This is the age-old problem of legacy codebases. The intentions and understanding of the original authors (the "theory of the program"[0]) are incredibly valuable, and codebases have always started to decline and gain unnecessary complexity when those intentions are lost.
Now every codebase is a legacy codebase. It remains to be seen if AI will be better at understanding the intentions behind another AI's creation than humans are at understanding the work of other humans.
Anyway, reading and correcting AI code is a huge part of my job now. I hate it, but I accept it. I have to read every line of code to catch crazy things like a function replicated from a library that the project already uses, randomly added to the end of a code file. Errors that get swallowed. Tautological tests. "You're right!" says the AI, over and over again. And in the end I'm responsible as the author, the person who supposedly understands it, even though I don't have the advantage of having written it.
[0] Programming as Theory Building, Peter Naur. https://pages.cs.wisc.edu/~remzi/Naur.pdf
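To make "swallowed errors" and "tautological tests" concrete, here is a contrived Go sketch of the kind of thing I mean (invented for illustration, not taken from any real PR):

```go
package example

import (
	"os"
	"strings"
	"testing"
)

// Swallowed error: the caller can never learn that the read failed.
func loadConfig(path string) string {
	data, err := os.ReadFile(path)
	if err != nil {
		return "" // error silently discarded
	}
	return string(data)
}

// Tautological test: it compares the function's output with itself,
// so it passes no matter what the function under test actually does.
func TestTrim(t *testing.T) {
	got := strings.TrimSpace("  hello ")
	if got != strings.TrimSpace("  hello ") {
		t.Errorf("unexpected result: %q", got)
	}
}
```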
Reviewing someone else's PR, who used Copilot but barely knows the language, has been a mixture of admiration that AI can create such a detailed solution relatively quickly, and frustration with the excess complexity, unused code, etc.
That was true of me as well, and at the same time, I was working alongside parents who worked 7:30-5:30 with a break to pick up the kids from school.
Nobody wants a "visit" from the founder, anyway. They want timely two-way flow of information, access and guidance on the occasions when they need it, and maybe (maybe) an occasional chance to hang out socially as a group with no reference to work. Nobody wants the founder randomly dropping by during work hours to assess morale.
I don't think it's possible to want to troll about those things without at least somewhat believing them. To troll about them at the expense of your career and reputation takes a deeper belief that goes beyond trolling.
I suspect that growing up in an era when the community, the newspapers, radio, and TV spewed religious, racist, and sexist content gradually increased sensory-memory-related neural activity, fostering biochemical and epigenetic effects that over time became effectively immutable.
Not sure why we are being coy about the triggers. The society of his youth and the biology are well documented.
I did. Different genetic expressions. The intelligence to realize language is just memes, not truth.
Scott Adams put himself on a pedestal above anyone else in his comics; he was Dilbert. The only smart person in the room. He was always a celebrity obsessed with his own existence. Little difference between him and Tim the Toolman or a Kardashian.
Low effort contributor whose work people laughed at due to social desirability bias. No big loss.
That's a really wild, miserable reading of the strip. For one, Adams himself was a manager, not an engineer, so he had more in common with the PHB, or even Dogbert/Catbert, than with Dilbert. For another, he explicitly said Dilbert was based on a specific, undisclosed person he knew. For yet another, many strips were based on anecdotes/stories sent to Adams by his readers.
It doesn't take even a serious reading of Adams to realize he was Dogbert, not Dilbert. He mocked Dilbert; he thought he was a loser who didn't understand how to manipulate the system.
I take the "premium" experience of Apple hardware for granted, and if I wanted to buy a Windows laptop, I'd have no idea which brands make similar quality laptops for Windows. I'm shocked at the Windows laptops I see in the wild.
Is that what Microsoft Surface laptops are? Did Microsoft get in the game themselves to make sure a premium-quality laptop is available for Windows?
Yes about the Surface part. Microsoft thought all the other 2-in-1 touch devices were terrible and felt it needed to do something itself.
But Microsoft stopped caring about hardware a long time ago. I would never recommend overpriced Surface Pro to anyone unless you have a special need for that form factor. As to Surface Laptop, you have so many better choices.
I think the "wheat domesticated humans" argument is about changes to our behavior, our culture and social structures, rather than genetic change. It isn't domestication in the evolutionary sense. It would be like keeping zebras on a farm with horses and doing your best to tame and train them. You might be able to change their behavior so that they behaved differently from wild zebras, but it wouldn't be domestication unless you bred them over generations to produce a population that was genetically different from wild zebras.
> I don't like the term "enums" because of the overloading between simple integers that indicate something (the older, more traditional meaning)
I disagree with this. I'm old as hell, and I learned programming in a context where enums were always ints, but I remember being introduced to int enums as "we're going to use ints to represent the values of our enum," not "enums are when you use ints to represent a set of values." From the very beginning of my acquaintance with enums, long before I encountered a language that offered any other implementation of them, it was clear that enums were a concept independent of ints, and ints just happened to be an efficient way of representing them.
"Enum" is literally defined as a numbering mechanism. While integers are the most natural type used to store numbers, you could represent those numbers as strings if you really wanted. The key takeaway is that a enum is a value, not a type.
The type the link was struggling to speak of seems to be a tagged union. Often tagged union implementations use enums to generate the tag value, which seems to be the source of confusion. But even in tagged unions, the enum portion is not a type. It remains just an integer value (probably; using a string would be strange, but not impossible I guess).
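A rough Go sketch of that distinction (names invented; Go has no built-in tagged unions, so this is just the common hand-rolled encoding): the enum supplies the tag values, and the union, the thing that actually carries a payload, is the type.

```go
package shape

import "math"

// The "enum" part: nothing but named integer values produced with iota.
type Kind int

const (
	KindCircle Kind = iota
	KindRect
)

// The tagged-union part: a type whose tag field holds one of those values.
type Shape struct {
	Kind   Kind    // the enum value is just the tag
	Radius float64 // meaningful when Kind == KindCircle
	W, H   float64 // meaningful when Kind == KindRect
}

func Area(s Shape) float64 {
	switch s.Kind {
	case KindCircle:
		return math.Pi * s.Radius * s.Radius
	case KindRect:
		return s.W * s.H
	default:
		return 0
	}
}
```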
Disagree. Enums are named for being enumerable which is not the same thing as simply having an equivalent number.
It’s incredibly useful to be able to easily iterate over all possible values of a type at runtime or otherwise handle enum types as if they are their enum value and not just a leaky wrapper around an int.
If you let an enum be any old number or make the user implement that themselves, they also have to implement the enumeration of those numbers and any optimizations that you can unlock by explicitly knowing ahead of time what all possible values of a type are and how to quickly enumerate them.
What's a better representation: letting an enum with two values be “1245927” or “0”, or maybe even a float or a string or whatever the programmer wants? Or should they be 0 and 1, compiled directly into the program in a way that lets the programmer only ever think about the enum values and not the implementation?
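A minimal Go sketch of what the second approach buys you (names invented; the sentinel-count trick is just one common convention):

```go
package color

type Color int

const (
	Red   Color = iota // 0
	Green              // 1
	Blue               // 2
	numColors          // sentinel: one past the last real value
)

// AllColors works only because the values are contiguous and start at 0.
// Give Red an arbitrary value like 1245927 and this kind of cheap,
// exhaustive iteration is gone.
func AllColors() []Color {
	out := make([]Color, 0, numColors)
	for c := Red; c < numColors; c++ {
		out = append(out, c)
	}
	return out
}
```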
IMO the first approach completely defeats the purpose of an enum. It’s supposed to be a union type, not a static set of values of any type. If I want the enum to be tagged or serializable to a string that should be implemented on top of the actual enumerable type.
They’re not mutually exclusive at all, it’s just that making enums “just tags” forces you to think about their internals even if you don’t need to serialize them and doesn’t give you enumerability, so why would I even use those enums at all when a string does the same thing with less jank?
> Enums are named for being enumerable which is not the same thing as simply having an equivalent number.
Exactly. As before, in the context of compilers, it refers to certain 'built-in' values that are generated by the compiler, which is done using an enumerable. Hence the name. It is an implementation detail around value creation and has nothing to do with types. Types exist in a very different dimension.
> It’s supposed to be a union type
It is not supposed to be anything, only referring to what it is — a feature implemented with an enumerable. Which, again, produces a value. Nothing to do with types.
I know, language evolves and whatnot. We can start to use it to mean the same thing as tagged unions if we really want, but if we're going to rebrand "enums", what do we call what was formerly known as enums? Are we going to call that "tagged unions", since that term now serves no purpose, confusing everyone?
That's the problem here. If we already had a generally accepted term for what was historically known as enums, then at least we could use that in place of "enums" and move on with life. But with "enums" trying to take on two completely different meanings (albeit somewhat adjacent ones, given how things are sometimes implemented), nobody has any clue what anyone is talking about, and there is no clear path forward on how to rectify that.
Perhaps Go even chose the "iota" keyword in place of "enum" in order to try to introduce that new term into the lexicon. But I think we can agree that it never caught on. If I started talking about iotas to people who have never used Go, would they know what I was talking about? I expect the answer is a hard "no".
Granted, more likely it was done because naming a keyword that activates a feature after how the feature is implemented under the hood is pretty strange when you think about it. I'm not sure "an extremely small amount" improves the understanding of what it is, but at least it tries to separate what it is from how it works inside the black box.
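For readers who haven't seen it, this is roughly what iota looks like in ordinary Go; it's just a per-block counter that feeds constant expressions:

```go
package example

type Weekday int

// iota starts at 0 in each const block and increments on every line,
// so these get 0 through 6 without writing the numbers out.
const (
	Sunday Weekday = iota
	Monday
	Tuesday
	Wednesday
	Thursday
	Friday
	Saturday
)

// It is only a counter, so it can feed any constant expression.
const (
	_  = iota             // skip the zero value
	KB = 1 << (10 * iota) // 1 << 10
	MB                    // 1 << 20
	GB                    // 1 << 30
)
```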
It feels obvious that that's where the term originated, but I've never seen it used as a definition. In a mathematical context, something is enumerable if it can be put into 1:1 correspondence with the integers, but it doesn't need to be defined by a canonical correspondence. This suggests that being a finite (in a programming context where the set of ints is finite) set of discrete values is the defining feature, not the representation.
> In most applications of int enums, the particular integers can be chosen at random
I'm not sure the definition of "enum" enforces how things are identified. Random choice would be as good as any other, theoretically. In practice, as it relates to programming, random choice is harder to implement due to the possibility of collisions. Much simpler is to just increment an integer, which is how every language I've ever used does it; even Rust, whose implementation is very similar to Go's.
But it remains that the key takeaway is that the enum is a value. The whole reason for using an enum is for the sake of runtime comparison. It wouldn't even make sense to be a type as it is often used. It is bizarre that it keeps getting called one.
Sum types can be put into 1:1 correspondence with the integers, barring the inclusion of a non-enumerable type in a language's specification that can be used in sum types. However, I would observe that this is generally a parlor trick, and it's fairly uncommon to simply iterate through a sum type. As is so often the case, the exceptions will leap to mind, and some people are already rushing to contradict me in the replies... but they come to mind precisely because they are the exceptions.
Yes, you can sensibly iterate on "type Color = Red | Green | Blue". I've written code to do the equivalent in various languages many times, and most complicated enumerations (in the old sense) that I write come equipped with some array holding all the legal values so people can iterate over them (if they are not contiguous for some reason), so I know it can be done and can be useful. But the instant you have a general number type, or goodness help you, a generalized String type, as part of your sum type, you aren't going to be iterating over all possible values. And the way in which you can put the sum types into a 1:1 correspondence won't match your intuition either, since you'll need to diagonalize on the type; otherwise any unbounded array/string will get you "stuck" in the mapping and you'll never get past it.
So while you can theoretically argue that it makes sense to call them an "enum", I don't like it, precisely because "enumerating" the "enum" types (being sum types here) is not, in general, a sensible operation. It is sensible in specific cases, but that's not really all that special. We don't generally name types by what a small percentage of the instances can do or are; we name them by what all instances can do or are. A degenerate sum type "type Value = Value" is still a sum, albeit a degenerate one of "1", but nobody ever enumerates all values of "type Email = Email { username :: String, domain :: String }". (Or whatever more precise type you'd like to use there. Just the first example that came to mind.) There are also cases where you actively don't want users enumerating your sum type, e.g., some sort of token that indicates secure access to a resource that you shouldn't be able to obtain, even in principle, by simply enumerating across your enum.
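To put that in Go terms (Go has no sum types, so the sealed-interface encoding below is only an approximation, and the names are invented):

```go
package paint

// A closed set of variants, approximated with a sealed interface.
type Color interface{ isColor() }

type Red struct{}
type Green struct{}
type Blue struct{}

func (Red) isColor()   {}
func (Green) isColor() {}
func (Blue) isColor()  {}

// Enumerable: three payload-free variants, so listing every value is trivial.
var AllColors = []Color{Red{}, Green{}, Blue{}}

// Add a single payload-carrying variant and "every possible value"
// stops being something you can sensibly iterate over.
type Custom struct{ Name string }

func (Custom) isColor() {}
```

The AllColors slice is exactly the "array that has all the legal values" trick mentioned above; the Custom variant is what breaks it.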
If it's called an "enum" I want to be able to "enum"erate it.
I was very happy with Pocket. After Mozilla discontinued it, I switched to Instapaper, which I barely use, for reasons I don't fully understand. All I know is that the Instapaper home screen feels unhelpful and off-putting to me.
The public health discourse about protein is in a weird place right now. The recommendations are higher than ever, yet people are constantly told not to think about protein, or to worry about excess protein intake instead.
Case in point: the Mayo Clinic article titled "Are you getting enough protein?"[0]
It claims that protein is only a concern for people who are undereating or on weight loss drugs, yet it cites protein recommendations that many people find challenging to meet (1.1 g/kg for active people, i.e. roughly 77 g a day for a 70 kg adult, and more if you're over 40 or doing strength or endurance workouts). To top it off, it's illustrated with a handful of nuts, which are a pretty marginal source of protein. It's bizarrely mixed messaging.
Different libraries composing well together is the default assumption in most of software development. Only in Javascript have people given up on that and accepted that libraries don't work together unless they've been specifically designed, or at least given a compatibility layer, for the framework they're being used in.
Qt widgets don't work together with GTK widgets, and nobody considers this a crisis. I'm pretty sure you can't use Unreal engine stuff in Unity. GUIs require a lot of stuff to compose together seamlessly, and it's hard to do that in a universal way.
HTMX achieves its composability by declining to have opinions about the hard parts. React's ecosystem exists because it abstracts client-side state synchronization, and that inherent complexity doesn't just disappear. When you still have to handle the impedance mismatch between "replace this HTML fragment" and "keep track of what the user is doing", you haven't escaped the complexity. You've just moved it to your server, and you've traded a proven, opinionated framework's solution for a bespoke one that you have to maintain yourself.
If anything, the DOM being a shared substrate means JS frameworks are closer to interoperable than native GUI toolkits ever were. At least you can mount a React component and a Vue component in the same document. They're incompatible with each other because they're each managing local state, event handling, and rendering in an integrated way. However, you can still communicate between them using DOM events. An HTMX date picker may compose better, but that's just because it punts the integration to you.