The author is still early in their exploration and has some definite mistakes in here. Probably the biggest one is around error handling: the assumption that the only way to interact with an error is through the error interface itself. That is intended as a baseline interaction, a fallback for when nothing else is appropriate, such as just slamming an error into a log. If you want to interact with specific errors, you should use the various functions in the errors package [1] to check for specific types, and then use those specific types for whatever they support. Go error support is quite good; you can return an error that says "the user was not found, the file the user was supposed to be found in was not found, and while trying to log this the log failed to accept the write" as several error types composed together, and any consumer using the errors package can pick out the bits it understands without having to understand the error binding them together or the components it doesn't care about. You should never use string manipulation to check errors unless you have no choice, and whatever library left you with no choice should have an issue filed against it. It doesn't come up often, but it does sometimes come up; most recently I had the AWS SDK emitting an error I could only use string functions on, but I think they've since fixed it.
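A rough sketch of what that composition looks like in practice. The sentinel error and the file path here are made up for illustration, but errors.Is, errors.As, and errors.Join (Go 1.20+) are the actual standard library tools:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// ErrUserNotFound is an illustrative sentinel error for this sketch.
var ErrUserNotFound = errors.New("user not found")

func loadUser(path string) error {
	if _, err := os.Open(path); err != nil {
		// Compose the domain-level error with the underlying file error.
		// A caller can pick out either piece without knowing about the other.
		return errors.Join(ErrUserNotFound, err)
	}
	return nil
}

func main() {
	err := loadUser("/no/such/file")

	// Check for the specific sentinel, regardless of what it's wrapped in.
	if errors.Is(err, ErrUserNotFound) {
		fmt.Println("no such user")
	}

	// Pull out the typed file error if one is in there.
	var pathErr *fs.PathError
	if errors.As(err, &pathErr) {
		fmt.Println("underlying file problem:", pathErr.Path)
	}
}
```

Wrapping with fmt.Errorf and the %w verb feeds the same machinery when you want to add context rather than join independent failures.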
I don't like the term "enums" because of the overloading between simple integers that indicate something (the older, more traditional meaning) and using them to mean "sum types", when we already have the perfectly sensible term "sum type", which doesn't conflict with the older meaning. If you want sum types, a better approach is to combine the sort of code structure described here: https://appliedgo.net/spotlight/sum-types-in-go/ with a linter to enforce it: https://github.com/alecthomas/go-check-sumtype , which is even better used as a component of golangci-lint: https://golangci-lint.run/
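The shape of that code structure is roughly a sealed interface plus a type switch; a minimal sketch (the names here are mine, not from the article), with the switch being what go-check-sumtype checks for exhaustiveness:

```go
package shapes

import "math"

// Shape is "sealed": the unexported method means only types in this package
// can implement it, so the set of variants is closed.
type Shape interface{ isShape() }

type Circle struct{ Radius float64 }
type Square struct{ Side float64 }

func (Circle) isShape() {}
func (Square) isShape() {}

// Area switches over the variants; a linter like go-check-sumtype can flag
// switches that forget a case when a new variant is added.
func Area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return math.Pi * v.Radius * v.Radius
	case Square:
		return v.Side * v.Side
	}
	return 0 // unreachable while the switch stays exhaustive
}
```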
I'd also add my own warnings about reaching for sum types when they aren't necessary, in a language where they are not first class citizens: https://jerf.org/iri/post/2960/ but at the same time I'd also underline that I do use them when appropriate in my Go code, so it's not a warning to never use them. It's more a warning that sum types are not somehow Platonically ideal. They're tools, subject to cost/benefit analysis just like anything else.
> If you want sum types, a better approach is to combine the sort of code [...]
I'm currently of the opinion that where you truly need this type of thing, write tests that use the ast package to validate that your expectations hold. That way you don't need to do anything strange with the code, and logic failure will show up alongside all of your other logic failures.
While it does venture into implementation details that shouldn't be tested, Go offers a discriminator between your actual tests and throwaway tests (i.e. `package foo_test` vs. `package foo`), so as long as you've clearly marked the intent I find this to be an acceptable tradeoff. As the implementation changes and you no longer need that validation, others will know that your throwaway tests were intended as such.
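A sketch of what such a test could look like, assuming the sealed-interface pattern from upthread; the marker method name `isShape` and the allowed variant list are illustrative, and the parsing is the standard go/parser + go/ast machinery:

```go
package shapes_test

import (
	"go/ast"
	"go/parser"
	"go/token"
	"testing"
)

// TestOnlyKnownVariants parses the package source and fails if anything
// outside the expected set implements the sealed marker method.
func TestOnlyKnownVariants(t *testing.T) {
	known := map[string]bool{"Circle": true, "Square": true}

	fset := token.NewFileSet()
	pkgs, err := parser.ParseDir(fset, ".", nil, 0)
	if err != nil {
		t.Fatal(err)
	}
	for _, pkg := range pkgs {
		for _, file := range pkg.Files {
			for _, decl := range file.Decls {
				fn, ok := decl.(*ast.FuncDecl)
				if !ok || fn.Recv == nil || fn.Name.Name != "isShape" {
					continue
				}
				// Unwrap a pointer receiver, then check the type name.
				recv := fn.Recv.List[0].Type
				if star, ok := recv.(*ast.StarExpr); ok {
					recv = star.X
				}
				if ident, ok := recv.(*ast.Ident); ok && !known[ident.Name] {
					t.Errorf("unexpected variant %s implements isShape", ident.Name)
				}
			}
		}
	}
}
```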
> I don't like the term "enums" because of the overloading between simple integers that indicate something (the older, more traditional meaning)
I disagree with this. I'm old as hell, and I learned programming in a context where enums were always ints, but I remember being introduced to int enums as "we're going to use ints to represent the values of our enum," not "enums are when you use ints to represent a set of values." From the very beginning of my acquaintance with enums, long before I encountered a language that offered any other implementation of them, it was clear that enums were a concept independent of ints, and ints just happened to be an efficient way of representing them.
"Enum" is literally defined as a numbering mechanism. While integers are the most natural type used to store numbers, you could represent those numbers as strings if you really wanted. The key takeaway is that a enum is a value, not a type.
The type the link was struggling to speak of seems to be a tagged union. Often tagged union implementations use enums to generate the tag value, which seems to be the source of confusion. But even in tagged unions, the enum portion is not a type. It remains just an integer value (probably; using a string would be strange, but not impossible I guess).
Disagree. Enums are named for being enumerable which is not the same thing as simply having an equivalent number.
It’s incredibly useful to be able to easily iterate over all possible values of a type at runtime or otherwise handle enum types as if they are their enum value and not just a leaky wrapper around an int.
If you let an enum be any old number or make the user implement that themselves, they also have to implement the enumeration of those numbers and any optimizations that you can unlock by explicitly knowing ahead of time what all possible values of a type are and how to quickly enumerate them.
What's a better representation: letting an enum with two values be “1245927” or “0”, or maybe even a float or a string, whatever the programmer wants? Or should they be 0 and 1, or compiled directly into the program in a way that allows the programmer to only ever think about the enum values and not the implementation?
IMO the first approach completely defeats the purpose of an enum. It’s supposed to be a union type, not a static set of values of any type. If I want the enum to be tagged or serializable to a string that should be implemented on top of the actual enumerable type.
They're not mutually exclusive at all; it's just that making enums “just tags” forces you to think about their internals even when you don't need to serialize them, and it doesn't give you enumerability, so why would I even use those enums at all when a string does the same thing with less jank?
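For what it's worth, the usual Go idiom for getting that enumerability is to pair the iota constants with an explicit slice of all values; a minimal sketch with illustrative names:

```go
package main

import "fmt"

type Color int

const (
	Red Color = iota
	Green
	Blue
)

// AllColors is the explicit enumeration; nothing in the language derives it
// for you, so it has to be maintained alongside the const block.
var AllColors = []Color{Red, Green, Blue}

func main() {
	// Iterate every possible value without caring what integers back them.
	for _, c := range AllColors {
		fmt.Println(c)
	}
}
```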
> Enums are named for being enumerable which is not the same thing as simply having an equivalent number.
Exactly. Like before, in the context of compilers, it refers to certain 'built-in' values that are generated by the compiler, which is done using an enumerable. Hence the name. It is an implementation detail around value creation and has nothing to do with types. Types exist in a very different dimension.
> It’s supposed to be a union type
It is not supposed to be anything; the name only refers to what it is: a feature implemented with an enumerable. Which, again, produces a value. Nothing to do with types.
I know, language evolves and whatnot. We can start to use it to mean the same thing as tagged unions if we really want, but if we're going to rebrand "enums", what do we call what was formerly known as enums? Are we going to call that "tagged unions", since that term now serves no purpose, confusing everyone?
That's the problem here. If we already had a generally accepted term for what was historically known as enums, then at least we could use that in place of "enums" and move on with life. But with "enums" trying to take on two completely different meanings (albeit somewhat adjacent, due to how things are sometimes implemented), nobody has any clue what anyone is talking about, and there is no clear path forward on how to rectify that.
Perhaps Go even chose the "iota" keyword in place of "enum" in order to try to introduce that new term into the lexicon. But I think we can agree that it never caught on. If I, speaking to people who have never used Go before, started talking about iotas, would they know what I was talking about? I expect the answer is a hard "no".
Granted, more likely it was done because naming a keyword that activates a feature after how the feature is implemented under the hood is pretty strange when you think about it. I'm not sure "an extremely small amount" improves the understanding of what it is, but at least it tries to separate what the feature is from how it works inside the black box.
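For readers who haven't seen it, iota itself is just a per-line counter inside a const block, which is incidentally why it's handy for things that aren't enums at all; a minimal sketch (the names are illustrative):

```go
package main

import "fmt"

// iota restarts at 0 in each const block and increments per line, so the
// same mechanism covers classic enum-style constants...
const (
	Pending = iota // 0
	Active         // 1
	Closed         // 2
)

// ...and things that are clearly not enums, like unit sizes or bit flags.
const (
	KiB = 1 << (10 * (iota + 1)) // 1024
	MiB                          // 1048576
	GiB                          // 1073741824
)

func main() {
	fmt.Println(Pending, Active, Closed) // 0 1 2
	fmt.Println(KiB, MiB, GiB)           // 1024 1048576 1073741824
}
```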
It feels obvious that that's where the term originated, but I've never seen it used as a definition. In a mathematical context, something is enumerable if it can be put into 1:1 correspondence with the integers, but it doesn't need to be defined by a canonical correspondence. This suggests that being a finite (in a programming context where the set of ints is finite) set of discrete values is the defining feature, not the representation.
> In most applications of int enums, the particular integers can be chosen at random
I'm not sure the definition of "enum" dictates how things are identified. Random choice would be as good as any other, theoretically. In practice, as it relates to programming, random choice is harder to implement due to collision possibilities. Much simpler is to just increment an integer, which is how every language I've ever used does it; even Rust, whose implementation is very similar to Go's.
But it remains that the key takeaway is that the enum is a value. The whole reason for using an enum is for the sake of runtime comparison. Given how it is often used, it wouldn't even make sense for it to be a type. It is bizarre that it keeps getting called one.
Sum types can be put into 1:1 correspondence with the integers, barring the inclusion of a non-enumerable type in a language's specification that can be used in sum types. However I would observe that this is generally a parlor trick and it's fairly uncommon to simply iterate through a sum type. As is so often the case, the exceptions will leap to mind and some people are already rushing to contradict me in the replies... but they are coming to mind precisely because they are the exceptions. Yes, you can sensibly iterate on "type Color = Red | Green | Blue", I've written code to do the equivalent in various languages many times and most complicated enumerations (in the old sense) I do come equipped with some array that has all the legal values so people can iterate over them (if they are not contiguous for some reason), so I know it can be done and can be useful. But the instant you have a general number type, or goodness help you, a generalized String type, as part of your sum type, you aren't going to be iterating on all possible values. And the way in which you can put the sum types into a 1:1 correspondence won't match your intuition either, since you'll need to diagonalize on the type, otherwise any unbounded array/string will get you "stuck" on the mapping and you'll never get past it.
So while you can theoretically argue it makes sense to call them an "enum", I don't like it, precisely because "enumerating" the "enum" types (being sum types here) is not, in general, a sensible operation. It is sensible in specific cases, but that's not really all that special. We don't generally name types by what a small percentage of the instances can do or are; we name them by what all instances can do or are. A degenerate sum type "type Value = Value" is still a sum, albeit a degenerate one of "1", but nobody ever enumerates all values of "type Email = Email { username :: String, domain :: String }". (Or whatever more precise type you'd like to use there. Just the first example that came to mind.) There are also cases where you actively don't want users enumerating your sum type, e.g., some sort of token that indicates secure access to some resource, which you shouldn't be able to obtain, even in principle, by simply enumerating across your enum.
If it's called an "enum" I want to be able to "enum"erate it.
[1]: https://pkg.go.dev/errors