Over 15 years I've worked in cross-functional teams at large corporations, in small teams (5-10) at small-to-medium companies, and alone as a freelancer taking on projects across many different technologies. My focus in the past few years has been on being a force multiplier for my team and on increasing the predictability and stability of our output, as well as developer enjoyment. I'm also a big proponent of mentoring and frequently turn my daily work notes into tech talks, so that the lessons from any challenges I face benefit not only me but my team as well.
My preference is for functional languages and for companies that use Nix and/or NixOS, but a healthy company and team matter more than any specific technology.
I also do Parkour in my free time, and I have given lessons to interested coworkers in the past :)
> Do. Not. Break. Workflows. Software that doesn't understand this principle is something I don't trust not to break itself when its developers decide to get cute.
What use would magit be if it presented the exact same interface as `git rebase`?
Do you know what it would lose?
> Did I stop and read the docs to figure out how it wanted me to rebase in this brave new world? Hell no.
I suppose you don't, since you didn't read the docs.
Why try it in the first place if you expected it to be the exact same?
> they're case classes then you're moving around a lot of unnecessary data fields and you've got the overhead of making fifty intermediate case classes yourself.
Haskell developer here... What about case classes and lenses? Do they solve this?
As I understand it, lenses don't change the underlying data structure. For ETL you need a way to say, in effect, "the code only uses fields X, Y and Z, so we will only load X, Y and Z at runtime", automatically based on usage, without having to keep updating your lens definitions. Modern on-disk file formats are columnar, so they can read subsets of the data very efficiently. If your data has 200 columns, then reading the 199 unnecessary ones can be very slow.
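To make that concrete, here's a minimal sketch with Spark's untyped DataFrame API (the path and column names are hypothetical). Because Parquet is columnar, the optimizer can push the projection down to the reader so the other columns are never loaded from disk:

    import org.apache.spark.sql.SparkSession

    object Pruning {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.master("local[*]").appName("pruning").getOrCreate()

        // Hypothetical wide table with ~200 columns, stored as Parquet.
        val df = spark.read.parquet("/data/wide_table")

        // Selecting only the columns the code actually uses lets Spark
        // push the projection down to the columnar reader.
        val slim = df.select("x", "y", "z")

        // The physical plan's ReadSchema should list only x, y and z.
        slim.explain()

        spark.stop()
      }
    }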
Lenses could help with the intermediate data structures, but some of them aren't subsets or trivial derivatives, so you really need an inline way to create single-use case classes. I think frameless in Scala can do some of this for standard transformations, but that requires the black magic of shapeless.
Spark in Python (and the untyped DataFrame API in Scala) compiles everything internally before running it to achieve the above, so it's trivial to have unit tests on empty data structures which "type check" your Spark code.
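A minimal sketch of what such a test looks like, assuming a hypothetical withTotal transformation: an empty DataFrame carries only the schema, so forcing analysis validates column names and types without reading any data.

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions.col

    object SchemaCheck {
      // Hypothetical transformation under test.
      def withTotal(df: DataFrame): DataFrame =
        df.withColumn("total", col("price") * col("quantity"))

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.master("local[1]").appName("schema-check").getOrCreate()
        import spark.implicits._

        // Zero rows, full schema. Accessing .schema forces analysis,
        // so a missing or mistyped column raises an AnalysisException
        // here, long before any real data is touched.
        val empty = Seq.empty[(Double, Long)].toDF("price", "quantity")
        println(withTotal(empty).schema)

        spark.stop()
      }
    }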
Cardelli's definitions are extremely odd; if you take them literally then Python is type safe but Java is not.
In everyday language a downcast, even a checked one, is not a type-safe operation, and so to the extent that Go's limited type system makes it impractical to write programs without downcasts I'd say that Go is fundamentally type-unsafe.
Well, it's a matter of taste, I guess. In my opinion any downcast that reliably distinguishes between valid and invalid values is type-safe. The behaviour of
    package main

    import "fmt"

    func main() {
        defer func() {
            if e := recover(); e != nil {
                fmt.Printf("not a string %v\n", e)
            }
        }()
        v := interface{}(5)
        u := v.(string) // the failed assertion panics here
        fmt.Println(u)  // never reached
    }
is well-defined: it will print "not a string", always.
By that logic there's no such thing as a non-type-safe language, because all programs have behaviour.
Normally one would say a checked downcast like that is not type-safe, because you can't reason locally about the type behaviour of the downcast based on knowing the type of v. You would have to know the value of v, which is undecidable in the general case.
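A minimal sketch of the contrast, in Scala (borrowing the language from the subthread above, since the point is language-independent):

    object LocalReasoning {
      // Locally reasonable: the signature alone guarantees this returns
      // an A for whatever A the caller picks; no runtime values matter.
      def first[A](xs: List[A]): A = xs.head

      def main(args: Array[String]): Unit = {
        println(first(List(1, 2, 3))) // fine, and provably so from the types

        val v: Any = 5
        // Not locally reasonable: whether this downcast succeeds depends
        // on the runtime *value* in v, which its static type (Any) says
        // nothing about. Here it throws a ClassCastException.
        val u = v.asInstanceOf[String]
        println(u)
      }
    }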