Depends no? If your "100%" is based on the after value rather than the before value, you have added 100%. Though I guess there is an implicit expectation that it is the before value that is used.
It went from having 0 pattern matching to having 1 pattern matching -- that seems like '100% more of this thing we are talking about' to me. This of course only makes sense if you see '1 pattern matching' as a precisely defined abstract construct, but I guess the 100%-more joke usually requires that perspective (and is a joke precisely because it's so nonsensical to think this way).
That really is a weird one, because here in the UK we say 'fortnightly' (for 'every other week'); so 'biweekly' would definitely be taken to mean 'twice per week'.
That also matches 'biannual' (an event that occurs every other year is 'biennial'), so 'biweekly' as 'fortnightly' is certainly to be proscribed, in my book.
I only have the Collins Concise to hand, which gives the definitions I have and doesn't mention other uses at all (not even as proscribed or US usage). And Collins is pretty permissive.
And no, they're not. 'Biennial' events occur once per two-year period; 'biannual' events occur four times per two-year period (twice per year).
For example, in horticulture 'a biennial [plant]' is one that has a single flowering/reproduction cycle over two years. (Cf. 'annuals', 'perennials'.)
Except you do. If you go from 10 to 15, you would have 50% more. The math: (15 - 10)/10. If you go from 0 to 1: (1 - 0)/0. Well, that is not gonna work, but it tends towards infinity as your starting number gets closer and closer to zero.
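To make that trend concrete, here's a tiny sketch of the same formula as the starting value shrinks (the numbers are chosen arbitrarily):

```
for before in (10, 1, 0.1, 0.001):
    after = before + 5                  # add the same absolute amount each time
    pct = (after - before) / before     # the percentage-increase formula from above
    print(before, f"{pct:.0%}")         # 50%, 500%, 5000%, 500000% -- unbounded as before -> 0
```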
I've really taken to writing a lot of Hy [0] -- somewhat irresponsibly, since Hy is maintained by a ragtag band of brilliant mavericks; but `hy2py` works great if you `eval-when-compile` most of your macros. Anyway, Hy syntax is
```
(with [f (open file "r")] (do stuff))
```
rather than
```
with open(file, 'r') as f:
```
I switch [1] to Python (often to write tests in a language the node.js crew will be able to skim) and the one thing that always trips me up is the wrong ordering of the `with` statement in straight Python. You don't say `for range(10) as x`...
I hope the Hy crew do the sensible thing about case statements. I don't even know what the sensible thing is, though.
[0] http://hylang.org
[1] Yes, you can import Hy code from Python and vice-versa!
That doesn't resolve (at least not obviously to me) the ambiguity where you want to match a Point whose first value is equal to the existing variable x, while the second value can be anything and you want to capture it in a new variable y.
I prefer the suggestion that appeared at one point where captured variables would have to be suffixed with ?, like this:
case Point(x, y?):
For those not aware, the solution today is that x must contain a . in its name by virtue of being a member of an object or an imported module:
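For example, something like this (a minimal sketch; Point and Config are illustrative names, with Config standing in for whatever object or module holds the value you want to compare against):

```
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

class Config:
    x = 3   # the pre-existing value the first field must equal

match Point(3, 7):
    case Point(Config.x, y):   # dotted name -> value pattern (compared); bare y -> capture
        print(f"first field equals Config.x, y captured as {y}")
```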
Syntax cleanliness is ultimately subjective, so if you don't see a problem with that then I don't think I can persuade you. To me, it just seems obvious that it's more painful than what I wrote. But I'll mention a couple of specific reasons anyway:
* The most concerning thing is knowing to write that in the first place. I think that if the ? syntax were the real one then you'd soon get used to writing a snippet like the one in my comment, but with the current syntax it's always going to take a bit of mind-bending to come up with what you've written.
* Even if you get it right when writing it, what you've written is vastly more verbose, and adding that redundant variable name (twice!) is just going to be distracting when reading it back.
By the way, what you wrote is not equivalent to what I wrote (I wonder if that means your snippet is not as obvious as you're suggesting): you've matched against "something" rather than "x". Instead you'd need to do:
> The most concerning thing is knowing to write that in the first place
This is something you learn from reading the docs / book on the feature. This applies to all language features.
> Even if you get it right when writing it, what you've written is vastly more verbose
Slightly more verbose, but also more consistent with the rest of Python syntax. Python already has an `if` clause that's well understood, while you're proposing a special-case syntax. I believe language consistency beats brevity, and explicit is better than implicit.
BTW the ? syntax looks to me very similar to optional / null-aware / error-aware dereferencing operator used in some other languages (Kotlin, Groovy, Rust).
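For readers following along, the guard-clause style being defended here looks roughly like this (a hedged reconstruction, not the exact snippet from the thread; Point is assumed to be a dataclass):

```
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

x = 3            # the pre-existing variable to compare against
p = Point(3, 7)

match p:
    case Point(first, y) if first == x:   # capture 'first', then test it in the guard
        print(f"first equals x, captured y = {y}")
```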
> This is something you learn from reading the docs / book on the feature. This applies to all language features.
Surely you agree that, even having read and understood documentation for a language feature, there are differing levels of difficulty in using it? That depends on the intrinsic complexity of the feature and also on the incidental complexity introduced by its design. Your comment seems to imply that it doesn't matter what syntax the language feature uses because you'll have to learn about it in any case. I'm sure you don't really mean that!
> I believe language consistency beats brevity, and explicit is better than implicit.
I agree with both parts of this statement, I just disagree that the syntax they chose satisfies it.
The syntax is not consistent with anything else, because nowhere else can you have (0, x, foo.y) mean "assign to x (but don't assign to 0 or foo.y)". That includes if statements, with statements, and the left-hand side of assignment statements. Choosing a new syntax for something that has different behaviour from existing syntax actually is being consistent.
Similarly, using a character like ? (or some other distinctive syntax) to demarcate which parts are being used for pattern matching is being explicit. In Point(x, foo.y, 0), saying that only x is assigned to because it's not a literal or dotted seems like a great example of implicit behaviour to me.
Again, like I said at the start, this is all subjective so I think we'll have to agree to disagree. I just wanted to object to those quoted sentences because it looked like you were saying that it is objectively true that ? syntax is implicit and inconsistent while the syntax they chose is explicit and consistent - and that isn't objectively true.
Ah, I completely agree with what you have written here. My earlier post applied to using the if clause for matching to the variable value instead of capturing the value from the object.
I view pattern matching from a slightly different angle: instead of viewing the case expression as answering "does the object match the pattern?", I view it as answering "can the object be deconstructed into a thing looking like the pattern?".
That is, if you write the exact same expression after matching on the right side, it should evaluate to the original object being matched. And I believe this holds in Python just as in other languages with pattern matching.
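A small sketch of that reading, assuming Point is an equality-comparable class such as a dataclass:

```
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

p = Point(1, 2)
match p:
    case Point(x=a, y=b):
        assert Point(x=a, y=b) == p   # the pattern, re-read as an expression, rebuilds p
```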
However, the Python way of making it work is indeed a bit strange, because pattern matching is allowed to mutate variables from the enclosing scope, while in some other languages I'm used to, capture symbols just shadow the symbols in the outer scope without changing them. This is the slightly scary part of the Python implementation.
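A minimal sketch of that rebinding behaviour:

```
x = "outer"

match (1, 2):
    case (1, x):    # 'x' is a capture pattern here, not a comparison with "outer"
        pass

print(x)  # -> 2: the existing binding has been overwritten, not shadowed
```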
Fortunately we have tools like Docker and to a lesser extent Python Virtual Environments to help deal with this in a sane way.
But yeah, for one-off, zero-dependency Python scripts I won't be using the new features until major distros ship with 3.10 by default. Part of the appeal of Python, to me, is that most distros and operating systems have it installed by default. Being able to curl down a single file and run it is really nice. For that to work, you have to code against a version of Python that most folks have by default. That means targeting 3.6's features (at least we can use f-strings now!).
Ugh. I hate this rapid release cycle with its introduction of new syntax. One of my colleagues keeps insisting on using the latest and greatest version all the time, so everything he writes keeps breaking everyone's environments, since they're not savvy enough to upgrade Python, install packages, and point the IDE at the new installation.
It's a total 180 from when he pushed Python 2 when it only had a few years until its EOL.
Like, f-strings look nice and all, but there's nothing wrong with the format method, and there's always the locals hack to achieve the same thing.
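For anyone who hasn't seen it, the locals hack mentioned above is roughly this (the values are purely illustrative):

```
name, count = "world", 3

print("hello {name}, {count} times".format(**locals()))  # works on older Pythons
print(f"hello {name}, {count} times")                     # the 3.6+ f-string equivalent
```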
That's not really enough, as you depend on libraries you don't own. All of a sudden, in order to update a small library you use, you need to update to a new version of Python. So, yeah, new syntax is always disruptive. My utopia has new syntax being added very seldom, with a long notice period and a `from __future__ import newsyntax` phase before the syntax is considered "production ready". So, hopefully, by the time people start using it, all relevant versions of Python support it.
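The phase-in mechanism described here already exists for some changes: new syntax and semantics have been gated behind `__future__` imports before (e.g. `with_statement` in Python 2.5, or PEP 563's deferred annotations). A trivial sketch of the latter, where the annotated name is made up:

```
from __future__ import annotations   # opt in per file before the behaviour becomes default

def parse(node: TreeNode) -> TreeNode:   # TreeNode is hypothetical; it is never evaluated here
    ...
```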
Maybe I've been working with Python for too long but I don't see the value in Python's pattern matching. if/elif/else are all four letters just like case and I remember reading that Guido didn't want a match and case construct.
Maybe I am just grumpy, but I feel like Python, the simple language that drew in a lot of developers, is going the C++ way of adding features for the sake of adding features.
Have a look at the tutorial PEP and you'll realize, like I did, that pattern matching and case switches solve very different use cases, and that if/else is a long way from pattern matching.
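A quick, hedged illustration of that difference: destructuring a nested command in one pattern versus the roughly equivalent if/elif chain (the command format is invented for the example).

```
command = ("move", (3, 4))

match command:
    case ("move", (x, y)):
        print(f"moving to {x}, {y}")
    case ("quit",):
        print("quitting")

# The if/elif version needs explicit shape checks and manual unpacking:
if isinstance(command, tuple) and len(command) == 2 and command[0] == "move":
    x, y = command[1]
    print(f"moving to {x}, {y}")
elif command == ("quit",):
    print("quitting")
```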
I can absolutely see that it could be useful; what I am saying is that I would prefer the language to stay lean and steer away from adding features offering minimal quality-of-life improvements.
As for Guido being the sponsor, he also pushed for it in 2006 and it was rejected for lack of popular support. His opinion back then still resonates with me today: https://www.python.org/dev/peps/pep-3103/#rationale
This is nice and all, but part of me feels like having code structured around the shape of tuples returned by functions has an element of pattern-smell (and by pattern I mean design pattern).
>having code structured around the shape of tuples returned by functions has an element of pattern-smell (and by pattern I mean design pattern).
This is a near content-less critique, which amounts to:
"shaping code around what's returned by functions looks like a design pattern (and implied: design patterns are bad)"
Well, design patterns are neither bad, nor good. They are what they are: a high level description of common solutions to common problems.
By themselves design patterns are not problems or problematic.
It's applying them where they don't fit the problem - or where something simpler will suffice - that's problematic (common e.g. to J2EE era Java).
Some also say that design patterns point to a language deficiency (that is, they make up for the lack of a language feature, e.g. closures or dynamic typing).
If we accept the above folk wisdom, built-in pattern matching would be the exact opposite of what you say.
It wouldn't be a "design pattern smell" but rather a first-class feature that frees you from having to work around its lack with design patterns.
But I'd go further and say that the complaint is nonsensical anyway.
What exactly does "having code structured around the shape of tuples returned by functions" even mean?
If there's a need to extract values from the results of a function (e.g. in a parser, a protocol handler, etc.) then we don't have alternatives, we will structure our code "around the shapes returned".
The question is whether we will do it through ad-hoc solutions (like chained "if" statements, a series of switch/case, etc.), through some convoluted design pattern like a Visitor, or through first-class support for exactly our need. That first-class support is exactly what pattern matching syntax offers.
And far from being some smell/Python novelty, it's an old staple found in Haskell, ML, Ocaml, Lisp, and several other places besides...
I am honestly amazed you could interpret it that way, rather than the intended: it seems like a questionable design pattern.
> What exactly does "having code structured around the shape of tuples returned by function" even mean?
To me it sort of implies a function is serving multiple purposes, essentially multiple functions mashed into one -- and you figure out what really happened by the shape of its return value. It seems to go against the idea of functions as pure-as-possible constructs that do one thing well, represented by a clearly defined return value.
I agree there are niche use cases like parsers -- but few people are really writing compilers/protocol handlers, and I am afraid that first-class features might be misused by less experienced programmers who start to write frankenfunctions coupled with convoluted pattern-match blocks -- as opposed to traditional, elegant functional decomposition.
And I think functional languages are fine; however, I don't always agree that just because a feature works well in the FP space it is a good idea to bring it in as a first-class feature of an OO/imperative language...
>I am honestly amazed you could interpret it that way, rather than the intended: it seems like a questionable design pattern.
"Design patterns are bad" is an often repeated "folk wisdom" in dev circles. So by under-definining what's meant (leaving it open to interpretation) it's a very easy intepretation for others to assume of the comment.
The wording supports this too:
"has an element of pattern-smell (and by pattern I mean design pattern)"
This doesn't seem to imply "questionable design pattern", but "this smells like a pattern", and given that you object to it, one assumes that for you "smelling like a pattern" is bad.
Also, "seems like a questionable design pattern" would be a bizarre interpretation to take -- as this is a first-class syntax, not a design pattern.
>To me it sort of implies a function is serving multiple purposes, essentially multiple functions mashed into one -- and you figure out what really happened by the shape of it's return value. It seems to go against everything we think of functions as pure-as-possible constructs that do one thing well, and that is represented by a clearly defined return value.
I see where you're coming from, but there's nothing really violating purity here, or even "clearly defined return values".
Those clearly defined return values could still take several variations of a shape, or even several shapes. They wouldn't have to be arbitrarily shaped or non-pure. It's about handling a class of shapes.
AST nodes from a parsing function, for example, protocol parts from a protocol handler, network responses in a server that might or might not have a payload, game states in a logic loop, and so on, all fall into this. And pattern matching allows you to handle this common need easily.
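As a hedged sketch of what handling "a class of shapes" can look like in practice (the response format here is invented for the example):

```
def handle(response):
    match response:
        case ("ok", payload):
            return f"ok with payload {payload!r}"
        case ("ok",):
            return "ok, no payload"
        case ("error", code, message):
            return f"error {code}: {message}"
        case _:
            raise ValueError(f"unexpected response shape: {response!r}")

print(handle(("ok", b"\x01\x02")))          # ok with payload b'\x01\x02'
print(handle(("error", 404, "not found")))  # error 404: not found
```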
>What exactly does "having code structured around the shape of tuples returned by function" even mean?
Not the OP, but pattern matching makes the code more concrete and less abstract. Looking at it from another point of view, it is the opposite of duck typing. Or it makes things less generic. Now you need to pass in a tuple, whereas with a different solution a tuple could have been used, or a list, or your own class that responds to __getitem__, etc. This is a well-known limitation in the Haskell world.
Exciting! I especially like the named attribute matching [1] and dict matching [2]. Scala does not have that. If you have a case (data) class with 13 parameters, you need to match on all of them, instead of doing `MyClass(param=value)`. Similarly, you cannot easily pattern match on a `Map` (`dict`).
One thing which I am missing though is matching on f-strings. Such a powerful feature in Scala.
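For reference, the named-attribute and dict matching being praised looks roughly like this in Python (MyClass and the field names are illustrative):

```
from dataclasses import dataclass

@dataclass
class MyClass:
    param: int
    flag: bool = False
    label: str = ""        # imagine ten more fields here

def check(obj):
    match obj:
        case MyClass(param=42):     # match a single attribute; the other fields are ignored
            return "param is 42"
        case {"size": n}:           # dict pattern: extra keys in the mapping are ignored
            return f"mapping with size={n}"
        case _:
            return "no match"

print(check(MyClass(param=42, flag=True)))  # -> param is 42
print(check({"size": 3, "colour": "red"}))  # -> mapping with size=3
```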
I don't understand what you are saying. In Scala you do not have to match on all parameters. You can use the underscore operator to skip the other attributes.
Also, you can match a map, since every map is just a sequence of tuples in Scala. However, why would you ever do this? It is really impractical for large sequences, and if I had only a very limited number of attributes I would not use a map, I guess.
> You can use the underscore operator to skip the other attributes
And have to add an underscore in the right position whenever you add an attribute? With partial matching as described, you don't have to change the pattern when adding fields to the data structure. I guess the GP was referring to this.
Well, that is a little bit different, I guess. You probably could do some unapply shenanigans to achieve the same result, but off the top of my head I would not know how to do that.
Anyone have a clever solution for optionally using new Python features while still supporting lower Python 3 versions? For packaging purposes. Could put try/except blocks everywhere, or selectively import 3.10 modules vs <=3.9 modules, for example.
I find that new features aren’t necessary for their functionality, but they often can make code more terse, so having “duplicate” code with the same functionality but without new terseness kind of defeats the purpose. I suppose having two versions of the same module lets you end support for an old version pretty simply, but is that useful?
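For what it's worth, the selective-import approach mentioned above might look like this (the module names are hypothetical):

```
import sys

# Keep two implementations of the same API and pick one at import time.
if sys.version_info >= (3, 10):
    from mypkg._impl_py310 import handle_event   # uses match/case internally
else:
    from mypkg._impl_legacy import handle_event  # if/elif fallback for older Pythons
```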
I think there are packages that detect the python version and do some crazy stuff in setup.py. But I think the most responsible thing for a widely used library to do is hold back on using new python features until they are a few years old.
I disagree. I mean sure, you can write something akin to functional programs in python, but the language as a whole is "mutability first" to a very large degree.
Of almost all the languages I have used, I find Python one of the furthest from my idea of functional programming (which means different things to different people). Which, in my mind, also makes it a pretty darn useless ML. It is a fine language on its own, but understanding it in terms of Lisp (or, even worse, Scheme) or, say, Standard ML is not very useful.
I think a good exercise regarding this is trying to write YOUR-FAVOURITE-LANGUAGE in ANOTHER-LANGUAGE.
Writing idiomatic Scheme in Standard ML is surprisingly straightforward. Writing Scheme in Python is awful, regardless of functools.
Functional Python for me breaks down with the laziness of generators. Sure, map returns a lazy iterator and that's fine. But you can't pass the result to multiple consumers, as the iterator has internal state.
It's like a weird version of linear types where every value is only allowed to be used once.
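A small sketch of that single-use behaviour, plus the usual itertools workaround:

```
from itertools import tee

squares = map(lambda n: n * n, range(5))   # lazy, stateful iterator

print(sum(squares))    # 30 -- this consumes the iterator
print(list(squares))   # [] -- a second consumer sees nothing

# tee splits the stream, but each branch is itself still single-use:
a, b = tee(map(lambda n: n * n, range(5)))
print(sum(a), list(b))  # 30 [0, 1, 4, 9, 16]
```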
APL introduced the name "reduce" and was the first to have it by ten years or so (published 1962, implemented 1966). Fairly sure Lisp had map first as the Each operator only appeared in nested APLs around 1981. APL may have had Compress before Lisp had filter, but they're not really the same: APL's isn't a higher order function; it takes one boolean filter list and another list to be filtered.
Reduce and Compress are both denoted / which is sometimes a problem.
In a language like APL, map adds very little compared to reduce/fold. In Scheme, fold is a lot more verbose than map: map saves on typing and on housekeeping. In APL? I would argue the gain is a lot smaller. I have seen genuinely useful APL programs written in less code than a fold-left-based filter in Scheme.
> Union[int, float] -> int | float
Really excited for the new syntax; thank you to everyone involved for making Python better each day. IMO Python's most powerful feature is its amazing community. :)
btw there is a small typo @ https://docs.python.org/3.10/whatsnew/3.10.html:
> Pattterns -> Patterns
Does anyone know where it needs to be fixed?