I think the aspect you're missing is Erlang's total focus on 'green threads' (Erlang processes, or just 'processes') as the basic unit of computation. Like Unix's "everything is a file" philosophy, it's a mundane-sounding idea that has had wide-ranging benefits for building long-running services.
First, in the most basic situation, the VM will manage all of your parallelism. It will figure out how many physical CPUs and how much memory it has to work with and maximize those resources, and it will dynamically limit the resources consumed by any particular process so that the VM as a whole stays responsive. Because Erlang doesn't allow modifying values and uses message passing, memory corruption is extremely rare. This has pretty heavy performance and efficiency costs, but what you gain is extremely high flexibility in terms of scale.
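To make that concrete, here's a minimal sketch in Elixir syntax (the 10,000 count and the message shape are just for illustration): it spawns thousands of lightweight processes and collects their replies as messages, and the BEAM schedules them across whatever cores it finds without the programmer sizing any thread pool.

    # Minimal sketch: the BEAM schedules these processes across all
    # available cores; no explicit thread-pool configuration anywhere.
    parent = self()

    pids =
      for i <- 1..10_000 do
        spawn(fn ->
          # Data is immutable; results only come back as messages.
          send(parent, {:done, i * i})
        end)
      end

    results =
      for _ <- pids do
        receive do
          {:done, value} -> value
        end
      end

    IO.puts("collected #{length(results)} results")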
To a first approximation, Erlang cares very little about how many machines an Erlang program is running on. Once you embrace processes as the basic unit, it matters very little which machine runs a given process, as long as all the machines can pass messages to each other. And, again, the VMs all know what resources they have and can re-balance processes to best utilize those resources across many nodes, all without special configuration[1]. The very brave can even live-swap the code running inside a VM: existing processes spawned on the old code keep executing and finish as normal (no time limits; the VM still has the old code), while you switch your network interfaces over to the new code.
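Here's a hedged sketch of what that location transparency looks like in practice (Elixir syntax; the node names are hypothetical and assume two connected BEAM nodes started with --name a@127.0.0.1 / --name b@127.0.0.1, the same --cookie, and the same code loaded on both):

    # Hedged sketch: spawn a process on another node and get a message back.
    # The pid and the message cross the node boundary exactly as they would
    # locally.
    parent = self()

    Node.spawn(:"b@127.0.0.1", fn ->
      send(parent, {:hello_from, node()})
    end)

    receive do
      {:hello_from, remote} -> IO.puts("reply from #{remote}")
    end

Nothing about this code changes whether the spawned process lives on the same VM or on a different machine.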
The process-centric design has other nice implications. Tests can be run fully in parallel[2], which keeps a low-efficiency-per-thread VM very snappy in practice. Each process can also fail fully independently, which means cascading failures are quite rare. There are still lots of reasons not to live-console into production, but in a language that doesn't let you change memory values and also gives your shell full isolation, it's a lot more practical than in other languages.
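As a small illustration of that isolation (Elixir syntax, just a sketch): a crashing process only delivers a DOWN message to whoever is monitoring it, and nothing else on the VM is affected. The same property is what lets ExUnit run test modules marked async: true side by side.

    # Sketch of failure isolation: the raise kills only the spawned process.
    {_pid, ref} = spawn_monitor(fn -> raise "boom" end)

    receive do
      {:DOWN, ^ref, :process, _pid, reason} ->
        IO.puts("child crashed (#{inspect(reason)}); we keep running")
    end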
It's worth saying that none of this is "magic" or the result of a special quirk of the language: any language that totally committed to message passing and let the VM manage greenlet-style processes would get the same benefits. IMO these tradeoffs are very worthwhile for the kinds of things Erlang / Elixir specialize in (long-running, network-connected services), and I would encourage anyone who's been using Python for these kinds of things to consider this platform as an alternative.
[1] This is over-stating it quite a bit: you do need to care about multi-node issues, and you do want to configure things if you're running into weird resource bottlenecks...but it also will "just work" a lot of the time. Generally, you should tweak settings if you expect an unusual workload across the cluster; if you just have the occasional spike, it will "mostly work."
[2] This isn't totally true in practice: you often have resources outside Erlang (databases, etc.) that need to be shared among a large number of processes, which limits parallelism.