It's fun to see that deck, since my first (software) job out of college was working with that team (about a year after this slide). By then only two of the four individuals mentioned remained on the team, but I got the pretty rare/fun opportunity to write Erlang.
We had a major advantage for new technology rollout: we did "devops" (i.e., the ops team decided they did not have the bandwidth to support us, and we needed to launch anyway), and we were building greenfield technology (Y! BOSS) with relatively few integration points with existing Y! technology (except for Vespa, which is similar to Solr/Elasticsearch, and some weird C++ libraries that had somehow been relabeled from "junky prototype" to "high technology" and were ported forward).
Years later, my sense is that the biggest initial stumbling block was getting the existing devs interested in learning a new technology and way of doing things. I think we lacked some perspective there: we should have made a much larger effort to get the team excited about and trained in the technology (in jobs since, I've never had a team that turned down technology training), and we could probably have won our local team over if we'd been more intentional.
Ultimately though, the final stumbling block was Y! itself, which was very focused on keeping the number and diversity of technologies low. At the organizational level this is probably the right decision, so I can't really fault them for it. Ironically, Node.js popped up just a year or so later as the great language hope to rescue Y!, and did manage to get significant traction. So if you wanted to study adoption, finding someone who could explain how they got Y!'s Node.js adoption going in the right direction would be pretty fascinating.
The weaknesses they point out pretty much still hold today: gaps in documentation and no critical mass/small community.
The small community size means you have a hard time finding APIs for anything but the most mainstream databases/services, which is the major drawback with Erlang.
I think it has become a lot better today with Elixir coming into the picture: the two can communicate pretty much seamlessly, and Elixir has been gaining a lot of steam lately.
Not to mention that one of the big focuses of the Elixir community has been better documentation too.
That's not to say there isn't a long way left to go though, there is definitely room for improvement still.
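To make the "communicate pretty much seamlessly" point concrete: Erlang modules appear in Elixir as plain atoms, so the whole Erlang standard library is callable with no FFI or binding layer. A minimal sketch using Erlang's `:queue` module from Elixir:

```elixir
# Erlang modules are just atoms in Elixir, so :queue, :crypto, :ets
# and the rest of OTP are directly callable.
q = :queue.new()
q = :queue.in(1, q)
q = :queue.in(2, q)

# Pattern-match the Erlang return tuple; FIFO order is preserved.
{{:value, first}, _rest} = :queue.out(q)
# first is 1
```

The same works in reverse: Erlang code can call an Elixir module via its full atom name (e.g. `'Elixir.Enum':map/2`), since both compile to BEAM bytecode.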
Very true. We are adding an Elixir team (very happy with it so far), and most Ruby shops I know are either looking at Elixir or have started using it. Given the current rate of growth, it looks like it will be a very sizable community very soon.
Where is Elixir gaining most of its traction with companies? What types of problems? I've seen the Phoenix web framework mentioned many times, for example:
I'm using it to convert a Rails app into a stateful websocket-based service. Honestly, I'm barely using the Phoenix framework aside from the channels and potentially presence (which isn't a knock on Phoenix; it's awesome, with great documentation for the most part and a very helpful community). I expect I'll make more use of it at some point, but if you treat your app as something deeper than basic CRUD, this is where the stateful processes of the BEAM really come in handy.
In another app that was green-field we're using it to build a highly scalable estimation engine, which is to say, a glorified calculator over a somewhat complex data model.
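The "stateful processes" the commenter mentions are typically built on OTP's GenServer. A minimal sketch of one (the `Tally` module name and its API are illustrative, not taken from either app described above):

```elixir
# Illustrative sketch: a GenServer holding a running total in process
# state, instead of round-tripping state through a database per request.
defmodule Tally do
  use GenServer

  # Client API
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def add(pid, n), do: GenServer.cast(pid, {:add, n})
  def total(pid), do: GenServer.call(pid, :total)

  # Server callbacks: state lives in the process, one message at a time.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast({:add, n}, acc), do: {:noreply, acc + n}

  @impl true
  def handle_call(:total, _from, acc), do: {:reply, acc, acc}
end
```

Usage: `{:ok, pid} = Tally.start_link(0); Tally.add(pid, 5); Tally.total(pid)` returns `5`. Because each process serializes its own mailbox, this kind of per-entity state scales to very large numbers of lightweight processes, which is the property both apps above lean on.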
I agree. I'm using Elixir in a number of production systems and haven't come across a show-stopper yet: database packages, SOAP, and documentation have all improved quite a lot, even on the Erlang side.
I think it's also gotten a lot better since 2008 simply because concurrency has become so much more important. Multi-core chips were only just beginning to get popular at the time this was written, which drove a new focus on concurrency, since that was seen as the only way we'd keep seeing speed gains. Because of that focus, Erlang became more relevant and more popular.