I’ve been actively using Elixir for ML at work, and I would say it’s a solid choice.
The downside - unfortunately, while Bumblebee, Axon, and Nx seem to have a fantastically engineered base, most of the latest models don’t have native Elixir implementations yet, and writing my own is still a little beyond my skill. So a lot of the models you can easily run are older.
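To give a concrete idea of what does already work well, running one of the older supported architectures takes only a few lines. A minimal sketch - the model name here is just one example of an architecture Bumblebee implements natively:

    # Load an older model that Bumblebee has a native implementation for.
    {:ok, model_info} = Bumblebee.load_model({:hf, "bert-base-uncased"})
    {:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "bert-base-uncased"})

    # Build a serving and run a fill-in-the-blank prediction.
    serving = Bumblebee.Text.fill_mask(model_info, tokenizer)
    Nx.Serving.run(serving, "Elixir is a functional programming [MASK].")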
But the advantages - easy long-running processes, great multiprocessing support, solid error handling and recovery - all pair very well with AI systems.
For example, it’s very easy to make an application that grabs files, caches them locally, and runs ML tasks against them. You can use process monitoring and linking to manage the lifetime of the locally cached files, and there’s no runtime duration limit like you might hit in a serverless system like Lambda. Interprocess messaging means you can easily run ML in a background task and stream results asynchronously to a user. Additionally, logs are automatically routed back from spawned processes, and it’s easy to tag them with process metadata, so tracking what’s going on in your application is dead simple. Two rough sketches of what I mean are below.
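For the caching piece, here’s roughly what I mean. The module and its names are made up for illustration, and I’m assuming the Req library for HTTP (any client works the same way): a GenServer monitors whichever process checked a file out and deletes the file when that process exits.

    defmodule FileCache do
      use GenServer

      def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

      # Download-and-cache a file, tying its lifetime to the calling process.
      def checkout(url), do: GenServer.call(__MODULE__, {:checkout, url, self()})

      @impl true
      def init(_opts), do: {:ok, %{}}

      @impl true
      def handle_call({:checkout, url, owner}, _from, refs) do
        path = Path.join(System.tmp_dir!(), Base.url_encode64(url, padding: false))

        unless File.exists?(path) do
          # Req is an assumption here; swap in your HTTP client of choice.
          File.write!(path, Req.get!(url).body)
        end

        # When the owner exits for any reason, we get a :DOWN message.
        ref = Process.monitor(owner)
        {:reply, {:ok, path}, Map.put(refs, ref, path)}
      end

      @impl true
      def handle_info({:DOWN, ref, :process, _pid, _reason}, refs) do
        {path, refs} = Map.pop(refs, ref)
        if path, do: File.rm(path)
        {:noreply, refs}
      end
    end

A real version would need to handle two processes checking out the same file, but the shape of the solution is the same.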
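And for the background-inference-plus-streaming part, along with log tagging - again a sketch, where `serving` is an Nx.Serving like the one above and the message shape is arbitrary:

    defmodule MLRunner do
      require Logger

      # Run inference off the caller's process and mail the result back.
      def run_async(serving, input) do
        caller = self()

        Task.start_link(fn ->
          # Tag every log line from this task with process metadata.
          Logger.metadata(ml_task: inspect(self()))
          Logger.info("starting inference")

          result = Nx.Serving.run(serving, input)
          send(caller, {:ml_result, result})
        end)
      end
    end

The caller - a LiveView, channel, or GenServer - just pattern-matches {:ml_result, result} in handle_info, so the user-facing process never blocks on the model.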
That’s basically a whole stack for a live ML service with all the difficult infrastructure bits already taken care of.