Julia's Tidier.jl ecosystem is getting there too. It uses macros to mimic R's "special" evaluation framework (non-standard evaluation), so the code is readable in a similar way.
If you compare Clojure code against every host language it targets - Clojure & Java, ClojureScript & JavaScript, Babashka & Bash, ClojureDart & Dart, Jank & C++, Fennel & Lua (even though technically Fennel is only Clojure-like) - the number of delimiters (and often even the number of parens) will be universally higher in the host language than in the Clojure code. I guarantee it. Clojure has a lower delimiter-to-structure ratio. Java has parens that exist purely as syntactic obligations - `if (`, `for (`, etc. It's not that Clojure has fewer parens in absolute terms; it's that it has no wasted ones - that is 100% true.
Do you have an idea whether these are specific types of problems that are giving Julia poorer performance? From what I recall, people were reporting better speeds with Julia than with Numba (e.g., [1]). My impression was that you can basically bring more of your code to LLVM with Julia than with Numba, so that would make sense.
Thank you for the article! We're mainly interested in floating-point performance and energy consumption with respect to solving differential equations and tridiagonal systems of equations, while running on a 128-core compute node. Our current results will likely only be presented in May, but here are last year's results: https://www.cs.uni-potsdam.de/bs/research/docs/papers/2025/l...
Our Julia code is parallelised with FLoops.jl, but so far Numba has shown surprisingly good performance when executing code in parallel, despite being slower when executed sequentially. Therefore I can imagine that Julia might yield better results when run in a regular desktop environment.
It was last touched 9 years ago, but maybe you have ported it to current standards. I don't think Julia had multithreading at that time, only multiprocessing.
Are your Julia implementations available somewhere? (Sorry if it's in your paper and I missed it.)
I vaguely remember that in the past, working with threads led to some additional allocations (compared to the serial code). Maybe this is also biting us here?
As far as I know the code was ported to use @floops, with minor optimisations in addition to that.
I think it's quite possible that it's an allocation issue; that's something we're looking into, although I don't have any specific results for Julia yet.
Are you using Polyester.jl? Base threading is not optimized for large numbers of threads due to GC interactions, and its hierarchical task scheduling adds overhead compared to "unsafe" threading techniques that don't support work-sharing. Polyester is thus required to get very low-overhead threading that matches the performance of the non-work-sharing scenarios.
I have a small benchmark program doing tight-binding calculations of carbon nanostructures that I have implemented in C++ with Eigen, C++ with Armadillo, Fortran, Python/NumPy, and Julia. It's been a while since I've tested it, but IIRC all the other implementations were about on par, except for Python, which was about half the speed of the others. Haven't tried it with Numba.
To bring Julia performance on par with the compiled languages I had to do a little bit of profiling and tweaking using @views.
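For readers more at home in NumPy: Julia slices allocate a copy by default, and `@views` switches them to non-copying views - roughly the difference between NumPy's fancy indexing (copies) and basic slicing (views). A small illustrative sketch of that NumPy-side distinction:

```python
import numpy as np

a = np.arange(10.0)

s = a[2:5]          # basic slice: a view, no allocation of new data
s[0] = 99.0         # writes through to the original array
print(a[2])         # 99.0

c = a[[2, 3, 4]]    # fancy indexing: allocates a copy
c[0] = -1.0         # the copy is independent of `a`
print(a[2])         # still 99.0
```

In hot Julia loops, those default slice copies show up as allocations in the profiler, which is why sprinkling `@views` is often the first tweak that closes the gap with C++/Fortran.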
The JuliaParallel/rodinia repo says that the focus of those benchmarks is the CUDA versions. I suspect that the CPU versions have not had much optimization effort spent on them. Julia isn't a magic wand, but you can usually get within a factor of 2 of C++ with similar effort.
A cluster environment with virtualized cores may cause slower performance for Julia's parallel code. People recommend ThreadPinning.jl to solve such issues.
That study must have compared beginners in both LaTeX and MS Word. There is a learning curve, but LaTeX will often save more time in the long run.
It is an old language though. LaTeX is the macro system on top of TeX, but now you can write markdown or org-mode (or orgdown) and generate LaTeX -> PDF via pandoc/org-mode. Maybe this is the level of abstraction we should be targeting. Though currently, you still need to drop into LaTeX for very specific fine-tuning.
Yeah, I started using GitLab for the same reason, and also because the FSF "approved" of its CE version. But doesn't hosting private repos on GitLab while keeping public repos on GitHub just give GitHub that much more monetizable value?
It does, and I've even had GitLab as the primary repo for some time. But if your projects pick up any steam, GitHub mirrors are going to pop up whether you run them or not. I've had people mirror my projects onto GitHub because it means fewer questions for them when they want to package them for their organisation or a minor packaging system than pulling source from "not-GitHub" would. Of course, the license allows them to do that, and they're upfront about why they're doing it, but if there's going to be a GitHub mirror anyway, it may as well be official.
Also, if we're being honest, despite GitLab being the #2 platform, you're going to get fewer contributions than on GitHub, as people just aren't going to want to sign into a second service. Now, most of my public projects are like "I made this, I put it here to show off, use it if you like", so if people _don't_ use it, it's no big deal for me. But if you're in it for revenue or clout, or just like seeing usage numbers go up, it's clearly not the optimal choice.
The lock-in is quite annoying. I prefer using GitLab for private projects, but it means that if I want to open-source any of them, I now need to support two different platforms: one for my FOSS projects and one for my own stuff.
It's one of those languages that outgrew its original purpose, as did Python IMHO. So non-matrix operations - string processing and manipulation of data structures like tables (surprisingly, graphs are not bad) - become unwieldy in MATLAB, much like Python's syntax becomes unwieldy in array calculations, as illustrated in the original post.
An understated advantage of Julia over MATLAB is the use of brackets over parentheses for array slicing, which improves readability even further.
The most cogent argument for using parentheses for array slicing (which derives from Fortran, another language that I love) is that an array lookup can be thought of as a function from indices to values. But in practice, it's useful to be able to tell immediately whether you are calling a function or slicing an array.
Indeed, there are many high-quality alternatives (sometimes described as "MATLAB clones" back in the day) that never gained much traction.
Among modern alternatives that don't strictly follow MATLAB syntax, does Julia have the biggest mindshare now?
GNU Octave, as a near-superset of the MATLAB language, was (and is) the alternative most capable of running existing MATLAB code. While Octave implemented some solvers better than MATLAB, it just could not replicate a large enough portion of MATLAB's functionality, so many scientists/engineers were unable to fully commit to it. I wonder whether runmat.org will run up against this same problem.
The other killer app of MATLAB is Simulink, which to my knowledge is not replicated in any open-source ecosystem.