John Ternus, SVP of Hardware Engineering, is considered the front-runner for CEO right now. The board wants a more product-oriented CEO this time. Things could change, but it makes me optimistic.
Recently I've been experimenting with using multiple languages in some projects where certain components have a far better ecosystem in one language but the majority of the project is easier to write in a different one.
For example, I often find Python has very mature and comprehensive packages for a specific need I have, but it's a poor language for the larger project (I also just hate writing Python). So I'll often put the component behind an HTTP server and communicate with it that way. Or, in other cases, I've used Rust for working with WASAPI and Win32, since there are some good crates for that, even though the Rust ecosystem is a lot less mature elsewhere.
I used to prefer reinventing the wheel in the primary project language, but I wasted so much time doing that. The tradeoff is that the project structure gets a lot more complicated, but it's also a lot faster to iterate.
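For anyone wondering what "put it behind an HTTP server" actually amounts to, it's often just a couple dozen lines on the Python side. Something like this sketch, where `do_the_python_thing` is a stand-in for whatever mature package you actually need:

```python
# Minimal sketch: expose a Python-only capability over a local HTTP endpoint
# so the main project (Rust, Go, whatever) can call it without FFI or bindings.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def do_the_python_thing(payload: dict) -> dict:
    # Stand-in for the mature Python package you actually need.
    return {"ok": True, "echo": payload}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(do_the_python_thing(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Localhost only: this is an internal component, not a public API.
    HTTPServer(("127.0.0.1", 8787), Handler).serve_forever()
```

The main project then just POSTs JSON to 127.0.0.1:8787 and parses the response; no bindings to maintain, and either side can be swapped out independently.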
Plus your usual HTML/CSS/JS on the frontend and something else on the backend, plus SQL.
This discussion makes me think peer review needs more automated tooling, somewhat analogous to what software engineers have long relied on. For example, a tool could use an LLM to check that a citation actually substantiates the claim the paper says it does, and flag the claim for human review when it doesn't.
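Just as a sketch of the shape of such a tool (the `ask_llm` function and the claim/citation extraction step are placeholders for whatever model and reference tooling you'd actually use, not any real API):

```python
# Sketch: for each (claim, cited excerpt) pair extracted from a manuscript,
# ask a model whether the excerpt actually supports the claim, and surface
# anything unsupported to a human reviewer. ask_llm() is a placeholder.
from dataclasses import dataclass

@dataclass
class CitationCheck:
    claim: str
    cited_excerpt: str
    supported: bool
    rationale: str

def ask_llm(prompt: str) -> str:
    # Placeholder: wire up whatever model/provider you actually use.
    raise NotImplementedError

def check_citation(claim: str, cited_excerpt: str) -> CitationCheck:
    answer = ask_llm(
        "Does the excerpt below substantiate the claim? Answer YES or NO, "
        f"then explain briefly.\n\nCLAIM: {claim}\n\nEXCERPT: {cited_excerpt}"
    )
    supported = answer.strip().upper().startswith("YES")
    return CitationCheck(claim, cited_excerpt, supported, answer)

def flag_unsupported(pairs: list[tuple[str, str]]) -> list[CitationCheck]:
    # Anything the model can't confirm gets flagged for the human reviewer.
    return [c for c in (check_citation(*p) for p in pairs) if not c.supported]
```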
I'd go one further and say all published papers should come with a clear list of "claimed truths", and you should only be able to cite a paper if you're linking to one of its explicit truths.
Then you can build a true hierarchy of citation dependencies, checked 'statically', and have better indications of impact if a fundamental truth is disproven, ...
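Very loosely, the "static check" could look something like this: each paper publishes enumerated claims, each citation targets a specific claim ID, and retracting one claim lets you walk the graph to see everything downstream that leaned on it (all names here are made up for illustration):

```python
# Loose sketch of "citations must point at an explicit claim": claims form a
# dependency graph, and disproving one lets you find every claim that
# transitively relied on it.
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str                                        # e.g. "smith2024:claim-3"
    statement: str
    depends_on: list[str] = field(default_factory=list)  # claim IDs this claim cites

def downstream_of(disproven: str, claims: dict[str, Claim]) -> set[str]:
    """Every claim that transitively depends on the disproven one."""
    affected: set[str] = set()
    frontier = {disproven}
    while frontier:
        newly_hit = {
            c.claim_id
            for c in claims.values()
            if any(dep in frontier for dep in c.depends_on)
        } - affected
        affected |= newly_hit
        frontier = newly_hit
    return affected
```

Then the "impact if a fundamental truth is disproven" is just the size (and contents) of that downstream set.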
Could you provide a proof of concept paper for that sort of thing? Not a toy example, an actual example, derived from messy real-world data, in a non-trivial[1] field?
---
[1] Any field is non-trivial when you get deep enough into it.
I'd say my expectation is that papers should be minimal in their effect, and compounding. If your project proves new facts, either they should be clearly enumerable (with as much specificity as possible), or your project/presentation/paper should be broken up to the point where your findings ARE enumerable.
Hey, I'm part of the GPTZero team that built the automated tooling to get the results in that article!
Totally agree with your thinking here: we can't just hand this to an LLM, because you need industry-specific standards for what counts as a hallucination versus a match, and for how to do the search.
Corporations don't need cameras to track people; they've been able to track Bluetooth emissions for well over a decade. Unless you turn off a lot of connectivity settings, smartphones are pretty much open tracking devices.
I must be doing something wrong because incremental builds regularly take 30-60 seconds for me. Much more if I add a dependency. And I try to keep my crates small.
As a sibling comment points out, it's likely to be mostly link time, not compilation time.
The most recent Rust versions ship with `lld` as the default linker (on x86_64 Linux, at least), so that shouldn't be the case anymore. AFAIK `lld` is a bit slower than the `mold` linker, but it's close, and far closer than the system linker that was previously used by default.
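If you're on an older toolchain or want to squeeze out a bit more, you can opt into a faster linker per project yourself. Roughly like this on x86_64 Linux, assuming `clang` and `mold` are installed (adjust the target triple for your platform):

```toml
# .cargo/config.toml in the project (or ~/.cargo/config.toml for everything)
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

`cargo build --timings` is a quick way to confirm whether linking is actually where the time goes before bothering with this.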
It's still my number one complaint about Rust, even though it has definitely gotten better over time. Partly my fault - I'm stuck on a slightly underpowered Windows machine at work. My Macs at home compile significantly faster. But as soon as I add certain crates like serde, tokio, windows, and some others, the compile times grow quickly. It also means that tasks Rust isn't necessarily designed for, but can be used for (like web backends), become frustrating enough to dissuade me from using it as a do-it-all language, despite certain aspects of the language being really nice. Even a 30-45 second tweak-test loop becomes annoying after a while. Again, this is more of a personal problem than anything, but the point is I personally am constantly frustrated with the compile times.