Hacker News | new | past | comments | ask | show | jobs | submit | eoi's comments | login

Also worth noting is multirust-rs, a reimplementation of multirust in Rust, which notably adds Windows support. I haven't tried anything fancy with it yet but so far it just works.

https://github.com/Diggsey/multirust-rs


The intention is to unify this, multirust, and rustup.sh. It will probably just be branded "rustup."


Thank you so much for posting this. I was expecting something like this to pop up, but I never tried searching for it.


The article defines bias as follows:

> Want to know if the selection process was biased against some type of applicant? Check whether they outperform the others. This is not just a heuristic for detecting bias. It's what bias means.

Under that definition, you have been biased against A. [edit: on reflection I see this as a weakness of his definition. I missed that your selection process does in fact select the best candidates.]
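To make the article's test concrete, here is a hypothetical Rust sketch (made-up numbers, and a simple LCG standing in for a real RNG, since std has none): both groups draw ability from the same distribution, the selection process handicaps group A, and among those selected, group A outperforms, which is exactly the signal the article treats as bias.

```rust
// Hypothetical illustration: both groups draw "ability" from the same
// distribution, but the selection process handicaps group A by a fixed
// penalty. A simple LCG stands in for a real RNG.
fn abilities(mut seed: u64, n: usize) -> Vec<f64> {
    (0..n)
        .map(|_| {
            seed = seed
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            (seed >> 33) as f64 % 100.0 // roughly uniform in [0, 100)
        })
        .collect()
}

fn mean(v: &[f64]) -> f64 {
    v.iter().sum::<f64>() / v.len() as f64
}

fn main() {
    let (cutoff, penalty) = (70.0, 10.0);
    let group_a = abilities(1, 10_000);
    let group_b = abilities(2, 10_000);

    // Group A must effectively clear a higher bar than group B.
    let sel_a: Vec<f64> = group_a.into_iter().filter(|&x| x - penalty >= cutoff).collect();
    let sel_b: Vec<f64> = group_b.into_iter().filter(|&x| x >= cutoff).collect();

    // The group the process is biased against outperforms among the selected.
    assert!(mean(&sel_a) > mean(&sel_b));
    println!("selected A mean: {:.1}, selected B mean: {:.1}", mean(&sel_a), mean(&sel_b));
}
```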


Yes, but that's not the common usage of the word, nor how most people understand it. By that usage you could say Ivy League schools are NOT biased against Asians, since Asian graduates aren't more successful than non-Asian ones; yet nobody says that.


Hypothetical logic is flawed anyway. Higher ability Asian graduates could be less / only equally successful in the workplace due to pervasive external bias too. A lot of this is exacerbated by the fact that "soft skills" are more important for high status careers, and your "soft skills" are pretty much defined by tribal associations. It's the core of how we interact socially, and it causes problems that are really only fixed by alleviating scarcity.


Unless you know what exactly caused A to outperform others, you won't really know if the process is biased or what made it biased.

When asserting biases, you must first distinguish them from random noise. Using pg's logic, every selection process that isn't perfect is biased.
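As a hypothetical sketch of the noise point, in Rust: take a completely unbiased selection (same distribution, same cutoff for both groups, with a simple LCG standing in for a real RNG) over small samples, and the selected means still differ, so a gap alone can't distinguish bias from noise.

```rust
// No bias anywhere: both groups face the same cutoff and draw ability
// from the same distribution. With small samples the selected means
// still differ, purely from sampling noise.
fn abilities(mut seed: u64, n: usize) -> Vec<f64> {
    (0..n)
        .map(|_| {
            seed = seed
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            (seed >> 33) as f64 % 100.0 // roughly uniform in [0, 100)
        })
        .collect()
}

fn main() {
    let cutoff = 70.0;
    let select = |seed: u64| -> Vec<f64> {
        abilities(seed, 50).into_iter().filter(|&x| x >= cutoff).collect()
    };
    let mean = |v: &[f64]| v.iter().sum::<f64>() / v.len() as f64;

    let (a, b) = (select(1), select(2));
    let gap = (mean(&a) - mean(&b)).abs();

    // A nonzero gap appears even though the process treats both groups
    // identically.
    assert!(gap > 0.0);
    println!("spurious gap between identical processes: {:.2}", gap);
}
```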


>> This is not just a heuristic for detecting bias. It's what bias means.

> Under that definition

That's not a definition. It's a claim about what the term "bias" means.


"How does Rust's speed compare to C/C++" is something of an interesting question. I looked at some micro benchmarks that Rust did poorly on, and listened to some chatter, and this is some of what I saw:

1. Alioth's regex-dna: this was one of Rust's worst results in the benchmark game. The author (burntsushi) of the Rust regex library being used hypothesized that the algorithm the library used was the cause. He worked on it, and now Rust performs excellently on that benchmark.

2. Alioth's n-body: the fast implementations are directly using SIMD. In Rust that is currently on an active path towards stabilization.

3. A networking micro benchmark: I've lost the link here, but the Rust implementation made the simple mistake of allocating a large array by pushing one element at a time onto an initially empty dynamic array. Rust compared well after the fix.

4. Rust's buffered reader zeros its internal buffer, and its default buffer size is [was?] pretty large. This made some IO operations pretty slow without manually tuning the buffer size, and still left some overhead after tuning. In the Reddit thread where this complaint was raised, someone announced a pull request (since merged) that significantly reduces the zeroing overhead.
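I don't have the original code from (3), but the mistake and the fix presumably looked something like this sketch:

```rust
fn main() {
    let n = 100_000;

    // The slow pattern: start empty and push, so the Vec repeatedly
    // reallocates and copies its contents as it grows.
    let mut slow = Vec::new();
    for i in 0..n {
        slow.push(i);
    }

    // The fix: reserve the final size up front so no push reallocates.
    let mut fast = Vec::with_capacity(n);
    for i in 0..n {
        fast.push(i);
    }

    assert_eq!(slow, fast);
    assert!(fast.capacity() >= n);
}
```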

I found these examples encouraging, except for the last. The former are the regular growing pains of any language ecosystem and its benchmark implementations. Only the last looked like an issue where Rust's design might cause a measurable runtime cost. In all cases it was cool seeing people tackle the issues and get a good speedup. Worth mentioning is that although Rust's benchmark game results are still behind C's, they are now in the same ballpark as C++'s.
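On (4), the manual tuning mentioned above is done through `BufReader::with_capacity`. A quick sketch, using an in-memory `Cursor` as the reader so it's self-contained:

```rust
use std::io::{BufRead, BufReader, Cursor};

fn main() {
    let data = b"line one\nline two\n";

    // BufReader's default buffer is large (and gets zeroed on
    // construction); for workloads hurt by that, with_capacity
    // lets you pick a smaller buffer.
    let reader = BufReader::with_capacity(4 * 1024, Cursor::new(&data[..]));

    let lines: Vec<String> = reader.lines().map(|l| l.unwrap()).collect();
    assert_eq!(lines, vec!["line one", "line two"]);
}
```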

[1] http://benchmarksgame.alioth.debian.org/u64q/performance.php... , https://www.reddit.com/r/rust/comments/3b2i0f/psa_regex_is_n...

[2] https://www.reddit.com/r/rust/comments/3i85lg/simd_in_rust/ , http://benchmarksgame.alioth.debian.org/u64q/performance.php...

[4] https://www.reddit.com/r/rust/comments/3cgaui/trying_to_find...

[Rust/C++] http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...


I'd like to mention that I've been happy with the docs, once I learned to find them.

To take the given example, I would start at http://doc.rust-lang.org/std/ and hit `s` to start a search. Searching for `str::lines` gives http://doc.rust-lang.org/std/primitive.str.html#method.lines as the second result, which has the two methods conveniently next to each other, succinctly explains them, and shows an example of using each one.
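For reference, the method that search lands on behaves like this (quick sketch):

```rust
fn main() {
    let text = "first\nsecond\nthird";

    // str::lines splits on line endings and drops the terminators.
    let lines: Vec<&str> = text.lines().collect();
    assert_eq!(lines, ["first", "second", "third"]);
}
```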

I've had good luck with docs for the few third party libraries I've tried so far too. The recommended doc generation creates pages in the same format as the std docs, so as you get comfortable navigating them you get comfortable navigating many third party docs as well.


I clicked one of the legitimate downloads. The link takes you to a new page and waits a few seconds before starting the download.

The most prominent element of this page, centered just below the header, is a large bright green "Start Download" button. That button is part of an advertisement, but is blatantly designed to get the majority of its clicks from users who intended to download software from the project hosted on SF. I see it as a malicious download.

I realize you may have been referring specifically to the recent SF malware bundling, but I want to stress that this ad came up on my very first click on one of those links. Ads like that have been a fixture on SF for years; it's impossible to believe that preventing them is a priority. The opposite seems more likely: the page design minimizes the legitimate controls and emphasizes the scam link.

Even if I knew the installer was free of opt-out malware, I would hesitate to send an SF link to a friend or family member. The clearest call to action they're likely to see is a malicious download impersonating the software they want.


I agree with you that there is value in these varied opinions. I think it's part of why this sort of post tends to get upvoted regardless of whether it is positive or negative about the technology.

In his blog post from roughly a year ago about his first week using Go, he says this:

> gonna be dealing with Go for a while and will have to make the best of it.

I could be wrong, but I get the impression that if it were solely up to him the project might have been written in Erlang either from the start or after trying Go for that first week.


I think it's more likely a deliberate move by the White House. They want us to believe it was Russia, but they either aren't ready to publicly substantiate the claim or don't want to be held to it politically, so they make the accusation through an anonymous source.


I think a machine learning model provides a nice version of Graham's "The same book would get compiled differently at different points in your life."

Using an artificial neural net analogy instead of a compilation analogy: "The same book would optimize your neural net towards a different local minimum at different points in your life."


In a way I agree with this at the level of a package's external interface, but I think it's nice that with CL style packages you can call your local functions by their natural name. E.g. it may be "my-package::string-split", but because it is only used within "my-package" it can always be referred to as "string-split".


I've used StumpWM before and enjoyed it. It's pleasantly surprising to see how much development seems to have happened on it in the last year.

