Hacker News | syncopate's comments

SQLite's test suite is substantial and not open source; I doubt you could really fork it in a meaningful way.


That raises a practical barrier to forking, but someone who is willing to get a testing suite up and running can fork the project and benefit from upstream changes on an ongoing basis.


I particularly like how it can predict hair styles. Not only how a barber would do dreads but also how they would color it and how they happened to brush their hair on a given day. Very impressive!


Isn't that called geofencing? It's been around for years. I always felt that the main reason e.g. Facebook wants you to use their app is to be able to supply data for conversion tracking of geofenced ad campaigns, e.g. someone saw an ad for a promotion at some burger place and then actually came into close proximity of its WLAN.
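
The proximity check at the heart of a geofence is just a distance test against a circle. A minimal Python sketch using the standard haversine formula; the coordinates and the 100 m radius are made up for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """True if the reported device position falls within the circular fence."""
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m

# A user who saw the ad later reports a position ~50 m from the restaurant:
print(inside_geofence(52.5200, 13.4050, 52.5201, 13.4057, 100))  # True
```

A real conversion pipeline would of course also need the app to report positions in the first place, which is the point the comment above makes.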


I am not Jeremy, but I have taken part in the current course and have also taken lessons by Siraj (who also teaches at Udacity), Andrew Ng, and Karpathy. I think all of them are awesome teachers. Like Siraj's lessons, the fast.ai course is full of practical examples, whereas Ng's and Karpathy's courses are more theoretical. What makes the fast.ai course unique compared to Siraj's (who covers a lot of topics very briefly) is that Jeremy goes into depth, taking a long time to explain things, so you have a lot of opportunities to understand what's going on. The 2018 course actually doesn't contain that much new content compared to the 2017 course; rather, Jeremy took the time to make everything easier for everyone to understand and built a better Python library. Another unique point about Jeremy is that he and Rachel want not only to teach people but to inspire a community of people working together in teams. The fast.ai library will not just be a project for the course; it will be something that a lot of people use and contribute to to make PyTorch better.


Is Rachel the lady who keeps interrupting Jeremy so that he loses his flow all the time, making the lectures less enjoyable?


Dr Rachel Thomas, the co-founder of fast.ai, is the person who takes the highest voted questions from our 600+ in-person students and passes them on to me, as I asked her to do. At least 4 different people must have wanted the question asked before she asks it.

This approach helps ensure that if I haven't explained something clearly enough that I get another chance to do so, and, more importantly, keeps me fresh and energized throughout each lesson (I get stale and boring without some interaction).


I am a firm believer in candid feedback; I talked to multiple people taking fast.ai, and all of them felt it disturbed the flow of the lectures. I really like fast.ai, it's an excellent hands-on course, so please don't get offended.


Since you appreciate candid feedback, I would like to explain why I down-voted you. Your post comes off as super rude and dismissive. You may have some valid criticism to give, but take more care in how you deliver it. It's a bad look for HN.


I appreciate your feedback! :) It's difficult to convey mental state over a single line of text, which gives rise to interpretations that weren't intended. Imagine I am your best friend and I say the same thing in a playful fashion - would that be more acceptable?


Absolutely! That's actually in the hacker news commenting guidelines: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith." https://news.ycombinator.com/newsguidelines.html

So to answer your question: yes, it would be more acceptable if you were saying that to me in person. In fact, I agree with you about the flow of the lectures, and I was looking for someone to bring that up, so I'm glad that it's getting discussed here.

However, Jeremy and Rachel are both reading these comments and we should strive to provide thoughtful, fleshed out feedback. The work they've done on Fast.ai is a tremendous lift, and deserves more than a drive-by comment.


Personally, I find the questions helpful, and for everyone who's watching on video, you can skip whatever portion you like, can't you?


The thing is, you can visibly see how it interrupts Jeremy: he is in the middle of explaining something, then has the face of a surprised person, loses context for a few seconds, etc. It would be better, IMO, if he just finished a small section in its entirety and then did Q&A, instead of allowing himself to be interrupted all the time. And often those questions miss the point, which is to be expected with newbies, so the lecture loses efficiency. If I didn't know most of the basics already, I'd have problems following. I found just doing the exercises better for learning.


I also find the questions very helpful. I understand that for people who already have a good understanding, the questions might interrupt the flow. But many times the questions the students raised were things I had not thought of; sometimes they even stumped Jeremy. Fortunately, in most cases Jeremy gives answers that vary in length according to whether it's a concept he will clarify later or an important concept that needs to be clarified then and there.

So I think this is a more effective way of learning/teaching, even though there is a loss in efficiency.


Rachel's mic is always off until I turn it on. So if she's asking a question, it's only after she's visually indicated that she wishes to do so, and I've found a time in my presentation that I'm ready to take it. So I'm literally never being interrupted, and can't be surprised by the fact that she's asking a question (although I may well be surprised by the content).

I do try to limit the time I wait to take a question, since I don't want to move on with a topic where I've failed to properly explain some foundational piece.


I like the questions. Some of the doubts raised are ones I have too. They improve the course a lot.


Same. There have been several times when I had a question and someone thankfully asked the exact one I had. And Jeremy does a great job explaining it. It really helps to get at some of the whys behind the magic.


You are missing the point: that is fully part of their approach of opening the course up to underrepresented groups and learning styles. It is human as well as efficient, the heart added to the mind.


I have wanted to use Rust for web development for a while, but in the past I always struggled just to get an OAuth2 example running. Has this improved recently?


Everything is always improving. Unless you let us know what you had issues with, no one can tell you if it's been made easier.

For instance, are these issues with rust you're having, or with a particular framework? Without any information, it's a meaningless question.


If the project discussed here uses React and Rust together, could I just use a React example for OAuth2 to get started? Until a few months ago, there were no examples of using any of the Rust web frameworks with OAuth2.


Couldn't one try to at least limit the feedback loop by spreading sulfur over the affected regions by plane? (Simulating a volcanic eruption that reflects sunlight)


We will almost certainly be spraying sulfur dioxide (and dealing with the acid rain) in the next few decades. It will be too little too late though.


Why simulate a volcanic eruption? We could just bury some nukes in some volcanoes and blow the tops off. Instant cooling!

Also, I hear nuclear winter is really 'cool'. We could always start a war in Korea...


It increases the complexity of the attack. These days stack cookies usually make ROP harder, but guessing the cookie only has a complexity of 8*256 (on OpenBSD), whereas XOR'ing the return address with another value increases the complexity even more. And that is good news for programs that fork a lot (like nginx) and hence don't get fresh ASLR/stack cookies for every request (unlike e.g. sshd on OpenBSD, which does fork/exec to ensure ASLR and cookies are refreshed).
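
The XOR idea can be illustrated with a toy model. This is a Python sketch of the arithmetic only, not of any real compiler's implementation (the addresses are invented, and a real scheme such as OpenBSD's RETGUARD uses a per-function cookie in the prologue/epilogue):

```python
import secrets

# One random 64-bit cookie chosen at program start (a real scheme would
# have one per function; this toy uses a single value).
COOKIE = secrets.randbits(64)

def protect(ret_addr):
    # Prologue: store ret ^ cookie instead of the raw return address.
    return ret_addr ^ COOKIE

def restore(saved):
    # Epilogue: XOR again to recover the original address before returning.
    return saved ^ COOKIE

legit = 0x00400B3D                 # hypothetical legitimate return address
saved = protect(legit)
assert restore(saved) == legit     # normal control flow is unaffected

# An attacker who overwrites the saved slot with a gadget address, without
# knowing COOKIE, makes the CPU jump to gadget ^ COOKIE, not the gadget:
gadget = 0x00401234
hijacked = restore(gadget)
assert hijacked != gadget          # fails only if COOKIE == 0 (prob. 2**-64)
```

Unlike a plain canary, which an attacker can leave untouched or leak, here the protected value is the return address itself, so a byte-by-byte guessing strategy against a forking server no longer yields a directly usable overwrite.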


OpenBSD has been expanding the fork+exec model throughout its source tree; since the OpenSSH preauth work done by Damien Miller, many more programs have followed. The list includes bgpd/ldpd/eigrpd/smtpd/relayd/ntpd/httpd/snmpd/ldapd and, most recently, slaacd and vmd.

A few remain but are being converted as they are discovered.


Perhaps OpenBSD should consider randomizing the per-process stack canary value upon fork().


How would that work? Should the kernel walk the stack to change all the saved cookie values in the forked copy? I doubt the kernel even knows where the saved cookie values are stored on the stack. Also, that would make fork quite slow, depending on how deep the stack was when the fork happened.


The post-fork canary value could be paired with the stack pointer at which it became valid. If not valid, the process could walk a linked list of pre-fork canary and stack pointer pairs, to find the correct value to use. Would be interesting to see the performance hit on such an approach.
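
The lookup being proposed can be sketched as a toy model. Everything here is hypothetical (the names, the addresses, and the assumption of a downward-growing stack); it only shows the data structure, not how a real libc or kernel would hook it in:

```python
INF = float("inf")

# Each entry: (stack pointer at the moment of fork, canary live after it).
# The sentinel covers every frame until the first re-randomization.
canary_history = [(INF, 0xC0)]

def fork_rerandomize(sp_at_fork, new_canary):
    """Called in the child after fork(): frames pushed from now on sit
    below sp_at_fork (stack grows downward) and use the fresh canary."""
    canary_history.append((sp_at_fork, new_canary))

def canary_for_frame(frame_sp):
    """Walk newest to oldest: the first fork point this frame sits below
    tells us which canary was live when the frame was pushed."""
    for sp_at_fork, canary in reversed(canary_history):
        if frame_sp < sp_at_fork:
            return canary

fork_rerandomize(0x7000, 0xC1)           # first fork, SP was at 0x7000
fork_rerandomize(0x5000, 0xC2)           # second fork, deeper in the stack
assert canary_for_frame(0x8000) == 0xC0  # frame older than both forks
assert canary_for_frame(0x6000) == 0xC1  # created between the two forks
assert canary_for_frame(0x4000) == 0xC2  # created after the second fork
```

The cost concern raised above would then shift from fork time to epilogue time: every canary check on a pre-fork frame has to consult this history instead of a single global value.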


Or not. The stack canary is not the only random value reset upon exec.


It was posted as a Show HN six days ago but didn't generate any discussion: https://news.ycombinator.com/item?id=14512531


When you deploy: Will you lose requests that are currently being processed while you restart the service? Also, will a server still receive requests while the new code is being started?


Generally: no. The worker processes will finish up their current request before shutting down and being reaped. At the whole server level, Einhorn will indeed make sure that requests are still being served as the workers get shuffled out.
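
The drain-then-exit behaviour described here is a common pattern, independent of Einhorn. A minimal sketch of the worker side in Python; the request loop and its two callbacks are illustrative, not Einhorn's actual API:

```python
import signal

# On SIGTERM the worker stops accepting new work but always finishes the
# request currently in flight before exiting.
shutting_down = False

def on_sigterm(signum, frame):
    global shutting_down
    shutting_down = True          # checked between requests, never mid-request

signal.signal(signal.SIGTERM, on_sigterm)

def worker_loop(get_request, handle_request):
    while not shutting_down:
        req = get_request()       # blocks until a request arrives
        handle_request(req)       # runs to completion even during shutdown
```

The master process (Einhorn, in this case) then only has to start the new workers, send the old ones SIGTERM, and reap them; no in-flight request is dropped as long as clients tolerate the slightly longer drain window.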


Hey, I am a bit late, but I wonder how you generally handle Unicode in C++, since the language itself does not have much support for it. That's what always makes me wary of writing any kind of server in C++.


