
This is not only much needed but will also help shape how our legal system adapts to new technologies.

I had the privilege of sitting in on an Election Law class last year at YLS. The topic was gerrymandering, with a discussion of the legal arguments presented in Vieth v. Jubelirer.

For non-lawyers: the plaintiffs' argument for what should constitute illegal gerrymandering is technically complex, drawing on statistical concepts, graphs (in the computer-science sense), and even NP-completeness. In essence, the argument was to use computers to draw all possible congressional districts, score each plan on the basis of discarded votes, and deem a plan unfairly drawn if its score falls more than two standard deviations from the mean. I found particularly striking an audio recording the professor shared of a lawyer struggling to answer John Roberts's questions on technical topics. The professor used this as a lesson: be prepared to answer questions on topics you may not have a background in, even if the expert witnesses have already explained the concepts. Unfortunately, the Court rejected the proposed test for unfair gerrymandering in a 5-4 decision, with the dissent stating that the presented method was clever and correct and should be revisited.

As we continue to push the frontiers of what we can do with computers, we need informed lawyers who can clearly present deep technical topics, and we need judges who are capable of understanding them.


Interesting. My country, Pakistan, took a different approach in the last elections. An algorithm [1] was agreed upon for drawing constituency boundaries within each district. Further, there can ordinarily be only a 10% variation between constituencies in a single district. The entire delimitation exercise was done in the open, and there were multiple review steps.

I imagine the algorithm could be further improved, but it at least ensured some amount of certainty and transparency.

------ [1] "As far as possible, the delimitation of constituencies of an Assembly shall start from the Northern end of the district and then proceed clock-wise in zigzag manner keeping in view that population among the constituencies of an Assembly shall remain as close as may be practicable to the quota": https://www.ecp.gov.pk/documents/laws2017/1-3-2020/The%20Ele...
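
For intuition, here's a toy Python sketch of just the quota rule (hypothetical data; it assumes the units are already ordered by the statute's north-first, clockwise zigzag sweep, and it ignores geography entirely):

    # Toy sketch: group population units into constituencies close to the quota.
    # Assumes units arrive pre-ordered by the statute's sweep; real delimitation
    # also handles contiguity, administrative boundaries, and review steps.
    def delimit(unit_populations, num_constituencies):
        quota = sum(unit_populations) / num_constituencies
        constituencies, current, current_pop = [], [], 0
        for unit, pop in enumerate(unit_populations):
            current.append(unit)
            current_pop += pop
            # Close a constituency once it is as close to the quota as practicable.
            if current_pop >= quota and len(constituencies) < num_constituencies - 1:
                constituencies.append((current, current_pop))
                current, current_pop = [], 0
        constituencies.append((current, current_pop))
        return quota, constituencies

    quota, result = delimit([120, 90, 110, 95, 105, 100, 85, 95], 4)
    for units, pop in result:
        # Ordinarily each constituency must stay within 10% of the quota.
        print(units, pop, "ok" if abs(pop - quota) / quota <= 0.10 else "over 10%")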


I'd argue all of that diving into technical details didn't end up mattering that much, either in Vieth itself or especially after Rucho v. Common Cause.


I can get behind statistical and CS concepts being used to detect gerrymandered districts. There's a whole related field of anomaly detection.

My quackery sense tingles when I hear that NP-completeness was mentioned in the argument. Do you have more info on the claimed relevance to gerrymandering?


Of course, you can find the full explanation in the amicus brief: https://www.brennancenter.org/sites/default/files/legal-work...

Determining to an absolute degree that a redistricting plan is excessively unfair is NP-complete, since the number of possible districtings grows exponentially. Demonstrating it to a quantitative degree is more tractable (e.g., stop drawing more maps after a few billion).

I highly recommend anyone interested read at least the summary of the above brief, but relevant details from page 4 are reproduced:

"With modern computer technology, it is now straightforward to (i) generate a large collection of redistricting plans that are representative of all possible plans that meet the State’s declared goals (e.g., compactness and contiguity); (ii) calculate the partisan outcome that would occur under each such plan, based upon actual precinct-level votes in one or more recent elections; (iii) display the distribution of the outcomes across these plans; and (iv) situate the State’s chosen plan along that continuum to reveal the degree to which that plan is an outlier. One can analyze outcomes for a statewide plan as a whole, or for an individual district within a plan. In this way, it is now straightforward to measure the quantitative degree to which a partisan gerrymander is excessive."


I'll check it out. Thanks!

Edit: I didn't find anything in that particular resource. A similar work mentioning complexity is here: https://desh2608.github.io/static/report/ohio.pdf

Roughly, it boils down to a constrained search for the best mapping of precincts into districts, which is NP-hard.
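
To see why exhaustive search is hopeless, even before adding contiguity and compactness constraints: each of n precincts can go to any of k districts, so there are k^n assignments to consider. A quick back-of-the-envelope in Python:

    # Naive upper bound on district assignments: k districts ** n precincts.
    k = 10
    for n in (20, 50, 100):
        print(f"{n} precincts, {k} districts: {k**n:.2e} assignments")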


I tried to find the curriculum for this course to check whether it covers any of the topics you brought up, but couldn't find it.

If this course doesn't include relevant legal topics, do you know any other "programming for lawyers" course that you would recommend?


Here is the OpenCourseWare site for the material [0]. It includes all the lecture videos, slides, and assignments plus notes, subtitles, and transcripts for each.

Based on the lecture titles, statistical concepts may be obliquely touched on, but probably not graphs or NP-completeness.

[0] https://cs50.harvard.edu/law/2019/


Oh come on. That’s a bunch of buffoonery presented by an academic who wants to sound much smarter than they are, and has zero application to reality. Our legal system is about dividing up the pie amongst those who can pay for it.


If you exclusively consider the value of the puts, then yes, they made a killing. But Mexico didn't use the puts to take a huge directional speculative bet - it used them to hedge oil price risk, so the profits from this trade are structured to offset losses elsewhere. For example, sustained low oil prices could put companies out of business or even make Mexican oil uncompetitive on a global scale.
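
A toy numeric sketch of how that works (made-up strike, volume, and prices, and ignoring the premium paid for the puts):

    # Hedged oil revenue: spot sales plus put payoff max(strike - spot, 0).
    strike, barrels = 55.0, 1_000_000  # hypothetical strike price and volume
    for spot in (30.0, 55.0, 80.0):
        unhedged = spot * barrels
        put_payoff = max(strike - spot, 0.0) * barrels
        print(f"spot ${spot:.0f}: unhedged ${unhedged:,.0f}, "
              f"hedged ${unhedged + put_payoff:,.0f}")

The puts set a revenue floor at the strike: they pay out exactly when spot revenue falls, which is the point of a hedge rather than a bet.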


Yup. But I didn't take the offer.


The fingerprint scanner is intended to benefit the company, not the users. It makes it significantly more difficult to share an account, which protects Bloomberg's $24,000 / user / year revenue.


One doesn't pay per user, but only for the terminal. I.e. you could have thousands of users per terminal. Per user licenses are called "Bloomberg Anywhere" which is quite similar to Refinitiv's EIKON.


I would recommend talking with a lawyer before taking any actions to circumvent international sanctions. There are very severe civil and criminal penalties.


C++ is deep and nuanced, so reading books will help structure your learning. I've found Scott Meyers's books to be great for starting out. They will give you a fantastic foundation, from which you can dive deeper. Those and others have added significantly to my ability to write clean and maintainable software.

This SO post is a great guide for where to look: https://stackoverflow.com/questions/388242/the-definitive-c-...


The problem is that the revenue per user isn't constant. If they gain users, they earn more from each user; conversely, if they lose users, they earn less from everyone. So even if only a small percentage of their users opted to pay monthly, that would have a negative impact on Facebook's earnings from the remaining population.
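
A crude Metcalfe-style illustration (network value ~ n^2, so value per user ~ n; the constants are purely made up):

    # If total network value scales like k * n**2, per-user value scales like
    # k * n, so losing users lowers what each remaining user is worth.
    k = 1e-4
    for n in (2_000_000, 1_900_000):  # before and after losing 5% of users
        print(f"n={n:,}: value per user ~ ${k * n:,.0f}")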


Exactly. An individual user isn't worth a lot, the network is.


I feel like your pricing is off. $29/week is too expensive for consumers, yet far too cheap if you plan to get reimbursed by insurers. Especially considering you'll need a lot of people, and to show improvements for them, if you want insurers to pay - this price point won't let that happen.


If you compare that to paying for a dietitian, it's actually not TOO bad. BUT, I think churn will be high due to price.

I could see a declining fee model. Just a thought.

We are trying a low-FODMAP diet for my wife. I could see her paying for it for 2-3 months, but after that we would probably find $29/week expensive... if after 3 months it went down to, say, $29 per month, then it would totally make sense, as we would use it a lot less, I think.

At least if we compare to when she stopped gluten, lactose and a few other things 6 years ago...the worst was the first 2-3 months.


A dietitian is a different service though. Registered dietitians are trained, accredited professionals who can legally give you medical advice. This service will not give you medical advice.


Also good thought on a pricing model that declines after the first several months - definitely something we're considering introducing.


> If you compare that to paying for a dietitian, it's actually not TOO bad. BUT, I think churn will be high due to price.

Could the high pricing be because this is really a guide to a solution -- once customers no longer have need for a guide, they'll stop using the service? That is, churn is naturally built in due to the nature of the service.


Same thoughts here. I don't have much time to chit-chat with the trainer; I know it's required, but I would keep exchanges to a minimum.


We've actually toyed with having a version without the coach that's quite a bit cheaper (~$9/month) - is that something you'd find more compelling?


My gf has lots of problems with various foods and manages to keep on top of it fairly well - although she has very little concrete evidence about what is actually causing the problems.

I could see her trialling a comprehensive ($30 a week) package for a few weeks, then dropping to a less intensive (~$10 a week) package that kept her on the right track.

I personally don't think she would pay $30 a week unless the service made a significant material difference to her lifestyle (which I can certainly imagine it having the potential to do).


What would it do? I looked (probably too) briefly... and thought the main value was mostly the coach. Or at least, that was the value that made me go "OMG, we need this in French _now_".


It would have some expanded functionality to take the place of the coach - it would give turn-by-turn directions each day on what to eat, and educate the user on how to avoid FODMAPs / other eating options. But it would have less customization (at least at first), and the user obviously wouldn't be able to get questions answered in the same way.


Gotcha. So that could make sense. Someone could start with the coaching, then downgrade to this option instead of just cancelling due to price once they don't use the coaching services enough.

i.e., it would help you with churn a bit, I think.


Can anyone explain how they decide which requests to reject? The blog post just mentions that excess RPS gets rejected, but couldn't rejecting arbitrary requests cause other problems?


Requests are rejected essentially when an atomic counter of inflight requests hits the limit. It's important to note that the library doesn't actually keep any kind of queue of requests. That's really not necessary because every system already has a ton of queues in the form of socket buffers, executor queues, etc...
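
A minimal sketch of that pattern (this is not the Netflix concurrency-limits API, just the counter-at-the-limit idea in Python):

    import threading

    # Minimal inflight limiter: reject immediately when concurrent requests
    # hit the limit; there is no queue of waiting requests.
    class InflightLimiter:
        def __init__(self, limit):
            self.limit = limit
            self.inflight = 0
            self.lock = threading.Lock()  # stands in for an atomic counter

        def try_acquire(self):
            with self.lock:
                if self.inflight >= self.limit:
                    return False  # shed the request
                self.inflight += 1
                return True

        def release(self):
            with self.lock:
                self.inflight -= 1

    limiter = InflightLimiter(limit=100)
    if limiter.try_acquire():
        try:
            ...  # handle the request
        finally:
            limiter.release()
    else:
        ...  # reject fast, e.g. respond 429/503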

Yes, the basic implementation does reject arbitrary requests. We do have a partitioned limit strategy (currently experimental, which is why it wasn't brought up in the tech blog). The partitioned limiter lets you guarantee a portion of the limit to certain types of requests. For example, let's say you want to give priority to live vs. batch traffic. Live gets 90% of the limit, batch gets 10%. If live requests only account for 50% of the limit, then batch can use up to the remaining 50%. But if all of a sudden there's a sustained increase in live traffic, you're guaranteed that live requests will only be rejected once they exceed 90% of the limit.
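
My reading of those semantics, as a rough Python sketch (not the library's actual implementation): a request is only rejected when the whole limit is in use and its partition has already consumed its guaranteed share.

    # Sketch of partitioned limiting as described above (hypothetical code).
    class PartitionedLimiter:
        def __init__(self, limit, shares):  # e.g. shares={"live": 0.9, "batch": 0.1}
            self.limit = limit
            self.guaranteed = {p: s * limit for p, s in shares.items()}
            self.inflight = {p: 0 for p in shares}

        def try_acquire(self, partition):
            total = sum(self.inflight.values())
            # Reject only when the overall limit is consumed AND this partition
            # is already at or over its guaranteed share.
            if total >= self.limit and \
               self.inflight[partition] >= self.guaranteed[partition]:
                return False
            self.inflight[partition] += 1
            return True

        def release(self, partition):
            self.inflight[partition] -= 1

    limiter = PartitionedLimiter(limit=100, shares={"live": 0.9, "batch": 0.1})
    print(limiter.try_acquire("live"))  # True: live is under its 90% guarantee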


My guess is they use a zero-length / small queue in front of the request pool. If the queue is full (indicating the server is at its concurrency limit), it returns a 429 (which is sort of weird - return a 503 instead). I don't think that is part of the library though - the library just provides the low-level bricks.


I think their point is basically "It doesn't matter" - any client that sends a request which then gets rejected automatically retries and is bound to get a server that is up and has capacity. The retry happens so fast that even with just a naive retry implementation, the end-user won't even notice the interruption.

From https://medium.com/@NetflixTechBlog/performance-under-load-3...

> The discovered limit and number of concurrent requests can therefore vary from server to server, especially in a multi-tenant cloud environment. This can result in shedding by one server when there was enough capacity elsewhere. With that said, using client side load balancing a single client retry is nearly 100% successful at reaching an instance with available capacity. Better yet, there’s no longer a concern about retries causing DDOS and retry storms as services are able to shed traffic quickly in sub millisecond time with minimum impact to performance.

Edit: In terms of how they decide what to reject: from reading the blog post, there is a queue, and there is a limit to how big the queue can be. Requests that come in while the queue is "full" get rejected immediately. They don't wait in the queue and time out.


Interesting topic, but the title is misleading - the article concludes by saying scientists still haven't figured out the algorithm. It would be nice if the title actually reflected the content of the article.


It does. The article says: "Garnier’s study helps to explain not only how unorganized ants build bridges, but also how they pull off the even more complex task of determining which bridges are worth building at all."

The final quote is from another researcher in a reaction quote about army ants. It's not clear what context this other researcher has in mind. Of course, 'they aren't as simple as we might think' is a pretty safe guess.

Here's the meat:

"To see how this unfolds, take the perspective of an ant on the march. When it comes to a gap in its path, it slows down. The rest of the colony, still barreling along at 12 centimeters per second, comes trampling over its back. At this point, two simple rules kick in.

The first tells the ant that when it feels other ants walking on its back, it should freeze. “As long as someone walks over you, you stay put,” Garnier said." [the 2nd rule is less explicitly stated, so you need to read the article to get a sense of it]
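
For fun, a toy simulation of rule 1 with invented numbers (the article doesn't fully spell out the second rule, so this only captures the freeze-while-walked-on part):

    # Toy march: each ant reaching the gap freezes as others walk over it,
    # until enough frozen ants span the gap; later ants simply cross.
    gap_length = 3      # frozen ants needed to span the gap (made up)
    bridge, crossed = 0, 0
    for ant in range(20):
        if bridge < gap_length:
            bridge += 1   # rule 1: walked on at the gap edge -> stay put
        else:
            crossed += 1  # bridge complete: keep barreling along
    print(f"bridge of {bridge} ants, {crossed} crossed")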


"'We’re trying to figure out if we can predict how much shortcutting ants will do given a geometry of their environment,' Garnier said." Garnier clearly states that they haven't figured out the algorithm. Then the separate researcher says “We describe army ants as simple, but we don’t even understand what they’re doing"


Then we disagree about what 'the algorithm' means. They have a bridge-building method, but can't reproduce the ants' decisions about where to place bridges. Place the emphasis where you like to decide whether they succeeded or failed.

