Hacker News | dilatedmind's comments

They are the same; there is also some functionality for managing Firebase Auth users in the GCP console.


Philly school district schools are 14% white. The school district I went to in the Philly burbs is 79% white.

The Philly school district is in a bad place financially. https://www.inquirer.com/news/pa-school-funding-trial-philad.... A district with more money will have more resources for gifted programs.

I imagine this has its roots in the demographic and population shift the city has seen starting in the 50s. Philly's population in 1990 was 75% of what it was in the 50s. I'm not an expert in this area, but I'm sure there was overhead in maintaining infrastructure, paying pensions, etc. as the population shrank.

At this point, maybe the federal government should just bail out city school districts in this situation. Why should an underfunded school district be paying a chunk of its budget on debts?


For this specific example, I think the shared library is not the correct approach. Queues work for simple FIFO behavior, but in this case they also need fairness (and perhaps other logic, like rate limiting per client, different priorities for certain clients, etc.).

For example, "Customer-A is clogging up the work queue and starving other customers out". The solution to this could look something like Linux's Completely Fair Scheduler, where every client is allocated some amount of messages per interval of time. This means messages need to be processed in a different order than they are enqueued, and queues are not good at reordering messages.

I would suggest implementing the queue abstraction as a REST or gRPC service, backed by a database like Postgres, which holds message payloads and everything related to message state (in progress, retry-after times, etc.). Now we can implement all the necessary scheduling logic within our queue service.
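To make the fairness idea concrete, here is a minimal in-memory sketch of the per-client scheduling: one FIFO per client, with dequeues round-robined across clients so one busy customer can't starve the rest. The types and names are hypothetical; a real service would keep this state in Postgres behind the REST/gRPC API.

```go
package main

import "fmt"

// FairQueue keeps a separate FIFO per client and dequeues
// round-robin across clients, so a flood from one client
// cannot starve the others.
type FairQueue struct {
	queues  map[string][]string // clientID -> pending messages
	clients []string            // round-robin order of clients
	next    int                 // index of the next client to serve
}

func NewFairQueue() *FairQueue {
	return &FairQueue{queues: map[string][]string{}}
}

func (q *FairQueue) Enqueue(client, msg string) {
	if _, ok := q.queues[client]; !ok {
		q.clients = append(q.clients, client)
	}
	q.queues[client] = append(q.queues[client], msg)
}

// Dequeue returns the next message, skipping clients whose
// queues are empty. The second return value is false when
// nothing is pending anywhere.
func (q *FairQueue) Dequeue() (string, bool) {
	for i := 0; i < len(q.clients); i++ {
		c := q.clients[q.next%len(q.clients)]
		q.next++
		if msgs := q.queues[c]; len(msgs) > 0 {
			q.queues[c] = msgs[1:]
			return msgs[0], true
		}
	}
	return "", false
}

func main() {
	q := NewFairQueue()
	// Customer A floods the queue; B enqueues a single message.
	q.Enqueue("A", "a1")
	q.Enqueue("A", "a2")
	q.Enqueue("A", "a3")
	q.Enqueue("B", "b1")
	for msg, ok := q.Dequeue(); ok; msg, ok = q.Dequeue() {
		fmt.Println(msg) // a1, b1, a2, a3 — B is served before A's backlog drains
	}
}
```

Note the dequeue order is not the enqueue order, which is exactly the reordering that a plain FIFO queue product can't express.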


Thoughts on a couple of your points:

- we don’t need any kind of backwards compatibility, we just update everything.

If you don't care about backwards compatibility, then you can stay on v1 forever. Have you considered a monorepo? That would simplify updating packages and give you the behavior you want.

- For the client to update, it’s not a simple path change in go.mod

If a package moves from v1 to v2, there are breaking changes in either API or behavior. I think this implies more than a simple change to go.mod. This also allows importing both versions of a package if necessary.
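For concreteness, this is what the major-version suffix enables (module path is hypothetical): the v2 module declares a `/v2` path, and a consumer can hold both major versions side by side during a migration.

```go
// In the library's go.mod at v2:
//   module github.com/example/widgets/v2

// A consumer can import both major versions simultaneously:
import (
	widgets   "github.com/example/widgets"    // v1 API
	widgetsv2 "github.com/example/widgets/v2" // v2 API
)
```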


> there are breaking changes in either api or behavior.

So instead of focusing on those changes I have to first fix potentially dozens of files for no reason at all.


> So instead of focusing on those changes I have to first fix potentially dozens of files for no reason at all.

You don't have to, if you don't want to upgrade to a new major version.

If you do want to upgrade to a major version, which means that there are breaking changes in a package's API or behavior - you sure as hell want to check the correctness of every single line of code written using that package. Since every file that uses that package must contain an import statement, the import statements are an easily greppable indicator of which files you have to check and potentially fix.


> you sure as hell want to check the correctness of every single line of code written using that package

Yes, I do. Doing the monkey job of changing every import line (which will be done with a global search/replace) is definitely not that.

If you're relying on import statements to tell you which files to check, you're definitely doing something wrong.


What mechanism, if not the presence of import statements, do you use to locate all the places where a certain package is used in a large project?


1. You build your project, and it fails in the places where the API changed.

IDEs can even pinpoint those locations without building the project.

2. You run tests, and they fail if the behavior changed.

Literally nowhere is "oh, do an automatic search/replace of imports" a tool for fixing your project or figuring out the necessary changes. Except in Go, apparently.


Willingness to play Russian roulette with major versions may work in small organizations. At large scale, you can't risk a subtle behavior change that still compiles. That's the source of many black-magic debugging sessions.


Grepping and replacing import paths isn't a good tool for finding subtle behavior changes, especially in big projects.


Yeah, but checking each line of code that uses the import is. Especially in big projects.


Depending on how you define transaction, this doesn't seem possible?

My approach has been to make all operations idempotent and ensure they are all run at least once.


I'm probably out of my element here, it's been a while, but... does that not scream "race condition" concern? Obviously it's going to be application-specific; given the context, though, are you just expecting "validation" from the 'other' side (i.e. rejecting requests with old checksums/timestamps), maybe? Or is this just a highly theoretical example/mindset?


I think an example of what the commenter is describing is something like:

1. User clicks "buy now" for whatever is in their shopping cart

2. Client generates some kind of transaction ID representing that they wanted to purchase the contents of the shopping cart (could be a deterministic ID)

3. Client submits this request to the server

4. Server persists the intention to start processing the purchase of the shopping cart with transaction ID X

5. Server synchronously or asynchronously starts handling the side effects of the purchase

6. If at some point the client got an error message, it can still submit the same request with the same transaction ID to retry; even if the initial request was received (but the response was perhaps lost before getting to the client), it's cheap and easy to make it idempotent by using the transaction ID

Race conditions would be made more difficult by having everything idempotent based on the transaction ID and having the transaction ID (optionally) generated deterministically.
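The flow above can be sketched in a few lines. This is an illustrative Go sketch, not a production design: `txID` derives a deterministic ID from immutable purchase properties (hypothetical field choice), and the server treats a repeated ID as a no-op.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

// txID derives a deterministic transaction ID from immutable
// properties of the purchase, so a client retry produces the
// same ID as the original request.
func txID(userID, cartHash string, unixTime int64) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s|%s|%d", userID, cartHash, unixTime)))
	return hex.EncodeToString(sum[:])
}

// Server records which transaction IDs it has already handled;
// a retry with the same ID does not re-run the side effects.
type Server struct {
	mu        sync.Mutex
	processed map[string]bool
	purchases int // counts how many times side effects actually ran
}

func (s *Server) HandlePurchase(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.processed[id] {
		return // idempotent: this transaction was already handled
	}
	s.processed[id] = true
	s.purchases++
}

func main() {
	s := &Server{processed: map[string]bool{}}
	id := txID("user-42", "cart-abc", 1700000000)
	s.HandlePurchase(id)
	s.HandlePurchase(id) // client retry after a lost response
	fmt.Println(s.purchases) // 1 — the purchase happened exactly once
}
```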

Two-phase commit is an extremely heavyweight pattern and sees far less use than something like the above.


> generated deterministically

In this case, does "generated deterministically" mean generated from some immutable value based on the initial transaction properties (who is buying what, with x quantity at y time) and not just a random UUID?


I think that would be preferable, but it depends on the use case.


I worked on a project which required some medium-scale web scraping (less than 100 million pages), and went with Node primarily because of Puppeteer.

The system had a couple dozen worker processes doing the scraping, and one coordinator which maintained a queue of pages that needed to be scraped. There was some logic to balance requests between sites, so we weren't making more than a request/s to any one in particular. The coordinator just had a REST API endpoint, which the workers would hit to get their next job and to return whatever data.

Each worker process was run on a separate AWS instance; I believe it was a t2 with unlimited CPU enabled. These are only a few dollars a day, and it was necessary to have as many IP addresses as possible (at least 5% of the sites we were scraping had some preventative measures in place, but they all seemed to be IP-based).


> Each worker process was run on a separate AWS instance; I believe it was a t2 with unlimited CPU enabled

I wonder if these kinds of processes are cheaper on Lambda.


I think he's making fun of his past self, thinking he knew everything when he was younger.


When I was young and foolish, I also used to defend random CEOs on Hacker News. But now I'm older, and rationalizing their absurd and offensive rhetoric is too time consuming for my desired lifestyle.


But he's actually making fun of his current self, because now he has the gig and doesn't want to ruin a good thing.


I agree this can be awkward, especially if you let these constructs propagate through your codebase and database. However, if a string or int can be null, then all strings and ints are essentially pointers, so you've just introduced this construct everywhere.

A couple things I have tried:

- Hope default values align with your business logic, e.g. an empty string isn't a valid name and 0 isn't a valid age.

- For partial updates, populate the existing values before unmarshalling, then unmarshal on top. Missing fields in the JSON won't overwrite the existing values.

- Unmarshal into a map[string]interface{}, which gives you the semantics you want.


I would suggest biasing your implementation against false negatives. They can always come back and update it if it's wrong, and their URL could just as easily be "valid" but incorrect, e.g. any typo in a domain name.

If it's really important, you could try making a request to the URL and see if it loads, but that still doesn't validate that it's the URL they intended to input.

Might be cool to load the URL with Puppeteer and capture a screenshot of the page. If they can't recognize their own website, it's on them.
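A lenient check along these lines could be as small as the following Go sketch (function name is made up): accept anything that parses with an http(s) scheme and a host, rather than trying to reject every odd-looking but legal URL.

```go
package main

import (
	"fmt"
	"net/url"
)

// looksLikeURL is deliberately permissive: it rejects only input
// that clearly isn't a URL, since a well-formed URL can still be
// a typo the user has to fix anyway.
func looksLikeURL(s string) bool {
	u, err := url.Parse(s)
	return err == nil && (u.Scheme == "http" || u.Scheme == "https") && u.Host != ""
}

func main() {
	fmt.Println(looksLikeURL("https://example.com/page")) // true
	fmt.Println(looksLikeURL("not a url"))                // false
	fmt.Println(looksLikeURL("example.com"))              // false: no scheme
}
```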


I think this article misses two important points on siloed QA teams:

1. QA doesn't know what can be covered by unit or integration tests

2. Since they treat our code like a black box, they may create permutations of tests which cover the same functionality

Maybe this is part of the draw of having a QA team: feature coverage rather than code coverage. The downside is that this can create a huge number of expensive-to-run manual tests which may be hitting the same code paths in functionally identical ways.

The tooling for automating manual tests of web apps is almost there: Puppeteer, recording user inputs and network calls, replaying everything, and diffing screenshots.

Since QA tests are tied to features and not code, there's also the problem of having to run all QA tests even if you're releasing minor code changes. My build tools are smart enough to return cached results for unit tests whose dependencies didn't change, but there's no equivalent for QA tests.


Yeah, this article is shallow and avoiding the deficiencies inherent to Rainforest's offering. They are defining QA challenges as a nail so they can sell you their hammer.

