Hacker News | nrh's comments

needs the ball sizes to represent the storage capacity!


Great suggestion.


That’s not true. Don’t shame me like that


Listen to his charming and deeply entertaining story of his early years at SF Examiner, told at The Moth in 1999: https://themoth.org/stories/rookie-reporter

15 minutes you won't regret!


I'd love to try this, but most of the places that I would want to use it are servers, and the rust requirements are way beyond where debian-stable lives.

Too much hassle to muck with backports and package pinning for a QoL tool - my feedback would be to try to make this install without tomfoolery on the stable branch of major distros.


Rust executables generally compile to static binaries. No, you don't need to install Rust on the server: just compile once locally and copy the binary over.
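As a sketch of that workflow (assumptions flagged: `mytool` and `myserver` are placeholders, and the musl trick may not apply to tools like shpool that link system libraries such as PAM):

```shell
# One-time: add the static musl target (assumes rustup is installed).
rustup target add x86_64-unknown-linux-musl

# Build a release binary with no glibc dependency.
cargo build --release --target x86_64-unknown-linux-musl

# Sanity-check the result, then copy it to the server.
file target/x86_64-unknown-linux-musl/release/mytool   # should say "statically linked"
scp target/x86_64-unknown-linux-musl/release/mytool myserver:~/bin/
```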


That is exactly the problem. Debian prefers the dynamic linking approach, where libraries can be updated individually. Getting a tool into the Debian repo usually comes down to packaging all of its cargo dependencies as well. That's doable (a similar thing happens for pip and other tools), but still kinda messy.


I'm confused why that's a problem. Just because Debian prefers dynamic linking doesn't mean it's incapable of running static binaries.


For most of the Rust tools I tried (not this one, though), the project provided precompiled static binaries that are easy to deploy to any system, including ancient distros like RHEL 7.

So this problem is easy to solve: just provide precompiled binaries on GitHub Releases. I believe the Rust ecosystem makes this relatively easy.
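Purely as an illustration of the pattern (everything in the URL below is a placeholder, since shpool itself doesn't currently publish prebuilt binaries):

```shell
# Hypothetical release-tarball install; OWNER, TOOL, and the version
# are all placeholders, not real release artifacts.
curl -LO "https://github.com/OWNER/TOOL/releases/download/v1.0.0/TOOL-x86_64-unknown-linux-musl.tar.gz"
tar xzf TOOL-x86_64-unknown-linux-musl.tar.gz

# Put the static binary somewhere on $PATH; no toolchain needed on the server.
install -m 755 TOOL ~/.local/bin/TOOL
```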


There's a "debian" folder; I suspect it's trivial to build a deb and install it manually?

Seems to have pretty modest install dependencies?

https://github.com/shell-pool/shpool/blob/master/debian/cont...


I had to do this:

  git clone https://github.com/shell-pool/shpool.git
  cd shpool
  sudo apt install libpam0g-dev
  # you may need to install build-essential and other packages too as the
  # build-depends field in debian/control is incomplete 
  dpkg-buildpackage -b
  sudo apt install ../shpool_0.6.2_amd64.deb
  systemctl --user enable shpool
  systemctl --user start shpool
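For what it's worth, a quick sanity check after those steps (the session name here is arbitrary):

```shell
# Confirm the user-level daemon came up.
systemctl --user status shpool

# Attach to (or create) a persistent session named "main".
shpool attach main
```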


And the most demanding part of that listing can be done on a build machine.

This benefit of compiled languages is often overlooked by folks who are (mostly/only) familiar with dynamic languages like PHP, Python, or JS/TS.


I had to `dpkg-buildpackage -b -d -us -uc` (-d skips the build-dependency check, and -us -uc skip signing the source package and .changes file).


I'm pretty into this place's business philosophy: just ship us your stuff with a little bit of information and we'll repair it. Don't call ahead, don't ask for a quote, don't send us garbage that's not worth repairing. Mucking around and/or unnecessary communication will just slow things down. We'll fix it, ship it back, and bill you. End of story. I wish my work life were so simple.


http://www.longislandindicator.com/p30.html

> We don't carry spare parts or we just have a dislike for some of these brands, so please do not send the following ... Alina, Craftsman, Helios, Kanon, Peacock, NSK, Fowler, Starrett, Sylvac, SPI, Scherr-Tumico ... Anything made in China.


How the hell that even works is beyond me; with phone or laptop repair it usually means crazy-high backlogs and other issues along the way. Bonus points for the repair people who never say no and in effect spend their whole lives in the workshop.


ATE0!

Once upon a time I worked for what was (once) the world's largest BBS. Remember the US Robotics Courier HST? 16.8kbps! Now imagine a room filled with shelf after shelf of Couriers, several hundred individual lines, at least a few hundred amps worth - all negotiating vigorously, clicking on-hook and off, all the time. Fun times.

Pretty sure I could still tell the difference between a 14.4k and 16.8k negotiation by ear.


Nifty. Which BBS was this?


An article much more relevant to that question: https://techcrunch.com/2017/02/02/slowchat/

It took Instagram Stories two quarters to catch up to Snapchat's total MAU.


it seems to come down to two main points, as far as consumer tech companies go:

1. GAAP says that revenue from a software service subscription (e.g. a cloud service) can only be recognized once the service has been delivered, not at the time of sale.

http://www.marketwatch.com/story/ea-to-ease-non-gaap-usage-a...

http://www.grayboxpdx.com/blog/post/revenue-recognition-for-...

2. Equity-based compensation is accounted for as a cost.

http://mercercapital.com/financialreportingblog/equity-based...


That is the standard definition in accounting, and it has been that way for a while:

- Persuasive evidence of an arrangement exists
- Delivery has occurred or services have been rendered
- The seller's price to the buyer is fixed or determinable
- Collectibility is reasonably assured

Stock compensation doesn't make it not an expense. It is coming from somewhere: the equity holders' pockets. Therefore it is a real cost.

The basic identity is Assets = Liabilities + Equity, so rewriting it: E = A - L.

If you pay a person cash C, assets and equity both drop: (E - C) = (A - C) - L.

If you pay the person with equity X, assets are untouched but existing holders are diluted down to E - X.

In both cases the equity holders give up the same value, so stock options are not free and need to be treated as an expense.
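To make the equivalence concrete, here's a tiny worked example with made-up numbers (nothing here comes from the thread):

```shell
# Balance sheet identity: Assets = Liabilities + Equity, so E = A - L.
A=100; L=40; E=$((A - L))                 # E = 60

# Pay a 10-unit salary in cash: assets drop, so equity drops with them.
A_cash=$((A - 10))
E_cash=$((A_cash - L))
echo "equity after cash pay: $E_cash"     # 50

# Pay the same salary in stock: assets untouched, but existing holders
# are diluted by the 10 units of equity granted.
E_stock=$((E - 10))
echo "equity after stock pay: $E_stock"   # 50
```

Either way the holders end up with 50, which is the commenter's point: the expense is real.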


Spotifier here. For the record, I hate puppet with the fiery intensity of a thousand suns. It's also pretty hard to magically make go away. I could probably do a whole talk on why puppet is difficult to kill, it's the kudzu of config management. I think it is the worst.

We've got our warts and a pile of tech debt, and I wouldn't want anyone to think otherwise. Containers are a part of a long-term strategy to get away from puppet and onto more idempotent units of deployment, and move a lot of what is considered to be "configuration" back into the build process where it belongs.


Is there any publicly available information where we can read about the deployment process at the company?


Spotifier here. Frankly, price is not the biggest factor in a decision like this. If we were going for the lowest cost cloud option, it probably wouldn't be either AWS or Google - there are other providers who are hungrier for business that would be willing to do deep cuts at our scale.

The way we think about this is that there are basically two classes of cloud services: commodities and differentiated services. Commodities are storage/network/compute, and the big players are going to compete on price and quality on these for the foreseeable future (as with most commodities).

The differentiated services stuff is a bit more interesting. Different players have different strengths and weaknesses here - AWS has way, way better capabilities when it comes to administration, access control, and identity management, for example (which is actually pretty important when trying to do this in a large org). The places where Google is strong (data platform) are the places that are most important for us as a business.

Compelling: dataproc+gcs, bigquery, pubsub, dataflow.
Made it safe: high-enough quality, cheap enough.

What more would you like to know?


Hey nrh,

Nice! Did you have the data tooling built out before you went to Google Cloud? If you did, I could imagine the migration was pretty hard as well.

Also, all of those seem relatively possible with AWS Redshift, Kinesis, and Data Pipeline. I'm interested in what Google Cloud had to offer, spec-wise.


I would categorize data tooling as a moving target: we have some, it's never enough, and it probably won't ever be enough (p.s. obligatory we're hiring!).

I think this Quora post does a good job of redshift vs. bigquery: https://www.quora.com/How-good-is-Googles-BigQuery-as-compar...

More generally speaking, (excuse me for being a little hand-wavey here) many of the AWS offerings feel like polished, managed versions of familiar tools. Redshift, for example, feels a bit like "hey we figured out how to abstract away a bunch of mysql instances to feel like a big processing cluster". That's not a bad thing, necessarily. The google stuff feels much more intentional - "we need to solve the problem of doing these sorts of queries at scale" vs. "we need to solve the problem of scaling mysql to solve these types of queries"

Maybe they're just better at abstraction, but whatever - that works for me!


So it sounds like BigQuery was the deciding factor in this case?


I'd say the data platform overall; BigQuery is certainly great, and so are some other bits.


Awesome, good to know. Thanks!


Are you dropping Cassandra?


Spotifier here. This is an important point. I have nothing bad to say about Cloudera or HWX (disclaimer: we're an HWX customer - we've had a pretty good experience), but I don't really see a compelling reason at this stage to manage your own cluster(s) (HIPAA/regulatory constraints, maybe?)

Getting shared storage plus independently operated/scaled compute clusters on top of that storage isn't easily achievable with the standard Hadoop stack, and building that on top of HDFS is non-trivial.


In fact, I don't think large orgs like you (Spotify) really want independently operated clusters. That prevents easy sharing of data, causing data silos to appear. You really want to have true multi-tenancy, which isn't in Hadoop yet. Hadoop has worked more on Kerberos support at the cost of features like easy-to-use access control - Apache Ranger or Sentry anybody!?!?


Yahoo's Hadoop clusters operate in a multi tenant fashion for precisely this reason: to ease sharing of data between groups.

