I'd love to try this, but most of the places that I would want to use it are servers, and the rust requirements are way beyond where debian-stable lives.
Too much hassle to muck with backports and package pinning for a QoL tool - my feedback would be to try to make this install without tomfoolery on the stable branch of major distros.
That is exactly the problem. Debian prefers the dynamic linking approach, where libraries can be updated individually. The problem usually comes down to having all the cargo dependencies packaged inside the Debian repo. Which is doable (a similar thing happens for pip and other tools), but it's still kinda messy.
Most of the rust tools I tried (not this one, though) provided precompiled static binaries, which are easy to deploy to any system, including ancient distros like RHEL 7.
So this problem is easy to solve: just provide pre-compiled static binaries on GitHub Releases. I believe the rust ecosystem makes this relatively easy.
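For what it's worth, this is roughly what a static build looks like with the musl target (just a sketch; shpool itself links against PAM, so it may need more care than a plain cargo project):

rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
# the binary in target/x86_64-unknown-linux-musl/release/ is statically
# linked, so it can be copied onto old distros without worrying about glibc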
git clone https://github.com/shell-pool/shpool.git
cd shpool
sudo apt install libpam0g-dev
# you may need to install build-essential and other packages too as the
# build-depends field in debian/control is incomplete
dpkg-buildpackage -b -us -uc  # binary-only build; -us -uc skip GPG signing
sudo apt install ../shpool_0.6.2_amd64.deb
systemctl --user enable shpool
systemctl --user start shpool
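From there, if I'm remembering the README right, day-to-day usage looks something like this:

shpool attach main   # start (or reattach to) a persistent session named "main"
shpool list          # show the sessions the daemon is keeping alive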
I'm pretty into this place's business philosophy: just ship us your stuff with a little bit of information and we'll repair it. Don't call ahead, don't ask for a quote, don't send us garbage that's not worth repairing. Mucking around and/or unnecessary communication will just slow things down. We'll fix it and ship it back and bill you. End Of Story. I wish my work-life were so simple.
> We don't carry spare parts or we just have a dislike for some of these brands, so please do not send the following ... Alina, Craftsman, Helios, Kanon, Peacock, NSK, Fowler, Starrett, Sylvac, SPI, Scherr-Tumico ... Anything made in China.
How the hell that even works is beyond me; usually in the case of phone or laptop repairs it means crazy high backlogs and other issues along the way. Bonus points for those repair people who never say no and in effect spend all their life in the workshop.
Once upon a time I worked for what was (once) the world's largest BBS. Remember the US Robotics Courier HST? 16.8kbps! Now imagine a room filled with shelf after shelf of Couriers, several hundred individual lines, at least a few hundred amps worth - all negotiating vigorously, clicking on-hook and off, all the time. Fun times.
Pretty sure I could still tell the difference between a 14.4k and 16.8k negotiation by ear.
It seems to come down to two main points, as far as consumer tech companies go:
1. GAAP says that revenue from a software service subscription (e.g. cloud service) can only be counted as revenue once the service has been delivered, not at time of sale.
That is the standard definition for accounting and has been that way for a while:
Persuasive evidence of an arrangement exists
Delivery has occurred or services have been rendered
The seller's price to the buyer is fixed or determinable
Collectibility is reasonably assured.
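A made-up example to put numbers on that (mine, not from any filing): sell a $120/year cloud subscription on January 1 and you collect $120 in cash up front, but under those criteria you only recognize $10 of revenue per month as the service is actually delivered; the remaining balance sits on the books as deferred revenue until it is earned.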
Stock compensation doesn't make it not an expense. It is coming from somewhere: it is coming from the equity holders' pockets, therefore it is a real cost.
The basic accounting identity is Assets = Liabilities + Equity.
Rewriting it: E = A - L.
If you pay a person with cash C, assets drop by C and equity drops with them:
E - C = (A - C) - L
If you pay a person with equity instead, the existing holders are left with E - X, where X is the value of the equity they give up. In both cases the owners are worse off by the same amount, so the two are equivalent. Stock options are not free and need to be treated as an expense.
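To put made-up numbers on it (mine, purely for illustration): say A = 100 and L = 40, so the existing owners hold E = 60. Pay an employee 10 in cash and equity falls to (100 - 10) - 40 = 50. Pay them 10 in newly issued stock instead and the cash stays put, but the employee now owns 10 of the equity, so the old owners are again left holding 50. Either way the owners are 10 poorer, which is exactly what an expense is.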
Spotifier here. For the record, I hate puppet with the fiery intensity of a thousand suns. It's also pretty hard to magically make go away. I could probably do a whole talk on why puppet is difficult to kill, it's the kudzu of config management. I think it is the worst.
We've got our warts and a pile of tech debt, and I wouldn't want anyone to think otherwise. Containers are a part of a long-term strategy to get away from puppet and onto more idempotent units of deployment, and move a lot of what is considered to be "configuration" back into the build process where it belongs.
Spotifier here. Frankly, price is not the biggest factor in a decision like this. If we were going for the lowest cost cloud option, it probably wouldn't be either AWS or Google - there are other providers who are hungrier for business that would be willing to do deep cuts at our scale.
The way we think about this is that there are basically two classes of cloud services: commodities and differentiated services. Commodities are storage/network/compute, and the big players are going to compete on price and quality on these for the foreseeable future (as with most commodities).
The differentiated services stuff is a bit more interesting. Different players have different strengths and weaknesses here - AWS has way, way better capabilities when it comes to administration and access control and identity management, for example (which is actually pretty important when trying to do this in a large org). The places where Google is strong (data platform) are the places that are most important for us as a business.
Compelling: dataproc+gcs, bigquery, pubsub, dataflow
Made it safe: high-enough quality, cheap enough.
I would categorize data tooling as a moving target - we have some, it's never enough, it probably won't ever be enough (p.s. obligatory we're hiring!).
More generally speaking, (excuse me for being a little hand-wavey here) many of the AWS offerings feel like polished, managed versions of familiar tools. Redshift, for example, feels a bit like "hey we figured out how to abstract away a bunch of mysql instances to feel like a big processing cluster". That's not a bad thing, necessarily. The google stuff feels much more intentional - "we need to solve the problem of doing these sorts of queries at scale" vs. "we need to solve the problem of scaling mysql to solve these types of queries"
Maybe they're just better at abstraction, but whatever - that works for me!
Spotifier here. This is an important point. I have nothing bad to say about Cloudera or HWX (disclaimer: we're an HWX customer - we've had a pretty good experience), but I don't really see a compelling reason at this stage to manage your own cluster(s) (HIPAA/regulatory constraints, maybe?)
Getting shared storage and independently operated/scaled compute clusters on top of that storage isn't easily achievable with the standard Hadoop stack, and building that on top of HDFS is non-trivial.
In fact, I don't think large orgs like you (Spotify) really want independently operated clusters. That prevents easy sharing of data, causing data silos to appear. You really want to have true multi-tenancy, which isn't in Hadoop yet. Hadoop has worked more on Kerberos support at the cost of features like easy-to-use access control - Apache Ranger or Sentry anybody!?!?