
mlpack, a C++ machine learning library, includes xeus-cling notebooks directly on their homepage: https://www.mlpack.org/

The xeus-cling work is awesome and has made it possible to do data science prototyping in C++. There are lots of other C++ notebook examples in the examples repository: https://github.com/mlpack/examples/


> This field pushes away the Van Allen Belts, a radiation swim-floaty that surrounds Earth's middle.

"swim-floaty"? I get what they are referencing, but can we really not come up with a better term than... "swim-floaty"?


That was my reaction as well. That's the point where I decided to stop wasting my time with the article, which is a real pity given that it's the second sentence.

I used to respect Popular Mechanics as the more technical and less overtly pro–military-industrial–complex cousin of Popular Science, but evidently they've fallen victim to the dumbing down of science too - not to mention the sensationalism, as evidenced by the headline.


I searched "swim-floaty" on DuckDuckGo, just in case it was a dialect thing, and amusingly this article is the first result, and seemingly the only result on the page that uses this exact phrasing.


I figured "Belt" is enough of a descriptor/visual, but apparently "swim-floaty" is the better choice of words. Go figure.


Belt is already used for the radiation belts, and I think this is referring to the equatorial toroid, where those aren't.


It's popular mechanics. It's meant to get clicks from...a certain type.


So, don't get me wrong. This is a fun project and a neat little device. I don't mean to take away from it in any way.

However, it is really important to consider why baby monitors are so primitive: because the cost of a false negative is huge. I didn't see any mention of this in the author's experiments (only a '>98% accuracy' note). So let's talk about this a little bit: is "accuracy" what we want? Probably not---I don't care if I get accidentally notified, but I care very much if I don't get notified when the baby is crying. So you want to weight your classifier's predictions heavily against false negatives (at the price of false positives). It would be good to make an ROC curve to characterize this behavior.

More importantly, though, any predictive model assumes a stationary distribution; i.e., that training conditions accurately reflect test conditions. But will they in real life? What about when your neighbor's house is under construction? Can interference from chainsaws cause the model to fail to detect the baby crying? What about the dude down the street with his super loud motorcycle? What happens then? I bet the training set doesn't have situations like this.
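To make the false-negative trade-off concrete, here's a sketch of picking a decision threshold by capping the false-negative rate instead of maximizing accuracy. The scores and labels are entirely made up for illustration---this is not the author's model, just the general idea behind walking along an ROC curve:

```python
# Pick the decision threshold that keeps the false-negative rate
# (missed cries) under a hard cap, accepting whatever false-positive
# rate that implies.

def rates(scores, labels, threshold):
    """Return (FNR, FPR) for a given decision threshold; label 1 = crying."""
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    pos = sum(labels)
    neg = len(labels) - pos
    return fn / pos, fp / neg

def pick_threshold(scores, labels, max_fnr=0.01):
    # Scan candidate thresholds from strict to lenient; return the
    # highest one whose false-negative rate stays under the cap.
    for t in sorted(set(scores), reverse=True):
        fnr, fpr = rates(scores, labels, t)
        if fnr <= max_fnr:
            return t, fnr, fpr
    return min(scores), 0.0, 1.0

scores = [0.95, 0.70, 0.35, 0.60, 0.30, 0.10]  # model confidence "crying"
labels = [1,    1,    1,    0,    0,    0]     # ground truth
t, fnr, fpr = pick_threshold(scores, labels, max_fnr=0.0)
print(t, fnr, fpr)  # threshold drops until no cry is missed; FPR rises to 1/3
```

With `max_fnr=0.0` the threshold gets pushed down until every cry is caught, and the false-positive rate climbs as the price---exactly the trade a parent would want.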

I really, really don't want to come off like a wet blanket here. But I feel obligated to, because this is a model that directly impacts the welfare of a human, and so we should at least discuss the potential drawbacks. (Again, cool weekend project; we just need to be clear about the implications of outsourcing the decision of whether the baby is crying to a black-box model where we can't interpret what it's doing.)


> (only a '>98% accuracy' note). So let's talk about this a little bit: is "accuracy" what we want? Probably not---I don't care if I get accidentally notified, but I care very much if I don't get notified when the baby is crying

Presumably this is 98% accuracy per sample or something, not per session of crying. I wouldn't want to leave my kid completely unanswered 1/50th of the time if he actually needed me, but I think with the model as trained, regular crying would eventually get noticed and that's fine with me.
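The per-sample vs. per-session distinction can be made concrete with a back-of-envelope calculation. This assumes each audio window is detected independently, which real models won't satisfy (misses are likely correlated), but it shows why sustained crying would probably get noticed eventually:

```python
# If each audio window is detected independently with probability p,
# the chance an entire crying session of n windows goes completely
# unnoticed is (1 - p) ** n.  Independence is an assumption; real
# misses (e.g. during loud background noise) are likely correlated.

def miss_probability(p_detect, n_windows):
    return (1 - p_detect) ** n_windows

print(miss_probability(0.98, 1))  # a single window: 2% chance of missing it
print(miss_probability(0.98, 6))  # e.g. a 30s cry split into 5s windows
```

Under that (optimistic) independence assumption, missing a whole session becomes vanishingly unlikely; the worry in the parent comment is precisely that the assumption fails when conditions drift from the training set.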

Somewhere else on the thread though somebody else called out the real downside of this model: it overfits on regular crying. Kids make all kinds of noises. Mine made a completely different, eldritch noise one time when he noticed a spider, for instance.


> the real downside of this model: it overfits on regular crying

I'm with you there, that's a big downside too, but that's not the "real" downside---there are like seven different downsides present with the data science going on here and it's hard for me to say which is the biggest issue, because they're all issues. Data science is not trivial!


This reminds me of the home-built garage door opener that popped up here a few months ago. When it comes to safety devices, there are literally man-decades of expertise that have gone into them. It is never a good idea to rely on something you whip up yourself over a weekend, or to convince anyone else that they could.


Was going to come here to say something very similar. I'm no luddite, but some things really shouldn't be left to tech like this...


Maybe not this specifically, but similar tech should be considered for detecting babies left in hot cars.


Automobile manufacturers are doing this now, IIRC. With sensors on the seats to remind you that you put something heavy into the car.


Hmm, I don't think so - seat sensors are used to enable / disable passenger airbags, my memory is that they only register weights over 40lb / 18kg.


> In 2016, General Motors took the lead among automotive manufacturers by introducing the Rear Seat Reminder, a technology designed to nudge drivers to check their back seats as they exit their vehicles. It uses an audible alert and a front panel message to tell drivers to check the rear of their vehicle for occupants.

> Rear Seat Reminder technology became standard on all new Chevrolet, Buick, GMC, and Cadillac four-door sedans, SUVs, and crossovers starting with the 2019 model year, and also will be standard on all 2020 model year GM pickup trucks, said GM spokesperson Phil Lienert.

> Kia, Nissan, and Subaru offer rear-seat alert systems in many of their models, according to Car and Driver, and Hyundai announced on July 31 – National Heatstroke Day – that it planned to incorporate the technology across all models by 2022. One of Hyundai’s newest innovations is the Ultrasonic Rear Occupant Alert, in which a sensor can detect the presence of a child (or pet) and activates a loud horn if the driver leaves with the child inside.

https://mashable.com/article/car-seat-alarms-prevent-hot-car...


Interesting, though hard to see how that prevents people from leaving children and pets in the front seats...


I'm not sure where you live, but in some U.S. states children are required by law to ride in the back seat of an automobile until they are, I believe, 8 years of age. And I believe various state and federal agencies recommend that children ride in the back seat until they are 13.

Presumably, then, parents are not putting their young children who cannot operate a car door into the front seat of a car and leaving them.

Pets? Sure, that's a different story.


Fearing the worst outcome ought to inspire you to make the tech better, not avoid it altogether.


But the tech exists. It's not like we're using tin cans and string and trying to replace that with a gizmo; we have the gizmo, and now we're trying to make an open-source version of it. While the goal is admirable, there's no real benefit from using a less-tested Raspberry Pi project. There are less risky ways to learn the same lessons - a video walkie-talkie, maybe.


As a father, I generally agree with what you both are saying regarding being conservative with tech choices in delicate areas such as babies, but I'd also add that 99% of the situations where my baby makes random noises (that also trigger a baby monitor) are not something that's going to have a long-term impact on the baby. Most times my baby has lost the pacifier in the dark, and finds it before I reach upstairs (or, annoyingly, between me waking up and me reaching my bedroom door).

Also there’s the scary converse: some important things do not make a noise, such as a baby suffocating in her sleep.


> So you want to weight your classifier's predictions heavily against false negatives (at the price of false positives).

Which makes me think that a simple trigger based on ambient sound level probably does the job... I suspect that many baby monitors work that way.

This also makes sense because, just in case, I'd likely want to be alerted about noise in general rather than just cries.
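A minimal sketch of the kind of dumb-but-robust level trigger described above: alert whenever the short-term RMS level of the audio exceeds a rolling ambient baseline by some margin. The frame size, margin, and baseline scheme here are all illustrative choices, not how any particular commercial monitor works:

```python
import math
from collections import deque

def rms(frame):
    """Root-mean-square level of one audio frame (a list of samples)."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

class LevelTrigger:
    def __init__(self, margin=3.0, history=50):
        self.margin = margin                 # fire at margin * baseline
        self.recent = deque(maxlen=history)  # rolling ambient estimate

    def process(self, frame):
        level = rms(frame)
        baseline = (sum(self.recent) / len(self.recent)) if self.recent else level
        self.recent.append(level)
        return level > self.margin * baseline

trigger = LevelTrigger()
quiet = [0.01] * 256
loud = [0.5] * 256
for _ in range(10):
    trigger.process(quiet)   # establish the ambient baseline
print(trigger.process(loud)) # True: well above baseline
```

No training set, no distribution-shift problem---it fires on chainsaws and motorcycles too, but for a baby monitor that's arguably the right failure mode.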


I have to agree - I like tinkering and hacking as much as the next dev, and I don't like to be negative, but... what is the point of AI/ML here, where a simple sound trigger works just fine? What problem is it actually solving? I only see this approach causing problems.


Forgive me for bringing it up, but this is also the reason why smart gun technology is a complete dead end.


Cars are every bit as dangerous as guns. But no one here seems to be in such a panic with Tesla or self-driving cars.

I feel there is a bit of undercurrent of Luddite sentiment going around here. Lots of people mentioning how "primitive" a baby monitor is or should be. But is it? It's an LCD video screen powered by modern advancements in battery tech connected wirelessly to a CCD video camera, with sensing technology to detect when a baby is crying and even provide other information such as room temp. None of this existed 20 years ago. Not to mention that between now and back then, baby monitors went through a long phase of being hot garbage. Many still are garbage.


> I feel there is a bit of undercurrent of Luddite sentiment going around here.

Oh, I'm no Luddite, don't get me wrong---I'm a machine learning researcher. I have no problem with data science. The difference between all of the complex technology you pointed out inside the baby monitor and what we're talking about here is that all of that complex technology is robust and, for the most part, well-designed! The data science work here has tons of issues, and I think the vast majority of commenters are reacting to that: machine learning and data science really can be useful... but not if you apply them really badly. Someone else commented elsewhere in the thread that a simple thresholding algorithm would be just as effective---and not suffer from the myriad potential problems of blindly applying TensorFlow because it's cool.


I wonder if there are huge advances to be made in suppressor technology? Other than the noise, guns are basically perfect in their intended function. I guess you could try to further reduce recoil.


There have been some pretty fascinating experimental weapons over the years. For example, the H&K G11 [0] used caseless ammunition. The potential advantages of that are no need to eject a spent casing, and a soldier can carry more ammunition since it weighs less.

[0] https://en.wikipedia.org/wiki/Heckler_%26_Koch_G11


That would be a game changer for sure.


If we could get a caseless ammo system to work I think that'd be pretty game changing. Lots of factors going against that though.


Yes, that's a great point.


Oh, there's a lot more stuff that can be done to guns:

1) improve aim: embed wind and angle sensors, possibly even battlefield intel (position, weather, land layout) to account for any drift that might impact the bullet. Also, auto-fire if a designated target is in the crosshairs (train the AI on human faces, combined with the previous sensing). I would not be surprised if this technology is developed sooner rather than later for armed robots, and then made smaller until it fits in a gun or at least a rifle.

2) improve/rethink propulsion. Right now almost all guns operate with some form of bullet in a casing with explosive propellant (excluding the rare caseless guns and CO2/pressurized-gas sports guns). Railguns are already a thing at "ship scale"; it will only be a matter of time until they get scaled down to hand-held guns.

3) improve projectiles. Right now bullets are dumb pieces of metal. Why not have active bullets (e.g. subminiature rockets) or bullets laced with poisons so that even a scrape kills in the end?

4) improve... guns themselves, as a concept - think laser guns a la Star Trek, highly focused microwave, sound or other energy.

In the end humanity will always improve ways to kill each other, and all the concepts are already there in sci-fi (and in the case of poison bullets, the Russians made it a reality with the Markov murder).


I think those are all really great ideas for new inventions, some of which might eventually replace firearms. But I still think firearms are basically topped out. Anything else done to them complicates them more than it improves them.


Wonderful analysis.


Similar experience here in Georgia. It sounds like there is more to do on setup morning here, since we had to show up at 5am (and didn't have things really ready until 6:30 or so). There are so many seals to check and record; lots and lots and lots of paperwork.

In Georgia, the ballots are printed from a terminal that the voter uses and then scanned, leaving both an electronic count and a paper trail (the ballot itself is ejected from the bottom of the scanner into a sealed ballot box).

However, our scanners jammed 40 minutes into the day; after a couple hours, a technician managed to come to our precinct and opened the ballot box and revealed that a lackluster design in the ballot box caused the ballots coming out of the scanner to sometimes curl up and jam. Without any realistic solution, we just had to open the ballot box every time it jammed (supervised every time to ensure no monkey business) and push any stuck ballots out of the way of the scanner so that more could be scanned. Amusingly we had good success regularly giving the machine a good shove to dislodge any stuck ballots.

We also had problems printing receipts---in our case, we only needed to print 3 from each scanner, but we ran out of scanner receipt paper. Another precinct even called us during the day looking for extra receipt paper, so none was available elsewhere either. But we dodged a bullet, since there was just enough paper to print 2 of the 3 receipts. (1 gets posted on the door of the polling place; 2 go to the county. We wrote an apology on the receipt and only sent 1 to the county. I verified on the Secretary of State website that the votes tabulated for our precinct matched what our receipts printed. Cool to be able to double-check like that!)

I spent a while thinking about what a pollworker would need to do to illegally cast ballots. It would be a tall order indeed and would require cooperation and secrecy from everyone there, since the only way to cast a ballot is to scan it, and everyone can see the scanners at all times. I can't see it realistically happening in any precinct.


Is it just me, or has Github's quality of service been continually degrading over the past several months? What is going on internally? Is this because of the Microsoft acquisition? Increased usage? An internal transition to Azure?

...is it time to move away from Github?


It might also be covid related. People are working from home, people responsible for system upkeep might not be immediately responsive, more demand on the servers for whatever reason, etc.


I would agree that the coronavirus could be a factor here. At the same time, I've been noticing issues since probably December or January (before the coronavirus started being a real problem), which makes it seem like maybe there are multiple issues.

Of course, I'm not actually internal to Microsoft or Github, so I have no idea and it's all opaque to me.


Alternative explanation - they've been deploying big new features with some regular cadence lately. New features carry risk and I think we're seeing that.


Same here - seems like these issues have been going on since last year, especially with Actions.


Maybe it's the demand side? With remote work, the intensity of usage went up at least in our company as we rely more on written communication. At the same time, some people will use the time to start side projects or get into programming.


On the other hand, shouldn't there be a productivity drop with so many people working from home while their kids also aren't in school? I'd expect that to offset any increase in demand—after all, Git isn't Slack; remote work shouldn't cause people to push all that much more often, right?


Depends... I'm getting about 40% more done, with fewer interruptions, and not having a couple hours of commute and lunch driving.


Personnel may be partially unavailable, but - home or office - developers are doing the same job as they always did. At least from the user's point of view it didn't change that much.


Cloud servers everywhere are also under much heavier load. They may have moved much of it into Azure, which is famously overloaded right now.


My guess is it's them opening their paid offering for free, and also GitHub Actions, which is CPU-intensive and does much more than traditional CI/CD tools.


Ideally GitHub Actions would be completely independent of GitHub's core services/servers (e.g. how Travis CI, Circle CI, etc. work), but that seems like it may not be the case.

Also, I'm still anticipating a fuller report on the database issues they mentioned have been the root cause of many outages over the past few months.


I'd pay $8 per month for a stable service any time over $4 per month for a service that fails during a critical build, just like happened to me this time.


Not just you.

This page https://web.archive.org/web/20190801000000*/https://github.c... has been[1] “500 internal server error” since late-December (globally it seems). Nobody cares (“not a priority” (c) support), nothing on githubstatus.

[1] blue circles on web.archive are errors too


If you view the historical uptime, it does seem like there have been more incidents in the past three months, but otherwise the waters look calm (as reported at least): https://www.githubstatus.com/uptime?page=1


Yes, and so this is actually a thing that bugs me, because I use Github every day. Over the past several months, there have been numerous days in which I've had problems (`You can not comment at this time`, 500s, etc.) and no corresponding status report.

It seems like the historical uptime page paints a far rosier picture than I am actually experiencing.


I wonder how companies like GitHub determine this when outages are geo-specific. Do they wait until an outage is affecting 50% of a geographic region before it's reported as a partial outage?


If you do nothing, it lands by default on the "git operations" view, which is by far the most stable, since, well, it consists of executing the battle-tested git program.

If you want to see the state of the GitHub "extras" instead, you'd need to select "github actions" or "webhooks", which have a fair amount of downtime (about once a week or so, which seems about right).

Interesting how the most stable component of the company is the open source one of course ^^


It's almost as if keeping a complex service like Github online and available to millions of users is hard.


Building skyscrapers is hard. Does that make it OK for them to fall down regularly?


GitHub hasn't collapsed killing thousands or needing to be completely rebuilt though, so that analogy doesn't work. This is more like there's a flood in the lobby so maintenance has closed the front door for a bit.


I didn't mean for the point to be about the consequences of the failure. What I was trying to argue against was the notion that it's fine for things to fail, just by virtue of them being hard. There are a lot of complicated systems in the world that work extremely reliably.


Planes sometimes crash without killing people or needing to be completely rebuilt; that doesn't mean those crashes aren't clearly undesirable.


It's not a good thing that Github is down. It's an inevitable thing that comes from complexity at scale though. Hard things are hard, whether that's planes, buildings, or web apps.


I wonder if it has something to do with the past 3 months being affected by the coronavirus---higher internet usage?


They've been pushing lots of features into the platform, so it's not surprising that it's a bit unstable now.


In an ideal world, pushing new features would have no impact on stable mature features like browsing files, comment threads, etc


Internally at Amazon, we consider that about 80% of issues/outages/etc. are due to changes. This may sound like a "duh," but it's based on over 10k investigations.

Much of the work is just minimizing the impact of these changes by finding them before customers do.

This includes things like unit, integration testing, canaries, cellular/ zonal / regional deploys, auto rollbacks, multi-hour bakes, auto load tests, and much much monitoring. Not to mention cross team code reviews, game days, ops reviews.


Ideally, I agree... and yet the real world is exactly the opposite ;)


That’s part of the micro service promised land, right?


No, I think it's more part of how to run a complex system with a lot of people changing stuff at once. Having good monitoring, kill switches, staged rollout, continuous deployment, and so on are all things that contribute more making a reliable service than how microserviced it is.


Too bad this is the real world.


Which world is that?


If you're looking for somewhere else, SourceHut has had no unplanned outages in 2020, despite being kept online by an army of one. The software and infrastructure are just more resilient and better maintained. Our ops guide is available here:

https://man.sr.ht/ops/

It's also the highest performance software forge by objective measures:

https://forgeperf.org/

Full disclosure: I am the founder of SourceHut.


It's unfair to GitHub to make the claim that your infra is more resilient and better maintained. Their load is orders of magnitude greater than yours. My driveway also doesn't have potholes; that doesn't mean it's more resilient than the freeway.


I don't think so. I backed up the claim here:

https://news.ycombinator.com/item?id=22936448

SourceHut is at least 10x lighter weight and has a distributed, fault tolerant design which would allow you to continue being productive even in the event of a total outage of all SourceHut services.


Sidebar, I just want to say, you are one of the few people I’ve observed doing actual “modern” web development.

When most people talk about “modern web” or modern anything in software they think it means “using all the latest tools”.

That often means things like ES6 and Webpack, which have nice surfaces, but which create nightmares under the hood.

That’s the opposite of what modern architecture was. It was about embracing the constraints of materials. Given the properties of concrete, what is the limit of what you can do with it. Go there, and no further. And don’t cover it up, just finish the dang slab and get on with the rest of the house.

ES6 means transpiling, which means webpack, which means a massive machine of hidden complexity, which if you’re lucky exposes a nice smooth surface where everything is arrow functions and named exports. And if you’re unlucky is a flimsy piece of cardboard over the nightmare underneath.

You (SourceHut) seem to be building a UI that actually takes note of how the browser is. And you are trying to push the big numbers... how reliable your service can be, how many endpoints can one person maintain, while letting the materials of the web (forms, urls) dictate the details.

That’s true modernism.

So, bravo. I’m glad to see you out in the world. It takes courage to step outside of the norm and I’m rooting for you.


Just wanted to interject that browsers (other than IE 11) have over 98% coverage for ES6 without transpiling.


Care to expand on this some more? Perhaps you have some other examples of good front-end and server-side modern-day web development?


I'm sure you've done a great job building up your infrastructure, but if you have the level of traffic Github has, what would your uptime be?


Who's to say? It's not GitHub scale, and even if everyone in this thread moved to SourceHut, it still wouldn't be GitHub scale, but it would be serving your needs just fine. I feel totally comfortable recommending SourceHut over GitHub as a service which can be expected to have better uptime and performance, because it is a fact - even if we operate at different scales.

And I believe sr.ht would beat out GitHub at their scale anyway. The services are an order of magnitude more lightweight. And the design is more fault tolerant: we use a distributed architecture, so one part of the system can go down without affecting anything else - as if GitHub's issues could go down without anything else being affected. And many of our tools are based on email, a global fault-tolerant system, which would allow you to get your work done more or less unaffected even if SourceHut was experiencing a total outage. We'd automatically get caught back up with what you were up to in the meanwhile once we're online, too.

I've spoken to GitHub engineers about some of the internal architectural design of GitHub, too, I'm confident that SourceHut's technical design beats out GitHub's in terms of scalability. And, despite already winning by a good margin, I'm still spending a lot of effort to push the envelope further on performance and scalability.


> Who's to say?

And then you go on to say it. I'm glad that SourceHut exists, and I like many of its principles, and it's probably better designed too, but walking into a thread where someone is having an outage and then claiming that you'd do much better is in poor taste no matter how good you are or how many of your services work offline.


I responded directly to someone who said they were considering alternatives, and wouldn't've otherwise.


Right, and I think it is great to bring up how your service can handle outages better than GitHub would due to it being decentralized. The part I have issue with is saying that you'd do better than GitHub about keeping your site up, pointing to the issue that they are in the middle of resolving–that just seems like kicking them while they're down, especially since you haven't actually shown that you can do better. (Yes, you have good uptime in the past, but I don't see what's stopping the power going out to some of your servers, or you pushing a bug into production, or any number of other things that shouldn't go wrong but often do, especially as the number of users increases.)


>what's stopping the power going out to some of your servers

Redundant power supplies

>pushing a bug into production

Nothing, but again, SourceHut is demonstrably better in this regard: because it's distributed, a bug in production would only affect a small subset of our system, and the system knows how to repair itself once the bug is fixed.

And I don't think I need to apologise for kicking Goliath while he's down. Someone said they want alternatives, so I pitched mine with specific details of how it's better in this situation, and that doesn't seem wrong to me. I would invite my competitors to do the same to me. We should be fostering a culture of reliability and good engineering - and if I didn't hold my competitors accountable, who will? "Here's an alternative" has more teeth than "I wish this was better."


[deleted]


I'm referring to the commit to which I initially replied:

https://news.ycombinator.com/item?id=22935985

"...is it time to move away from Github?"


Yeah, I reread your comment, but you responded before I deleted apparently :-) My mistake.


> SourceHut over GitHub as a service which can be expected to have better uptime and performance, because it is a fact

Most of us could throw any of the open source solutions on a $20 Linode instance and probably have excellent uptime. How many active repos do you host, and on how many servers?


About 18K git & hg repositories, for about 13.5K users. We also run about 5,000 CI jobs per week, including for some large projects like Nim and Zig, Neovim, OpenSMTPD, etc. We have 10 dedicated servers at the moment. And I didn't throw an open source solution on these servers - I built these open source services from the ground up.


So you're comparing your scalability with a company with over 40m users and 100m repos.

Can you talk about the geographic distribution of your 10 servers?


I would like to remind you of my earlier point:

SourceHut is not the same scale as GitHub. This does not change the fact that SourceHut is faster and more reliable. We have an advantage - fewer users and repos - but still, that doesn't change the fact that we're faster and more reliable.

This has been objectively demonstrated as a numerical fact:

https://forgeperf.org

And yes, 9 of those servers are in Philadelphia (the other is in San Francisco, but it's for backups, not distribution). That doesn't change the fact that, despite being more distant from many users, our pages load faster. In this respect, we have a disadvantage compared to GitHub, but we're still faster.

GitHub and Sourcehut are working at different scales. That doesn't change the fact that SourceHut is faster.


I was considering your claim:

> we use a distributed architecture

> SourceHut is faster

I wasn't questioning that some of the web features are fast. I'm sure when Github was 10 servers their pages were fast too. I suspect if I threw Gitlab on a 9-server cluster on AWS they'd also be quick.


Not geographically distributed, but distributed in the sense that different responsibilities of the overall application are distributed among different servers, which can fail independently without affecting the rest. Additionally, the mail system on which many parts of SourceHut relies is distributed in the geographical sense, among the hundreds of thousands of mail servers around the world which have standard and 50-year-battle-tested queueing and redelivery mechanisms built in.

And yes, throwing GitLab on a 9 server cluster on AWS might be fast. But, I'm ready to bet you that SourceHut will be faster than it still, and I have a ready-to-roll performance test suite to prove it. And I know that SourceHut is faster than GitLab.com and GitHub.com, and every other major host, and you don't have to go through the trouble of provisioning your own servers to take advantage of SourceHut's superior performance.


> This has been objectively demonstrated as a numerical fact:

While your tests are indeed objective, I don't think they're very useful. For example, why does your performance test ignore caching?

GitHub's summary page loads 27KiB of data for me unauthenticated, which is about 6% of the 452KiB you're displaying in your first table. The vast majority of developers who browse GitHub will not be loading 452KiB of static assets every single page load.

Anecdotally, GitHub's "pjax" navigation feels about as fast as SourceHut on my aging hardware.


Even with caching, SourceHut is a lot smaller than that. SourceHut benefits from caching, too - the repo summary page comes from 2 requests and 29.5K to 1 request and 5.7K with a warm cache. And in many cases, the cache isn't the bottleneck, either - dig into the Lighthouse results for specific pages to see a more detailed breakdown.


Thanks for being so transparent about your operations.

Maybe this is somewhere in the manual and I missed it, but do you have some way of automating the configuration of your hosts and VMs? For example, do you use something like Ansible?


No, I provision them manually. Being based on Alpine Linux makes this less time-consuming and more deterministic. At some point I might invest in something completely automated, but right now the manual approach is simpler - and if it's not broke, don't fix it.


Ah, OK. Also, have you written anywhere about why you chose to use colocation rather than VPS (or "cloud") hosting, or leased dedicated hosting for the CI system? If you could use someone else's hardware rather than having to select, buy, and set up your own, then at least in theory, you could spend more time on other things. But I'm sure you have your reasons for making the choice that you did. I'm just curious about what those reasons are, if you're inclined to share.


There are lots of reasons, but the most obvious one is cost. All of SourceHut's servers are purpose-built for a particular role, and their hardware is tuned to that. The server that git.sr.ht runs on is pretty beefy - it cost me $5.5K to build. I paid that once and now the server belongs to us forever. I ran the same specs through the AWS price estimator, and it would have cost ten grand per month.


A little bit off topic, but I'm just wondering: does it support git write access over HTTPS, not just read-only?


No, write access is only supported over SSH, for security reasons. SSH key authentication is stronger than password authentication, and git.sr.ht doesn't have access to your password hash to check anyway.


Yeah, I'm sure there are no possible scaling issues between your service (in how many people use it / how many repos are active) vs. GitHub or GitLab...


Your $2/month pricing - is it $2/person/month?


Yes.


I set up a self-hosted Gitea this year and moved my repos over and couldn't be happier with it. It's faster than GitHub, clones the GitHub design/UI so that everything's where I expect it to be, has a dark mode, and supports U2F. It's easy to deploy, back up, and maintain, the Gitea devs have done a great job.

It's much less complicated (both from an admin standpoint, as well as a UI standpoint) than GitLab. I paired it with a Drone installation (also self-hosted) for CI and (sometimes) CD.

It all works great, and is way easier than I thought. If there's downtime, I'm (usually) in control of when or how long, as I have root on the box.
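For anyone curious what that setup looks like: a minimal Gitea deployment can be a single Compose file along these lines (the image name, environment variables, and default ports come from the official Gitea Docker docs; the volume path and host ports here are just illustrative):

```yaml
version: "3"

services:
  gitea:
    image: gitea/gitea:latest
    restart: always
    environment:
      - USER_UID=1000
      - USER_GID=1000
    volumes:
      - ./gitea-data:/data        # repos, config, and the default SQLite DB all live here
    ports:
      - "3000:3000"               # web UI
      - "2222:22"                 # SSH clone/push, mapped to host port 2222
```

Backing up that single data directory backs up everything, which is a big part of why it's so easy to maintain.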

I'm also not giving my money to a giant military contractor (Microsoft, the owners of GitHub) any longer, which is a huge deal for me from a personal moral standpoint (YMMV).


A positive side of such downtime: it turns out people now gather at your landing/home page just to see whether the service is up. Could the cost of the downtime be offset by grabbing a few customers with the new feature you just published on your homepage?


You could move to GitLab, but from what I'm hearing the pricing is higher than GitHub's (is this still true?)

Barring that, you always have the tried and true (and for some reason abhorred by start-ups) option of running your own Gitea or GitLab instance. It's not hard, and most of this stuff can be done in dockerless containers if you want.

If cloud servers are getting "overloaded" as some commenters say, you could even buy a few racks or U's of colo somewhere, or use a cloud provider that isn't the most popular meme on YC. Vultr and RamNode are both good options, and you'd be supporting a small business, not Bezos' next giga-yacht.


Github actually recently reduced their prices to match some of Gitlab's offerings.

https://news.ycombinator.com/item?id=22867627


>Vultr and RamNode are both good options, and you'd be supporting a small business, not Bezos' next giga-yacht.

Vultr is considered a small business now? Crunchbase lists them as having 50-100 employees, and they seem to be owned by Choopa, LLC, which some sources list as having 150 employees.


GitLab community advocate here, just wanted to share the most up to date GitLab pricing information: https://about.gitlab.com/pricing/ Thanks!


It has been since the Microsoft acquisition.

I chalk most of the early ones up to moving services over to Azure.

Lately, though, I don't know. Azure is running pretty close to capacity, so maybe that's part of the problem.


Maybe it has more to do with their changes in pricing and a surge in customer uptake?


That would point to Azure hosting. Anyone notice a similar pattern?


Github is still hosted on AWS though afaik.


As of the end of 2017, they were using their own datacenters.

https://github.blog/2017-10-12-evolution-of-our-data-centers...


As many readers are stating, there seem to be larger, internet-wide issues in the US today.


For hosting your own repos, AWS CodeCommit works very well.


1) Microsoft took over

2) M$ migrates some ADO (Azure DevOps) features to GitHub (e.g., GitHub Actions)

3) If GitHub was not on Azure before M$ bought it (very likely, but needs citation), they will probably migrate to Azure at some point


I'm pretty sure Github Actions work predates the MS acquisition... I'm also pretty sure that they are trying to align the backend systems more to Azure, but have no insight into how much of that took place.

The fact that you used "M$" indicates that you are predisposed to blame Microsoft for actions that are likely not from the parent, and discount any changes from the top down that have occurred within MS. And while I have a lot of issues with MS and Windows in particular, MS today is not the same as MS even a decade ago.


I would guess that since they introduced free private repos, usage has increased a lot. E.g., I used to use Bitbucket but switched over to GitHub when they did that, because the GitHub Desktop program is nice and works a lot more smoothly with GitHub as opposed to Bitbucket.


True, and I think that this makes the statistics somewhat misleading. The Atlanta metro area has a population of roughly 6 million and it's not very centralized.


My friends and I play D&D because we have no real other option. We used to play Minecraft and other collaborative building games as a group, but then one in our group went fully blind. There is a complete lack of good multiplayer computer games for entirely blind players (admittedly that is quite a challenge), but D&D requires only imagination, which all of us still have. Highly recommend if you have friends with vision disabilities.


No real other option? There are dozens of other excellent RPGs available that rely more on imagination than sight. D&D is merely the gateway game.

There are of course D&D spin-offs and clones like Pathfinder and 13th Age, old school (OSR) "retro-clones" like Dungeon Crawl Classics, Lamentations of the Flame Princess and many, many others. Then there are the classic non-D&D games like Shadowrun (in its 5th edition now), Traveller, Warhammer Fantasy Roleplay (4th edition just released), and GURPS. There's Savage Worlds for fast-paced pulp-style adventures, FATE for absolutely anything you can possibly imagine (including publications for Dresden Files and others). There's FFG's excellent Star Wars games (Edge of the Empire, Age of Rebellion and Force and Destiny), and dozens if not hundreds of smaller indie games, many of which are completely free.

We are truly living in a golden age for roleplaying games. D&D is merely the most visible and best-known one.


Ah, sorry, you are absolutely right. When I said 'no real other option' I was missing the numerous other RPGs that are out there. I did not mean to denigrate them by omission. I meant more that we were forced away from computer games.


How about MUDs? There are a lot of choices with screen reader support these days.


Nice to see a mention of Traveller - I thought I was the only person who even knows about it anymore. I think I spent more time designing ships than actually playing, but I have fond memories of both Traveller and Car Wars (and still have the sets, along with my AD&D books and modules).


Traveller definitely still exists, but I have no idea how many editions there are these days. I'm not sure anyone knows.


What non-D&D game would you suggest to someone who enjoys D&D but would like to explore other systems?


I’d like to recommend checking out Numenera, from Monte Cook Games. It’s kind of a sci-fi/fantasy mashup. It takes place on Earth one billion years in the future. Eight great civilizations have appeared and disappeared in that time, leaving the world full of ruins and strange technology, all of which is inscrutable to the people who live there now. The game materials have high production values, on the same level as the D&D books, and about the same level of complexity of game mechanics. The thing I particularly like about Numenera is its emphasis on exploration and discovery rather than killing things. There is still fighting, if you want there to be, but the focus of the game is on going out into the strange world and uncovering its weirdness.


If you like the basic 'fantasy' setting of D&D but want a game with a more gritty, 'low' fantasy feel, I can very highly recommend trying to find a copy of Warhammer Fantasy Roleplay.

If you want a game which is more realistic and almost entirely 'straight' historic medieval Europe, but where magic, as people believed in it at the time, is real, go check out Ars Magica. Ars Magica is especially recommended if you like playing mages and want a game with one of the most fleshed-out and 'realistic' magic systems ever seen in a role-playing game.


There are way too many options to give a simple answer to that question.

If you want to stick close to D&D, Pathfinder and 13th Age are obvious choices. If you prefer something a bit more raw, less polished maybe, deadlier, where survival is a goal in itself and combat may be better avoided, try one of the OSR systems, like DCC, LotFP, Labyrinth Lord, OSRIC, etc. Lamentations of the Flame Princess is weird horror and explicitly 18+. If you want the feeling of D&D but with a system that focuses more on the story and the experience than on all the numbers in D&D, then try Dungeon World. A lot of people lauded Dungeon World for recreating the feeling they had when they first played D&D.

If you want to get further away from D&D, well, what direction do you want? Fantasy? SF? Cyberpunk? Historical? Martial arts? Horror? Steam punk? Espionage? Military? Old West? TV shows?


> If you want to get further away from D&D, well, what direction do you want?

SF or Cyberpunk


R Talsorian Games' Cyberpunk 2.0.2.0 is a nice retro (think William Gibson Neuromancer) game.

https://talsorianstore.com/collections/cyberpunk

But if I were to recommend a single (set of) games, it would be the classic World of Darkness games, like Mage: the Ascension 20th anniversary Ed:

https://www.drivethrurpg.com/m/product/149562

For a game with some interesting mechanics, you might enjoy Underground:

https://www.drivethrurpg.com/m/product/2873

And for something... Different, we've had a lot of fun with Microscope:

http://www.lamemage.com/microscope/


Shadowrun is the canonical cyberpunk RPG.

Starfinder is, I believe, Pathfinder in space.

GURPS is setting-agnostic.


>Shadowrun is the canonical cyberpunk RPG.

Aside from the dated and weird essentialization of Native American cultures, Shadowrun's setting is really good and fun.

Unfortunately it's hard to run a game with a decent narrative flow, just because the combat system is so complicated. My group decided to shame people out of playing mages or riggers because we didn't want to have to deal with combat in cyberspace and on the astral plane at once. It really puts a damper on having a fun game that flows. I wouldn't recommend it for someone new to pen-and-paper RPGs.

On the other hand, the tedium of combat gave us a strong incentive to talk our way out of problems instead of going the murder-hobo route.


GURPS works better in some eras; trying to do modern, you have dozens of skills to keep track of.


If you want pure cyberpunk, take a look at R. Talsorian's Cyberpunk 2020.

If you like fantasy mixed in with your cyberpunk, Shadowrun is the gold standard. A word of warning: Shadowrun has a rather heavy, complex system, because it does absolutely everything. But I like it a lot.

Generic systems like GURPS and Savage Worlds can do cyberpunk of course, although I don't think GURPS Cyberpunk has been updated to the 4th edition. No doubt something exists for Savage Worlds, but I have no idea what.

There are other cyberpunk systems that I know very little about, but others are enthusiastic about, including Eclipse Phase (seems to include space and transhumanism, so it's probably not pure cyberpunk, but it might suit your taste), or Ex Machina.

Sprawl seems to be the Apocalypse World/Dungeon World adaptation for cyberpunk.

SF is much broader. The original SF RPG is of course Traveller, which is somewhat retro; the game predates the personal computer era and doesn't have many (any?) robots either. But if you want to travel around in a space ship, this is great.

Stars Without Number is an SF game that translates ideas from the OSR movement to the SciFi setting.

There are of course several different Star Wars games, including the original d6-based game by West End Games (recently republished by Fantasy Flight Games), the d20 (D&D-like) Saga Edition, and the Edge of the Empire-style games by Fantasy Flight.

GURPS is great at SciFi, and I'm sure Savage Worlds does it too.

Diaspora is a small but really cool hard SF game based on the Fate system. I love how you first generate the worlds together and then generate the party together. In space combat, dumping heat is a major concern.

Paranoia is weird dystopian funny SF. The Computer is your friend.

Starfinder is the SF version of Pathfinder. I assume the system is therefore D&D-related, but I honestly don't know.

Dark Heresy takes place in the Warhammer 40K universe.

But there are dozens if not hundreds of others.


In addition to the three listed above, Alternity (if it's still in print?) is a reasonable SF system.

Oops, not in print since 2000... yep.


Your list is excellent, but wanted to throw one more out there: Cyberpunk 2020.


Definitely a great game too. But there are dozens, if not hundreds, of games I have omitted. There's a lot out there.


I’d probably say thousands of games we both omitted, but it wasn’t my intent to list them all and very likely not yours either.


Yeah, I have no intent to try to list them all. It's better to point to RPGgeek.com[0], which lists nearly 10,000 RPGs. (Though that's counting different editions of the same game as separate games.)

[0] https://rpggeek.com/browse/rpg


Losing one's imagination seems more common than losing one's sight. :(


Fortunately there are plenty of cookie-cutter computer games available for people with impaired imagination.


That sounds like a good thing to me. It would be unfortunate if going blind were more common.


The more common a disability becomes, the less of a disadvantage it becomes - with some lag, of course - the world does tend to accommodate the (visible) average.


Can confirm. I have a condition which makes it hard to see in sharp detail more than 8ish meters away. My wife has the same, as do many others in my family to some degree. There is a very robust industry producing adaptive devices for nearsightedness.

I've even heard things like contact-lenses-as-a-service advertised on general interest podcasts.


There’s even this thing where they use lasers to burn away chunks of your eyeballs, because yeah, losing unnecessary weight and all makes you see better. I had it done a few weeks ago and it’s life changing!


It is still undoubtedly useful, even if thanks to technology we can manage without.


Reading, as a replacement for the everyday stimuli we are all too used to (like video games and YouTube), is what I found helps my imagination flourish the way I remember it did when I was younger.


This comment inspires me to work on my art.


I used to have several blind friends who were very successful in text based MUDs. It's possible finding one with an active userbase is getting harder and harder.


Shades (a very early MUD) used to have a deaf-blind player, and she used to come to 'eyeballs' (in-person meetups) with her guide dog.

Indra Sinha (Booker Prize-shortlisted author) wrote a book called The Cybergypsies that covers this early online community.


I was thinking about text-based games for blind people, but since I don't know any blind people whom I can easily ask, I'll put this here in the hope that somebody who knows the answer will notice it:

Presumably, text-based games are played with a screen reader. Would music and sound interfere with the person's ability to play? I was wondering if you could mix text and 3D audio to create a richer environment.


That differs from person to person. As long as it's not overpowering the voice, it should be OK for most.

Many prefer to be able to set the reading speed, though (2x and 3x are not uncommon), and to be able to skip to the important part of the message. That's especially important when you can't use visual pattern scanning on text that shows up often.

Maybe use the browser to create a text-based game? The tools already exist there, and the users are used to using them.


Thanks.

I’m especially interested in creating a 3D soundscape, maybe something similar to what is described here: https://www.gamasutra.com/view/feature/131900/playing_by_ear... but not necessarily instead of text, but rather to augment it. Based on what you’re saying, it would probably work well enough: have separate volume controls for music, ambient sound and sound effects (as most games have already anyway — many games have a voice volume too, but I guess voice should be left wholly up to the screen reader and it should control the volume/speed).

Using a browser sounds like a good idea. Definitely wouldn’t want to implement screen reading capability yourself!


These days, the number of non-pay MUDs with an average of 100+ people on them at a time can be counted on two hands.


How about paid ones?


http://astaria.net/wm_client/webclient.php is still going, but I think it has only a few dozen on average.


I haven't done anything but one of the quick D&D campaigns with the prebuilt characters but I am REALLY enjoying gloomhaven as an alternative. I know there's some vision required there but it seems similar to D&D.


It's an alternative if your favorite part about D&D is fighting (if so I'd recommend 4th instead of 5th). The role playing you do in Gloomhaven is non-existent in comparison.


Can you please ask your friend what their take is, from the blind community's perspective, on using Minecraft Education Edition as a way to access Minecraft? It is more programmable than Minecraft Java Edition, and supposedly one can interact with the game completely from within the API. The API [1] looks fairly complete, but I can't tell if it has what a blind game player would want to build from.

I just saw a pic-to-Braille conversion bot on Reddit, and your comment made me wonder if something similar could be built for blind Minecraft players. So far, I'm not aware of any open-source Minecraft-alikes that expose the game purely through an API, though an open modding API like Minetest's [2] could probably be leveraged.

Googling around for this information leads to a lot of dead ends talking about the in-game Blindness effect, and I'm not a domain expert in what blind gamers would want to see anyway. But it would be really cool to see the blind community add new dimensions to current game genres through game-interaction APIs (though managing that, and botting using the APIs, would be an open problem).

[1] https://education.minecraft.net/wp-content/uploads/Code_Conn...

[2] https://dev.minetest.net/Main_Page
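For what it's worth, the pic-to-Braille trick is mostly a bitmap-to-Unicode mapping: each braille character encodes a 4x2 block of dots, starting at codepoint U+2800, with one bit per dot. A rough, self-contained sketch (thresholding and scaling for real images is left out):

```python
# Map a black-and-white bitmap to braille characters.
# Each output char covers a 4-row x 2-column block of pixels;
# dots 1,2,3,7 are the left column and dots 4,5,6,8 the right,
# and each dot sets one bit above the base codepoint U+2800.
DOT_BITS = [
    [0x01, 0x08],  # row 0: dot 1, dot 4
    [0x02, 0x10],  # row 1: dot 2, dot 5
    [0x04, 0x20],  # row 2: dot 3, dot 6
    [0x40, 0x80],  # row 3: dot 7, dot 8
]

def bitmap_to_braille(bitmap):
    """bitmap: list of rows of 0/1 pixels; height % 4 == 0, width % 2 == 0."""
    lines = []
    for top in range(0, len(bitmap), 4):
        chars = []
        for left in range(0, len(bitmap[0]), 2):
            bits = 0
            for r in range(4):
                for c in range(2):
                    if bitmap[top + r][left + c]:
                        bits |= DOT_BITS[r][c]
            chars.append(chr(0x2800 + bits))
        lines.append("".join(chars))
    return "\n".join(lines)

# A 4x4 bitmap: solid left cell, empty right cell.
print(bitmap_to_braille([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]))  # prints "⣿⠀" (full cell, then blank cell)
```

Mapping a Minecraft-style world to such a bitmap (say, a top-down slice of blocks near the player) would be the interesting part, and that's where an open API like Minetest's could come in.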


A couple of thoughts on Minecraft, though note I am not familiar with Education Edition:

1. We played Minecraft with the specific intent of making visually appealing buildings. So at some point, when you can't see, that's not going to be fun no matter what you do...

2. Minecraft really doesn't have any accessibility whatsoever. You can scale the UI, but... high contrast mode? If you could even get that working with the base game, it's definitely not going to work with the mods we were playing with. As my friend went blind, it got harder and harder for him to deal with any zombies or anything that was moving, since it took him so long to slowly scan the screen and understand where he was. We considered trying to make mods to make things a little bit easier, but struggled with coming up with any mod that would actually improve things. :)

3. My quick glance at the API suggests that we'd end up with a situation where we were playing the game and he was... programming. That may work for some people, but I think for this group that borders too close on after-hours work...


Maybe Keep Talking & Nobody Explodes would be an option - I haven't played it, but what I'm reading is that it's a game where one player has to defuse a bomb while the others have to give him instructions. I'm sure that could be converted to braille or some other format that doesn't require vision. Would be a great project to adjust that for the visually impaired.


Unfortunately that wouldn't work. For the person who's reading the instructions, there's a lot of flipping through pages and skimming an entire page for instructions on the particular item.

That fast skim reading can't be done with braille.


It can however be done with a well-structured document and a screen reader. I worked with a blind guy in college who used screen readers, and he moved through pages incredibly fast.


The game relies heavily on visual cues for efficient puzzle-solving, so I don't think it's very suited.


I don't think so... the complicated wires, keypad, and maze puzzles in particular seem to be a problem. People can make custom bombs without those puzzles, but his blind friend would still need a way to search through the manual at a fairly quick pace.


Tau Station was built to be entirely accessible to blind users: https://blog.taustation.space/blog/making-tau-station-an-acc....


I was wondering - do you tell your blind friend what he/she rolls or how do you deal with dice?


Tactile dice don't seem too difficult a thing to make: https://www.geeknative.com/53239/braille-dice-d20-style/

Plus, you don't even have to use dice—anything that gives a uniformly random outcome is alright. (E.g. local “Choose your adventure”-clone books had dice sides printed on each page.)

Now, tracking the character sheet and consulting the rules are probably more of a nuisance to the person.


If we are in the same place, either someone else will roll for him or he'll roll and we'll tell him what he got (and he has little choice but to trust us :)). If we aren't in the same place, he rolls virtually (did you know you can type '/roll 1d20' in Google Hangouts and it rolls?) and gets the result via a screen reader.

For a while he was DMing and he would use a screenreader to access his notes plus the rolls. We don't typically use maps or boards, but instead try to do it all with descriptions of places. It does mean that the rooms we enter all tend to be fairly simply shaped, and it's possible that each of us has pictured a slightly different room, but it all works out in the end.
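A chat-style '/roll' command like the one mentioned above is only a few lines; this is a hypothetical sketch (the 'NdS' spec format mirrors the Hangouts command, but the function name and output format are made up):

```python
import random

def roll(spec, rng=None):
    """Parse an 'NdS' dice spec like '1d20' or '3d6'; return (rolls, total)."""
    rng = rng or random.Random()
    count, sides = (int(part) for part in spec.lower().split("d"))
    rolls = [rng.randint(1, sides) for _ in range(count)]
    return rolls, sum(rolls)

rolls, total = roll("3d6")
# Say the total first and the details after - friendlier for screen-reader
# users listening at high speed who want the important number up front.
print(f"Total {total} on 3d6 (rolls: {', '.join(map(str, rolls))})")
```

Putting the total before the individual rolls matters more than it looks: a screen reader at 2x-3x speed gets the key number before the player has to decide whether to keep listening.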


I love theater of the mind sessions. Maps have their purpose, but it's so much more fluid and imagination-intensive when your boundaries are visualized by your mind. The game loses something when you tie it down with predrawn environments and grids.


My sons play D&D fanatically; I remember it from years ago. Very impressive that you made the effort to include your visually impaired friend. I learnt something, thanks.


Sorry to hear about your friend's misfortune.

Are they using their disability during gameplay, and/or in-character?


The mlpack library (http://www.mlpack.org) also has a C++ implementation of Hamerly's algorithm, as well as Elkan's and some other accelerated variants. My experience has been that while Hamerly's algorithm can be a lot faster than the unaccelerated implementation, it's not always the fastest---that depends on dataset conditions.
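To give a flavor of why Hamerly's algorithm helps (and why the speedup depends on the dataset): each point keeps an upper bound on the distance to its own center and a lower bound on the distance to any other center; after the centers move, the bounds are adjusted by the center shifts, and any point whose bounds still separate can provably skip the full distance computation. A simplified one-iteration sketch - my own illustration of the bound, not mlpack's implementation, and using a single global lower-bound correction rather than per-center ones:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
k = 3
centers = X[rng.choice(len(X), size=k, replace=False)]

def assign(X, centers):
    """Brute-force assignment: full point-to-center distance matrix."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1), d

labels, d = assign(X, centers)
upper = d[np.arange(len(X)), labels]  # exact distance to own center
lower = np.sort(d, axis=1)[:, 1]      # distance to second-closest center

# One Lloyd step: move each center to its cluster mean.
new_centers = np.vstack([X[labels == j].mean(axis=0) for j in range(k)])
shift = np.linalg.norm(new_centers - centers, axis=1)

# Hamerly-style bound maintenance via the triangle inequality.
upper = upper + shift[labels]  # own center moved at most this much farther
lower = lower - shift.max()    # other centers moved at most this much closer

# Points whose bounds still separate provably keep their assignment.
skip = upper <= lower
true_labels, _ = assign(X, new_centers)
assert (true_labels[skip] == labels[skip]).all()
print(f"{skip.mean():.0%} of points skipped the full distance computation")
```

When clusters are well separated, most points pass the bound test and the per-iteration cost drops sharply; when clusters overlap or centers move a lot, the bounds are loose and you fall back toward the unaccelerated cost - which matches the "depends on dataset conditions" observation.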


I don't think "emsmallen" can be said reasonably :)


Huh, turns out you're right, which is entirely why em- and en- both exist and mean the same thing.

https://en.wiktionary.org/wiki/em-#Prefix


Thank you for saving me from needing to write the exact same thing!

