I installed this via mise. `mise use ubi:seeyebe/gmap --global`.
I noticed I had to run `gmap heat` to generate a cache folder and db locally before running `gmap heat tui`, otherwise nothing shows up.
It's scary to have the cache folder locally because I could accidentally check that in if I don't put it in a .gitignore; is there a better way to handle something like that?
Ah yeah, that changed recently; you can now use the tui and it will fetch anyway.
Good shout on the cache folder. Right now it just lives locally (.gmap), so yeah, adding it to .gitignore is the way to go for now. I've been thinking about better ways to handle it, maybe an XDG-compatible path or something configurable. If a better idea comes up, I'll def switch to it.
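Roughly what I'm picturing, as a minimal sketch in Rust (assuming the `directories` crate; `GMAP_CACHE_DIR` is just a placeholder name for a configurable override, not something gmap has today):

```rust
// Sketch only, not current gmap behaviour: check an explicit override first,
// then the platform cache directory, then fall back to the local .gmap folder.
use std::path::PathBuf;

use directories::ProjectDirs;

fn cache_dir() -> PathBuf {
    // hypothetical override, e.g. GMAP_CACHE_DIR=/tmp/gmap-cache
    if let Ok(dir) = std::env::var("GMAP_CACHE_DIR") {
        return PathBuf::from(dir);
    }
    // resolves to ~/.cache/gmap on Linux, the platform equivalent elsewhere
    ProjectDirs::from("", "", "gmap")
        .map(|dirs| dirs.cache_dir().to_path_buf())
        .unwrap_or_else(|| PathBuf::from(".gmap"))
}
```

That would keep the cache out of the repo entirely, so .gitignore stops being the only safety net.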
I hate that many projects use Discord where anything else (IRC, Matrix, whatever) would work, but in this case especially. Tencent owns an undisclosed share in Discord, and it just generally seems like a bad choice of communication platform if you want to look better than doge.
Discord works by clicking a button: you get to the signup, you fill in your email, and bam, you're in the Discord channel in your browser. Zero friction, nothing new for the user to learn.
I have never had that experience with Discord, despite already having an account. It's truly mysterious what the actual path from clicking the link to being in the channel with my account is supposed to be.
I find IRC much easier, but I haven't used it in about 15 years.
Discord has basically been phased out in the browser. There is no link to it, only the link to get the 500 MB behemoth app. If you manage to open the web UI because it's in your browser history, it hits you with a captcha and an email verification code every time. Etc.
Enterprises tend to favor big-bang migrations on a specific date because somebody higher up set a particular date and everything falls into place with a Gantt chart running waterfall. In reality it falls on a few technical folks to triage a large number of teams, including the ones from the company they're trying to break away from (which introduces friction). This adds significant risk to the project.
"TSB chose April 22 for the migration because it was a quiet Sunday evening in mid-spring."
This might've gone better if TSB had planned for the long-duration migration and testing to be completed months prior to April 22nd, with a period of weeks or months for going live post-migration. The F5 load balancer (commodity hardware) could've slowly cut traffic over to the new site 10% at a time to get a feel for the user experience. Coordination with the TSB network team would have been necessary to accomplish that.
It is a tough spot, though; I hope the team learned something from it.
> The F5 load balancer (commodity hardware) could've slowly cut traffic over to the new site 10% at a time to get a feel for the user experience.
I doubt an F5 load balancer would work in this specific case. But there definitely should've been a software router-adapter that routed requests to two systems and converted their replies to a single format. This would've let them migrate their customers in batches instead of a big bang cutover.
That's what I've always done when migrating data to a new banking system.
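A rough sketch of the shape I mean, in Rust, with made-up names for the two backends and the conversion step (nothing here is from TSB's or any real core-banking stack):

```rust
// Route each request to the old or new core system depending on whether the
// customer's batch has already migrated, and normalise replies to one format.
use std::collections::HashSet;

// unified reply format returned to callers regardless of which backend answered
struct Reply {
    body: String,
}

struct RouterAdapter {
    migrated: HashSet<String>, // customers moved over in earlier batches
}

impl RouterAdapter {
    fn handle(&self, customer_id: &str, request: &str) -> Reply {
        if self.migrated.contains(customer_id) {
            // the new system already speaks the unified format
            Reply { body: new_system(customer_id, request) }
        } else {
            // the legacy reply gets converted before it leaves the adapter
            Reply { body: convert_legacy(old_system(customer_id, request)) }
        }
    }
}

// placeholder backends standing in for the real core-banking calls
fn new_system(_customer: &str, request: &str) -> String {
    format!("new:{request}")
}

fn old_system(_customer: &str, request: &str) -> String {
    format!("legacy:{request}")
}

fn convert_legacy(raw: String) -> String {
    raw.replace("legacy:", "new:")
}
```

The migrated set is the knob: move a batch of customers into it, watch for problems, then move the next batch.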
What I've seen work well, across multiple migrations (at an admittedly smaller scale), is using shadow writes/reads and a source-of-truth toggle.
The API layer made requests to both the old DB and a new DB that had been populated during a small window of scheduled downtime.
We spent a couple of weeks/months running checks in production that the old DB and new DB were returning identical results, while still returning the old DB's results as the source of truth. Eventually, we flipped the source of truth to the new DB, and some time later decommissioned the old DB.
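Stripped down to the core of it, the read path looked roughly like this (the trait and field names are invented for illustration, not our actual system):

```rust
// During the comparison period every read hits both stores; mismatches are
// logged, and the source-of-truth flag decides which answer the caller sees.
trait AccountStore {
    fn balance(&self, account_id: &str) -> i64;
}

struct DualReader<O: AccountStore, N: AccountStore> {
    old: O,
    new: N,
    new_is_source_of_truth: bool, // the toggle that eventually gets flipped
}

impl<O: AccountStore, N: AccountStore> DualReader<O, N> {
    fn balance(&self, account_id: &str) -> i64 {
        let old = self.old.balance(account_id);
        let new = self.new.balance(account_id);
        if old != new {
            // log and keep serving; a mismatch is a data point, not an outage
            eprintln!("balance mismatch for {account_id}: old={old} new={new}");
        }
        if self.new_is_source_of_truth { new } else { old }
    }
}
```

Flipping `new_is_source_of_truth` is the single switch you throw once the mismatch logs have been quiet for long enough.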
Great approach imo. Once you've flipped over to the new source of truth you have to make sure the business prioritizes decommissioning the old DB, though. I've seen certain departments (looking at you, BI) treat the old database as a permanent backwards-compatibility layer.
I think part of the problem is that TSB had extremely limited access to the existing system, which affected their understanding of how that existing system worked, the kinds of testing they could do, and their ability to do incremental migrations.
That might've been the case, but it can't be the excuse.
They were spending €100mil a year on maintaining the current infrastructure. The team's incentive was to save a large portion of that through the migration, and somebody higher up should've given them unlimited access to do so.
They were paying €100mil a year to a direct competitor who owned and operated the current infrastructure for them. The amount of money they were paying was, if anything, a direct incentive for that competitor not to give them the access required for the migration.
> They were spending €100mil a year on maintaining the current infrastructure.
As I understood it, they were made to pay that because the infrastructure they were using belonged to another bank.