Hacker News | new | past | comments | ask | show | jobs | submit | yosser's comments

I've just gone through the rather painful and protracted process of reverting from Tahoe to Sequoia. 'Reverting' rather than 'downgrading', since Tahoe is in no sense an upgrade.

I consider myself quite tolerant of UX quirks; iPhones are still pleasant to use, particularly if you select 'reduce motion' in the accessibility settings.

Tahoe though, bugs aside, is just genuinely unpleasant to use and interact with. By far the most offensive thing to me is the pointless rounded rectangle thing. It delivers absolutely no value at all to the user and defies any form of justification. How in any form is this a decision designed to improve things for the user?

The other multiple weirdnesses commented on elsewhere, while unpleasant, are more liveable with, but I honestly never found a single change that improved my interactions with the computer. How on earth can you have spent a whole year on this, and why didn't anyone have the authority to pull the plug?

I would no longer recommend a new Mac to anyone. A second-hand model running a previous operating system makes far more sense.


Did you follow a guide to revert back?


Sort of, but be warned: it won't automatically revert a Time Machine backup of your Tahoe files to a Sequoia system; you have to copy them manually. Similarly, you have to boot into recovery mode and use Disk Utility to wipe your hard drive before installing Sequoia from (say) a memory stick.

It all seems a bit needlessly tricky. Frankly though - for me at least - it's worth the trouble.


I used to live just down the road from Fred Dibnah's house when I was in my early 20s. It was quite an interesting old building with a fairly steeply sloping garden that led down to an old workshop packed with steam engines and similar mechanical bric-a-brac. Despite the layout, you could easily see down to the bottom of the garden from the pavement as you passed by.

I had a sort of standing joke where I would wind my girlfriend up by pointing out that it was Fred Dibnah's place every time we walked past. I did this perhaps a little too often.

One day, she had had enough and told me in a very loud voice that 'she couldn't give a shit if it was Fred Dibnah's house'. That's when I saw his startled face peering up from behind a traction engine. Sorry Fred. I hope you can forgive me from that big old chimney in the sky.


It just cloned a very primitive Slack copy for me.

https://light-whale-5opfal.manyminiapps.com

Well done guys.


Appreciate the kind words :). Looks like there's a lively conversation in there!


So this guy never ever restarted his car after a short interval except for when he bought vanilla ice cream? Additionally, he never varied the time intervals in the shop when he was buying aforementioned ice cream?

Is it an understatement to suggest this is a highly unlikely circumstance?


Doubtful. We all fall into patterns in life, and regular activities where we go in and out have very predictable times.


The customer's location would have been helpful information. That kind of determinism seems impossible in Los Angeles, but very likely in Solon, IA.


You don't think it's likely that the regular 5 minute stops were the first warning sign and most common time he restarted his car after a short interval? Especially vapor lock on a hot day when one is likely to buy ice cream? Maybe he would've noticed it in more places if it got worse. And since he was going for ice cream, he was unlikely to dawdle around waiting for the product to melt?


He probably learned how to reproduce the problem, experimented with it, and when the engineer came, he was proud to reproduce it and embellish his story a bit to make it seem perfect.


A lot of men are extremely efficient shoppers. They know what they want and where it is. They don't browse around at all. They have the cash or card ready. Walk in, pick up the ice cream, pay, walk out. Could be done in 30 seconds if there are no other customers.


Isn't this a red herring? I thought the law didn't require the removal of E2E encryption, but rather mandated the addition of a back door that submits some kind of metadata summary to a third-party service?


Is that better? The effect is the same.


I think they meant that this is just a PR stunt: saying they won't remove encryption (because they don't even need to in order to comply).


That depends entirely upon who gets to define "end-to-end", and whether they are held to task for any inaccuracies in that definition.


Is it truly E2E encryption if it has a backdoor?


WhatsApp has that.


I'm 62. I started out by coding BASIC and machine code on a ZX81 I won in a competition when I was on the dole in the early 1980s.

These days I earn a handsome salary as a front end developer - mostly React, but other frameworks as and when required. I pull in very good side income as a freelancer too. Over the last 15 years I have had absolutely no problem getting a job at any stage, usually winding up with a number of offers within a few days of leaving my previous job.

I imagine I've come across ageism from time to time, but honestly it has made little difference to me or my prospects. I'm told I look quite young for my age, but I don't dye my hair.

Take heart oldies, if I can do it, so can you.


Usually a simple 6502 assembler/debugger combination with an integrated editor on a dedicated PC, hooked up by cable to some kind of magic box on the target NES. In the UK at least, a system called PDS was popular, though it wasn't uncommon for development houses to have custom-written development environments.

At that time, if you were lucky, you'd have a 20MB hard drive on the PC and 600K or so of RAM.

In our case we had written a few custom graphics tools, but in the main, graphics were either hand-drawn onto graph paper or drawn in Deluxe Paint on the Amiga.

Some of the Japanese companies had very peculiar rules. I know of one well-known company that kept its programmers and artists in entirely separate offices. Artists would burn their finished graphics onto an EEPROM, and the poor programmers would simply be presented with the ROM images to do what they could with.


> Kirby's Dream Land was developed by Masahiro Sakurai of HAL Laboratory. Much of the programming was done on a Twin Famicom, a Nintendo-licensed console produced by Sharp Corporation that combined a Famicom and a Famicom Disk System in one unit. As the Twin Famicom did not have keyboard support, a trackball was used in tandem with an on-screen keyboard to input values; Sakurai described the process, which he assumed was "the way [game programming] was done" at the time, as similar to "using a lunchbox to make lunch."

https://en.wikipedia.org/wiki/Kirby%27s_Dream_Land


Here’s more info along with some images: http://sourcegaming.info/2017/04/19/kirbys-development-secre...


One of the trickier things to manage on the NES (without additional hardware in the cartridge) was maintaining a static information panel at the bottom of the display, for the score and so forth, while the portion of the display devoted to gameplay scrolled freely both vertically and horizontally.

The trick we used at Zippo (Wizards & Warriors II and III, Solar Jetman, etc.), a trick given to us by Rare, was to change the scroll registers and, I think, the character look-up location at the moment the screen refresh cycle reached the appropriate point on the display. These registers would then need to be reset during the vertical blanking interval. So, all in all, either 100 or 120 times a second depending on the TV system.

Since we couldn't afford to keep the CPU hanging around doing nothing while we waited for the cathode-ray-tube gun to hit the right point on the screen, the trick was a two-parter.

First you would get the sound chip, such as it was, to play an inaudible sample of a predetermined length, which would then trigger a CPU interrupt at more or less the right time: within, say, two or three scanlines of the position of our static panel. You would then position a spare sprite on top of a visible pixel at precisely the right point on the screen, so that when its hardware collision-detection bit flipped, that was precisely the right moment to switch the scroll registers etc.

These were the kinds of shenanigans that made programming the NES intricate and time-consuming, and also, I suspect, made the job of emulator writers something of a misery in later years.
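To give a feel for how tight the timing in that trick was, here is a back-of-envelope calculation using the commonly cited NTSC NES constants (341 PPU dots per scanline, 262 scanlines per frame, CPU clocked at one third of the PPU rate). Treat the exact figures as illustrative rather than authoritative.

```python
# Rough timing budget for the NTSC NES split-screen trick.
# Constants are the commonly cited NTSC values, used here for illustration.

PPU_DOTS_PER_SCANLINE = 341       # PPU cycles ("dots") per scanline
SCANLINES_PER_FRAME = 262         # total scanlines, including vblank
PPU_DOTS_PER_CPU_CYCLE = 3        # the 2A03 CPU runs at 1/3 the PPU clock
CPU_HZ = 1_789_773                # NTSC CPU clock

cpu_cycles_per_scanline = PPU_DOTS_PER_SCANLINE / PPU_DOTS_PER_CPU_CYCLE
frames_per_second = CPU_HZ / (cpu_cycles_per_scanline * SCANLINES_PER_FRAME)

# Roughly 113.7 CPU cycles per scanline: the entire window in which the
# scroll-register switch has to land to avoid visible glitching.
print(f"CPU cycles per scanline: {cpu_cycles_per_scanline:.2f}")
print(f"Frames per second:       {frames_per_second:.2f}")
```

With only about a hundred CPU cycles per scanline, even a short interrupt handler eats a visible fraction of a line, which is why the sound-sample timer only needed to get within a few scanlines before the sprite trick took over.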


> also in later years I suspect made the job of emulator writers something of a misery

Most certainly. These tricks force emulator developers to emulate systems down to individual cycles in order to get the timing right, because getting it wrong will result in visual glitches or worse.

https://mgba.io/2017/04/30/emulation-accuracy/

https://mgba.io/2017/07/31/holy-grail-bugs-2/

Byuu, the author of bsnes, wrote some very detailed articles about this as well. I can't seem to find them anymore though. His domain has also been excluded from the internet archive for some reason.


> His domain has also been excluded from the internet archive for some reason.

Because Byuu, who later went by "Near", tried to scrub as much personal information as possible from the internet in the hopes of staying ahead of constant doxing attacks by an awful internet community called KiwiFarms. It didn't work. In the end, Near/Byuu committed suicide months ago to escape the constant harassment.


https://knowyourmeme.com/memes/events/near-byuu-suicide is one of the least-comprehensible stories I've ever read. I've been on the Internet since 1989, and practically none of that means anything to me.


> ...awful internet community called KiwiFarms.

The free speech Alamo. It often leads to funny results, like lesbians finding it to be the only place they could complain about some psycho in Canada targeting women operating small businesses; YouTube and Twitter banned several people over it before the Canadian media started covering it. Sometimes it leads to not-so-funny results, because that is the cost of agency. It's kinda crazy that even has to be pointed out... but we seem to be at that point.


Holy shit, I didn't know. I'm sorry, I'm shocked and just don't know what to say. Thanks byuu for all your work. Hope you find peace. You will be remembered.


Isn't it actually down to the sub-instruction level?


Your post is why I keep coming back to HN to read. Thank you so much for sharing; I love the undocumented stories from the trenches.


> was to change the scroll registers and I think the character look up location at the moment the screen refresh cycle reached the appropriate point on the display. These registers would then need to be reset during the vertical blanking interval. So all in all either 100 or 120 times a second depending on the TV system.

That's a cool way to do it on the NES. While the NES did not have a raster interrupt, requiring such tricks (though you mentioned the later cartridges with the extra chip that added one), the Game Boy did have a raster interrupt, and the SNES essentially went all in on it. Switching the mode mid-frame became a very common method then.

By the way, PAL/NTSC is 50/60Hz, but a frame consists of two fields (odd/even lines), so 25 or 30 frames per second for PAL/NTSC respectively. So I guess this means you probably did the trick 50/60 times per second? (Unless you used more than two configs per frame maybe.)


NTSC draws each frame as two fields; first the 240 odd rows, then the 240 even rows, for a total of 480 interlaced rows, or "480i".

However, older video game consoles (such as the NES and SNES) typically output a malformed NTSC signal, designed to trick the TV into drawing just the odd lines, over and over again, sometimes referred to as "240p". As a result, it's effectively 60 independent frames per second, not 30 frames of two fields.


Yes. Even if that weren't the case, the NES would have needed to separately buffer the whole frame (or worse) if you did not want to repeat the scroll register dance per field, which is where I went wrong.


My guess is the trick is needed four times per frame. For every field you need it once to start the static portion drawing, then once more to stop.


Ah, duh, you're completely right. I somehow hallucinated another buffer between PPU-generated frames and video output. Which would have been rather expensive at the time...

Also I only just learned that because the NES does not actually output the half scanlines that make interlacing work, both fields are drawn on top of each other, effectively making it 50/60 actual frames per second anyway, instead of interlaced fields! (https://wiki.nesdev.com/w/index.php/NTSC_video)


Ah yes, in retrospect I think the NES was refreshing at 25/30 times a second, so double that for the number of scroll register updates we needed to make.

It's been a long time!


When I hear stories like these, I always think that nobody is doing stuff like this these days; but in retrospect, I think it's just that I've never asked or seen it discussed.. so..

What are you guys doing these days that has this same hacker-ethos applied to it?


It's semi-common on PS4 to abuse the audio coprocessor to do things other than audio, just to eke out that little bit of extra processing power. After all, it's programmable, so why not? In fact Sony specifically made the PS5's audio processor non-programmable to make sure developers don't do this and actually focus on its strengths, the things it can do very well with 3D audio and such, rather than making it crunch physics or whatever.

Also I'm aware of at least one game that uses an internal executable restart routine to reboot and reload data on PS4 and X1 since it's just easier than dynamically unloading all engine data and loading it again following an in-game update. While not forbidden by either 1st party, it's certainly somewhat unorthodox.

In general, I think that even with the next (now current) gen there is still a lot of the good old-school hacker mentality going on; it's just that you can't hear about it because it's all NDA'd. Wait 10-20 years and those stories will start to come out.


> Also I'm aware of at least one game that uses an internal executable restart routine to reboot and reload data on PS4 and X1 since it's just easier than dynamically unloading all engine data and loading it again following an in-game update. While not forbidden by either 1st party, it's certainly somewhat unorthodox.

Microsoft put out a video soon after purchasing Bethesda where someone from that team mentioned doing a similar trick in the Xbox port of Morrowind. If the game was approaching the relatively tight 64MB RAM limit of the platform it could soft reboot the console during a loading screen and then reload to the current point.


Yeah, MVG did a video on it ;-) I can confirm that the exact same trick is still used on modern games.


Very cool! Sounds a lot like all the tricks you had to employ for the Atari 2600. If someone is interested in this, I can highly recommend the book "Racing The Beam". It's hard to imagine these days how complicated even the simplest things were back then.

15 years back I did some hobbyist programming for the GameBoy. Drawing a status bar there was comparatively easy. The hardware allowed you to set some registers to define a "window region", overlaying the main background map.


No hblank interrupts? The Game Boy has them to make this sort of thing relatively easy. Hblank interrupts are also how "Mode 7"-style pseudo-3D for a Mario Kart-style game can be done (on GBA or SNES).


You could get the equivalent of hblank interrupts with a mapper, such as the MMC3 used in Super Mario Bros. 3. That is how SMB3 renders its HUD.

https://en.wikipedia.org/wiki/Memory_management_controller#M...
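A simplified sketch of how a mapper-based scanline counter like the MMC3's is commonly described: the cartridge circuit is clocked once per scanline (via a PPU address line that toggles when fetches switch between background and sprite tile banks), counts down, and raises an IRQ when it runs out. The real chip's reload-flag and edge-filtering rules are more involved, so treat this Python model as illustrative only.

```python
# Toy model of an MMC3-style scanline counter. This is a simplification
# for illustration; the real MMC3 has extra reload/enable semantics.

class ScanlineCounter:
    def __init__(self, reload_value):
        self.reload_value = reload_value
        self.counter = reload_value
        self.irq_raised = False

    def clock(self):
        """Called once per scanline, on the watched address-line edge."""
        if self.counter == 0:
            self.counter = self.reload_value
            self.irq_raised = True    # IRQ fires; the CPU handler would
                                      # rewrite the scroll registers here
        else:
            self.counter -= 1

# e.g. ask for an IRQ after 192 scanlines of playfield, leaving the
# remaining lines for a static HUD
c = ScanlineCounter(192)
for _ in range(193):
    c.clock()
print(c.irq_raised)
```

The appeal over sprite-0 polling is that the CPU is free to run game logic for the whole frame and only gets interrupted at the split point, at the cost of extra cartridge hardware.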


We abused hblank to death in Infinity for Game Boy Color (unpublished in 2001, later dumped on GitHub), for things like parallax scrolling, tilted overworld map, screen transitions, and scrolling inner text.


I looked for that, found it on github and then looked at the build instructions, followed the link provided to GBDK and saw my credit as one of the contributors. Totally forgot about that... :)


Are you participating in the re-launch of development for Infinity? (great to see it happening)

If so, what approach are you taking for the toolchain? Keeping the existing version that had been used/modified? Updating to newer tools (and code changes that may go along with that)?


Everything's up to Incube8, although I'll be advising. In general, my recommendation is to consider better tooling for managing assets and running builds but be very careful with the code (e.g. don't change compiler versions, don't refactor anything).


Thanks for the reply. It'll be interesting to follow the progress.

Can definitely see wanting to keep the compiler and code in a known state. Though modern SDCC generates code that's probably 50% faster in some cases and with 90% less bugs than versions from the early 2000s.

Lots of great Game Boy releases have been coming out.

Edit: add a little more.


Atari had a patent on a CPU instruction to wait for hblank; I suspect Nintendo didn't put anything similar in the hardware as a result. But it was needed, so interrupt generation ended up in cartridges... and since they didn't get sued, it was put in the next-gen hardware.


IIRC, SNES had those, NES did not.


man it must have been such a weird era where the screen was part of the "system architecture"

today everything seems so decoupled


The hardware was so limited and expensive that such trickery paid off.

For instance, on Sinclair Spectrum the memory refresh and the keyboard matrix scanning were intimately coupled; tape I/O and screen border color circuits were also somehow coupled, which was easy to see with every tape operation.


I know; I mean the thought process when designing a program under such constraints, where any aspect of the system might be a useful resource and you had almost no other choice (kinda like Abrash's rotated-texture trick on the Pentium). It's so at odds with current processes, where everything is split and isolated as much as possible.


Computers execute over 1000x as many instructions per second now.

They execute more instructions in a second than those machines did in a whole 15-minute play session.

Imagine trying to coordinate all those interactions without abstractions.


What mechanism existed for telling what scan line the TV was on?


That’s what the “hardware collision” bit is about. You place sprite 0 at a certain location on screen, and you can find out when that sprite gets drawn (with some certain other restrictions / assumptions). IIRC, the collision bit gets set when a pixel from the sprite is drawn overlapping a background pixel. This is how early games like Super Mario Bros. drew the status bar, but Super Mario Bros. could just spin the CPU waiting for this.

There are two other mechanisms I know off the top of my head.

One is to put some circuitry on the cartridge, and then arrange the usage of tiles such that all the background tiles are in one bank and all the foreground tiles are in another. If you do this right, you get an address line which cycles high and low once each scanline, and you can put a circuit on the cartridge which counts the scanlines, triggering an interrupt. This was only available if you could put that circuitry (usually, a "mapper") on the cartridge.

The last method is to count cycles.

I would note that among other differences, this is much, much easier on a Game Boy. The Game Boy has a register you can read which tells you which row you are on. Not the only thing that’s easier on a Game Boy—there’s also a hblank interrupt, and the tilemap is larger than the screen. Anyone interested in NES programming but unsure about how much they like dealing with obscure technical problems may want to try Game Boy programming first to get a taste for it.
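For contrast, here is a toy model of the Game Boy approach described above: poll the memory-mapped LY register until the target row comes around. LY really is mapped at 0xFF44 and reports the scanline currently being drawn, but the `ToyGB` class and the rest of the scaffolding are invented purely for illustration (a real loop would be a couple of assembly instructions reading the hardware register).

```python
# Illustrative model of waiting for a scanline on the Game Boy by
# polling LY (0xFF44). ToyGB is a stand-in: reading LY here advances a
# fake raster clock, standing in for the passage of real time.

LY = 0xFF44

class ToyGB:
    def __init__(self):
        self.line = 0

    def read(self, addr):
        if addr == LY:
            # 144 visible lines + 10 vblank lines = 154 per frame
            self.line = (self.line + 1) % 154
            return self.line
        raise ValueError("unmapped address")

def wait_for_line(gb, target):
    # Spin until LY reports the row we want; on real hardware this is
    # where you would reprogram scroll registers or the window.
    while gb.read(LY) != target:
        pass
    return target

gb = ToyGB()
print(wait_for_line(gb, 144))   # 144 is the first vblank line
```

Having a readable line counter (plus hblank interrupts and the window region) is exactly why the same split-screen effect is so much less painful on the Game Boy than on the NES.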


And how would one know the bit was set... polling?


Yes, but it's messy. You spin in a loop with a BIT instruction polling the PPU_STATUS register and a BVC instruction branching backwards, waiting for bit 6 to be set. Besides the fact that you're stuck doing this, the main problem is synchronization: this little loop takes 7 CPU cycles (with each CPU cycle taking 3 PPU cycles, or 3 "pixels" of distance), leading to a 21-"pixel" variance. When dealing with the very short horizontal blanking period between scanlines, that leaves very little time to do anything meaningful with the PPU in that window.

If you've ever noticed some "weird flickering" near status bars in NES games, this is probably the culprit: only a few very skilled developers were able to pull it off cleanly.
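The arithmetic behind that variance, sketched in Python: a 7-cycle polling loop at 3 PPU dots per CPU cycle gives up to 21 dots of uncertainty, against an hblank window of only 341 - 256 = 85 dots. The constants are the usual illustrative NTSC figures.

```python
# Worst-case detection jitter of the BIT/BVC sprite-0-hit polling loop,
# measured in PPU dots ("pixels"). Illustrative NTSC constants.

POLL_LOOP_CPU_CYCLES = 7          # BIT absolute (4) + BVC taken (3)
PPU_DOTS_PER_CPU_CYCLE = 3
HBLANK_DOTS = 341 - 256           # dots per scanline minus visible pixels

jitter_dots = POLL_LOOP_CPU_CYCLES * PPU_DOTS_PER_CPU_CYCLE
print(f"Detection jitter: up to {jitter_dots} dots")
print(f"Hblank window:    {HBLANK_DOTS} dots")
```

Losing up to 21 of 85 hblank dots just to uncertainty is why register writes in that window had to be so carefully budgeted, and why sloppier implementations flicker.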


Yes, polling.


The TV syncs to the clock in the input signal that the console generates. The console knows what line the TV was on because it's the line that the console is currently outputting. Analog TVs were very simple devices where the signal coming in via the wire or antenna was basically pumped directly to the CRT to manipulate the beam. While at the user's end it looked like a raster, it's all analog tricks.

Imagine a text file pushed out through a serial port: all the data is just dumped onto the wire and the other end worries about interpreting carriage returns and line feeds. You just imagined how old line printers worked!

When the TV wasn't able to figure out the sync signal you'd get rolling [1] or tearing [2] where the picture was being displayed the best the TV could make out.

[1] https://youtu.be/bGXEqzCS4nE?t=28

[2] https://www.youtube.com/watch?v=FVOVk3psy-w


You meant 50 or 60 times a second. The SNES wasn't that fast. And yeah, what you were describing was common.


They mean that they need to fiddle with those specific video registers twice a frame, which is 100 or 120 times. Vblank timing isn't too hard, the base system has an interrupt for it; scanline timing is hard though, you either need to carefully time your loop (and redo it for 50hz systems), or poll on things, or use extra hardware built into the cartridge (mappers).

I don't think there was an IRQ for sound, but it might have made sense to poll the audio unit rather than the graphics unit for some reason.


Was this any less harrowing than racing the beam on an Atari 2600?


Of course Microsoft bought Rare Ltd., so I guess they thought they might as well acquire some Nintendoish IP one way or another.


No, it's a JavaScript error. The second time around I agreed to the terms and conditions and then the site worked fine.


Yikes. Thanks, I'll look into it. The terms and conditions have no influence on this, but it could be that one of the scores required to create the life expectancy prediction is missing (age, gender, BMI, country).


My life expectancy was the only score it didn't give me. It just stayed at "..." and then marked that as not normal.


I found the issue, and it should hopefully be fixed. It happened if your IP address couldn't be used to predict your location.


Does it fall back to US if the IP address can't be used? Because this would be the first time a GeoIP database would move my EU IP address to the US...


Yes you are correct.

