Hacker News | new | past | comments | ask | show | jobs | submit | the_mat's comments | login

It's easy to forget that the SNES game library was mostly a sewer filled with "character with attitude" platformers. Loads of really, tremendously bad, unoriginal stuff


I didn't forget that, but that's true of pretty much every system, no? Most games are bad.


I suspect you're right, though I'm hoping it was done unintentionally, that he hired a company or acquaintance to "promote" the game, and it resulted in bot-fueled reviews and downloads.

The smoking gun is that he had two other games in the top ten, and that happened without in-app cross-promotion. People would have to look up Flappy Bird in the App Store and click on the "also by" link to even find those games, and there's no way enough people would be doing that to place them in the #2 and #6 slots.


Think about the CPU landscape in 1984. You had all these 8-bit home computers running at sub-2MHz. The IBM PC was 16-bit running at 4.77MHz, and most of those were actually using 8-bit buses (the 8088 vs. the 8086). The super-expensive IBM PC/AT was the top of the line: a 6MHz 80286.

In comparison to all of this, a processor running at 8MHz that could do full 32-bit operations (though, yes, the external bus was half that), with sixteen 32-bit registers and no segments... mind-blowing!


It's not quite that cut and dried. The 80286 was actually significantly faster than the 68000 at what was, at the time, considered typical code[1]. The 68000 had a much cleaner and more forward-looking ISA, of course, but it paid a cost for that 32-bit architecture in more elaborate microcode that the 286 didn't need to worry about.

[1] That is, things that fit in mostly-16-bit data sets. Once framebuffer manipulation became the dominant operation a few years later, that status would flip. Nonetheless, if you were trying to compile your code, model your circuit, or calculate your spreadsheet as fast as possible in 1984, you'd probably pick a PC/AT over a Mac (if you couldn't get time on a VAX).


Yes. The 68000 was very nice to program for, but internally it was obviously from an earlier era/too far ahead of its time/a big pile of shit (delete as you see fit). You'll hunt far and wide for an instruction that takes less than 4 cycles, long instructions take more time again, variable-width shifts take 2 cycles per shift, a division can take 150 cycles, and with the effective address calculation timings on top things can really mount up. (See, e.g., http://oldwww.nvg.ntnu.no/amiga/MC680x0_Sections/mc68000timi...)

If you look at the cycle counts for 8086 instructions - see, e.g., http://zsmith.co/intel.html - they're much closer to the 68000 ones. Compared to the 68000, the 286 is just on another level.
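To put rough numbers on the comparison, here's a back-of-the-envelope sketch in Python. The per-instruction cycle counts are the commonly quoted register-to-register figures from timing tables like the ones linked above; the loop shape and instruction mix are my own illustrative assumptions, not a benchmark.

```python
# Commonly quoted register-to-register cycle counts (approximate; real
# timings depend on addressing modes, wait states, prefetch, etc.).
CYCLES = {
    "68000": {"add": 4, "dbra": 10},   # DBRA, branch taken
    "80286": {"add": 2, "loop": 8},    # LOOP, branch taken
}

def loop_cycles(cpu, body, iterations):
    """Total cycles for a simple counted loop: body + branch each pass."""
    return sum(CYCLES[cpu][op] for op in body) * iterations

m68k = loop_cycles("68000", ["add", "dbra"], 1000)   # 14000 cycles
i286 = loop_cycles("80286", ["add", "loop"], 1000)   # 10000 cycles

# Even at a lower clock, the 286 finishes this toy loop first:
print(m68k / 8e6 * 1e6)   # ~1750 microseconds on an 8MHz 68000
print(i286 / 6e6 * 1e6)   # ~1667 microseconds on a 6MHz 286
```

Crude as it is, it shows how a 6MHz 286 could outrun an 8MHz 68000 on simple integer loops.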


Another cost of the "more elaborate microcode" was that there was no way to resume or restart execution of the current instruction after a bus error exception. The 68010 fixed that by dumping internal state onto the stack. I wish that after the 68010 was released, Motorola had promoted the 68010 for new development and the original 68000 only for situations where the system software couldn't be modified (since fixing the problem required changing the exception stack frames). Motorola had some patents describing how it worked: https://www.google.com/patents/US4493035


This is true, but the '10 wasn't released until the Mac was well into development (and the Lisa was nearing release). Motorola sort of missed the window there. And in any case no 68k Macs would ever end up making significant use of an MMU anyway. By the time that became possible the platform had moved on to 68020/30 parts.

Obviously all the Unix vendors jumped on the '10 instantly. The MC68000 itself was never dominant there.


Yes, but the 68010 is pin-compatible with the 68000 at the hardware level, and the software changes are trivial.


I felt the same way about Wolfenstein 3D. The graphics were neat, but the game was a big step back from its 2D namesake on 8-bit hardware. It was just run and shoot and rub along walls hoping for a secret panel. I never understood why so many people wanted to clone it. Mostly the engine, I guess, because the game was just an ok design.

Doom, though, was amazing. It was a much more realized world, both visually and in terms of mood and design. Eventually the levels got a bit too puzzley, but it certainly deserves all the accolades heaped upon it.


This is a topic that essentially comes down to taste, but I'll attempt to outline the core appeal. You're right in saying that Wolf3D was mechanically simplistic compared to the original - however, those choices were intentional. Carmack's engine provided a low-latency, high-framerate simulation, albeit a relatively simple one. This was a fidelity of experience that was completely novel, so every decision was made to maximize the tactile feel of the moment-to-moment action. More complex gameplay elements were deprioritized if they a) technically slowed down the engine or b) kinetically slowed down the gameplay. The game was an appeal to the immediacy of the onscreen action, even more so than to the first-person perspective. Thankfully, this continued to be a core value for id, which is more than can be said for other shooters that offhandedly throw away input fidelity.


This is the end for id.

The only thing id has had going for it has been Carmack's engines. In recent years his stuff has been as amazing as ever, but so many commercial engines are only a fraction of a step behind, and the difference hardly matters.

Design-wise id is a complete mess. They're stuck back in the 1990s. RAGE appears to have had no leadership and no vision, and the actual design work that shipped is amateur-hour at best.


ZeniMax will fold id Software LLC, keep anybody worth keeping, and work on making money with the intellectual property (either internally or by farming it out). There's nothing left at id worth what ZeniMax paid for the company other than the IP and the game engine. I'm sure they knew the day would come sooner rather than later when they'd lose the Carmack value factor at id.


Was that cut and pasted from some kind of template for empty, verbose corporate-speak?


It's a word-template for "Reorganizational Memo - Large Software Business". While I'm here: "I see you're trying to reorganize your company. Would you like help with that?" - Clippy

Tip your waiter/waitresses, try the lamb chops.


He must have used the Dilbert reorg memo generator.


The "it feels so much faster" comments don't fly with me. The iPhone 4 still feels slick and speedy in September 2012. The 4S was faster, but it wasn't noticeable except in certain apps. Now the 5 is 2x faster again, but how much CPU is really needed to slide icons around, play music, and look at photos?


Can you imagine the pain caused by transitioning away from HFS+? Is there something so fundamentally wrong with it to justify all the compatibility headaches of a new filesystem?

Under the hood, there's some low-hanging fruit that I'm surprised Apple hasn't gone for yet:

* Turn off the writing of ALL memory to disk every time a laptop is put to sleep. Got an SSD MacBook Pro with 16GB? Every time you close the lid, sixteen gigabytes get written. Every time.

* Disable file access times so every file that gets opened doesn't need to have its timestamp updated.


Leave both alone. Sorting by last access time is really useful. Saving the memory dump means there's space for recovery when batteries run out while asleep – it just takes a bit longer.


atime is on by default to ensure POSIX conformance. You can disable it by mounting with noatime, which I believe is supported by HFS+ in both its case-sensitive and case-insensitive variants.
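For what it's worth, both of the behaviors above can be tweaked today without a new filesystem. A sketch, assuming a Mac laptop (hibernatemode values and noatime support vary by OS version, so treat this as illustrative, not a recommendation):

```shell
# Inspect the current sleep-image policy
pmset -g | grep hibernatemode

# hibernatemode 0 = sleep to RAM only: no sleepimage is written on lid
# close (trades away recovery if the battery dies while asleep)
sudo pmset -a hibernatemode 0

# Remount the root filesystem without access-time updates
sudo mount -u -o noatime /
```

The noatime remount doesn't persist across reboots, so it would need to be reapplied at startup.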


You missed a key point: with OpenGL 1.3 you don't need to follow the glBegin/glVertex/glEnd paradigm. You can create a display list or you can use vertex arrays. In fact the reason for including display lists in the first place was to make it so you didn't have to send one vertex at a time.
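For illustration, here's what the vertex-array path looks like — standard client-side arrays, one draw call instead of per-vertex function calls. This sketch assumes a GL context already exists and omits error handling:

```c
#include <GL/gl.h>

/* Three 2D vertices, submitted in one batch instead of individual
 * glVertex* calls between glBegin/glEnd. */
static const GLfloat tri[] = {
     0.0f,  1.0f,
    -1.0f, -1.0f,
     1.0f, -1.0f,
};

void draw_triangle(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, tri);  /* 2 floats per vertex, tightly packed */
    glDrawArrays(GL_TRIANGLES, 0, 3);      /* one call, three vertices */
    glDisableClientState(GL_VERTEX_ARRAY);
}
```

The display-list route (glNewList/glCallList) achieves the same batching by recording the commands once and replaying them on each frame.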

I don't agree with JWZ, regardless. OpenGL ES was a good simplification to the overall system. But he is right in that an optional compatibility layer that lives on top of OpenGL ES would have kept the original API valid for those who need it (and without slowing down the core of OpenGL ES).


The main thing Apple did with the iPad was make it work without any major gotchas. Sure, you could have stuck a desktop CPU and desktop OS in a touchscreen device. Other people tried that. It wasn't appealing.

The iPad gave long battery life, a comfortable form-factor, solid usability and reliability, an impossibly low price point for such high-end hardware, and most importantly it was a tablet above all else. Every app becomes the machine. That's entirely different than carrying a desktop around in an iPad-sized box.

Be careful if you think this is simply a dumbed-down desktop, because that means you're missing the point.


What Apple did was remove all of the barriers that got in the way of a good tablet experience for individual consumers. In essence, they made the iPad an application platform instead of a general purpose computer.

They combined that with their amazing manufacturing and engineering prowess and created a real winner.

From my perspective, the iPad is a dumbed-down device. I only have access to the underlying operating system via APIs, don't have the ability to directly access the shared corporate resources that interest me, and cannot run arbitrary software without a lot of hassle.

Dumbed-down devices are fine -- I own and love my iPad. But they're only fine until you need to do something that the vendor doesn't want you to do, and Apple is a vendor that likes to define a single "one true path" for many things.

