The one thing that nobody seems to appreciate about the push toward retina displays is just how many more pixels there are to fill, how much more memory they consume, and how much slower everything runs on the same hardware.
Yet, anyone who has ever been a gamer knows that changing your resolution has a huge impact on your FPS.
It is foolish to think that Microsoft can make the screen much higher resolution, have a similar chip, and not have serious performance problems in terms of responsiveness.
I'm sure Microsoft can get a lot of this fixed, and they will, but as with the smartphone revolution, sometimes you have to wait for the hardware to catch up with the software, or else write the software to take full advantage of the hardware you have.
Agreed, and it's not limited to Microsoft. The iPhone 4 and the iPad 3 each doubled the linear resolution of their predecessors (four times the pixels) but barely managed to keep the same level of performance.
That's one reason Apple kept the iPad 2 around for so long. It still performs really well and for most users that makes a bigger difference than higher resolution.
As someone who is not a gamer, I am curious -- why is it not the GPU that is the most critical when talking about the FPS and refresh rate for things like pen input? Is the issue that the CPU is "in the loop"? Is this a matter of the built-in graphics on the Haswell chips not being quite good enough for this application, or is it actually the CPU?
I mostly wonder because I've found the improvements to the GPU in the Haswell processors to be game-changing. I can use SolidWorks reasonably well on my Core i5 laptop! The idea of doing that with integrated graphics on any other computer would have been laughable.
Will a faster CPU really improve this (i7 vs. i5)? Or is it a matter of getting a higher-end GPU attached to the same CPU (an i3 with a more powerful integrated GPU)?
"why is it not the GPU that is the most critical when talking about the FPS and refresh rate for things like pen input?"
GPUs are typically about reducing latency for a large amount of parallel processing. Things like stylus latency are more about reducing the latency for a very tiny amount of serially processed data, where the latency requirements are quite a bit more stringent. (A kerjillion triangles, versus a few bytes total for x, y, and pressure, though rendering the drawn line involves a bit more math on top of that.)
People who edit sound and do studio recording know from experience that people paying close attention can start noticing lags upwards of 5-10 ms or so. When you're concentrating on drawing, you're definitely paying close attention.
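To put that budget in perspective, here's some back-of-the-envelope math (illustrative numbers, not measurements of any real device):

    # Rough math relating buffer sizes and frame times to perceptible lag.
    # Illustrative numbers only, not measurements of any real device.
    SAMPLE_RATE_HZ = 44_100  # CD-quality audio

    for buffer_samples in (128, 256, 512, 1024):
        latency_ms = buffer_samples / SAMPLE_RATE_HZ * 1000
        print(f"{buffer_samples:4d}-sample audio buffer -> {latency_ms:5.1f} ms")

    # One frame of queueing on a 60 Hz display, by itself:
    print(f"one 60 Hz frame = {1000 / 60:.1f} ms")

A 256-sample buffer is already ~5.8 ms, and a single queued frame at 60 Hz is ~16.7 ms -- past the point where an attentive person notices.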
The way game input works compared to OS input is very _very_ different.
Games go through a more direct mode of user input (DirectX), whereas the OS has to take into account that wherever you're clicking could do a million different things (basically, games take -almost- exclusive use of your mouse/keyboard). This becomes even more problematic in a drawing application, which is also running math over your input to figure out exactly what you meant.
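Here's a minimal sketch of that difference in Python (hypothetical names and routing, nothing like the real Win32/DirectX code paths):

    # Sketch only: hypothetical routing, not real Win32/DirectX APIs.
    def direct_input(x, y, pressure):
        # Game-style: the app owns the device and acts immediately.
        return ("draw", x, y, pressure)

    def os_routed_input(x, y, pressure, widgets):
        # OS-style: hit-test every event to find its target, then dispatch.
        for widget in widgets:
            if widget["contains"](x, y):
                return widget["on_input"](x, y, pressure)
        return ("ignored", x, y, pressure)

    canvas = {
        "contains": lambda x, y: 0 <= x < 1440 and 0 <= y < 960,
        # A drawing app also runs smoothing math over raw samples:
        "on_input": lambda x, y, p: ("smoothed-draw", x, y, p),
    }

    print(direct_input(100, 200, 0.7))
    print(os_routed_input(100, 200, 0.7, [canvas]))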
I'm with you though. They really shouldn't have come this far and screwed up the latency on the pen again (they fixed it on the Surface 2 after talking to Gabe last time as well...)
The CPU is required to process the digitizer input, figure out what it's trying to do, communicate it to the application, process it within the application, turn that into pixels on a constantly-updating canvas, and hand that canvas off to the GPU as a texture.
The GPU is responsible for displaying that texture to the screen.
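To make the serial nature of that pipeline concrete, here's a toy model with made-up per-stage costs (real numbers vary with hardware, drivers, and the app):

    # Toy model of the pen-to-pixel pipeline; all costs are invented
    # to show how serial CPU stages accumulate before the GPU's part.
    PIPELINE_MS = [
        ("digitizer sampling",      4.0),
        ("OS input processing",     2.0),
        ("app stroke math",         3.0),
        ("canvas rasterization",    4.0),
        ("texture upload",          1.0),
        ("GPU composite/scan-out",  8.0),
    ]

    total = 0.0
    for stage, cost in PIPELINE_MS:
        total += cost
        print(f"{stage:24s} +{cost:4.1f} ms  (cumulative {total:5.1f} ms)")

Only the last stage is the GPU's job; everything above it is serial CPU work, which is why a beefier GPU alone doesn't fix pen lag.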
As a gamer, there's a similar issue in gaming: a lot of modern games are getting bottlenecked on the CPU (just like drawing on the Surface is), because drawing to the GPU requires going through several layers of software abstraction. This means that a lot of games (mostly MMOs lately) will lag no matter how good your video card is if you don't have a sufficiently fast CPU. In fact, up to 40% of the CPU use of some games is spent not by the game itself, but by the drivers that sit between the game and the GPU.
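You can get a feel for that kind of overhead with a toy benchmark (the layers here are stand-ins, not a real driver stack):

    # Toy benchmark: stand-in "layers", not a real driver stack.
    import timeit

    def gpu_command():        # pretend hardware command
        pass

    def api_runtime():        # e.g. a graphics API runtime layer
        gpu_command()

    def vendor_driver():      # e.g. a driver layer validating state
        api_runtime()

    calls = 100_000           # a few frames' worth of draw calls
    print(f"direct:  {timeit.timeit(gpu_command, number=calls):.4f} s")
    print(f"layered: {timeit.timeit(vendor_driver, number=calls):.4f} s")

Each wrapper multiplies per-call cost, and an engine can issue tens of thousands of draw calls per frame, all on the CPU.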
The problem was traditionally the link between the northbridge (which connects the CPU and RAM) and the southbridge (which handles peripheral connections like SATA, PCI, PCI-E, etc.) on the motherboard. In the last 4 or 5 years, Intel has moved to a single Platform Controller Hub architecture, where the CPU itself provides the functionality previously found in the northbridge chip.
In this particular case, a "faster motherboard" might have helped push updates to the GPU faster, though my guess is that graphics memory becomes the real concern at double the resolution (and that memory in turn needs updates pushed by the CPU, so it could help anyway).
In modern Intel processors (especially in portable devices), the GPU is integrated on the CPU die, so pushing updates to it isn't a separate step (unless you count pulling data from main memory to feed the GPU).
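Some rough framebuffer math shows why memory, and the bandwidth to keep it updated, is the worry (illustrative figures; real usage involves multiple buffers and surfaces):

    # Rough framebuffer math at 4 bytes/pixel; illustrative only.
    def framebuffer_mib(w, h, bytes_per_px=4):
        return w * h * bytes_per_px / 1024**2

    base   = framebuffer_mib(1920, 1080)   # a 1080p panel
    retina = framebuffer_mib(3840, 2160)   # double the linear resolution
    print(f"1920x1080: {base:5.1f} MiB per buffer")
    print(f"3840x2160: {retina:5.1f} MiB per buffer ({retina / base:.0f}x)")

    # Rewriting that one buffer every frame at 60 Hz:
    print(f"~{retina * 60 / 1024:.1f} GiB/s just for full-screen updates")

Doubling the linear resolution quadruples both the buffer size and the bandwidth needed to repaint it every frame.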
"It is foolish to think that Microsoft can make the screen much higher resolution, have a similar chip, and not have serious performance problems in terms of responsiveness."
Sure. It's not, however, foolish to wonder why they'd RELEASE a laggy device. Hoping that an already small market will shell out $1000+ for a device that might, with a future SW update, eventually work as well as the last model is ridiculous.