I happened to have just finished writing a thesis on such a combination. The size of the little droplets is determined by the chemical wavelength of the reaction-diffusion subsystem. There’s a nice video and a pdf here: https://maximzuriel.nl/dynamics-and-pattern-formation-in-act...
I found this rule in notes from a Feynman lecture given in the '70s.
The rule is extremely valuable if you need to do calculus regularly.
Enjoy!
The degradation is speculation on my part. I haven't ever experienced it.
Yes, the 30 fps rate is for small updates. A full-screen update (scrolling) commonly takes less than ~200 ms, and there are still ways to bring that number down.
I agree, the Libra 2 is great :) Try koreader, it's noticeably faster than the stock reader application.
Note that even conventional electronic displays can suffer burn-in, with CRT, LCD, and LED screens all exhibiting it. (I'm unsure about plasma displays, as I don't understand that technology.)
Yes! But intuition tells me that suspended particles moving past each other make a more fragile system than solid-state devices.
Also take into account the difference in refresh rates between reading a book (0.05 Hz) and writing (30 Hz). That is, using the display as a general-purpose monitor necessitates 600 times as many partial refreshes, and I estimate full-screen refreshes occur about 6000 times more often.
If degradation scales linearly with usage, the lifespan of the display would decrease significantly when used as a monitor. It would be good if someone in the industry could comment on this.
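Spelled out as a back-of-envelope check (the two rates are from this thread; the 20 s page-turn interval is just the reciprocal of 0.05 Hz):

```python
# Back-of-envelope: partial refreshes for reading vs. monitor use.
reading_hz = 0.05  # one partial refresh per page turn, roughly every 20 s
monitor_hz = 30.0  # partial-refresh rate when driven as a monitor

ratio = monitor_hz / reading_hz
print(ratio)  # 600.0 -> 600x as many partial refreshes
```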
I measured the latency of a single frame draw and broke it down across its main tasks.
The main culprit was network delay, as I am transmitting raw pixels (one u8 per pixel) compressed with zlib. That's a hit of ~140 ms for half a screen.
Next in line is the screen refresh (unmeasured, perceived).
Then the optional post processing (~20ms for half a screen), and housekeeping, like keeping track of dirty regions (about as long).
Lastly, writing to the framebuffer (less than 20 ms; I don't remember exactly how long).
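For a sense of scale, the raw-pixels-plus-zlib transmit step described above is roughly the following sketch. The 1872x1404 panel resolution is my assumption (a common e-ink size), and the all-black dummy buffer compresses far better than real screen contents would:

```python
import zlib

# Sketch of the transmit path: raw grayscale pixels, one u8 each,
# zlib-compressed before going over the network.
WIDTH, HEIGHT = 1872, 1404                   # assumed panel resolution
half_screen = bytes(WIDTH * (HEIGHT // 2))   # dummy all-black half screen

payload = zlib.compress(half_screen)
restored = zlib.decompress(payload)
assert restored == half_screen               # lossless round trip

print(len(half_screen), "->", len(payload), "bytes on the wire")
```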
I took great care to optimise the process; my next step was to transmit multiple pixels packed into a single u8, since the physical display cannot render 255 distinct shades of gray anyway.
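The packing step could look like this sketch, assuming the panel effectively shows 16 gray levels, so 4 bits per pixel suffice and two pixels fit in one byte (the function name and padding choice are mine, just for illustration):

```python
def pack_pixels(pixels):
    """Pack 8-bit grayscale pixels into 4 bits each, two per byte."""
    # Quantise 0..255 down to 0..15 by dropping the low nibble.
    nibbles = [p >> 4 for p in pixels]
    if len(nibbles) % 2:
        nibbles.append(0)  # pad odd-length input with a black pixel
    # Pair the nibbles up: first pixel in the high half, second in the low.
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

packed = pack_pixels([0, 255, 128, 64])
print(packed.hex())  # 0f84 -> two bytes for four pixels
```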
Interesting. But surely updating one character should be much faster than updating the whole screen, since you don't have to send as much data?
By the way, I suspect compressing multiple pixels into one byte is unnecessary: just quantise them and let the compression deal with it.
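The quantise-then-compress idea is easy to sketch: zero out the low bits and zlib exploits the smaller symbol alphabet on its own. The synthetic noisy-gray buffer below is purely illustrative:

```python
import zlib

# Synthetic buffer: mid-gray pixels with low-bit noise, as a stand-in
# for scanned-text-like screen contents.
raw = bytes(200 + (i * 7919) % 31 for i in range(100_000))

# Keep only 16 gray levels; byte count is unchanged, entropy drops.
quantised = bytes(b & 0xF0 for b in raw)

print(len(zlib.compress(raw)), len(zlib.compress(quantised)))
```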
Also zlib is not designed for image compression. I'm sure there is something more suitable, e.g. QOI.
In fact, given that you're mostly compressing mono text, I wouldn't be surprised if some kind of dynamic sprite-atlas system was better, as in JBIG2.
Anyway, if it is network latency, that seems like good news, because you should be able to get it close to zero. What is the ping to the reader?
P.S. Parent was right to doubt the claim, as a parallel connection from a client on a regular desktop refreshes at 30 Hz regardless of the size of the update.
The explanation is that I take end-to-end network measurements (from the request of an update to a full buffer of pixel bytes). That delay might be due to the slow processor on the device, or to an inefficiency in the networking code in my application.
I haven't noticed any degradation, but I put the warning up just in case. There is research suggesting that the ink "drops" stick together or break apart after enough refreshes.
You can quickly skim this page for more info (the title should be findable on libgen): sciencedirect.com/science/article/pii/S0030399217315487