I don't think "Offline Is Just Online with Extreme Latency" is a useful concept, because it doesn't capture the key difference: pessimistic vs. optimistic UI.
For example, say you have a form. If you built it thinking online-first, you'll probably have some pessimistic UI that shows a spinner and waits for the server to respond with ok/error. You can't simply think "okay, since we're offline there's more latency, so show the spinner for longer." You have to re-architect things so that the UI is optimistic, commits to a local database and that local database is synced up to the server when you come online.
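In code, the shift looks roughly like this (a minimal sketch; `localDb` and the `send` callback are hypothetical stand-ins for a real local store like IndexedDB or SQLite and a real server API):

```ts
type FormRecord = { id: string; payload: unknown; syncedAt?: number };

const localDb: FormRecord[] = []; // stand-in for a real local database

function submitForm(payload: unknown): FormRecord {
  // Optimistic: commit locally and return immediately -- no spinner.
  const record: FormRecord = { id: crypto.randomUUID(), payload };
  localDb.push(record);
  return record;
}

async function syncWhenOnline(send: (r: FormRecord) => Promise<void>) {
  // Called when connectivity returns: push everything not yet synced.
  for (const record of localDb.filter((r) => r.syncedAt === undefined)) {
    await send(record);
    record.syncedAt = Date.now();
  }
}
```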
In my experience optimistic UI is way more complex to build. Many times the complexity is worth it though.
I agree with the sentiment, but for entirely different reasons.
An offline-capable application that sometimes tries to sync can be called "Online with Extreme Latency", but that is not what true ~~Scotsman~~ offline is.
Offline/online is first and foremost about data locality and data ownership. In an online application (regardless of latency!) the source of truth is "the server". An offline application is itself the source of truth, and "the server" is a slave.
The OP seems to be talking about thin vs. fat clients. A fat client is still a client - it fetches data. An offline application is the data source for "the server"; it's the server that has to adapt to local changes, not the other way around. Naturally, this is problematic, since now you have multiple sources of truth. However, shifting the source of truth back to "the server" does create an online application with extreme latency - a fat client.
I believe thinking about it as "offline vs online" obscures what's actually difficult about offline operation. It's not storage or ownership of data that's difficult, it's conflict resolution.
The longer you're disconnected, the more things diverge, and the better your merging/resolution schemes need to be.
There are a lot of ways of thinking about and approaching the problem, some of which work better or worse under different circumstances. You see it in distributed consensus, multi-master systems, version control systems, etc.
There's ongoing work in the form of things like operational transforms and CRDTs, but clearly there needs to be a lot more progress in this area.
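For a sense of what CRDTs buy you, the simplest example is a grow-only counter: merge is commutative, associative, and idempotent, so replicas converge no matter what order syncs arrive in. A sketch:

```ts
// G-Counter: each replica increments only its own slot; merge takes
// the per-replica maximum, so concurrent updates always converge.

type GCounter = Record<string, number>; // replicaId -> count

function increment(c: GCounter, replicaId: string): GCounter {
  return { ...c, [replicaId]: (c[replicaId] ?? 0) + 1 };
}

function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [id, n] of Object.entries(b)) {
    out[id] = Math.max(out[id] ?? 0, n);
  }
  return out;
}

function value(c: GCounter): number {
  return Object.values(c).reduce((sum, n) => sum + n, 0);
}
```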
A good example of this is mobile applications used in logistics for tracking movements within a terminal. Often the wifi environment is terrible and the operators will spend significant time not connected. They often have fat clients with local storage, and when they reconnect they sync bidirectionally - they upload whatever they have done since the last connection, and they download changes others have made in the meantime that may affect them.
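That sync step has a simple shape, something like the sketch below (the interfaces are hypothetical; a real terminal app would layer conflict handling on top):

```ts
interface Change { id: string; modifiedAt: number; data: unknown }

interface LocalStore { changesSince(t: number): Change[]; apply(c: Change[]): void }
interface RemoteApi { push(c: Change[]): Promise<void>; pullSince(t: number): Promise<Change[]> }

async function syncOnReconnect(lastSyncAt: number, local: LocalStore, remote: RemoteApi): Promise<number> {
  await remote.push(local.changesSince(lastSyncAt)); // upload our work
  local.apply(await remote.pullSince(lastSyncAt));   // download others' work
  return Date.now(); // high-water mark for the next sync
}
```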
A lot of it is also about accessibility for someone with limited network access.
When you are developing with an always-on connection and a high-performing device, it's easy to ignore all the scenarios where someone doesn't have either one.
I think I agree with the analysis. It's why aspects like data verifiability are potentially more important than fancy peer-to-peer routing and transfer protocols. Once you have Merkle proofs of the data, it's just as trustworthy whether it lives on the client or the server.
Trust and anti-corruption are just two of many properties you need for data locality, and arguably the less important ones. Conflict resolution or consensus is still very, very difficult. P2P routing and transfer is a bit of a red herring; you still need consensus between your local store and the remote store. Fireproof does it using IPFS underneath, which in my experience tends to be quite slow, but it is a working solution.
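To make the verifiability point concrete: a Merkle proof check is tiny, and it doesn't care where the leaf bytes were fetched from. A sketch (not Fireproof's actual API):

```ts
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// `proof` is the list of sibling hashes on the path from leaf to root.
// Anyone holding the root hash can verify the leaf, whether the bytes
// came from a server, a peer, or local disk.
function verifyLeaf(
  leaf: string,
  proof: { sibling: string; left: boolean }[],
  root: string,
): boolean {
  let h = sha256(leaf);
  for (const { sibling, left } of proof) {
    h = left ? sha256(sibling + h) : sha256(h + sibling);
  }
  return h === root;
}
```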
Isn't that more like "decentralized vs centralized" though? In my experience talking with coworkers about online vs offline apps, the article's definition is what we use. Granted, I've worked exclusively in Web dev for years so maybe it's all contextual.
Yes and no, IMO. While these are different classifications, some concepts can be cross applied.
Decentralized essentially means the absence of a built-in omniscient entity - each and every connection is treated as one between equally correct peers. It does not mean that a pair of entities cannot establish a hierarchy, it simply means they do not have to.
Take git, for example. There is nothing forcing us to treat github/gitlab/etc. as the omniscient node even if it is the central point of data exchange - you are free to disagree with the upstream changes. Local data is always there, all local overwrites are explicit, and so on. Contrast this with something like an online game: the server will tell the client what their state is - there is a central entity controlling the truth.
The web can be interesting in this context, since there are multiple "tiers" of nodes: user applications, backend servers, databases. Data control and conflict resolution properties usually differ among nodes in the same tier and between tiers.
I don't view things like the article states, and I likely won't for a number of reasons other commenters have stated, plus one more:
If offline is online with extreme latency, then "extreme latency" must include "infinitely long latency". But accommodating infinitely long latency requires a different approach than if you assume that a server will be contacted at some point.
An app I was designing about six years back would have been something that some would use in their yard and others would use in a national park.
In the yard you have extreme latency. Sooner or later you'll get thirsty and you'll walk into WiFi distance.
In a national park there's no connectivity. You won't be able to send or receive anything until long after the moment has passed. None of the answers you get will still be relevant for most people (what percent of people who visit a national park visit the same one over and over again?).
Then I realized screen contrast wasn't 'there' yet, so I shelved it, got busy, and there it remains, collecting dust.
Yes, but I think most people care about availability of data over true ownership of data.
If I’m running a business and my internet goes out, there’s a huge selling point to a local DB that allows my operations to keep chugging along until Comcast figures out their mess. Internet returns, my DB syncs with a server offsite, and now I have that backup.
It doesn't matter if it's a proprietary format. Proprietary formats can be reverse engineered and modified and converted. There is a world of difference between a proprietary file that I have online in a form that I can't access directly (and I have no idea how it's being accessed, changed or stored by the service itself or third parties) and a file that I have on my harddrive that I have complete control over.
Those cases should both be solved with optimistic UI, so there’s no difference.
You can have a little checkmark to indicate that it’s synced, like many chat apps do.
> In my experience optimistic UI is way more complex to build. Many times the complexity is worth it though.
Yeah, I can attest to this. I'm working on an app[1] that strikes the trifecta: p2p, real-time, and offline-first. All of those things combined make the amount of tooling, design patterns, and resources available shrink to a tiny sliver compared to a typical web-based tech stack. I have researched probably 100 projects that sound promising, but almost all have been a poor fit for one reason or another. I opted to build almost everything from scratch.
Kudos to the JS ecosystem. They are way ahead in this space, with rxdb, watermelon, yjs, automerge etc. Unfortunately I couldn’t use any of them because I use a split-language stack.
Rotate the CRUD mindset to CQRS: now you have a start event and an end event, and that is the extent of the pending sync state, which can be surfaced to the user at any granularity - document level, message level, form or field level, etc.
The cost is that other view queries don't reflect the pending event; only the view that issued the command has an association with the event. That is usually the UX you want anyway. Consider a master/detail form app where you insert a new record into a collection, but the view is sorted/filtered/paginated and your new record does not match the criteria and therefore vanishes. That is never the right UX.
A less surprising alternative is to limit record creation to a specific form that encodes the business rules of creation, where that form exists in the context of a specific collection view with business rules around newly created entities of that kind, with an invariant that newly created entities always appear at the top or bottom of that collection (e.g. new messages are always added to the bottom of the chat history view). Now the optimistic update bypasses the database and simply writes through to the view optimistically, and when the ack is eventually received it seamlessly changes state without any UI jank.
Now you can send many new messages rapidly, and if the nth message fails while the rest go through, you get a properly located error. With a CRUD mindset, the form is probably just disabled until each individual insert succeeds - which means your chat app cannot keep up with your typing!
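Concretely, the write-through view might look something like this sketch (names are illustrative; `transmit` stands in for the real send path):

```ts
// Each send gets its own lifecycle, so a failure on message n
// doesn't block message n+1 and the error lands on the right row.

type MessageStatus = "pending" | "sent" | "failed";
interface ChatMessage { id: string; text: string; status: MessageStatus }

const view: ChatMessage[] = []; // write-through view, newest at the bottom

function sendMessage(text: string, transmit: (m: ChatMessage) => Promise<void>) {
  const msg: ChatMessage = { id: crypto.randomUUID(), text, status: "pending" };
  view.push(msg); // appears immediately; no form disabling
  transmit(msg)
    .then(() => { msg.status = "sent"; })     // ack: flip state in place, no jank
    .catch(() => { msg.status = "failed"; }); // properly located error
}
```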
When you need a single source of truth you cannot use optimistic UI.
E.g. if the user is a realtor, she can't tell the customer "you have now bought the house" if there is no online connection and you're waiting for a sync. You can fill out and upload the form async, but the commitment must happen online.
You cannot tell the customer "you might have (or might not have) bought the house, we will only know later".
This is correct but you can do tricks to change who the counterparty is, which is often useful. If, instead of putting up a yard sign and saying "this house is for sale" and waiting for someone to come by and complete the transaction, the homeowners entered a contract with a brokerage to sell the house to anyone who is willing to abide by a list of terms (minimum price, occupancy date, etc.), then the brokerage can respond immediately and say "great, the house is yours". This is essentially the equivalent of sending a sell order to a stock broker.
In this case it might be good for the optimistic UI to have a step like "this request is pending". So when the form is submitted, the data is ready to send and will be sent when it can be - ideally with an indicator that the user is offline and needs to be online to sync.
That's true, even if your example isn't perfect. Some types of transactions have business requirements that demand immediate consistency or responses. Contraindications certainly don't invalidate ideas, though. Use the right tool for the job.
> Those cases should both be solved with optimistic UI, so there’s no difference.
Ah, yes, "in theory, the theory and the practice are the same". Unfortunately, in practice, the theory and the practice are not the same, and those cases are regularly not solved with optimistic UI (even though they should be), so there is a difference.
Git has one interface for disconnected commits and another for connected ones. Most of the command set is bifurcated around this. Sure, there are places where you do the same act for local and remote, but those are more the exception than the rule. It's very exposed. They've done very little to create any sort of abstraction across the two, and that's probably the right answer.
Ugh. This makes me very sad. Git is a UX disaster, and my gut instinct when realizing I was doing the same thing as git would be to seriously question my thought process.
The first I heard of "optimistic UI" was with GraphQL and Apollo, but I had seen the behavior that someone (perhaps them) gave this name to long before.
I don't find the concept of "optimistic UI" especially useful for talking about this, or Apollo's implementation especially elegant. It's fine, but it doesn't solve any problem except one that was created by Apollo's binding of server data to the client, React-style.
A related concern would be surfacing the "online" part as explicit resources and actions for the user instead of implicit, background magic. If you are sufficiently offline in your lifestyle, you need to plan your communication phases.
I recently got into using a GPS sports watch. It is the kind of thing I would want to use in an offline fashion, i.e. go somewhere off the grid and track my hikes or bike rides. These devices are designed to function offline for a stretch of time, but they eventually have to sync to an online system. They will fill up with recorded sensor data, and you want to offload that somewhere else before clearing local device storage. More importantly, satellite positioning receivers need a cached "ephemeris" file that helps them predict which satellites will be overhead at a given time to operate efficiently, accurately, and quickly.
Unfortunately, the manufacturers have been infected with smartwatch expectations. They have started designing these devices as if they are always online, with the syncing functions as implicit background behaviors. When doing sporadic syncs and going back offline, it is hard to make the device "get the latest ephemeris" when you know you have internet connectivity and will be going offline again. Worse, these results are localized, and it implicitly gets the ephemeris for its current location. The UI doesn't allow you to indicate a destination and pre-load the right data for fully offline function on arrival.
> More importantly, satellite positioning receivers need a cached "ephemeris" file that helps them predict which satellites will be overhead at a given time to operate efficiently, accurately, and quickly.
Interesting, I've never heard of that particular optimization for GNSS before. I know GPS transmits ephemeris information in each frame, since that's the input data for the positioning calculations. I've got a number of Garmin watches, and they've always been able to get a position fix even after being disconnected for weeks.
I find GNSS implementations very interesting, which manufacturer is making their watches like this?
Garmin watches do! Go into the System -> About menu and there is a page showing ephemeris status.
Their documentation states that this may expire after approximately 30 days or if you travel more than 200 miles, and that it is updated during syncing.
I've seen it expire in less than two weeks with daily GPS use but phone syncing disabled. It will still get a position, but it can be the difference between an almost immediate fix after opening an activity menu and a delay of tens of seconds to minutes. Distance and pace measurements also seem to be lower quality when operating without a current ephemeris file.
> You have to re-architect things so that the UI is optimistic, commits to a local database and that local database is synced up to the server when you come online.
Isn't that exactly what the article argues for though?
> This kind of idea would move you away from a product full of API calls to one based on data synchronization.
> You have to re-architect things so that the UI is optimistic, commits to a local database and that local database is synced up to the server when you come online.
I'm not sure you got the point. The design guideline that "Offline Is Just Online with Extreme Latency" already reflects very specific architectural requirements and state transitions in the application life cycle. We're talking event-driven architectures, batching events, flushing events on offline/online transitions or even when minimizing windows, pulling events when going online, etc., etc.
I'd go even further and claim that this whole "pessimistic vs optimistic UI" thing is just "adequate vs broken UI design", regardless of whether the app is even expected to go online.
The really hard part is conflict resolution. You updated something 5 times while offline, but another user deleted that something between updates 2 and 3, or made a divergent update of their own. There are so many potential scenarios like this in a multi-user app. It's a huge rabbit hole, and you can end up in situations where it's impossible to sync in a user-friendly way. Essentially you are taking on all the challenges of distributed databases. Maybe it's worth it, but you should know what you're getting into.
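Even a single record forces policy decisions with no universal answer. A toy last-writer-wins resolver (assuming usable timestamps, which is itself a big assumption) at least makes the choice explicit:

```ts
// Toy resolver for the update-vs-delete conflict described above.
// Deletes must be kept as tombstones, or a late update can't even
// be recognized as a conflict.

type Op =
  | { kind: "update"; id: string; at: number; data: unknown }
  | { kind: "delete"; id: string; at: number };

function resolve(a: Op, b: Op): { winner: Op; conflicted: boolean } {
  const [earlier, later] = a.at <= b.at ? [a, b] : [b, a];
  // update-vs-delete is a genuine conflict: letting the later update
  // win resurrects a deleted record; letting the delete win silently
  // discards work. Neither is universally right, so flag it for the user.
  return { winner: later, conflicted: earlier.kind !== later.kind };
}
```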
Online-only collaboration systems already have to do this. The local view has to synchronize state with other local views and with the canonical version on the server. Offline changes are online edits with high latency.
I've also worked on systems like that, where a mobile user could lose connection at any time, and they are indeed very complex to get right.
I'm not sure there's a perfect way to do it, but we ended up having certain functions that had to be done online, and others where the user built a "request" for service that was handled optimistically, with failures sent to the user's inbox later.
Not exactly - there’s a bunch of non-trivial stuff you need to worry about when both local and remote states represent sources of truth. Things like CRDTs and event sourcing make it easier, but there’s still more complexity than dealing with only one source of truth.
In an online-first experience, any local state being held is (for the most part) an optimization or is intentionally ephemeral. You can just toss it away at a performance penalty if it ever gets too hairy.
In an offline-first experience, you need to be very careful that you treat everything like a source of truth. You also need to deal with schema migrations and business logic migrations, since you need those to partially live on the client.
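On the migration point: the local store has to carry its own version and every migration step forever, because a returning client may be arbitrarily many releases behind. A sketch (field names are made up):

```ts
interface LocalStore { version: number; rows: Record<string, unknown>[] }

// Every step is kept permanently; a client that was offline across
// many releases replays all of them in order on its next launch.
const migrations: Array<(s: LocalStore) => void> = [
  /* v0 -> v1 */ (s) => s.rows.forEach((r) => { r.createdAt ??= 0; }),
  /* v1 -> v2 */ (s) => s.rows.forEach((r) => { r.name ??= r.title ?? ""; }),
];

function migrate(store: LocalStore): void {
  while (store.version < migrations.length) {
    migrations[store.version](store); // run the next pending step
    store.version += 1;
  }
}
```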
I was required to use an MS SQL Server database for a project, and it would go offline for 5-10 minutes every hour. (Getting the admins to fix it was a no-go.)
Now of course I was judged on the uptime of my app, which relied on it.
Finally I just cloned the data to a local MySQL and refreshed it frequently. The database team was totally clueless that the app servers were hosting their own databases.