Then you need central coordination: either a single central server holding the counter, or something like Snowflake, where multiple counters are each assigned disjoint ID blocks ahead of time (which still requires coordinating with a central server).
UUIDs/ULIDs/etc. are fully distributed: two clients can each assign an ID without coordinating, with a ~0% chance of collision.
You could also split the u64 so the first 24 bits are unique to the client and the last 40 bits are unique to the produced character. That still allows 1 TiB of data (2^40 one-byte characters) per client and session. The single mutex would be the client ID counter.
An incrementing u64 requires either atomic increments shared between concurrent clients, or reconciliation logic to consistently reassign IDs after conflicting increments sync. UUIDs just spit out a unique ID without any of that complexity or any dependence on other clients.
There's a closely related idea that might work, though. Each device editing text could be assigned a 32-bit ID by the server (perhaps auto-incrementing). Devices then maintain a separate 32-bit counter that they increment for each operation they perform. The ID used for each character is (device_id, edit_id), which fits nicely in 8 bytes.
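A minimal sketch of that (device_id, edit_id) scheme, packing the server-assigned 32-bit device ID and the device's local 32-bit operation counter into one 8-byte integer (the function names here are illustrative, not from any particular library):

```python
# Pack a 32-bit device ID and a 32-bit per-device edit counter into a
# single u64-sized character ID. After the one-time device_id assignment,
# each device only increments its own counter -- no cross-device
# coordination is needed per character.

def pack_char_id(device_id: int, edit_id: int) -> int:
    assert 0 <= device_id < 2**32 and 0 <= edit_id < 2**32
    return (device_id << 32) | edit_id

def unpack_char_id(packed: int) -> tuple[int, int]:
    return packed >> 32, packed & 0xFFFFFFFF

# Round-trip check: device 7, its 123rd operation.
assert unpack_char_id(pack_char_id(7, 123)) == (7, 123)
```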
Indeed, this is close to what Yjs (a popular CRDT library) does: each client instance (roughly, a browser tab) chooses a random 32-bit clientId, and character IDs combine this clientId with local counters. https://github.com/yjs/yjs/blob/987c9ebb5ad0a2a89a0230f3a0c6...
Any given collaborative document will probably only see ~1k clientIds in its lifetime, so the odds of a collision are fairly low, though I'd be more comfortable with a 64-bit ID.
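The standard birthday bound gives a quick sense of those odds, assuming clientIds are drawn uniformly at random:

```python
# Approximate probability that n uniformly random b-bit IDs contain at
# least one collision, using the birthday bound p ~ n(n-1) / (2 * 2^b).

def collision_prob(n: int, bits: int) -> float:
    return n * (n - 1) / (2 * 2**bits)

p32 = collision_prob(1000, 32)  # ~1.2e-4 for 1k clients at 32 bits
p64 = collision_prob(1000, 64)  # ~2.7e-14 at 64 bits
```

So at ~1k clientIds per document, a 32-bit ID collides in roughly 1 in 10,000 documents, while 64 bits makes it negligible.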
Not everything, and this already changed a while back.
> The core of the Tcl interpreter has been replaced with an on-the-fly compiler that translates Tcl scripts to byte codes; a new interpreter then executes the byte codes. In earlier versions of Tcl, strings were used as a universal representation; in Tcl 8.0 strings are replaced with Tcl_Obj structures ("objects") that can hold both a string value and an internal form such as a binary integer or compiled bytecodes.
The hard thing about modeling is not the math to get to the present value of the stock. It's figuring out which assumptions make sense.
Assuming that a revenue growth rate of 84,762.39% is (a) a valid number and (b) expected to remain the same over the next X years does not, quote-unquote, "make sense".
Haha, the initial model is not a good representation of value, as it projects the last year's metrics out 5 years. 80K% growth for 5 years would do that. Thanks for pointing it out. I need to incorporate some boundaries for the initial values lol.
I put in quite a few stocks and the results were often strange: negative values, prices under a dollar, etc. These were all stable companies, not penny stocks.
To be fair to the creator, he lets you put in your own assumptions via the revenue growth fields (which is what's skewing your projections). That's frankly better UX than one of its competitors, which does not let you do that.
The concern isn't that they would invest based on obviously wrong numbers. The concern is that someone invests based on believable numbers that are incorrect. There's no information on accuracy measures or verification. Offering this model to others could be an SEC/FINRA violation.
Likely the people who downvoted have no experience with the legal side of stock-related discussions/tools (implied expectations/suggestions, etc.). There's a reason you see "this is not financial advice" and other legal disclaimers in videos discussing stocks.
Also, no real company or person is named as the owner, at least on the website, so the privacy policy is kind of a "trust me bro".