> Linux gets a bad reputation because 20-ish years ago Ubuntu sent out free CDs and became the dominant OS.
I've been an Ubuntu user for 20 years, and used Red Hat and SUSE before that. Ubuntu just worked. Debian had packages for everything, including from third-party vendors. It let me focus on my work, and not worry about the OS, or compiling packages, or hunting for installers. When I had issues (rarely), the large user base meant that someone had already figured out a solution to the problem.
The flavor of Linux doesn't matter so much in my opinion.
There are so many useful snippets of good advice on this thread.
I'd like to mention sport again, but with an addition: find a sports coach you can afford. This changes sport from being a destination to a path, and you'll avoid injuries - which is something you'll need to be careful about as you grow older. I'm in my mid-40s, for context.
I wouldn't give too much credit to rules like this. Data structures are often created with an approach in mind. You can't design a data structure without knowing how you will use it.
If anything it's the other way round, if you're not talking about business domain modeling (where data structures first is a valid approach).
> If anything it's the other way round, if you're not talking about business domain modeling (where data structures first is a valid approach).
And even there, the data models usually come about to make specific business processes easier (or even possible). An Order Summary is structured a specific way to make both the Fulfilment and Invoicing processes possible, which feed down into Payment and Collections processes (and related artefacts).
To elaborate on @jeswin's point above (IDK why it got downvoted)... a data structure is basically like a cache for the processing algorithm. The business logic and algorithm needs will dictate what details can be computed on the fly vs. pre-generated and stored (be it RAM or disk). E.g., if you're going to be searching a lot, then it makes sense to augment the database with some kind of "index" for fast lookup. Or if you are repeatedly going to be plotting some derived quantity, then maybe it makes sense to derive that once and store it with the struct.
It's not enough for a data structure to represent the "fundamental" degrees of freedom needed to model the situation; the algorithmic needs (vis-a-vis the available resources) most definitely matter a lot.
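A minimal TypeScript sketch of the "data structure as cache" idea (the names here are illustrative, not from any real codebase): a lookup-heavy access pattern justifies paying the up-front cost of building an index alongside the raw array.

```typescript
// Hypothetical example: augmenting a plain array with a lookup index.
interface User { id: number; name: string; }

const users: User[] = [
  { id: 1, name: "ada" },
  { id: 2, name: "grace" },
];

// Computed on the fly: O(n) per lookup -- fine if lookups are rare.
function findUserLinear(id: number): User | undefined {
  return users.find(u => u.id === id);
}

// Precomputed "cache": O(n) once, then O(1) per lookup --
// worth it only if the algorithm is lookup-heavy.
const byId = new Map<number, User>(users.map(u => [u.id, u]));

console.log(findUserLinear(2)?.name); // "grace"
console.log(byId.get(2)?.name);       // "grace"
```

Which shape is "right" depends entirely on the access pattern, which is the point: the structure encodes an assumption about the algorithm.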
I'm saying that if you care about performance, data structures should be designed with approach-specific tradeoffs in mind. And like I've said above, in typical business apps, it's ok to start with data structures because (a) performance is usually not a problem, and (b) staying close to the domain is cleaner.
You said: "You can't design a data structure without knowing how you will use it."
But the whole discussion involves knowing how you will use it; the advocacy is for careful consideration of data structures (based on how you will use them) resulting in less pain when designing/choosing algorithms.
"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious."
If you want native binaries from typescript, check my project: https://tsonic.org/
Currently it uses .NET and NativeAOT, but I'm adding support for the Rust backend/ecosystem over the next couple of months. TypeScript for GPU kernels, soon. :)
True p2p is the only approach that will work, not federation. I'd go further and make the protocol high-friction for federation.
It's true that many p2p attempts have failed, but it's also the only solution that doesn't require someone running servers for free. There's evidence of success as well: Napster and BitTorrent. Both were wildly successful, and ultimately died because of legal issues. It might work when the data is yours to share.
I can't imagine a world where a p2p social network is practical. Not when each node is an unreliable mobile phone that's maybe on cellular. Even with something like IPFS you have pinning services, and BitTorrent has seed boxes, because pure p2p is impractical.
I sort of agree, but federation is good. It's funny that you use BitTorrent as an example, because it involves every single user running servers for free.
If people can both be an origin for content and a relay for content, and modulate the extent to which they want to do either of those things, there's not really much of a difference between "federation" and "true" p2p. Some people will be all relay, and some people will be all content. Some content people might be paying relays, and some relays might be paying content people. Some relays will be private and some relays will be public. Some people will maintain all of their own content locally, and some people will leave it all on a specialized remote server as a service and not even care about holding a local copy.
Also, browsing would either have to be done through a commercial or public service (federation again), or through specialized software (no one will ever use this and operating systems will intentionally lock it out if they see it as a competitor.)
The problem with wishing this all into existence, though, is that BitTorrent (not dead) exists and is completely stagnant. There is often a lot of talk about improving the protocol, and the various software dealing with it, and none of it gets done. If BitTorrent would just allow torrents to be updated (content added or removed), you could almost piggyback social media on it immediately. It's not getting done. Nobody is doing it, just writing specs that everybody ignores for decades.
So I guess my belief is that "true p2p" is a meaningless term and target when it comes to creating recognizable social media. "True p2p" would be within a private circle of friends, on specialized software. Might as well be a fancy XMPP group chat, say; it's already available for anyone who wants it. Almost nobody wants it. Telegram, WhatsApp, and iMessage are already good enough for that. They may not be totally private, but they're private enough for 99.9999% of people's purposes, and people are very suspicious of the 0.0001% who want something stronger.
I actually think you're using "true p2p" here to sort of handwave a business model into existence (trying to imply mutuality, or barter, or something.) Whereas I think the business model is the part that needs to be engineered carefully and the tech is easy.
Very interesting. I made a similar project called tsonic [1] which compiles TS to C# and then uses NativeAOT to compile it down to native code.
We must have faced similar issues, and I'm curious how you solved them. From your examples:
    function add(a: number, b: number): number {
        return a + b;
    }
The challenge I faced here with JS/TS was that they have a single "number" type which can carry ints, longs, floats, etc. In most application code, you don't want a and b to be floats - you want them to be integers, such as in a loop counter. There are a whole bunch of types in native code that have no equivalent in TS.
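A quick sketch of why this is awkward for a native compiler (illustrative only): the same `number`-typed function is equally valid for values that would need different native representations, and integer-ness is only observable at runtime.

```typescript
// TS has one numeric type; every call below type-checks identically.
function add(a: number, b: number): number {
  return a + b;
}

add(1, 2);        // looks like int + int
add(0.5, 0.25);   // but floats are equally valid

// All JS numbers are IEEE-754 doubles, so integer precision
// quietly breaks past 2^53:
console.log(2 ** 53 + 1 === 2 ** 53); // true

// Number.isInteger inspects a runtime value, not a static type,
// so a TS-to-native compiler has to infer (or be told via
// annotations) whether to lower `number` to i32, i64, f64, etc.
console.log(Number.isInteger(1));   // true
console.log(Number.isInteger(0.5)); // false
```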
It took me several months to get to a usable state, working on issues one by one.