Perhaps, but it's easily overlooked because every project that needs runtime reflection badly enough has already rolled their own. (Some game engines have multiple reflection systems, e.g. shader parameters vs. serialization.) This framework takes a particular approach and makes it an (almost) standalone component.
Why do/did you go for runtime reflection for all that? I thought you would use it for external scripting or mods or things like that, but not for, e.g., serialization. I guess I am missing something.
Everyone misses compile-time reflection because it solves many use cases easily (like serialization) without having to go for a complex solution or incur the performance penalties of runtime reflection. In contrast, I doubt many people care about general runtime reflection, which I'd expect to be a mess in C++ and most likely not as fast as people would like.
The biggest advantage I find with runtime reflection is that it's possible to load and manipulate data that has no C++ definition at all. This is actually handy when working with 3D geometry consumed by a GPU.
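As a rough illustration of manipulating data that has no C++ definition, here is a minimal sketch of a vertex layout described entirely at runtime. All type and function names here are my own invention, not from any real framework:

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <vector>

// A vertex format described at runtime -- there is no C++ struct
// matching the buffer's layout. Names are illustrative only.
struct Attribute { std::string name; std::size_t offset; };
struct RuntimeType { std::size_t stride; std::vector<Attribute> attributes; };

// Read one float attribute out of a raw byte buffer using only the
// runtime description of the layout.
float readFloat(const RuntimeType& type, const unsigned char* data,
                std::size_t index, const Attribute& attr) {
    float value;
    std::memcpy(&value, data + index * type.stride + attr.offset, sizeof value);
    return value;
}
```

A loader could build such a `RuntimeType` from a mesh file's header and hand the raw buffer straight to the GPU, while CPU-side code can still inspect individual attributes.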
This might come true some day, but apparently not in C++20, and in the recent static reflection proposals there was no way to distinguish reflected from non-reflected members in the same class.
I am hardly a C++ user nowadays, but such stuff interests me, because the Windows Development team managed to push C++/WinRT as replacement for C++/CX, but are declining any improvement to Visual Studio support (at least comparable to C++/CX) until ISO C++ gets similar capabilities to C++/CX.
So given that some C++ usage is required depending on which APIs you want to access from .NET, you can imagine there are many WinDevs not very happy with the downgrade in tooling support.
Back to your point, in what cases might such distinction be relevant?
Here, only one data member is meant to be serialized. The other members are there to accelerate lookups into the first data member. (Full disclosure: that structure isn't actually serialized yet, but the Arc80 Engine, which uses Plywood, has similar examples.)
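A hedged sketch of the kind of structure being described (the names are mine, not Plywood's or Arc80's): only the first member would be serialized, and the index is rebuilt after loading, so a serializer needs some way to know it should skip the second member:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct AssetBank {
    // The only member meant to be serialized.
    std::vector<std::string> assets;

    // Runtime-only accelerator: maps an asset name to its index in `assets`.
    // A reflection-based serializer should skip this member entirely.
    std::map<std::string, std::uint32_t> nameToIndex;

    // Rebuild the lookup accelerator after deserializing `assets`.
    void rebuildIndex() {
        nameToIndex.clear();
        for (std::uint32_t i = 0; i < assets.size(); ++i)
            nameToIndex[assets[i]] = i;
    }
};
```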
Interesting! Could sufficiently enhanced compile-time reflection (say we do get supercharged CT reflection in C++2b*) be used to implement fully transparent & automatic versioning in serialization frameworks?
I know very little about this subject, but I think Boost.Serialization, though capable of handling multiple versions of a given class, requires manually 'tagging' source files as being, say, version 1, 2, etc.
I haven't looked at the C++ reflection proposal in a bit, but I think attributes are reflected, so you should be able to use your own attributes to mark non-serializable objects.
It's the latter: A standalone framework that can be used by other projects. It's like a library (or suite of libraries) with separate modules for platform abstraction, containers, JSON, etc. A bit more "batteries included" than vanilla C++, and you only link with what you use. These modules are organized into a workspace that helps set up new build pipelines, to compensate for the lack of a standard build system in C++.
OK. That's not really clear from the article. It makes the strange statement that the update function must be called "at a moment when each thread is quiescent – that is, not doing anything else." I don't believe there is any way in C++ for a thread to be doing more than one thing at a time, so if a C++ thread calls update it is, by inspection, not doing anything else. Maybe you could restate this. Since it was stated this way, I assumed it must really mean that all threads must have reached this state, and then one thread must call update.
I think you might be right. Maybe I should edit the post and not call it "more scalable", since I only have six cores to test on. But if it does top out at a higher core count, there are several ways it could be optimized.
Either way, Junction is BSD-licensed while TBB is GPL/$, so I hope it'll find a use somewhere.
If I implement the mutex as you suggest -- by using the native semaphore directly, with no separate counter -- the running time of "testBenaphore" increases from 375 ms to 3 seconds on my Windows PC.
As mentioned in the article, most mutex implementations already use this trick. So you can just use std::mutex, and things are fine.
That looks like some API/kernel call overhead; you've moved the fast path of the semaphore implementation into user space. But what you have there is undeniably a semaphore implementation: atomically tweak a counter, and based on that result, wait or signal.
> you've moved the fast path of the semaphore implementation into user space.
I see now why your original comment was a bit inflammatory. I should have been more clear in the post that by "lightweight", I meant exactly that: "fast path in user space". I guess not everyone shares this vocabulary. I'll improve the post.
You're right that this lightweight mutex is a semaphore, of course. But not every semaphore is a lightweight mutex. So the technique isn't pointless.
Seems like that's what any sane modern user space sync primitives would do: do the atomic ops in user space for the fast path, enter the kernel when it's time to block. Like the "futex" in Linux.
I suppose it is a fair point that the thing you use to enter the kernel doesn't have to be itself a semaphore.
Long ago when developing for the PS2 gaming machine, we used this exact technique to achieve a 10x faster mutex on that platform than using the semaphore directly as a mutex, so it hasn't always been pointless.
> It may just be poor wording, but I don't think this sentence makes sense
I can see how you might have interpreted that sentence differently. I just tweaked it a little in the post, mainly changing "the sample" to "this sample". Hopefully it's precise enough for most readers now.
Carmack is one of my heroes but I don't think that's particularly true. He's famous for not liking deep stories and for throwing people straight into the game, yet most AAA titles nowadays have ridiculously elaborate stories, and it can take an age to get into the gameplay. His opinion is always worth listening to, but I'm not convinced most people follow it.