This reads like it was written by someone coming out of CS. It's hard to know this from outside the industry, but CS and cyber are two very different fields culturally. I interview for 10-20 cyber positions a year, and a lot of your site doesn't pass the sniff test:
1. A 95% interview success rate and 400+ users? On an effectively brand-new site?
2. "Learn from industry leaders and seasoned FAANG professionals with real-world experience" — like who? There are only so many FAANG cyber staff (and their time is very expensive). Not only do you not list who they are or their backgrounds, you don't even say who you are on the about page.
3. The hands-on labs don't seem to exist. There are plenty of existing sites you could point users to for this, but I suspect you want to keep them on your platform for subscription revenue.
I say all this because cyber is an industry built on trust, and right now there's very little about the site to trust.
One last thing: LLMs are struggling in this field. You need an enormous amount of data to tune a model properly, and I fear you may end up doing more harm than good by having it present made-up answers as good ones. A better path would be to curate the questions in your own specialty yourself (then hire others for theirs) and grade answers from low to high value.