I’m curious what this means:
> run it on a large, complex TypeScript codebase
What do you mean by “run it”?
Are you putting the entire codebase all at once into the context window? Are you giving context and structure prompts or a system architecture first?
Most of the people I see who fail to “get” GPT assistants fail because they don’t give them context and general step-by-step instructions.
If you treat it like a really advanced rubber duck, it’s straight up magic, but you still have to be the senior engineer guiding the project or task.
You can’t just dump a 10,000 LOC file into it, ask some vague questions, and expect to get anything of value out of it.