Hacker News

I used to write my research papers (in linguistics) this way: I'd make piles of useless notes, writing and rewriting sections while never having anything to show for it. Then, once I understood all of it, I'd write the paper (say, 10-45 pages) as fast as I could type, usually in a single sitting. I'd still have to go back and edit it, but the bulk of the work was getting the whole problem, and my solution to it, into my head.

Interestingly, as I headed to grad school, this became more and more difficult as the problems became harder and harder. Eventually I had to devise a new system for writing papers (which I can't describe adequately) because the problems and their solutions became too large to hold in my head at once.

To me, this is the interesting case: how do you solve problems whose solutions are too large to hold in your head at once? The obvious answer is "break it into smaller pieces," but that is frequently very difficult to do. Probably the answer lies in going back to the basics and making sure you have the foundations cold, so that they aren't occupying stack space.



If you have the foundations cold, then you can make a word for each foundational concept, and a syntax for each way the foundational concepts interact. Then you've started the kind of bottom-up programming pg describes.
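The bottom-up style described here can be sketched in code. This is a minimal, hypothetical example (the statistics domain and helper names are mine, not from the thread): each foundational concept gets its own named "word," and higher-level definitions are built only from the vocabulary beneath them.

```python
# Bottom-up style: name each foundational concept, then compose upward.
# Hypothetical example; the domain (basic statistics) is illustrative.

def mean(xs):
    """Foundational concept: the average of a sequence."""
    return sum(xs) / len(xs)

def deviations(xs):
    """Foundational concept: each value's distance from the mean."""
    m = mean(xs)
    return [x - m for x in xs]

def variance(xs):
    """Higher-level word, defined only in terms of the vocabulary above."""
    return mean([d * d for d in deviations(xs)])

print(variance([1, 2, 3, 4]))  # prints 1.25
```

Once the lower words are solid, the top-level definition reads almost like the sentence that describes it, which is the payoff of having the foundations cold.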


Yes, if you're writing in Forth. I'm less sure that the bottom-up approach that Forth and Lisp allow translates as readily into generalized research.

This is one of the ways in which programming is different from other research. In programming, when I define a term(/function/word), it bloody well means that. In linguistics, if I define a notion, well, whether or not it means anything at all is exactly the point of the research.



