> Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.)
> Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
It's a bit ironic, then, for this post to rely on such a convoluted interpretation instead of the simple, dumb one.
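For what it's worth, Rule 3 is easy to check directly. A rough sketch (the function names are mine, not Pike's, and the timings are machine-dependent, so no winner is asserted here): a plain linear scan versus binary search over a tiny sorted list.

```python
import timeit
from bisect import bisect_left

def linear_contains(xs, x):
    # Simple: O(n) scan with a tiny constant factor.
    for v in xs:
        if v == x:
            return True
    return False

def binary_contains(xs, x):
    # "Fancy": O(log n) on a sorted list, but with per-call overhead.
    i = bisect_left(xs, x)
    return i < len(xs) and xs[i] == x

small = list(range(8))  # n is small, as Rule 3 says it usually is

# Both agree on the answers, so the only question is the constants.
assert linear_contains(small, 5) and binary_contains(small, 5)
assert not linear_contains(small, 99) and not binary_contains(small, 99)

t_linear = timeit.timeit(lambda: linear_contains(small, 5), number=100_000)
t_binary = timeit.timeit(lambda: binary_contains(small, 5), number=100_000)
print(f"linear: {t_linear:.4f}s  binary: {t_binary:.4f}s")
```

Run it with a large sorted list instead and the ranking flips, which is exactly the "until you know that n is frequently going to be big" caveat.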
> ... since you are confident that you understand how the code works, having written it yourself, you feel that you must be able to figure out what is going on. ... Suddenly you see it, and you're blinded by a bright light as all the pieces fall into place.
There's a particular frame of mind, something like "I feel that there's a bug in this specific bit of code." I often find that once I make that shift, it's suddenly easy to see the bug. I've tried to cultivate this feeling, but it's not easy to conjure up on demand. A bit like flow state.
While I think it is wise to heed the warning implicit in Kernighan's aphorism, the question it poses can be answered by noting that finding errors provides additional information, which lets you debug the code without first having to get any cleverer. What makes debugging harder than writing is that when debugging you are dealing with reality, not with your simplified, incomplete, and possibly tacit assumptions about it.
And then repeat, with the new thing that should be impossible but is still happening, until I get to the root cause.
Being better at debugging doesn't necessarily make you better at writing less complex, more approachable code. Though debugging should teach you which patterns cause bugs or impede debugging.
And what do you do once you are so skilled your software doesn't have bugs? :P
Regarding self-improvement, Aaron Swartz put it better here: https://web.archive.org/web/20240928215405/http://www.aarons... (tldr: do things you don't like)