Having interviewed programmers, and having thought about this problem from a human factors perspective, I keep coming to the same conclusion: coding tests at interview are, in the end, almost useless.
Let's describe this using a slightly different story...
Suppose I were interviewing someone -- let's call him Bert -- for a human factors job. I ask him:
"Bert, why do you think programming interview candidates usually fail simple coding tests, even when all the other empirical evidence (their career thus far, references, and a degree from a top-rated university) suggests they should be competent?"
And Bert responds:
"Because they're just all that dumb; top tier university professors are fools who can't assess students; the exam results are all fake; and programmers' bosses are all just bozos that can't see whether any code is being produced even if they are programmers themselves. The empirical evidence from the rest of their careers is wrong and my toy interview question asked by an untrained interviewer over the telephone is right!"
Bert would not be getting the job. (However much the grumpy misanthrope in me might want to cheer him on in railing against the world!)
Like it or not, coding at interview is very different from coding on a job. To use a loose analogy: a great many people struggle at public speaking -- shove a microphone in front of them and ask them to talk for a minute about rain, and they say "Um, er, um... Mummy can I go home now?". That does not mean they don't know English or what rain is. Dumbing the question down to "Ok, just talk about water then" doesn't solve the problem -- because it wasn't the topic that was the problem in the first place.
In an interview, we have a fake task (nobody wants to use the code), in a fake setting (an interview), via a fake interaction (over the telephone!), with a fake assessment (one interviewer whose word is final, no compiler, no user, no sales, no code metrics or unit tests), on a fake timeframe (a few minutes on each 'project'), under fake pressures (your job depends on the next ten lines of code) -- and somehow we expect to get valid results. Speaking as a scientist, that's just nuts.
At this point, most people reply "Sure, but we don't care about missing out on good candidates, only about not hiring bad ones." I have worse news for you. You are probably still hiring as many bad candidates as if you had selected them by rolling dice. Most interview coding tasks are so over-simplified that they no longer select for programming or thinking skills at all -- the "programming on stage" skill dominates completely. The irony is that by selecting for "skill at interview coding tasks" you may effectively be selecting for people who have done a lot of interviews and honed that skill -- whereas the person you actually want to hire is the one who has hardly done any interviews, because no company ever wants to let him leave.