The methods in widespread use for interviewing and selecting software engineers are appalling. It is only because they are so bad that ridiculous methods -- like the trick puzzles Google famously asks applicants to solve -- can seem like an improvement. The sad thing is that asking candidates to solve mental puzzles really is better than what's usually done, which is not much at all. Come on, people -- we can do better!
Typical Selection Practices
Managers need more programmers. They get a job requisition from finance, then go to HR to get candidates. Since HR knows nothing about the substance of the work that needs to get done, what ends up in the job requirements is a bunch of motherhood-and-apple-pie blah-blah (self-starter, etc.) and a list of keywords naming the technologies in which experience is required.
HR screens candidates based on whatever is in their resumes and may interview them, basically to see whether they mouth the expected platitudes when prompted by the HR people. Those who play the game are then passed on to the programming department.
Typical Interview Practices
The candidate is typically scheduled for a round of interviews with programmers and managers. Since the managers rarely know much and have usually forgotten what little they used to know, they don't ask questions of substance; they basically find out if they like the candidate and if they believe the candidate will "fit in" and follow orders. The fellow programmers who interview remember their own interviews, in which questions of substance were few and far between, so they basically chat up the candidate and decide whether they like them. At the end of this "rigorous" process, if everyone agrees, the candidate is accepted.
What a joke! When you're hiring a musician, an audition (in which the musician performs) is standard practice. When you're hiring a writer, reading things previously written by the candidate is standard practice. So when you're hiring a writer of software programs, naturally you'd expect that reading programs previously written by the programmer would be standard practice -- but it's not!
Leading Edge Interview Practices
Instead, the leading edge at places like Google is to hit the candidate with trick questions. For example: "Suppose you were suddenly shrunk to the size of a nickel and found yourself at the bottom of a blender. The blender is going to start in a minute. What would you do?"
If I were the size of a nickel, most of my neurons would be gone, so I wouldn't be me anymore. But more seriously: how relevant is the kind of skill that questions like this test to the actual work of writing programs?
I could make an argument that this kind of thinking is relevant to a kind of programming that is important, but very rarely needed: algorithmic design. No one has ever (to my knowledge) measured it, but I would be surprised if algorithmic programming amounted to as much as 1% of all the code in a typical application. The vast majority of code that's written needs different kinds of skills: visualizing user interactions, understanding data structures and data flows, understanding and effectively using complex subsystems, and many other activities. These activities benefit little from the kind of skills and instincts required for solving trick puzzles.
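To make the contrast concrete, here is a sketch (all names and code hypothetical, in Python) of the two kinds of work side by side: a puzzle-style algorithmic routine next to the unglamorous data plumbing that makes up the bulk of a real application.

    # The puzzle kind: clever, self-contained, rarely needed.
    def longest_run(values):
        """Length of the longest run of equal adjacent values."""
        best = current = 1 if values else 0
        for prev, cur in zip(values, values[1:]):
            current = current + 1 if cur == prev else 1
            best = max(best, current)
        return best

    # The everyday kind: no cleverness, just validation and data flow,
    # plus knowing the record shape the rest of the system expects.
    def normalize_customer(record: dict) -> dict:
        """Map a raw import record onto the fields downstream code uses."""
        name = (record.get("name") or "").strip()
        if not name:
            raise ValueError("customer record is missing a name")
        return {
            "name": name,
            "email": (record.get("email") or "").strip().lower(),
            "active": bool(record.get("active", True)),
        }

Trick puzzles probe for the first kind of skill; day-to-day programming is overwhelmingly the second kind.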
Are there people who can do the puzzles and also be great "regular" programmers? Of course. The problem is the reverse: there are many people who would be perfectly adequate programmers but are flummoxed and generally disconcerted by questions of this kind. I've been in groups of trick-question masters. They're great at finding what's complicated in basically simple things, and at other arcane but fundamentally counter-productive skills. Other than that -- you can have 'em! Take 'em all -- please!
What Can Be Done?
There are lots of simple steps that can be taken to improve the outcomes of sourcing and selecting software people. Any step you take is likely to make things better. I hate to do it, but I have to admit that even trick questions are better than the "Hi, how ya doin'" method of interviewing. But surely we can do better.
Reading code. When hiring writers, we read their past work. Why are we so reluctant to do the same for people who write code? I suspect it's because very few people would actually be able to read the code and make a reasonable judgment of its author -- for all too many typically mediocre programmers, that would be a tour de force way beyond their limits. That's OK -- find the people who can read code with meaning and judgment, and they become your main filtering agents. Maybe, just maybe, you'll end up with ... people who write better code! What a concept!
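As an illustration -- a hypothetical snippet, not drawn from any real codebase -- here is the kind of short function (Python here) you might hand a candidate with the prompt "read this and tell me what you'd change." A strong reader spots the quiet problems, not just the style.

    # A code-reading exercise: "What would you change here, and why?"
    # A good reader notices the mutable default argument (the cache is
    # shared across every call), the bare except that silently swallows
    # all errors, and the name that hides the fact that this reads files.
    def get_items(path, cache={}):
        if path in cache:
            return cache[path]
        try:
            with open(path) as f:
                items = [line.strip() for line in f if line.strip()]
        except Exception:
            items = []
        cache[path] = items
        return items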
Auditions. What's wrong with an audition? If someone is supposed to know database design really well, show them one of your current ER diagrams and ask for comments. Describe a proposed change and ask how they'd make it. Pose a tough problem you recently had to solve (and have already solved) and ask them to solve it.
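For instance (a made-up schema, sketched here as Python dataclasses rather than an ER diagram), the audition might go: here are two of our entities; we need to let a customer have multiple shipping addresses -- what would you change?

    # A hypothetical audition prompt. The current model embeds the
    # shipping address in the customer, so "multiple addresses" forces
    # a one-to-many split into a separate Address entity.
    from dataclasses import dataclass

    @dataclass
    class Customer:
        customer_id: int
        name: str
        shipping_street: str
        shipping_city: str
        shipping_postcode: str

    @dataclass
    class Order:
        order_id: int
        customer_id: int     # which address did this order ship to?
        total_cents: int

A strong candidate proposes an Address entity keyed to the customer -- and, more tellingly, asks which address each existing Order shipped to.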
Detailed archeology. A candidate programmer may not know much about your stuff, but she darn well ought to be an expert on the stuff she's coded in the past. Find a subject where your experience overlaps hers, and ask for a detailed account of what she did, why, and how -- and what she learned and would do differently today.
Subject Matter Testing. Yes, testing. Like an audition, only more objective. If someone really is the expert PHP programmer they claim to be, they'll ace the test. No problem.
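A minimal sketch of what "objective" can look like (in Python rather than PHP, with a made-up task): state a small, precise problem and score the candidate's function against known cases.

    # A subject-matter test: the task is stated exactly, and the answer
    # is graded against known cases rather than an interviewer's mood.
    # Task: write slugify(title) -> lowercase words joined by hyphens,
    # dropping anything that isn't a letter or digit.
    def slugify(title: str) -> str:     # the candidate writes this
        words = "".join(c if c.isalnum() else " " for c in title).split()
        return "-".join(w.lower() for w in words)

    TEST_CASES = [
        ("Hello, World!", "hello-world"),
        ("  PHP  vs. Python ", "php-vs-python"),
        ("100% pure", "100-pure"),
    ]

    for given, expected in TEST_CASES:
        assert slugify(given) == expected, (given, slugify(given))
    print("all cases pass")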
Conclusion
Software hiring is an embarrassing mess. In a field that is over-the-top exacting, with fairly objective pass/fail criteria (the program works or it crashes), the methods we use to choose the people we ask to join us are random, ad hoc, and almost completely unrelated to finding out whether the candidate can actually perform as required. Asking trick questions can actually be an improvement, but that's not saying much. We can and should do better, and even a little better can make a huge improvement in the quality of the people who build our software.