Why Code Challenges Should Be Peer Reviewed
However hard you try, it’s pretty much impossible to create a decent code challenge with zero ambiguity without writing instructions that read like legal briefs. Throw in exam pressure and the good old-fashioned ‘I don’t read instructions; I jump in feet first’ attitude, and you have a recipe for misinterpretation and misunderstanding.
That’s why, by popular demand, we are introducing candidate feedback into our code challenges.
We took our inspiration from conversations with developers who had taken our code challenges and clients who’d looked through our code reviews. We were also partly inspired by Andy Davis’s post about failing a code challenge and having the decision overturned once he spoke to the hiring team.
Want to test your coding skills and try a peer reviewed code challenge? TRY NOW
Why have a manual code challenge?
We believe code challenges should be rewarding for the candidate. If someone is going to give up 2+ hours of their time to take your challenge, the very least they should get back is the results. And by results we don’t mean a score; we mean a thorough, line-by-line evaluation of the code, with summary points offering constructive feedback on the good and not-so-good elements of their solution.
Want to license a Geektastic code challenge to analyse the coding skills of your candidates? TRY FOR FREE
The problem with automated code reviews
Automated reviews are troublesome because:
- There is no mechanism to create a dialogue
- The pressure is undue, akin to being back at school
- A candidate can’t justify their decisions
- A candidate can’t respond to the reviewer’s comments
Sure, businesses put engineers under pressure to deliver. Commitments made by the team during sprint planning inevitably put demands on individual members, but nothing compares to the clock ticking away (even the thought of it takes me back to cold exam rooms, with teachers pacing up and down and an analogue clock on the wall that seemed to be running at double time). Even in open-ended challenges, where the time element is removed, there is still a monkey on your shoulder encouraging you to do things you might not do if you were sitting relaxed at your desk with your team.
Understanding the person, and not just the code, helps you see how decisions that at first count against a candidate can look much better with context.
Why companies use Geektastic to find better developers
At Geektastic, we try to make the code challenge experience as ‘real’ as possible while keeping enough constraints to measure and compare candidates. This means creating challenges aligned to the company candidates are applying to: building product features or solving problems they might encounter once they land the role. We acknowledge that a 2-hour time-constrained code challenge creates an artificial situation and adds pressure that doesn’t usually exist during the average working day.
Want to license a code challenge to analyse the REAL skills of your candidates? TRY FOR FREE
What developers should get from the experience
You invested your time in reviewing, but the developer also invested theirs, and giving no explanation essentially stops them from improving their skills. Every so often a developer would write in to the team asking if they could respond to the peer review. Most of the time the message is calm and collected, simply wanting to explain their solution (we have had one developer go ape$hit, but luckily that’s pretty rare). Sometimes they disagree with the comments; other times it’s because they interpreted the problem differently.
Here’s an example.
Most of our challenges say the code should be ‘of production quality’. Our Python code challenge only requires the solution to be run once to establish whether it works. One candidate used cached lookup tables to improve ‘real world’ performance, as this was his interpretation of ‘production quality code’. The reviewer marked the candidate down for over-engineering the solution (one might argue it’s over the top to mark down for that, but that’s another matter). Once the developer fed back on these comments, everything made sense, and the results were adjusted accordingly. That developer is now VP of engineering at a prominent AI firm.
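The candidate’s actual solution isn’t shown here, but to illustrate the kind of thing at issue, here’s a minimal Python sketch of a cached lookup table (the function name and data are hypothetical, not from the real challenge):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lookup_rate(code: str) -> float:
    # Hypothetical expensive lookup; in a production system this might
    # hit a file, database or API, which is why caching pays off.
    rates = {"GBP": 1.0, "USD": 1.27, "EUR": 1.17}
    return rates[code]

# Repeated calls with the same argument are served from the cache.
assert lookup_rate("USD") == 1.27
assert lookup_rate.cache_info().hits == 0  # first call was a miss
lookup_rate("USD")
assert lookup_rate.cache_info().hits == 1  # second call hit the cache
```

Whether a cache like this is good engineering or over-engineering depends entirely on context, which is exactly the kind of ambiguity a feedback loop resolves.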
Why manual code reviews help make better decisions
Building these feedback loops into the challenge lets interaction take place between the candidate and the reviewer (whether that’s one of our reviewers or the hiring team’s), speeding up conversations that traditionally either didn’t happen or were delayed until a follow-up call, an email back-and-forth or a face-to-face interview.
Work with Geektastic to find better developers
As always, we’d love to hear your thoughts on this and any other experiences you have had with code challenges. If you’d like a demo or want to hear more about Geektastic and how we are changing the way software engineers are hired, please email [email protected]