
How to Review a Take Home Code Challenge

– Part 3

(Guest post by Andy Davis)

This is the third part of our series of posts on how to carry out a peer review of a code challenge. You can read part 1 here and part 2 here.

So what else is left to review?

We’ve talked about naming, comments and code that works.

Why haven’t we touched on design patterns, immutability, value types, object models, or a proper understanding of SOLID principles?

Well, those are all great, but submissions that show them fall well inside the category of “good, clean code that works”, and those submissions are going to be easy to spot well before we get to that level of detail.

But surely there is more bad stuff? Yes, of course there is. Again, though, it is going to be pretty apparent before you need to work through a code review checklist at that level of detail.

For the record, here are some other smells to look out for:

  • Long procedural code, without any object model (OO)
  • Primitive abuse (string and int for everything)
  • Commented out code (why commented out?)
  • The God object (general_helper)
  • Tests that don’t work because of the environment - e.g. tests coded against a server that need a specific setup the test itself doesn’t perform
  • Switch statements (implying a potential OCP violation)
  • The new keyword in the middle of code (implying a DIP violation) - both of these are sketched below
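
To make the last two smells concrete, here is a minimal, hypothetical Java sketch (all class and method names are invented for illustration): a switch over a type string that has to change every time a new case appears, and a concrete dependency new-ed up in the middle of a method.

    // Hypothetical illustration of the last two smells above. All names are invented.
    class FileQuoteRepository {
        void save(double price) { /* pretend this writes the quote to disk */ }
    }

    public class PriceCalculator {

        public double priceFor(String productType, double basePrice) {
            // Smell: every new product type forces a change to this switch,
            // suggesting the class is not closed for modification (OCP).
            switch (productType) {
                case "BOOK": return basePrice * 0.9;
                case "DVD":  return basePrice * 0.8;
                default:     return basePrice;
            }
        }

        public void saveQuote(double price) {
            // Smell: new-ing a concrete dependency in the middle of the method
            // ties the class to a low-level detail and makes it hard to test (DIP).
            FileQuoteRepository repository = new FileQuoteRepository();
            repository.save(price);
        }
    }

Neither is automatically wrong, but both are cues to look a little closer.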

 

Enough about the code?

Let’s turn our attention to ourselves and think about the mindset we should have as code reviewers.

Some candidates will think of things that others don’t. They will even think of things that we didn’t think of at all. The more we mark the same challenge, the more we learn about it and the more material we have in our pockets to mark people down on. This is grossly unfair, and more importantly it is likely doing precisely what we don’t want: screening out good developers.

So what is the most important mindset when reviewing a challenge submission?

Having empathy in the review process

We, human beings, have been chosen to mark the submission because we can see things that a machine cannot. We can give the candidate the benefit of the doubt and put ourselves in their shoes. We can overlook things that need overlooking, and we have the potential to see what the developer was trying to do, rather than just what they actually did.

A machine marker would need some serious AI to figure out the things that we can figure out in just one minute of looking over the code. We mustn’t start acting like machines, deciding only whether something is right or wrong. It goes far deeper than right or wrong.

Becoming the test subject

The simplest way to do this is to do the challenge ourselves, under the same conditions as the candidates. We do this not to find the ideal solution, but to make ourselves realise that we will not come up with perfect code. We probably won’t even come up with code that we ourselves are happy with, let alone a reviewer.

We will compromise, we might not finish it, and we might even make a complete mess of it - it happens. Days later we will realise a better approach than the one we took, and we will have the urge to change our solution. WE MUST NOT CHANGE IT.

Changing it slowly leads to treating candidates unfairly and screening out potentially decent ones. Our brains like to rewrite our memories: over time we will start to believe we produced a better solution than we did, and we will unfairly hold that against the candidates we are marking.

Luckily there is an easy way out of this trap.

Having a baseline solution

Tip: Save a copy of our first attempt at the solution, including all its shortcomings, and agree with ourselves that we will not change that code. When we receive a challenge to review, we should take 30 seconds to scan over our own solution.

This will snap us back into the open and fair mindset and let us get down from our ivory towers. We owe this to the candidate and the hirer.

We are biased - what should we do?

We have strong opinions, and we believe our opinions are right. The truth is we might be, but these views won’t always help us when we are reviewing code challenges.

Opinions on big topics like SOLID and design patterns are ok. We are often paid for our opinions and experience on these things, so this isn’t what I’m talking about.

What I’m talking about is suggesting that one superficial preference is better than another.

Examples

  • Opening braces { on the same line or a new line (see the sketch after this list)
  • Indenting 4 spaces instead of 2
  • Spaces before or after various code constructs
  • Naming conventions (like under_scores or camelCase)
  • Single line if/else statements
  • Using an alternative line of code that does the same job
  • Using an alternative framework that does the same job
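
For instance, here is a purely hypothetical pair of functionally identical Java methods; they differ only in brace placement, indentation and naming convention, and neither deserves a review comment over the other.

    public class Totals {

        // Style A: opening brace on the same line, 4-space indent, camelCase.
        public int totalPrice(int unitPrice, int quantity) {
            return unitPrice * quantity;
        }

        // Style B: opening brace on a new line, 2-space indent, snake_case.
        public int total_price(int unit_price, int quantity)
        {
          return unit_price * quantity;
        }
    }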

It is hard to do, but we have to resist the urge to write a review comment suggesting one way is better than the other when all it really comes down to is preference.

We do not need to comment on personal (or even team) preferences, and we can find out later whether a developer is open to doing things the way the team does them.

Want to test your skills and take a peer reviewed Geektastic code challenge?

Register now

 

Let’s recap and wrap up

Good code probably means they’re a good developer. Clean, well-tested, production-like code covering many or all of the requirements means they are good. Short of plagiarism, they are unlikely to be a bad developer who is somehow a coding challenge specialist - and even if that is the case, they will be found out at the next stage.

Dreadful code that doesn’t work and doesn’t make any sense probably means they are bad. Be constructive, though, as they may be a junior with a passion for learning better ways, even if the code tells you they are bad right now.

Average (and sometimes less than average) code doesn’t necessarily mean they are a terrible developer. There are things to be found even in bad code that mean it is not that bad: tests expressing intention even if the code doesn’t quite work, good variable and function naming, the decision to use an object model, thoughts about DRY, KISS and the boy scout rule - there are lots of potential things to pick out.
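
As a hypothetical example (names invented), here is a JUnit test paired with a broken implementation: the production code never applies the discount, so the test fails, but the test still expresses clearly what behaviour the candidate was aiming for, and that intent is worth crediting.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // The implementation is buggy, but the test below still documents the
    // intended behaviour: a 10% discount on an order of 120.
    class DiscountService {
        double applyDiscount(double orderTotal) {
            return orderTotal; // bug: the discount is never applied
        }
    }

    class DiscountServiceTest {
        @Test
        void largeOrdersGetTenPercentOff() {
            DiscountService service = new DiscountService();
            assertEquals(108.0, service.applyDiscount(120.0), 0.001);
        }
    }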

Tip: Don’t give a blanket ‘no’ just because the code wasn’t perfect or was a little tricky to get your head round.

Make our best effort to look for the positive

Hiring is expensive and finding a diamond in the rough is extremely valuable. Even great developers won’t write amazing code under the pressure of a hiring process. Realise that and start looking for good stuff.

Tip: When critiquing code, it is all too easy to get into the negative mode of only fault finding. Don’t let the positives slip by unnoticed. Comment on the positives. List out the positives. When positives are put alongside the negatives, the code submission may not look as bad as we first thought.

Be a human

Let’s remind ourselves that we are human. We are a human being reviewing other human beings’ code. Have a human-to-human experience. It is a chance to exercise the benefit of the doubt and to find those rough diamonds, especially when the code is less than ideal.

I’ve made plenty of code submissions where my code was far less than my best, yet I still got hired and still did an excellent job for the client. I’ve also been screened out and in some cases completely ignored when I wanted feedback - I’m sure you have too.

Equally, I’ve seen and reviewed hundreds of pieces of code. I’ve seen some absolute shockers, pure elegance, and everything in between. I’ve been kind, I’ve been rude, and I’ve even got a review completely wrong to the point of embarrassment - but it happens - after all, we’re human.

Let’s review coding challenges the right way, with an open mind.

 

Want to license a peer review code challenge to REALLY analyse the skills of your candidates?

Register now

About Andy Davis

Andy is a software developer. He’s been writing software for over 20 years and helps businesses solve interesting problems by helping them create the right software. He also helps teams with their developer hiring strategy, as he believes that hiring the right people has a multiplier effect on awesome software teams.