
How to Review a Human Reviewed Code Challenge – Part 2

(Guest post by Andy Davis)

In the last post (link here), we started by looking at the role the code you read plays in managing expectations in the review process.

In this post, we’ll be talking about the clues that lurk in code that help you spot a good developer, as well as the traits those clues reveal.

What are some clues that point to a good developer?

To find good developers, we need to first agree on the things that make a good developer:-

  • A good team player
  • Efficient at solving problems
  • Can work with greenfield and brownfield code
  • Uses best practices
  • Cares about the value of the software to the business
  • Understands that software is expensive and that “no code” is “good code”
  • Can pair program and crank something out alone
  • Is always learning

What are even better clues that point to a good developer?

Since we are focusing purely on code and challenges, we can distill this list to someone who writes maintainable code.

Maintainable code is:-

  • Code that works
  • Code that communicates its intention
  • Code that is easy to change

Tip: Code maintainability is the primary thing I look for when marking a coding challenge. Sometimes I won’t even run the code, because whether it works 100% matters less to me than whether I could pick up the code and make a quick change to get it working.

How to check that code works

There are several ways to know if the code works:-

  • The developer says it does
  • You test it manually
  • It has automated tests that prove it works

The cheapest proof is automated tests that live alongside the code: tests written by the developer at the time the code was written, which can run at any time to prove the system is still working. These are commonly referred to as Unit Tests. There are also Integration tests and Acceptance tests, but I won’t get into that here. Let’s just call them Tests.

Tests prove code works as it is supposed to. Without tests, we can break the code and not know until a bug is discovered in production or, if we are lucky, by a manual tester.
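To make this concrete, here is a minimal sketch of a test living alongside the code it proves. The `total` function and its prices are invented for this illustration, and it is written in Java (the snippets later in this post read more like C#, but the principle is language-agnostic):

```java
// Hypothetical example: production code plus a test that proves it works.
public class BasketTest {
    // Production code: sum item prices given in pence.
    static int total(int[] pricesInPence) {
        int sum = 0;
        for (int price : pricesInPence) {
            sum += price;
        }
        return sum;
    }

    public static void main(String[] args) {
        // The test documents the expected behaviour and can run at any time.
        int result = total(new int[] {100, 250, 50});
        if (result != 400) {
            throw new AssertionError("expected 400 but got " + result);
        }
        System.out.println("total works");
    }
}
```

If someone later breaks `total`, by accident or on purpose, this check fails immediately rather than waiting for a bug report from production.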

Tip: a coding challenge solution that is submitted without any tests, or with code paths that don’t have tests, is a HUGE ALARM BELL. As the reviewer, I cannot know the system works without running through the various scenarios myself manually or, if I am sensible, by writing tests that cover all the requirements.

Spotting traits in code

Here is a simple guideline to follow that can help you spot good and bad traits in a code review.

Good traits

  • It works and has tests that prove it works

Bad traits

  • It doesn’t work
  • It works, BUT it has no tests. Code that works but has no tests is bad because we could change a line of code (by accident or intentionally) and break it. No one would know, and the closer a bug gets to production, the more expensive it is to discover and fix.
  • It doesn’t work AND has no tests. Code without tests also hides its intentions. The intention is locked securely in the head of the original developer, which is of no use in a team environment.

Want to test your skills and take a peer reviewed Geektastic code challenge? Register now

How do we know if the code communicates a challenge’s intention?

There is a quote that I read on Jeff Atwood’s blog, but apparently originates a bit earlier than that:

“Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.”

To put it less dramatically: write code for the maintainer, not for yourself. (When we receive some code, we hope the author has done the same for us, as it makes our job much more manageable.)

The simplest ways to spot code that communicates intention are that it is:

  • Well tested
  • Expressively named (everything, including the tests)

Well tested

In the context of a coding challenge, this is where we start awarding points for workings. Even if the code doesn’t quite work, if the intention is expressed in the tests, it means the requirements are documented with the code in a live and runnable way.

Runnable documentation covering the whole system means that changing the code is simply a case of changing the documentation (the test) in the right place. This leaves a failing test and allows a targeted code change without fearing the rest of the system will break.

That is an example of Test Driven Development (TDD) - for more information, read Test-Driven Development: By Example by Kent Beck.

Coding challenges that were completed using TDD are quite easy to spot. They will have a regression suite of tests that can be run (as this is a side effect of driving the code). TDD is not the only way of having well tested (and therefore working) code. A candidate can arrive at this without writing the tests first. But if they don’t write the tests at all, then that is a different story.

Good naming

There is no better way to express intention than using well-named variables, objects, and functions.

Variables named with a single letter like a, b, c, x, or the annoying geek habit of foo/bar, are of absolutely no use in expressing intention. There is also no need these days to shorten words: we are no longer constrained by screen width, and there is nothing worse than seeing variables like fname and lname.

One very poor name I saw recently was

foreach (var itemExists in personList) {…}

You’d think that itemExists was a boolean, but it’s not: it was a string holding the person’s name. The code inside the foreach loop then used that itemExists variable, and the line of code didn’t read well - a simple rename to name tidies the whole thing up:

foreach (var name in personList) {…}

We spoke earlier about the importance of writing tests, and tests are a prime example of where good naming is significant. Consider the following two test method names:-

test_name_splitter() { … }

splitting_a_name_with_a_single_space_splits_it_into_first_and_last_name() { … }

To someone new to this type of naming, the second one looks stupidly verbose. If this is you, then take the time to think about it. The method name is our one shot at expressing intention to the next person to maintain the code. So it stands to reason that this method name should be a sentence and the sentence should express the intention as well as it can.

Since code changes, we can think about the next requirement. Maybe it is to split names with two spaces into first, middle and last name - how would we fit that into the test_name_splitter() naming style? test_name_splitter2()? An extra assert inside the same test? Or maybe a new test: splitting_a_name_with_two_spaces_splits_it_into_first_middle_and_last_name() { … }
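To show how sentence-style test names read alongside real code, here is a sketch. The `NameSplitterTest` class and its `splitName` implementation are invented for this illustration, and it is written in Java rather than the C#-style of the snippets above:

```java
import java.util.Arrays;

// Hypothetical name splitter with sentence-style test names.
public class NameSplitterTest {
    // Production code under test: split a full name on whitespace.
    static String[] splitName(String fullName) {
        return fullName.trim().split("\\s+");
    }

    // Each test name is a sentence documenting one requirement.
    static void splitting_a_name_with_a_single_space_splits_it_into_first_and_last_name() {
        check(Arrays.equals(splitName("Ada Lovelace"),
                new String[] {"Ada", "Lovelace"}));
    }

    static void splitting_a_name_with_two_spaces_splits_it_into_first_middle_and_last_name() {
        check(Arrays.equals(splitName("Ada King Lovelace"),
                new String[] {"Ada", "King", "Lovelace"}));
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError("test failed");
    }

    public static void main(String[] args) {
        splitting_a_name_with_a_single_space_splits_it_into_first_and_last_name();
        splitting_a_name_with_two_spaces_splits_it_into_first_middle_and_last_name();
        System.out.println("All tests pass");
    }
}
```

Reading the two test names top to bottom tells the next maintainer exactly what the splitter is supposed to do, without opening the implementation.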

This is where tests start to get powerful: documentation starts to emerge. Furthermore, this documentation lives with the code; if the code changes, then the documentation changes (providing the software maintainer knows what they are doing).

Tip: Some people like to run document generators over the top of test names, but to be honest that is just window dressing; a document isn’t needed at code level. This stuff is best kept in the code, solely for developers to find and change code with confidence.


Comments

Historically, comments were thought to be good. Phrases like “All code must be commented” used to get bandied around, and in some companies you could even get an official warning, or be fired, if you didn’t comment your code.

In this day and age, we are more likely to hear the opposite. A blanket rule of “no comments allowed” is arguably a better rule as it forces us to name things well and make the code itself readable rather than throw a comment block on confusing code.

The real solution is to use a comment only when the code needs one, and only when that comment serves a real purpose, e.g.

// this is not the obvious way of doing this

// but the obvious way doesn’t work for reason X


The comment example above serves a purpose. However, the following serves no purpose at all.

// initialise a new Person object instance

var person = new Person()

This comment is not wrong; it’s just litter. It gives no more information than the line it decorates. The major downfall of this type of comment, however, is that it can mislead us. Take this example from one of the coding challenges I reviewed recently:

//if not found return null

if (found) return null;

The comment tells me the exact opposite of what the code does - clearly, the developer flipped the condition at some point. I’m sure the comment was right when it was first written, but before the code was even checked in (submitted), it was out of sync. Fortunately, this one was easy to spot, but imagine a large codebase with hundreds of examples of this - what a minefield!

Tip: if you think you need to comment the code to explain what it does, consider renaming some of the variables, or wrapping functionality in a new well-named function or class - now see if that comment is still needed. Go on, delete that comment, I dare you.
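As a sketch of that tip in action, here is a before/after (the `User` class, the `register` method, and the age rule are all invented for this example, written in Java):

```java
// Hypothetical example: replacing a comment with a well-named method.
public class RenameOverComment {
    static class User {
        int age;
        User(int age) { this.age = age; }
    }

    // Before:
    //   if (user.age >= 18) { register(user); } // check the user is an adult
    // After: the method name below carries the intention, so the comment can go.
    static boolean isAdult(User user) {
        return user.age >= 18;
    }

    static void register(User user) {
        System.out.println("registered");
    }

    public static void main(String[] args) {
        User user = new User(21);
        if (isAdult(user)) {
            register(user);
        }
    }
}
```

The condition now reads as the sentence the comment was trying to be, and unlike the comment it cannot drift out of sync with the code.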

Surely there are more things to look out for?

Of course there are, and we’ll be discussing them in part three when looking at bias, becoming the test subject and how to review with empathy.

You can read part 3 here.

Want to license a peer review code challenge to REALLY analyse the skills of your candidates? Register now