The interviews are over. You combed through 250 resumes. You interviewed maybe a half dozen. But now you have to decide. Who do you hire? Do you hire anyone? How do you compare the performance of the candidates across all the dimensions you are considering and identify the best hire? How do you combine the feedback of the many people who took part in the process? Conversations with disparate stakeholders about candidates can easily descend into arguments about personal peccadillos. Over the years, I’ve developed a simple and flexible framework that lets me identify the distinctions between the candidates that make a difference and focuses discussion where it needs to be to build consensus.
The Decision Framework
Among the tools that help you decide, the decision matrix is probably the most common. And for good reason. A decision matrix is powerful when you have several good alternatives to choose from and a variety of factors to take into account. It is also easy to use. List your options as rows in a table and the factors you will consider as columns. You then score each option/factor pair and add these scores up to give an overall score for each option. Slightly more sophisticated models let you weight the factors by their relative importance.
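The mechanics are simple enough to sketch in a few lines. Here is a minimal weighted decision matrix in Python; the option names, factors, scores, and weights are all made up for illustration, not drawn from any real hiring decision.

```python
# Minimal weighted decision matrix: options are rows, factors are columns.
# All names, scores, and weights are illustrative placeholders.
factors = {"Knowledge": 3, "Fit": 2, "Cost": 1}  # factor -> weight

options = {
    "Option A": {"Knowledge": 4, "Fit": 3, "Cost": 5},
    "Option B": {"Knowledge": 5, "Fit": 4, "Cost": 2},
}

def weighted_score(scores):
    # Multiply each factor score by that factor's weight, then sum.
    return sum(scores[f] * w for f, w in factors.items())

for name, scores in options.items():
    print(name, weighted_score(scores))
```

The option with the highest weighted total wins; as the article notes, the weighting step is optional, and the unweighted version just sets every weight to 1.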
I start with a simple decision framework. The candidates are the options. To make the process as simple as possible, I limit myself to five factors:
- Knowledge/Skills — Does the person have the knowledge/skill to do the job?
- Learning Mindset — Can this person learn what they need to learn to do the job? Every organization and every job is different. And I think that an openness to, nay a passion for, learning is a must-have in a knowledge worker.
- Fit — Does the person fit in well with our culture? This is tricky. You want someone who will fit in, not just blend in. A little change or challenge is good. But, for example, we have a quick-paced, chaotic environment. That isn’t for everybody. We’ve rejected technically skilled candidates from larger, slower environments who we didn’t feel could thrive here. Your environment may pose the opposite problem.
- Drive — Will this person drive results? Again, this is about culture. It is an important part of our culture. I sometimes refer to this as a “foot-on-the-gas” approach, and that’s what I mentioned in the article on job descriptions. Regardless of what you call it, it is about finding people who will take the initiative and get sh#t done.
- Presence — I also use a variant of “could I put this person in front of our leadership team and/or our customers?” Knowledge workers need to present information and solutions to leaders in your organization. They also need to present to your clients or customers. This is about presence, emotional maturity, and communication skills.
I strongly recommend that you use at least the first three factors when hiring knowledge workers. Drive or presence may not be as important to you. There may be other factors that in your case are more important. Try to limit yourself to between five and nine factors. You want to capture the critical attributes of your candidates but not overwhelm with too many factors.
Once you’ve identified the decision factors you will use, go back to your interview rubric and identify which questions will help you assess each of the factors. You should have a good mix of questions to get you to Knowledge/Skills and more than a few for Fit and Presence. If Learning Mindset and Drive are important to you, I expect to see a few questions that could help you assess a candidate in those dimensions. Tracing these decision factors back to your interview rubric will help you on the next step — rating the candidates.
Rating the Candidates
Unlike a traditional decision matrix, I don’t score the candidates. I rate them with one of three values. You’ll notice that this is like what I do when reviewing resumes.
- Does Not Meet — the candidate does not show acceptable competence in this area;
- Meets — the candidate shows acceptable competence in this area;
- Exceeds — the candidate excels in this area.
Yes, this approach lacks the nuance of more sophisticated scoring models. Those models may capture subtle distinctions between candidates, but those distinctions don’t make a difference. Say you use a scale of 1-10 to score your candidates. You can easily articulate the difference between a 1 and a 10. You can probably also articulate the difference between a 5 and a 10. But what’s the difference between a 9 and a 10? Or between a 4 and a 6? As we’ll see in the section on the decision process, you can spend a lot of time arguing about whether Jessica scores a 7 or a 9 on Fit. Using three values keeps things simple and less ambiguous. Each candidate gets a minus, a plus, or a double-plus.
You will need to define acceptable competence in context. Again, your interview rubric will help you. In your rubric you’ve already defined baseline, good, and excellent responses. Now you can decide how to use those responses to define “acceptable competence.” I usually start with a 60% threshold — candidates must provide a “good” response to at least 60% of the questions associated with a factor to be rated acceptable. Depending on the role, I may identify one or two questions that are must-haves. A candidate who doesn’t provide at least a good response to those questions does not show acceptable competence. Finally, you need to define what it takes to earn an Exceeds rating. I usually look for a higher percentage of good responses and at least a few excellent ones. These are guidelines. I design them to be at least a little fuzzy.
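The rules above translate directly into a small rating function. This is only a sketch: the 60% Meets threshold and the must-have rule come from the text, while the exact Exceeds numbers (80% good plus at least one excellent) are my assumption, since the author deliberately keeps those fuzzy.

```python
# Sketch of the rating rules described above. The 0.6 Meets threshold and the
# must-have rule follow the text; the Exceeds numbers (0.8 share, >=1 excellent)
# are assumed for illustration, since the author leaves them fuzzy.

def rate_factor(responses, must_haves=()):
    """responses maps question -> "baseline" | "good" | "excellent"."""
    # A missed must-have question caps the rating at Does Not Meet.
    for q in must_haves:
        if responses.get(q) not in ("good", "excellent"):
            return "Does Not Meet"
    good_or_better = sum(1 for r in responses.values() if r in ("good", "excellent"))
    excellent = sum(1 for r in responses.values() if r == "excellent")
    share = good_or_better / len(responses)
    if share >= 0.8 and excellent >= 1:
        return "Exceeds"
    if share >= 0.6:
        return "Meets"
    return "Does Not Meet"
```

For example, two good responses out of three clears the 60% bar and rates Meets, while a baseline answer on a must-have question rates Does Not Meet regardless of the other answers.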
To make this work, you will track your assessment of the candidate’s responses during the interview. A simple tick mark next to a question is a good start. Was it a baseline response? A good one? An excellent one? If it was especially good or bad, you can jot down some notes why. That will help you in rating the candidates, and for the discussion that will follow.
The Decision Process
The simplified decision matrix is powerful when you use a panel of stakeholders to interview candidates. Once the interviews are finished, I bring the panel together in a conference room.
The first thing I do is set up the whiteboard. I list the decision factors down one side and the names of the interview panel across the top. I remind everyone of the various factors and what they mean and explain the rating criteria.
Next we vote. Everybody gets a stack of Post-it notes. I ask each panelist to rate the candidate on every decision factor on a Post-it note. We do this individually; everybody rates the candidate on their own.
Going around the room, I’ll collect the ratings and add them to the whiteboard. A consensus usually appears pretty quickly. In this example, you can see that we all agreed that this candidate has the knowledge and skills to do the job. But there are some differences.
Next we have our discussion. I’m the outlier on Learning Mindset. So I have to explain why I didn’t think the candidate showed an acceptable competence in learning. This should be in my notes. I explain my reasoning and the other panelists offer theirs. We keep it short, just a few minutes. We’re not trying to change each other’s minds. We’re trying to give the hiring manager the benefit of our experiences.
We’ll do the same thing with fit. Here Michelle and I are both outliers, but I was positive, and she was negative. So we want to hear her explanation of why she felt the candidate wasn’t a good fit. We repeat for the remaining factors. The discussion is short and focused on the specific decision factor and why we varied in our response.
These ratings aren’t real, but they do reflect the patterns I see when I use this approach. Most people agree on most factors. Sometimes there are outliers. When there are, we use them to have a focused discussion about the reasons for the rating. We aren’t trying to change people’s minds, although that sometimes happens. We are trying to ensure that the hiring manager, the ultimate decision maker, has the best information to decide about a candidate.
A single Does Not Meet rating doesn’t disqualify a candidate. That’s one interviewer’s assessment of one factor. A row of Does Not Meet ratings, though, is a good sign that you don’t have a good candidate. If you get a lot of those rows, reassess your interview rubric, your rating system, or your candidate pool.
A column of Does Not Meet may show a harsh interviewer or bad chemistry. Similarly, a column of Exceeds may show a lenient interviewer or some great chemistry. If either continues across several candidates, revisit expectations with the interviewer.
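The whiteboard exercise boils down to finding, for each factor, who disagrees with the panel's majority. Here is a small sketch of that outlier check; the interviewer names and ratings are invented for illustration, with -1, 0, and +1 standing in for Does Not Meet, Meets, and Exceeds.

```python
# Hypothetical panel ratings for one candidate, chosen to show the outlier
# check described above. -1 = Does Not Meet, 0 = Meets, +1 = Exceeds.
FACTORS = ["Knowledge", "Learning", "Fit", "Drive", "Presence"]
PANEL = ["Alex", "Blake", "Casey"]

ratings = {  # interviewer -> factor -> rating (all values are made up)
    "Alex":  {"Knowledge": 0, "Learning": -1, "Fit": 1, "Drive": 0, "Presence": 0},
    "Blake": {"Knowledge": 1, "Learning": 0, "Fit": 0, "Drive": 0, "Presence": 0},
    "Casey": {"Knowledge": 0, "Learning": 0, "Fit": 0, "Drive": 1, "Presence": 0},
}

def outliers(factor):
    # Flag interviewers whose rating differs from the panel's most common one.
    votes = [ratings[p][factor] for p in PANEL]
    majority = max(set(votes), key=votes.count)
    return [p for p in PANEL if ratings[p][factor] != majority]

for f in FACTORS:
    if outliers(f):
        print(f, "-> discuss with:", ", ".join(outliers(f)))
```

Each flagged factor becomes one of the short, focused discussions described above; factors with no outliers need no discussion at all.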
The goal of the interview process is to identify a person to hire to fill a specific job in your organization. It’s decision time. Armed with the information from the decision matrix, which comes from the interviews conducted by your interview panel, you can make one of two choices:
- Hire this person.
- Don’t hire this person.
Would you hire this person?
The tools I’ve described (the interview rubric, the audition, the decision matrix) are all designed to help you answer that question. You have the individual ratings and an understanding of why people deviated from the consensus.
Looking across the factors, I see someone who has the required knowledge and skills and the learning mindset I’m looking for. This person is also a good fit for the organization and shows the presence we need. I struggle with Drive. Two of the interviewers didn’t see it; one really did. Hopefully, the discussion gave some insight into what people saw during their sessions. When there’s a split like that, that insight might sway you one way or the other.
As the hiring manager, it is your decision. I would pass; you might hire. The framework is just a tool to help you make better hiring decisions. Try it the next time you have to hire a knowledge worker in your organization. And let me know how it worked.