Article 4: When laws and merit-based recruitment conflict!
Recruitment practice in Kosovo’s public sector is tightly regulated by a range of laws and sub-legal acts. Minimising opportunities for corruption or nepotism in recruitment is an important aim in how these laws are constructed. However, the way laws and regulations are sometimes drafted to achieve this can come at the cost of merit-based recruitment principles and practice. It is important to ensure that, in the search for integrity, the primary purpose of the process and the law (ensuring merit-based practice) is not undermined.
For a process to be merit-based, the criteria that are measured must relate to the requirements of the role; the techniques used must be able to measure those criteria accurately; and, based on these two principles, the person with the highest score should be appointed. For more on the Principles of Merit-based Recruitment see our previous article here.
Unfortunately, when laws and regulations are being drafted there are several ways in which they can conflict with these principles.
If the criteria do not accurately reflect what is frequent and important in performing the role applied for, then the recruitment process is measuring the wrong things.
The law/regulation defines the criteria that will be included in the assessment for the role. The organisation operates in a very technical field and the criteria reflect this. However, the role is not directly related to personally delivering those technical requirements (e.g. Director of Finance or Executive Director). This would result in someone with superior understanding of the technical nature of the organisation scoring more highly – but on criteria that are not directly relevant to the ability to perform the role. In this instance it may be more important that the criteria reflect the areas relevant to the role (e.g. financial understanding and strategic/leadership capabilities).
It is common for roles to require a specified amount of experience (for example 5 years, 8 years or even 10 years) in order to be eligible. A number of years does not specify the type, range or nature of experience required to perform the role effectively; it is an overly simple proxy-criterion. While experience is likely to be related to the ability to perform the role, the arbitrary nature of the timescale does not guarantee that the relevant experience has actually been gained. It would be clearer, better and more meaningful to specify what someone should have experience of doing.
A further negative impact of timescales is that they can be used to restrict the pool of applicants. This is understandable from a workload/burden perspective, and because it can be applied objectively (a number of years can be measured easily), but it is not merit-based unless that timescale has some meaningful, objective relationship with the ability to do the role. Perhaps more importantly, when criteria restrict who is eligible in a way that impacts more on one group than another (e.g. if women in the applicant population are less likely than men to have the 8 years’ experience), this is indirectly discriminatory unless it is objectively justifiable and related to the ability to perform the role. This further embeds historic inequality.
The use of timescales as criteria is not unique to Kosovo; it is common in recruitment internationally. It does have the benefit of being specific and measurable, which may avoid otherwise more subjective criteria. This may be something to revisit when levels of public trust and confidence in how public institutions recruit have improved.
There are a large number of different assessment techniques that can be used in recruitment. These include applications, CVs, performance appraisals, multiple-choice questions, essay questions, strategic planning papers, psychometric ability tests, personality measures, oral presentations/interviews and simulation exercises.
It is important to remember that any assessment technique is only a means to an end and not the end in itself – it is a tool to enable the measurement of job-relevant criteria. The technique’s ability to measure the criteria and do this in a way that is meaningful to the ability to perform the role is essential.
When a technique is used that does not provide an effective measure of the criteria, and/or it is not predictive of performance in the role – this is contrary to merit-based principles.
Presentations and interviews tend to be widely used because they are capable of measuring a broad range of criteria in a way that has been shown to predict future job success – if done correctly. Simulation exercises can be costly and complex to create properly and are used less frequently, despite being very effective at predicting future performance. Knowledge-based multiple-choice questions measure knowledge; these tend to be more effective for junior to mid-level roles, where specific declarative knowledge makes up a more justifiable proportion of the role. In comparison, senior roles require more judgement than knowledge, and knowledge-based multiple-choice questions are less effective in these contexts.
The role for an Internal Auditor is advertised. The assessment process includes creating a strategic business plan for the organisation and an interview.
All applicants submit their strategic business plan and the assessment commission are disappointed in the quality of these and expect poor performances from the candidates in interview. However, they are surprised that within the interview some of the candidates are very good and have lots of relevant experience.
In this scenario, strategic business planning is not an activity an Internal Auditor would be responsible for doing. At best it has some tangential relevance to the role; it is not directly relevant.
Depending on how the scoring of the assessment works (see next section) this could result in no candidates meeting the minimum scoring threshold to be considered appointable. Ironically, this would result in suitable candidates being rejected based on an inappropriately applied assessment technique.
Specifying within laws or regulations the number or percentage of points that will be awarded to certain assessment criteria or techniques is very common within Kosovo. However, it does require some deeper consideration.
Weighting criteria or assessment techniques explicitly defines some aspects as more important, relevant or valuable than others. This can only make sense in a merit-based recruitment process when those weightings are objectively determined.
To achieve this, the relative importance, relevance and value of each criterion or technique in determining who will perform better in the role needs to be defined systematically and empirically. There are several ways of doing this.
The first is to undertake a study of what predicts performance in the role – however, this requires certain technical knowledge and skills to do properly, and can be time-consuming and costly.
Alternatively, it can be achieved by bringing together a range of stakeholders with close knowledge of the role and have them discuss, debate, and agree on the relative weighting – although this needs to be carefully facilitated.
A third approach is to base weightings on the extensive existing empirical research about how different assessment methods predict job performance with careful consideration about the nature of the role this evidence is being applied to.
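As a sketch of the third approach: where relative validity estimates for each technique are available from the research literature, point weightings proportional to those estimates can be derived mechanically. The estimates below are hypothetical placeholders for illustration only, not figures from any actual study.

```python
def proportional_weights(validity, total_points=100):
    """Convert relative validity estimates into point weightings
    that are proportional to each technique's predictive value."""
    total_validity = sum(validity.values())
    return {
        technique: round(estimate / total_validity * total_points)
        for technique, estimate in validity.items()
    }

# Hypothetical validity estimates for a senior leadership role.
weights = proportional_weights({"interview": 0.55, "written exercise": 0.30})
print(weights)  # {'interview': 65, 'written exercise': 35}
```

The point of the sketch is that the weighting is an output of the evidence, not an arbitrary figure written into a regulation first.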
Scenario: A law/regulation defines the scores associated with assessment techniques or criteria; this ensures that these cannot be manipulated to place a preferred candidate higher than other candidates.
Positive Impact: This is the positive aspect of reducing opportunities for corruption and nepotism.
Negative Impact: Specifying scores within laws and regulations creates a weighting of the assessment techniques or criteria. Those techniques or criteria (e.g. application documentation, written exercises, interviews, etc) which have a greater percentage or points are being valued more highly. This only remains merit-based when those points reflect the extent to which those techniques or criteria relate to ability to perform the role applied for.
Reality: Currently, in practice, the weighting of scores is not decided in any systematic manner and therefore can hold little or no relation to the ability to perform the role. This reduces how merit-based the recruitment process is and it can result in a less effective candidate scoring more highly than a more effective candidate – this is clearly undesirable.
A recruitment process uses two assessment techniques: a written exercise and an interview. The regulation requires that the written exercise receives 65 of the available points and the interview 35 points – in effect, making the written component more important than the interview.
Candidate A scores 60 points in the written and 10 points in the interview – achieving a total of 70 points.
Candidate B scores 40 points in the written and 25 points in the interview – achieving a total of 65 points.
Clearly Candidate A is the highest-scoring candidate and, following merit-based principles, should be appointed. However…
The role the candidates have applied for is a senior leadership role. Written exercises have a limited ability to measure leadership qualities, whereas interviews can measure leadership qualities more fully and effectively. The result is that the candidate who performed better on the more relevant part of the assessment (the interview) is not appointed because of the way the assessment has been weighted.
In this scenario, what is wrong is that the law has defined the weighting of the two elements in a way that does not reflect how those techniques relate to the ability to perform the role. Therefore, the law is unintentionally making the process less merit-based.
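The arithmetic of this scenario can be sketched in a few lines. Rescaling each candidate's raw marks under an alternative weighting (interview 65, written 35) assumes, purely for illustration, that performance scales linearly with the points available; under that assumption, the ranking flips.

```python
def weighted_total(raw_scores, max_points, weights):
    """Rescale each technique's raw score to a given weighting
    and sum the result across techniques."""
    return sum(
        raw_scores[t] / max_points[t] * weights[t]
        for t in raw_scores
    )

max_points = {"written": 65, "interview": 35}
candidate_a = {"written": 60, "interview": 10}
candidate_b = {"written": 40, "interview": 25}

# Under the regulated weighting (written 65 / interview 35), A wins.
print(round(weighted_total(candidate_a, max_points, max_points), 1))  # 70.0
print(round(weighted_total(candidate_b, max_points, max_points), 1))  # 65.0

# If the weighting instead favoured the interview (written 35 / interview 65),
# B would come out ahead.
flipped = {"written": 35, "interview": 65}
print(round(weighted_total(candidate_a, max_points, flipped), 1))  # 50.9
print(round(weighted_total(candidate_b, max_points, flipped), 1))  # 68.0
```

Which ranking is "correct" depends entirely on whether the 65/35 split genuinely reflects each technique's relevance to the role, which is exactly the point of the scenario.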
A guide for those drafting laws and regulations
To help those involved in drafting laws, or considering the impact of laws on merit-based practice, the following is a checklist of questions they should consider.
Are the criteria stated clearly and can they only be interpreted in the way they are intended to apply? If not, this should be clarified and defined further.
In demonstrating the criteria, is it clear what evidence is required and how/where this will be measured in the process? If not, this should be clarified and defined further.
If the criteria include alternative ways they could be met (e.g. “or similar degree”, “or degree relevant to the field of work of the organisation”), is it reasonable to expect potential applicants to know what is and is not relevant? If not, this should be clarified and defined further.
Are the criteria objectively justifiable as a requirement to perform the role applied for (including their relevance for the level of the role applied for)?
What method or approach has been used to identify the criteria’s frequency/importance and how does it predict someone’s suitability for the role applied for?
If there is a timescale included within the criteria, is this objectively justifiable? Could the actual experience required to perform the role be a better and clearer way to state the criteria?
If there is a qualification or experience requirement – is this objectively justifiable as something that is genuinely required to perform the role?
Could the criteria apply disproportionately to one or more groups protected by equality laws? If yes, what evidence is there that the criteria are objectively justifiable requirements for the role?
Does the assessment technique provide an accurate and sufficient measure of the criteria to be relevant as a measure of job suitability?
If the assessment technique is accurate but too limited to cover sufficient criteria with accuracy, are there other assessment techniques that could provide a measure of the remaining criteria not covered by this technique?
How well does the assessment technique provide a measure that is relevant and appropriate for the level of role applied for? If unsure, consult an assessment expert who can help explain the evidence-base associated with different assessment techniques.
Have assessment techniques/criteria been weighted in any way (points or percentages)? If yes, what method of determining the relative weightings has been used to demonstrate proportionality to the ability to perform the role?