A few keys to understand the re-engineering of the peer evaluation process

May 21st, 2019 | Multilateral Agreement

In April 2016, the EA Multilateral Agreement Council (MAC) initiated a discussion on the re-engineering of the peer evaluation system. Paulo Tavares, Chair of the EA MAC, gives his feedback on the progress of this significant project, which will pave the way for the future of the EA peer evaluation system.

What are the driving forces behind the project for a renovated/innovative/new peer evaluation system? (EC/EU expectations, costs/efficiency, adapting to market changes, being responsive, developing the contents and benefits of the system, etc.)
EA has been performing peer evaluations for more than 20 years, and there was a need to reinvent ourselves in view of the challenges in front of us: the new standard ISO/IEC 17011:2017, the changes in the international arena at IAF and ILAC level, and especially EA's fundamental role, set out in Regulation (EC) No 765/2008, to confirm by means of peer evaluation the competence of the national accreditation bodies, considering the increased importance of accreditation in recent years. We should also not forget that the traditional approach to peer evaluations is introducing more and more challenges, in particular the need for extremely large evaluation teams to cover the increasing number of scopes of the MLA recognition.

 

The MAC certainly had many different reasons to start this project. Could you tell us how you saw their motivation and how it translated into the development of this project? How many members were actively involved, and how many discussions and comments did you receive?

In every large group, you can find different levels of motivation (or perceived motivation) for various reasons, including cultural background, maturity in accreditation, focus on one's own AB's internal circumstances and problems, etc. However, it was somewhat surprising to witness the sense of urgency, even outside the MAC Management Group members (a group which frequently includes the most vocal persons in the MAC). I think that this comes from the understanding that some of the improvements could be implemented almost immediately.

MAC members and evaluators were the ones to identify the key issues we needed to focus on for the re-engineering of the peer evaluation process, since they are well connected to the market's and authorities' expectations and are active participants in the peer evaluation process.

Thus, a survey with around 33 questions on the peer evaluation process was sent to all MAC members, Team Leaders and team members. The response rate was about 70% among MAC members and 80% among Team Leaders (TL).

The results of this survey showed that the peer evaluation process has several key strengths (it involves all NABs and promotes a real peer approach, also allowing the sharing of experience between team members), but also weaknesses (the need for further harmonization of peer evaluators, the difficulty of managing a big team over one week, and the need to improve reporting).

Based on the results of the survey and on preparatory discussions at MAC level, a half-day workshop was organized during the 2018 MAC meeting, where all MAC members were divided into 9 working groups to discuss the following issues, based on the MAC Management Group's recommendations: coverage of the MLA scope, man-days, the peer evaluation cycle, changes in the decision-making process, harmonization of team members, the level of support to be provided by the Secretariat, and reporting. All MAC members participated actively in the working groups, proposing what they saw as the best solutions for the re-engineering.

Based on the outcomes and proposals of the working groups, the MAC Management Group elaborated a consolidated paper for the MAC, which was discussed during the last EA MAC meeting in Reykjavik (Iceland) in May 2019.

 

What are the main results in terms of organization, new tools, changes in behavior and cultural changes – regarding sampling, scoping, team composition, reporting, etc.?

Some conclusions are very clear, e.g.:
– The MAC needs to go digital in order to increase its efficiency in simple decision-making cases, while not losing the benefit of face-to-face discussions;

– The MAC needs to concentrate mainly on its responsibilities as the decision-making body on MLA-related issues, leaving the administration of the peer evaluation process to the professional umbrella of the Secretariat;

– The on-site peer evaluations need increased flexibility without creating any discrimination concerns. The success of accreditation and of the MLA has led to an increase in the size of the peer evaluation teams, which are difficult to manage and sometimes disproportionate to the size of the NAB under evaluation;

– There is a need for a fast-track consultation mechanism with the technical infrastructure (i.e. the technical committees) in certain technically challenging circumstances. Careful attention is needed here to avoid discussing specific accreditation bodies' cases outside the MAC framework;

– There is also a need for a better balance between technical and managerial competences within the peer evaluation teams, i.e. a good peer evaluation requires a good knowledge of the challenges of managing a NAB. Still, there is a clear message that EA shall provide the appropriate environment for the development of the current peer evaluators.

On the other side, some of the key issues seem to need reinforced consensus, even at the global level; these could be tackled in a stage 2 of the project:
– This is namely the case with the concept of a representative sample, not only at level 3 (the standards against which competence is assessed) but also in relation to the different technical disciplines and sector schemes within a specific level 3;
– It is also the case with the on-site effort to be delivered, in terms of man-days. It is clear that a risk-based approach is needed, but that is easier said than done! It will be quite difficult to establish a proportionate model that gains generalized support.

 

What was the role of the MAC MG, compared to the role of the TFG?
The Management Group (MG) is established by the Council (MAC) with the aim of achieving a more effective management and harmonization of the peer evaluations and of the operations of the MAC, including meetings. It has no decision-making authority of its own.

The regular MAC Task Force Groups (TFG) are involved in the preliminary part of the decision-making process: their task is to evaluate the peer evaluation report for completeness, clarity and good understanding, and to make a recommendation for a decision by the EA MAC.
A TFG's activity is limited in time, as its members are appointed for a specific report on a specific NAB.

The two structures will continue to exist even if some of their competencies and/or operational responsibilities should in future be transferred to the Secretariat.

 

How does this articulate with ILAC/IAF MLA/MRA management?
As previously mentioned, several changes are proposed in this project. Some of them may challenge the current IAF/ILAC rules; in those cases, EA will have to seek the opinion of IAF/ILAC and try to influence the development of those rules.

 

Have you heard of, or are there, similar initiatives at ILAC/IAF or regional level? If yes, are they developing on the same basis, in response to the same or comparable expectations?
As mentioned before, other Regional groups are facing similar circumstances. Still, I have the feeling, perhaps unfairly, that there is a general expectation that the outcome of our project will also be used as an input in their own developments.
Whatever the case, it needs to be highlighted that there is a joint IAF/ILAC task force rethinking the peer evaluation process, not only in relation to the evaluation of accreditation bodies, but also in relation to the evaluation of Regional groups. EA is, of course, actively participating in that task force.
One of the most discussed topics nowadays is the coverage of scope and we are pushing for a harmonized approach.

 

As the MAC Chair, what do you think are the coming challenges? What is the realistic timeframe for the implementation of the new system? What will be the benefit for the operations of the system at the MAC management level? What changes do you anticipate at the Secretariat level, at the members level and at the evaluators level?

A realistic timeframe… Well, I think some of the changes could be put in place almost immediately, and others in the not too distant future.
The major benefit will be a more flexible and faster process.
The MAC Secretariat will play a key role, being much more involved not only in the management of the peer evaluations, but also by undertaking part of the activities currently carried out by the MG.

 

How would you qualify the project: strategic? Necessary? Innovative? Why?
I would say all of them: strategic, because the peer evaluation process is EA's core process; necessary, because, as I mentioned at the beginning, there is a need for improvement; innovative, because the changes we will propose are original and new.

 

On a personal basis: what would you like to say?

I would like to thank all my colleagues for their contributions to this project, especially the enthusiasm and commitment of the MAC management group members and of the Secretariat.

I consider this mainly an evolution, not a revolution. In this sense, it is important to note that, while improvements are indispensable, the current peer evaluation process is a good tool that has been serving us and the national, European and global economies, in their broader sense, for more than two decades now – the overall level of satisfaction with the current process stands at 7 on a scale of 1 to 10.

To conclude, a special thanks to our peer evaluators who developed and implemented the current process. They frequently face difficult circumstances in a fast-changing environment, and they are still willing to work outside their “comfort zone”.

Improvement of the peer evaluation process should not be seen as a one-shot event, but essentially as something to reconsider continuously. That is why it is essential to think outside the box and to be actively involved in the international-level policy-making bodies.