The overall activity of CEON/CEES is reform-oriented. We believe that the transformation of the research sector of developing, transition, emerging and small (DTES) countries is an inevitable process. Changing the presently dominant system of evaluation is viewed as the first step and driving force of this process. The evaluation we plead for has to be (1) open, (2) comprehensive and permanent, (3) non-arbitrary, (4) non-invasive, (5) output-oriented, (6) quality-oriented, (7) field-adjusted, (8) fair, (9) authoritative, and (10) cost-effective.

  1. The evaluation of performance in DTES countries should be open and transparent:
    Data on the research performance of all ST&I subjects supported by public funds have to be publicly available. Only then can they help create an environment in which results and competence, rather than social connections, decide all forms of reward and promotion. Only under this condition will members of the academic community accept the concentration of investments in a smaller number of projects, institutions, and individuals, which is a precondition for the ST&I progress of DTES countries.

  2. The evaluation should be comprehensive and permanent:
    It has to be organized as continuous monitoring of all measurable outputs of research work. Temporary and episodic evaluation harms regular scientific activity, threatens motivation, and opens the door to various instrumental behaviours that are unacceptable from an academic viewpoint and dangerous from a social one.

  3. The evaluation should be data-driven and non-arbitrary:
    Practical evaluation should be based on quantitative data, even when it necessarily takes the form of expert judgment. Unbiased evaluation, especially in “small science” environments, is not possible if based on negotiation. Arbitrary decision-making has a disruptive effect on motivation, particularly that of devoted, productive, and young researchers.

  4. The evaluation should be non-invasive:
    The evaluation procedures must not violate the dignity of researchers. Requiring researchers to re-submit the same bibliographies, to provide proof of their published results, to evaluate those results themselves, or to search for their own citations can, in the information age, only be qualified as discrediting for evaluators and humiliating for researchers. Self-evaluation of individuals can exist in science only as a private act. In DTES countries, where research is financed mainly from public funds, the duty of scientists is to produce results and make them public. The value of these results can be estimated only by others. The only bodies obliged to do so are the governmental bodies that dispose of the science budget and are responsible for investments in science. It is for them to ensure the resources and instruments for evaluating research results. Resources of this kind have to be public, since the responsibility for the investments is public too.

  5. The evaluation should be oriented towards raising national output:
    The primary role of evaluation is to stimulate research performance. Research performance is evidenced by the contribution to industry and society, as the ultimate criterion, and by the contribution to science itself, as an intermediary criterion. Since the former is difficult to measure or poorly evidenced, and assuming that measurement is a must in evaluation, the contribution to science has to be used as the basic operational criterion of research performance. This means it has to be understood as the share of institutions and individuals in the research output of DTES countries, compared with the output of developed countries. In practice, this means that research performance measurement should primarily be based on the citation indexes used for international comparisons. Such an understanding is additionally justified because the research output of DTES countries is relatively low compared with their human potential. Improving national ranking in ST&I should be a first-order obligation of the academic communities and governments of DTES countries.

  6. The evaluation should be quality-oriented:
    Research performance is measured by productivity and impact, i.e. citation rate. In general, and particularly in DTES academies, productivity is a reflection of work agility and responsibility, while impact is a measure of contribution and quality. In countries with lower research capacities, contribution and quality are the weaker side of performance. The inflation of low-quality research products evident in DTES countries must be attributed to the present reward system. A system of evaluation is a potentially efficient mechanism for correcting this weakness. Therefore, the stress in measuring research performance in such countries should be placed on citation rate rather than on productivity.

  7. The evaluation should be field-adjusted:
    A system of evaluation has to respect the differences among research fields and disciplines. Evaluation is an art of comparison, and comparison is allowed only within disciplines of approximately equal expected productivity and impact. Expected productivity varies among disciplines in the type of results, while impact varies in geographical range, i.e. internationality. Therefore, expected productivity, defined by so-called minimal conditions, has to differ across research fields, while expected impact has to be based on citations received both in the country and abroad. The criteria for standards in evaluation must differ across research fields, while the selection of performance indicators can and should be the same.
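    The principle of within-field comparison can be illustrated with a simple field-normalized impact indicator, in the spirit of the mean normalized citation score used in bibliometrics. The data and function below are an illustrative sketch, not part of any CEON/CEES methodology: each paper's citations are divided by the average citation rate of its own field, so that researchers are compared only against their disciplinary baseline.

    ```python
    from collections import defaultdict

    def field_normalized_impact(papers):
        """Average, per author, the ratio of each paper's citations to the
        mean citations per paper in that paper's field. A score of 1.0
        means "exactly at the field average" (illustrative sketch)."""
        # Field baselines: mean citations per paper in each field.
        totals = defaultdict(lambda: [0, 0])  # field -> [citations, papers]
        for p in papers:
            totals[p["field"]][0] += p["citations"]
            totals[p["field"]][1] += 1
        field_mean = {f: c / n for f, (c, n) in totals.items()}

        # Author scores: mean of per-paper ratios to the field baseline.
        ratios = defaultdict(list)
        for p in papers:
            baseline = field_mean[p["field"]]
            ratios[p["author"]].append(p["citations"] / baseline if baseline else 0.0)
        return {a: sum(r) / len(r) for a, r in ratios.items()}

    # Hypothetical data: a mathematician and a biomedical researcher.
    papers = [
        {"author": "A", "field": "mathematics", "citations": 4},
        {"author": "A", "field": "mathematics", "citations": 2},
        {"author": "B", "field": "biomedicine", "citations": 30},
        {"author": "B", "field": "biomedicine", "citations": 10},
    ]
    scores = field_normalized_impact(papers)  # both A and B score 1.0
    ```

    Although researcher B has many more raw citations, both end up with the same normalized score, since each sits exactly at their own field's average; a raw citation count would have ranked the biomedical researcher far above the mathematician on field conventions alone.
    
    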

  8. The evaluation should be fair:
    Evaluation must preserve traditional academic as well as public ethical values. All decision-makers in evaluation, even when working within collective bodies, must be exposed to public ethical judgment. Evaluation indicators have to encourage the sharing of knowledge and discourage violations of intellectual property, as well as other forms of taking advantage of other people’s contributions. Individual or institutional strategies aimed at gaining points in the evaluation system rather than at producing useful findings should not be allowed to pay off.

  9. The evaluation should be authoritative:
    Decisions pertaining to performance evaluation, especially decisions by authorized governmental bodies, must have the authority of legitimacy. Once made, they must not be disregarded or changed except through regular procedures. Final decisions should be accepted by all, with all their consequences, regardless of the fact that they are necessarily imperfect.

  10. The evaluation should be rational and cost-effective:
    Evaluation must not be a burden on the science budget. The information resources used for evaluation should primarily serve as tools for the dissemination of research findings, and only secondarily for evaluation. Acquiring data solely for evaluation purposes is not cost-effective. Investments in evaluation should be restricted to the development of evaluation indicators and of methods for extracting evaluation-relevant data from information that is routinely collected for other purposes.