2011, Article / Letter to editor (M. Lunenberg and J. Dengerink (Eds.), Kennisbasis lerarenopleiders, vol. 2011, iss. 32, (2011)) One of the most complex challenges that teachers and teacher educators face in their teaching practice is the question of when pupils or students have actually learned. To determine when someone has learned, knowledge of assessment processes and related learning theories is essential. This knowledge can best be described by means of the following five questions: 1) Why is assessment carried out? 2) What is assessed? 3) How is assessment carried out? 4) Who assesses? and 5) When does assessment take place? Before these five questions are answered, the concept of assessment is briefly defined and placed in a historical context.
2010, Article / Letter to editor (Educational Technology Research and Development, vol. 58, iss. 3, (2010), pp. 311-324) This article describes a blueprint for an online learning environment that is based on prominent instructional design and assessment theories for supporting learning in complex domains. The core of this environment consists of formative assessment tasks (i.e., assessment for learning) that center on professional situations. For each professional situation, three levels of situational complexity are defined, and within each of these three levels, tasks are offered that differ in the degree of support offered to the learner. This environment can support (beginning) professionals in complex domains in gaining insight into the available repertoire of behavior in professional situations, as well as into the quality and effectiveness of that behavior (assessment criteria), while simultaneously helping them to develop insight into the standards that their own behavior should (eventually) match.
2009, Article / Letter to editor (Assessment and Evaluation in Higher Education, vol. 35, iss. 1, (2009), pp. 55-70) In an effort to gain a better understanding of the assessment of prior informal and non-formal learning, this article explores assessors' approaches to portfolio assessment. Through this portfolio assessment, candidates had requested exemptions from specific courses within an educational programme, or admission to the programme, based on their prior learning. The assessors judged the portfolios according to set rating criteria and subsequently discussed their approaches. Their decision-making processes, perception of portfolio use in the Assessment of Prior Learning (APL), deciding factors in portfolio assessment and use of the rating criteria were key elements in this discussion. The results show that they do use the rating criteria as an indicator in decision-making, but have mixed perceptions regarding the fairness of APL portfolio assessment. They perceive the portfolio evidence, in combination with sound argumentation, as the deciding elements in portfolio assessment.
2009, Article / Letter to editor (Studies in Continuing Education, vol. 31, iss. 1, (2009), pp. 61-76) Formal diplomas and certificates have long been accepted as proof on the basis of which students may receive exemption for parts of their educational programme. Nowadays, though, it is socially desirable that informal and non-formal learning experiences are also recognised. Assessment of prior learning (APL) addresses this issue. In APL, the candidate's knowledge, skills or competences acquired through informal and non-formal learning are measured against a standard to determine whether they match the learning objectives. Although APL is frequently used in workplaces and vocational education, it is practised less in universities, and research is lacking in this context. This study aims to evaluate the first APL procedure in an academic computer science programme, and an adjusted APL procedure in an educational science master's programme. This is done from the perspective of the APL candidates, tutors and assessors, using the theoretical framework by Baartman et al. (2006). The computer science participants comprised 23 candidates from a police software company, four tutors and four assessors. From educational science, nine candidates, two tutors and two assessors participated. The results show that the APL procedure in educational science is viewed significantly more positively than that in computer science; further, the computer science assessors differ considerably from the other participants in their perceptions relating to the quality criterion 'cognitive complexity'. Explanations for the difference between the two programmes are discussed in this article, and assessor and tutor training is highly recommended.