====== e-Assessment system functionality ======

===== General =====

The basic functionality of an assessment system includes the following(([[http://www.iicm.tugraz.at/home/cguetl/publications/2008/Guetl%202008%20-%20IJET.pdf|Gütl, Christian. Moving towards a Fully Automatic Knowledge Assessment Tool. International Journal of Emerging Technologies in Learning, 2007.]])):
  * support for the generation of different question/item types,
  * support for different assessment types and assessment delivery models,
  * support for automatic answer assessment,
  * support for assessment feedback.

===== Question/item generation =====

A large variety of assessment items is used in the e-assessment process today. A recently proposed taxonomy suggests 28 item types that can be used in e-assessment, classified by the level of constraint they impose on the examinee's options for answering. Sorted from most to least constrained, those item types are the following(([[http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1653|Scalise, Kathleen, and Bernard Gifford. Computer-Based Assessment in E-Learning: A Framework for Constructing ‘Intermediate Constraint’ Questions and Tasks for Technology Platforms. The Journal of Technology, Learning, and Assessment 4, no. 6, June 2006.]])):
  * **multiple choice items** (including: //true/false//, //alternate choice//, //standard multiple choice//, //multiple choice with media distractors//),
  * **selection/identification** (including: //multiple true/false//, //yes/no with explanation//, //multiple answer//, //complex multiple choice//),
  * **reordering/rearrangement** (including: //matching//, //categorizing//, //ranking and sequencing//, //assembling proof//),
  * **substitution/correction** (including: //interlinear//, //sore-finger//, //limited figural drawing//, //bug/fault correction//),
  * **completion** (including: //single numerical constructed//, //short answer and sentence completion//, //cloze-procedure//, //matrix completion//),
  * **construction** (including: //open-ended multiple choice//, //figural constructed response//, //concept map//, //essay//),
  * **presentation/portfolio** (including: //project//, //demonstration//, //experiment//, //performance//, //discussion//, //interview//, //diagnosis//, //teaching//).

===== Assessment types and assessment delivery models =====

A good assessment system should support various types of assessment, including(([[http://www.ict-act.org/ICT-Innovations-10/papers09/ictinnovations2009_submission_109.pdf|Armenski, Goce, and Marjan Gusev. The Architecture of an ‘Ultimate’ e-Assessment System. Association for Information and Communication Technologies ICT-ACT, 2009.]]))(([[http://www.sciencedirect.com/science/article/pii/S036013150200132X|Sclater, Niall, and Karen Howie. User requirements of the 'ultimate' online assessment engine. Computers & Education 40, no. 3: 285-306, April 2003.]])):
  * continuous or end-of-course credit-bearing assessments;
  * authenticated or anonymous self-assessments;
  * diagnostic assessments;
  * other or emerging types of assessment (competence assessment, performance assessment, portfolio assessment and peer assessment).

The application of computers in assessment has also enabled convenient use of //adaptive// tests supported by [[item response theory]]. Adaptivity here means that the next test item (question, task or request) or group of items to be presented depends on the correctness of the answer to the current test item or group of items.
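The following sketch illustrates only the adaptivity principle just described: the next item served depends on whether the previous answer was correct. The item pool, the 1–5 difficulty scale and the simple step-up/step-down rule are hypothetical simplifications; a production engine would instead select items from an ability estimate based on item response theory.

<code python>
# Minimal, hypothetical sketch of the adaptivity principle: the next item
# depends on the correctness of the answer to the current item.
# A real CAT engine would use an IRT ability estimate, not this stepping rule.
import random
from dataclasses import dataclass

@dataclass
class Item:
    prompt: str
    difficulty: int  # 1 (easiest) .. 5 (hardest); assumed scale

def next_item(pool, current_difficulty, last_answer_correct):
    """Step difficulty up after a correct answer, down after an incorrect one."""
    if last_answer_correct is None:               # first item: keep starting level
        target = current_difficulty
    elif last_answer_correct:
        target = min(current_difficulty + 1, 5)
    else:
        target = max(current_difficulty - 1, 1)
    candidates = [i for i in pool if i.difficulty == target] or pool
    return random.choice(candidates), target

# Example run: start at medium difficulty, then answer correctly once.
pool = [Item(f"Q{d}.{k}", d) for d in range(1, 6) for k in range(3)]
item, level = next_item(pool, 3, None)    # first item at the starting level
item, level = next_item(pool, level, True)  # correct answer -> harder item
</code>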
Some of the assessment or test delivery models, ordered by their level of adaptivity, are(([[http://professionals.collegeboard.com/profdownload/pdf/overview_of_computer__10507.pdf|Patelis, T. An Overview of Computer-Based Testing. The College Board, RN-09, Office of Research and Development, April 2000.]])):
  * **linear tests**, which are a computer version of common pen-and-paper tests and are not adaptively administered,
  * **linear-on-the-fly tests**, which are also not adaptively administered, yet every examinee gets a unique test composed from a pool of test items,
  * **testlets**, which are "//a group of items related to a single content area that is developed as a unit and contains a fixed number of predetermined paths that an examinee may follow//"(([[http://onlinelibrary.wiley.com/doi/10.1111/j.1745-3984.1987.tb00274.x/abstract|Wainer, Howard, and Gerard L. Kiely. Item Clusters and Computerized Adaptive Testing: A Case for Testlets. Journal of Educational Measurement 24, no. 3: 185-201, September 1987.]])) or, put more simply, small tests of single units, small enough to manipulate yet big enough to carry their own context and to be used to balance the contents of a test(([[http://www.jstor.org/stable/1434763|Wainer, Howard, and Charles Lewis. Toward a Psychometrics for Testlets. Journal of Educational Measurement 27, no. 1: 1-14, Spring 1990.]])),
  * **mastery models**, which usually refer to tests composed of testlets presented through an adaptive mechanism that ultimately distinguishes a mastery from a non-mastery level of knowledge or skill,(([[http://www.springerlink.com/content/p73612q302255654/|Glas, Cees A. W., and Hans J. Vos. Adaptive Mastery Testing Using a Multidimensional IRT Model. In Elements of Adaptive Testing, edited by Wim J. van der Linden and Cees A.W. Glas, 409-431. New York, NY: Springer New York, 2009.]])) and
  * **adaptive tests** (also //computer-adaptive tests//, CATs), in which not just the next testlet but the very next item to be presented depends on the examinee's previous responses.
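As a contrast to the adaptive loop sketched earlier, the two least adaptive delivery models from this list amount to assembling the whole item sequence before the test starts. The sketch below compares them under assumed, hypothetical names: a linear test reuses one fixed form for everyone, while a linear-on-the-fly test draws a unique, but still non-adaptive, form per examinee from a larger item pool.

<code python>
# Hypothetical sketch contrasting two non-adaptive delivery models:
# a linear test (same fixed form for all examinees) and a
# linear-on-the-fly test (a unique form drawn per examinee from an item pool).
import random

def linear_form(item_pool, form_length):
    """Linear test: one fixed form, administered identically to all examinees."""
    return item_pool[:form_length]

def linear_on_the_fly_form(item_pool, form_length, examinee_id):
    """Linear-on-the-fly: a unique, reproducible draw per examinee."""
    rng = random.Random(examinee_id)       # seeded so the same examinee gets the same form
    return rng.sample(item_pool, form_length)

pool = [f"item-{n}" for n in range(1, 101)]
print(linear_form(pool, 10))                    # identical for every examinee
print(linear_on_the_fly_form(pool, 10, "s1"))   # differs between examinees
</code>

Testlet-based, mastery and fully adaptive models build on the same item pool but insert a selection step between items or testlets, as in the adaptivity sketch above.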