
My thoughts on Alignment

Metacognition

I’ve always thought that exams should be designed to be hard. I have come to accept the reality that exams are an inevitable part of schooling, a tool to gauge how much learning a student has acquired over a period of learning. As such, if students were given easy exams and got high scores, that would not mean they have learned what they are supposed to learn. I’ve also thought that giving trick questions is a good way of checking that learning is not superficial: if these are answered correctly, it means that students understand the basic concepts and their many implications.

What I have learned

In this module, I learned that though it is not a crime to give hard exams, it is a must that students are well prepared for this type of exam. Instruction should be aligned with the kind of assessment intended to be given during or at the end of the course. If the assessment requires higher-order thinking, then instructional strategies should also be geared towards this kind of thinking. Otherwise, students who are accustomed to recall-and-recognition thinking would have a hard time accomplishing their assessment tasks. Likewise, assessment should be aligned with the learning objectives. If the learning objectives provided by the curriculum call for creativity or analysis, instructional strategies and assessment cannot be based on recall/recognition or even comprehension skills.

Introductory Module Metacognition

Goal Setting

I took this course to help me become an effective teacher, for I know that teachers not only conduct lessons but also give exams and grade students. What good is being able to teach your students higher-order thinking if you cannot measure this skill and assure yourself that they have been well prepared for the future? Teaching, for me, is not only a job; it’s a passion and a vocation. I want to be able to face my Creator with the knowledge that I have passed on to my students the talents that have been entrusted to me by Him.

Metacognition

Seeing the many terms that needed to be defined at the start of this term amazed me. Admittedly, I thought that assessment, evaluation and tests all meant the same thing: the big E (for exams). Just as the big C (for cancer) is dreaded by many, the big E is life-threatening for many students. Ha ha.

Similarly, I thought grading and measurement amounted to the same thing: assigning letters or numbers to gauge students’ performance at the culmination of a term or period of learning.

What I have learned
From this introductory module, I have learned that assessment covers a lot of ground. My answers to the discussion fora are as follows:

Assessment – the gathering of evidence of student performance over a period of time to measure learning [1], and then using this information to judge whether students have learned what is expected [2]. The overall goal of assessment is to improve student learning [1].

Testing – the act of giving students or candidates a test (as by questions) to determine what they know or have learned [3].

Measurement – quantifying observations or individuals in a systematic manner as a way of representing the properties or characteristics of each individual [2].

Evaluation – occurs when a mark is assigned after the completion of a task, test, quiz, lesson or learning activity [1].

Set B: Types of assessment practices

• diagnostic assessment – administered to determine each child’s instructional needs, diagnose the child’s strengths and weaknesses, provide a starting point from which to measure the child’s literacy growth, and inform ongoing instruction as the child is learning [8]

• summative assessment – the process of arriving at a grade for a student [6]; it generally takes place after a period of instruction and requires making a judgment about the learning that has occurred.

• formative assessment – the diagnostic use of assessment to provide feedback to teachers and students on what students know (and don’t know) in order to make responsive changes in teaching and learning [5]. It is designed to improve (rather than to evaluate) students’ skills or their understanding of specific course concepts [6]. Formative assessments:
-help students identify their strengths and weaknesses and target areas that need work [4]
-help faculty recognize where students are struggling and address problems immediately [4]

In summary, “formative assessment monitors student learning” during instruction, whereas “summative assessment evaluates student learning” after instruction [4].

• informal / formal assessment

Formal assessments have data which support the conclusions made from the tests (or standardized measures). The data are mathematically computed and summarized. Scores such as percentiles, stanines, or standard scores are most commonly given from this type of assessment [7]. Summative assessments, like exams or long tests, are usually formal in nature.

Informal assessments are not data driven but rather content and performance driven. Scores such as 10 correct out of 15, percent of words read correctly, and most rubric scores are given from this type of assessment [7]. Formative assessments, like classroom discussions or group work, are informal in nature.
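
To make the score types mentioned for formal assessments a little more concrete, here is a minimal Python sketch of how a raw score might be converted to a standard (z) score and then approximated as a stanine. The norm-group scores are invented for the example, and the stanine conversion used here is the common z-score approximation rather than the exact percentile-band definition:

```python
from statistics import mean, stdev

# Hypothetical raw scores for a norm group (made up for illustration)
norm_group = [52, 61, 58, 70, 66, 49, 73, 64, 55, 68]
raw_score = 66

# Standard score (z): how many standard deviations the raw score
# lies above or below the norm group's mean
z = (raw_score - mean(norm_group)) / stdev(norm_group)

# Stanine: the z-score mapped onto a 1-9 scale with mean 5 and SD 2,
# rounded and clamped to the 1-9 range (an approximation)
stanine = max(1, min(9, round(2 * z + 5)))

print(f"z = {z:.2f}, stanine = {stanine}")
```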

Validity refers to how well a test measures what it is intended to measure. A valid measure should measure the construct it claims to measure (construct validity), contain an accurate sampling of the content domain (content validity), be comparable with another valid measure (criterion validity), and be able to predict the results of a later measure (predictive validity).

Reliability is the degree to which an assessment tool produces stable and consistent results. Similar results should be obtained when the measure is given to the same group of individuals at two different times (test-retest), when two sets of tests with parallel content are given (parallel forms), and when scores on the first and second halves of a test are compared (split-half).
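
As a rough illustration of how two of these reliability estimates are computed, here is a minimal Python sketch. The student scores and the way the test is split into halves are made up for the example:

```python
from statistics import correlation  # Pearson r; available in Python 3.10+

# Hypothetical scores for the same five students on two administrations of a test
first_administration = [78, 85, 62, 90, 71]
second_administration = [80, 83, 65, 92, 69]

# Test-retest reliability: correlate scores from the two administrations
test_retest_r = correlation(first_administration, second_administration)

# Split-half reliability: correlate scores on two halves of a single test
# (e.g. odd vs even items), then apply the Spearman-Brown correction
# to estimate the reliability of the full-length test
odd_item_scores = [40, 44, 30, 47, 36]
even_item_scores = [38, 41, 32, 43, 35]
half_r = correlation(odd_item_scores, even_item_scores)
split_half_reliability = (2 * half_r) / (1 + half_r)

print(f"test-retest r = {test_retest_r:.2f}")
print(f"split-half (Spearman-Brown) = {split_half_reliability:.2f}")
```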

• norm-referenced / criterion-referenced assessment

A norm-referenced test (NRT) is a type of test, assessment, or evaluation which yields an estimate of the position of the tested individual in a predefined population with respect to the trait being measured. This type of test identifies whether the test taker performed better or worse than other test takers, but not whether the test taker knows more or less material than is necessary for a given purpose. Scores are usually expressed as a percentile, a grade-equivalent score, or a stanine. An IQ test is an example of an NRT.

Criterion-referenced test (CRT) – a test designed to measure how well a student has learned a specific body of knowledge and skills, as compared to an expected level of mastery, educational objective, or standard [2]. The performance of other examinees is irrelevant. A student’s score is usually expressed as a percentage.
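
To make the contrast concrete, here is a minimal Python sketch: a norm-referenced interpretation reports where a score falls relative to the other test takers, while a criterion-referenced interpretation reports the score against a fixed standard. The class scores and the 75% mastery cutoff are invented for illustration:

```python
# Hypothetical raw scores (out of 50 items) for a class of test takers
scores = [28, 35, 42, 31, 45, 38, 25, 40, 33, 47]
student_score = 40
total_items = 50

# Norm-referenced interpretation: a simple percentile rank, i.e. the
# percentage of test takers whose scores fall below this student's score
percentile_rank = 100 * sum(s < student_score for s in scores) / len(scores)

# Criterion-referenced interpretation: percentage of the content answered
# correctly, judged against a fixed cutoff (here an assumed 75% standard)
percent_correct = 100 * student_score / total_items
mastery_cutoff = 75
meets_standard = percent_correct >= mastery_cutoff

print(f"Norm-referenced: better than {percentile_rank:.0f}% of the group")
print(f"Criterion-referenced: {percent_correct:.0f}% correct; "
      f"meets the {mastery_cutoff}% standard: {meets_standard}")
```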