EIU Faculty Development and Innovation Center

Developing Assessments

 


Introduction

Assessment is a crucial component of education, yet it's rarely the aspect that draws educators to the profession. When asked why they chose teaching, few faculty members respond, "because I like to develop and evaluate assessments." However, effective assessment is vital for understanding and improving learning. 

The topic of assessment can be challenging due to the varied terminology used to describe its components. For instance, the term "assessment types" can be confusing—does it refer to the assessment's goal, purpose, or method? Another point of confusion is that the terms assessment, testing, and evaluation are frequently used interchangeably. However, as Jay McTighe and Steve Ferrara explain in their book Assessing Student Learning by Design, these terms have distinct meanings. Assessment is a broad term referring to the process of gathering and synthesizing information to better understand and describe learning. Testing is one specific method of assessment. Evaluation refers to the process of making a judgment regarding the degree of knowledge, understanding, skill proficiency, or product/performance quality based on established criteria and performance standards. To avoid ambiguity, this section focuses on specific aspects of assessment with the purpose of providing faculty with clear, practical information on developing assessments. Key assessment principles, goals, purposes, methods, and strategies will be described to help faculty create more effective assessments.

Principles of effective assessment

Assessment serves as the crucial bridge between teaching and learning. It's not just about what you have taught, but what your learners have gained from your teaching. McTighe and Ferrara offer five sound principles for effective assessments to maximize teaching and learning. By consistently applying these principles when designing, implementing, and refining assessments, faculty can create more meaningful and impactful assessments that enhance learning and provide valuable insights into their teaching effectiveness.

 

Assessments should serve learning 

Effective assessments do more than just measure learning; they enhance it. They provide valuable feedback to both learners and instructors, guiding future learning and teaching strategies. For example, a formative quiz with immediate feedback allows learners to identify areas for improvement and helps faculty adjust their teaching approach. 

 

Multiple measures provide more evidence 

No single assessment can capture the full range of learning. Using various assessment methods provides a more comprehensive picture of learner knowledge and skills. For example, a course might combine written exams, project-based assessments, and oral presentations to evaluate different aspects of learner understanding. 

 

Assessments should align with goals 

Assessments must directly reflect the learning objectives of the course or program. This alignment ensures that what's being measured is what was intended to be taught and learned. For example, if a learning goal is to apply statistical concepts to real-world problems, the assessment should require learners to analyze actual data sets rather than just recite formulas.

 

Assessments should measure what matters 

Focus on assessing the most important and enduring knowledge and skills. Prioritize depth of understanding and application over superficial recall of facts. For example, in a literature course, instead of asking learners to memorize plot details, assess their ability to critically analyze themes and character development. 

 

Assessments should be fair 

Fairness in assessment means providing all learners an equal opportunity to demonstrate their learning. This includes considering diverse learning styles, cultural backgrounds, and potential barriers to performance. For example, offering multiple formats for a final project (such as a written paper, oral presentation, or multimedia creation) allows learners to showcase their learning in a way that best suits their strengths.

These principles form the foundation for developing effective assessments that not only measure but also promote learning.


Assessment planning framework

Jay McTighe and Steve Ferrara have developed an Assessment Planning Framework to guide faculty in creating effective assessments. This framework emphasizes three key considerations: assessment goals, assessment purposes, and audience. Using this framework will help guide assessment design decisions and inform the kinds of assessment methods to use. By intentionally considering assessment goals, purposes, and audiences, faculty can create more targeted, effective, and meaningful assessments that best serve learning.

 

Assessment goals

Assessment goals define what is to be measured in terms of learning. What are the targeted learning goals, and how are they best assessed? McTighe and Ferrara identify four kinds of assessment goals, each necessitating a different kind of evidence and, therefore, a different kind of assessment method:

 
Knowledge

This kind of assessment goal focuses on the factual information learners should know. For example: identifying the main parts of a cell in a biology course. Assessment evidence can be gathered relatively straightforwardly with a test or quiz method because assessments of knowledge typically have a single correct answer.

 
Skills and Processes

This goal focuses on the skills, abilities, techniques, processes, and know-how learners should be able to demonstrate. For example: performing CPR in a health-related course. Assessment evidence is best determined using (authentic) performance-based methods, such as direct observation or an evaluation of a completed project, product, or performance, because skills and processes involve more complex actions requiring multiple steps and the integration of multiple skills along with declarative knowledge.

 
Understanding

This assessment goal focuses on the ideas, concepts, principles, and generalizations the learner is to comprehend at a deep level. For example: understanding that a vehicle can be a lethal weapon during its operation in a driver's education course. McTighe and Ferrara advocate that assessment evidence is best determined using (authentic) performance-based methods that require learners to do two things: (1) apply their learning, and (2) explain their thinking and support their responses. Ultimately, assessment of understanding calls for a demonstration of using knowledge, skills, and/or processes in different situations. For example, if a learner truly understands that a vehicle can be a lethal weapon during its operation, they can operate it safely in different driving situations and conditions.

 
Dispositions

This goal focuses on the productive attitudes, habits of mind, and behaviors we aim to cultivate in the learner inside and outside of the course. For example: demonstrating ethical decision-making in a business ethics course. Assessment evidence for a disposition goal is best determined using (authentic) performance-based evidence, such as direct observation, along with self-assessment methods over time so there is opportunity for the disposition(s) to be applied in varied situations and circumstances. Self-assessments can provide evidence that the learner is internalizing the productive disposition(s).


Assessment purposes

The purpose of an assessment determines how it will be used. What is the primary purpose of the assessment you are developing? McTighe and Ferrara propose that there can be three primary assessment purposes:

 
Diagnostic

Diagnostic assessments determine learners' prior knowledge, experience, skills, interests, and/or misconceptions before instruction for the purposes of providing information to help guide instruction.

Example: A pre-test/assessment or knowledge check quiz on algebraic concepts at the beginning of a calculus course.

 
Formative

The goal of formative assessment is to monitor student learning during the learning process in order to provide ongoing feedback that can be used by:

    • faculty to improve their teaching/instruction
    • learners to improve their learning

Formative assessments measure teaching effectiveness and student learning progress, and they provide feedback for the purpose of improving both. You may hear formative assessment described as assessment for learning: formative assessments help form learning. Formative assessments are low stakes or no stakes (grades/points) because their primary purpose is to inform both faculty and learners.

Example: Weekly quizzes in a foreign language course to check vocabulary acquisition.

Best practices for formative assessments: Consider creating a link between formative and evaluative/summative assessments by designing formative assessments so that they contribute to the evaluative/summative task(s). For example, if the evaluative/summative assessment is a final paper, develop formative assessments that ask the learner to submit practice drafts of different paper sections throughout the course. Learning gains are greatest when the formative assessments target the same level of learning objective (order of thinking skill in Bloom's Taxonomy) as the summative assessment. The feedback provided will most likely improve the final paper while also lowering the learner's overall workload in the course. In this example, the formative assessments also align with the modality of the evaluative/summative assessment: essay/writing.

 
Evaluative/Summative

The goal of summative (also known as evaluative) assessment is to evaluate student learning at the end of an instructional unit by comparing it against a standard or benchmark. McTighe and Ferrara prefer the term evaluative over summative because they think it more accurately describes the purpose; however, in this guide the terms evaluative and summative are used interchangeably. Evaluative/summative assessments are high stakes (grades/points), used to measure and provide evidence of the degree of mastery or proficiency with which the learner completed the learning objectives. Evaluative/summative assessment is assessment of learning.

Example: A final project in a computer science course where learners develop a functioning application.

 

Educative

While McTighe and Ferrara focus on the three primary assessment purposes above, Grant Wiggins introduced the complementary purpose of "educative assessment" in his 1998 book Educative Assessment: Designing Assessments to Inform and Improve Student Performance. Wiggins asserted that assessments should do more than just evaluate; they should also teach. He believed that well-designed assessments should provide learners with insights into their own learning processes and help them improve their understanding and performance.

Wiggins' work emphasizes the importance of authentic tasks, clear standards, and feedback in assessments. An educative focus can be integrated into any of the three main assessment purposes - diagnostic, formative, or evaluative/summative - to create more engaging and effective learning experiences that incorporate learning through assessment.

Example: In a history course, instead of a traditional multiple-choice test about the causes of World War I, an educative assessment might ask learners to analyze a set of primary source documents they have not seen before. Learners would need to apply their knowledge of the period to interpret the documents, consider multiple perspectives, and construct an argument about how these sources reflect the complex causes of the war. This task not only assesses learners' understanding but also deepens their historical thinking skills and introduces new information through the assessment process itself.

The term "educative assessment" is often used interchangeably with "authentic assessment". Both emphasize the use of real-world, complex tasks that not only evaluate student learning but also enhance it through the assessment process itself. Because an educative or authentic focus can be integrated into any of the three main assessment purposes, authentic assessment is described as an assessment method in this guide.


Assessment audience

Considering the audience helps determine how assessment results will be communicated and used. Potential audiences include:

Learners - to reflect on their learning and set goals.

Faculty - to inform instructional decisions and improvements.

Administrators - to evaluate program effectiveness and make policy decisions.

Accreditors - to ensure educational standards are met.


Assessment methods

Selected response

Selected response assessment components assess learner knowledge of factual information, concepts, and the application of basic skills - typically in isolation and out of context.

Examples: multiple choice, T/F, matching, rank order questions.

 

Constructed response

Constructed response assessment components assess declarative knowledge (factual knowledge) and procedural proficiency (skills). An "answer-and-explain" constructed response format can provide insight into conceptual understanding, while an argument supported by evidence or rationale format can provide insight into reasoning.

Examples: fill-in-the-blank, short answer, paragraph, label a diagram, social media post, show your work, flow chart, concept map.

 

Authentic project or performance-based assessments

Authentic assessments present learners with a genuine (authentic) challenge, a target audience, and realistic constraints, producing tangible products or performances that allow learners to apply knowledge and skills outside of the classroom and in the larger real (or simulated) world. Tangible products can include an essay, poem, blog, report, infographic, portfolio, model, video, podcast, or exhibit. Example performances can include an oral presentation, demonstration, debate, recital, or theater performance.

In their book, The Understanding by Design Guide to Advanced Concepts in Creating and Reviewing Units, Grant Wiggins and Jay McTighe offer the GRASPS framework for developing a genuine context for an authentic assessment.

G = goal, R = role, A = audience, S = situation, P = product and/or performance, and S = success

    • A real-world goal,
    • A meaningful role for the learner,
    • An authentic (or simulated) audience,
    • A contextualized situation that involves real-world application,
    • Learner-generated culminating product and/or performance, and
    • Success criteria by which the learner products and performances will be evaluated as evidence of learning.

Example: In a computer science course, instead of a traditional coding exam, learners are tasked with developing a functional mobile app to address a real community need. Working in small teams, learners must research local issues, design a user-friendly interface, write the necessary code, and present their finished product to a panel of local stakeholders and/or tech industry professionals. Throughout the project, learners receive feedback from their peers, instructors, and potential users, allowing them to refine their app. This assessment not only evaluates learners' coding skills and theoretical knowledge but also develops their problem-solving abilities, teamwork, communication skills, and understanding of real-world application development processes. The authentic nature of the task, coupled with ongoing feedback and real-world relevance, makes this an effective evaluative/summative assessment with an educative focus that goes beyond evaluating existing knowledge by preparing learners for challenges they may face in their future careers.

 

Process-focused

Process-focused assessments provide information on a learner's learning strategies and thinking processes - rather than focusing on a tangible product or performance. This method focuses on gaining insights into underlying cognitive processes used by the learner.

Example: In a psychology course, learners are asked to maintain a journal throughout a semester-long project. This journal captures their thought processes and decision-making rather than just the final results of the project. At several points during the semester, learners submit portions of their journal for faculty review, and faculty provide feedback on the learners' reasoning, critical thinking, and approach. The final assessment involves completing a self-analysis of the project process: learners review their journal entries, identify key decision points and areas where their thinking evolved, and summarize lessons learned.


Assessment inventory template

To assist faculty in designing comprehensive and effective assessment strategies, the FDIC has developed an assessment inventory table. This tool provides a structured approach to planning and organizing your course assessments, ensuring alignment with learning objectives and best practices in assessment design.

The assessment inventory table allows faculty to:

1. Map out all assessments for a course
2. Align each assessment with principles of effective assessment
3. Clarify the purpose, audience, and desired learning outcomes for each assessment
4. Specify assessment and evaluation methods
5. Plan feedback mechanisms for learners

By completing this inventory, faculty gain a bird's-eye view of their course's assessment strategy, helping them identify any gaps or redundancies and ensure a balanced, comprehensive approach to assessing learning.

To access this planning tool, download the Assessment Inventory Table [Word] document. The document includes instructions on how to complete each column of the table.


Supplemental resources

University of Tennessee - Knoxville Teaching and Learning Innovation Center provides a comprehensive list of assessment-related teaching resources.

Assessment and Grading Webpage from the Johns Hopkins Bloomberg School of Public Health Center for Teaching and Learning Teaching Toolkit provides definitions for formative and summative assessments and assessment format and tools resources.

Understanding Formative Assessment: Insights from Learning Theory and Measurement Theory by Elise Trumbull and Andrea Lash 

Low-Stakes Assignments by DePaul University provides several examples of low-stakes (formative) assessments.

Classroom Assessment Techniques (CATs) by Vanderbilt University offers several formative assessment related resources.

Assessing Student Learning and Teaching Webpage by Baylor University Academy for Teaching and Learning provides guides related to formative and summative assessments, grading rubrics, effective student feedback, instructor self-assessment, ungrading, and alternative assessments.

Assessment of Learning Webpage by Kennesaw State University offers an assessment of learning guide.

High-Stakes Assignments by DePaul University provides several examples of high-stakes (educative/summative) assessments.

Authentic Assessment by Indiana University Bloomington offers a description and examples of this method of assessment.

What Happens When You Close the Door on Remote Proctoring? Moving Toward Authentic Assessments with a People-Centered Approach web article by the POD Network.

Creating Wicked Students: Designing Courses for a Complex World (an Excerpt) Faculty Focus blog post by Paul Hanstedt, PhD.

Repeated, Cumulative, Spaced, and Incremental: The Secret Recipe for Improving Assessments? Dr. Jeffrey R. Stowell, Professor of Psychology at Eastern Illinois University, published an article detailing a successful assessment strategy he has implemented in his courses. The full text article is available through the EBSCO permalink provided.

Multiple-Choice Testing in Education: Are the Best Practices for Assessment Also Good for Learning? Dr. Andrew C. Butler, Professor of Education and Professor of Psychological & Brain Sciences at Washington University in St. Louis, published an article where he explores whether best practices for assessment align with best practices for learning. Dr. Butler was the keynote speaker at the 2024 EIU Pedagogy Day organized by the FDIC. The full text article is available through the EBSCO permalink provided.


References

McTighe, J., Ferrara, S., & Brookhart, S. M. (2021). Assessing student learning by design: Principles and practices for teachers and school leaders. Teachers College Press.

Wiggins, G. P. (1998). Educative assessment: Designing assessments to inform and improve student performance (1st ed.). Jossey-Bass.

Wiggins, G., & McTighe, J. (2012). The Understanding by Design guide to advanced concepts in creating and reviewing units. ASCD.


The written information and resources are developed or curated by the

Faculty Development and Innovation Center

phone 217-581-7051 :: email fdic@eiu.edu :: web eiu.edu/fdic

Contact the FDIC for instructional design related questions or to schedule a consultation appointment. The FDIC staff can recommend instructional design strategies for your online, hybrid, and face-to-face courses.

Last updated: Friday, August 30, 2024

Contact Information

Dr. Michael Gillespie, Director, FDIC

217-581-7056
mgillespie@eiu.edu

Kim Ervin
Instructional Designer

217-581-3716
kservin@eiu.edu

Faculty Development and Innovation Center

1105 Booth
217-581-7051
fdic@eiu.edu

David Smith
Instructional Support and Training Specialist

217-581-6660
dmsmith4@eiu.edu

Keerthana Saraswathula
Instructional Support and Training Specialist

217-581-7856
knsaraswathula@eiu.edu

