FOSSconnect


Assessing Science Knowledge (The ASK Project)

Kathy Long and Larry Malone, FOSS Developers, Lawrence Hall of Science, University of California, Berkeley
March 20, 2006 | Assessment

Assessing Science Knowledge (ASK) is a project based at the Lawrence Hall of Science that was funded by the National Science Foundation in the summer of 2003. The purpose of the project is to design an assessment system for grade 3–6 FOSS modules that produces valid and reliable evidence of science achievement in an active-learning science environment. The article "Inside the Black Box" (Black & Wiliam, 1998) set the stage for ASK. Paul Black and Dylan Wiliam make a very convincing argument in the article that formative assessment practices are an important part of enhancing learning. It is not enough to do activities and to have discussions; you need additional information about how the students are interpreting those activities and discussions. Students can be further involved in the process through self-assessment practices. When students take on more responsibility for their own learning and teachers understand better how students are building the complex relationships involved in learning, it can only be a recipe for success.

ASK Goals

Two goals guide the work of the ASK Project:

  • Classroom assessment practices. Develop daily classroom assessment strategies and practices that lead to greater student achievement and enhance instructional practices.
  • Accountability. Develop assessments with the technical quality needed to provide accountability information to schools and districts. (In essence, the goal is to provide a measure closely aligned with the curriculum that could be used in conjunction with standardized test scores—usually disconnected from the curriculum—to measure student progress and evaluate program effectiveness.)

The ASK Project is restructuring the FOSS assessment system to provide teachers with tools for understanding the progression of learning taking place in the classroom as instruction occurs. It is designed to enhance the dialogue between teachers and each student to increase science understanding for all students.

ASK Design

In order to produce valid and reliable evidence of learning, the ASK Project follows the guidelines described in the National Research Council report, Knowing What Students Know: The Science and Design of Educational Assessment (NRC, 2001). The report discusses three aspects of assessment design that must be carefully linked: cognition, observation, and interpretation (see the Assessment Triangle diagram).

National Standards and Benchmarks suggest learning outcomes for contemporary science education and thus inform the big picture for thinking about models of student learning (the cognition corner of the Assessment Triangle). Standards and Benchmarks are a good place to start, but further work is often needed to understand the finer-grained, incremental learning that must occur in order to acquire the knowledge described in a standard or benchmark.

Our specific module work begins with a review of the learning goals and objectives of the module. This leads to precisely defining our expectations for student learning outcomes, which we call key concepts—that is, what we want students to know and be able to do after completing the module. Once the key concepts for a module are identified, they are analyzed in terms of their conceptual complexity. We extract the sub-concept chunks, the pieces of knowledge that students must know and put together in relationships in order to build the bigger ideas. These are laid out in a matrix called a construct map. A construct map for a module generally has two to four key concepts followed by a description of the smaller pieces (called constructs) that need to be developed in order to fully understand the key concept.

Assessment items (the observation corner of the Assessment Triangle) are developed to elicit evidence of student learning related to the constructs and key concepts in a given module.

ASK is designing two kinds of assessments: embedded (formative) and benchmark (which can be used for formative or for summative purposes).

Embedded assessments are incorporated seamlessly into instruction. Their purpose is to provide diagnostic information about student learning to both teachers and students as teaching and learning are happening. (They are not used for grading purposes.) Embedded assessments generally involve teacher observation (watching students' inquiry practices during investigation activities), looking at written work in science notebooks, and having students engage in self-reflection.

Benchmark assessments assess students' accumulated knowledge and understanding of the science covered by a module. These assessments include a Survey before beginning instruction, I-Checks after each investigation, and a Posttest after the module is completed. Schools and districts can eventually use these assessments for accountability purposes (or for giving grades), but the most beneficial use that we have seen so far in the project is the use of I-Checks (short for "I check my own work") as a tool for student self-reflection. When used for formative purposes, teachers score the I-Checks to look for student progress, but they do not put scores on the students' papers. Instead, the papers are handed back to students and, as the class discusses the answers to the questions, students ponder their own answers and think about how they might improve them (if needed). This process encourages students to take more responsibility for their own learning, to ask questions to help clarify their thinking, and to understand more about what is required to write good explanations. Some classes have even taken this a step further. Students have been engaged in the project as research partners, and now they not only critique their own answers, but they also critique the questions and point out flaws in the ways questions are being asked that may have led them to give a wrong answer. That is definitely higher level thinking!

Interpreting the assessment data (the third corner of the Assessment Triangle) ties the evidence gained from the observations back to the cognition corner. Scores are run through a rigorous statistics program (quantitative analysis) and then combined with a qualitative analysis to determine whether the items are eliciting the information we were looking for about student learning. The quantitative statistics indicate when an item should be red flagged, but it's the qualitative analysis that generally allows us to describe and consider what to do about a problem with an item.

The statistical analyses also allow us to look at student achievement in relation to the constructs (based on the items students were able to answer). Both quantitative and qualitative analyses help us complete the final piece of the assessment program, which we call progress maps. Progress maps will detail, by module, a number of levels of student achievement and will describe what students are capable of doing independently at a particular level and what they still find challenging.

Lessons Learned

You may find yourself asking, "Why do we need to do all of this assessment?" We think there are several reasons. First, when students are engaged in active learning, it is impossible to interview every student every day to find out what they understand and where they need help. ASK assessments give teachers additional insight into each student's thinking. Second, when teachers have class discussions with students, they generally call on three or four students, and if those students give the right answers, the teacher assumes everyone else is at the same place. As we began to use more formative assessments, we found this often is not the case. Third, if teachers find that students are not giving correct answers, they tend to ask more questions that lead students to the right answers. It is good teaching to scaffold students' learning when they are not able to proceed independently, but it does not ensure that students will be able to answer independently the next time they are asked. Assessments help to fill in the missing information.

The next question you might ask is, "How can I know what my students know if they are not good writers?" This question was asked during the first year of the project. An interesting shift occurred in the second year. Teachers decided to look at this from a different perspective—students need to be good writers in any subject they study. So the question changed to, "How can we help our students be better writers?" We collectively decided that looking for progress was the more important factor. If you have a student who is only capable of responding with one relevant word, then that's where it starts. Each time students write, they try to add a few more words, gradually shaping the kinds of answers that provide detailed explanations and good reasons (evidence) for those explanations. That's what we're looking for.

We collectively also discovered that writing can be used as a learning tool. When you have to write, you must gather your thoughts around the topic. As you write, you find that there are things you can communicate about the topic, but you also discover that there are pieces of information that you still need in order to complete your explanation. Students can then ask more focused questions to continue to build their understandings of the science they are studying.

When we decided to incorporate the benchmark assessments into the project, we worried that teachers and students would feel like they were being tested to death. We thought we might be adding more stress to the already stressful atmosphere that seems to have invaded schools in the past few years. Surprisingly, we found this not to be the case. In fact, exactly the opposite is happening in many classrooms. The classroom culture is changing. Tests are viewed as another part of the learning, but the key is that students are checking their own work and thinking hard about their own learning. They are now viewing the tests as another way to build their knowledge.

This revelation suggests a new teaching/learning paradigm, one in which the distinction between instruction and assessment is eroding. The separation between opportunities to learn (lessons, projects, practice, etc.) and demonstration of learning (tests, papers, homework scores, etc.) is proving to be a detriment to student achievement. When students and teachers understand that assessment results are shared intellectual property, the results are transformed from judgments into valued information for planning corrective measures. In this open-access learning environment, students understand assessment as an opportunity to discover what they don't know, share that flawed understanding with their teacher and peers, and together forge a plan for fixing the deficit. Students become unselfconscious, imaginative learners, and teachers become efficient, thorough facilitators of learning. The line between instruction and assessment is blurred now; by the end of the project the line may very well be absent.

Finally, we would like to acknowledge all of the people who are helping to shape the ASK Project. The most important are the teachers and the students in classrooms at nine different centers who are testing all of the materials and giving us a lot of detailed feedback. No matter how many "experts" look at assessment items, you never know if they are really going to work until students start answering the questions. We appreciate all the extra time that teachers are putting into the project to give us feedback and suggestions for making the system viable. We have curriculum developers, science educators, outside evaluators, and graduate students spending hours scoring, analyzing the data, revising the items, and re-conceiving the whole system each year. Our thanks to all who are involved and helping to make this a successful project.

For more information about the ASK Project, contact:

  • Kathy Long, FOSS Assessment Coordinator
    e-mail: klong@berkeley.edu
    or
  • Larry Malone, FOSS Co-Director
    e-mail: lmalone@berkeley.edu
    Lawrence Hall of Science, University of California
    Berkeley, CA 94720, phone: 510.642.8941