This blog will attempt to emulate what we think we’ve learned from George Siemens: how to be a node in a learning community working on a problem. Our statements of the problem(s) we are working on are tagged here. We view this space as one element in our Learning Portfolio, and will link to other portions of our portfolio among systems we host and world systems we have adopted. From time to time we anticipate writing reflections on our use of this space and on our changing understanding of portfolios for learning.

Our organization, CTLT, is committed to the advancement of authentic learning—learning that takes place in and beyond the classroom; that encourages the exchange of knowledge across disciplinary, institutional, and national boundaries; and that recognizes the need for participation in the global dialogue.

The problems that we are exploring include:
  • Learning portfolios (ePortfolios)
  • Assessment communities and community assessment
  • Identification of (and learning about) global competencies

We invite your comments and trackbacks to connect our work to your thinking and to a community of like-minded explorers.

Nils Peterson, Theron DesRosier, Jayme Jacobson
CTLT has been thinking about portfolios for learning and their relationship to institutionally supported learning tools and course designs. This thinking has us moving away from the traditional LMS. It has also led to a recognition that grade books are QWERTY artifacts of Learning 1.0. In a recent Campus Technology interview, Gary Brown introduced the term “harvesting gradebook” to describe the grade book that faculty need to work in these decentralized environments.

“Right now at WSU, one of the things we’re developing in collaboration with Microsoft is a ‘harvesting’ gradebook. So as an instructor in an environment like this, my gradebook for you as a student has links to all the different things that are required of you in order for me to credit you for completing the work in my class. But you may have worked up one of the assignments in Flickr, another in Google Groups, another in Picasa, and another in a wiki.”

This post will provide more definition and a potential implementation for this new kind of transformed grade book. It is the result of a conversation between Nils Peterson, Theron DesRosier and Jayme Jacobson diagrammed here.
Figure 1: White board used for drafting these ideas. Black ink is “traditional” model, Blue is a first variation, Red is a second variation.
The process begins with a set of criteria that is agreed to be useful by a community and is adopted across an academic program. An example is WSU’s Critical Thinking Rubric. This rubric was developed by the processes of a “traditional” academic community. How the process changes as the community changes will be discussed below.
Instructors start the process by defining assignments for their classes and “registering” them with the program. Various metadata are associated with the assignment in the registration process. Registration is important because in the end the process we propose will be able to link up student work, assessment of the work, the assignment that prompted the work, and assessments of the assignment. More implications of this registration will be seen below.
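As a sketch of the kind of linkage this registration makes possible, consider the following minimal data model. It is our illustration only; the names, fields, and language are assumptions, not the schema of any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class AssignmentRegistration:
    """Hypothetical record created when an instructor registers an assignment."""
    assignment_id: str                      # program-wide identifier
    course: str                             # e.g. "FS 301"
    term: str                               # e.g. "Fall 2008"
    rubric_id: str                          # the program rubric used to assess work
    description: str = ""
    outcomes: list = field(default_factory=list)   # program outcomes addressed

@dataclass
class StudentWork:
    """Pointer to work wherever the student produced it (blog, wiki, Flickr, ...)."""
    student_id: str
    assignment_id: str                      # ties the work back to the registration
    work_url: str
```

Because both the work and its assessments carry the assignment identifier, the program can later join student work, assessments of the work, the assignment that prompted it, and assessments of the assignment itself.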
The student works the assignment and produces a solution in any number of media and venues, which might include the student’s ePortfolio (we define ePortfolio broadly). The student combines their work with the program’s rubric (in a survey format). The rubric survey is administered either to a specifically selected list of reviewers or to an ad hoc group. We have been experimenting with two mechanisms for doing this “combining.” One places the rubric survey on the page with the student’s work as a sidebar or footer (analogous to a Comment feature, or the “Was this helpful?” survey included in some online resources); this approach is public to anyone who can access the web page. The other strategy embeds a link to the student’s work in a survey, which can be targeted to a specific reviewer. This example comes from the judging of CTLT’s second ePortfolio contest.
In either case the survey collects a score and qualitative feedback on the student’s work. We imagine the survey engine being centrally hosted so that all the data is compiled in a single location and is therefore accessible to the academic program. Data can be organized by student, assignment, academic term, or course. A tool we are developing that can do this is the Skylight Matrix Survey System, rebranded as Flashlight Online 2.0 by the TLT Group. The important properties of Skylight for this application are its ability to render a rubric question type and its ability to hold many survey instances (respondent pools) within one survey, reporting instances individually and aggregating data across some or all of them.
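A minimal sketch of the aggregation this centralization enables, assuming rubric responses can be exported as simple records (the field names here are ours, not Skylight’s):

```python
from collections import defaultdict
from statistics import mean

# Each response links a reviewer's rubric score to a piece of student work.
responses = [
    {"student": "s1", "assignment": "a1", "course": "FS301", "term": "F08",
     "dimension": "identifies_problem", "score": 4},
    {"student": "s1", "assignment": "a1", "course": "FS301", "term": "F08",
     "dimension": "considers_context", "score": 3},
    {"student": "s2", "assignment": "a1", "course": "FS301", "term": "F08",
     "dimension": "identifies_problem", "score": 5},
]

def aggregate(responses, key):
    """Average rubric scores grouped by any field: student, assignment, term, or course."""
    groups = defaultdict(list)
    for r in responses:
        groups[r[key]].append(r["score"])
    return {k: mean(v) for k, v in groups.items()}

print(aggregate(responses, "student"))     # per-student means
print(aggregate(responses, "assignment"))  # per-assignment means
```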
Audiences for this data
The transformative aspects of this strategy arise from the multiple audiences for the resulting data. We have labeled these collections of data, and the capacities to present them to audiences, “assessment necklaces.”
Figure 2: Diagram of rubric-based assessment. Learners, peers, and faculty are shown collecting data from rubric-based assessment of portfolios, then reflecting on and presenting the multiple data points (necklaces) in contexts important to them.
Students can review the data for self-reflection and can use the data as evidence in a learning portfolio. We are exploring ideas like Google’s Motion Chart gadget (aka Trendalyzer/Gapminder) to help visualize this data over time. They can also learn from giving rubric-based reviews to peers and by comparing themselves to aggregates of peer data.
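As a rough illustration of such an over-time view, matplotlib here stands in for the Motion Chart gadget, and the data are invented:

```python
import matplotlib.pyplot as plt

# Hypothetical export: one student's mean rubric score at several points in time.
history = [("2008-01", 2.5), ("2008-05", 3.1), ("2008-09", 3.6), ("2009-01", 4.0)]

terms, scores = zip(*history)
plt.plot(terms, scores, marker="o")
plt.ylabel("Mean rubric score")
plt.title("One student's rubric-based assessments over time")
plt.show()
```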
Instructors can use the data (probably presented in the student’s course portfolio) for “grading” in a course. It’s worth noting that the instructor’s assignments can be assessed with the same rubric, asking, “To what extent does this assignment advance each of the goals of this rubric?” With the assignment rated, instructors can review the data across multiple students, assignments, and semesters for their own scholarship of teaching and learning (SoTL). Here the instructor can combine the rubric score of an assignment with the student performance on the assignment to improve the assignment. Instructors might also present this comparison data in a portfolio for more authentic teaching evaluations.
In this example the assignment might be rated by students or the instructor’s peers. Below, the rating of the assignment by wider communities will be explored.
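A sketch of the comparison described above, combining a rating of the assignment itself with student performance on that assignment (all names and numbers are invented for illustration):

```python
from statistics import mean

# Hypothetical exports: reviewers' rubric ratings of each assignment,
# and students' rubric scores on the work each assignment prompted.
assignment_ratings = {"essay_1": 3.0, "project_2": 4.5}
student_scores = {"essay_1": [2.5, 3.0, 2.8], "project_2": [4.0, 4.2, 3.9]}

# A weakly rated assignment may cap student performance; comparing the two
# suggests where redesigning the assignment could help.
for aid, rating in assignment_ratings.items():
    performance = mean(student_scores[aid])
    print(f"{aid}: assignment rated {rating:.1f}, "
          f"mean student performance {performance:.1f}")
```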
Academic programs can look across multiple courses and terms for program-level learning outcomes and SoTL. They can also present the data in showcase portfolios used for recruiting students and faculty, funding, and partners. This is where the collective registration of assignments becomes important: the program can assess each assignment in the context of the program, with an eye to coordinating assignments and courses to improve the coherence of program outcomes.
The community, which might include accrediting bodies, employers and others, can use the data, as presented in portfolios by students, instructors, and the academic program, to reflect on, or give feedback to, the academic program. Over time, an important effect of this feedback should be to open dialogs that lead to changes in the rubric.
Variations on this model
The description above is still traditional in at least two important ways: the program (i.e., faculty) develops the rubric, and the instructor decides the assignment. Variants are possible where outside interested parties participate in these activities.
First variation. WSU and University of Idaho run a joint program in Food Science. We have observed that the program enrolls a significant number of international students, from nations where food security is a pressing issue. We imagine that those nations view training food scientists as a national strategy for economic development.
We have imagined a model where the students (in conjunction with their sponsoring country) and interested NGOs bring problem statements to the program, and the program designs itself so that students work on aspects of their problem while studying. The sponsors would also have an interest in the rubric, and students would be encouraged (required?) to maintain contacts with sponsors and NGOs and to cultivate among them people who can provide evaluations using the rubric.
The processes and activities described above would be similar, but the input from stakeholders would be more prominent than in a traditional university course. Review of the assignments, and decisions about the rubric, would be done within this wider community (two universities, national sponsors, and NGOs). The review of assignments, and the assessment of the relationship between assignments and learning products, creates a much richer course evaluation than the satisfaction models presently used in traditional courses.
Second variation. This option opens the process up further and provides a model to implement Stephen Downes’ idea in Open Source Assessment. Downes says “were students given the opportunity to attempt the assessment, without the requirement that they sit through lectures or otherwise proprietary forms of learning, then they would create their own learning resources.”
In our idea of this model, the learner would come with a problem, or find one, and, following Downes, would present aspects of their work to be evaluated with the program’s rubric; the institution would credential the work based on its (and the community’s) judging of the problem/solution with the rubric. This sounds a lot like graduate education: the learner defines a problem of significance to a community and addresses that problem to the satisfaction of the community. In our proposed implementation, the ways the community has access to the process are made more explicit.
In this variant, the decision about the rubric is an even broader community dialog and the assessment of the instructor (now mentor/learning coach) will be done by the community, both in terms of the skills demonstrated by students that the instructor mentored, and by the nature of the problems/approaches/solutions that were a result of the mentoring. The latter asks, is the instructor mentoring the student toward problems that are leading or lagging the thinking of the community?
Examples
For some sense of learning portfolios created by the processes above, consider these winners from CTLT’s 2007-08 ePortfolio contest.
The following two winners are examples of the first variant, where students were paired with a problem from a sponsor:
The Kayafungo Women’s Water Project documents the efforts of Engineers Without Borders at WSU (EWB@WSU) who partnered with the Student Movement for Real Change to provide clean water to 35,000 people in Kayafungo, Kenya.
The EEG Patient Monitoring Device portfolio follows the learning process of four MBA students who collaborated with faculty, the WSU Research Foundation, inventors, and engineers to develop a business plan for a wireless EEG patient monitoring device.
The next two are examples of the second variant: student-defined problems assessed by the community. In the latter case, the student is using the work, both her activism in the community and her study-in-action, as her dissertation:
The Grace Foundation started with a vision to create a non-profit organization that would assist poor and disenfranchised communities across Nigeria in four areas: Education, Health, Entrepreneurship, and Advocacy. The author used the UN online volunteering program to form a team to develop a participatory model of development that addresses issues of poverty eradication in a holistic manner.
El Calaboz Portfolio chronicles the use of Internet and media strategies by the Lipan Apache Women’s Defense, a group that has grown in national and international prominence over the last 75 days, from fewer than 10 people in August to an e-organization of over 312 individuals currently working collectively. It now includes NGO leaders, tribal leaders, media experts, environmentalists, artists, and lawyers from the Center for Human Rights and Constitutional Law. It recently received official organization status at the UN.
The next steps in this work at WSU are to build worked examples of these software tools and to recruit faculty partners to collaborate in a small-scale pilot implementation.
CTLT has been thinking about portfolios for learning and their relationship to institutionally supported learning tools and course designs. This thinking has us moving away from the traditional LMS. In a February 2008 Campus Technology interview, Gary Brown introduced the term “harvesting gradebook” to describe the gradebook that faculty need to work in these decentralized environments. As originally articulated by Gary, the gradebook “harvested” student work, storing copies of the work within itself, where it was assessed.
On further discussion, the concept became inverted: what was “harvested” were assessments, gathered from work that remained in situ.
Figure: Diagram of the harvesting gradebook.
This inversion of the idea allowed the widening of the community that could be involved in the assessment. It also creates ways for the instructor, as well as the program, to learn from this transformed gradebook: the harvested assessments can inform course and program improvements, and be useful in accreditation.
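A minimal sketch of the inverted record, under the assumption that the gradebook stores only pointers and harvested assessments (all names here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class HarvestedEntry:
    """Gradebook entry in the inverted model: the work stays where the
    student made it; only its URL and its assessments are collected."""
    student_id: str
    assignment_id: str
    work_url: str                                    # work remains in situ
    assessments: list = field(default_factory=list)  # harvested rubric responses

entry = HarvestedEntry("s1", "essay_1", "http://example.edu/student-wiki/essay")
entry.assessments.append(
    {"reviewer": "community", "dimension": "identifies_problem", "score": 4})
```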
A pilot course using these ideas earned the NUTN 2009 Best Research Paper award. Here is the video made for the award ceremony.
At the AAC&U conference in Seattle, Jan 22-24, we presented these ideas at a Friday-morning round table on ePortfolios, “Authentic Assessment of Learning in Global Contexts” (Nils Peterson, Gary Brown, Jayme Jacobson, Theron DesRosier).
In February 2009, a Campus Technology article summarized a pilot offering of a course that used this latter harvesting-of-assessments model, beginning to demonstrate how a community could effectively participate in the process.
This post serves as a table of contents to materials from our “Authentic assessment of learning in global contexts” AAC&U presentation and background to the story in Campus Technology.
