Wednesday, October 17, 2012

"Big Data" Synergy Across Key Initiatives

This week I had the privilege of presenting on CEDS at the SETDA Leadership Summit along with champions of other key initiatives: SLC, LRMI, the Learning Registry, and My Student Download. The theme was "big data" to support teaching and learning.

I think there is a bigger story than what any of these initiatives is accomplishing on its own. The session provided a great opportunity to talk about how, from the perspective of an educator or student, the separate initiatives fit together to supply the kinds of metadata and paradata needed for optimized and personalized learning. The following video is my version of that story, prepared for the SETDA meeting. The PDF here (http://goo.gl/1qgn5) is a printable "storyboard" of the same.
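To make the metadata/paradata distinction concrete, here is a minimal sketch; the field names are hypothetical, not drawn from LRMI, the Learning Registry, or CEDS. Metadata describes the resource itself; paradata describes how the resource is actually used:

```python
# Illustrative sketch only: metadata vs. paradata for one learning resource.
# Field names are hypothetical, not taken from LRMI or the Learning Registry.

resource_metadata = {  # describes the resource itself
    "title": "Adding Fractions",
    "subject": "Mathematics",
    "grade_level": "4",
    "aligned_standard": "CCSS.Math.Content.4.NF.B.3",
}

resource_paradata = {  # describes how the resource is used in practice
    "views_last_30_days": 1240,
    "avg_teacher_rating": 4.6,
    "often_used_with": ["worksheet-frac-02"],
    "completion_rate": 0.83,
}
```

When the initiatives share a common vocabulary, data like this can flow between systems: a registry can publish the paradata, a search tool can rank by it, and a learning platform can use the standards alignment to recommend the resource to the right learner.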



...the SETDA session and presenters:
Leveraging "Big Data" to Support Digital Age Learning - Roosevelt Room

While states have been focusing on building robust longitudinal data systems for accountability and data-based decision making, a number of public-private efforts have recently launched that have the potential to leverage state, local and educator/student-generated education data in new and exciting ways for teaching and learning. These efforts include those focused on solving K-12 interoperability challenges, improving search and discovery of education resources, sharing of intelligence about the use of education resources, increasing transparency and opening access to education data, and personalizing learning. Participants will get an overview of the work underway in this area, share information about their state's priorities, assess gaps and opportunities, and - in so doing - contribute to the production of a summary white paper for states and policymakers.
Facilitators:
  • Neill Kimrey, Division Director, Division of Instructional Technology, North Carolina Department of Public Instruction
  • Lan Neugent, Assistant Superintendent for Technology, Career, and Adult Education and Chief Information Officer, Virginia Department of Education
Resource Specialists:
  • Jim Goodell, Senior Education Analyst at Quality Information Partners
  • Henry Hipps, Senior Program Officer, Bill & Melinda Gates Foundation
  • Michael Jay, President of Educational Systemics
  • Steve Midgley, Consulting Adviser, US Department of Education
  • Lyndsay Pinkus, Director, National and Federal Policy Initiatives, Data Quality Campaign

Thursday, October 4, 2012

Flipped Accreditation?


Is it time to consider “flipping” the accreditation process? I have some specific ideas about what it means to “flip” accreditation and how an alternative accreditation process could support innovation, but first I will opine on why the flip is needed.

At an event in Washington DC this week, participants were asked, “How are accreditors being a roadblock to innovation?” I’d argue that accreditors can’t help but be a roadblock. The organizational culture and operational processes of today’s accreditation agencies are at odds with innovative models of education and with competency-based credentialing.

Current accreditation processes describe themselves in terms like "self-reflection" and "peer review": employees of traditional institutions reviewing their own organizations and others like them. The process functions as a self-serving barrier to entry, keeping out the ‘riff-raff.’ To their credit, accreditors have served a valuable function in separating legitimate institutions of learning from diploma mills. For students and employers, though, the culture and process of accreditation is also a barrier to competition. Innovative new institutions that might offer better models for education don’t fit the mold for which the accreditation process was designed, and with which peer reviewers are comfortable.

One aspect of a flipped accreditation process would be a more granular understanding of what learning takes place at an institution. In the U.S., accreditation is generally granted at the institution level. Many non-U.S. accreditation processes go one level deeper, to the course of study. I think we can do better than that. We can identify the specific sets of competencies that are important for a particular credential, then evaluate whether the institution is teaching those competencies and whether there is evidence that students who receive the credential have learned them.

What is Flipped Accreditation?

I envision flipped accreditation as a market-driven process. Rather than an accreditation system designed by academia for academia, let the future employers of an institution’s graduates also have a say. Industry groups or specific employers can say which competencies (skills, knowledge, methods of inquiry, habits of practice) are most important to them, and the accreditation process can measure the extent to which institutions graduate individuals with that minimum set of competencies. The minimum set would also include the general competencies needed to be a productive citizen regardless of industry or degree.

This market-driven perspective need not constrain the breadth and variety of academic programs in areas of study not tied to an industry. Stakeholders who will be the intended beneficiaries of the graduate’s future productivity, regardless of field or profession, might give input into what should be learned for the related degree or certificate. For example, for a fine art degree, why not let the best artists, designers, museum curators, art collectors, dealers, and art educators all provide input on what is important for an artist to learn? There could even be multiple accreditations validating the same degree, each based on the values and perspective of a different group.

I envision new entities taking on the role of flipped accreditation providers, supported by key industries, while the traditional accreditors continue to accredit traditional institutions. Market demand will eventually drive traditional institutions toward competency-based design of degree programs and toward seeking the additional accreditation based on the flipped model.

Rather than self-reflection and peer review, let the evaluation be based on evidence that the program of study prepares students with the competencies valued by the job market they will enter. Rather than a labor-intensive peer review of the institution, evaluate whether the program is teaching what is important and whether students who graduate have learned it. Let the data validate whether the certificate or degree is aligned with market priorities.
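As a thought experiment, here is a minimal sketch of what such an evidence check might look like. All competency names, graduate data, and the threshold are hypothetical; no real accreditor works this way today:

```python
# Hypothetical sketch of an evidence-based (flipped) accreditation check.
# Competency IDs, graduate data, and the threshold are illustrative only.

REQUIRED = {"stat-inference", "data-viz", "sql", "written-communication"}

# Competencies each graduate demonstrably attained (e.g., via assessments,
# portfolios, or employer verification).
graduates = {
    "grad-001": {"stat-inference", "data-viz", "sql", "written-communication"},
    "grad-002": {"stat-inference", "sql", "written-communication"},
    "grad-003": {"stat-inference", "data-viz", "sql", "written-communication"},
}

def coverage_rate(required: set, demonstrated: dict) -> float:
    """Fraction of graduates with evidence for every required competency."""
    if not demonstrated:
        return 0.0
    qualified = sum(1 for comps in demonstrated.values() if required <= comps)
    return qualified / len(demonstrated)

rate = coverage_rate(REQUIRED, graduates)
print(f"{rate:.0%} of graduates demonstrated the full required set")
# A flipped accreditor might set a bar (say 90%); 2 of 3 here would fall short.
```

The point is not the arithmetic but the shift in evidence: accreditation becomes a claim about graduates' demonstrated competencies rather than about the institution's internal processes.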

Monday, September 24, 2012

Design Patterns for Learning

A common mistake in education reform is to focus on what seems innovative about a new platform, solution, or content, rather than on the learning process used and how results vary based on use. I was reminded of this when reading a blog post by John Merrow that gives Khan Academy as an example of 'blended learning' (bit.ly/Pl58Ji). I am an advocate of Khan Academy; it's a great platform, a game changer; my kids use it daily as a key part of the blended learning model adopted by their school. However, the platform is not in itself 'blended learning.' Blended learning is the learner-centric process that includes a blend of interactions between the learner, technology platform(s), and the teacher in a classroom setting.

Yes, Khan Academy is an enabling technology platform with a critical mass of content that can support a range of blended learning methodologies. However, it can also support non-blended, online-only learning. The 'blend' happens when students and educational "coaches" (including classroom teachers) interact with the system and with each other through it. The details of the blended model used, and even the decisions and actions made by and for students using Khan Academy, can lead to very successful or very disappointing outcomes.

I suspect this tendency to focus on the platform persists largely because we don't know what we don't know. Well-established disciplines such as architecture and software engineering have matured to the point of recognizing "design patterns": approaches proven effective against recurring challenges. Unlike design patterns for software, which are implemented as code, design patterns for learning would describe learner experiences, which may or may not involve technology. We don't yet have a catalog of design patterns for learning to support emerging innovations.

It would be fair to say that there are well-established instructor-centric 'design patterns' recognized in the field of curriculum development, and there are pedagogical 'patterns' applied in the field of instruction. There are also higher-level models, like the Innosight Institute's classification of blended learning models. However, the kinds of interactions now available for individualized and blended learning call for a new discipline that looks at interactions on a deeper level, beyond the constraints of traditional classroom-only, teacher-student, fixed-time interactions.

Learning happens through the right mix of interactions for each learner, which may include a blend of technology and human interactions. We know from cognitive science what conditions lead to learning, and even what it takes for a person to become an expert in an area of cognitive development. For example, we know that new learning builds on background knowledge and works best when the new concept or skill falls within the learner's zone of proximal development. We also know the value of feedback loops, scaffolding, and practice over time.
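As a toy illustration of what a codified pattern might look like, consider selecting the next activity whose prerequisites a learner has mastered but whose target skill they have not: a rough proxy for the zone of proximal development. This is a sketch with made-up names; a real design pattern for learning would describe the whole learner experience, not just selection logic:

```python
# Toy sketch of one candidate "design pattern for learning": choose the next
# activity inside a rough zone of proximal development. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Activity:
    skill: str                              # competency the activity teaches
    prerequisites: set = field(default_factory=set)

def next_activities(mastered: set, catalog: list) -> list:
    """Activities whose prerequisites are mastered but whose skill is not."""
    return [a for a in catalog
            if a.prerequisites <= mastered and a.skill not in mastered]

catalog = [
    Activity("counting"),
    Activity("addition", {"counting"}),
    Activity("multiplication", {"addition"}),
]

for a in next_activities({"counting"}, catalog):
    print(a.skill)  # -> addition (multiplication is still out of reach)
```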

The next breakthrough in education may be to establish and mature a professional specialty in "learning interactions engineering": developing design patterns for learning interactions and optimizing the application of those patterns for individual learners.

Friday, July 20, 2012

Common Education Data Standards (CEDS) Version 3: Standardizing Data to Support Formative Assessment Processes

Last week at STATS-DC I was joined by Nancy Burke, Grafton Public School District (North Dakota); Lee Rabbitt, Newport Public Schools (Rhode Island); and David Weinberger, Yonkers Public Schools (New York) for this presentation about Standardizing Data to Support Formative Assessment Process Use in Schools. My three co-presenters are members of the Common Education Data Standards (CEDS) Stakeholders Group and have been an important part of the CEDS version 3 focus on teaching and learning.


One interesting thing about this presentation is that we used PowerPoint as our "sandbox" for the development work, so the content took shape long before we knew it would become a presentation at STATS-DC. We started by developing a model of the human processes and had that fully defined before working on the data elements needed to support those processes. This helped us avoid the trap of thinking about formative assessment as a thing, a "kind of test," and kept the focus on the process of teaching and learning.


Formative assessment is a process by which teachers and students use data to inform:
– where they need to go,
– where they are, and
– how to close the gap.
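Viewed this way, the process is a feedback loop, and it maps naturally onto a small set of data. Here is a minimal sketch of that loop as data; the element names are illustrative, not actual CEDS v3 definitions:

```python
# Minimal sketch of the formative assessment loop captured as data.
# Element names are illustrative only, not actual CEDS v3 definitions.

from dataclasses import dataclass

@dataclass
class FormativeSnapshot:
    learner_id: str
    learning_goal: str    # where they need to go
    evidence: str         # where they are (observed performance)
    gap: str              # difference between the goal and the evidence
    next_action: str      # how to close the gap

snapshot = FormativeSnapshot(
    learner_id="s-042",
    learning_goal="Add fractions with unlike denominators",
    evidence="Adds fractions with like denominators only",
    gap="Finding common denominators",
    next_action="Scaffolded practice finding common denominators",
)
```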


Formative assessment content specialists Margaret Heritage and Susan M. Brookhart helped us refine the process model based on research findings and promising practices. Organizations like the Innosight Institute helped us check assumptions about the formative data needed to support various blended learning models. We worked with projects such as the Learning Resource Metadata Initiative (LRMI), Open Badges, the Race to the Top Assessment consortia (RTTA), SLC, EdFi, and the Learning Registry (LR) to ensure these implementation-oriented projects will be compatible with the common vocabulary for education data defined in CEDS v3.

See the slides here.

The Bridge Between Data Standards and Learning Standards—Common Core State Standards (CCSS) and Common Education Data Standards (CEDS)

Last week I co-presented at STATS-DC about the work being done to bridge gaps between data standards and learning standards. Maureen Wentworth of the Council of Chief State School Officers (CCSSO) and Greg Grossmeier of Creative Commons joined me to discuss the collaboration among many initiatives, including the Common Core State Standards (CCSS) initiative led by CCSSO and the National Governors Association (NGA), the Common Education Data Standards (CEDS) led by IES/NCES under the U.S. Department of Education, and the Learning Resource Metadata Initiative (LRMI) led by the Association of Educational Publishers (AEP) and Creative Commons.


The presentation slides are here.

Wednesday, May 30, 2012

Data Standards to Support the Formative Assessment Process

Yesterday I had a great meeting with the K-12 stakeholder group directing the development of the Common Education Data Standards.  Version 3 of CEDS (a common vocabulary for P-20 data) will include standard data element definitions to support the formative assessment process.
“Formative assessment is a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students’ achievement of intended instructional outcomes.” (FAST SCASS, 2008, October)
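To make "standard data element definitions" concrete, here is a sketch of the shape such a definition might take. It is illustrative only; it is not an actual CEDS element, and real CEDS definitions are developed through the stakeholder process described in this post:

```python
# Sketch of the shape a standardized data element definition might take.
# Illustrative only; not an actual CEDS v3 element or option set.

from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    definition: str
    option_set: tuple  # allowed values, if the element is enumerated

goal_status = DataElement(
    name="LearningGoalStatus",
    definition="The learner's progress toward a stated learning goal, "
               "as judged during the formative assessment process.",
    option_set=("NotStarted", "Progressing", "Met", "Exceeded"),
)
```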
David Weinberger, one of the group's LEA representatives from Yonkers Public Schools, commented after the meeting,
"I am really excited that we are finding ways to include instructionally relevant information into the data standards in ways that are clearly meaningful and important."


I share David's excitement. The expansion of the data standards will support research-supported processes of assessment for learning, in which progress data informs instructional decisions and student learning activities.

It has been a privilege to work with the K-12 stakeholders, content specialists Susan M. Brookhart and Margaret Heritage, and others on refining the process model and identifying supporting data elements.

The process model draft, shown below, is based on case studies of models being used successfully in practice, findings from cognitive science and the learning sciences, and engineering/control theory.

The group is identifying supporting data elements and definitions based on concepts from the Formative Assessment for Students and Teachers (FAST) State Collaborative on Assessment and Student Standards (SCASS) (2008). Rather than "reinvent the wheel," the work is coordinated with many other initiatives interested in standard definitions for the types of data used to inform teaching and learning. The group collaborates with initiatives such as the Learning Resource Metadata Initiative (LRMI), the Learning Registry (LR), the Schools Interoperability Framework Association (SIFA), EdFi, the Mozilla Foundation's Open Badges, and the Shared Learning Collaborative (SLC). We are also working with CCSSO/NGA partners on the components related to the Common Core State Standards (CCSS), and with organizations supporting the RTTT assessment consortia that are looking at more granular competencies "unpacked" from the standards to measure learner progress.

The emerging standards for data related to the formative assessment process include use cases across traditional classroom-based, blended, and virtual learning models. The result will be a common vocabulary for describing the data that most directly supports student learning.


Friday, March 2, 2012

Un-Blended Blended Learning


In their new monthly column in THE Journal, Michael B. Horn and Heather Staker advise schools to ignore blended learning "best practices".

I would argue that the "best" practices have not yet emerged, although there are some promising practices on track to become proven practices. Horn and Staker are right in this sense: if "best practice" models are defined by mostly irrelevant aspects of the blended learning model, such as where and when students sit in front of a screen, device-to-student ratios, or how many students share a computer lab at a time, then educators are better off making their own models.

The definition for blended learning leaves room for both good and bad practices:
"Blended learning is any time a student learns at least in part at a supervised brickand-
mortar location away from home and at least in part through online delivery
with some element of student control over time, place, path, and/or pace." (Staker, H.; The Rise of K–12 Blended Learning, p.5)
This definition includes models in which there is no connection between the supervised instruction and the self-paced online learning: "un-blended" blended learning. The online and offline learning experiences may be disconnected, or they may be tightly aligned, with each activity informing the other.

The emerging "best practices" for blended learning will have less to do with seat time and device count, and more to do with what learners do during both offline and online time, and what teachers and online tools do to support learning.   The best practices will focus the effort on the right kind of learning activities that motivate the right kind of behaviors and lead to the right kind of outcomes.  These are models not to ignore.