Friday, November 16, 2012

Innovations in Education Culture Needed to Scale Blended Learning Adoption


This has been a breakthrough year for blended learning models. Innovations in supporting technologies, policies, and economic/market change agents have pushed the virtual/blended agenda forward. With all of the investment and innovation around blended learning, one might expect wider adoption.

A recent comment posted on Tom Vander Ark’s Education Week blog about adoption of blended learning got my attention:

“We implemented a rotation style blended learning program this year at our high school. The biggest challenge that we are overcoming right now is that students aren't ready for taking ownership of their own learning. We didn't realize how conditioned they are to sit, listen, and regurgitate facts back.”

(emphasis added, accessed on 15 November 2012 from:  http://blogs.edweek.org/edweek/on_innovation/2012/11/why_havent_districts_adopted_blended_learning_faster.html)

This rings true with my observations in schools and classrooms implementing several different blended learning models. The biggest challenges have to do with changes in practice, both personal (for students and teachers) and organizational.

When blended learning pilots fail to gain traction or to deliver the expected outcomes, it is easy to point to imperfections in the virtual technology, in the methodologies being used, or in teacher preparation as the root causes. Often, however, these are only scapegoats for a deeper root cause: resistance to change. When individuals and organizations try to adopt new practices and encounter challenges, they have a proclivity to revert to more familiar patterns of behavior.

Vander Ark’s blog listed some of the challenges that have slowed district adoption of blended learning, including weaknesses in platforms, models, staffing and development, and district capacity. These more visible factors create “friction” against the organizational and behavioral changes needed for people to put blended learning into practice.

I used the term “cultural inertia” in a session I co-presented at the National School Boards Association annual conference in 2009 to describe this phenomenon in education organizations. “Culture” can be thought of as shared habits of believing and behaving. Habits are difficult enough to change in individuals… How is your 2012 New Year’s resolution working out? Changing the culture of a school or district requires changes in the habits of many people.

Change is Worse than Pain

In another blog post I opined on how the education profession resembles the medical profession of the early 19th century: after the invention of anesthesia, it took surgeons more than 40 years to see past preconceived notions and accept that pain was not necessarily a good thing. It is well known that the way things are done in schools is not working for many students, but many educators, students, and parents cannot imagine it working any other way. As the 19th-century surgeons demonstrated, sometimes having the right technology and knowing how to use it is not enough. It takes something more for people to embrace a new professional practice. It takes a change in what professionals and their professional community believe about themselves and their professional purpose as it relates to the innovation.

I cannot help but wonder what might have accelerated the timeline for anesthesia adoption by the 19th century surgeons.  Moreover, I wonder what “soft innovations” might accelerate professional adoption of blended learning practices.

Blended Learning is an Out-of-Comfort-Zone Experience for Most People…at first.

I have had the opportunity to observe a couple of different blended learning models at the classroom level and at different stages of adoption. It was apparent that some students and teachers were energized by the change to more individualized learning, some felt uncomfortable, and some genuinely struggled with the changed model. In the models I observed, teachers needed to change from being the “sage on the stage” to being the “guide on the side.” Students also needed to take on a more active role in their own learning. Teachers often felt less in control when first implementing the new model, and students’ self-efficacy was challenged.

In spite of the very human tendency to resist change, I have been able to observe some successful educators and students who worked within the blended learning model long enough to reach an “aha moment,” realizing that teaching and learning under the blended model is better than the old way of doing things, and adopting new habits of behavior. However, there are many more who have started down the path to blended learning adoption but never made it to the “aha moment.” There are many schools in which one or two teachers embrace the new model while the rest of the school continues with the status quo.

Innovations in Education Culture

I see a great opportunity to accelerate adoption of blended learning models by addressing the human aspects of adoption. The innovations that matter most are often not the technological ones. We need a better understanding of the human and organizational behavior aspects of blended learning models, the “soft” innovations that catalyze changes in organizational culture and professional practice.

Simply paying attention to this aspect of the problem uncovers potential solutions. A while ago, I was chatting with a teacher implementing a blended model. She was telling me about issues she faced in her classroom and some solutions that worked. She talked about the blended model requiring a significant change in her professional practice, and that the change took time. Through endured frustrations, daily problem solving, and coaching, she made the transition. While we were talking, she reflected on struggles that some students had with the new model. It occurred to her that students also must adopt new practices, and that some of the challenges could have been avoided if more time had been spent up front preparing students for their new role.

Thanks to the Innosight Institute and others, in 2012 we understand at a high level HOW various blended learning models work. In 2013, I hope to see organizations take a deeper dive into the processes and human interactions contained within each model and the evidence of learner outcomes, i.e. WHY a model works or does not work. I also hope to see a deeper dive into the organizational change and professional practice models, and discoveries of soft innovations that catalyze accelerated adoption.

Wednesday, October 17, 2012

"Big Data" Synergy Across Key Initiatives

This week I had the privilege of presenting on CEDS at the SETDA Leadership Summit along with champions of other key initiatives: SLC, LRMI, the Learning Registry, and My Student Download. The theme was "big data" to support teaching and learning.

I think there is a bigger story than what any of these initiatives is accomplishing on its own. The session provided a great opportunity to talk about how, from the perspective of an educator or student, the separate initiatives fit together and support the kind of metadata and paradata needed for optimizing and personalizing learning. The following video is my version of that story, prepared for the SETDA meeting. The PDF here (http://goo.gl/1qgn5) is a printable "storyboard" of the same.



...the SETDA session and presenters:
Leveraging "Big Data" to Support Digital Age Learning - Roosevelt Room

While states have been focusing on building robust longitudinal data systems for accountability and data-based decision making, a number of public-private efforts have recently launched that have the potential to leverage state, local and educator/student-generated education data in new and exciting ways for teaching and learning. These efforts include those focused on solving K-12 interoperability challenges, improving search and discovery of education resources, sharing of intelligence about the use of education resources, increasing transparency and opening access to education data, and personalizing learning. Participants will get an overview of the work underway in this area, share information about their state's priorities, assess gaps and opportunities, and - in so doing - contribute to the production of a summary white paper for states and policymakers.
Facilitators:
  • Neill Kimrey, Division Director, Division of Instructional Technology, North Carolina Department of Public Instruction
  • Lan Neugent, Assistant Superintendent for Technology, Career, and Adult Education and Chief Information Officer, Virginia Department of Education
Resource Specialists:
  • Jim Goodell, Senior Education Analyst at Quality Information Partners
  • Henry Hipps, Senior Program Officer, Bill & Melinda Gates Foundation
  • Michael Jay, President of Educational Systemics
  • Steve Midgley, Consulting Adviser, US Department of Education
  • Lyndsay Pinkus, Director, National and Federal Policy Initiatives, Data Quality Campaign

Thursday, October 4, 2012

Flipped Accreditation?


Is it time to consider “flipping” the accreditation process? I have some specific ideas about what it means to “flip” accreditation and how an alternative accreditation process could support innovation, but first I will opine on why the flip is needed.

At an event in Washington DC this week, participants were asked, “How are accreditors being a roadblock to innovation?” I’d argue that accreditors can’t help but be a roadblock. The organizational culture and operational processes of today’s accreditation agencies are at odds with innovative learning models of education and competency-based credentialing.

Current accreditation processes are self-described using terms like "self-reflection" and "peer review": employees of traditional institutions reviewing their own organization and others like it. They exist as a self-serving barrier to entry, keeping out the riff-raff. To their credit, accreditors have served a valuable function in separating legitimate institutions of learning from diploma mills. Unfortunately for students and employers, though, the culture and process of accreditation is also a barrier to competition. Innovative new institutions that might offer better models for education don’t fit the mold for which the accreditation process was designed, and with which peer reviewers are comfortable.

One aspect of a flipped accreditation process would be a more granular understanding of what learning takes place at an institution. In the U.S., accreditation generally applies to the institution as a whole. Many non-U.S. accreditation processes take it to the next level, the course of study. I think we can do better than that. I think we can identify the specific sets of competencies that are important for a particular credential and then evaluate whether the institution is teaching those competencies, and whether there is evidence that students who receive the credential have learned them.

What is Flipped Accreditation?

I envision flipped accreditation as a market-driven process. Rather than an accreditation system designed by academia for academia, let the future employers of an institution’s graduates also have a say. Industry groups or specific employers can say what competencies (skills, knowledge, methods of inquiry, habits of practice) are most important to them, and the accreditation process can measure the extent to which institutions graduate individuals with that minimum set of competencies. The minimum set would also include general competencies needed to be a productive citizen regardless of industry or degree.

This market-driven perspective need not constrain the breadth and variety of academic programs in non-industry-related areas of study. Stakeholders who will be the intended beneficiaries of a graduate’s future productivity, regardless of field or profession, might give input into what should be learned for the related degree or certificate. For example, for a fine art degree, why not let the best artists, designers, museum curators, art collectors, dealers, and art educators all provide input on what is important for an artist to learn and develop? There could even be multiple accreditations validating the same degree, based on each group’s values and perspective.

I envision new entities taking on the role of flipped accreditation providers, supported by key industries, while the traditional accreditors continue to accredit traditional institutions. Market demand will eventually drive traditional institutions toward competency-based design of degree programs and toward seeking the additional accreditation based on the flipped model.

Rather than self-reflection and peer review, let the evaluation be based on evidence that the program of study prepares students with the competencies valued by the job market they will enter. Rather than a labor-intensive peer review of the institution, evaluate whether the program is teaching what is important and whether students who graduate have learned it. Let the data validate whether the certificate or degree is aligned with market priorities.
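As a thought experiment, the evidence-based check could reduce to something quite simple. Below is a minimal sketch (the competency names, data, and threshold are all invented for illustration): for each competency the market values, compute the share of graduates with evidence of having learned it, and accredit only if every competency clears a bar.

```python
# A toy "flipped accreditation" check. Names, data, and the 0.85 threshold
# are invented for this sketch; no real accreditation rule is implied.

def competency_coverage(graduates, required):
    """For each required competency, the fraction of graduates showing evidence of it."""
    return {c: sum(c in g for g in graduates) / len(graduates) for c in required}

def accredit(graduates, required, threshold=0.85):
    """Accredit the program only if every market-valued competency clears the threshold."""
    return all(rate >= threshold
               for rate in competency_coverage(graduates, required).values())

# Three graduates' demonstrated competencies vs. an industry-defined minimum set.
grads = [{"color theory", "composition"},
         {"composition", "drawing"},
         {"color theory", "composition"}]
print(accredit(grads, required={"composition"}))             # True: 3/3 show evidence
print(accredit(grads, required={"composition", "drawing"}))  # False: only 1/3 show "drawing"
```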

Monday, September 24, 2012

Design Patterns for Learning

A common mistake in education reform is to focus on what seems innovative about a new platform, solution, or content, rather than on the learning process used and how results vary based on use. I was reminded of this when reading a blog post by John Merrow that gives Khan Academy as an example of 'blended learning' (bit.ly/Pl58Ji). I am an advocate of Khan Academy; it's a great platform, a game changer; my kids use it daily as a key part of the blended learning model adopted by their school. However, the platform is not in itself 'blended learning.' Blended learning is the learner-centric process that includes a blend of interactions between the learner, technology platform(s), and the teacher in a classroom setting.

Yes, Khan Academy is an enabling technology platform with a critical mass of content that can support a range of blended learning methodologies. However, it can also support NON-blended, online-only learning. The 'blend' happens when students and educational "coaches" (including classroom teachers) interact with the system and with each other via the system. The details of the blended model used, and even the decisions and actions made by and for students using Khan Academy, can lead to very successful or very disappointing results.

I suspect this tendency to focus on the platform persists largely because we don't know what we don't know. Well-established disciplines such as architecture and software have matured to a point at which they recognize "design patterns," engineering approaches proven effective against recurring challenges. Unlike design patterns for software, which are implemented as code, design patterns for learning would describe learner experiences, which may or may not involve technology. We don't yet have a catalog of "design patterns" for learning to support emerging innovations.

It would be fair to say that there are well-established instructor-centric 'design patterns' recognized in the field of curriculum development. Moreover, there are pedagogical 'patterns' applied in the field of instruction. There are also higher-level models defined, like the classification of blended learning models by the Innosight Institute. However, the kinds of interactions now available for individualized and blended learning call for a new discipline that looks at interactions on a deeper level, beyond the constraints of the traditional classroom-only, teacher-student, fixed-time interactions.

Learning happens based on the right mix of interactions for each learner, which may include a blend of technology and human interactions. We know from cognitive science what conditions lead to learning, and even what it takes for a person to become an expert in an area of cognitive development. For example, we know that new learning builds on background knowledge and works when the new concept or skill is within the learner's zone of proximal development. We also know the value of feedback loops, scaffolding, and practice over time.
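To make the idea concrete, here is a minimal sketch of what one entry in a catalog of learning design patterns might record. The field names are hypothetical (no such standard catalog exists yet), but they encode the conditions above: prerequisite background knowledge, a zone-of-proximal-development check, feedback loops, scaffolding, and spaced practice.

```python
from dataclasses import dataclass

@dataclass
class LearningDesignPattern:
    """One hypothetical entry in a catalog of design patterns for learning.

    The field names are illustrative only; no standard catalog exists yet.
    """
    name: str               # e.g., "Mastery loop with peer scaffolding"
    problem: str            # the recurring learning challenge it addresses
    prerequisites: list     # background knowledge the pattern assumes
    interactions: list      # learner/teacher/technology interactions involved
    feedback_loop: str      # how and when the learner gets feedback
    scaffolding: str        # supports that fade as proficiency grows
    practice_schedule: str  # e.g., "spaced retrieval over three weeks"
    evidence_of_learning: list  # observable outcomes showing the pattern worked

def within_zpd(pattern: LearningDesignPattern, known_skills: set) -> bool:
    """A pattern fits a learner when it builds only on skills the learner already has."""
    return all(prereq in known_skills for prereq in pattern.prerequisites)
```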

The next breakthrough in education may be to establish and mature a professional specialty in "learning interactions engineering," in the development of design patterns for learning interactions, and in optimizing the application of those patterns for individual learners.

Friday, July 20, 2012

Common Education Data Standards (CEDS) Version 3: Standardizing Data to Support Formative Assessment Processes

Last week at STATS-DC I was joined by Nancy Burke, Grafton Public School District (North Dakota), Lee Rabbitt, Newport Public Schools (Rhode Island), and David Weinberger, Yonkers Public Schools (New York) for this presentation about Standardizing Data to Support Formative Assessment Process Use in Schools. My three co-presenters are members of the Common Education Data Standards (CEDS) Stakeholders Group and have been an important part of the CEDS version 3 focus on teaching and learning.


One interesting thing about this presentation was that we used PowerPoint as our "sandbox" for the development work, so the presentation content was developed long before we knew it would become a presentation at STATS-DC. We started by developing a model of the human processes, and had that fully defined before working on the data elements needed to support those processes. We avoided the risk of thinking about formative assessment as a thing, a "kind of test," and focused instead on the process of teaching and learning.


Formative assessment is a process by which teachers and students use data to inform:
– where they need to go,
– where they are, and
– how to close the gap.
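Read as an engineering feedback loop, those three questions also suggest the shape of the supporting data. Here is a minimal sketch, with hypothetical names rather than actual CEDS elements:

```python
from dataclasses import dataclass

@dataclass
class FormativeSnapshot:
    """Hypothetical record of one pass through the formative assessment loop."""
    learning_goal: str  # where the learner needs to go
    evidence: str       # where the learner is now
    gap: str            # difference between the goal and the current evidence
    next_action: str    # feedback or activity chosen to close the gap

def formative_step(goal, observe, choose_action):
    """One turn of the loop: observe, compare to the goal, pick the next action."""
    evidence = observe()                     # e.g., an exit ticket or practice-set result
    action = choose_action(goal, evidence)   # the chosen action is then carried out,
    return FormativeSnapshot(                # and the loop repeats
        learning_goal=goal,
        evidence=evidence,
        gap=f"distance from '{goal}' given '{evidence}'",
        next_action=action,
    )
```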


Formative assessment content specialists Margaret Heritage and Susan M. Brookhart helped us refine the process model based on research findings and promising practices. Organizations like the Innosight Institute helped us check assumptions about the formative data needed to support various blended learning models. We worked with projects such as the Learning Resource Metadata Initiative (LRMI), Open Badges, the Race to the Top Assessment consortia (RTTA), SLC, EdFi, and the Learning Registry (LR) to ensure these implementation-oriented projects will be compatible with the common vocabulary for education data defined in CEDS v3.
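As one concrete illustration of how the pieces fit together, an LRMI-tagged resource can declare which standard it teaches via an alignment object. Sketched here as a Python dict (the property names follow LRMI/schema.org, but the resource itself is fictitious):

```python
# Sketch of an LRMI-style resource description as a Python dict.
# Property names follow LRMI/schema.org; the resource itself is made up.
resource = {
    "@type": "CreativeWork",
    "name": "Fraction Equivalence Practice Set",  # fictitious resource
    "learningResourceType": "exercise",
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "teaches",  # LRMI also allows "assesses", "requires", ...
        "educationalFramework": "Common Core State Standards",
        "targetName": "CCSS.Math.Content.4.NF.A.1",
        "targetUrl": "http://www.corestandards.org/Math/Content/4/NF/A/1/",
    },
}
```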

See the slides here.

The Bridge Between Data Standards and Learning Standards—Common Core State Standards (CCSS) and Common Education Data Standards (CEDS)

Last week I co-presented at STATS-DC about the work that is being done to bridge gaps between data standards and learning standards.  Maureen Wentworth of the Council of Chief State School Officers (CCSSO) and Greg Grossmeier of Creative Commons joined me to discuss the collaboration between many initiatives including the Common Core State Standards (CCSS) initiative led by CCSSO and the National Governors Association (NGA), Common Education Data Standards (CEDS) led by  IES/NCES under the U.S. Department of Education, and Learning Resource Metadata Initiative (LRMI) led by the Association of Education Publishers (AEP) and Creative Commons.


The presentation slides are here.

Wednesday, May 30, 2012

Data Standards to Support the Formative Assessment Process

Yesterday I had a great meeting with the K-12 stakeholder group directing the development of the Common Education Data Standards.  Version 3 of CEDS (a common vocabulary for P-20 data) will include standard data element definitions to support the formative assessment process.
“Formative assessment is a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students’ achievement of intended instructional outcomes.” (FAST SCASS, 2008, October)
David Weinberger, one of the group's LEA representatives from Yonkers Public Schools, commented after the meeting,
"I am really excited that we are finding ways to include instructionally relevant information into the data standards in ways that are clearly meaningful and important."


I share in David's excitement. The expansion of the data standards will support research-supported processes of assessment for learning, using progress data to inform instructional decisions and student learning activities.

It has been a privilege to work with the  K-12 stakeholders, content specialists Susan M. Brookhart and Margaret Heritage, and others, on refining the process model and identifying supporting data elements.

The process model draft, shown below, is based on case studies of models being used successfully in practice, cognitive science and learning sciences research findings, and engineering/control theory.

The group is identifying supporting data elements and definitions based on concepts from the Formative Assessment for Students and Teachers (FAST) State Collaborative on Assessment and Student Standards (SCASS) (2008). Rather than "reinvent the wheel," the work is coordinated with many other initiatives interested in standard definitions for the types of data used to inform teaching and learning. The group collaborates with initiatives such as the Learning Resource Metadata Initiative (LRMI), Learning Registry (LR), Schools Interoperability Framework Association (SIFA), EdFi, the Mozilla Foundation's Open Badges, and the Shared Learning Collaborative (SLC). We are also working with CCSSO/NGA partners on the components related to the Common Core State Standards (CCSS) and with organizations supporting the RTTT assessment consortia, looking at more granular competencies "unpacked" from the standards to measure learner progress.

The emerging standards for data related to the formative assessment process include use cases across traditional classroom-based models, blended learning models, and virtual learning models. They will create a common vocabulary for describing the data that most directly supports student learning.
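To give a flavor of what a "common vocabulary" means in practice, here is a toy excerpt of how such element definitions might read (the names and wording are hypothetical, not the actual CEDS v3 elements):

```python
# Hypothetical, simplified element definitions; actual CEDS v3 names and
# wording may differ.
formative_elements = {
    "LearningGoalDescription": "Statement of what the learner is working toward.",
    "EvidenceOfLearning": "Artifact or observation showing current understanding.",
    "FeedbackToLearner": "Information given to the learner to help close the gap.",
    "NextInstructionalStep": "Planned adjustment to the teaching or learning activity.",
}
```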


Friday, March 2, 2012

Un-Blended Blended Learning


In their new monthly column in THE Journal, Michael B. Horn and Heather Staker advise schools to ignore blended learning "best practices".

I would argue that the "best" practices have not yet emerged, although there are some promising practices on track to become proven practices. Horn and Staker are right in this respect: if "best practice" models are defined by mostly irrelevant aspects of the blended learning model, such as where and when students sit in front of a screen, device-to-student ratios, or how many students sit in a computer lab at a time, then educators can make their own model.

The definition for blended learning leaves room for both good and bad practices:
"Blended learning is any time a student learns at least in part at a supervised brickand-
mortar location away from home and at least in part through online delivery
with some element of student control over time, place, path, and/or pace." (Staker, H.; The Rise of K–12 Blended Learning, p.5)
This definition includes models in which there is no connection between the supervised instruction and the self-paced online learning, thus "un-blended" blended learning.  The online and offline learning experiences may be disconnected, or they may be tightly aligned, each activity informing the other.

The emerging "best practices" for blended learning will have less to do with seat time and device count, and more to do with what learners do during both offline and online time, and what teachers and online tools do to support learning.   The best practices will focus the effort on the right kind of learning activities that motivate the right kind of behaviors and lead to the right kind of outcomes.  These are models not to ignore.

Tuesday, January 31, 2012

Common Education Data Standards v2 released!


The Common Education Data Standards v2 release today marks one of many interconnected milestones for 2012 that I think will have far-reaching impact on the U.S. education system. Data standards are not going to revolutionize education... but CEDS is a catalyst. It is serving as a bridge between many other initiatives that collectively have a shot at tipping the scales toward a system of education focused on individual learners rather than groups. This release comes at a time when the "common vocabulary" of common data standards could determine the scale of success for other technology-enabled game-changers such as SLC/SLI, EdFi, P20W data systems, and the innovations being developed and tested in Race to the Top states. Instead of the competing standards of the past, CEDS v2 has carved out common ground. CEDS is just one catalyst for these separate initiatives to pull in a common direction and transform the ecosystem into one in which it is possible to meet the needs of every learner. Here is the official announcement:
The National Center for Education Statistics (NCES) is pleased to announce the Version 2 release of the Common Education Data Standards (CEDS).  The CEDS project is a national, collaborative effort to develop voluntary data standards to streamline the exchange, comparison, and understanding of data within and across P-20 (early learning through postsecondary) institutions and sectors.  CEDS Version 2 includes a broad scope of elements spanning much of the P-20 spectrum and provides greater context for understanding the standards' interrelationships and practical utility.  Specifically, Version 2 of CEDS focuses on elements and modeling in the Early Learning, K12, and Postsecondary sectors and includes domains, entities, elements, option sets, and related use cases.
Version 2 of CEDS can be found at the CEDS website (http://ceds.ed.gov). This website includes three ways to view and interact with CEDS:
1. By Element - Via the CEDS elements page, users can access a searchable catalog of the CEDS "vocabulary";
2. By Relationship - Through the CEDS Data Model, users can explore the relationships that exist among entities and elements;
3. By Comparison - The CEDS Data Alignment Tool allows users to load their organization's data dictionary and compare it, in detail, to CEDS and the data dictionaries of other users.
Educators and policymakers need accurate, timely, and consistent information about students and schools to inform decisionmaking, from planning effective learning experiences to improving schools, reducing costs, and allocating resources, and states need to streamline federal data reporting. When common education data standards are in place, education stakeholders, from early childhood educators through postsecondary administrators, legislators, and researchers, can more efficiently work together toward ensuring student success, using consistent and comparable data throughout all education levels and sectors.
While numerous data standards have been used in the field for decades, there has not emerged a universal language that can serve basic information needs across all sites, levels, and sectors throughout the P-20 environment. By identifying, compiling, and building consensus around the basic, commonly used elements across P-20, CEDS will meet this critical need.
The standards are being developed by NCES with the assistance of a CEDS Stakeholder Group that includes representatives from states, districts, institutions of higher education, state higher education agencies, early childhood organizations, federal program offices, interoperability standards organizations, and key education associations and non-profit organizations. In a parallel effort a group of non-government interested parties with shared goals, including CCSSO, SHEEO, DQC, SIF, and PESC, has come together as a Consortium with foundation support to encourage the effort and assist with communications and adoption of the standards.
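The "By Comparison" option above is, at its core, a mapping exercise. A toy version of that comparison (illustrative element names only; the real Data Alignment Tool is far richer) might look like this:

```python
# Toy comparison of a local data dictionary against a standard vocabulary.
# All element names here are invented for the illustration.
ceds_like_vocabulary = {"FirstName", "LastName", "BirthDate", "EnrollmentStatus"}
local_dictionary = {
    "fname": "FirstName",  # local element -> proposed standard equivalent
    "lname": "LastName",
    "dob": "BirthDate",
    "iep_flag": None,      # None = no standard equivalent identified yet
}

aligned = {k: v for k, v in local_dictionary.items() if v in ceds_like_vocabulary}
gaps = [k for k, v in local_dictionary.items() if v not in ceds_like_vocabulary]
print(f"{len(aligned)} of {len(local_dictionary)} local elements align; needs review: {gaps}")
```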

Wednesday, January 25, 2012

Is the U.S. ready to empower learners with data?

Last week, when I attended the Data Quality Campaign's National Data Summit, I was caught off guard by a subtle but dramatic shift in advocacy for use of data in education. It was no surprise that the event last week would focus on data use. DQC has shifted its emphasis over the past couple of years from promoting quality in state longitudinal data systems to promoting effective use of the data collected in such systems. Aimee Guidera and the DQC staff deserve credit for moving the agenda forward at each step of the way.

It may not have been obvious to many, but last week's meeting took a next step. I expected to hear about expanded uses of educational data beyond school accountability, and more about uses of data that directly impact student learning. I expected to hear about the changing role of state data systems, from accountability (reporting up) to supporting local decision-making (reporting down) at the school district, school, and classroom levels. I expected to hear discussions about how these changing roles require different kinds of data and services provided to teachers. I expected to hear about data for improving instruction and the teaching profession.

What surprised me was discussion about giving students access to data to inform their own learning. Of course this is being done and is nothing new. I was not surprised by the concept, only that it was discussed at a DQC event with the U.S. Secretary of Education, a State Commissioner of Education, and others on the national stage.

It is not a new idea to focus on learning and to empower students to become life-long learners. It is new, however, to discuss on the national stage giving data to students. It is new to have a discussion about the learning that students might do on their own or outside of the traditional public school setting. Long-held unwritten and unspoken turf rules, institutionalized as core to our educational system, have made certain conversations about data use off-limits. For example, "local control" has been an excuse for some states to focus on collecting accountability data, not on data systems that support student learning. And until recently the classroom was off-limits, even to other teachers, as the place for the private practice of a teacher's "art." Professional learning communities are changing that.

Education is too often perceived as something teachers do to students. Teachers "educate" students, principals and LEAs "manage" schools, state education agencies "fund, regulate, and hold accountable" schools and districts, and the federal education agency uses funding and federal policy as levers to hold states and school districts accountable. The institutional culture is preserved as long as everyone stays on their own turf. The unfortunate reality is that student learning is the core purpose of all of these actors.

Because of the institutional turf culture, the national agenda for educational data use has until recently left out the most important participant, the learner. We collect a lot of data about students, but we don't provide students with the data that could optimize their learning experience.

The traditional classroom model of student learning doesn't always promote student ownership, but emerging blended learning models make the student a more active participant. A cultural shift is beginning to change attitudes and behaviors. As U.S. Secretary of Education Arne Duncan said, "Our goal here is not data, our goal is to change behavior and get better outcomes." Better outcomes may require shifts in attitude from "doing my job" (in the context of established turf) to "doing my part to support student learning."

I've been connected with some "disruptive" innovations already being used to empower students with data, multi-state efforts to develop a "shared learning infrastructure" and teaching and learning systems, and experiments in blended learning that shift the data needs from teacher-centric to learner-centric.

A long-time friend and life-long teacher was once asked by a student, "Why do you not think I'm important enough to let me in on what I'm supposed to be learning, and to tell me how well I'm learning it?" The conversation at the DQC event last week posed a similar question to the data community. While there will continue to be turf issues, as well as positive aspects of local vs. national boundaries, I saw the conversation last week as another milestone toward a system able to meet the needs of all learners.

Thursday, January 5, 2012

Insights on Blended Learning

There are interesting observations in this report from a blended learning pilot study that used the Khan Academy platform and had teachers spend more time one-on-one supporting student learning and less time on grading, instructional planning, and group instruction.

One thing that may surprise some people: when students were allowed to choose how they spent time on the Khan site, they chose not to view the videos that established Khan Academy in the first place. They preferred spending time "taking tests" to watching videos... yes, you read that correctly. This is in stark contrast to the outcry from education professionals that there should be "less testing".

What the less-testing crowd doesn't understand is the process of "Assessment AS Learning": the continuous feedback and challenge provided by the online experience both supports and motivates learning. Assessment of individual competencies is embedded in a "gaming" context that provides an achievable short-term goal. Working to get to the "next level" is fulfilling, even addicting (in a good way). And as with online games, individualized blended learning promotes students helping students in ways not found in a traditional competitive classroom environment.

The best computer/online games don't get in the way or provide too much help; it is often trial and error for the user trying to develop the skill needed to reach the next level, and that is part of what makes games addicting... "I'll stop just as soon as I figure out how to finish this level." The thrill of learning is in discovering something new or beating the challenge of a new skill. Games designed so that all users eventually succeed might monitor user behavior and provide some redirection when absolutely needed, just before the user would give up. But most often, when games don't have 'intelligent tutors', the user will ask a peer, "How do you beat this level?"

Blended learning has an advantage over software-only, self-paced learning: a teacher has access to the data from each learner's online experience and can provide targeted one-on-one help that goes beyond what is built into the online platform. However, this is a new kind of professional practice in its infancy. Best practices must be defined, and teacher proficiency developed in those practices. In the same way, online learning platforms can be improved over time by adding logic that knows when to step in with a suggestion such as, "You've tried that problem set 10 times. You may be more equipped to conquer this level if you watch this video," which video segment or activity to recommend, and when to trigger an alert for a teacher to intervene, diagnose, and prescribe an alternative learning path.
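The kind of logic described above is straightforward to sketch. The thresholds and messages below are invented for illustration; no actual platform's rules are implied:

```python
# Invented thresholds and messages illustrating the intervention logic above.

def suggest_next_step(attempts: int, watched_video: bool) -> str:
    """Decide when the platform should nudge and when a teacher should step in."""
    if attempts >= 15:
        return "ALERT TEACHER: intervene, diagnose, and prescribe an alternative learning path"
    if attempts >= 10 and not watched_video:
        return ("You've tried that problem set 10 times. You may be more equipped "
                "to conquer this level if you watch this video.")
    return "keep practicing"  # struggle is part of learning; don't step in too early

print(suggest_next_step(attempts=10, watched_video=False))
```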