29 September 2016

1 Big Takeaway from #LEC2016


I tend to operate at the margins of learning at the school I work in. It can be a lonely feeling. You can feel like a boat against the current, borne back into the past. Believing in that orgastic future...

But I need to believe that this future of education isn't one that year by year recedes before us, always to be eluded.

. . . 

I am not alone.


That is my big takeaway from a recent conference I went to—Lincoln Educator's Conference 2016—hosted by Escuelas Lincoln in Buenos Aires, Argentina. That I am not alone in believing in a different future for education, one that is rooted in figuring out what learning is first, and not one rooted primarily in the traditions of compliance and conformity (cf. Ken Robinson).

. . . 


This is the first conference I've been to where I felt more like a presenter than an attendee. LEC2016 was committed to re-igniting education. There were over 40 presenters and over 250 attendees from Argentina and Uruguay. I was asked to give a workshop and be part of the Keynote Slam. I gave two workshops on How Space Structures Learning (materials here). My keynote touched on a theme I had been thinking about for quite a while, something I call The Logic of Learning. I also attended a few very good workshops by other educators.

How do I know I am not alone?


Every time I engaged with educators I got into thought-provoking, instigating, alliance-building, and sometimes "You're a radical!" or "You just blew my mind" or "I'm going to do so many things differently" conversations. All of these conversations focused on something that seems fundamental to me yet exceedingly difficult to realize in our modern school: returning to the basics of human-centered learning.

We are so often overwhelmed by the noise of complexity we've designed into the system; looking to document and check off boxes because we believe we need to prove the "mechanizations" of school. We do this at the expense of allowing teachers to embrace the messiness inherent in the simplicity of learning; to be learners themselves, and be empowered to try new things, to experiment, to create cultures of cooperative, fun learning experiences. 

How do I know I am not alone? 


Because I saw the vast majority of the crowd nodding their heads and listening during my keynote. 

Because more than a handful of people sought me out afterwards to connect, and stay in contact. 

Because the first question I was asked after my keynote, which merely introduced the idea of balancing out the simple proposition of if teaching, then learning to include more focus on the antecedents to deep learning, was an excited yet surprised "are you doing this at your school?!" 

No, we are not. But I am. And so are a few others. Usually in spite of the system, not because of it.

And here's the thing. So can you. You are not alone.

. . . 


How do we re-ignite learning? One option. 

Re-bel.


verb
6. to resist or rise against some authority, control, or tradition.

Eschew tradition. Find the fringes. Defend your choices. Do "what would you do right now if you could do whatever you wanted to."

And know.

You are not alone. 


30 August 2016

Slow Jamming the Start to the New School Year


I first saw the quote above while attending a pre-conference about the Roosevelt Innovation Academy. I was reminded of it after reflecting on how I started my first three weeks of school this year. Let me explain.

I wanted to think big this year for my 7th grade Geography classes, thinking in ways that would make me approach the teaching and learning I had control over 180 degrees differently than before. I didn't create a list (maybe I should have), but thinking about it now, I had around three big ideas: I wanted to create a deep learning environment; I wanted to use space differently; I wanted to go mostly gradeless.

Now wanting to do big things and actually doing them are two very different things. The time we have is finite, and the effort we can put in is limited by the world we live in. My big ideas? I had neither the time nor the wherewithal to work on them over the break—I was traveling through Peru and also had to deal with the loss of our beloved dog. Big ideas. Meet reality.

How far did I get during my inservice days before the kids arrived? About 50% through 1/3 of the big three: I spent the better part of my three planning days creating new learning spaces in my new classroom. Honestly, I was a little nervous. 


We usually start the year on a Thursday with a full 8-class schedule, and repeat this schedule on Friday. Then on the following Monday we start our normal Mon-Thurs A/B block schedule, with all 8 classes on Friday. In my opinion, it makes the start less intimidating for both teachers and students.

However, this year the administration decided to start with an A day Thursday and a B day Friday. Instead of an easy, low-threshold start, we jumped right into "school" mode. Meaning most teachers jumped right in too. Syllabi, class rules, grading policy, curricular goals, procedures...you know, the Harry Wong approach. The school seemed to think we just needed to step on the gas pedal and off we'd go.

I didn't. Not that I didn't think about trying to. But I just wasn't there. Couldn't get there. And was confident enough to be ok with not being there. It turned out better than any of my 6 previous back-to-school starts.

What did we do instead? We watched a weird video where I asked them to watch, listen, and then make links to what they think Geography is.

We defined Geography using an ideation strategy that will be part of our toolset this year. Afterwards we wrote the definition on the window (gasp, "we can write on the windows in here?!" said the 7th graders).

We came up with as many questions as we could about Geography and then sorted them between Googleable and NonGoogleable questions, a key skillset we want to develop. We put our NonGoogleable questions out in the hall for everyone to see.

We had a "snowball" fight to get to know each other. We talked about our vacations; I told them about me. They got to ask questions and I told them about the sad story of my dog dying over break.

By the end of the third block together we hadn't talked once about grades, rules, procedures, etc. I hadn't planned further than the day I was in. And the walls didn't burn down. Just the opposite.

I was trying to think big! But reality kept pulling me back down. So how did I start the year? Slowly. Very, very slowly. I knew I wanted to do things differently, but I hadn't had time yet to enact the changes. So instead of hurrying to put together course syllabi, backwards design my first unit, and make meaningful lesson plans to "get me through" the first two weeks (all advice I received as a beginning teacher), I didn't do any of it. I wouldn't have been happy with any of it anyway.

Instead, I stumbled head first into the second part of the Mayo Clinic Center for Innovation quote: start small.

By the end of the third day, I noticed that they all kept wanting to talk about the Olympics with me, and each other. Was I watching? Oh, did you see the injury where the guy broke his leg? Did I see Bolt (with the dap for emphasis)? How about Simone Biles, she won like all the records? Did you know I am from Brazil? Excited 7th grade energy buzzing around me every day on something that was real and relevant to them.

Maybe if I were a smarter and better-prepared teacher, I would have made this connection before school started. But since I am not, when it finally hit me, I knew it: the Olympics are the perfect introduction to Geography.

We couldn't continue slow forever. Now I needed to move fast and create a pathway to harness their genuine interest in the Olympics. What I didn't do though is quickly sit down and backwards plan from my assessment to my standards; I didn't use my curriculum mapping software. Why would I? I didn't know yet where we wanted to go together with this. Go back to starting small.

So we created a map of all of the cities where both the Summer and Winter Olympics had been held, and answered basic geography questions based on our findings. We read a short history and current issues article about the Olympics. We watched short Claro Olympic videos of the day. Piquing their interest while front-loading new knowledge and new questions.


By the time Back-to-School night came around, we were over two weeks and 5 full blocks into school, knee-deep in picking topics for our first real killer blog stories, when I realized I had better make a syllabus and figure out my big grading ideas before the parents arrived that night at 7 pm.

I tried to start small and move fast at the same time. I created a rough framework of Geography themes and topics to guide us through the year (we'll see if they last), and a rough idea of what "gradeless" might mean for us. I gave them a syllabus and talked about how we are going to use our ePortfolios to document our learning journey, create knowledge artifacts for feedback, and have quarterly "grade" meetings. They said I should start a revolution and get the other teachers on board.

That night, I was honest with the parents: I said I think middle school is the perfect time to emphasize learning over grades, that I was going to experiment 1st quarter, that I will keep them updated, and I hope they are all on board. Not. One. Single. Objection. Think big. Start small. Move fast.

Where are we today? Last Friday we talked with Will Carless, a friend and investigative reporter living in Rio de Janeiro, about the Olympics and how to write a good story.

Today we made visual thinking maps of our stories and are getting ready to write our first draft. Hopefully we will get our first drafts completed and ready to publish soon. I will also try to give my first round of feedback soon.

I am still trying to think big. I am still way behind on the normal teacher tasks of organizing and planning. But I have learned that starting small and moving fast is probably the best way to make the big thinking attainable.

I also realized I am not alone in thinking that slow jamming the start to the new school year might be a good thing. A great article in The Atlantic came out recently highlighting the Finnish approach to the new school year: slow and easy. I also just saw an Education Week article today that "talks back" to Harry Wong's approach to the first days.

I'd like to say this was all planned and based on deliberate action. But it wasn't. And maybe that is the point. I fell head first and ended up sticking the landing. But hey, I'll take falling head first into a philosophy where the Finns have my back. Who wouldn't?


27 August 2016

TOK Presentation as Compliance: A Reply by the TOK Subject Manager (and further comments by me)

In response to my first post about the TOK Presentation moderation outcomes from May 2016, Maria Zubizarreta, Subject manager for TOK, has written a detailed and thoughtful response to some of the concerns raised not only by me but many experienced TOK teachers throughout the world.

I appreciate Maria's willingness to engage in conversation concerning the recent worldwide "blow-up" over the TOK Presentation moderation. Rereading my original post, I feel that it was probably worded too strongly in certain sections and was not generous enough to the competing side of the moderation argument (I can hear one of my old philosophy professors scolding me to keep the "principle of charity" in mind...).

However, I continue to come away from these "clarifications" from IB with a feeling of belittlement and condescension. The continued stance from IB is one of teacher "misconceptions," "misinformation," and "confusions." It's an unfortunate deliberation tactic. I feel like I am a reasonably smart person who tries reasonably hard to understand the course to the best of my abilities. Am I perfect? Absolutely not. However, I do expect a reciprocal relationship of deliberating ideas with any administrator, whether it be my school admin or the administrators of the course I am in charge of helping my students learn.

To continue to deflect all the criticism back to the teachers themselves, teachers who are doing their best, is to reduce the whole issue to a one-sided affair. And for me, that is just plainly not the case. There is more going on here than "misconceptions" that need clarification. To hide from these very real concerns, to not address them, to not reflect and learn from them, is a disservice to the nature and spirit of the very pursuit we are all involved in: learning, and the spirit of collaboration and meaning making that is at the heart of learning...and ToK.

The Letter in its entirety, with rejoinders added by me.


Dear Mr. Koss:

Thank you for sending your comments and concerns about the assessment of the TOK presentation.

I hope that I can clarify some of the misconceptions that seem to be widespread amongst many teachers.

1. You seem to be separating the completion of the TK/PPD (planning of the presentation) from the actual delivery of the presentation as if these were two different things that ought to be kept separate. There are no two different things here… the TK/PPD is the product presented in a summarized way… it is meant to show the planning of the presentation, hence the name.
We do quite a bit of planning and prototyping in my class. Do we use the TK/PPD as our exclusive planning document? No. We use a variety of tools and scaffolding to accomplish this. However, I will point out that planning and presenting are two different verbs. It seems silly to insist that they are the same thing.

Why should the presentation be different than the planning of the presentation in terms of content and the ability to explore a RLS from a TOK perspective? If the planning has been done right, then preparation and product should not be different in relevant content and skills.
Quite simply, they are two very different knowledge artifacts. They will always vary in terms of content and ability. Can they both reasonably show similar content and ability? Yes. But to do so requires a special and unique focus on the filling out of the form as a knowledge artifact, which takes it beyond a planning designation, in my opinion. And would then require criteria.

2. You seem to think that the demands of the presentation (showing the fulfilment of the assessment objectives) is an unachievable task. If that were the case, then how do we explain the 7s, 8s, 9s and 10s?
No. This year we had two 9s become 8s, so I do not believe it is an unachievable task. My main concern is how, from one year to the next, my class and I could go from understanding the demands of the TK/PPD and the rubric well enough for scores to stay the same (including an 8) or be moderated up (6s to 7s), to having presentations that were overall better, in my opinion, moderated down 1-4 points.

It is clearly possible to use the planning document properly for the purposes of representing what will be delivered in the actual presentation. Perhaps it is important to realize that the TK/PPD is a working document, a preparation document… not an administrative requirement. It is different to a Group 3 IA form, it is not a coversheet for the presentation, it IS the presentation in its skeletal form. Nothing in the content and analysis of the presentation should be different or come as a surprise. Where in the assessment instrument is there any reference to the delivery of the presentation? Why is it that the wonderful things kids can say on a one-time presentation they cannot include in their planning? The presentation is not improvisation, it is putting in action what they previously planned with plenty of time; plenty of time to review, amend, correct… unlike the one-time presentation.
Yes, it is possible to use it as such. And that is how I thought we had used it, both last year and this year. Again, the available data I have is that we did a good enough job last year to have our scores stay the same or increase, and a poor enough job this year to have our scores decrease 1-4 points. The disconnect is that, in my judgement, our presentations were better overall this year.

As far as the question "Where in the assessment instrument is there any reference to the delivery of the presentation?" Besides it explicitly being called the "TOK presentation assessment instrument"? How else am I supposed to interpret the assessment instrument other than as one of presentation? The presentation is the context by which I am determining whether or not they have shown analysis and exploration of TOK concepts and skills in the real world around them. It is what I am measuring.

Again, there is nothing that says they are not able to include what they did in their presentation on their TK/PPD. But this does not mean the two amount to the same measure of learning and understanding. And my students, at least, do not engage in a one-time presentation. Their presentations are a reflection of a year's worth of work bringing the tools and language of TOK into real-world inquiry. However, the discontinuity of scores from one year to the next is not lost on me.

3. You ask: “Shouldn’t my justification of my judgements count for something? I am the one who saw the presentation”. My answer is yes, of course. Your justification does count if it shows a good understanding and application of the assessment instrument as interpreted by the senior team.
And I ask: ‘where in the assessment instrument is there any reference to the delivery of the presentation? The examiner does not actually need to see the delivery of the presentation if the candidates have completed the TK/PPD appropriately. I argue it should be a lot easier for candidates to put their ideas on paper, revise, review, correct, think, re-think, re-write, than to present them on their one chance in front of a class full of other kids.
I have trouble seeing how my judgements could have been given full and deliberate consideration, given the way that they were marked down; or, if they were, they were very quickly discarded and the scores were instead based upon a senior examiner's interpretation of what a good TK/PPD looks like.

You are right that the examiner does not need to see the delivery to see if candidates have "completed the TK/PPD appropriately." However, that was not what I was assessing. I was assessing their final performance. I saw first hand their preparation and planning through the exercises we did in class. We spent time on filling out the TK/PPD, but at the end of the day, it was viewed as a planning document, not a performance assessment. 

4. You mention the “… growing vocalization of TOK teachers around the world over assessment subjectivity”. Whose subjectivity are you referring to here? Is the teacher’s assessment of their own kids actually less subjective than the moderation of teachers’ ability to understand and apply the assessment instrument?
I am referring to everyone's subjectivity. It is a tangle of subjectivity. However, for me, teaching and learning is about putting the principles into practice. This is what we try to do in my class, both me and the students. This removes some of the subjectivity, and makes it more objective for our class. This is what creates learning autonomy.

5. You also mention that the assessment is not about performance, but about compliance. I am not sure that I fully understand your point here; compliance to what? Please don’t read this as a rhetorical question, it is an actual question. How exactly is performance measured according to you? Or rather, how should it be measured? And allow me to ask the question, how would viewing the presentation make the assessment less subjective and compliant in your opinion?
My argument was more to the point that the assessment has turned into one of more compliance, less performance. "Compliance to what" is exactly the question that many TOK teachers I know have been asking. Performance, as a measure of mastery, is a type of assessment that demonstrates the learning. I thought the assessment instrument actually does a very good job of guiding the measurement of mastery. The disconnect, again, for me, is using the TK/PPD as an equal measurement of this mastery.

As to your final question, this gets to the heart of the whole deliberation, I think. Take music and plays. Both can involve writing and performing. However, they are not equal measures of the same thing. That seems obvious to me. What we were putting into practice in our classroom was the performance; making TOK real. We do this through planning and practicing. However, what I was measuring was the performance. It was not for them to comply to what they thought they had been taught, but rather for them to put into action their learning.

For me, what makes it less subjective is that we learned the process together. We practiced together. We tried to put into practice the principles of TOK as best we could. And then I measured them according to the rubric as we had understood it together. That makes it an objective measure for us. We have a shared understanding of what we are trying to master, and how they are trying to demonstrate that mastery. This final act of them presenting and me viewing is what makes it more objective and less about compliance, at least in one tangible, real way.

I do not think you can remove the subjectivity overall; that is a fool's errand. However, to reject the objectivity we created through our mutual understanding makes the whole process subjective for everyone.

The main problem, for me, is that moderation has placed complying to the form in front of the actual presentation that was done and my comments justifying what was done--at least in my instance (and many accomplished and veteran TOK teachers it seems as well).

My students do a great job of moving from the RLS to the KQ—that's exactly what we practice in our blog posts and in-class work. Monitoring this is fine, but when my 1-page comments seem to be disregarded and excellent presentations are marked down to 6s, that is when we have lost the thrust behind moderation and thus the balance between autonomy and compliance.

I'll always place more value on autonomy, but I recognize the need for compliance as well.
Can I ask a non-rhetorical question of my own? When was the last time you witnessed a full cohort of students give TOK Presentations, and then went through the exercise of marking them and writing up comments? What did the viewing do for you that the reading couldn't?

Maybe your observations from the above would shed light on what I think much of this hinges on: a philosophical difference on what it means to be assessed.

6. You also mention that we have ‘ruined what was probably the last IA with ‘deep learning’ in the IB. How do you measure ‘deep learning’ and the fact that it used to be the ‘best assessment’? The only difference in the assessment is that now the task is being moderated, whereas in the past, the teachers’ marks were not moderated at all. You might know this or not, but I am sure you can imagine what the consequence of the lack of moderation was. Marks were heavily and artificially inflated in the IA to counterbalance the marks of the essay. Every session we sampled actual presentations and we saw increasing evidence that that was the case. So I go back to my question, how do you determine what is and what isn’t ‘deep learning’. A task with unmoderated marks represent ‘deep learning’ and one with moderated marks represent ruined assessment?
Ruined might be too strong a word. And yes, I appreciate that some presentation scores were probably inflated. All I can base my argument on is my personal experience. Two years ago my scores were left the same and/or moderated up. This year, my presentations were better overall, as judged by my "objective" measures. Which made me very happy. I thought my students were learning more deeply what it means to look at the real world through a critical TOK lens. And then their scores were moderated down 1-4 marks.

One way to "determine" (a funny word) deep learning is by the process through which my students undertake the task, and then by their ability to create meaning from those tasks and make it their own. I try to think about one of the antecedents to deep learning as the process of creation, feedback, reflection.

So no, it does not come down to moderation vs. non-moderation. That disjunction makes a mockery of everything we have deliberated to this point. Deep learning was what we did together in class. I measure it by my observations and their knowledge artifacts, one of which is the TOK Presentation.

7. Your comments seem to suggest that there isn’t a clear understanding about the purpose of the moderation. The moderation is not about judging the work of the candidate but about judging the ability of the teacher to understand and apply the assessment instrument. 
Again, which leads to my continued confusion over last year vs. this year. How can my ability swing so wildly from one year to the next? Honest question. I am not the best teacher in the world, nor have I been teaching the longest. But all of my observations and experiences told me this year's presentations were better than last. Yet the official moderated judgements told me the exact opposite. I have trouble understanding how I could be so wrong about my abilities.

In order to truly judge this ability by the teacher, wouldn't IB at a minimum have to keep a database of past scores by that teacher and the moderation outcomes? In order to establish baselines and longitudinal data?

Also, and I have trouble overstating this: what the moderators have is a sample of a document that is not the artifact that was assessed by the assessor. For me, when I am looking at the assessment instrument, I see many different pathways toward an 8, or a 5. However, these different pathways get boiled down to one pathway within the TK/PPD document and moderation. Because what one 8 does is now representative of all 8s for that cohort. This seems less than sound.

To be honest, it seems much more complicated than just making it a one-off, year-by-year case of moderation (if the IB were to truly look at the methodology being put into place—I would actually love to see a researcher provide an analysis of this assessment methodology, because I think it has a few questionable assumptions that need to be re-examined).

This leads to #8 below. 

8. You mention that the rubric does not indicate the criteria to fill out the TK/PPD and that presentations are being moderated according to an unknown standard. Why is it necessary for the assessment instrument to include where or by what means candidates have to show their understanding of TOK? What the rubric is focused on is the content and skills that need to be evident to show a good exploration and analysis of a real-life situation from a TOK perspective.  
You mention unknown criteria. I am not sure what you mean by that, as the criteria by which examiners assess the ability of the teachers to award marks are clearly set out in the assessment instrument.
This only makes sense if you think that writing and performing are the same measure. I think I tried to address this above. What I must conclude as a teacher is that I have to teach one way of "planning" in order to make it fair for all students who might happen to get the same score as someone else. This would entail compliance with a set of criteria that I have not been provided.

9. It is important to highlight that our moderation process uses a quality model to ensure that examiners are marking to the same standard set by the senior examining team. Examiners who apply a different standard are not allowed to continue moderation.
And as this system may still not be perfect, we provide schools with the opportunity to challenge and question the moderation by requesting an EUR service. 
Yes, thank$ for that.

And finally, I apologize if the clarifications to some truly concerning comments raised on the OCC forum sounded defensive and paternalistic, that was not the intention. The intention was to urgently clarify some very worrying misconceptions and misinformation evident in many, many comments.

Best wishes,

Maria Zubizarreta
Subject manager, TOK

One final point, because it reaffirms for me what I think is probably the major issue going on: normalization. IB Score Reports recently released a data analysis of their TOK scores. Their report can be found here. Their summary of findings:



Summary of Findings
1. More schools experienced moderation of their ToK Presentations in 2016 than 2015.
2. More ToK Presentations were moderated in 2016 than 2015.
3. The degree of moderation increased on ToK Presentations in 2016 from 2015.
4. The maximum degree of moderation increased on the ToK Presentation in 2016 from 2015.
And perhaps most importantly,
5. Final ToK Presentation scores were much lower in 2016 than 2015.
This evidence points to a concerted effort by IB to normalize TOK within an average distribution, and I think acknowledging this would go a long way with many teachers. They should be honest about it (and maybe there have been oblique references to this, both above by Ms Zubizarreta and in some of the recent communications). This effort may be based on a myth, but those philosophical debates go deeper than the concerns with the TOK Presentation moderation that have been raised by many TOK teachers.

I would like to thank Ms Zubizarreta for taking the time to respond so thoroughly and thoughtfully. In quite a bit of this deliberation I feel like we are talking past one another, which is unfortunate and probably a result of many of the problems we try to get our students to tackle in TOK. We live in a complex world, and the best we can do is try to make meaning from it in a critical and authentic way.

26 August 2016

Long Night. Opportunity for Empathy?

Yesterday we had our Back to School night, the not-uncommon practice where schools invite parents in to follow their student's schedule (for us, 10 minutes per class). It started at 7 pm, so I decided to stay at school from 4–7 to try to get caught up on my beginning-of-the-year to-do list.

I got home about 9:15, had dinner with my wife, and went to bed around 11 pm. How am I feeling today? Tired, lethargic, not totally motivated, and a little absent-minded. I actually brought mate to school this morning to give myself a bit of a boost (I am not a caffeine-in-the-morning person).

Realization: Perfect moment to empathize with my students, especially the older high school students.


Yesterday, I had to “do school” until 9 pm. The next day, I do not feel great. I actually feel both mentally and physically bad.

Simple empathy exercise: I asked them how late they usually "do school." Almost all of my older high school students said well past 9 pm (many of my students eat dinner at 9 pm, and then continue homework until they cannot any longer). How do my students feel, then, coming back to school after long days and nights "doing school," 5 days a week?

And because they may be conditioned to it, does it make the reality any better?

Any more correct?

Our principal actually got on the intercom and mandated parents to leave at 9 pm in order for us, the teachers, to be able to get home and rest in order to be ready for the next day of teaching. I also had one parent nod off during my 4B Geography class, their last period of the day.

Should we be issuing similar mandates for those we are entrusted to help become healthy young adults? What does this say about our system where we think we might have to?

I know the debate surrounding homework is an age-old one. I know the real vs. the ideal paradox is ever-present (especially given some of the seeming intractable realities of the IBDP). But using experiences like last night as opportunities to again remind myself to try to put myself in the shoes of those I care about (the students) was a good experience for me.

I heard their echoes in myself.

Maybe you will too?

Cross-posted on Medium.com | https://medium.com/@joekoss/long-night-opportunity-for-empathy-a634bdbc6669#.dd3y51y5b

23 August 2016

3 Key Tips on How to Create a Killer Blog Post


Completing a blog post for an assignment and creating a blog post for an audience are two different things. A blog post for an assignment is usually completed because we have to. It is therefore usually an uninspired summary of what the student thinks the teacher wants.

A blog post for an audience is created to share, in order to interact with and inform an authentic, global, unknown readership. Creating a blog post for an audience is much harder than writing an essay or completing a blog for an assignment. You have to think about so many more things! This short introduction gives 3 key tips for creating that killer blog post: 1) write for story; 2) hook the reader; 3) design for effect.

1. Writing for story is different from writing for teachers

The most important component of any blog post is the content. What we need to remember is that we are writing a story to be read by a global audience, not an assignment to be read by an audience of one. When we write for story we need to be writing for a purpose.

Writing a purposeful story has two essential elements: focus and transition. Most importantly, as Jon Franklin says in Writing for Story, a story needs a focus—the mind of the audience has to have something specific to focus on in order to make an identification.

Franklin uses a helpful analogy to understand these two elements. Imagine a filmmaker wanting to capture the essence of a huge crowd of people. The filmmaker would go about this in two ways. First, she would zoom in on individual faces, faces that would visually show the emotion and movement of the person. Second, the filmmaker would zoom out, pan, and find another face, in other words, transition from one face to another by showing the magnitude of the crowd. In this way, “the crowd in the hands of the filmmaker would become a human thing, and therefore meaningful.”

Ira Glass, in the first part of a great four part interview, describes a similar process, which he calls “anecdote and reflection” (h/t Corey Topf). The anecdote is the series of events or actions; these raise questions that act as bait for your reader. The reflection is the why behind the anecdote or action; these give meaning to the story.


In almost all narratives a focus starts with an image or anecdote, and these images become the fundamental component of story. Franklin stresses that the human mind is preoccupied with action. In order to focus the audience’s mind on action, successful images must be based on active verbs—”if they aren’t, they can’t transmit much in the way of meaningful information.”

One final note about successful narratives: almost all successful stories use a basic, common literary technique. There is a complication and a hero; a problem to be solved and someone who solves it. The best stories find a way to incorporate this technique.

2. You have 1 second to hook a potential reader


“My product redesign post.”
“Hate your sneakers? Here’s 3 reasons why.”
Which title are you more likely to click on? Why? Well, there is a reason why clickbait was added to the Oxford English Dictionary in 2014. No matter how great your content is, we live in a digital world oversaturated with information. And we are all competing for the attention of the same audience at any given time.

The title is so important because it is the title’s job to turn a potential reader into an actual reader. Or at least get them to the landing page. Then it is the job of the content (and design—see below!) to keep them there. Take the title of #2 seriously—you probably have around 1 second to make a potential reader click on your article. Create multiple titles, prototype and iterate, ask for feedback, and then choose your title carefully...and don’t be afraid to be bold!

3. Do not forget about design!

Design is not just for Paris fashion shows and hipsters in Apple stores. Thinking seriously about best design practices while creating a blog post should be at the forefront of every creator’s mind. There are two major design ideas to keep in mind while creating a killer blog post:
  1. Use Images
  2. Keep it Clean
Use Images
Images should be large, high quality, visually interesting, and most importantly, visually explanatory. Images help tell the story. Numerous studies have shown that people learn better from words and pictures than from words alone. Killer blog posts take advantage of this research by creating visually rich posts that use high quality, complementary images, and use them to good effect.

Keep it Clean
Keep it clean refers to following a few basic rules about reading on the internet. Almost all good blogs incorporate these 3 key elements:
  • Simplicity
  • Readability 
  • Consistency 
Simplicity—do not overwhelm the reader with craziness, anywhere. Simplify the layout, the links, the ads, the sidebars, etc. Less is more when creating a blog post for reading.

Readability—focus on easiness on the eyes. Pay careful attention to your font selection, color schemes, word length, images and graphics, pretty much everything you put in. Again, less is usually more.
(For an interesting discussion on color schemes, contrast, and readability, go here.)

Consistency—use consistent layout choices to indicate to your reader what to expect. Same fonts, same colors, same branding, etc. Consistency equals professionalism, accuracy, and reliability, and gives readers a sense of trust in the source.

If you use this and have any suggestions, please share them in the comment section below (disclaimer: I have no formal educational experience in teaching writing).

06 August 2016

The TOK Presentation as Compliance, not Performance

photo credit: William Murphy

Or how TOK Presentation moderation is ruining classroom learning autonomy

Click here to read the reply to this post by the TOK Subject Manager

This year marked the second year of using the new holistic IB rubrics for marking both the Theory of Knowledge essay and presentation. It was also the second year of the TOK Presentation being subjected to moderation by external IB examiners. The moderation follows more or less the same procedures the IB uses for all other IA moderation, using dynamic sampling and tolerance bands.

Except for one glaring difference: the work being moderated for the TOK Presentation is not the product itself, as it is with all other externally moderated IAs. Rather the work moderated is a "Presentation Planning Document (TK/PPD)," which asks the students to, in 500 words or less, describe, state, explain, outline, and show how their presentation "succeeds in showing that TOK concepts can have practical application."

Last year, May 2015, the first year of the new moderation procedures, 5 of our TK/PPDs were selected and the scores were either unchanged or moderated up 1 point. So I thought I was being sufficiently cautious and consistent with my marking, and that my students had taken the task of filling out their TK/PPD seriously and competently. Overall, I was pleased with the presentations and the new holistic rubric, which I considered an improvement over the old analytic rubric.

This year, May 2016, the moderation results flabbergasted me. Again, 5 were selected. However, this year my scores were moderated down 1–4 points. 6s became 2s, 7s became 4s, 8s became 6s, and 9s became 8s. On the whole, I thought my students did better than last year's class in creating, presenting, and explaining how TOK concepts have real-world application. Their reward for working hard, and for me thinking I taught it better? Lower scores.

This is especially frustrating because I take the time to write 1 page teacher comments that I put on the TK/PPD, where I explicitly spell out the logic behind my marks, according to my understanding of the rubric, and the performance as a whole. IB states:

“Marks awarded by teachers for the presentation will be subject to moderation procedures through sampling of the associated TK/PPD forms that have been uploaded. The objective of this process is to judge whether the contents of the TK/PPD form justify the marks given by the teacher for the presentation.”
Shouldn't my justification of my judgements count for something? I am the one who saw the presentation.

(This isn't a one-off occurrence either, or the ramblings of a disgruntled and vindictive teacher; anyone with OCC access can view the TOK blog thread on the assessments, year after year, and see the growing vocalization of TOK teachers around the world over assessment subjectivity.)

This experience has made something that was opaque before quite obvious. The TOK Presentation as an assessment is now not about performance, it is about compliance. And this is not a good development.

A performance assessment is part of a larger complex of "competency assessments" that tries to give students the autonomy to show their mastery of certain knowledge and skills in a real and relevant way. Performances are powerful learning agents, especially when students are given the opportunity to iterate and receive feedback before they are required to do it for real.

The TOK Presentation used to be that one rare IB Assessment that almost fully put the learning into the hands of the students themselves. This is true learning autonomy, for both the teacher and the student. The IB should be embracing more of these types of assessments. Why? Because they are at the heart of deep learning. Instead, they've ruined what is probably the last one.

The Presentation will now need to become an exercise in compliance with unknown moderation standards, met by filling out a document that is not the authentic knowledge artifact. This in turn reduces the actual learning experience—the performance—to a near perfunctory gesture. Which is unfortunate.

This act of moderation itself contradicts sound assessment policy. Marks for whole classes of students are being judged not by the actual, authentic product itself, but by a moderated sample of planning documents. Where in the real world would we allow this? Imagine...great job out in the field today, Mike Trout, but you know, you didn't fill in your daily performance chart correctly, so unfortunately we're going to mark down your accomplishments for today. On top of that, the 4 teammates we selected did a poor job on their charts too, so everyone gets marked down. Which means you lose the game today.

It. Defies. Logic. But then, when have centralized, officially moderated assessments ever corresponded to the logic of learning? They've always been about giving adults a false sense of assurance of their rankings of students by using numbers as a proxy for the real thing.

So here is my conclusion.

Nowhere on the rubric does it indicate the criteria to fill out a TK/PPD. The rubric is explicitly about making TOK real. However, I feel I will now be forced to somehow mark the TK/PPD as the main indicator of the final presentation grade in order to approximate compliance to an unknown standard: an anonymous moderator's interpretation of what proper TOK Presentation planning looks like. And I will be forced to spend limited class time on form filling, instead of critical thinking.

We just completed our Class of 2017 presentations in May. At the bottom of my one-page comments regarding the actual performance, I will be adding the disclaimer that, while this presentation deserved an 8, based on my obligation to try to judge a planning document according to unknown criteria, I will instead give a final mark of 6.

In our TOK classroom the past two years, we worked hard together to scaffold the approach, to practice, to receive feedback, and to create performances we could and should be proud of. This real learning has been taken away and treated as a formality. We now get a form to fill out, and the empty promise of official, moderated, compliance.


NB1: The Official IB TOK Subject Manager wrote a reply to the growing OCC discontent that reads as more defensive and paternalistic than substantive; it addresses none of the actual concerns about moderation and only furthers the confusion...

NB2: The Official IB TOK Subject Manager wrote a reply to this post, which I posted here with further comments by me.

01 June 2016

Experiencing Deep Learning


What is deep learning? It is pretty much what most educators already believe about education. Will Richardson, education critic and promoter of "modern learning," usually defines it as something close to a focus on student agency, choice, and authenticity.

One of the rallying cries of those of us who think school needs to be remade/transformed/revolutionized/etc is that what school needs to do first is focus on defining what we believe learning means.

[A] greater focus on the antecedents to deep learning, not the consequents of sound teaching.
To me this means a greater focus on the antecedents to deep learning, not the consequents of sound teaching. More fuzzy space for student innovating, creating, exploring; less directed mandates about teachers aligning standards and delivering curriculum.

However, in school, what we believe about learning and what type of learning is designed are usually two very different things.

I believe in the power of reflection. I believe it is one of the antecedents to deep learning. But how is it designed?

The John Dewey quote, I assume, has been widely seen by most educators at one point or another. I know I have seen it multiple times. But when I saw it the other day (again) on Twitter it sparked an internal dialogue: if I believe this, what does it mean in practice? How is it designed?

I think the design might look something like this:

Knowledge artifacts are by design shareable 

Bill Cope and Mary Kalantzis, in their New Learning research, talk quite extensively about actively creating knowledge, not passively receiving knowledge. A knowledge artifact is just this: a real, authentic product of active knowledge making; a narrative about what was learned.

Note that sharing has both local and digital modalities. For example, it can be done in a War Room during a design-thinking-based lesson or unit, or in an ePortfolio learning journey.

Shareable knowledge is by design open to feedback 

Giving feedback has long been recognized as an essential component of teaching and learning. Feedback, according to John Hattie, "is conceptualized as information provided by an agent (e.g., teacher, peer, book, parent, self, experience) regarding aspects of one’s performance or understanding" (2007, p. 81). 

Again, feedback can be given locally, e.g. with post-its on butcher paper, and/or digitally, e.g. with Google Doc comments.

Feedback by design creates meaningful spaces for reflection 

Most educators would agree that reflection is a necessary component of learning, and maybe, as John Dewey argues, of thinking itself. Reflection, to Dewey:

involves not simply a sequence of ideas, but consequence — a consecutive ordering in such a way that each determines the next as its proper outcome, while each in turn leans back on its predecessors. The successive portions of the reflective thought grow out of one another and support one another; they do not come and go in a medley. Each phase is a step from something to something — technically speaking, it is a term of thought. Each term leaves a deposit which is utilized in the next term. The stream or flow becomes a train, chain, or thread. 
Reflection comes in many forms, from making a small change to your prototype based on a peer suggestion, to writing a reflective blog post after the completion of a project (which becomes a knowledge artifact itself).


Reflection by design creates deep learning. 

Reflection is not the only antecedent to deep learning. In fact, we need to focus much more of our effort on figuring out what these antecedents are, instead of chiefly trying to figure out how to measure the consequents of sound teaching. But reflection is a powerful antecedent. And it does not happen by chance. We should focus on designing more opportunities for it.

"By three methods we may learn wisdom: First, by reflection, which is noblest; Second, by imitation, which is easiest; and third, by experience, which is the bitterest." —Confucius


Cross posted at medium.com

References:
  • Cope, Bill and Kalantzis, Mary. New Learning. Link.
  • Dewey, John. How We Think. 1933.
  • Di Stefano, Giada, et al. Learning By Thinking. Link.
  • Hattie, John, et al. The Power of Feedback. Link.
  • Popova, Maria. How We Think: John Dewey on the Art of Reflection. Link.
  • Richardson, Will. The Surprising Truth About Learning in Schools. Link.
  • Tolisano, Silvia. Reflection in the Learning Process, Not As An Add On. Link.

31 May 2016

Understanding the TOK Presentation Rubric


The holistic TOK Presentation rubric asks one main question:

Do(es) the presenter(s) succeed in showing how TOK concepts can have practical application? 

However, if you are a student, what does this mean in practice? Last year I began teaching the rubric as having three main parts, which has proven helpful. A great presentation will take all three of these parts into account.

The 3 Main Parts of the TOK Presentation Rubric

1. Knowledge Question formulation

Formulating a good Knowledge Question (KQ) is the first step in a successful TOK Presentation. A TOK Presentation KQ needs to be directly derived from the real-life situation (RLS). It might be helpful to look at the "Think TOK Model" developed by OUP to understand the formulation process.
Understanding the Level 5 descriptors for #1:
  • Well-formulated means it is 2nd order, focused on TOK concepts, and can be applied generally to more than the RLS in question
  • Clearly connected to a specified RLS means that you can clearly explain the development of the KQ from the RLS by using an understanding of knowledge claims

2. Exploration of Knowledge Question

Exploring the Knowledge Question is the heart of the TOK Presentation. The exploration is the main way to show how TOK concepts have practical application. Exploration in the presentation should mainly stay in the 2nd Order world, connecting the RLS and KQ using the tools and language of TOK. The exploration should help create an understanding of the KQ by developing 2nd order knowledge claims that use the TOK framework selected by the student(s). To understand this exploration phase a bit better, we can look at the official IB deconstruction of an exemplar TOK Presentation.

Understanding the Level 5 descriptors for #2:
  • Effective exploration means that the analysis is based within the framework of the Ways of Knowing, Areas of Knowledge, and Personal vs. Shared Knowledge, that we use in TOK
  • Convincing argumentation means that the exploration is based on an insightful, well-thought out and understandable reasoning process
  • Investigation of different perspectives means that you have taken into account how other knower(s) might view the analysis on the KQ itself, not merely different perspectives within the KQ.  This can be done in many ways; usually a consideration of other TOK concepts is helpful (for example: belief, bias, certainty, culture, evidence, experience, explanation, interpretation, intuition, justification, limitations, reliability, subjectivity, truth, values)

3. Outcomes of Analysis

The outcomes are the "conclusions" about how to "answer" the KQ. This is the main reason why there is a presentation at all--because there are insights to share with others. An outcome is a 2nd order claim rooted in the TOK framework chosen by the student(s). These should be clearly stated and connect directly back to the real-life situation. If we look back at the exemplar above, the outcome of the analysis was that scientific prediction as a validation tool has both strengths and weaknesses, which was then applied back to the RLS in question, and to others in both the Natural and Human Sciences.

One of my better TOK Presentations last year focused on the KQ: In what ways is knowledge dependent on language? The main outcome (a 2nd order knowledge claim) was that knowledge is dependent on language in many ways, but that we can communicate our knowledge in more ways than just through language, for example with art and emotion. This proved to be an excellent way both to understand the KQ and to apply the outcome more generally to other RLSs.


Understanding the Level 5 descriptors for #3:
  • Showing significance to the RLS in question and to others means that the outcomes are compelling, that they do influence the way knowledge is created, and that these conclusions can also be applied to other non-connected RLSs


Dividing the TOK Presentation Rubric up into these three main parts should help both the teacher and the presenter(s) create superstar-worthy TOK Presentations.

16 May 2016

School Evaluation Has An Improvement Problem

photo credit: Neil
Last week we had our accreditation visit. 

The lead facilitator said two things to me, as a learner, that I believe he meant as compliments: 1) that I am a button pusher, and that we (education) need more people like me, and 2) to keep being an agitator.

I believe we (as educators) also believe those to be good dispositions for our students to have. Why? Because in our ever changing world we want them to be original challengers of the status quo.

But how do we, as teachers, administrators, and educators, usually model ourselves as learners? Take, for example, an accreditation process. We spend quite a bit of time looking at the past and present—the status quo. But why do we only spend time on defending old stuff rather than also trying to figure out new stuff—to be challengers?

What do we believe drives deep learning? What do we want our kids to do? Then think about what we practice as learners ourselves.

I think the goal of a good accreditation process is to be program agnostic and learning specific. When I asked about the role creative disruption and experimentation should have in a school evaluation process, one of the facilitators agreed and talked about needing to find a balance between the past, present, and future. I completely agree. But the evaluation model has only structured spaces for the past and present. 

Where are our spaces in evaluation for the type of learning we value in our students? Why don't we put some of our beliefs into the curriculum (8th ed. School Improvement through Accreditation) we use to evaluate and measure learning? Where is room for the practitioners to use creativity, experimentation, innovation?

The current school accreditation model, in Russ Ackoff's terms, structures for efficiency within the existing system. By using standards, indicators, and closed modeling, it chiefly looks for improvements within discrete areas (Sections A-G) and discrete time frames (12-month self-study, 5-year review cycle). But it does not structure conversations around what Ackoff calls creative discontinuity: structuring creativity to allow for the "seeing of realities that might otherwise be overlooked." Why should a school wait to transform learning until after the accreditation has checked off the standards that have been scrupulously verified by siloed self-study committees generating volumes of evidence?

There is no balance. 

As I think the facilitators would agree, the Accreditation Standards and Indicators are descriptors, not prescriptions. So, combining two things I have been thinking about recently, evaluation and design, I've gone through the standards and picked out the ones that align with the goals of discontinuous improvement, grouping them into my own category, Category H: Transforming Learning:
  • A3f: …the acquisition and refinement of the skills of leading and following, collaborating, adapting to the ideas of others, constructive problem-solving, and conflict-resolution through experiencing leadership in authentic contexts.
  • B6b: Teachers create stimulating learning environments that are evidenced by students who are engaged and active participants in their learning 
  • B9b: The school encourages pilot curriculum innovations and exploration of new teaching strategies, monitored by appropriate assessment techniques. 
  • D2a: Teachers utilize methods and practices which are consistent with the school’s Guiding Statements and which inspire, encourage and challenge students to reach their full potential.
  • F2e: The school creates student learning opportunities by effectively using the skills of its own community members and by building partnerships with external agencies such as local businesses and professional organizations.
  • F3a: The development and delivery of the school’s complementary programs demonstrate sensitivity to the needs and beliefs of different cultures, foster engagement with the local culture and promote global citizenship.
  • F3b: The school actively supports the development of student leadership and encourages students to undertake service learning. 
  • F3c: The school actively promotes and models global environmental awareness and responsibility across its community. 
  • F3d: The school regularly evaluates its complementary programs to ensure they remain aligned with its Guiding Statements, meet student needs and interests, and foster global citizenship.
Taken together, these indicators represent a very different idea of learning than is found in almost any traditional school that has not already begun to transform its learning DNA. They should shake the business-as-usual approach to learning. Does current school evaluation shake our approach?

Here is the argument. It is not that current evaluation models do no good; they do, especially at schools that still have foundational work to do. But this is not an either/or proposition. It is a both/and.

There is no /and.

Without structuring experimentation into what this new learning DNA might look like, most schools will never get to transforming what they believe about learning because they have never been asked to. We need to study space and structure, and time, and transdisciplinary learning.

All accreditation models should have a transformative piece in place, a place for creativity and discontinuity. A place where it is explicitly said that it is ok to study experimenting, to study innovating, to study discontinuity, to explore the unknown and come back later with what you have learned, not what you have checked off. As a learner, that is the self-study team I want to be a part of.

We can continue to assume the future will look much like the past, and build the best model we can. But how do we know we are not building the best flip phone? Or the best combustion engine? We do not (in fact, all evidence points to the fact that we are). And that simple fact should force us to change. Not in 5 years, not in 12 months. But now.

Check out Part 1 here: The Backwards Design of School Evaluation

05 May 2016

Finding the Future(s) of School: Discontinuous Improvement and Scenario Planning


A couple of recent articles I have read and YouTube videos I have watched, plus a recent email conversation I had, have hit on something I have been thinking about for a few years but never had the vocabulary to talk about in any meaningful way—how can we strategically push school into an unknown future?

The first article was one Will Richardson recently wrote called "We're Trying To Do 'The Wrong Thing Right' In Schools." He used some of the systems thinking ideas of Russ Ackoff found in this video:


Which led me to this video of his:

Those 21 minutes of Prof. Ackoff have spurred more synthesizing about the way I see my current world (teaching and learning) than at probably any time since I was a junior in college, discussing formal vs. substantive rights for the first time in my Political Philosophy class. In particular, his distinction between efficiency and effectiveness and his idea of creativity as a discontinuity—discontinuous improvement—have really created a new set of understandings for me.

Finding Ackoff led me to start an email conversation with a leading expert in applying a type of systems thinking called Lean Manufacturing (or the Toyota Production System) to health care (he also happens to be my father-in-law). He pointed me in the direction of the second article, found in the Harvard Business Review, called Living in the Futures. It chronicles the 50-year history of a small but transformative experiment at Shell Oil in using creativity and discontinuous improvement—alternative futures planning.

Simply put, alternative futures planning "is about explicitly recognizing and exploring plausible ways in which the future could unfold" (link). Sounds exciting, especially if the system within which you work is old, slow, and not designed for change. Why? Because, as they found at Shell,
They encouraged strategic conversations that went beyond the incremental, comfortable, and familiar progression customary in a consensus culture.

Planning for Possible Futures

From the beginning, those engaged with Shell's scenario practice maintained that scenarios are not predictions but can provide a deeper foundation of knowledge and self-awareness in approaching the future. They also felt that the "official" view of the future—the business-as-usual-outlook—both reflects an optimism bias and is based on the human tendency to see familiar patterns and be blind to the unexpected.
One thing I am not is a systems expert. Nor do I have more than a cursory, ankle-deep understanding of systems thinking, Lean methodology, and alternative futures scenario planning. But I am completely intrigued by the concept of discontinuous improvement.

Because here is the riddle for me: how do you try to do what you are already doing a bit better inside one system (more efficient?)—redesign—while at the same time preparing for a move to a different possible future/system (looking for effectiveness?)—revolution?

And this corollary: to what extent does looking for efficiencies within the current model (continuous improvement?) prevent preparing for a different possible future (discontinuous improvement?)?

For example, I believe there are things that are common to (almost) every sound learning design, and these small things can (probably) be transferable to such and such systems. However, I also believe that school, or education, is going through a model crisis of an order of magnitude we haven't seen in our lifetimes. And most schools are not preparing for, nor even really considering, this future new system/paradigm.

That is why Ackoff resonated with me so much. If a school took a "possible futures" exercise seriously, I would be very curious to see how far it got in questioning the very model/system/paradigm it is operating within. And if it got that far, how would it square what it is doing in the classroom right now with "what would you do right now if you could do whatever you wanted to," as Ackoff asked?

Conclusion

Scenarios encourage attention to the future's openness and irreducible uncertainty.
Most schools are not designed to incorporate continuous improvement methodology, let alone discontinuous improvement methodology. If they do have some type of commitment to improvement, it is usually reduced to improving test scores.

Ackoff, Shell, et al have inspired a new round of questions for me.

  • What would futures planning look like in school?
  • What possible futures would be created in all the various contexts school operates in?
  • What does a commitment to discontinuous improvement look like in school?
  • How do we get school to institutionalize organizational creativity to allow for the "seeing of realities that might otherwise be overlooked?"
They have also inspired an urgency to frame new conversations. Because the conversations I am most interested in about education have moved past conversations about pedagogical strategies (important, but possibly only seeking efficiency). They are conversations about "what would you do if you could do anything" with the design of school, the system(s) and model(s), the spaces and structures: what looks to me like a conversation about true effectiveness.

The power of this conversation is, I think, aptly summarized by a closing quote from the HBR article:
It will help break the habit...of assuming that the future will look much like the present.

23 April 2016

3 Big Takeaways from #L4LAASSA



I just got back from attending AASSA's 2016 educators' conference (where I also presented a "Learning Lab"), whose theme this year was "Looking for Learning." Keynotes this year included Kevin Bartlett of the Common Ground Collaborative, Martin Skelton of Fieldwork Education, Mike Johnston of Compass Education, and Ewan McIntosh of NoTosh.com. There were also over 100 workshops given by teachers and other educators from the AASSA network.

As was the case last year when I went to Innovate Graded 2015, I needed to come home and let the conference sink in a bit before being able to properly synthesize all the keynotes, workshops, and seminars I attended. I came away with three big takeaways, centered on the concepts of engagement, design, and conferences themselves.


Take Away #1: Engagement—way more than being on-task

(NB: I think a better word here might have been "empowerment" but...)

Kevin, Martin, Mike, and Ewan had (at least) one common theme that I keyed in on: what does true student engagement look like, and what are its antecedents? They challenged the teachers and educators in attendance to ask themselves what learning engagement really looks like. True student engagement, in Ewan's words, is the difference between pseudo-contexts and real contexts. True student engagement is when we give the learner agency and choice instead of outcomes that are predetermined and linear. True student engagement is mostly student-directed, not only teacher-directed.

Ewan's distinction between pseudo-contexts and real contexts especially hit home with me this time. I could have my students create a "recipe" for how universal religions spread in my World History course (which I did last year). Or I could have them take those principles and concepts about how religions impact societies and apply them to ISIS, the Rohingya, Tibet, the Roman Catholic Church, evangelization and radicalization, etc. My wife could have her students create a dino-museum during their study of dinosaurs (which she is doing now). Or they could take the same concepts of earth science, extinction, and climate change, apply them to our current global environment, and come up with possible solutions while also learning about dinosaurs. The end goals are fuzzier with the real contexts, but maybe we need to start embracing fuzzy thinking more?


We also talked about this idea in my Social Studies cohort: the difference between doing a project as an end-of-unit assessment and doing a project-based learning unit, where the real context drives, organizes, and creates the learning.

I think I know where the real engagement will be found.


Take Away #2: Design—it is system wide

If you've been anywhere around education over the past 10 years, then you have undoubtedly come across the Understanding by Design and Backwards Design framework created by Jay McTighe and Grant Wiggins. But that is not the learning by design I want to highlight.

All artificial things are designed. —Don Norman, The Design of Everyday Things
School is an artificial thing. We are the designers of the spaces and structures that enable learning. The task before schools is no longer to merely design the delivery. The paradigm is shifting. The task is now system wide.

Will Richardson recently had a great post on schools "trying to do the wrong thing right," using the systems thinking of Russ Ackoff as a jumping-off point (this was my first introduction to Prof Ackoff, and I'm blown away by him). Kevin touched on this theme as well.

The keynotes by Ewan, Mike, and Martin fit into this theme as well. Ewan talked about agile leadership, giving us 10 great takeaways to think about as teachers and leaders. Mike talked about looking for learning beyond the walls, challenging the audience to embrace the complexity of this new paradigm, to take learning "beyond": beyond your subject, your school, the linear, your country, the physical.
And Martin's mantra was:

School, as a system, needs to take design seriously, at all levels, by all stakeholders. We are all designers of the possible. Russ Ackoff defines a system as "a whole, that consists of parts, each of which can affect its behavior or its properties," where the parts are interdependent. The "essential properties of a system are properties of the whole which none of its parts have."


The essential property of school is learning. None of the individual parts have learning. Nor can we improve individual parts and think we are going to improve the whole. Thinking of school as a system requires designing the system first—an overall design of what we believe about deep learning. Are we up for the challenge?

(FWIW, I am continually blown away both by the work Ewan and his NoTosh team do in encouraging and leading these types of full-system design discussions, and by their willingness to freely share their learning and resources in the marketplace of ideas; i.e., go to their site right now and start using their materials!)


Take Away #3: Conferences—partners in the unknown

I am not old enough, nor have I taught long enough, to have gone to conferences in the '90s, or the '00s for that matter. But my understanding of the "traditional conference" is one that was primarily focused on pedagogy, classroom strategies, and curriculum. Both Innovate Graded 2015 and AASSA 2016 Looking for Learning had a very different feeling. The energy in the air was not about learning how to do the thing you are doing better. Rather, the energy was about getting excited about the unknown unknowns.

I think Mike Johnston might have captured this best in his talk about pushing beyond our normal educational walls. Our current educational model or system rests on a premise that holds less and less: that the guarantee of learning discrete subjects is enough, that good grades are enough, that getting into a good university is enough. The new conference is asking: to what extent do we need to fundamentally revolutionize the dominant structures of school itself, of the system?

I don't think you could have walked away from AASSA 2016 without thinking about the revolution.


Why? Because this seems to be more or less true.

Conclusion

So where do we go?
However, 
So remember,

And,

Those were my 3 big takeaways. I would love to hear yours if you were there. Leave a comment or send a tweet.