27 August 2016

TOK Presentation as Compliance: A Reply by the TOK Subject Manager (and further comments by me)

In response to my first post about the TOK Presentation moderation outcomes from May 2016, Maria Zubizarreta, Subject manager for TOK, has written a detailed and thoughtful response to some of the concerns raised not only by me but by many experienced TOK teachers throughout the world.

I appreciate Maria's willingness to engage in conversation concerning the recent worldwide "blow-up" over the TOK Presentation moderation. Rereading my original post, I feel that it was probably worded too strongly in certain sections and was not generous enough to the competing side of the moderation argument (I can hear one of my old philosophy professors scolding me to keep the "principle of charity" in mind...).

However, I continue to come away from these "clarifications" from IB with a feeling of belittlement and condescension. The continued stance from IB is one of teacher "misconceptions," "misinformation," and "confusions." It's an unfortunate deliberation tactic. I feel like I am a reasonably smart person who tries reasonably hard to understand the course to the best of my abilities. Am I perfect? Absolutely not. However, I do expect a reciprocal relationship of deliberating ideas with any administrator, whether it be my school admin or the administrators of the course I am in charge of helping my students learn.

To continue to deflect all the criticism back to the teachers themselves, teachers who are doing their best, is to reduce the whole issue to a one-sided affair. And for me, that is plainly not the case. There is more going on here than "misconceptions" that need clarification. To hide from these very real concerns, to not address them, to not reflect and learn from them, is a disservice to the nature and spirit of the very pursuit we are all involved in: learning, and the collaboration and meaning-making that are at the heart of learning...and TOK.

The Letter in its entirety, with rejoinders added by me.


Dear Mr. Koss:

Thank you for sending your comments and concerns about the assessment of the TOK presentation.

I hope that I can clarify some of the misconceptions that seem to be widespread amongst many teachers.

1. You seem to be separating the completion of the TK/PPD (planning of the presentation) from the actual delivery of the presentation as if these were two different things that ought to be kept separate. There are no two different things here… the TK/PPD is the product presented in a summarized way… it is meant to show the planning of the presentation, hence the name.
We do quite a bit of planning and prototyping in my class. Do we use the TK/PPD as our exclusive planning document? No. We use a variety of tools and scaffolding to accomplish this. However, I will point out that planning and presenting are two different verbs. It seems silly to insist that they are the same thing.

Why should the presentation be different than the planning of the presentation in terms of content and the ability to explore a RLS from a TOK perspective? If the planning has been done right, then preparation and product should not be different in relevant content and skills.
Quite simply, they are two very different knowledge artifacts. They will always vary in terms of content and ability. Can they both reasonably show similar content and ability? Yes. But to do so requires a special and unique focus on filling out the form as a knowledge artifact in its own right, which takes it beyond a planning designation, in my opinion. And that would then require its own criteria.

2. You seem to think that the demands of the presentation (showing the fulfilment of the assessment objectives) are an unachievable task. If that were the case, then how do we explain the 7s, 8s, 9s and 10s?
No. This year we had two 9s become 8s, so I do not believe it is an unachievable task. My main concern is how, from one year to the next, my class and I could go from understanding the needs of the TK/PPD and the rubric well enough for scores to stay the same (including an 8) or be moderated up (6s to 7s), to the next year, whose presentations were overall better in my opinion, being moderated down 1-4 points.

It is clearly possible to use the planning document properly for the purposes of representing what will be delivered in the actual presentation. Perhaps it is important to realize that the TK/PPD is a working document, a preparation document… not an administrative requirement. It is different to a Group 3 IA form, it is not a coversheet for the presentation, it IS the presentation in its skeletal form. Nothing in the content and analysis of the presentation should be different or come as a surprise. Where in the assessment instrument is there any reference to the delivery of the presentation? Why is it that the wonderful things kids can say on a one-time presentation they cannot include in their planning? The presentation is not improvisation, it is putting in action what they previously planned with plenty of time; plenty of time to review, amend, correct… unlike the one-time presentation.
Yes, it is possible to use it as such. And that is how I thought we had used it, both last year and this year. Again, the only data available to me show that we did a good enough job last year for our scores to stay the same or increase, and a poor enough job this year for our scores to decrease 1-4 points. The disconnect is that, in my judgement, our presentations were better overall this year.

As for the question "Where in the assessment instrument is there any reference to the delivery of the presentation?": besides it explicitly being called the "TOK presentation assessment instrument"? How else am I supposed to interpret the assessment instrument other than as one of presentation? The presentation is the context by which I am determining whether or not they have shown analysis and exploration of TOK concepts and skills in the real world around them. It is what I am measuring.

Again, there is nothing that says they are not able to include on their TK/PPD what they did in their presentation. But this does not mean the two collapse into the same measure of learning and understanding. And my students, at least, do not engage in a one-time presentation. Their presentations are the reflection of a year's worth of work applying the tools and language of TOK to real-world inquiry. However, the discontinuity of scores from one year to the next is not lost on me.

3. You ask: “Shouldn’t my justification of my judgements count for something? I am the one who saw the presentation”. My answer is yes, of course. Your justification does count if it shows a good understanding and application of the assessment instrument as interpreted by the senior team.
And I ask: “Where in the assessment instrument is there any reference to the delivery of the presentation?” The examiner does not actually need to see the delivery of the presentation if the candidates have completed the TK/PPD appropriately. I argue it should be a lot easier for candidates to put their ideas on paper, revise, review, correct, think, re-think, re-write, than to present them on their one chance in front of a class full of other kids.
I have trouble seeing how my judgements could have been given full and deliberate consideration, given the way they were marked down; or, if they were considered, they were very quickly discarded and the scores were instead based upon a senior examiner's interpretation of what a good TK/PPD looks like.

You are right that the examiner does not need to see the delivery to see if candidates have "completed the TK/PPD appropriately." However, that was not what I was assessing. I was assessing their final performance. I saw first hand their preparation and planning through the exercises we did in class. We spent time on filling out the TK/PPD, but at the end of the day, it was viewed as a planning document, not a performance assessment. 

4. You mention the “… growing vocalization of TOK teachers around the world over assessment subjectivity”. Whose subjectivity are you referring to here? Is the teacher’s assessment of their own kids actually less subjective than the moderation of teachers’ ability to understand and apply the assessment instrument?
I am referring to everyone's subjectivity. It is a tangle of subjectivity. However, for me, teaching and learning are about putting the principles into practice. This is what we try to do in my class, both me and the students. This removes some of the subjectivity, and makes it more objective for our class. This is what creates learning autonomy.

5. You also mention that the assessment is not about performance, but about compliance. I am not sure that I fully understand your point here; compliance to what? Please don’t read this as a rhetorical question, it is an actual question. How exactly is performance measured according to you? Or rather, how should it be measured? And allow me to ask the question, how would viewing the presentation make the assessment less subjective and compliant in your opinion?
My argument was more to the point that the assessment has turned into one of more compliance, less performance. "Compliance to what" is exactly the question that many TOK teachers I know have been asking. Performance, as a measure of mastery, is a type of assessment that demonstrates the learning. I thought the assessment instrument actually does a very good job of guiding the measurement of mastery. The disconnect, again, for me, is using the TK/PPD as an equal measurement of this mastery.

As to your final question, this gets to the heart of the whole deliberation, I think. Take music and plays. Both involve writing and performing, yet the script and the performance are not equal measures of the same thing. That seems obvious to me. What we were working toward in our classroom was the performance: making TOK real. We get there through planning and practicing. However, what I was measuring was the performance itself. It was not for them to comply with what they thought they had been taught, but rather for them to put their learning into action.

For me, what makes it less subjective is that we learned the process together. We practiced together. We tried to put into practice the principles of TOK as best we could. And then I measured them according to the rubric as we had understood it together. That makes it an objective measure for us. We have a shared understanding of what we are trying to master, and how they are trying to demonstrate that mastery. This final act of them presenting and me viewing is what makes it more objective and less about compliance, at least in one tangible, real way.

I do not think you can remove the subjectivity overall; that is a fool's errand. However, to reject the objectivity we created through our mutual understanding makes the whole process subjective for everyone.

The main problem, for me, is that moderation has placed compliance with the form ahead of the actual presentation that was given and my comments justifying what was done, at least in my case (and, it seems, for many accomplished and veteran TOK teachers as well).

My students do a great job of moving from the RLS to the KQ; that's exactly what we practice in our blog posts and in-class work. Monitoring this is fine, but when my one-page comments seem to be disregarded and excellent presentations are marked down to 6s, that is when we have lost the thrust behind moderation and thus the balance between autonomy and compliance.

I'll always place more value on autonomy, but I recognize the need for compliance as well.
Can I ask a non-rhetorical question of my own? When was the last time you witnessed a full cohort of students give TOK Presentations, and then went through the exercise of marking them and writing up comments? What did the viewing do for you that the reading couldn't do?

Maybe your observations from the above would shed light on what I think much of this hinges on: a philosophical difference over what it means to be assessed.

6. You also mention that we have ‘ruined what was probably the last IA with ‘deep learning’ in the IB’. How do you measure ‘deep learning’ and the fact that it used to be the ‘best assessment’? The only difference in the assessment is that now the task is being moderated, whereas in the past, the teachers’ marks were not moderated at all. You might know this or not, but I am sure you can imagine what the consequence of the lack of moderation was. Marks were heavily and artificially inflated in the IA to counterbalance the marks of the essay. Every session we sampled actual presentations and we saw increasing evidence that that was the case. So I go back to my question: how do you determine what is and what isn’t ‘deep learning’? Does a task with unmoderated marks represent ‘deep learning’ while one with moderated marks represents ruined assessment?
Ruined might be too strong a word. And yes, I appreciate that some presentation scores were probably inflated. All I can base my argument on is my personal experience. Two years ago my scores were left the same and/or moderated up. This year, my presentations were better overall, as judged by my "objective" measures, which made me very happy. I thought my students were learning more deeply what it means to look at the real world through a critical TOK lens. And then their scores were moderated down 1-4 marks.

One way to "determine" (a funny word) what deep learning is: by the process through which my students undertake the task, and then by their ability to create meaning from those tasks and make it their own. I try to think of one of the antecedents to deep learning as the process of creation, feedback, and reflection.

So no, it does not come down to moderation vs. no moderation. That disjunction makes a mockery of everything we have deliberated to this point. Deep learning was what we did together in class. I measure that by my observations and their knowledge artifacts, one of which is the TOK Presentation.

7. Your comments seem to suggest that there isn’t a clear understanding about the purpose of the moderation. The moderation is not about judging the work of the candidate but about judging the ability of the teacher to understand and apply the assessment instrument. 
Again, this leads to my continued confusion over last year versus this year. How can my ability swing so wildly from one year to the next? Honest question. I am not the best teacher in the world, nor have I been teaching the longest. But all of my observations and experiences told me this year's presentations were better than last year's. Yet the official moderated judgements told me the exact opposite. I have trouble understanding how I could be so wrong about my abilities.

In order to truly judge a teacher's ability, wouldn't IB at a minimum have to keep a database of that teacher's past scores and the moderation outcomes, in order to establish baselines and longitudinal data?

Also, and I have trouble overstating this: what the moderators have is a sample of a document that is not the artifact that was assessed by the assessor. For me, when I look at the assessment instrument, I see many different pathways toward an 8, or a 5. However, these different pathways get boiled down to one pathway within the TK/PPD document and moderation, because what one 8 does is now treated as representative of all 8s for that cohort. This seems less than sound.

To be honest, it seems much more complicated than just making it a one-off, year-by-year case of moderation (if the IB were to truly look at the methodology being put into place; I would actually love to see a researcher provide an analysis of this assessment methodology, because I think it has a few questionable assumptions that need to be re-examined).

This leads to #8 below. 

8. You mention that the rubric does not indicate the criteria to fill out the TK/PPD and that presentations are being moderated according to an unknown standard. Why is it necessary for the assessment instrument to include where or by what means candidates have to show their understanding of TOK? What the rubric is focused on is the content and skills that need to be evident to show a good exploration and analysis of a real-life situation from a TOK perspective.  
You mention unknown criteria. I am not sure what you mean by that, as the criteria by which examiners assess the ability of the teachers to award marks are clearly set out in the assessment instrument.
This only makes sense if you think that writing and performing are the same measure. I think I tried to address this above. What I must conclude as a teacher is that I have to teach one way of "planning" in order to make it fair for all students who might happen to get the same score as someone else. This would entail compliance with a set of criteria that I have not been given.

9. It is important to highlight that our moderation process uses a quality model to ensure that examiners are marking to the same standard set by the senior examining team. Examiners who apply a different standard are not allowed to continue moderation.
And as this system may still not be perfect, we provide schools with the opportunity to challenge and question the moderation by requesting an EUR service. 
Yes, thank$ for that.

And finally, I apologize if the clarifications to some truly concerning comments raised on the OCC forum sounded defensive and paternalistic, that was not the intention. The intention was to urgently clarify some very worrying misconceptions and misinformation evident in many, many comments.

Best wishes,

Maria Zubizarreta
Subject manager, TOK

One final point, because it reaffirms for me what I think is probably the major issue going on: normalization. IB Score Reports recently released a data analysis of their TOK scores. Their report can be found here. Their summary of findings:



Summary of Findings
1. More schools experienced moderation of their ToK Presentations in 2016 than 2015.
2. More ToK Presentations were moderated in 2016 than 2015.
3. The degree of moderation increased on ToK Presentations in 2016 from 2015.
4. The maximum degree of moderation increased on the ToK Presentation in 2016 from 2015.
And perhaps most importantly,
5. Final ToK Presentation scores were much lower in 2016 than 2015.
This evidence points to a concerted effort by IB to normalize TOK within an average distribution, and I think acknowledging this would go a long way with many teachers. They should be honest about this (and maybe there have been oblique references to this, both above by Ms Zubizarreta and in some of the recent communications). This effort may be based on a myth, but those philosophical debates go deeper than the concerns with the TOK Presentation moderation that have been raised by many TOK teachers.

I would like to thank Ms Zubizarreta for taking the time to respond so thoroughly and thoughtfully. In quite a bit of this deliberation I feel like we are talking past one another, which is unfortunate and probably a result of many of the problems we try to get our students to tackle in TOK. We live in a complex world, and the best we can do is try to make meaning from it in a critical and authentic way.

6 comments:

  1. Dear Joe
    Thank you for sharing this insightful and helpful discussion. I am the Director of Education for Beaconhouse School System. This May was our first session for one of our newly authorized schools. I've been a ToK examiner in the past...and understand from both a teacher and IBO perspective the points you raise. I am in agreement with you...and find the IBO responses oblique and not about learning... The feedback form our school received on the ToK moderated presentation was a disgrace. It gave no constructive feedback for our teacher and consisted of one-word feedback: "some," "none," "most." It is indeed about compliance, task completion over the value of learning. I shall follow your blog and insightful remarks with interest.
    Best wishes
    Lawrence Burke

    Replies
    1. Hi Lawrence, thanks for reading and replying. It has been nice to see that once this dialogue was started, many other more experienced and more accomplished teachers and leaders have similar complaints/frustrations/disillusionment, etc.

  2. Hi Joe,

    Thanks for taking the time and making the effort to challenge this (and to have the courage to make the exchange public). Whoever suggested grading a student on a live presentation by only looking at an outline should have never been able to finish that sentence before getting laughed out of the room. From the student perspective the method of assessment is now completely unhinged from the learning goals. What they are essentially telling students is that you may as well not deliver a presentation at all but rather should just spend a few months making a really great outline. What about those students that struggle with writing and will always express their ideas better verbally with supporting visuals? I'm sure most of us (especially in intl schools with many different language backgrounds) have many students like this, and now their entire ToK grade is based on writing. Allowing for multiple ways to demonstrate learning is kind of Teaching 101 stuff.

    This is a fascinating case of how very smart people can intellectualize themselves into things that are absolutely absurd.

  3. Hi Joe,

    As a new TOK teacher, I truly appreciate this post. I have constantly had doubts about how I can help my students "transfer" their presentation into writing. In my silliest thoughts: why do they have to do a presentation at all, then? Just add those 500 words to the essay. What difference does it make, yes?

    Looking forward to more of your insight.

  4. Very interesting and thoughtful reply. I have been a TOK teacher for a number of years, and I have also moderated, though not TOK. It's a tricky task, but I think moderating an outline as a sample of a presentation is a task set up for failure.

  5. Thanks Joe for voicing our concerns. It is extremely frustrating to get the essays back with no comments at all and therefore no way in which they justify their marks. It is even more frustrating to have watched amazing students deliver an excellent presentation which obviously was awarded a 10, and to later learn that it was moderated 3 points down. With what authority, under what assumptions, based on what evidence????

