Constant Velocity Buggy Practical

My absolute favourite practical is the constant velocity buggy, which is used as the paradigm experiment for the constant velocity model in the physics modeling instruction course (Yr 12). I also run a similar practical in Year 10 and Year 11 science.


Figure 1. The constant speed buggy in all its glory

For those yet to meet these little guys, they are battery-powered toy cars that travel in a straight line at a steady speed (around 30 cm/s). This speed is slow enough that you can drop counters beside the buggy every couple of seconds and get great linear relationships of position vs time (dropping the counters at a regular interval means time is the independent variable, and so goes on the x-axis).
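Put another way, every group's graph is an instance of the same linear model. The numbers below just use the rough 30 cm/s speed and 2-second counter interval mentioned in this post:

```latex
\[
  x(t) = x_0 + v\,t, \qquad
  \Delta x = v\,\Delta t \approx (0.30\ \text{m/s})(2\ \text{s}) = 0.6\ \text{m between counters}
\]
```

The gradient of each position-time graph is the buggy's speed, and the (roughly equal) counter spacing is that same speed showing up as a distance.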

Figure 2: Three whiteboards showing linear lines of best fit with similar gradients.

In fact, I love this practical so much that I ran it 3 times with my Year 12s this year! The first time was in the last 20 minutes of a lesson that mostly consisted of planning the experiment. Although the kids' data was fine, I wanted them to spend more time with the experiment and, importantly, to form lasting memories of what the counters looked like (see below). So the next lesson, with no rush and plenty of time, I had them repeat the same experiment.


Figure 3: These students chose to put down counters every 2 seconds. The lasting memory I wanted was that the counters were an equal distance apart.

After the whiteboard session, where they had a chance to compare data, ask questions and make meaning from their results, I had the kids do the experiment a third time. This time each group was told to start at a different position, some buggies travelled towards the origin, and some had one battery removed. This led to different y-intercepts and gradients (both magnitude and sign).
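In terms of that same linear model, each variation changes exactly one thing (this is just the paragraph above restated in symbols):

```latex
\[
  x(t) = x_0 + v\,t:\qquad
  \text{different start} \Rightarrow \text{different } x_0, \qquad
  \text{towards the origin} \Rightarrow v < 0, \qquad
  \text{one battery removed} \Rightarrow \text{smaller } |v|.
\]
```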

There are so many opportunities for great learning from these simple little toys, and I haven't regretted the nearly 2 weeks I spent letting the kids take data with them, make meaning from that data, and continually refer back to them as they answered more formal problems. Although they were imported from the US (through Delta Education) and weren't cheap, they have easily earned their place in our school.


Physics NCEA Review 3: Standards based grading vs Ranking Students

This post isn’t entirely physics-based. Its argument could be applied to most subjects examined in New Zealand. I have much more to discuss about a review of our physics curriculum, but I want to address some issues from last year’s exams and therefore want to wait till all remarking is finished so I don’t accidentally get myself in trouble…

Today I want to talk about some implications of New Zealand having an examination system that uses a "ranking grade" (also known as "grading on a curve"), which is interestingly dressed up and marketed as "standards-based grading".

From the NCEA website, an “excellence” understanding in Level 2 Physics mechanics is “a comprehensive understanding of mechanics”. Therefore the expectation is that if a student shows “a comprehensive understanding of mechanics”, then they should be awarded an excellence grade. This is unfortunately where the misdirection starts.

This same system, while promoting standards-based grading, has expectations of what the grading curve should look like. Disturbingly, I hear a large number of stories from people who know NCEA markers (not just in physics) that marking schedules for the exams are continually adjusted so that the spread of marks matches the one expected by the grading curve. I didn't explicitly mention it in my post on "5 tau", but I suspect the curriculum/syllabus was changed that year by a marking panel that decided (post hoc) that a question wasn't hard enough, and that to obtain the correct mark distribution for the overall examination the "5 tau" requirement was added.

As a teacher of science, I find this abhorrent.

So, what are the implications of this? Let’s play a “lightning round game“.

Suppose I (Teacher A) am teaching a large (very large) number of physics students and I figure out a way to make them understand physics extraordinarily well. What happens in their exams? They all get excellence grades.

Now, because we look to fit our grades to a curve, what happens to the students of Teacher B, who hasn't changed their approach? Their students' grades are shifted down. From Teacher B's point of view, it looks as though they have done something worse this year, even though nothing about their teaching has changed.

Now suppose that you (Teacher C) also invent a new way to make students understand physics better, but not quite as well as my way. What happens? Maybe some students' grades improve, but the data is skewed by my (Teacher A's) students.

Finally, imagine that all three of us taught the same as we always have, but the examiner accidentally writes an easier-than-normal examination. What happens? In order to maintain the grade curve, some questions are marked harder. Perhaps the examiner is now looking for a special "key word" (5 tau!).

The first three points above are a criticism of a ranking-based system. It impacts our ability to impartially critique our own teaching practice, because the goal posts shift each year. As we enter another exciting year of developing an inquiry, I wonder how we can accurately measure any progress using NCEA examinations that are based on a "ranking system". In fact, for this reason I am now judging my teaching practice with a pre- and post-test using the FCI (Force Concept Inventory) and Lawson's CTSR (Classroom Test of Scientific Reasoning). Although I am still teaching with the NCEA examinations as a goal, at least I have tests that I can compare my results to year on year, and that I know have reliability and validity.

The final point from above (that the examiner writes an easy test and has to tweak the marking schedule) shows specifically the weakness of a ranking system dressed up as standards-based grading. If we had a pure ranking system, this would not be an issue: we would simply report students' ranks based on an unaltered marking schedule. But because we want to try to achieve a certain number of excellences, the marking schedule of our physics exam is distorted and twisted, and this ends up influencing the very curriculum we teach to.
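To make the distinction concrete, here is a minimal, purely hypothetical sketch in Python. The marks, grade quotas and cut scores are all invented for illustration and are not NZQA's actual numbers; it only shows the mechanism. Under fixed cut scores, everyone who meets the standard gets the grade; under fixed quotas, the effective cut scores move with the cohort.

```python
# Purely hypothetical illustration: the marks, quotas and cut scores below are invented.
raw_marks = [34, 41, 48, 52, 55, 61, 63, 70, 74, 82, 88, 91]

# Standards-based (criterion-referenced): fixed cut scores; any number of
# students can earn any grade.
def standards_based(mark, cuts={"Excellence": 80, "Merit": 65, "Achieved": 50}):
    for grade, cut in cuts.items():
        if mark >= cut:
            return grade
    return "Not Achieved"

# Ranking / grading to a curve (norm-referenced): fixed proportions of the
# cohort receive each grade, so the effective cut scores move with the cohort.
def graded_to_curve(marks, quotas=(("Excellence", 0.15), ("Merit", 0.25),
                                   ("Achieved", 0.40), ("Not Achieved", 0.20))):
    ranked = sorted(marks, reverse=True)
    grades, i = {}, 0
    for grade, share in quotas:
        n = round(share * len(ranked))
        for mark in ranked[i:i + n]:
            grades[mark] = grade
        i += n
    for mark in ranked[i:]:  # mop up any rounding leftovers
        grades[mark] = "Not Achieved"
    return grades

curve_grades = graded_to_curve(raw_marks)
for mark in sorted(raw_marks, reverse=True):
    print(mark, standards_based(mark), curve_grades[mark])
```

If every mark in that list went up by 15 next year, the standards-based column would improve while the curve column would keep exactly the same distribution: the Teacher A/B/C situation above in miniature.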

As a teacher of physics, a subject I consider to be objective and full of laws and truth, our current system does not sit well with me.

So let me finish with a specific example. Last year I dropped teaching Level 2 Waves to my students so that they could try to understand Mechanics at a deeper level. How did my change affect your results: a teacher I have never met, with students I have never taught? Although just a drop in the ocean (N=70), it had implications for your students in both Waves and Mechanics! If my students had historically done well in Waves, then by not sitting that exam they left more Merits and Excellences in the pool for your students. If my students had historically struggled in Waves, then by no longer entering it, and decreasing the total number of students sitting that paper, your students would have suffered.

And because my students concentrated on and spent more time with Mechanics, and inevitably got better grades on that paper, unfortunately your students had a lower chance of getting the limited number of Merits and Excellences available in that paper.

Obviously the numbers here mean that any influence is incredibly small, but you get my point. There are a number of flaws in the system…

 

“Why it took 2000 years for humans to crack the motion problem”

…So said Richard McNamara, who ran my physics modeling instruction course in 2017.

Student’s enter our class with 16 years of physics experience. They have dropped things, thrown things, collided things all their life, and as such, their brain has come up with mental models that explain these phenomena and are useful for everyday predictions.

We as physics teachers know that the mental models our kids bring into our classrooms (and that most adults live their everyday lives with) are simple and in most cases wrong, and we give them the name "misconceptions".

There are two important qualities of misconceptions that make it difficult to “upgrade” from misconceptions about motion to a coherent Newtonian conceptual model of motion.

The first is that these misconceptions are persistent. This stood out to me in the short Veritasium video I've linked below.

What sticks out to me is that after watching the experiment, and being proven wrong, all the participants then try to explain away the results: "Yes, but…". When you see this, you realise that not one of these people has had their misconceptions changed at all, even though they were all shown to be wrong.

The second is that misconceptions are not organised in the human mind in any coherent way. Because misconceptions are haphazardly organised in the mind, it is very difficult to simply "remove" them and replace them with the correct mental model. diSessa explains this in his chapter "Knowledge in Pieces", and it forms the basis for why it takes a considerable amount of time and intentional effort by teachers to build the correct mental models in our students.

And this is what Richard was getting at with his quote in the title. It took the human race a significant amount of time and mental effort to move from misconceptions about motion to the coherent conceptual model we use today. Our students are no exception, and we should allocate the appropriate time and learning activities for them to go through this same process.

Physics NCEA Review 2: “5 tau”

In my last blog I talked about the number of students who are passing Level 3 electricity, as an example of why we need to look at revising our current physics standards. There were lots of interesting comments on Facebook, and I am excited to continue the conversation. I thought I might alternate these posts between why we should change and some ideas of what we could change to.

Today's post is about the level of detail provided to us in our NCEA Physics (examinable) curriculum, and why this needs to increase. To give credit where credit is due, this post was inspired by a similar one by Ben Duckett.

In my first year of teaching physics back in New Zealand (2015), one of my pupils achieved an outstanding scholarship in his Scholarship Physics exam. (I can take no credit for this; this student sat Level 3 calculus as a Year 11…) What was particularly interesting was that this student received only a Merit grade for his Level 3 electricity exam (I know I pick on L3 electricity, but I'll definitely branch out in other posts).

One of the questions on which this pupil didn't achieve full marks was the following:

[Figure: the Level 3 exam question on the capacitor time constant]

This question is answered using the concept of a “capacitor time constant”. What my student missed, and I didn’t teach, was that “the time to fully discharge is 5 time constants” (quote from answer schedule).

… What??!! How was he supposed to know that specific piece of information? It’s not in the standard, and all of a sudden it’s a critical piece of curriculum that is responsible for my student’s grade! In fact the standard only says “Capacitors in a DC circuit … time constants”. Very helpful…

And why 5 time constants? 5 time constants is 99% discharged. Why not 3 time constants, which is 95% discharged? Or 8 time constants, which is 99.97% discharged, and 100% discharged to 3 significant figures, which is how many significant figures the main question is written to?
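For anyone who wants to check those percentages, they follow directly from the standard exponential discharge law (nothing here is specific to the marking schedule):

```latex
\[
  Q(t) = Q_0\,e^{-t/\tau}
  \quad\Rightarrow\quad
  \text{fraction discharged after } n\tau = 1 - e^{-n}
\]
\[
  1 - e^{-3} \approx 95.0\%, \qquad
  1 - e^{-5} \approx 99.3\%, \qquad
  1 - e^{-8} \approx 99.97\%.
\]
```

So "5 time constants" is simply a point where the exponential is conventionally treated as negligible; the discharge never strictly finishes.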

So now textbooks promote “5 time constants” as though it is some universal physics rule. What is the next “physics” rule that we are expected to teach that will be effectively plucked out of thin air?

I know I've been picking on Level 3 Electricity, so add a comment below if you have any other specific examples of your own.

Physics NCEA review 1: The “electrical” elephant in the room.

Over the next few months I hope to present a number of arguments for why the current NCEA standards for Level 2 and 3 physics should be reviewed. This review should be carried out by an unrushed panel of teachers, university staff, and academics.

But before I start my arguments, let me explain upfront what I am not trying to do:

  • I am not advocating review/change simply for change's sake. I am a big fan of the saying "if it ain't broke, don't fix it". That being said, I don't think our physics curriculum is completely broken. Although I didn't study under NCEA, I studied a very similar curriculum, and I seemed to turn out ok. My point is rather that our curriculum could be improved, and I hope to make the case for this.
  • I am also not trying to spruik a change to "more 21st century skills". I don't buy the view that "our education system is failing 21st century learners". Rather, I believe we have the most educated population in the history of Homo sapiens. Our current 21st century workers are building robots, artificial intelligence, compostable plastic bags and a New Zealand space industry! We are by no means in a crisis of education from a STEM point of view. So instead of promoting more "21st century skills", I think a successful high school curriculum should produce more STEM graduates who understand physics better. That is our real challenge as physics teachers.

The first indication I had that our standards could be improved came in 2016, my 2nd year teaching back in New Zealand (I had previously taught in Melbourne). I can't remember the exact figures, but between 30% and 40% of my kids chose not to attempt the Level 3 electricity exam.

I initially put this down to my own teaching. Although I would back myself as someone who understands physics well (I have a degree in physics), I am the first to admit that there are parts of the Level 3 electricity course I understand in a mathematical sense but not in a conceptual sense. By this I mean I can use the formulae appropriately, and if I needed to explain a concept, I would use the formula as the basis. But I do not have good conceptual mental models for parts of this course.

After beating myself up about this for a number of years, I was almost relieved to see some national statistics that told a very similar story. Take the 2018 numbers as an example:

  • 9500 pupils were entered for the Level 3 mechanics exam. This is a good starting point for how many kids are in our physics classrooms.
  • 8300 pupils were entered for the Level 3 electricity exam.
  • Of those, 2550 were either absent or didn't attempt this paper! That is 31% of the kids entered for electricity; adding the 1200 who weren't entered at all, 40% of the total kids in our physics classes didn't even open the electricity paper!
  • Furthermore, 1620 pupils failed this paper. This means that of everyone who entered for the electricity exam, only 50% passed. Or, looking at our entire cohort, 43% of the kids in our physics classrooms passed the electricity exam (working shown below).
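For anyone checking my arithmetic, the working behind those percentages uses only the 2018 figures quoted above:

```latex
\[
  \frac{2550}{8300} \approx 31\%, \qquad
  \frac{(9500 - 8300) + 2550}{9500} = \frac{3750}{9500} \approx 40\%
\]
\[
  \text{passed} = 8300 - 2550 - 1620 = 4130, \qquad
  \frac{4130}{8300} \approx 50\%, \qquad
  \frac{4130}{9500} \approx 43\%.
\]
```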

I am sure you would agree that a number of the kids who did pass were simply "formula hunting", and may not have much actual understanding of this topic. Without understanding, this topic will fail to live on in their long-term memories. But that is a topic for another post…

So there is the elephant in the room. If one of our main topics is serving less than half of our kids, does it need to be looked at? My answer is a resounding yes.

What are your thoughts or experiences with the Level 3 electricity standard?

NCEA Results 2018: Initial thoughts

The results are out for last year's NCEA exams. I taught 3 classes of Yr 12 Physics last year (almost 70 kids).

I used a modeling method and only taught mechanics. We spent far longer on motion graphs, motion maps, the differences between acceleration and constant velocity, energy bar charts, etc. The aim of going into much greater depth with mechanics was understanding, as opposed to the memorisation that comes from spending less time on mainly formula-plugging exercises.

My FCI data from this year indicated an improvement:

[Figure: FCI pre- and post-test data, 2018]

My results are under the abbreviation MCG. The others are from a Hestenes article (Notes for a Modeling Theory of Science, Cognition and Instruction).
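For anyone wanting to make the same year-on-year comparison, the usual metric for FCI pre/post data in the physics education literature is Hake's normalised gain. I'm stating the standard formula here rather than claiming it is exactly what the figure plots:

```latex
\[
  \langle g \rangle = \frac{\text{post-test \%} - \text{pre-test \%}}{100\% - \text{pre-test \%}}
\]
```

So a class that goes from, say, 30% to 55% (made-up numbers) has a normalised gain of about 0.36; the improvement is measured against how much room the class had to improve.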

However my NCEA results were interesting.

I had a higher percentage of students receiving Merit and Excellence in the Mechanics paper than in previous years (nearly 50% of the cohort in 2018). This was expected.

But I also had a higher number of kids getting Not Achieved than in previous years (~38%). This was not expected.

Here are my thoughts to try and explain this Not Achieved result:

  • Maybe these bottom kids have not done enough independent work themselves through the year (in groups, others are doing the work for them).
  • In the past my bottom kids still didn’t understand physics, but they had done enough plug-and-chug to pass the test.
  • They may have had enough on their test to start a problem and show understanding. For example, on a momentum conservation question they may have drawn a momentum bar chart. But this would be completely new to markers, and not given credit.

The actions I currently think I will take are:

  • I have significantly cut down the size of the workbook I produced. Last year, being the first year, it had far too much stuff and we didn't get to do everything. Now we have time to complete everything, and that will be the expectation.
  • I choose groups, and we rotate groups every model. Perhaps these bottom students are getting used to the same kids doing all the heavy lifting (thinking).
  • Identify these kids in trouble near the end of the year, and give them a simplified version of physics, with the tools and the practice to do plug-and-chug, to give them a chance at passing the exam (although they might not know the physics…).
  • Explicit problem solving practice, based on Alan van Heuvelen’s worksheets
  • A better revision lead-up/homework throughout Terms 2 and 3, which includes "interleaved practice". This could start off by just getting kids to choose the correct model, and not actually solve the problem (so as to reduce cognitive load).
  • Be more efficient in class. Being a peripatetic teacher last year, and also having "Monty Python Mondays" etc., as a conservative estimate I probably lost 10 mins of every (1 hour) class. By being more efficient, I can effectively add in an extra period almost every week.

These are my initial thoughts. I would love to get my hands on some papers, and have a look at how these kids did.