Are You a Computer Scientist?
You can develop declarative and procedural knowledge in Computer Science, but no-one seems to be talking about "disciplinary knowledge".
Back in 2000, Channel 4 started to broadcast a series called Faking It, in which people would receive brief-but-intensive training to try to pass themselves off as an expert in an unfamiliar field. The introduction of the new National Curriculum in 2014 led to some ICT teachers, and in particular those teachers who had moved into ICT from other subjects, feeling like they were starring in an episode themselves.
There was some initial resistance, but two years on I think that most teachers have worked their way around the cycle of acceptance and have come to embrace Computing as a subject. They've read up on various Computer Science techniques, learnt to program, and are now asking in forums not what to teach, but how best to teach certain topics.
One of the things that bothered me most when I left my previous career in the software industry to become a teacher was that I could no longer really tell whether I was doing a good job. If you're a programmer, you can tell when your program doesn't work, and you can measure how well it works – with a stopwatch, say, or by looking at the size of the resulting file.
Teaching appears to be more subjective – what works seems to be open to debate. In the space of little more than a week, for example, the TES told us both that a curriculum which is over-reliant on pen and paper, timed exams and memorisation will not suffice for the 21st century and that more emphasis should be placed on memorisation to lay the foundations for more complex tasks.
You might be confidently delivering a course, and your classes might be getting good results (which is obviously a good thing for your students), but not everything is in the specification. As Donald Rumsfeld famously said, "You don't know what you don't know", and there can still be some obvious signs that you're "faking it" even if you can teach all of the topics. The new GCSE specifications are more explicit and help with the subject content, but what's missing is a sense of the subject's underlying philosophy.
I frequent a number of teaching forums, and when I joined a new one last year, the first discussion that caught my eye was about a particular coursework task for GCSE Computer Science. Several posters had proposed a similar solution, but I could see that there was a much more efficient way to approach the task, and I pointed this out. The other contributors immediately responded that efficiency wasn't one of the requirements of the task.
That was true: the task didn't explicitly mention efficiency. It didn't need to, though – efficiency is the raison d'être of the whole subject.
This was nicely demonstrated in last year's BBC4 programme, The Wonder of Algorithms. The NHS and the University of Glasgow's department of Computer Science had worked together to produce a computer program to match people in need of an organ transplant with suitable donors. The program worked well, and the doctors and patients were delighted that everyone had been matched with a new kidney. The computer scientists, though, were disappointed because it had taken half an hour to run.
Computer Scientists, you see, consider efficiency at every available opportunity, not just when questions and tasks ask them to. The biggest difference between ICT and Computing is that ICT was more concerned with how things looked, while Computing is concerned with how things work. Refinement in ICT was about making your output's appearance better suit the audience, whereas refinement in Computing means getting your program to use fewer resources – resources being things such as processor time, memory, disc space or bandwidth.
One way that you could remind yourself to consider efficiency is to use a really slow computer. Dijkstra famously said that the advent of cheap and powerful devices would set programming back 20 years. He was right – computers today are so fast that, for most tasks, we don't need to think about efficiency, and have so much memory that we don't need to think about saving the odd byte here or there.
Unnecessary repetition is usually the biggest waste of processor time, but complex calculations can also be expensive, particularly on a slower computer. When I was a teenager in the 80s, for example, even drawing a circle was something that needed to be done carefully; trigonometric functions (e.g. sines and cosines) take longer to calculate than squares and roots, so it can be quicker to use Pythagoras' theorem.
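To make the comparison concrete, here's a minimal sketch in Python (the centre, radius and step count are just illustrative values): the trigonometric version makes two trig calls per point, while the Pythagorean version recovers each y from x with a single square root.

```python
import math

def circle_points_trig(cx, cy, r, steps=360):
    # One point per step using sin/cos: two trig calls per point.
    return [(cx + r * math.cos(2 * math.pi * i / steps),
             cy + r * math.sin(2 * math.pi * i / steps))
            for i in range(steps)]

def circle_points_pythagoras(cx, cy, r):
    # Step x across the circle and recover y from r^2 = x^2 + y^2:
    # one square root per column, and each root gives two points
    # (the upper and lower halves of the circle).
    points = []
    for x in range(-r, r + 1):
        y = math.sqrt(r * r - x * x)
        points.append((cx + x, cy + y))  # upper semicircle
        points.append((cx + x, cy - y))  # lower semicircle
    return points

points = circle_points_pythagoras(160, 128, 100)
```

On a 1980s home computer you'd go further still – Bresenham's midpoint circle algorithm plots a circle using only integer arithmetic – but even the square-root version illustrates the habit of asking what each operation costs.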
I recently came across a discussion of this task in a forum for Computing teachers:
A student has a Saturday job selling cups of tea and coffee. The tea is £1.20 per cup and the coffee is £1.90. The student should keep a record of the number of cups of each sold. Unfortunately it has been so busy that they have lost count but they know that they have not sold more than 100 of each and the takings are £285. Create an algorithm that will calculate the number of cups of tea and coffee sold.
By the time I saw the question, there were already a number of responses, all suggesting the use of nested loops – one each for tea and coffee, both counting from 0 to 100 and multiplying by the cost of the drinks to see whether the total was £285.
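For concreteness, here's the kind of solution the other posters were describing – a sketch in Python rather than anyone's actual code, with the prices held in pence so that the equality test isn't upset by floating-point rounding:

```python
TEA, COFFEE, TOTAL = 120, 190, 28500  # prices and takings in pence

def brute_force():
    # Try every combination of 0-100 teas and 0-100 coffees:
    # 101 x 101 = 10,201 combinations in the worst case.
    for teas in range(101):
        for coffees in range(101):
            if teas * TEA + coffees * COFFEE == TOTAL:
                return teas, coffees

print(brute_force())  # (95, 90), reached after nearly 10,000 iterations
```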
I was a bit surprised that everyone had suggested the same solution, as it's wildly inefficient – the program would loop up to 10,000 times to find the answer – so I proposed a solution that found the answer in about 14 iterations. As one amount decreases, the other increases, so the quickest way to find the solution is to start with 100 coffees and count down until you'd need more than 100 teas to reach £285; for each count, you work out the cost of the coffees and check (using modular arithmetic) whether the difference between that and £285 is a multiple of £1.20, the price of a tea. I tried both solutions in Python on a new-ish laptop, and both took a negligible amount of time.
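Here's a sketch of that countdown approach – my reconstruction of the idea rather than the code I actually posted – again working in pence:

```python
TEA, COFFEE, TOTAL = 120, 190, 28500  # same pence values as above

def single_loop():
    # Count coffees down from 100; the leftover money must buy an
    # exact number of teas (a multiple of 120p), and at most 100 of them.
    for coffees in range(100, -1, -1):
        remainder = TOTAL - coffees * COFFEE
        if remainder % TEA == 0 and remainder // TEA <= 100:
            return remainder // TEA, coffees

print(single_loop())  # (95, 90), found on the 11th iteration
```

On a modern laptop both versions finish in a fraction of a second, which is why a stopwatch can't separate them.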
Having learnt to program in the 80s, though, I converted both programs into BBC BASIC and ran them in real-time on a BBC Model B emulator – a really slow computer by modern standards. The difference was clear: the single loop took 0.13s, while the nested-loops solution took well over a minute.
To be fair to the other forum contributors, though, it later turned out that the problem in question did actually come from a worksheet on nested loops. That doesn't mean that it's an appropriate use of nested loops, though – it's quite common for opportunists to try to make money from new developments in education. Those of you who remember the introduction of video projectors will also remember that schools were deluged with adverts for "interactive whiteboard resources" (i.e. poor-quality PowerPoint presentations) shortly afterwards.
When the Computing curriculum first appeared, I seriously considered using the BBC Model B emulator to teach programming to my KS3 students, precisely because it's so slow. It was only the complicated procedures for editing and saving programs that led me to look elsewhere.
When you write a program, you can measure how quickly it runs with a stopwatch, and generally the less time it takes, the better. Recently, though, Linus Torvalds has been talking about a slightly more abstract concept – "good taste" code. To summarise, applying the good-taste principle really just involves thinking about your algorithm carefully to create a general function that works in all circumstances, without needing ifs to handle special cases. While this might be a bit too abstract for KS3 classes, it's probably worth a mention to GCSE and A level classes.
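Torvalds' own demonstration was deleting an entry from a singly linked list in C, where an indirect pointer removes the special case for the head of the list. Here's a rough Python analogue of the same principle – my illustration, not his code – in which a dummy node lets one general rule cover every position:

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def remove_with_if(head, target):
    # The "ordinary" version: removing the head is a special case.
    if head is not None and head.value == target:
        return head.next  # the exception, handled by an extra if
    node = head
    while node is not None and node.next is not None:
        if node.next.value == target:
            node.next = node.next.next
            break
        node = node.next
    return head

def remove_general(head, target):
    # The "good taste" version: a dummy node in front of the head means
    # every node, including the first, is "the node after something",
    # so one rule covers all cases.
    dummy = Node(None, head)
    node = dummy
    while node.next is not None:
        if node.next.value == target:
            node.next = node.next.next
            break
        node = node.next
    return dummy.next

head = Node(1, Node(2, Node(3)))
head = remove_general(head, 1)  # removing the head needs no special case
```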
Finally, the other thing that fascinated me when I first became a teacher is that teachers are often asked to do things for which there is no evidence – from accommodating different learning styles to "deep" marking.
As a Computer Scientist, I not only examine my programs and web pages for efficiency, but also want to teach in the most effective way possible. I would find myself asking things like "Where's your evidence that a three-part lesson is better?", "Are starters really effective?", or "Is open and closed mindset training the result of a proper study, or is it the new Brain Gym?" A surprising number of colleagues didn't ask those questions.
I was recently taken aback to see someone asking, in a Computing forum, whether other teachers had considered "making CPUs with boxes and string" when teaching the fetch-execute cycle – and not only that, but a number of people had replied to say that they liked the idea. Now, there aren't necessarily right and wrong ways to teach things, as I mentioned earlier, but no-one else seemed to question why you would do this, or whether it was a good idea. Knowing that we remember what we think about, and that a model of a CPU made with boxes and string would neither look nor function like the real thing, I could think of a reason why making such a model might not be effective; no-one could suggest why it might be.
I've hinted in previous articles that I'm a fan of evidence-based practice, and in particular the work of John Hattie and Daniel Willingham. I thoroughly recommend Why Don't Students Like School? as a guide to using cognitive science to improve your lessons. I've written previously that I don't like projects, and that challenge and repetition are more effective than "fun". These ideas have met with some resistance from colleagues, but I didn't make them up – they were based on research that I'd read (and to which I'd linked in the text). Next time you either like (or don't like) an idea, why not release your inner scientist and see if there's any evidence to back it up (or refute it)?
PS. After I wrote this, the following article, which touches on similar themes, appeared in the TES – "It's time to speak out against the ridiculous amount of poppycock we are spouted at education conferences".
This blog originally appeared in the TES Subject Genius series in December 2016.