I was one of the many thousands who took part in the online Stanford AI class -- in my case as much to find out how they'd make the class work as to learn some of the AI topics I missed as an undergrad way back when. Now that it's over, here are a few thoughts:
I'll put my conclusion first. Large online classes like these won't replace local university courses; they will transform them. More and more, university lecturers are going to become content curators and facilitators, and they're going to need to write less and less of their own presentation material.
Of course, mine is a slightly biased view as the Intelligent Book, the interactive cloud teaching software I've been developing, makes it very easy to incorporate third party material like this into a lecture course. And as you read through this, you'll sense a certain "this is why we need Intelligent Books" theme in my comments!
Anyway, on to the detail of what I thought of the course...
The video lectures, which were like video-recorded personal tutorials, worked very well indeed. They were clear, concise, and engaging, and had the feel of being in a small class rather than a large one. Thrun and Norvig are excellent communicators and very interesting to listen to. The fact that it was an ongoing course (everyone working to a schedule) was good motivation to make time to watch the videos and do the quizzes. That's the good news, and it really is very very good news indeed.
But every class has its flaws. So what were this one's?
Well, the class interaction and quizzes were simplistic, both in style and content. For instance, some of the final exam's questions on computer vision weren't about artificial intelligence at all, but were simple early-high-school physics questions about optics. An object that's yay big is yay distant from a camera with a focal length of such-and-such; what's the size of the image on the image plane? Here are three objects in a scene; this camera sees them in this order; what order do they appear to be in to these other cameras looking at the scene from different angles?
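(To sketch the level -- and this is me paraphrasing the style of question, not quoting the exam -- under the usual pinhole camera model, an object of height H at distance Z from a camera with focal length f projects to an image of height roughly h = f × H / Z on the image plane, by similar triangles. That's geometry, not AI.)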
I tend to think that while the videos are very effective for presenting a topic, they aren't so efficient for quizzes and reference. For reference -- looking up a formula just to check you've got it right -- seeking within a video to find the point where it was on screen is much slower than flicking back through text. For quizzes, the format they used only supported tick-the-box and fill-in-the-box questions, yet still required the lecturers to spend time recording a video introduction for each question.
(So, this is already one area where I see the Intelligent Book bringing an advantage -- it makes it easy for courses to use many different kinds of content. Hop from the video to the notes, to the quiz, to the advice...)
The interaction between class members was essentially limited to forums and whatever students organised offline. The videos were pre-recorded, so of course there wasn't much to-and-fro between the lecturers and the class, except in the "office hours" on Google Hangouts.
This is unfortunate, as interactive teaching is very beneficial and is starting to gain traction in universities. Eric Mazur, Bob Beichner, Rich Felder, and others in science and engineering education have been trying to encourage lecturers to interact with their classes more, and move beyond simple one-way transmission of material. Having taught a class last semester using the Intelligent Book, with the students chatting, discussing, and giving feedback live on the lecture screen, and answering and discussing questions as a class, I genuinely missed the interaction.
So what do I think will happen next -- how do I think/hope this will change university engineering and science education?
Well, the videos really are excellent. So the first thing that will happen is that other universities will want to use these videos, and others like them, in their courses. Rather than spending another two hours working on his PowerPoint slides for a class, Dr Joe Bloggs might be better off finding and showing an excellent video by famous presenters, and then spending his energy interacting with the class to further their understanding.
And I think that trend -- to use more third party prerecorded material and spend more time interacting with the class rather than preparing material -- will grow very quickly. Lecturers won't just enjoy easy access to good material; they'll realise that the lecturers who recorded these videos get a great deal of exposure and can become famous teachers -- producing the next great teaching video will become another route to increasing your academic profile. I think we'll quickly see lecturers competing to get their videos used in other people's classes.
And that, I think, means that traditional lectures will change. Short videos punctuated by class discussions and exercises, and linked to rich sets of notes and social material, will become far more common than they are now. But then, I'm biased, because that's just the sort of thing that the Intelligent Book makes easy.
I teach technology design (particularly software engineering, human-computer interaction, Scala, mobile, and web development) at the University of New England. I do research in how we can design smart, useful systems and make sure that reasoning machines aren't unreasonable machines -- especially in technology education and education technology. I also re-invent far too many of my own wheels.