Saturday, 21 April 2018

Blended learning / Flipped classroom

In this post I show you how I teach the course Digital Signal Processing at the University of Glasgow.

It's taught flipped-classroom style: the students learn by doing extensive lab experiments and by watching videos in their own time.

The video clips are a combination of handwritten derivations on paper and live coding, both captured from the screen.
This photo shows my setup for recording the videos:

All this is combined with a lot of labs where the students have to solve real-world problems by coding in Python (object oriented) and C++. In the past we had:
  • Filtering of audio to make it sound better by faking an SM58 response
  • Reverse engineering the (in-)famous aural exciter
  • Doing OFDM via an audio link (=ADSL)
  • Filtering ECG with FIR and IIR filters
  • Writing a heart-rate detector
  • Removing noise from audio recordings
  • Writing Python / SWIG wrappers for efficient FIR/IIR filters 
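To give a flavour of the FIR filtering labs (removing mains hum from an ECG, say), here is a minimal numpy sketch of a windowed-sinc lowpass filter. It is my own illustration, not the actual lab code; the signal is a stand-in for a real ECG trace.

```python
import numpy as np

def fir_lowpass(cutoff, fs, ntaps=101):
    """Windowed-sinc lowpass FIR filter (Hamming window).

    cutoff: cutoff frequency in Hz, fs: sampling rate in Hz.
    """
    n = np.arange(ntaps) - (ntaps - 1) / 2
    fc = cutoff / fs                      # normalised cutoff (cycles/sample)
    h = 2 * fc * np.sinc(2 * fc * n)      # ideal lowpass impulse response
    h *= np.hamming(ntaps)                # taper to reduce ripple
    return h / np.sum(h)                  # unit gain at DC

fs = 250                                  # typical ECG sampling rate
t = np.arange(2 * fs) / fs                # two seconds of samples
ecg_like = np.sin(2 * np.pi * 1.2 * t)    # stand-in for an ECG trace
noisy = ecg_like + 0.5 * np.sin(2 * np.pi * 50 * t)  # 50 Hz mains hum

h = fir_lowpass(20, fs)                   # passband well below 50 Hz
clean = np.convolve(noisy, h, mode="same")
```

In the labs the students implement the convolution themselves (and later wrap an efficient C++ version via SWIG); `np.convolve` is used here only to keep the sketch short.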
There were no traditional lectures, but we had three tutorials prepping the students for the written exam.



Planning is key, as in any other filming task: the better the planning, the fewer takes are necessary.
Prior to recording I prepare every clip, either as a handwritten note which I keep to the side or as a text document outside the capture area - in my case on the right screen. The same applies to programming commands, which I keep pre-prepared on the screen off the capture area. Clips can run from 3 to 30 minutes but ideally should stay short, in the region of 3-5 minutes. However, some mathematical derivations cannot be split up easily and turn out longer. Students will usually pick and choose anyway: they might first watch the beginning and then the end, and only later watch the whole derivation.


Generally, I arrange clips into playlists for YouTube. A playlist then covers a topic, for example FIR filter design. In classical terms a playlist could be seen as a lecture, although here each one really covers a topic. Every playlist then contains a sequence of clips, roughly covering these subtopics:
  • motivation
  • theory
  • example illustrating the theory
  • simulation, and finally a 
  • proper practical example pointing to the real world. 


It's easy to make a mistake which goes unnoticed. In my case my colleague in Singapore (thanks, David!) did the checking and spotted quite a lot of mistakes. However, even mistakes in the final clips are not a disaster: YouTube is a social medium. People can comment underneath, you can add a correction below the clip, and you can let students come up with ideas for how to make it better.


Central technique for the video recordings

Capture a region of the screen which is 1280x720 pixels (HD resolution) and then show within that region whatever needs to be presented.

This content could be a programming environment in Python (Spyder) or the view on a sheet of paper or both combined! The next sections show how to bring this all together.

Video camera

As a video camera I use the Epson visualiser ELPDC06 or the ELPDC07. It acts as a standard webcam, so all I need is a simple webcam viewer. Under Windows 10 this is the standard Microsoft app called Camera.
I use the full resolution of the camera and recommend doing so to avoid aliasing.
In terms of lighting: I didn't use the built-in light of the visualiser because it was too uneven. Just the room light from my office fluorescent lights was perfect for this purpose. Generally, very diffuse light works best (i.e. office ceiling lights).
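The aliasing point is easy to demonstrate in one dimension. This little numpy sketch (my own illustration, not part of the recording setup) subsamples a 7 kHz tone without lowpass filtering first, and the result is indistinguishable from a 1 kHz tone - the same kind of artefact you get when a camera image is scaled down naively.

```python
import numpy as np

fs = 48_000                            # original sampling rate
t = np.arange(fs) / fs                 # one second of samples
tone = np.cos(2 * np.pi * 7_000 * t)   # 7 kHz test tone

decim = 8                              # naive subsampling, no lowpass first
sub = tone[::decim]                    # effective rate 6 kHz, Nyquist 3 kHz

# 7 kHz folds down to |7 kHz - 6 kHz| = 1 kHz
t_sub = np.arange(len(sub)) * decim / fs
alias = np.cos(2 * np.pi * 1_000 * t_sub)
mismatch = np.max(np.abs(sub - alias))  # essentially zero
```

Using the camera's full resolution and letting a proper scaler do the downsizing avoids exactly this folding of fine detail into false low-frequency patterns.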


It is very beneficial to use a clipboard so that the sheets of paper stay in the same place during the recording. This makes it possible to edit out sequences later and to patch up mistakes in the edit, because the drawings won't move. Note that the area around the sheets is also white, to force the automatic exposure of the camera to expose as dark as possible.


A pen with slightly wider strokes than a ballpoint is beneficial. I use this pen by Maped:

It's a fountain pen and takes standard black ink cartridges. However, instead of a steel nib it has a ballpoint tip. This won't create nice calligraphic lines, but it won't smudge and it produces perfectly black lines.

See the difference in my video clips: standard ballpoint pen and fountain pen.

Video capture

The central idea is that the capture area on the screen is the video mixer. The only software I need is some simple screen capture software.

I used Snagit (part of Camtasia) in the past, but now I use Zoom:


  1. Start Zoom.
  2. Go into Settings and, under Video, select "optimise video for 3rd party software".
  3. Start a new meeting.
  4. Join with computer audio.
  5. Now you have a meeting with yourself! :)
  6. Click on "Share" and capture your screen (ideally you have two screens).
  7. Press "Record" on this computer (do not record to the cloud; that will take hours to save).
  8. Now do your presentation.
  9. Use the annotate function to draw the viewer's attention to what you are doing.
  10. End the meeting.



With normal soundcards the problem is that you won't be able to hear yourself through the headphones. However, this is very important, because time is precious and you want to know instantly whether the sound/content is OK. Take breathing, for example: the microphone might catch the air puffs from your breathing and make the recording unusable. A second issue is microphone positioning, because all headsets mentioned here have highly directional microphones. Moving the microphone just a bit up or down will affect the quality of the sound.

For webcasts the TASCAM US-125M or the US-2x2 are just great. They allow capturing audio from different sources such as a microphone (XLR/jack) and feed the mixed sound directly back to the headphones. The US-125M is now being superseded by the US-2x2, which also has phantom power so that professional condenser mics can be used.


I went for the BPHS1 from Audio-Technica:
It has a dynamic microphone which sounds really good compared to standard Skype headsets, and the microphone connects through a proper balanced XLR plug. With cheaper Skype headsets I had trouble with interference, but this headset produces clean sound no matter how noisy the mains power is at work.


For editing I use Sony Vegas because it allows batch processing: it's possible to prepare many clips on the timeline and then Vegas renders them out as separate files, nicely numbered. This means that after a long day of editing I just start it, go to bed, and next morning all clips are nicely rendered and ready to be uploaded in one go. See my previous post about batch processing and the script I'm using for this. The screenshot above shows a whole playlist (i.e. a lecture), which means that all clips are in this single timeline. The region markers define the start and end points for every clip, and the batch render generates separate video files which together form a playlist. All this can run overnight and be uploaded with one drag and drop to YouTube the next morning. It saves an enormous amount of time.
You need the Vegas Pro version to be able to do batch renders. Apart from the very convenient batch render, virtually any video editor is suitable. In the end you need a program which can trim the start and end, remove sections in the middle of a take, and normalise audio and video.
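If your editor lacks batch rendering, the same marker-driven workflow can be approximated with a small script. The sketch below is a hypothetical stand-in (not the script from my earlier post): given a list of (start, end) region markers in seconds, it builds one ffmpeg stream-copy command per clip, numbered in order.

```python
# Sketch of a Vegas-style batch render using ffmpeg: cut one long master
# recording into numbered clips from a list of (start, end) region
# markers. Hypothetical illustration, not the script from the earlier post.
import shlex

def batch_trim_commands(master, regions):
    """Build one ffmpeg trim command per (start, end) region in seconds."""
    cmds = []
    for i, (start, end) in enumerate(regions, 1):
        out = f"clip_{i:02d}.mp4"
        cmds.append(f"ffmpeg -i {shlex.quote(master)} "
                    f"-ss {start} -to {end} -c copy {out}")
    return cmds

for cmd in batch_trim_commands("lecture.mp4", [(0, 210), (215, 520)]):
    print(cmd)
```

With `-c copy` no re-encoding takes place, so the cuts land on keyframes but the whole batch finishes in minutes rather than overnight.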

Note: This is an updated post of a previous blogpost which I wrote a couple of years ago.
