The teaching style comprises:
- a mix of hand-drawn mathematical derivations,
- running a simulation environment (MATLAB/Octave), and
- real-time C/C++ programs, for example, filtering my ECG.
The basic approach is to capture a 1280x720 pixel region of the screen (HD resolution) and then show within that region whatever needs to be presented.
Generally, I record clips and then arrange them into playlists on YouTube. A playlist covers a topic, for example FIR filter design. In terms of classical teaching, a playlist is the closest equivalent to a lecture, although here it really covers a topic. Every playlist contains a sequence of clips roughly covering these different subtopics:
- an example illustrating the theory,
- a simulation, and finally
- a proper practical example pointing to the real world.
Prior to recording I prepare every clip, either as a handwritten note which I keep on the side or as a text document outside of the capture area - in my case on the left screen. The same applies to programming commands, which I keep pre-prepared on the screen outside the capture area. Clips can be from 3 to 30 minutes but ideally should stay short, in the region of 3-5 minutes. However, some mathematical derivations cannot be split up easily and turn out to be longer. Students usually pick and choose anyway: they might first watch the beginning and then the end, and only later watch the whole derivation.
Planning is key, as in any other filming task. The better the planning, the fewer takes are necessary.
Also getting feedback about the clips is essential. It's easy to make a mistake which goes unnoticed. In my case my colleague in Singapore (thanks, David!) did the checking and spotted quite a lot of mistakes.
Within a clip I introduce the subject with a handwritten outline. Then, for the main content, I again use handwritten text on numerous sheets as the main framework, as I would do in a lecture theatre. Usually my talking closely matches what I'm writing: being aware that for many students English is not their first language, I provide this redundancy so that it's easier for them to follow (native speakers often find this a bit boring, but they just skip these bits). Then I pull other windows into the capture area, such as programming environments, applications, oscilloscopes, web pages, text editors, photos etc. Even films or other YouTube clips would be possible, but I haven't used them yet.
At the end of a clip I summarise what I've lectured. This follows standard storytelling practice.
In terms of takes, I tend to stop as soon as I notice a major mistake and then resume a bit earlier, usually from the top of the page, re-drawing everything on that page. This makes editing easiest.
As a video camera I use the Epson ELPDC06 visualiser. It acts as a standard webcam, so all I need is a simple webcam viewer.
Under Windows a popular webcam viewer is actually called exactly that. Under Windows XP there is no need to download any webcam viewer because one is part of the operating system. Under Linux I use camorama, which is part of the Ubuntu distribution and can be selected in the package/software manager. I use the camera's full resolution and would recommend doing so to avoid aliasing.
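The Linux preview can also be started straight from the command line; this is just a sketch, assuming ffplay (which ships with ffmpeg) is installed and the visualiser enumerates as /dev/video0:

```shell
# Preview the visualiser like any webcam (hedged sketch).
# /dev/video0 is an assumption -- check where your camera shows up,
# and add "-video_size WxH" with the camera's native resolution so you
# really preview (and later capture) at full resolution.
cmd="ffplay -f video4linux2 -i /dev/video0"
echo "$cmd"   # shown as a dry run; replace echo with plain $cmd to open the preview
```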
In terms of lighting: I didn't use the built-in light of the visualiser because it was too uneven. Just the room light from my office fluorescent lights was perfect for this purpose. Generally, very diffuse light works best (i.e. office ceiling lights).
It is very beneficial to use a clipboard so that the sheets of paper stay in the same place during the recording. This makes it possible later to edit out sequences and also to patch up mistakes in the edit, because the drawings won't move. Note that the area around the sheets is also white, to force the automatic exposure of the camera to expose as dark as possible.
It took me a while to find the perfect pen, but then my partner took me to a proper stationery store where I finally found the perfect solution. It's this pen here by Maped:
I had a pretty rough ride finding a working setup for video capture. Usually people use Camtasia, and I had a go at it because it offers capturing from two video sources: the camera for my drawings and my screen. The first problem was that Camtasia cannot show the external video camera properly while capturing, so it is impossible to know if the recording is really sharp or if the writing is legible, but I thought I could manage. However, after a week of testing I gave up on Camtasia, mainly because it is very unstable and crashes on a regular basis. After a crash I had to reboot the computer, which always caused Camtasia to default to recording without sound, wasting even more time. The video files generated by Camtasia are huge and the format is proprietary. One needs a special codec for other editing programs, which is free to use but a pain to install. However, even with the codec installed one needs to unpack the files to edit them with another editing program. Clearly Camtasia wants to force one to use its own video editor, which is again a pain to use and lacks many professional features. Editing the two video files from the overhead camera and the screen capture was time consuming, and the options for combining them were limited.
Finally it dawned on me that the capture area on the screen IS my video mixer! The only software I need is some simple screen capture software. So, I would recommend the following:
Use any (free and/or open source) capture program which can capture a region of your screen and store the film as a standard MP4/H264 file.
My personal solution is pretty geeky but you get the gist: I record all my clips with ffmpeg (now called avconv on my system), which is a command-line video converter but also allows "converting" a region of my screen directly into an MP4 file. It's available for Linux, Windows and Mac. My command line for Linux is:
avconv -f alsa -i pulse -f x11grab -show_region 1 -r 25 -s 1280x720 -i :0.0+1500,200 -threads 0 -b 10M -b:a 256k -ac 1 -strict experimental $1
...which took me quite a while to figure out, so I put it into a script with all these cryptic options in it and just start that from the command window.
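As a sketch of what such a script could look like (the geometry values are the ones from my command above; everything else is plain bash and easy to adapt):

```shell
#!/bin/bash
# record.sh -- a sketch of the wrapper script described above.
SIZE="1280x720"         # HD capture region
OFFSET=":0.0+1500,200"  # display :0.0, region top-left at x=1500, y=200
RATE=25                 # frame rate
OUT="${1:-clip.mp4}"    # output file: first argument, or clip.mp4

CMD="avconv -f alsa -i pulse -f x11grab -show_region 1 \
-r $RATE -s $SIZE -i $OFFSET -threads 0 \
-b 10M -b:a 256k -ac 1 -strict experimental $OUT"

echo "$CMD"   # dry run: shows the assembled command; replace echo with plain $CMD to record
```

Keeping the echo in at first lets you check the assembled options; swapping it for the bare $CMD starts the actual recording.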
You don't need to go down this route, but the bottom line is: use any screen capture program which generates standard MP4 from a region of your screen and which marks the region with a box (see photo above) so that you know what you broadcast and what you don't. Feel free to send me your favourite screen capture programs for Windows or Mac and I'll put them here on this blog.
I started with a standard soundcard and a PC20 headset by Sennheiser (example clip here):
...which is amazingly OK for its price. The microphone can be moved around so that you can adjust the sound quality. For me the main problem was hearing myself in only one ear. I then switched to the Sennheiser PC310, which is an open stereo headset and also sounds slightly richer (example clip):
The microphone is very directional, which is great because it suppresses more or less completely any sound from outside or from the nearby coffee kitchen. However, the main problem with this headset is that it still just uses a standard unbalanced 3.5mm microphone plug. At work the power is so polluted with interference from dodgy equipment that I constantly got clicks in the recording. Finally I went for the BPHS1 from Audio-Technica:
(example clip here).
With normal soundcards the problem is that you won't be able to hear yourself through the headphones. Hearing yourself, however, is highly desirable because time is precious and you want to know instantly if the sound/content is OK or not. Take breathing, for example: the microphone might catch the air puffs from your breathing and make the recording unusable. A second issue is microphone positioning, because all the headsets mentioned here have highly directional microphones. Moving the microphone just a bit up or down will affect the quality of the sound.
For webcasts the TASCAM US-125M is just great. It captures audio from different sources such as a microphone (XLR/jack) and feeds the mixed sound directly back to the headphones. The computer just sees a new recording device at the USB port, and the box doesn't need any drivers. It also has a limiter and manual gain control for the microphone, plus two LEDs which indicate the presence of an input signal and clipping. With this setup it's nearly impossible to screw up the recording.
I tend to edit the audio so that the talking flows without any mmms/erms, and I allow jump cuts in the video because there are no cutaways. For YouTube, sound is generally normalised to 0 dB, which can be done with one click in virtually all editing programs.
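For the curious, normalising can also be sketched outside the editor on the command line; I'm assuming ffmpeg's volumedetect filter here (avconv's filter set may differ):

```shell
# First measure the peak; this prints a line like "max_volume: -6.3 dB":
#   ffmpeg -i take.wav -af volumedetect -f null -
# Normalising to 0 dB then simply means applying the opposite gain:
peak=-6.3                              # example value reported by volumedetect
gain=$(awk "BEGIN{print 0-($peak)}")   # gain that lifts the peak to 0 dB
echo "ffmpeg -i take.wav -af volume=${gain}dB normalised.wav"
```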
For editing I use Sony Vegas because it allows batch processing, so it's possible to prepare many clips on the timeline and then have Vegas render them out as separate files, nicely numbered. See my previous post about batch processing and the script I'm using for this. The screenshot above shows a whole playlist (i.e. lecture), which means that all its clips are in this single timeline. The region markers define the start/end points for every clip, and the batch render then generates separate video files which together form a playlist. All this can run overnight and be uploaded with one drag/drop to YouTube the next morning. It saves an enormous amount of time.
Check out this comparison site for the different versions of Sony Vegas; you need the Pro version to be able to batch render. Apart from the very convenient batch render, virtually any video editor is suitable. In the end you need a program which can trim the start and end of a take, remove sections in the middle, and normalise audio and video.