Saturday, 21 April 2018

Blended learning / Flipped classroom

In this post I show you how I teach the course Digital Signal Processing at the University of Glasgow.

It is taught flipped-classroom style: the students learn by doing extensive lab experiments and watch the videos in their own time.

The video clips are a combination of screen captures (for example a Python/Spyder session) and handwritten notes filmed with a visualiser, often both at once. This photo shows my setup to record the videos:

All this is combined with a lot of labs where the students have to solve real-world problems by coding in Python (object-oriented) and C++. Past labs included:
  • Filtering of audio to make it sound better by faking an SM58 response
  • Reverse engineering the (in-)famous aural exciter
  • Doing OFDM via an audio link (essentially ADSL)
  • Filtering ECG with FIR and IIR filters (see the sketch after this list)
  • Writing a heart-rate detector
  • Removing noise from audio recordings
  • Writing Python / SWIG wrappers for efficient FIR/IIR filters
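To give a flavour of the ECG lab, here is a minimal sketch of a 50 Hz notch FIR filter in Python with scipy (the sampling rate and band edges are illustrative assumptions, not the values from the actual lab handout):

# minimal FIR bandstop sketch removing 50 Hz mains hum from an ECG
# (sampling rate and band edges are assumptions for illustration)
import numpy as np
from scipy.signal import firwin, lfilter

fs = 250                                    # assumed ECG sampling rate in Hz
taps = firwin(201, [45, 55], fs=fs, pass_zero='bandstop')

t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)  # toy signal
clean = lfilter(taps, 1.0, ecg)             # the 50 Hz component is strongly attenuated

The real labs go further, for example by wrapping such filters as efficient C++ classes accessed from Python via SWIG.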
There were no traditional lectures but we had three tutorials prepping them for the written exam.

Storytelling 

Prep

Planning is key, as in any other filming task: the better the planning, the fewer takes are necessary.
Prior to recording I prepare every clip either as a handwritten note which I keep on the side or as a text document outside the capture area - in my case on the right screen. The same applies to programming commands, which I keep pre-prepared on the screen outside the capture area. Clips can run from 3 to 30 minutes but ideally stay short, in the region of 3-5 minutes. However, some mathematical derivations cannot be split up easily and turn out longer. Students usually pick and choose anyway: they might first watch the beginning and then the end, and only later watch the whole derivation.

Playlists

Generally, I arrange clips into playlists for YouTube. A playlist then covers a topic, for example FIR filter design. A playlist is the closest equivalent to a classical lecture, although it really covers a topic rather than a timetabled slot. Every playlist contains a sequence of clips roughly covering these subtopics:
  • motivation
  • theory
  • example illustrating the theory
  • simulation, and finally
  • a proper practical example pointing to the real world.

Feedback

It's easy to make a mistake which goes unnoticed. In my case my colleague in Singapore (thanks, David!) did the checking and spotted quite a lot of mistakes. However, even mistakes in the final clips are not a disaster: YouTube is a social medium. People can comment underneath, you can add a correction underneath the clip, and you can let students come up with ideas for how to make it better.

Technology 

Central technique for the video recordings

Capture a region of the screen that is 1280x720 pixels (HD resolution) and then show within that region whatever needs to be presented.

This content could be a programming environment in Python (Spyder) or the view on a sheet of paper or both combined! The next sections show how to bring this all together.
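As an illustration of the idea, here is a rough Python sketch using the mss and opencv-python packages (the region coordinates are assumptions for a second screen; any screen-capture tool that records a fixed region does the same job, and audio would have to be recorded separately in this sketch):

# grab a fixed 1280x720 region of the screen and write it to an MP4 file
# (region coordinates and file name are assumptions for illustration)
import cv2
import numpy as np
from mss import mss

region = {"left": 1500, "top": 200, "width": 1280, "height": 720}
writer = cv2.VideoWriter("capture.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                         25.0, (region["width"], region["height"]))
with mss() as sct:
    for _ in range(25 * 10):                    # capture ten seconds at 25 fps
        frame = np.array(sct.grab(region))      # BGRA screenshot of the region
        writer.write(cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR))
writer.release()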

Video camera


As a video camera I use the Epson visualiser ELPDC06 or the ELPDC07. It acts as a standard webcam, so all I need is a simple webcam viewer. Under Windows 10 this is the standard Microsoft app called Camera.
I use the full resolution of the camera and recommend doing so to avoid aliasing.
In terms of lighting: I didn't use the built-in light of the visualiser because it was too uneven. Just the room light from my office fluorescent tubes was perfect for this purpose. Generally, very diffuse light works best (i.e. office ceiling lights).
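If you prefer a scriptable viewer instead of the Camera app, a few lines of Python with opencv-python do the same job (device index 0 and the requested resolution are assumptions; adjust them to your setup):

# minimal webcam viewer sketch: show the visualiser feed in a window
import cv2

cap = cv2.VideoCapture(0)                    # assumed device index of the visualiser
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)      # request the full native resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("visualiser", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press q to quit
        break
cap.release()
cv2.destroyAllWindows()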

Clipboard

It is very beneficial to use a clipboard so that the sheets of paper stay in the same place during the recording. This makes it possible to edit out sequences later and to patch up mistakes in the edit, because the drawings won't move. Note that the area around the sheets is also white, which forces the automatic exposure of the camera to expose as dark as possible.

Pen

A pen with slightly wider strokes than a ballpoint is beneficial. I use this pen by Maped:

It's a fountain pen and takes standard black ink cartridges. However, instead of a steel nib it has a ballpoint tip. This won't create nice calligraphic lines but it won't smudge, and it produces perfectly black lines.

See the difference in my video clips: standard ballpoint pen and fountain pen.


Video capture

The central idea is that the capture area on the screen is the video mixer. The only software I need is some simple screen capture software.

I used Snagit (by TechSmith, the makers of Camtasia) in the past but now it's

ZOOM!

  1. Start zoom
  2. Go into settings and select under Video: "optimise video for 3rd party software".
  3. New meeting
  4. Join with computer audio
  5. Now you have a meeting with yourself! :)
  6. Click on "Share" and capture your screen (ideally you have two screens)
  7. Press "Record on this computer" (do not record to the cloud; that will take hours to save)
  8. Now do your presentation.
  9. Use the "Annotate" function to draw the viewer's attention to what you are doing.
  10. End meeting

Audio

Soundcard

With normal soundcards the problem is that you won't be able to hear yourself through the headphones. However, this is very important because time is precious and you want to know instantly whether the sound/content is OK or not. Take breathing, for example: the microphone might catch the air puffs from your breathing and make the recording unusable. A second issue is microphone positioning, because all headsets mentioned here have highly directional microphones. Moving the microphone just a bit up or down will noticeably affect the sound quality.

For webcasts the TASCAM US-125M or the US-2x2 are just great. They allow capturing audio from different sources such as a microphone (XLR/jack) and feed the mixed sound directly back to the headphones. The US-125M is now being superseded by the US-2x2, which also has phantom power so that professional condenser mics can be used.

Headset

I went for the BPHS1 from Audio-Technica:
It has a dynamic microphone which sounds really good compared to standard Skype headsets, and it has a proper balanced XLR plug for the microphone. With the cheaper Skype headsets I had trouble with interference, but this headset produces clean sound no matter how noisy the power is at work.


Editing



For editing I use Sony Vegas because it allows batch processing: it's possible to prepare many clips on the timeline and Vegas then renders them out as separate, nicely numbered files. This means that after a long day of editing I just start the render, go to bed, and the next morning all clips are rendered and ready to be uploaded in one go. See my previous post about batch processing and the script I'm using for this. The screenshot above shows a whole playlist (i.e. lecture), which means that all clips are in this single timeline. The region markers define the start/end points for every clip, and the batch render then generates the separate video files which together form a playlist. All this can run overnight and be uploaded with one drag and drop to YouTube the next morning. It saves an enormous amount of time.
You need the Pro version of Vegas to be able to batch render. Apart from the very convenient batch render, virtually any video editor is suitable. In the end you need a program which can trim the start and end of a take, remove sections in the middle, and normalise audio and video.

Note: This is an updated post of a previous blogpost which I wrote a couple of years ago.

Tuesday, 2 August 2016

Sony Vegas 13 under Windows 10

After I upgraded to Windows 10, Sony Vegas would get stuck at "Initialising ActiveX plug-ins". I wasn't even able to kill the process with the Task Manager.

The solution is the following (some steps might be redundant):
  1. Uninstall Vegas 13
  2. Remove any entries called "Vegas" from the registry with regedit (be careful!)
  3. Re-install Vegas 13 and register it
  4. Right click on its icon and modify Target: "C:\Program Files\Sony\Vegas Pro 13.0\vegas130.exe" /NODXGROVEL
  5. Go to the device manager and disable your graphics card.
  6. Reboot your computer so it boots into plain VGA mode
  7. Right click on the Vegas icon and select "Run this program in compatibility mode for Windows 7"
  8. Start Vegas
  9. Go into the settings and disable GPU support, both in the general settings and for the 2nd monitor
  10. Optional: if you are adventurous, try to re-enable GPU support! ;)
That took me a week to figure out. Hope it helps others!

Saturday, 13 June 2015

Creating a DCP from a Premiere Pro CC project with OpenDCP

Here I describe how to create a DCP master for digital cinema projection, exporting directly from a Premiere Pro project using the Media Encoder.

Sound levels

Digital cinema projection requires an average loudness of -24dB. This is a subjective loudness measure and not a peak level; peaks can go up to -1dB. Premiere has an amazing tool called "Loudness Radar" which shows you the loudness of your film. Just add it to your master bus and press Edit.
Generally cinemas expect 5.1 sound: left, right, center, left surround, right surround and low frequency effects. Make sure that all your sequences are 5.1 sequences and not just stereo. As a rough guide, dialogue goes on the center speaker and music on L/R, so the most basic setup is one center speaker plus L/R. A subwoofer (LFE) channel is also strongly recommended, especially for the music when the L/R speakers are not full range.
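If you want to double-check the loudness outside Premiere, here is a small Python sketch using the pyloudnorm package, which implements the ITU-R BS.1770 loudness measurement that meters like Loudness Radar are based on (the mixdown file name is hypothetical):

# measure the integrated loudness of a mixdown WAV with pyloudnorm
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("myfilm_mixdown.wav")   # hypothetical mixdown file
meter = pyln.Meter(rate)                     # ITU-R BS.1770 meter
loudness = meter.integrated_loudness(data)
print(f"integrated loudness: {loudness:.1f} LUFS (target around -24)")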

Video levels

I assume here that you grade your film in Premiere with a calibrated monitor. I've got the Pantone Huey for the monitor calibration (two Philips 2215 OLED screens). Video levels should of course stay between 0% and 100%; I rarely go over 90%. I'm editing under Windows, which gives me a gamma of 2.2 that seems to be mapped pretty accurately after calibration. That's important for understanding why we need to use 'sRGB complex' later on in OpenDCP (version 0.30).

Export with the media encoder


Video
 
The video is exported as a sequence of TIFF images. For a feature film at 2K that is about 1TB of hard drive space.
  • Frame rate: Make sure that the TIFF export is at your project's frame rate. For example, this film was shot at 23.976fps, so the TIFF export should have the same frame rate.
  • The resolution is 1998x1080. If you shot in HD then Premiere will add transparent bars left and right to pad it up to 1998 pixels. OpenDCP ignores the alpha channel and luckily the underlying transparent pixels are actually black, so the left/right bars cause no problems.
  • Bits per colour is 16 bits.
  • Tick the box "Use Maximum Render Quality".
  • Make sure that you have no transparent sections in the film. The titler, for example, uses a transparent background by default. OpenDCP ignores the alpha channel, which leads to messed-up titles, so make sure that the titles have a black background. The same applies to fades to black done via transparency (which you shouldn't do anyway).
  • After export, check with GIMP that the correct levels have been exported: 0 = black.
  • Take note of the number of TIFF images exported; a feature film has about 130,000 frames. This number will be used for the audio conversion (a small script for the last two checks follows below).
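The last two checks can be scripted; here is a minimal sketch using the Pillow and NumPy packages (the export path and file pattern are assumptions):

# count the exported TIFF frames and check that black is really 0
import glob
import numpy as np
from PIL import Image

frames = sorted(glob.glob("export/myfilm_*.tif"))   # assumed export location
print(len(frames), "frames ->", len(frames) * 2000, "audio samples at 24fps/48kHz")

img = np.array(Image.open(frames[0]))
print("min/max pixel values:", img.min(), img.max())  # min should be 0 for black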
Audio


The sound files are exported as WAV files. You will probably need to create your own export template here. Crucial is that you export the audio streams separately: 48kHz, 24 bits and mono. After export you get files like "myfilm_1.wav", "myfilm_2.wav" etc. Here is the mapping:
1=L
2=R
3=C
4=LFE
5=Ls
6=Rs

If your frame rate is anything other than 24fps you need to change the speed of the audio. In our case the film was shot at 23.976fps but will play at 24fps, so we need to speed up the film by 0.1%. A better approach than applying that figure blindly is to calculate the total number of samples of your film from the number of frames. Since we have exported a numbered TIFF sequence we know exactly how many frames we have. On a DCP the frame rate is 24fps and the audio sample rate is 48kHz, which means we need exactly 2000 samples per frame: the total number of samples must be the number of frames times 2000. Adobe Audition is perfect for this purpose because you can set it to display everything in audio samples.

Use the effect "Stretch and Pitch". The algorithm "Audition" is the one to choose, and then just "Stretch". Converting from 23.976 to 24fps means that the film will play 0.1% faster, so the stretch is 99.9%. Audition might change the duration from a multiple of 2000 to something slightly longer; in my case it decided to add 9 audio samples to the film, which is of course not audible. At the end you should have 6 audio files conformed to a frame rate of 24fps.
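The arithmetic is easy to get wrong, so here it is spelled out in a few lines of Python (the frame count is a hypothetical example):

# numbers for the 23.976fps -> 24fps audio conform
shoot_fps, dcp_fps = 23.976, 24.0
stretch = shoot_fps / dcp_fps * 100
print(f"stretch factor: {stretch:.3f} %")   # ~99.9%: the audio gets slightly shorter

frames = 129_600                            # hypothetical count from the TIFF export
print("target length:", frames * 2000, "samples at 48kHz")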


Creating a DCP compatible drive

Hardware

The DCP is shipped on a 500GB hard drive which sits inside a CRU DataPort DX115 or DX115DC. You can also use a USB 3.0 drive. The right part of the photo shows the actual DX115 carrier which contains the hard drive. I used a pretty standard 500GB hard drive; as long as it's SATA it should be fine. The docking station is an adapter which provides a SATA connection and possibly a USB connection. Connect a SATA cable to one of the free SATA slots inside your computer and run it to the docking station.
Alternatively buy an internal docking station which fits into a standard DVD drive slot. This is actually the "proper" method because this internal docking station is used in the DCP servers and allows hot swapping. It is very elegant in that you can plug in the drive while the server is running and Linux can mount it automatically. Removal of the drive can also happen during operation.
USB 3.0 now seems to be catching on but I haven't tested it with a USB drive.

Linux

Time to reboot into Ubuntu Linux - or, in my case, switch to a separate Linux computer which talks to the Windows box via a Samba share.

Formatting the drive

The drive needs to be formatted as EXT3. This is probably the riskiest step because you need to format the external drive and not your system drive (!). Type dmesg and have a look which device is associated with your external drive:
[  139.684652] sd 7:0:0:0: [sdd] 976773168 512-byte logical blocks: (500 GB/465 GiB)
[  139.684689] sd 7:0:0:0: Attached scsi generic sg4 type 0
[  139.684712] sd 7:0:0:0: [sdd] Write Protect is off
[  139.684716] sd 7:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[  139.684746] sd 7:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  139.705147]  sdd: sdd1
[  139.705360] sd 7:0:0:0: [sdd] Attached SCSI disk
[  273.000188]  sdd: sdd1
[  429.766469]  sdd: sdd1
In this case plugging in the external drive gives you the device /dev/sdd. To be safe you should run "fdisk /dev/sdd", delete all partitions and then create one primary Linux partition. It's menu guided, so not very difficult.
Then you need to format this partition with:
sudo mkfs.ext2 -I 128 -j /dev/sdd1
This creates an EXT3 file system with an inode size of 128 bytes. Some people had difficulties with an inode size of 256 on older servers, but that was in 2008; modern servers should all support the now-default inode size of 256. To be safe, force the formatter to use an inode size of 128 anyway. The option "-j" adds a journal, which turns EXT2 into EXT3. EXT2 should also do because the two are compatible, but it doesn't hurt to create an EXT3 even if it's not required by the DCP standard - a journal is pretty useless on a read-only filesystem anyway.
Mount the newly created drive with:
sudo mount /dev/sdd1 /mnt
and then change the permissions so that you can write to it as a normal user:
sudo chmod a+rwx /mnt

Using openDCP

What I'm describing here has been done under both Ubuntu and Windows with OpenDCP (https://www.opendcp.org/). Download the Ubuntu package and install it with "sudo dpkg -i myopendcppackage.deb"; then you are ready to go. If dpkg moans about missing packages, just start your favourite package manager, install the missing packages and re-run dpkg. For Windows there is a standard installer. I did the JPEG2000 conversion under Windows, writing the output to the Samba share, and then did the rest under Linux.

Creating the JPEG2000 files

The TIFFs need to be converted to JPEG2000 files and into the XYZ colour space. This can be done with the command line tools or with the OpenDCP GUI. I used the GUI, which is shown here.
I just left the settings as they are. Important here is that the frame rate is 24P, that the source colour space is 'sRGB complex' and that there is a conversion to the XYZ colour space. It's important to use 'sRGB complex' in version 0.30.0 because plain 'sRGB' will give you crushed blacks: 'sRGB complex' is the correct conversion between sRGB and XYZ, whereas the 'sRGB' setting is a hack to fix problems when using After Effects for the export.
BTW, don't panic: the preview stays black. Check at the end that all images have been converted. If there are any problems then use the command line tool instead, but in my case everything went fine. If you load the JPEG2000 images into GIMP you will notice that they look hazy and have the wrong contrast. This is normal: they are now in the XYZ colour space.

Creating the MXF files

The actual files which are played on the DCP server are MXF files. There is one for audio and one for video. Again, I used the graphical tools to create the MXF files.

Picture

The MXF containing the picture is created again with the "opendcp" GUI. Select "MXF" and then JPEG2000, SMPTE and 24 frames per second. Specify the directory where all the JPEG2000 files reside and the output directory which should be on the EXT3 formatted external drive.
Both the scanning of the directory and the conversion will take a while, so be patient. Alternatively you could use the command line tool:
opendcp_mxf -i . -o /mnt/myfilm_video.mxf
assuming that the input images are in the current directory and the output MXF goes straight onto the external drive.

Sound


The sound is done in the same way but by selecting WAV in the uppermost drop down box. Select 5.1 surround sound (6 channels). The output MXF should be saved on the EXT3 formatted drive.
Just select all the separate WAV files and then press "Create MXF". This will probably take just a few minutes. Curiously, there is no documentation on how to create the sound MXF from the command line; you need to use the GUI.

Creating the XML files

The final step is to create the XML files which are the "glue" between the MXF files and contain the title and descriptions such as issuer and rating.
For the final step I switched to the command line tool because in the GUI version it's not clear what happens to the actual files. The command line version won't move or delete any of the MXF files but just creates the XML files in the same directory. Run it in the mount directory of the external drive, in my case /mnt (after "cd /mnt"). The command is one line:

opendcp_xml --reel ./a_mugs_game_video.mxf ./a_mugs_game_sound.mxf --title V_MUGS_GAME_SHR-2D-24_F_51-EN_2K_20191105_SMPTE --annotation OpenDCP_CPL --issuer BratwurstAndHaggis --kind feature

Note the string in --title: this has been generated by the OpenDCP GUI. Just click on "Title Generator"; the naming scheme is a standard defined by the DCI.

Don't forget to properly un-mount the drive before unplugging or just shut down the computer. 

To test your DCP, plug the hard drive back in. If you have a USB connection then you should instantly see the files. If you have the internal SATA bay then the drive should show up in dmesg when it is inserted into the computer. Mount it as described above and check that everything is there.

To test whether it plays you can use NeoDCP. The free version plays the first 15 seconds of your film, or you can invest about £100 in a licence, which is worth every penny. It's only available under Windows and you need to install a driver which can mount the EXT3 filesystem.

Finished!

Credits:
Matt Cameron's DCP tutorial
A big thanks to the GFT for letting me try out the DCP, and especially to Barney, who gave me valuable feedback about the sound.

Saturday, 6 June 2015

Premiere 5.1 channel assignments

If you export 5.1 from Premiere as separate WAVs then Premiere annoyingly does not call them L, R, C etc. but just 1.wav, 2.wav, and so on.

Here is the mapping:

1=L
2=R
3=C
4=LFE
5=Ls
6=Rs
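A tiny Python sketch can do the renaming automatically (the file stem "myfilm" and the numbering scheme are assumptions; adjust them to your actual export names):

# rename Premiere's numbered 5.1 exports to their channel names
import os

mapping = {1: "L", 2: "R", 3: "C", 4: "LFE", 5: "Ls", 6: "Rs"}
stem = "myfilm"                          # hypothetical export stem
for idx, name in mapping.items():
    src = f"{stem}_{idx}.wav"
    if os.path.exists(src):
        os.rename(src, f"{stem}_{name}.wav")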

Thursday, 19 December 2013

Producing video lectures

In this post I show you how I recorded the video lectures for Digital Signal Processing, a course at the University of Glasgow taught both in Glasgow and Singapore.

The teaching style comprises short video clips which the students watch in their own time, combined with hands-on lab sessions. This photo shows my final setup (I started with a less elaborate one):

Central idea

Capture a region of the screen that is 1280x720 pixels (HD resolution) and then show within that region whatever needs to be presented.

Storytelling

Generally, I recorded clips and then arranged them into playlists for YouTube. A playlist then covers a topic, for example FIR filter design. A playlist is the closest equivalent to a classical lecture, although it really covers a topic rather than a timetabled slot. Every playlist contains a sequence of clips roughly covering these subtopics:
  • motivation
  • theory
  • example illustrating the theory
  • simulation, and finally
  • a proper practical example pointing to the real world.
Prior to recording I prepare every clip either as a handwritten note which I keep on the side or as a text document outside the capture area - in my case on the left screen. The same applies to programming commands, which I keep pre-prepared on the screen outside the capture area. Clips can run from 3 to 30 minutes but ideally stay short, in the region of 3-5 minutes. However, some mathematical derivations cannot be split up easily and turn out longer. Students usually pick and choose anyway: they might first watch the beginning and then the end, and only later watch the whole derivation.
Planning is key, as in any other filming task: the better the planning, the fewer takes are necessary.
Also getting feedback about the clips is essential. It's easy to make a mistake which goes unnoticed. In my case my colleague in Singapore (thanks, David!) did the checking and spotted quite a lot of mistakes.

Within a clip I introduce the subject with a handwritten outline. For the main content I again use handwritten text on numerous sheets as the main framework, as I would do in a lecture theatre. Usually my talking closely matches what I'm writing: for many students English is not their first language, and this redundancy makes it easier for them to follow (the native speakers often find it a bit boring but they just skip these bits). Then I pull different other windows into the capture area, such as programming environments, applications, oscilloscopes, web pages, text editors, photos etc. Even films or other YouTube clips would be possible but I haven't used them yet.
At the end of a clip I summarise what I've lectured. This follows the standard storytelling.
In terms of takes I tend to stop as soon as I notice a major mistake and then resume a bit earlier, usually from the top of the page and re-draw everything from that page. This makes editing easiest.

Technology

Video camera


As a video camera I use the Epson visualiser ELPDC06. It acts as a standard webcam, so all I need is a simple webcam viewer.
Under Windows a popular webcam viewer is actually called exactly that. Under XP there is no need to download any webcam viewer because one is part of the operating system. Under Linux I use camorama, which is part of the Ubuntu distribution and can be selected in the package/software manager. I use the full resolution of the camera and recommend doing so to avoid aliasing.
In terms of lighting: I didn't use the built-in light of the visualiser because it was too uneven. Just the room light from my office fluorescent tubes was perfect for this purpose. Generally, very diffuse light works best (i.e. office ceiling lights).

Clipboard

It is very beneficial to use a clipboard so that the sheets of paper stay in the same place during the recording. This makes it possible to edit out sequences later and to patch up mistakes in the edit, because the drawings won't move. Note that the area around the sheets is also white, which forces the automatic exposure of the camera to expose as dark as possible.

Pen

It took me a while to find the perfect pen, but then my partner took me to a proper stationery store where I finally found the solution. It's this pen here by Maped:

It's a fountain pen and takes standard black ink cartridges. However, instead of a steel nib it has a ballpoint tip. This won't create nice calligraphic lines (sorry John!) as a proper fountain pen does, but it won't smudge, and it produces perfectly black lines.

See the difference in my video clips: standard ballpoint pen and fountain pen.


Video capture

I had a pretty rough ride finding a working setup for video capture. Usually people would use Camtasia, and I had a go at it because it offers capturing from two video sources: the camera for my drawings and my screen. The first problem was that Camtasia cannot show the external video camera properly while capturing, so it is impossible to know whether the recording is really sharp or the writing legible - but I thought I could manage. However, after a week of testing I gave up on Camtasia, mainly because it is very unstable and crashes on a regular basis. After a crash I had to reboot the computer, and Camtasia would then default to recording without sound, which wasted even more time. The video files generated by Camtasia are huge and the format is proprietary: one needs a special codec for other editing programs, which is free to use but a pain to install. Even with the codec installed one needs to unpack the files to edit them with another editing program. Clearly Camtasia wants to force you into its own video editor, which is again a pain to use and lacks many professional features. Editing the two video files from the overhead camera and the screen capture was time consuming and the options for combining them were limited.

Finally it dawned on me that the capture area on the screen IS my video mixer!! The only software I need is some simple screen capture software. So I would recommend the following:
Use any (free and/or open source) capture program which can capture a region of your screen and store the film as a standard MP4/H264 file.

My personal solution is pretty geeky but you get the gist: I record all my clips with ffmpeg (or its fork, avconv), a command-line video converter which also allows "converting" a region of my screen directly into an MP4 file. It's available for Linux, Windows and Mac. My command line for Linux is:
avconv -f alsa -i pulse -f x11grab -show_region 1 -r 25 -s 1280x720 -i :0.0+1500,200 -threads 0 -b 10M -b:a 256k -ac 1 -strict experimental $1
...which took me quite a while to figure out (x11grab captures the screen, -s 1280x720 sets the capture size, -i :0.0+1500,200 sets the top-left corner of the region, and "-f alsa -i pulse" grabs the audio). For that reason I put it into a script which I just start from the command window, with all these cryptic options in it.
You don't need to go down this route; the bottom line is: use any screen capture program which generates standard MP4 from a region of your screen and which marks the region with a box (see photo above) so that you know what you broadcast and what not. Feel free to send me your favourite screen capture programs for Windows or Mac and I'll put them here on this blog.

Audio

Headset
I started with a standard soundcard and a PC20 headset by Sennheiser (example clip here):

..which is amazingly OK for its price. The microphone can be moved around so that you can adjust the sound quality. For me the main problem was hearing myself on only one ear. I then switched to the Sennheiser PC310, an open stereo headset which also sounds slightly richer (example clip):

The microphone is very directional, which is great because it suppresses more or less completely any sound from outside or from the nearby coffee kitchen. However, the main problem with this headset is that it still uses a standard unbalanced 3.5mm microphone plug. At work the power is so polluted with interference from dodgy equipment that I constantly got clicks in the recording.
Finally I went for the BPHS1 from Audio-Technica:
It has a dynamic microphone which sounds really good compared to the Sennheiser headsets, and it has a proper balanced XLR plug. The only drawback is that the microphone is harder to place so that it won't record any breathing sounds (example clip here).


Soundcard
With normal soundcards the problem is that you won't be able to hear yourself through the headphones. However, this is highly desirable because time is precious and you want to know instantly whether the sound/content is OK or not. Take breathing, for example: the microphone might catch the air puffs from your breathing and make the recording unusable. A second issue is microphone positioning, because all headsets mentioned here have highly directional microphones. Moving the microphone just a bit up or down will noticeably affect the sound quality.

For webcasts the TASCAM US-125M is just great. It allows capturing audio from different sources such as a microphone (XLR/jack) and feeds the mixed sound directly back to the headphones. The computer just sees a new recording device at the USB port, and the box doesn't need any drivers. It also has a limiter and manual gain control for the microphone, plus two LEDs which indicate the presence of an input signal and clipping. With this setup it's nearly impossible to screw up the recording.

Editing

I tend to edit the audio so that the talking flows without any mmms/erms, and I allow jump cuts in the video because there are no cutaways. For YouTube, sound is generally normalised to 0dB, which can be done with one click in virtually all editing programs.
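If you ever need to batch-normalise clips outside the editor, a couple of lines of Python with the pydub package do the same job (the clip name is hypothetical; pydub needs ffmpeg installed):

# peak-normalise a clip's audio to 0 dBFS with pydub
from pydub import AudioSegment

audio = AudioSegment.from_file("clip01.mp4")     # hypothetical clip name
normalised = audio.apply_gain(-audio.max_dBFS)   # raise the peak to 0 dBFS
normalised.export("clip01_norm.wav", format="wav")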
For editing I use Sony Vegas because it allows batch processing: it's possible to prepare many clips on the timeline and Vegas then renders them out as separate, nicely numbered files. See my previous post about batch processing and the script I'm using for this. The screenshot above shows a whole playlist (i.e. lecture), which means that all clips are in this single timeline. The region markers define the start/end points for every clip, and the batch render then generates the separate video files which together form a playlist. All this can run overnight and be uploaded with one drag and drop to YouTube the next morning. It saves an enormous amount of time.
Check out this comparison site for the different versions of Sony Vegas. You need the Pro version to be able to batch render. Apart from the very convenient batch render, virtually any video editor is suitable. In the end you need a program which can trim the start and end of a take, remove sections in the middle, and normalise audio and video.

Tuesday, 22 October 2013

Sony Vegas Batch render

At the moment I'm creating a lecture series for YouTube which consists of hundreds of clips. Here the batch render script shipped with Sony Vegas helps: it renders the regions on the timeline into separate files, which is a great time saver. I just start the script in "render regions" mode and it generates all the files while I'm away.

Unfortunately the original script shipped with Vegas creates filenames containing just the index numbers of the regions, not their names. I've modified the script so that the names of the regions are used as well as the index numbers:
http://www.berndporr.me.uk/blogspot/Batch Render2.cs

Sunday, 10 March 2013

AF101 settings for cinematic shooting

 Still from the horror short CUT FREE starring Vasso Georgiadou

Hi DOPs,

these are my favourite settings for noise-free cinematic images (scene file) on my AF101:

Rec PH 1080/25P
VFR off
Detail -6
VDetail -6
Det. C -6
Chroma level -6
Chroma phase -2
Colour temp Ach 0
Colour temp Bch 0
Master ped -2
A iris 0
DRS off
Gamma Cinelike V
Matrix Norm1
Skintone DTL off

Most important is to set all detail settings to -6 to avoid noise, master ped to -2 (0 gives a black level higher than needed), gamma to Cinelike V and the matrix to Norm1. Chroma phase -2 removes a slight pinkish tint, but that's a matter of taste.