This is the first draft of the workflow that kinoraw.net proposes for editing Elphel footage with Blender:
 
  
* Recording and RAW dumping
 
* Creation of proxies for editing
 
* Editing and exporting to JPG
 
* Conversion to EXR and colour grading
 
 
At the end of this post you'll find links to the scripts required to test the method (rename the files from *.py.txt to *.py in order to use them).
 
 
    If you're afraid of making mistakes in the terminal, at the end there are a couple of videos that show how it's done :-)
 
 
 
Also, if you're interested in trying this with raw footage from a RED One or a Blackmagic camera, export it to DNGs and start reading at step 4.
 
 
 
= Recording - raw dump =
 
 
 
== Configuration ==
 
 
 
At the moment we use the camera's own web interface (PHP) to change settings, while Biel is already working on a Python script for managing several configurations.
 
...In development...
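
As a taste of what such a script could automate: the camera's parameters can already be changed over plain HTTP through its PHP interface. A minimal sketch, assuming the camera answers at 192.168.0.9 and using the exposure parameter as an illustration:

    # Set the exposure through the camera's PHP interface (the address and
    # the parameter/value shown are examples; adjust them to your setup).
    wget -q -O - "http://192.168.0.9/parsedit.php?EXPOS=10000"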
 
 
== Recording ==
 
 
Recording is done in the JP4 format using a series of scripts based on the ones published by [Flavio](https://szaszak.wordpress.com/linux/elphel-as-a-digital-cinema-camera/).
 
 
We have to copy them into the camera (in /usr/local/scripts); one way to upload them is shown below. There are four scripts residing inside the camera that we will use in order to record.
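
A minimal upload sketch, assuming the camera accepts FTP logins at 192.168.0.9 (the address and credentials are examples; FileZilla works just as well):

    # Upload the four recording scripts into /usr/local/scripts on the camera.
    for s in mountd.sh camogm_start.sh record_camogm.sh stopRecording_camogm.sh; do
        curl -T "$s" "ftp://root:pass@192.168.0.9/usr/local/scripts/"
    done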
 
 
The first step is to open a terminal, telnet to the camera, and send this command:
 
 
    sh /usr/local/scripts/mountd.sh
 
 
This script ensures that the internal disk in the camera gets mounted at /var/hdd.
 
 
    sh /usr/local/scripts/camogm_start.sh
 
 
When this script is called, some internal PHP processes in the camera, including the auto-exposure process, get killed, and the camogm program then becomes ready to record. The telnet window where we call these scripts can remain open to receive feedback on the recording, so we can see whether it is going well or whether frames are being dropped.
 
 
Next, from a new terminal (Ctrl+Alt+T), we open another telnet session to the camera and send this other script:
 
 
    sh /usr/local/scripts/record_camogm.sh folder
 
 
In that last script, the first argument ("folder") is the name of the folder where the recorded footage will be kept, as well as the prefix those videos will have in their filenames.
 
   
 
Back in the first terminal, which remains open, we can check the result of calling that script. When we want to stop the recording process, we call this last script from the second terminal:
 
 
    sh /usr/local/scripts/stopRecording_camogm.sh
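
Putting it all together, a complete recording session looks roughly like this (the camera address and the folder name are examples):

    # Terminal 1: mount the disk and put the camera into recording mode;
    # leave this session open to watch the recording feedback.
    telnet 192.168.0.9
    sh /usr/local/scripts/mountd.sh
    sh /usr/local/scripts/camogm_start.sh

    # Terminal 2 (Ctrl+Alt+T): start recording into /var/hdd/test_shoot,
    # then stop it when you are done.
    telnet 192.168.0.9
    sh /usr/local/scripts/record_camogm.sh test_shoot
    sh /usr/local/scripts/stopRecording_camogm.sh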
 
 
Because displaying the stream depends on the capacity of the computer we're using, we have several scripts at our disposal to monitor the camera's stream properly. At the moment, when we record RGB we use mplayer, but for JP4 we use a GStreamer script that automatically detects the camera's resolution.
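
For the RGB case, a minimal monitoring call might look like this (assuming the camera streams over RTSP on its default port; the address is an example):

    # Watch the camera's live stream with mplayer.
    mplayer rtsp://192.168.0.9:554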
 
 
We have a small Kivy interface in development that will integrate all these scripts into a single, easy-to-use interface.
 
 
== Raw dump ==
 
 
We are currently using FileZilla to dump the content from the camera, but there is also a new dumping script that creates a list of files and verifies the whole process with MD5 checksums.
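
A minimal sketch of such a verified dump, assuming the camera's /var/hdd is reachable over FTP at 192.168.0.9 (the address, the credentials and camogm's default .mov extension are examples, not a reference):

    #!/bin/bash
    # Mirror the recordings from the camera disk to a local folder.
    CAM=192.168.0.9
    DEST=./footage
    mkdir -p "$DEST"
    wget -r -nH --cut-dirs=2 "ftp://root:pass@$CAM/var/hdd/" -P "$DEST"
    # Write a checksum list next to the footage; any later copy can be
    # verified with `md5sum -c dump.md5`.
    ( cd "$DEST" && find . -type f -name '*.mov' -exec md5sum {} + > dump.md5 )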
 
 
= Proxy creation with GStreamer =
 
 
Once the raw footage is dumped, we call a Nautilus script to generate the proxies. The script runs a GStreamer command, from Nautilus or the terminal, to convert the JP4 video into M-JPEG-encoded RGB video with the resolution reduced to 25% (there is also the possibility of generating proxies at 100%). The proxy is then kept in an MKV container (Matroska video).
 
 
This is the Nautilus script:
 
 
    #!/bin/bash
    # Nautilus script: convert each selected JP4 file into a 25% M-JPEG
    # proxy inside an MKV (Matroska) container.
    fpaths=`echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" | sort`
    for file in $fpaths
    do
      if [ -f "$file" ]; then
        base=${file%.*}       # path without extension
        ext=${file##*.}       # file extension
        basename=${base##*/}  # filename without path or extension
        gst-launch-0.10 filesrc location="$file" ! decodebin ! ffmpegcolorspace ! queue \
          ! jp462bayer ! "video/x-raw-bayer, width=(int)1920, height=(int)1088, format=(string)grbg" \
          ! queue ! bayer2rgb2 method=1 ! queue ! ffmpegcolorspace ! videoscale \
          ! "video/x-raw-yuv, width=480, height=272" ! jpegenc ! matroskamux \
          ! filesink location="$base-25.mkv"
      fi
    done
 
 
TO DO: automate the script so that it detects the resolution of the video files to convert and runs recursively over a folder; a sketch of this is shown below.
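
A minimal sketch of that automation, assuming mediainfo is installed and reports the JP4 frame size correctly (the pipeline is the same one used in the Nautilus script; camogm's default .mov extension is assumed):

    #!/bin/bash
    # Sketch: recursively build 25% proxies for every .mov under a folder.
    find "${1:-.}" -type f -iname '*.mov' | while read -r file; do
        base=${file%.*}
        # Read the frame size from the container (assumption: mediainfo
        # reports the JP4 dimensions that the Bayer caps need).
        w=$(mediainfo --Inform='Video;%Width%' "$file")
        h=$(mediainfo --Inform='Video;%Height%' "$file")
        gst-launch-0.10 filesrc location="$file" ! decodebin ! ffmpegcolorspace ! queue \
            ! jp462bayer ! "video/x-raw-bayer, width=(int)$w, height=(int)$h, format=(string)grbg" \
            ! queue ! bayer2rgb2 method=1 ! queue ! ffmpegcolorspace ! videoscale \
            ! "video/x-raw-yuv, width=$((w / 4)), height=$((h / 4))" \
            ! jpegenc ! matroskamux ! filesink location="$base-25.mkv"
    done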
 
 
= Blender editing - JPG exporting =
 
 
After the proxies have been generated, we can start editing with Blender. For this task we can use the following scripts:
 
 
* [addon jump to cut](http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Sequencer/Jump_to_cut)
 
* [addon extra sequencer tools](http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Sequencer/Extra_Sequencer_Actions)
 
 
The second script includes a panel called _recursive loader_ that lets us import into the VSE all the raw video files hosted in a directory (including its sub-directories) and automatically configure the proxies generated in the previous step.
 
 
Since JP4 is JPEG-encoded, if at the final rendering stage we set Blender's output format to JPEG at 100% quality, we obtain exactly the selection of JP4 frames that need to be developed.
 
 
= EXR conversion - Colour grading in Blender =
 
 
In order to convert the JP4 files to EXR, we can use the Python script linked below. For the script to work, we need dcraw, exiftool and qtpfsgui installed, as well as the JP4 to DNG converter elphel_dng (which you can find here, in JP4tools). It's also necessary to install GNU Parallel, which lets us run the conversion in a multi-process fashion.
 
 
At the moment I'm using qtpfsgui for simplicity, but I intend to try creating the EXRs directly with pfstools.
 
 
What this script does is pretty simple: first it converts every image to DNG using elphel_dng; then it assigns the EXIF metadata that the image lacks and that is essential for the conversion that qtpfsgui performs afterwards:
 
 
    -ISO=100 -FocalLength=4.5 -ExposureTime=0.04 -ApertureValue=2.0
 
 
These metadata values are not known by the camera, so I assign arbitrary values to them.
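
Per frame, the script runs something along these lines (the filenames and the elphel_dng argument order shown here are illustrative, not a reference):

    # Convert one JP4 frame to DNG, then attach the missing EXIF metadata.
    elphel_dng 100 frame_00001.jp4 frame_00001.dng
    exiftool -ISO=100 -FocalLength=4.5 -ExposureTime=0.04 \
             -ApertureValue=2.0 frame_00001.dng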
 
 
[jp4-elphel-exr.py](http://www.kinoraw.net/sites/default/files/jp4-elphel-exr.py.txt)
 
 
If we want to start Blender's colour grading workflow from a sequence of DNGs, we can do it with this modified version of the script (basically, the lines that run elphel_dng and recover the metadata with exiftool have been removed):
 
 
[dng-elphel-exr.py](http://www.kinoraw.net/sites/default/files/dng-elphel-exr.py.txt)
 
 
To call this script, copy it into the directory containing the DNGs and, from inside that directory, type the following command:
 
 
    ls *.dng | parallel -j 8 python dng-elphel-exr.py {}
 
 
Here you can substitute the "8" with the actual number of __cores__ your __CPU__ has.
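
If you don't know the core count offhand, the shell can fill it in for you (nproc is part of GNU coreutils):

    ls *.dng | parallel -j $(nproc) python dng-elphel-exr.py {}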
 
 
You have to bear in mind this annotation that resides inside the code:
 
       
 
    # dcraw -T -4 -q 1 -a -m 2
    # |
    # `-> you have to configure this inside qtpfsgui preferences
 
 
If we want to control the way our DNGs are developed, there are several options in the qtpfsgui preferences, under the RAW import options tab.
 
 
Once the frames have been converted to EXR, we can come back to Blender, conform our project with the EXRs (which won't show any image in the VSE) and start colour grading in the node compositor with the help of this script, which is still a work in progress:
 
 
* [script/addon edit strip with compositor](http://blenderartists.org/forum/showthread.php?221567-Edit-strip-with-compositor/page8)
 
 
Caution! The script composite_strip_0_4_x.py from that forum is a collective work in progress and some of its tasks are still somewhat unstable; handle it with care.
 
 
 
= Howto Videos =
 
 
Here is a video showing the workflow:
 
 
http://player.vimeo.com/video/50247207
 
 
And here is a timelapse of the whole process:
 
 
http://player.vimeo.com/video/50247217
 
 
= Test footage =
 
 
And finally, a couple of tests, EXT/DAY and INT/NIGHT, showing what the Elphel can capture when recording in this particular raw format, JP4:
 
 
http://player.vimeo.com/video/50218814
 
 
http://player.vimeo.com/video/50218807
 
 
 
 
More info:
 
http://www.kinoraw.net
 
 
 
Source code contributors for Blender:
 
 
Turi Scandurra, peddie, TMW and Björn Sonnenschein, on [Blenderartists](http://blenderartists.org/).
 
 
Source code contributors for the Elphel camera:
 
 
Biel, Sebastian and Flavio from the [Apertus](http://apertus.org) community.
 
Andrey and Oleg from [Elphel Inc.](http://www.elphel.com)
 
