https://wiki.elphel.com/api.php?action=feedcontributions&user=Polto&feedformat=atomElphelWiki - User contributions [en]2024-03-28T16:54:49ZUser contributionsMediaWiki 1.28.0https://wiki.elphel.com/index.php?title=Event_logger&diff=13743Event logger2015-05-30T02:51:01Z<p>Polto: /* Syncing with an external device */</p>
<hr />
<div>===Features===<br />
* ~1 microsecond precision<br />
* Up to 4 source channels: <br />
** IMU <br />
** GPS <br />
** Image Acquisition<br />
** External General Purpose Input (e.g., odometer - 3..5V pulses)<br />
<br />
===Description===<br />
<br />
The FPGA-based Event Logger uses a local clock for time-stamping, so each log entry (IMU, GPS, Image Acquisition and External Input) is recorded with timing information.<br />
<br />
<br />
In a single camera, each acquired image has a timestamp in its header (Exif). The log entry for the image has this timestamp recorded in the logger's (local) time.<br />
<br />
Multiple cameras (e.g., Eyesis4&pi;) are synchronized by the master camera to sub-microsecond precision, and each acquired image also has the master timestamp in its Exif header. The log entries for images (if logged in a camera other than the master, i.e. with a different local clock) have two fields - the master timestamp (same as in the image Exif) and the local timestamp (same clock as used for the IMU), so it is easy to match images with inertial data.<br />
<br />
<br />
A typical log record has the following format:<br />
<font size="2"><br />
[LocalTimeStamp] [SensorData]<br />
Examples of parsed records:<br />
[LocalTimeStamp]: IMU: [wX] [wY] [wZ] [dAngleX] [dAngleY] [dAngleZ] [accelX] [accelY] [accelZ] [veloX] [veloY] [veloZ] [temperature]<br />
[LocalTimeStamp]: GPS: [NMEA sentence]<br />
[LocalTimeStamp]: SRC: [MasterTimeStamp]<br />
</font><br />
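The record format above is straightforward to consume programmatically. The sketch below is illustrative only (the helper names and whitespace handling are our assumptions, not part of the camera software): it splits a parsed-log line into timestamp, source and payload, and shows how SRC records pair local timestamps with master timestamps so IMU/GPS data can be aligned with image Exif times.<br />

```python
# Hypothetical helpers for parsed event-logger records of the form
# "[LocalTimeStamp]: TYPE: payload" shown above. Field names follow the
# wiki text; the real raw/on-disk format may differ between releases.

def parse_record(line):
    """Split one parsed-log line into (local_timestamp, source, payload)."""
    stamp, rest = line.split(": ", 1)
    source, payload = rest.split(": ", 1)
    return float(stamp.strip("[]")), source, payload.strip()

def match_master(records):
    """Map local timestamps to master timestamps using SRC records, so
    IMU/GPS entries (local clock) can be aligned with image Exif times
    (master clock)."""
    return {ts: float(p.strip("[]")) for ts, src, p in records if src == "SRC"}
```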
===Syncing with an external device===<br />
An external device (e.g., an odometer) can be connected to a camera / camera rig.<br />
<br />
The device has to send HTTP requests to be logged to the camera at http://192.168.0.221/imu_setup.php?msg=message_to_log (the message is limited to 56 bytes) and 3..5V pulses on the two middle wires of the J15 connector. '''!! Make sure only the two middle wires are connected and the external ones are not: the camera's trigger input is optoisolated, but the trigger output is not. !!'''<br />
<br />
For testing purposes we used an [http://www.arduino.cc/en/Guide/ArduinoYun Arduino Yún] with [https://www.adafruit.com/products/1272 Adafruit Ultimate GPS Logger Shield]. The GPS was used only to send a PPS to the camera's J15 port while Arduino Yún was running a simple script such as:<br />
echo "" > wifi.log ; i=0; while true; do wget http://192.168.0.221/imu_setup.php?msg=$i -O /dev/null -o /dev/null; echo $i ; echo $i >> wifi.log ; iwlist wlan0 scan >> wifi.log ; i=`expr $i + 1`; done<br />
<br />
This script logs an incrementing number both to the camera log and to the Yún's file system; WiFi scan results are also recorded to the Yún's log. So later both log files can be synchronized in post-processing.<br />
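The HTTP half of this synchronization can be scripted from any language. The sketch below is a non-authoritative Python equivalent of the wget loop: only the camera IP, the imu_setup.php endpoint and the 56-byte message limit come from the text above; the helper name and the limit check are our own.<br />

```python
from urllib.parse import quote
from urllib.request import urlopen  # used only in the commented example

CAMERA_IP = "192.168.0.221"   # camera address from the text above
MSG_LIMIT = 56                # documented message size limit, in bytes

def log_url(msg, camera=CAMERA_IP):
    """Build the logging request URL, rejecting over-long messages."""
    if len(msg.encode("utf-8")) > MSG_LIMIT:
        raise ValueError("message exceeds %d bytes" % MSG_LIMIT)
    return "http://%s/imu_setup.php?msg=%s" % (camera, quote(msg))

# for i in range(1000):
#     urlopen(log_url(str(i)))  # would log an incrementing counter
```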
<br />
===Notes===<br />
* GPS data is written to both the event log and the image header (Exif).<br />
* IMU data is written only to the event log.<br />
* Example IMU ([http://www.analog.com/en/mems-sensors/imu/adis16375/products/product.html ADIS16375]) sample rate is 2460Hz.<br />
* Example GPS receiver (Garmin 18x serial) sample rate is 5Hz in NMEA or another configured format.<br />
<br />
===Examples===<br />
====Raw====<br />
Raw '''*.log''' files are found [http://community.elphel.com/files/imu/ here]<br />
====Parsed==== <br />
[http://community.elphel.com/files/imu/parsed_log_example.txt parsed_log_example.txt] (41.3MB) - [http://community.elphel.com/files/imu/ here]<br />
<br />
====Tools for parsing logs====<br />
[http://community.elphel.com/files/imu/ Download] one of the raw logs.<br />
* '''PHP:''' Download [http://community.elphel.com/files/imu/read_imu_log.php_txt read_imu_log.php_txt], rename it to '''*.php''' and run on a PC with PHP installed.<br />
* '''Java:''' Download [http://community.elphel.com/files/imu/java/ IMUDataProcessing] project for [http://www.eclipse.org/ Eclipse].<br />
<br />
[[Category:Event_logger]]<br />
<br />
<!--==Links==<br />
* http://blog.elphel.com/2011/10/elphel-eyesis-4%CF%80-preassembly-stage/--></div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_Software_Kit_for_Ubuntu&diff=13333Elphel Software Kit for Ubuntu2014-02-01T20:27:19Z<p>Polto: /* ImageJ and Elphel plugins for imageJ */</p>
<hr />
<div>=About=<br />
<br />
This page is a simple howto for running Elphel software on (K)Ubuntu GNU/Linux.<br />
<br />
You can download this GNU/Linux distribution freely from http://www.kubuntu.org/<br />
<br />
A <font size='3'>'''[http://community.elphel.com/files/live-dvd/ Live ISO image]'''</font> with almost all the packages pre-installed (except [[Elphel_Software_Kit_for_Ubuntu#Installation_of_the_source_code_of_Elphel_camera_software|Elphel camera source]] code and [[Elphel_Software_Kit_for_Ubuntu#Installing_Xilinx_WebPack_.28to_be_moved_to_a_separate_page.2C_just_a_link_here.29|Xilinx ISE WebPack]]) is available for downloading from [http://community.elphel.com/files/live-dvd/ here]:<br />
* [http://community.elphel.com/files/live-dvd/kubuntu-elphel-3.iso kubuntu-elphel-3.iso] is based on Kubuntu 9.10.<br />
* [http://community.elphel.com/files/live-dvd/kubuntu-elphel-4.iso kubuntu-elphel-4.iso] is based on Kubuntu 12.04 LTS.<br />
<br />
<br />
=If you are new to GNU / Linux=<br />
Many forums and wikis are available in many languages to help you install and use Ubuntu, e.g. http://www.google.com/search?q=forum+ubuntu (you can add "&hl=fr" or any language code to the URL).<br />
<br />
Most instructions below are commands that you need to enter in the terminal window. For the lines that do not end with the "\" sign you just copy them one-by-one and paste into the terminal window (in KDE it is Konsole in the "System" menu). For pasting you '''can not''' use <Ctrl-V> - you need to '''right-click in the terminal window and select "Paste"''' from the drop-down context menu. Alternatively you can use '''the middle mouse button''' to both copy (drag while the middle button is pressed) and paste - click it in the console window.<br />
<br />
Character "'''\'''" at the end of the line means continuation, so you can copy the whole block of text where each line but the very last ends with "\" and paste them together.<br />
<br />
Many of the commands start with "'''sudo'''" - the first time, the system will ask you for your user password, which you enter without any stars shown (provided you have administrative privileges).<br />
<br />
If you run into problems it is very useful to copy the error message that the system outputs (omitting anything specific to your particular installation - like user directory names) and paste it into the search box of your browser.<br />
<br />
=User software=<br />
<br />
Some software needs to be patched and recompiled even if it exists in the Ubuntu software repositories; some software is not yet packaged in Ubuntu, so you have to compile it from source as well. We try to push our software patches to the mainstream applications, but it takes time and is not always possible. For Ubuntu users some packages can be downloaded and upgraded using our [https://launchpad.net/~elphel/+archive/ppa ppa on launchpad].<br />
<br />
sudo add-apt-repository ppa:elphel/ppa<br />
sudo apt-get update<br />
<br />
==Mplayer - Ubuntu 9.10 or later==<br />
'''Kubuntu 9.10 includes MPlayer that is working with Elphel camera "out of the box"'''<br />
So for (K)Ubuntu 9.10 you just need to install<br />
sudo apt-get install mplayer-nogui mplayer gecko-mediaplayer<br />
<br />
===Mplayer - installation for (K)Ubuntu older than 9.10 release ===<br />
<br />
With previous versions MPlayer has to be patched and recompiled; the following instructions document how to do it on a (K)Ubuntu or Debian based workstation.<br />
<br />
Install gecko-mediaplayer before compiling/installing MPlayer; if you do it later it will install a non-patched version of MPlayer.<br />
<br />
sudo apt-get install gecko-mediaplayer<br />
<br />
<br />
First install some compilation dependencies, mainly libraries...<br />
<br />
<!-- sudo apt-get install build-essential debhelper libncurses5-dev libesd0-dev liblircclient-dev libgtk2.0-dev \<br />
libvorbis-dev libsdl1.2-dev sharutils libasound2-dev liblzo-dev gawk libjpeg62-dev libaudiofile-dev \<br />
libsmbclient-dev libxv-dev libpng3-dev libgif-dev libcdparanoia0-dev libxvidcore4-dev libdv-dev \<br />
liblivemedia-dev libfreetype6-dev em8300-headers libgl1-mesa-dev libdvdread-dev libdts-dev libtheora-dev \<br />
libglu-dev libartsc0-dev libfontconfig-dev libxxf86dga-dev libxinerama-dev libxxf86vm-dev \<br />
libxvmc-dev libggi2-dev libmpcdec-dev libspeex-dev libfribidi-dev libfaac-dev libaa1-dev libcaca-dev \<br />
libx264-dev libpulse-dev libmad0-dev ladspa-sdk libdbus-glib-1-dev libaudio-dev liblzo2-dev libdvdnav-dev \<br />
libopenal-dev libjack-dev libtwolame-dev libsvga1-dev libenca-dev libmp3lame-dev<br />
<br />
'''If you are under Ubuntu 8.10 (Intrepid) replace liblame-dev at the end by libmp3lame-dev''' --><br />
<br />
sudo apt-get install build-essential debhelper libncurses5-dev libesd0-dev liblircclient-dev libgtk2.0-dev \<br />
libvorbis-dev libsdl1.2-dev sharutils libasound2-dev gawk libjpeg62-dev libaudiofile-dev \<br />
libsmbclient-dev libxv-dev libpng12-dev libgif-dev libcdparanoia-dev libdv4-dev \<br />
liblivemedia-dev libfreetype6-dev libgl1-mesa-dev libdvdread-dev libdts-dev libtheora-dev \<br />
libglu1-mesa-dev libfontconfig-dev libxxf86dga-dev libxinerama-dev libxxf86vm-dev \<br />
libxvmc-dev libggi2-dev libmpcdec-dev libspeex-dev libfribidi-dev libfaac-dev libaa1-dev libcaca-dev \<br />
libx264-dev libpulse-dev libmad0-dev ladspa-sdk libdbus-glib-1-dev libaudio-dev liblzo2-dev libdvdnav-dev \<br />
libopenal-dev libjack-dev libtwolame-dev libsvga1-dev libenca-dev libmp3lame-dev<br />
<br />
<br />
<!--<br />
Get the MPlayer ubuntu source package:<br />
apt-get source mplayer<br />
<br />
patch the sources and compile:<br />
cd mplayer-1.0~rc2/<br />
sed s/\#define\ MAX_RTP_FRAME_SIZE\ 50000/\#define\ MAX_RTP_FRAME_SIZE\ 5000000/g \<br />
libmpdemux/demux_rtp.cpp > libmpdemux/demux_rtp.cpp_<br />
mv libmpdemux/demux_rtp.cpp_ libmpdemux/demux_rtp.cpp<br />
sudo dpkg-buildpackage <br />
cd ..<br />
<br />
install mplayer package:<br />
sudo dpkg --install mplayer_1.0~rc2-0ubuntu*.deb<br />
--><br />
Current MPlayer code is capable of working with the full resolution video produced by Elphel cameras. That is not yet true for the MPlayer packages in the Ubuntu repositories, so you'll have to obtain the source code from MPlayer's recommended source - their Subversion (SVN) repository. First you need to install Subversion itself:<br />
sudo apt-get install subversion<br />
Now create a download directory and get MPlayer source:<br />
mkdir download; cd download<br />
svn checkout svn://svn.mplayerhq.hu/mplayer/trunk mplayer<br />
Configure, compile and prepare Debian package from that source (will take some time):<br />
cd mplayer<br />
sudo dpkg-buildpackage <br />
Occasionally dpkg-buildpackage fails to build; try the common ./configure, make, make install way then.<br />
<br />
Now uninstall the default Ubuntu's mplayer and mencoder & install the created package:<br />
sudo apt-get remove mplayer mencoder mplayer-nogui<br />
cd ..<br />
sudo dpkg --install mplayer_1.0svn*.deb<br />
<br />
===MPlayer - testing with Elphel camera ===<br />
You should now be able to play video with frames of up to 5MB (the highest quality 5MPix images are around 1MB) as a multicast or unicast video stream (the streamer in the camera should be ENABLED).<br />
mplayer rtsp://192.168.0.9:554 -vo x11 -fs -zoom<br />
<br />
''Update 10/09/2009: In (K)Ubuntu 9.10 (Karmic Koala) repository the '''50,000 bytes limit on the frame size''' is fixed, but unfortunately the [https://bugs.launchpad.net/ubuntu/+source/mplayer/+bug/296488 other one -'''frame width limit of 2048 pixels'''] (submitted to MPlayer SVN on May, 5, 2009) - is not''--[[User:Andrey.filippov|Andrey.filippov]] 19:03, 9 October 2009 (CDT)<br />
<br />
The first (50,000) makes your picture break after the first 50,000 bytes (only the top is shown), the second (current for 9.10) makes MPlayer report a fatal error. So you still have to use MPlayer SVN to get the full resolution from Elphel cameras.<br />
<br />
Additionally, to make MPlayer work inside the web page you need to specify video output as "x11" in the [http://howto.wikia.com/wiki/Howto_configure_MPlayer MPlayer config file] - add a line<br />
vo="x11"<br />
to the ~/.mplayer/config file.<br />
<br />
=For developers=<br />
''The following was written for (K)Ubuntu 9.10 and earlier; it also works with 10.04, please see revision-specific notes.''<br />
<br />
==Adding universe and multiverse sources==<br />
Please follow this howto for adding universe and multiverse sources. <br />
<br />
https://help.ubuntu.com/community/Repositories/Ubuntu<br />
<br />
or <br />
<br />
https://help.ubuntu.com/community/Repositories/Kubuntu<br />
<br />
==Install needed packages==<br />
Minimal packages:<br />
sudo apt-get install cvs build-essential autoconf flex byacc bison libglib2.0-dev tcl gettext libncurses5-dev patch zlib1g-dev nfs-kernel-server bash xutils-dev<br />
Suggested packages:<br />
sudo apt-get install kinfocenter minicom firefox graphviz doxygen ctags cervisia php5 php5-cli xchat ssh kompare git-core<br />
<br />
==Configure your NFS server==<br />
<br />
Let's say you want to configure an NFS server on your machine and your IP address is '''192.168.0.15'''.<br />
<br />
<!--Edit /etc/exports file with your favorite editor. Here I use nano.<br />
sudo nano -w /etc/exports<br />
alternatively you may edit the same file with kate<br />
Those who know how to use nano will figure that out themselves --[[User:Andrey.filippov|Andrey.filippov]] 18:09, 23 May 2009 (CDT)<br />
--><br />
Modify the configuration file:<br />
kdesudo kate /etc/exports<br />
Add at the end of the file:<br />
/nfs 192.168.0.0/255.255.0.0(rw,sync,no_root_squash)<br />
save the file.<br />
<br />
If it does not yet exist, make the /nfs directory and make it world-writable so that logs can be written from the camera.<br />
sudo mkdir /nfs<br />
sudo chmod 777 -R /nfs<br />
<br />
And finally export the filesystem.<br />
sudo exportfs -a<br />
<br />
<br />
== Installation of GCC Compiler kit for Axis ETRAX processor ==<br />
=== Automatic Installation ((K)Ubuntu 11.04 Natty Narwhal)===<br />
1. <br />
sudo add-apt-repository ppa:elphel/ppa<br />
<br />
(The line above is equivalent to adding the following lines to your sources.list ('''sudo kate /etc/apt/sources.list''')):<br />
deb http://ppa.launchpad.net/elphel/ppa/ubuntu natty main <br />
deb-src http://ppa.launchpad.net/elphel/ppa/ubuntu natty main <br />
<br />
2.<br />
sudo apt-get update<br />
sudo apt-get install cris-dist<br />
<br />
<br />
=== Manual Installation ===<br />
====Downloading and unpacking gcc-cris====<br />
Download and install Cris-GCC compiler. It is needed to compile C and C++ programs for the CPU used in Elphel cameras - [http://en.wikipedia.org/wiki/ETRAX_CRIS Axis ETRAX FS] :<br />
mkdir -p ~/Downloads/axis ; cd ~/Downloads/axis<br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-linux-headers-1.64.tar.gz <br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-linux-headersv32-1.64.tar.gz <br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-glibc-1.64.tar.gz <br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-1.64.tar.gz<br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-1.64-1--1.64-2.patch<br />
tar zxvf cris-dist-1.64.tar.gz<br />
cd cris-dist-1.64/<br />
tar zxvf ../cris-dist-linux-headers-1.64.tar.gz <br />
tar zxvf ../cris-dist-linux-headersv32-1.64.tar.gz <br />
tar zxvf ../cris-dist-glibc-1.64.tar.gz<br />
patch -p0 < ../cris-dist-1.64-1--1.64-2.patch<br />
<br />
==== Compiling gcc-cris ====<br />
Now you may proceed with compiling gcc-cris (takes some time):<br />
<br />
sudo ./install-cris-tools<br />
answer by default (enter, enter, ...)<br />
<br />
<!--<br />
Don't forget to export the path to the cris-compiler - the defaul location is /usr/local/cris, as example<br />
<br />
tobias@MoonbaseAlphaOne:~$ export PATH=$PATH:/usr/local/cris/bin<br />
<br />
If everything worked out well, you can check the compiler version with<br />
<br />
gcc-cris --version<br />
<br />
Which should result in an output like this one (example, might vary with version)<br />
<br />
cris-axis-elf-gcc (GCC) 3.2.1 Axis release R64/1.64<br />
Copyright (C) 2002 Free Software Foundation, Inc.<br />
This is free software; see the source for copying conditions. There is NO<br />
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.<br />
--><br />
<br />
== Build Elphel Software==<br />
Get the [http://sourceforge.net/projects/elphel/files/elphel353-8/install-8.0-fromcvs/elphel353_80_install_from_cvs.sh.tar.gz/download script]. Unpack & Launch.<br />
<br />
<br />
== Installation of Kdevelop 3.5 ==<br />
''note: Newer KDevelop 4 does not work with Elphel software''<br />
==== pre (K)Ubuntu 9.10 ====<br />
sudo apt-get install kdevelop<br />
<br />
==== (K)Ubuntu 9.10 (Karmic), 10.4 (Lucid) ====<br />
see [[KDevelop#Installation_of_KDevelop_3.5_on_.28K.29Ubuntu_9.10|Installation of KDevelop 3.5 on (K)Ubuntu 9.10]]<br />
<br />
== Install Icarus Verilog and GTKWave ==<br />
(If you plan to develop FPGA code or at least look how it works)<br />
<br />
GTKWave is OK from repository:<br />
sudo apt-get install gtkwave<br />
<br />
But unfortunately Icarus Verilog (package verilog) is compiled without the needed support for the compressed output format, so you'll have to compile it yourself:<br />
cd ~/Downloads<br />
wget "ftp://ftp.icarus.com/pub/eda/verilog/v0.9/verilog-0.9.3.tar.gz"<br />
tar zxvf verilog-0.9.3.tar.gz<br />
cd verilog-0.9.3/<br />
./configure<br />
make<br />
sudo make install<br />
<br />
== Installing Xilinx WebPack (to be moved to a separate page, just a link here)==<br />
If you plan to compile the FPGA code from source you will need to install the appropriate software from the FPGA manufacturer's web site. This is proprietary (and non-free) software provided by [http://www.xilinx.com Xilinx] free of charge. This software is called the [http://www.xilinx.com/tools/webpack.htm ISE WebPACK] and you may download it (currently some 2.2GB) after registering at the Xilinx web site. The currently tested version is 10.1.03; we'll try to update our code when Xilinx releases a new version of their software.<br />
<br />
You will also need that software if you would like to simulate the FPGA functionality. The Elphel camera FPGA code (written in Verilog HDL) is licensed under GNU GPLv3, and the simulator and waveform viewer (see below) are also Free Software, but there are a few Verilog models of some Xilinx FPGA primitives that are not - they can only be obtained from Xilinx as a part of their unisims library. For simulation we use a small subset of the unisims library components, and only for functional simulation, so it will probably make sense to re-write those primitive models so the 2.2GB distribution will not be needed just to extract a few kilobytes of required source code.<br />
<br />
Such an independent re-implementation would also help us solve another problem - we have to patch the Xilinx library components to make them work correctly with our code and the simulator we use, and currently each Xilinx library update breaks our patches.<br />
<br />
When you build the Elphel software (as described later) the installation script will try to locate the Xilinx software on your computer and patch a copy of the unisims library. If you install Xilinx WebPack after building the camera software, you'll need to navigate to the ''fpga'' subdirectory of the source tree and execute<br />
make clean ; make<br />
It is described in file ''README.simulation'' in that subdirectory.<br />
<br />
<br />
There are additional steps required for the Xilinx WebPACK installation if you have a 64-bit GNU/Linux operating system. The next command line detects if you are running on a 64-bit version of GNU/Linux and conditionally installs '''ia32-libs'''. That library is needed if you install the Xilinx WebPack software (Xilinx does not provide 64-bit binaries in their free-for-download software, and the provided installation script does not install the 32-bit version on its own). There is [http://ubuntuforums.org/showthread.php?t=203459 another trick] you'll need to be aware of if you are using the 32-bit Xilinx WebPack on a 64-bit GNU/Linux system.<br />
if [ `uname -m` = "x86_64" ] ; then sudo apt-get install ia32-libs ; fi<br />
<br />
==Installation of the source code of Elphel camera software==<br />
<br />
You may install the Elphel source code by either of two methods: either from CVS (the most current code) or from the tarball (release) files.<br />
===Installation from the CVS===<br />
<br />
Get [http://downloads.sourceforge.net/elphel/elphel353_80_install_from_cvs.sh.tar.gz elphel353_80_install_from_cvs.sh], open the archive that contains the shell script and execute it. It is recommended that you create a subdirectory in your home directory, i.e. "elphel_projects", and move and execute the elphel353_80_install_from_cvs.sh script there. A directory "distfiles" will be created there and used as a cache for software archives that will be downloaded during installation.<br />
<br />
mkdir -p ~/elphel_projects; cd ~/elphel_projects<br />
wget "http://downloads.sourceforge.net/elphel/elphel353_80_install_from_cvs.sh.tar.gz"<br />
tar zxvf elphel353_80_install_from_cvs.sh.tar.gz<br />
./elphel353_80_install_from_cvs.sh<br />
<br />
===Installation from the tarball (release file)===<br />
http://sourceforge.net/projects/elphel<br />
<br />
* get one of the elphel353-8.0.* releases<br />
* decompress the archive<br />
* execute the ./install_elphel script<br />
./install_elphel<br />
<br />
A file build.log will be created in the top installation directory. If you run into any installation problems you can compress that file and email it to Elphel support.<br />
<br />
At the end of the installation the script will generate a list of all the files in the target (camera) file system that are to be installed and compare it against the contents of the file ''target.list'' that is included in the distribution. Any differences will be reported (there should be none). If there are some missing files it is likely that something failed to install correctly.<br />
<br />
After the installation completes successfully you may want to execute the following command in the top installation directory (the one that has the apps, configure-files, ... subdirectories)<br />
./prep_kdevelop.php<br />
<br />
That will create ''elphel353.kdevelop'' - a project file for KDevelop IDE (version 3.5.x), you can use it as described here - [[KDevelop]]<br />
<br />
<br />
----<br />
<br />
== ImageJ and Elphel plugins for imageJ ==<br />
ImageJ is a powerful open source image processing package written in Java. There are plugins to work with Elphel cameras; the most universal one allows opening JP46 files (stored on a file system or downloaded directly from the camera). This plugin reads metadata from the image and un-applies the in-camera non-linear conversion, resulting in a linearized image where each pixel is represented by a floating point value proportional to the number of photons detected in that pixel.<br />
<br />
It is a very useful tool for quantitative analysis of camera images.<br />
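As an illustration of what "un-applying" the non-linear conversion means, the sketch below inverts a simple power-law curve. This is only a stand-in: the real plugin uses the actual conversion table read from the image metadata, and the gamma value here is hypothetical.<br />

```python
GAMMA = 0.5  # hypothetical in-camera encoding exponent, for illustration

def encode(linear):
    """What a gamma-style in-camera conversion would do (values in 0..1)."""
    return linear ** GAMMA

def linearize(encoded):
    """What the plugin conceptually un-applies to recover linear values."""
    return encoded ** (1.0 / GAMMA)
```

After linearization, pixel values are again proportional to the detected photon counts, which is what makes quantitative analysis possible.<br />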
<br />
===ImageJ installation ===<br />
You may download ImageJ bundled with Java from the [http://rsbweb.nih.gov/ij/download.html download page]:<br />
==== With 32-bit Java ==== <br />
 cd ~/Downloads; wget "http://rsbweb.nih.gov/ij/download/linux/ij147-linux32.zip" ; unzip ij147-linux32.zip<br />
<br />
==== With 64-bit Java ====<br />
 cd ~/Downloads; wget "http://rsbweb.nih.gov/ij/download/linux/ij147-linux64.zip" ; unzip ij147-linux64.zip<br />
<br />
'''If either of the two direct download links above is broken, please use the [http://rsbweb.nih.gov/ij/download.html download page] to get the current ImageJ version'''<br />
<br />
===Running ImageJ ===<br />
There is "run" command the newly crated directory. You may start ImageJ with the following command:<br />
 ~/Downloads/ImageJ/run<br />
That command allows Java to use 512MB of system memory. If you need more you can change "512" to, say, "1024" in the single-line "run" script. If the run command is started from the command prompt, you'll be able to see some debug output - either from existing plugins or from your own. It is also very useful for seeing any error diagnostics.<br />
<br />
=== Installation of Elphel plugins for ImageJ ===<br />
Elphel plugins are available in the Elphel project Git repository. To use them you first need to install git (if it is not already installed):<br />
sudo apt-get install git-core<br />
<br />
Then clone the repository - directly to the ImageJ plugins directory. Provided you used the same directory for ImageJ as written above:<br />
cd ~/Downloads/ImageJ/plugins<br />
<!-- old url: git clone git://elphel.git.sourceforge.net/gitroot/elphel/ImageJ-Elphel --><br />
git clone git://git.code.sf.net/p/elphel/ImageJ-Elphel<br />
<br />
=== Updating Elphel plugins for ImageJ ===<br />
Later, when you need to update your files from the repository, you will need another command:<br />
cd ~/Downloads/ImageJ/plugins/ImageJ-Elphel<br />
git pull<br />
=== Running Elphel plugin for ImageJ ===<br />
When you have just installed the Elphel plugins you need to select<br />
~/Downloads/ImageJ/plugins/ImageJ-Elphel/JP46_Reader_camera.java<br />
That command does exactly what its name says: it first compiles the source file to Java bytecode, then executes it. The next time you close and restart ImageJ there will be a submenu in Plugins: Plugins->ImageJ-Elphel with a JP46_Reader_camera command.<br />
<br />
When you update the source files you'll need to re-run "Compile and run..."; meanwhile, just use the item in the Plugins->ImageJ-Elphel menu.</div>Poltohttps://wiki.elphel.com/index.php?title=Talk:Nfs_access_speeds&diff=10696Talk:Nfs access speeds2011-09-17T18:54:27Z<p>Polto: Created page with "Why using NFS over TCP ? UDP is faster. --~~~~"</p>
<hr />
<div>Why using NFS over TCP ? UDP is faster.<br />
--[[User:Polto|Alexandre.Poltorak]] 18:54, 17 September 2011 (UTC)</div>Poltohttps://wiki.elphel.com/index.php?title=JP4&diff=10549JP42011-07-13T13:24:59Z<p>Polto: /* Avisynth plugin for JP4 processing */</p>
<hr />
<div>Note: the JP4 mode described here is referred to as "JP46" in the current 8.0 firmware<br />
<br />
== JP4 Format ==<br />
<br />
We have added a special JP4 mode that bypasses the demosaic step in the FPGA and provides an image where the pixels in each 16x16 macroblock are rearranged to separate the Bayer colors into individual 8x8 blocks, then encoded as monochrome. [[Demosaic_on_client_side|Demosaic]] is applied during post-processing on the host PC. This section describes the different algorithms and implementations used to provide this functionality.<br />
<br />
Main goals:<br />
* compression speed improvement<br />
* possibility to obtain a higher quality image (near RAW)<br />
* drastically lowering data size<br />
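The macroblock shuffle described above can be sketched as follows. This is an illustration of the idea, not the FPGA implementation, and it assumes R/G1 on the first row and G2/B on the second (the actual component order depends on sensor flips):<br />

```python
def rearrange_macroblock(mb):
    """Rearrange a 16x16 raw Bayer macroblock (list of 16 lists of 16
    pixels) into four 8x8 single-color blocks packed into one 16x16 block,
    as done by the JP4/JP46 modes before monochrome JPEG encoding."""
    out = [[0] * 16 for _ in range(16)]
    for y in range(8):
        for x in range(8):
            out[y][x]         = mb[2 * y][2 * x]          # R  (even row, even col)
            out[y][x + 8]     = mb[2 * y][2 * x + 1]      # G1 (even row, odd col)
            out[y + 8][x]     = mb[2 * y + 1][2 * x]      # G2 (odd row, even col)
            out[y + 8][x + 8] = mb[2 * y + 1][2 * x + 1]  # B  (odd row, odd col)
    return out
```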
== Different JP4 Modes in 8.X Software ==<br />
Only modes 0-2 can be processed with standard libjpeg: <br />
*0 - mono6, monochrome (color YCbCr 4:2:0 with zeroed out color components) <br />
*1 - color, YCbCr 4:2:0, 3x3 pixels <br />
*2 - jp46 - original JP4 (from 7.X software), encoded as 4:2:0 with zeroed color components <br />
*3 - jp46dc, modified jp46 so each color component uses individual DC differential encoding <br />
*4 - reserved for color with 5x5 conversion (not yet implemented)<br />
*5 - jp4 with omitted color components (4:0:0)<br />
*6 - jp4dc, similar to jp46dc, encoded as 4:0:0<br />
*7 - jp4diff, differential where (R-G), G, (G2-G) and (B-G) components are encoded as 4:0:0<br />
*8 - jp4hdr, (R-G), G, G2, (B-G) are encoded so G2 can be used with high gain <br />
*9 - jp4diff2, (R-G)/2, G, (G2-G)/2, (B-G)/2 to avoid possible overflow in compressed values <br />
*10 - jp4hdr2, (R-G)/2, G, G2, (B-G)/2 <br />
*14 - mono, monochrome with omitted color components (4:0:0)<br />
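The differential modes can be sketched as simple per-Bayer-cell component mappings. This is only an illustration of the tables above, not the camera's actual FPGA code, and the function names are ours:<br />

```python
def jp4diff_encode(r, g, g2, b):
    """jp4diff (mode 7): green is kept, the other components are
    encoded as differences from it (all stored as 4:0:0 planes)."""
    return r - g, g, g2 - g, b - g

def jp4diff_decode(dr, g, dg2, db):
    """Invert jp4diff_encode."""
    return dr + g, g, dg2 + g, db + g

def jp4diff2_encode(r, g, g2, b):
    """jp4diff2 (mode 9): differences are halved to keep the encoded
    values in range (at the cost of the least significant bit)."""
    return (r - g) // 2, g, (g2 - g) // 2, (b - g) // 2
```

Note how jp4diff2 trades one bit of precision of the difference channels for a guarantee that they fit the same range as the original 8-bit samples.<br />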
<br />
<br />
=== [[JP4 HDR]] ===<br />
Bayer patterns look like this:<br />
{| class="wikitable" <br />
|-<br />
|<br />
{| class="wikitable"align="center"<br />
|+RGGB<br />
|-<br />
| bgcolor="red"|R || bgcolor="green"|G1 || bgcolor="red"|R || bgcolor="green"|G1<br />
|-<br />
| bgcolor="green"|G2 || bgcolor="blue"|B || bgcolor="green"|G2 || bgcolor="blue"|B<br />
|-<br />
| bgcolor="red"|R || bgcolor="green"|G1 || bgcolor="red"|R || bgcolor="green"|G1<br />
|-<br />
| bgcolor="green"|G2 || bgcolor="blue"|B || bgcolor="green"|G2 || bgcolor="blue"|B<br />
|}<br />
|<br />
{| class="wikitable"align="center"<br />
|+BGGR<br />
|-<br />
| bgcolor="blue"|B || bgcolor="green"|G1 || bgcolor="blue"|B || bgcolor="green"|G1<br />
|-<br />
| bgcolor="green"|G2 || bgcolor="red"|R || bgcolor="green"|G2 || bgcolor="red"|R<br />
|-<br />
| bgcolor="blue"|B || bgcolor="green"|G1 || bgcolor="blue"|B || bgcolor="green"|G1<br />
|-<br />
| bgcolor="green"|G2 || bgcolor="red"|R || bgcolor="green"|G2 || bgcolor="red"|R<br />
|}<br />
|<br />
{| class="wikitable"align="center"<br />
|+GBRG<br />
|-<br />
| bgcolor="green"|G2 || bgcolor="blue"|B || bgcolor="green"|G2 || bgcolor="blue"|B<br />
|-<br />
| bgcolor="red"|R || bgcolor="green"|G1 || bgcolor="red"|R || bgcolor="green"|G1<br />
|-<br />
| bgcolor="green"|G2 || bgcolor="blue"|B || bgcolor="green"|G2 || bgcolor="blue"|B<br />
|-<br />
| bgcolor="red"|R || bgcolor="green"|G1 || bgcolor="red"|R || bgcolor="green"|G1<br />
|}<br />
|<br />
{| class="wikitable"align="center"<br />
|+GRBG<br />
|-<br />
| bgcolor="green"|G2 || bgcolor="red"|R || bgcolor="green"|G2 || bgcolor="red"|R<br />
|-<br />
| bgcolor="blue"|B || bgcolor="green"|G1 || bgcolor="blue"|B || bgcolor="green"|G1<br />
|-<br />
| bgcolor="green"|G2 || bgcolor="red"|R || bgcolor="green"|G2 || bgcolor="red"|R<br />
|-<br />
| bgcolor="blue"|B || bgcolor="green"|G1 || bgcolor="blue"|B || bgcolor="green"|G1<br />
|}<br />
|}<br />
<br />
Note: all four Bayer patterns can be obtained from the initial RGGB pattern by flipping the image along X and/or Y.<br />
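This is easy to check directly; a small pure-Python sketch on a 4x4 mosaic, with lowercase "g" standing for the second green:<br />

```python
def flip_h(m):  # mirror along the vertical axis (flip X)
    return [row[::-1] for row in m]

def flip_v(m):  # mirror along the horizontal axis (flip Y)
    return m[::-1]

def pattern(m):  # the repeating 2x2 cell of the mosaic
    return "".join(m[0][:2] + m[1][:2])

# 4x4 RGGB mosaic; "g" is the second green (G2)
rggb = [list("RGRG"), list("gBgB"), list("RGRG"), list("gBgB")]

assert pattern(flip_h(rggb)) == "GRBg"          # -> GRBG
assert pattern(flip_v(rggb)) == "gBRG"          # -> GBRG
assert pattern(flip_v(flip_h(rggb))) == "BgGR"  # -> BGGR
```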
<br />
Some sensors can apply independent gain to G1 and G2.<br />
Since the available optics usually do not resolve the full sensor resolution, this [[JP4_HDR|can be]] used to increase the dynamic range of the image sensor.<br />
<br />
== JP46 processing on the host ==<br />
<br />
=== JP46 image decoding in MATLAB ===<br />
The JP4 format can be easily manipulated in [http://www.mathworks.com/matlabcentral/fileexchange/22144 MATLAB] [[Image:Fruits_jp4.jpg|thumb|JP4 image]]<br />
<br />
1. Read image<br />
I=imread('hdr02.jp4'); %read JP4 file like JPEG<br />
or grab an image over HTTP:<br />
I=imread('http://community.elphel.com/pictures/jp4.jpg');<br />
or directly from the camera:<br />
I=imread('http://cam_ip/bimg'); %get online buffered image from cam<br />
<br />
I=I(:,:,1); %strip color data<br />
2. Remove block grouping[[Image:Fruits_jp4_deblocked.jpg|thumb|Bayer CFA encoded image]]<br />
<code matlab><br />
II=deblock16x16(I); %deblock image<br />
<br />
%file deblock16x16.m<br />
function y=deblock16x16(I)<br />
y0=uint8(zeros(size(I)));<br />
for x=1:16:size(I,1)<br />
for y=1:16:size(I,2)<br />
blk16=I(x:x+15,y:y+15);<br />
for dx=0:7<br />
for dy=0:7<br />
y0(x+2*dx ,y+2*dy) = blk16(dx+1,dy+1);<br />
y0(x+2*dx+1,y+2*dy) = blk16(dx+9,dy+1);<br />
y0(x+2*dx ,y+2*dy+1) = blk16(dx+1,dy+9);<br />
y0(x+2*dx+1,y+2*dy+1) = blk16(dx+9,dy+9);<br />
end<br />
end<br />
end<br />
end<br />
y=y0;<br />
</code><br />
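For readers without MATLAB, here is an equivalent pure-Python sketch of the same 16x16 rearrangement (indices translated to 0-based; the image is a list of rows):<br />

```python
def deblock16x16(img):
    """Undo the JP46 macroblock reordering: within each 16x16 block the
    four 8x8 color planes are re-interleaved back into the Bayer mosaic,
    i.e. out[2a+i][2b+j] = blk[8i+a][8j+b] for i, j in {0, 1}."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for bx in range(0, h, 16):          # top-left corner of each block
        for by in range(0, w, 16):
            for a in range(8):          # position inside an 8x8 plane
                for b in range(8):
                    out[bx + 2 * a][by + 2 * b]         = img[bx + a][by + b]
                    out[bx + 2 * a + 1][by + 2 * b]     = img[bx + a + 8][by + b]
                    out[bx + 2 * a][by + 2 * b + 1]     = img[bx + a][by + b + 8]
                    out[bx + 2 * a + 1][by + 2 * b + 1] = img[bx + a + 8][by + b + 8]
    return out
```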
3. Demosaic the image (decode from the Bayer CFA (Color Filter Array) encoded image)[[Image:Fruits_jp4_debayered.jpg|thumb|Decoded image]]<br />
J=demosaic(II,'gbrg');<br />
4. Show the image<br />
imshow(J);<br />
<br />
=== JP46 to DNG image conversion ===<br />
<br />
==== movies ====<br />
<br />
See [[Movie2dng]] for conversion of JP4 movies to DNG.<br />
<br />
==== still frames ==== <br />
<br />
Credits: Dave Coffin<br />
<br />
This Linux command-line tool converts JP4 files into DNGs that dcraw and Adobe Photoshop can open.<br />
<br />
Download [http://community.elphel.com/files/jp4/tiff-3.8.2.tar.gz LibTIFF v3.8.2]<br />
*extract the tar.gz (this should create a new folder called "tiff-3.8.2")<br />
<br />
Apply [http://community.elphel.com/files/jp4/libtiff.patch this patch]: in terminal (first cd to path of libtiff.patch):<br />
patch -p0 < libtiff.patch <br />
<br />
Build LibTIFF:<br />
<br />
cd tiff-3.8.2<br />
./configure<br />
make<br />
sudo make install<br />
<br />
Then compile [http://community.elphel.com/files/jp4/elphel_dng.c this C program] with:<br />
gcc -o elphel_dng elphel_dng.c -ltiff -Wl,--rpath=/usr/local/lib<br />
<br />
With Ubuntu 9.04 (and later) it is possible that the wrong libtiff is selected automatically, which results in an error like this when running the compiled application:<br />
 TIFFSetField: test.dng: Unknown tag 33421.<br />
 TIFFSetField: test.dng: Unknown tag 33422.<br />
 Segmentation fault<br />
<br />
To solve the problem, compile with this line forcing a specific libtiff version:<br />
gcc -o elphel_dng elphel_dng.c -lm /usr/local/lib/libtiff.so.3.8.2 -Wl,--rpath=/usr/local/lib<br />
<br />
Then use the created application:<br />
<br />
Usage: ./elphel_dng "gamma" "input.jpg" "output.dng"<br />
Example: ./elphel_dng 100 example_JP4.jpeg example.dng<br />
<br />
=== JP46 video stream decoding using MPlayer ===<br />
A JP46 stream can be decoded by MPlayer.<br />
Apply [http://community.elphel.com/files/mplayer/debayer.diff this patch] in the MPlayer source directory:<br />
 patch -p0 < debayer.diff<br />
<br />
Or download win32 binaries from sourceforge.<br />
usage example: mplayer.exe test.avi -vf demosaic=deblock=1:method=7:pattern=3 -vo gl<br />
mencoder example: mencoder.exe test.avi -ovc lavc -lavcopts vcodec=mjpeg -o output.avi -vf demosaic=deblock=1:method=1,scale<br />
<br />
Debayer ([[Demosaic_on_client_side|Demosaic]]) algorithm variants provided by libdc1394:<br />
- Nearest Neighbor : OpenCV library<br />
- Bilinear : OpenCV library<br />
- HQLinear : High-Quality Linear Interpolation For Demosaicing Of Bayer-Patterned<br />
Color Images, by Henrique S. Malvar, Li-wei He, and Ross Cutler, <br />
in Proceedings of the ICASSP'04 Conference. <br />
- Edge Sense II : Laroche, Claude A. "Apparatus and method for adaptively interpolating<br />
a full color image utilizing chrominance gradients" <br />
U.S. Patent 5,373,322. Based on the code found on the website <br />
http://www-ise.stanford.edu/~tingchen/ Converted to C and adapted to <br />
all four elementary patterns. <br />
- Downsample : "Known to the Ancients" <br />
- Simple : Implemented from the information found in the manual of Allied Vision<br />
Technologies (AVT) cameras. <br />
- VNG : Variable Number of Gradients, a method described in <br />
http://www-ise.stanford.edu/~tingchen/algodep/vargra.html <br />
Sources import from DCRAW by Frederic Devernay. DCRAW is a RAW <br />
converter program by Dave Coffin. URL: <br />
http://www.cybercom.net/~dcoffin/dcraw/ <br />
- AHD : Adaptive Homogeneity-Directed Demosaicing Algorithm, by K. Hirakawa <br />
and T.W. Parks, IEEE Transactions on Image Processing, Vol. 14, Nr. 3,<br />
March 2005, pp. 360 - 369.<br />
<br />
Pattern codes: pattern=0..3 -> [RGGB, BGGR, GBRG, GRBG]<br />
<br />
=== Avisynth plugin for JP46 processing ===<br />
An [http://avisynth.org/mediawiki/Main_Page Avisynth plugin] is also available.<br />
<br />
AVS script example:<br />
LoadCPlugin("jp4.dll")<br />
DirectShowSource("test.avi")<br />
JP4("AHD","RGGB")<br />
<br />
=== GStreamer plugins for Elphel JP4 image and video processing ===<br />
[http://code.google.com/p/gst-plugins-elphel/ This project], supported by http://ubicast.eu, hosts Elphel-related GStreamer components, so far: <br />
<br />
* the jp462bayer plugin converts color and monochrome JP46 Elphel bitstreams to Bayer raw format. In the future, it might support other JP4 modes (JP4, JP4-HDR, ...)<br />
* bayer2rgb2 converts raw Bayer streams to RGB images <br />
<br />
==== jp462bayer: JP4 to Bayer ====<br />
<br />
Used after jpegdec, it re-arranges the pixels into the Bayer format.<br />
<br />
==== bayer2rgb2: debayer ====<br />
<br />
It offers the same features as the legacy bayer2rgb, but by wrapping libdc1394's debayering algorithms it lets you choose the interpolation algorithm among: simple, bilinear, hqlinear, downsample, edgesense, vng, ahd, nearest <br />
<br />
==== Example pipelines ====<br />
<br />
gst-launch-0.10 rtspsrc location=rtsp://elphel:554 protocols=0x00000001 ! rtpjpegdepay ! jpegdec ! \<br />
queue ! jp462bayer ! queue ! bayer2rgb2 ! queue ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv, \<br />
format=(fourcc)I420, width=(int)1920, height=(int)1088, framerate=(fraction)25/1" ! xvimagesink sync=false max-lateness=-1<br />
<br />
== Demosaicing/debayering links ==<br />
<br />
[http://scien.stanford.edu/class/psych221/projects/99/tingchen A Study of Spatial Color Interpolation Algorithms for Single-Detector Digital Cameras. Ting Chen / Stanford University]<br />
<br />
Source code:<br />
[http://sourceforge.net/projects/elynx eLynx Image Processing SDK and Lab]<br />
[http://libdc1394.git.sourceforge.net/git/gitweb.cgi?p=libdc1394;a=blob;f=libdc1394/dc1394/bayer.c;hb=HEAD libdc1394]<br />
[http://graphics.cs.williams.edu/papers/BayerJGT09 Efficient, high-quality Bayer demosaic filtering on GPUs]<br />
http://svn2.assembla.com/svn/ge/libgedrawing/trunk/src/ImageBayer.cpp<br />
<br />
Example files:<br />
* [http://community.elphel.com/files/jp4/example_JP4.jpeg Example JP4]<br />
* [http://community.elphel.com/files/jp4/example_flipped.dng Example DNG]<br />
* [http://community.elphel.com/files/jp4/example_converted.jpg Example JPG (converted)]<br />
<br />
For more colorful examples please visit http://cinema.elphel.com/still-images<br />
<br />
See also:<br />
<br />
* [[Demosaic on client side]]<br />
* [http://linuxdevices.com/articles/AT4187053130.html Elphel camera under the hood: from Verilog to PHP - on LinuxDevices.com]<br />
* [http://www.pythonware.com/library/pil/handbook/decoder.htm Python PIL Writing Your Own File Decoder]</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_Research_Lab&diff=10022Elphel Research Lab2011-04-11T16:43:33Z<p>Polto: </p>
<hr />
<div>===Invitation for collaboration to University of Utah students and faculty members===<br />
<br />
[http://www3.elphel.com/ Elphel, Inc.] is an Open Hardware and Free Software company based in Utah, designing and manufacturing high-end digital cameras used in a variety of applications, among which are the [http://en.wikipedia.org/wiki/Google_Books Google Books] and [http://www.google.com/search?q=elphel+streetview Google Streetview] projects. <br />
<br />
Our cameras are very flexible, high-performance devices capable of image processing, analysis, compression, etc. <br />
More information about camera configurations and capabilities can be found here: http://wiki.elphel.com/index.php?title=353<br />
Elphel NC353L cameras can be used as an open development platform, which can be modified at various levels - from high to low (from PHP to FPGA) depending on the task and skills - and can be reconfigured for different [http://en.wikipedia.org/wiki/Elphel applications]. They are already widely used in scientific applications; some of them are:<br />
<br />
* [http://blog.elphel.com/2010/08/scini-takes-elphel-under-antarctic-ice/ SCINI project] - the camera was integrated into an underwater robot designed by Moss Landing Marine Lab for observing life under the Antarctic ice.<br />
* [http://www.mail-archive.com/support-list@support.elphel.com/msg00249.html NASA Global Hawk UAV] - the camera is used for photographic imaging of Earth from near-space.<br />
* [http://cinema.elphel.com/ Apertus] - an open cinematographic camera under development.<br />
<br />
Elphel's current work in the fields of image processing and enhancement, optical calibration and aberration correction, is described on [http://blogs.elphel.com/ Elphel Development Blog];<br />
Other [http://www3.elphel.com/articles articles] describe camera's capabilities as a reconfigurable device.<br />
<br />
Elphel's cameras are used in research in many universities [http://map.elphel.com/ around the world], such as:<br />
<br />
*CNRS (Centre National de la Recherche Scientifique), France <br />
*Concordia University, Electrical & Computer Engineering, Canada<br />
*Czech Technical University in Prague, Faculty of Electrical Engineering, Czech Republic<br />
*Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, USA<br />
*Niagara College, Research & Innovation, Canada<br />
*Stanford University Computer Science Department, USA<br />
*TU Dortmund, Department of Computer Science, Germany<br />
*Technical University of Madrid, Department of Informatics, Spain<br />
*University of Alcala, Electronics Department, Spain<br />
*University of Cantabria, Spain<br />
*Universidad de Concepción, Optics and Photonics Department, Chile<br />
*University of Talca, Engineering Department, Chile<br />
*University of Vaasa, Department of electrical engineering and automation, Finland<br />
and other universities.<br />
<br />
===Elphel, Inc. would like to invite students majoring in engineering, physics and computer science===<br />
to work with Elphel for scientific research and development in the areas, including but not limited to:<br />
<br />
*Image processing and post-processing: optical aberration correction, image enhancement<br />
<br />
*IMU data processing for image correction <br />
<br />
using Elphel camera open hardware and open source software (all software development has to be licensed under GNU GPL v3 license) as a development platform.<br />
<br />
<br />
Possible project ideas within these areas will be discussed at the Elphel presentation / student interview, at [http://www3.elphel.com/contacts Elphel office]<br />
<br />
After the initial research project is completed Elphel Inc. may offer a contract for finishing a product development based on the completed research. <br />
<br />
<br />
If you are interested in collaboration with Elphel, Inc. please contact Andrey N. Filippov, PhD:<br />
<br />
by e-mail: [mailto:projects@elphel.com projects@elphel.com] <br />
<br />
or by phone: 801.783.5555x106</div>Poltohttps://wiki.elphel.com/index.php?title=RTC&diff=8796RTC2010-10-03T15:23:22Z<p>Polto: /* Absolute date/time in Elphel cameras */</p>
<hr />
<div>[[RTC]] |<br />
[[10331]] |<br />
[[10332]] |<br />
[[10334]]<br />
----<br />
<br />
== Real Time Clock ==<br />
=== Absolute date/time in Elphel cameras ===<br />
Elphel model NC353L cameras (as well as older models) do not have a hardware clock with battery backup, so each time one wakes up it is January 1, 1970. (NC353L-369* models have a CMOS clock.) We made a simple hack to automatically set the date and time - the camera web interface uses JavaScript code that sends the host computer time (UTC) to the camera as one of the parameters for image acquisition (originally the purpose was just to make all URLs unique to prevent caching of images by the browser).<br />
<br />
=== Timekeeping in Elphel cameras ===<br />
After the date/time is set, cameras use the internal CPU timer (based on the master crystal oscillator) to maintain the current time. This method has several limitations:<br />
<br />
1 - the precision of the clock is limited to that of the crystal oscillator. The clock IC does allow fine-tuning the oscillator by varying its capacitive load, but it is not easy to calculate the required change to apply;<br />
<br />
2 - the FPGA, which performs most of the data processing in the camera, does not have direct access to the CPU timer. A precise timer in the FPGA simplifies image synchronization and timestamping, with no uncertainty from software response time.<br />
<br />
=== FPGA Timer ===<br />
The FPGA code now has a replacement RTC with a digitally adjustable rate (so it is possible to calculate and apply a correction). The timer data consists of a 32-bit seconds counter and a 20-bit microseconds counter running from 0 to 999999 (0xF423F); provisions are made to freeze all 52=32+20 bits during reading so the seconds are not incremented between the two reads.<br />
<br />
The timer is programmed by writing data to the FPGA registers.<br />
Writing to X313_WA_RTC_CORR (0x46) sets the 16-bit signed correction value (positive makes the clock faster, negative - slower). The correction works in the following way: every 0.5 microseconds (10 periods of the 20MHz master crystal oscillator) a 24-bit accumulator is incremented by 0x800000+correction_value (0x7F8000 to 0x807FFF), and the carry out increments the microsecond counter - which normally happens every other time.<br />
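The arithmetic is easy to verify: with a nominal increment of 0x800000 into a 24-bit accumulator every 0.5 microseconds, the carry averages exactly one count per microsecond, and the relative rate change is correction/0x800000 (about 0.12 ppm per LSB, for a total range of roughly +/-3900 ppm). A small Python sketch of this calculation (ours, not camera code):<br />

```python
STEP = 0x800000  # nominal accumulator increment per 0.5 us tick

def rate_change_ppm(corr):
    """Relative clock-rate change (ppm) produced by a correction value."""
    return corr / STEP * 1e6

def correction_for_ppm(ppm):
    """16-bit signed correction value compensating a given oscillator drift."""
    c = round(ppm * 1e-6 * STEP)
    if not -0x8000 <= c <= 0x7FFF:
        raise ValueError("drift outside the correctable range")
    return c
```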
<br />
Writing to X313_WA_RTC_USEC (0x44) presets the microsecond counter. The value is actually applied to the counter only together with the value written to the next register - X313_WA_RTC_SEC (0x45) - seconds.<br />
<br />
Output data needs to be latched in the registers before read out can be performed. It is accomplished by writing to X313_WA_RTC_LATCH (0x47).<br />
<br />
After that microseconds (20 bits) are available for reading at X313_RA_RTC_USEC (0x44), and seconds - at X313_RA_RTC_SEC (0x45).<br />
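The preset-and-latch protocol can be modeled with a toy register file. This is only a behavioral sketch to illustrate the write/read ordering described above, not real hardware-access code:<br />

```python
class RtcModel:
    """Behavioral model of the FPGA RTC register interface."""
    WA_USEC, WA_SEC, WA_LATCH = 0x44, 0x45, 0x47  # write addresses
    RA_USEC, RA_SEC = 0x44, 0x45                  # read addresses

    def __init__(self):
        self.sec = self.usec = 0    # running counters
        self._pending_usec = 0
        self._latched = (0, 0)

    def write(self, addr, value):
        if addr == self.WA_USEC:
            self._pending_usec = value  # applied only with the seconds write
        elif addr == self.WA_SEC:
            self.sec, self.usec = value, self._pending_usec
        elif addr == self.WA_LATCH:
            self._latched = (self.sec, self.usec)  # freeze all 52 bits

    def read(self, addr):
        sec, usec = self._latched
        return sec if addr == self.RA_SEC else usec
```

Latching before reading guarantees that seconds and microseconds come from the same instant, even if the counters advance between the two reads.<br />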
<br />
Currently timer is not generating any interrupts but it is not difficult to add such functionality if needed.<br />
<br />
== Timestamps ==<br />
<br />
One of the main functions of the FPGA timer is to provide timestamping of the images; it is especially important in [http://www.elphel.com/3fhlo/samples/pf/ Photo-finish mode]. In that mode the camera is rotated 90 degrees and the image consists of two-pixel-wide columns (two - because of the color filter pattern) acquired at several thousand fps. Each of them should be timestamped.<br />
<br />
Timestamps currently consist of 52 binary values (20 bits of microseconds and 32 bits of seconds) embedded into the image as completely black (for 0) or white (for 1) pixels. Because of the FPGA compressor architecture this embedding takes place before the color de-mosaic algorithm is applied, so the pixel values of the timestamps are distorted (by the Bayer->YCbCr 4:2:0 conversion), but it is rather easy to apply an image filter to restore the original values.<br />
<br />
Each timestamp is split into 2 lines. The first line has bits 5..0 (LSBs) of seconds and all 20 bits of microseconds, the second - bits 31..6 of seconds (MSB first). <br />
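As a sketch, the two rows can be generated and decoded like this (MSB-first ordering within each field of the first line is our assumption, matching the stated order of the second line; 1 = white pixel, 0 = black):<br />

```python
def timestamp_rows(sec, usec):
    """Split a (sec, usec) timestamp into the two 26-bit pixel rows:
    row 1: seconds bits 5..0, then all 20 microsecond bits;
    row 2: seconds bits 31..6."""
    row1 = [(sec >> b) & 1 for b in range(5, -1, -1)] \
         + [(usec >> b) & 1 for b in range(19, -1, -1)]
    row2 = [(sec >> b) & 1 for b in range(31, 5, -1)]
    return row1, row2

def decode_rows(row1, row2):
    """Invert timestamp_rows."""
    sec_lo = int("".join(map(str, row1[:6])), 2)
    usec = int("".join(map(str, row1[6:])), 2)
    sec_hi = int("".join(map(str, row2)), 2)
    return (sec_hi << 6) | sec_lo, usec
```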
<br />
Writing data to X313_WA_TIMESTAMP (0x48) selects one of the two timestamping modes (or disables timestamping):<br />
<br />
0 - timestamps off<br />
<br />
1 - timestamps are embedded into the upper-left corner of the image. Currently, while the demosaic algorithm uses 3x3 pixel interpolation, the first row and first column are skipped. When the 5x5 conversion is implemented the timestamp will be in the very corner (or maybe not, to simplify reversing the de-mosaic processing of the timestamp data). The frame size does not increase in this mode - the timestamp is embedded over part of the image.<br />
<br />
2 - timestamps are embedded after the sensor data in each column (sensor row); this mode is needed for the photo-finish application. The image size is increased by 28 pixels to accommodate the timestamps.<br />
<br />
The timestamp mode can also be specified with the HTTP request by adding &ts=<0..2> to the image URL, you may als..<br />
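For example, the request URL can be built like this (the host and image path here are hypothetical - substitute your camera's imgsrv address):<br />

```python
from urllib.parse import urlencode

def image_url(host, ts_mode):
    """Build an image request URL selecting timestamp mode 0, 1 or 2."""
    if ts_mode not in (0, 1, 2):
        raise ValueError("ts must be 0..2")
    return "http://%s/img?%s" % (host, urlencode({"ts": ts_mode}))
```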
== Timestamping frame data ==<br />
In 353 cameras, starting from FPGA revision 0333100f (it can be read with "fpcf -r 13" from a telnet session), the timestamp data is embedded into each frame transferred from the FPGA to the system memory. It comes right before the frame length data as an 8-byte sequence - 4 bytes of seconds and then 4 bytes of microseconds.<br />
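A sketch of parsing that 8-byte field from a captured buffer, assuming unsigned 32-bit little-endian values (the byte order is an assumption - cross-check against the Exif header of the same frame):<br />

```python
import struct

def parse_frame_timestamp(buf, offset=0):
    """Read (seconds, microseconds) from the 8 bytes preceding the frame length."""
    sec, usec = struct.unpack_from("<II", buf, offset)
    return sec, usec
```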
<br />
This data is also embedded as EXIF into the file header.<br />
<br />
<br />
----<br />
''Free Software and Open Hardware. Elphel, Inc., 2005-2006''</div>Poltohttps://wiki.elphel.com/index.php?title=Exif&diff=8795Exif2010-10-03T15:15:01Z<p>Polto: /* MOV EXIF Data Extraction Script (NASA) */</p>
<hr />
<div>[[Category:Development Topics]]<br />
Images and videos from Elphel 353/363 cameras are tagged with EXIF data; we use standard EXIF fields (EXIF 2.2 specification from exif.org).<br />
<br />
= Reading EXIF from Images =<br />
Basically any standard software that can read EXIF data from JPEG images should work.<br />
<br />
[http://www.exiv2.org/ exiv2] allows you to add, delete and modify metadata. <br />
<br />
Note that the [http://en.wikipedia.org/wiki/Makernote#MakerNote_data makernote] is an EXIF field whose structure and content differ for every camera manufacturer.<br />
<br />
= Reading EXIF from Videos =<br />
Each frame of Elphel-generated MJPEG (OGM or MOV) videos also contains EXIF data.<br />
<br />
See examples of how to read them below.<br />
<br />
== GSTreamer metadatademux ==<br />
[http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad-plugins/html/gst-plugins-bad-plugins-metadatademux.html GSTreamer metadatademux] — an element that parses or demuxes metadata from image files; we need to make it work with MJPEG video files.<br />
<br />
To test it on single image:<br />
gst-launch -v -m filesrc location=./test.jpeg ! metadatademux ! fakesink silent=TRUE<br />
<br />
The idea is to be able to parse a video in realtime, use the metadatademux plugin to extract the needed EXIF fields and pipe them to another plugin such as [http://gstreamer.freedesktop.org/data/doc/gstreamer/0.10.4/gst-plugins-base-plugins/html/gst-plugins-base-plugins-plugin-subparse.html subparse] or [http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-textoverlay.html textoverlay] to overlay those data on the video, or to a custom plugin that writes KML/KMZ files for Google Earth.<br />
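As an illustration of the last idea, a hypothetical helper that turns extracted GPS coordinates into KML (the element layout follows the KML 2.2 schema; the function names are made up):<br />

```python
def kml_placemark(lat, lon, name="frame"):
    """One KML Placemark for a geotagged frame; KML wants lon,lat order."""
    return ('<Placemark><name>%s</name>'
            '<Point><coordinates>%.6f,%.6f,0</coordinates></Point>'
            '</Placemark>' % (name, lon, lat))

def kml_document(placemarks):
    """Wrap placemarks in the minimal KML document Google Earth expects."""
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + '</Document></kml>')
```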
<br />
== OGG subtitles ==<br />
OGG has a provision to include [http://en.wikipedia.org/wiki/Ogg#Metadata metadata]. It can be interesting as many players have support for subtitles.<br />
<br />
==MOV EXIF Data Extraction Example Script==<br />
<br />
[[Media:PHP-geo-test.tar.gz | Download the Script]]<br />
<br />
This example script was written by Konstantin Kim in PHP and based on PEL ([http://pel.sourceforge.net/ PHP Exif Library]) from Martin Geisler. <br />
<br />
Both are released under GNU GPL V3.<br />
<br />
The script extracts EXIF data from a MOV file and writes it to a KML file. It's easy to modify to do something else.<br />
<br />
The script has a design error, though: it needs to load the entire MOV file into RAM. So don't use it with 32GB files ;)<br />
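A fixed version would walk the file in chunks instead of slurping it whole; the sketch below shows the general pattern (it is the idea in Python, not a drop-in patch for the PHP script):<br />

```python
def iter_chunks(path, chunk_size=1 << 20):
    """Yield a large file in 1 MiB chunks so memory use stays constant."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk
```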
<br />
==MOV EXIF Data Extraction Script (NASA)==<br />
<br />
[[media:Extract_exif.tar.gz | Download the Script]]<br />
<br />
This script, written in PHP, was modified by Scott Janz from the work of Konstantin Kim and is based on PEL ([http://pel.sourceforge.net/ PHP Exif Library]) by Martin Geisler. <br />
<br />
This includes the PEL distribution as well since one file had to be modified.<br />
<br />
The script is modified to output the location parameters (GPS coordinates) to a text file formatted in a way that you can read it with QuickTime and overlay the information on the movie.<br />
<br />
=EXIF Fields=<br />
All JPEG images have the fields mentioned below (you can check this with any program that supports Exif - in Linux, for example, with the "exif" command-line program):<br />
<br />
<br><br />
<table border="1" width="75%"><br />
<tr><br />
<td><b>Tag name</b></td><br />
<td><b>Tag ID</b></td><br />
<td><b>Sample value</b></td><br />
<td><b>description</b></td><br />
</tr><br />
<br />
<tr><br />
<td>Image description</td><br />
<td>0x010E</td><br />
<td></td><br />
<td><i>coming soon...</i></td><br />
</tr><br />
<br />
<tr><br />
<td>Manufacturer</td><br />
<td>0x010F</td><br />
<td>Elphel, Inc</td><br />
<td><i>standard</i></td><br />
</tr><br />
<br />
<tr><br />
<td>Model</td><br />
<td>0x0110</td><br />
<td>353</td><br />
<td><i>standard</i></td><br />
</tr><br />
<br />
<tr><br />
<td>Software</td><br />
<td>0x0131</td><br />
<td>7.1.0.18</td><br />
<td>firmware version of camera at exposition moment</td><br />
</tr><br />
<br />
<tr><br />
<td>Date and Time</td><br />
<td>0x0132</td><br />
<td>1970:01:01 01:50:14</td><br />
<td><i>standard</i>; time from FPGA RTC at sensor start moment</td><br />
</tr><br />
<br />
<tr><br />
<td>Artist</td><br />
<td>0x013B</td><br />
<td>00:0E:64:01:02:03</td><br />
<td>serial number of camera (Ethernet MAC address), ASCIIZ value</td><br />
</tr><br />
<br />
<tr><br />
<td>Exposure Time</td><br />
<td>0x829A</td><br />
<td>1/49 sec.</td><br />
<td><i>standard</i></td><br />
</tr><br />
<br />
<tr><br />
<td>Date and Time (original)</td><br />
<td>0x9003</td><br />
<td>1970:01:01 01:50:14</td><br />
<td><i>standard</i>; time from FPGA RTC at sensor start moment</td><br />
</tr><br />
<br />
<tr><br />
<td>SubSecTimeOriginal</td><br />
<td>0x9291</td><br />
<td>686527</td><br />
<td><i>standard</i>; time from FPGA RTC at sensor start moment</td><br />
</tr><br />
<br />
</table><br />
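For scripting, the table above maps naturally onto a lookup table of tag IDs (the key names follow EXIF 2.2 naming; the dictionary itself is just a convenience for your own code, not a camera API):<br />

```python
ELPHEL_EXIF_TAGS = {
    0x010E: "ImageDescription",
    0x010F: "Make",                # "Elphel, Inc"
    0x0110: "Model",               # "353"
    0x0131: "Software",            # firmware version at exposure
    0x0132: "DateTime",            # FPGA RTC at sensor start
    0x013B: "Artist",              # camera serial (Ethernet MAC)
    0x829A: "ExposureTime",
    0x9003: "DateTimeOriginal",
    0x9291: "SubSecTimeOriginal",  # microseconds part of the timestamp
}
```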
<br />
<br />
----<br />
<br />
<i>Note:</i> in the latest firmware at this moment (7.1.0.18) for the 353 camera (CMOS sensor) all fields are correct, but with the 363 camera (CCD sensor) the exposure time is wrong and the timestamp is shifted - TODO.<br />
<br />
Exif data supported by [[Imgsrv]] and [[Camogm]] is described in [[Imgsrv#imgsrv_and_Exif_data]]. More changes are expected shortly - software will support more flexible Exif data including GPS-related fields.--[[User:Andrey.filippov|Andrey.filippov]] 10:40, 28 March 2008 (CDT)<br />
<br />
=Adding additional EXIF2.2 Data Fields in the camera=<br />
<br />
Emails by Andrey Filippov on Support Mailinglist:<br />
<br />
[http://www.mail-archive.com/support-list@support.elphel.com/msg00313.html http://www.mail-archive.com/support-list@support.elphel.com/msg00313.html]<br />
<br />
[http://www.mail-archive.com/support-list@support.elphel.com/msg00278.html http://www.mail-archive.com/support-list@support.elphel.com/msg00278.html]</div>Poltohttps://wiki.elphel.com/index.php?title=Exif&diff=8794Exif2010-10-03T15:14:29Z<p>Polto: /* MOV EXIF Data Extraction Example Script */</p>
<hr />
<div>[[Category:Development Topics]]<br />
Images and videos from Elphel 353/363 cameras are tagged with EXIF data; we use standard EXIF fields (EXIF 2.2 specification from exif.org).<br />
<br />
= Reading EXIF from Images =<br />
Basically any standard software that can read EXIF data from JPEG images should work.<br />
<br />
[http://www.exiv2.org/ exiv2] allows you to add, delete and modify metadata. <br />
<br />
Though the [http://en.wikipedia.org/wiki/Makernote#MakerNote_data makernote] is an EXIF field that is different (structure as well as content) for every camera manufacturer.<br />
<br />
= Reading EXIF from Videos =<br />
Each frame of Elphel-generated MJPEG (OGM or MOV) videos also contains EXIF data.<br />
<br />
See examples of how to read them below.<br />
<br />
== GSTreamer metadatademux ==<br />
[http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad-plugins/html/gst-plugins-bad-plugins-metadatademux.html GSTreamer metadatademux] — element that parse or demux metadata from image files, we need to make it work with MJPEG video files.<br />
<br />
To test it on single image:<br />
gst-launch -v -m filesrc location=./test.jpeg ! metadatademux ! fakesink silent=TRUE<br />
<br />
The idea is to be able to parse a video in realtime, use metadatademux plugin to extract needed EXIF field and pipe them to another plugin such as [http://gstreamer.freedesktop.org/data/doc/gstreamer/0.10.4/gst-plugins-base-plugins/html/gst-plugins-base-plugins-plugin-subparse.html subparse] or [http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-textoverlay.html textoverlay] to include those data on the video or a custom plugin to write KML/KMZ files for google earth.<br />
<br />
== OGG subtitles ==<br />
OGG has a provision to include [http://en.wikipedia.org/wiki/Ogg#Metadata metadata]. It can be interesting as many players have support for subtitles.<br />
<br />
==MOV EXIF Data Extraction Example Script==<br />
<br />
[[Media:PHP-geo-test.tar.gz | Download the Script]]<br />
<br />
This example script was written by Konstantin Kim in PHP and based on PEL ([http://pel.sourceforge.net/ PHP Exif Library]) from Martin Geisler. <br />
<br />
Both are released under GNU GPL V3.<br />
<br />
The script extracts EXIF data from a MOV file and writes it to a KML file. It's easy to modify to do something else.<br />
<br />
The script has a design error, though: it needs to load the entire MOV file into RAM. So don't use it with 32GB files ;)<br />
<br />
==MOV EXIF Data Extraction Script (NASA)==<br />
<br />
[[Image:Extract_exif.tar.gz | Download the Script]]<br />
<br />
This script was modified by Scott Janz based on the work by Konstantin Kim in PHP and based on PEL ([http://pel.sourceforge.net/ PHP Exif Library]) from Martin Geisler. <br />
<br />
This includes the PEL distribution as well since one file had to be modified.<br />
<br />
The script is modified to output the location parameters (GPS coordinates) to a text file formatted in a way that you can read it with QuickTime and overlay the information on the movie.<br />
<br />
=EXIF Fields=<br />
All JPEG images have the fields mentioned below (you can check this with any program that supports Exif - in Linux, for example, with the "exif" command-line program):<br />
<br />
<br><br />
<table border="1" width="75%"><br />
<tr><br />
<td><b>Tag name</b></td><br />
<td><b>Tag ID</b></td><br />
<td><b>Sample value</b></td><br />
<td><b>description</b></td><br />
</tr><br />
<br />
<tr><br />
<td>Image description</td><br />
<td>0x010E</td><br />
<td></td><br />
<td><i>coming soon...</i></td><br />
</tr><br />
<br />
<tr><br />
<td>Manufacturer</td><br />
<td>0x010F</td><br />
<td>Elphel, Inc</td><br />
<td><i>standard</i></td><br />
</tr><br />
<br />
<tr><br />
<td>Model</td><br />
<td>0x0110</td><br />
<td>353</td><br />
<td><i>standard</i></td><br />
</tr><br />
<br />
<tr><br />
<td>Software</td><br />
<td>0x0131</td><br />
<td>7.1.0.18</td><br />
<td>firmware version of camera at exposition moment</td><br />
</tr><br />
<br />
<tr><br />
<td>Date and Time</td><br />
<td>0x0132</td><br />
<td>1970:01:01 01:50:14</td><br />
<td><i>standard</i>; time from FPGA RTC at sensor start moment</td><br />
</tr><br />
<br />
<tr><br />
<td>Artist</td><br />
<td>0x013B</td><br />
<td>00:0E:64:01:02:03</td><br />
<td>serial number of camera (Ethernet MAC address), ASCIIZ value</td><br />
</tr><br />
<br />
<tr><br />
<td>Exposure Time</td><br />
<td>0x829A</td><br />
<td>1/49 sec.</td><br />
<td><i>standard</i></td><br />
</tr><br />
<br />
<tr><br />
<td>Date and Time (original)</td><br />
<td>0x9003</td><br />
<td>1970:01:01 01:50:14</td><br />
<td><i>standard</i>; time from FPGA RTC at sensor start moment</td><br />
</tr><br />
<br />
<tr><br />
<td>SubSecTimeOriginal</td><br />
<td>0x9291</td><br />
<td>686527</td><br />
<td><i>standard</i>; time from FPGA RTC at sensor start moment</td><br />
</tr><br />
<br />
</table><br />
<br />
<br />
----<br />
<br />
<i>Note:</i> in the latest firmware at this moment (7.1.0.18) for the 353 camera (CMOS sensor) all fields are correct, but with the 363 camera (CCD sensor) the exposure time is wrong and the timestamp is shifted - TODO.<br />
<br />
Exif data supported by [[Imgsrv]] and [[Camogm]] is described in [[Imgsrv#imgsrv_and_Exif_data]]. More changes are expected shortly - software will support more flexible Exif data including GPS-related fields.--[[User:Andrey.filippov|Andrey.filippov]] 10:40, 28 March 2008 (CDT)<br />
<br />
=Adding additional EXIF2.2 Data Fields in the camera=<br />
<br />
Emails by Andrey Filippov on Support Mailinglist:<br />
<br />
[http://www.mail-archive.com/support-list@support.elphel.com/msg00313.html http://www.mail-archive.com/support-list@support.elphel.com/msg00313.html]<br />
<br />
[http://www.mail-archive.com/support-list@support.elphel.com/msg00278.html http://www.mail-archive.com/support-list@support.elphel.com/msg00278.html]</div>Poltohttps://wiki.elphel.com/index.php?title=OpenCV&diff=8748OpenCV2010-08-29T15:31:42Z<p>Polto: /* Using v4lsink GStreamer plugin and v4l2loopback */</p>
<hr />
<div>=Single image= <br />
You can use [[imgsrv]] to download a single image from the [[circbuf]]. MJPEG live stream is also available from [[imgsrv]].<br />
<br />
You can use [http://www.gnu.org/software/wget/ wget] to download the image to your computer or implement a small HTTP client in your software.<br />
<br />
=Live video=<br />
==Using AVLD==<br />
AVLD stands for [[AVLD - Another Video Loopback Device]]. It is a very CPU- and RAM-consuming way to present an Elphel network camera as a V4L device.<br />
<br />
While it is the only solution for proprietary V4L-compatible software products (such as Skype), it is a waste of resources for a free software app that can be adapted to receive an RTP stream directly.<br />
<br />
You can download the source code from:<br />
http://allonlinux.free.fr/Projets/AVLD/src/avld_0.1.4.tar.bz2<br />
<br />
Then:<br />
$ tar -jxvf avld_0.1.4.tar.bz2<br />
$ cd avld_0.1.4<br />
$ make && sudo make install<br />
<br />
Now you can load the dummy device by typing:<br />
$sudo modprobe avld width=<camera width resolution> height=<camera height resolution> fps=<camera fps><br />
<br />
Finally, you need to use mencoder in order to redirect the rtsp source to the dummy device:<br />
$ sudo mencoder rtsp://192.168.1.180 -nosound -ovc raw -vf format=bgr24 -of rawvideo -o /dev/video0<br />
<br />
With this, you just need a simple code in order to preview the scene:<br />
<br />
#include "highgui.h"<br />
#include "cv.h"<br />
#include "cvaux.h"<br />
#include <ml.h><br />
<br />
int main(int argc,char *argv[])<br />
{<br />
cvNamedWindow( "Current Frame");<br />
IplImage* pCurrentFrame = NULL;<br />
CvSize frame_size;<br />
CvCapture* capture = cvCaptureFromCAM ( 0 ); //the dummy device make the job :)<br />
char c;<br />
while ( 1 )<br />
{<br />
//Request frame from Camera<br />
pCurrentFrame = cvQueryFrame ( capture );<br />
//video input file finished <br />
if( !pCurrentFrame ) break;<br />
cvShowImage( "Current Frame", pCurrentFrame );<br />
c = cvWaitKey(10);<br />
if ( c == 27 ) break;<br />
}<br />
 cvReleaseCapture ( &capture ); // frames from cvQueryFrame are owned by the capture - release the capture, not the frame<br />
cvDestroyAllWindows ();<br />
return 0;<br />
}<br />
<br />
--[[User:maurosc3ner|Esteban Correa]] 21:54, 29 July 2010 (CDT)<br />
<br />
==Using v4lsink GStreamer plugin and v4l2loopback==<br />
This is basically an AVLD fork that has since evolved into v4l2loopback, plus a special GStreamer sink to the loopback device.<br />
<br />
You can find the projects homepages at: http://code.google.com/p/v4lsink/ & http://code.google.com/p/v4l2loopback/ (This version does not compile due to vala dependencies, somebody cloned and patched the project, it is available from github: http://github.com/umlaeute/gst-v4l2loopback)<br />
<br />
To install it do:<br />
hg clone https://v4l2loopback.googlecode.com/hg/ v4l2loopback <br />
cd v4l2loopback<br />
make<br />
make install<br />
<br />
load the module:<br />
modprobe v4l2loopback<br />
<br />
To compile v4lsink you will need a Vala compiler >= 0.8.1; the easy way is to install the package from the Vala PPA:<br />
sudo add-apt-repository ppa:vala-team/ppa<br />
sudo apt-get update<br />
sudo apt-get install valac vala-utils<br />
It will install valac 0.9.1 instead of default 0.8.0 under Ubuntu 10.04.<br />
<br />
Install v4lsink:<br />
git clone http://github.com/umlaeute/gst-v4l2loopback.git<br />
cd gst-v4l2loopback<br />
./autogen.sh<br />
make<br />
make install<br />
This will install the new GStreamer module in the ~/.gstreamer-0.10/plugins/ folder. Now you can try "gst-inspect v4l2loopback".<br />
<br />
The default resolution is fixed to 640x480; if you want to change it, edit it in the two places in the v4l2loopback.c file and recompile/reinstall v4lsink.<br />
<br />
To test it with an RTSP stream from an Elphel camera you need to feed the stream into the loopback device:<br />
gst-launch -e -m rtspsrc location=rtsp://192.168.0.9:554 latency=50 ! rtpjpegdepay ! jpegdec ! ffmpegcolorspace ! v4l2loopback<br />
<br />
Now you can open the stream as v4l2 device with any compatible program:<br />
mplayer tv:// -tv device=/dev/video1<br />
<br />
To test with OpenCV, simply open the corresponding v4l2 device number:<br />
 // CvCapture* capture = cvCaptureFromCAM ( 0 );<br />
 CvCapture* capture = cvCaptureFromCAM ( 1 );<br />
<br />
==Using OpenCV with FFMPEG==<br />
<br />
<br />
==Using OpenCV with GStreamer==<br />
<br />
==Design your OpenCV code as GStreamer plugin==<br />
http://github.com/Elleo/gst-opencv<br />
<br />
=Portability=<br />
<br />
=Tutorials=<br />
==Tennis balls recognizing==<br />
[[OpenCV Tennis balls recognizing tutorial]]<br />
<br />
==Go game record (kifu) generator==<br />
[[Kifu:_Go_game_record_(kifu)_generator]]</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_353_framerate&diff=8725Elphel 353 framerate2010-08-23T20:52:04Z<p>Polto: </p>
<hr />
<div>{| class="wikitable" border="1"<br />
|-<br />
! <br />
! JPEG mode<br />
! [[JP4]] mode<br />
|-<br />
| Performance || 53 Mp/s || 80 Mp/s<br />
|-<br />
| 2592x1936 || 15 FPS || <br />
|-<br />
| 1920x1088 (FullHD) || 24 FPS || 30 FPS<br />
|-<br />
| 1280x720 (720p) || 57 FPS || 60 FPS<br />
|-<br />
| 800x608 || 90 FPS ||<br />
|-<br />
| 640x480 || 126 FPS ||<br />
|-<br />
| 320x240 || 310 FPS ||<br />
|}<br />
<br />
<br />
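The table rows roughly follow the pixel rate divided by the frame area; a back-of-the-envelope estimate (actual rates are further capped by the limits listed below - sensor readout, exposure and network bandwidth):<br />

```python
def max_fps(pixel_rate, width, height):
    """Compressor-limited FPS estimate: pixels per second over frame area."""
    return pixel_rate / float(width * height)

# e.g. 53 Mp/s JPEG at 1920x1088 gives ~25 FPS, close to the 24 FPS in the table
```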
Framerate limits:<br />
* network bandwidth<br />
* image quality<br />
* exposure time</div>Poltohttps://wiki.elphel.com/index.php?title=Talk:OpenCV&diff=8700Talk:OpenCV2010-08-09T16:15:24Z<p>Polto: </p>
<hr />
<div>= Intro =<br />
A few people recently expressed interest in using [[OpenCV]] with Elphel cameras.<br />
<br />
Let's concentrate our efforts using [[OpenCV]] wiki page and mailing list.<br />
<br />
= OpenCV with FFMPEG =<br />
<br />
I compiled both OpenCV 2.1 and the current git version with FFMPEG support. FFMPEG support seems to work, but FFMPEG itself breaks on the MJPEG stream.<br />
<br />
I took the facedetect example in OpenCV-2.1.0/samples/c/facedetect.cpp and just modified one line:<br />
// CvCapture* capture = 0;<br />
CvCapture* capture = cvCreateFileCapture("rtsp://192.168.0.9:554");<br />
<br />
Running ./facedetect I can see network load from the camera, but a few seconds later the program dies with:<br />
[rtsp @ 0xbefd80]Estimating duration from bitrate, this may be inaccurate <br />
picture size invalid (0x0)<br />
Last message repeated 1 times<br />
[rtsp @ 0xbf2480]Estimating duration from bitrate, this may be inaccurate<br />
it's an FFMPEG error.<br />
<br />
I tried the stable FFMPEG from Ubuntu 10.04, the PPA version from the MOTU Media Team and even a fresh git version. They all die the same way. Passing -x & -y to ffplay does not help. Here is what I get directly from ffplay rtsp://192.168.0.9:554 :<br />
FFplay version 0.6-4:0.6-2ubuntu2~lucid1~ppa1, Copyright (c) 2003-2010 the FFmpeg developers<br />
built on Jul 18 2010 16:06:55 with gcc 4.4.3<br />
configuration: --extra-version='4:0.6-2ubuntu2~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable- libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime- cpudetect --enable-gpl --enable-postproc --enable-x11grab --enable-libdc1394 --enable-shared --disable-static<br />
WARNING: library configuration mismatch<br />
libavutil configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavcodec configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavformat configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavdevice configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable- libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavutil 50.15. 1 / 50.15. 1<br />
libavcodec 52.72. 2 / 52.72. 2<br />
libavformat 52.64. 2 / 52.64. 2<br />
libavdevice 52. 2. 0 / 52. 2. 0<br />
libavfilter 1.19. 0 / 1.19. 0<br />
libswscale 0.11. 0 / 0.11. 0<br />
libpostproc 51. 2. 0 / 51. 2. 0<br />
[rtsp @ 0x1c51a10]Estimating duration from bitrate, this may be inaccurate<br />
Input #0, rtsp, from 'rtsp://192.168.0.9:554':<br />
Duration: N/A, start: 0.038089, bitrate: N/A<br />
Stream #0.0: Video: mjpeg, 90k tbr, 90k tbn, 90k tbc<br />
[ffplay_output @ 0x1c57060]auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'<br />
Impossible to convert between the formats supported by the filter 'src' and the filter 'auto-inserted scaler 0'<br />
<br />
Please advise if you have any ideas... --[[User:Polto|Alexandre.Poltorak]] 05:57, 27 July 2010 (CDT)<br />
<br />
= OpenCV with GStreamer =<br />
Ok, I put FFMPEG aside for some time and decided to give a try with GStreamer.<br />
<br />
So I compiled OpenCV with<br />
cmake -D WITH_FFMPEG=OFF -D WITH_GSTREAMER=ON .<br />
<br />
Still use the same facedetect example in C from samples/c directory and replace just one line:<br />
<br />
I took the facedetect example in OpenCV-2.1.0/samples/c/facedetect.cpp and just modified one line:<br />
// CvCapture* capture = 0;<br />
CvCapture* capture = cvCreateFileCapture("rtsp://192.168.0.9:554");<br />
<br />
I get:<br />
Trying to connect to stream <br />
restarting pipeline, going to ready<br />
ready, relinking<br />
filtering with video/x-raw-rgb, width=(int)1920<br />
relinked, pausing<br />
state now paused<br />
and facedetect falls back to the v4l device.<br />
<br />
Looking in:<br />
./modules/highgui/src/cap_gstreamer.cpp<br />
I noticed that it only has support for "dv1394src", "v4lsrc", "v4l2src", "filesrc". We have to bring rtspsrc support now.<br />
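One possible shape for such support is to hand cap_gstreamer a full pipeline string for RTSP sources; the pipeline below mirrors the gst-launch line from the [[OpenCV]] page (the helper function and the appsink wiring are hypothetical, not existing OpenCV API):<br />

```python
def rtsp_pipeline(host, port=554, latency=50):
    """GStreamer pipeline string an rtspsrc-aware capture backend could use."""
    return ("rtspsrc location=rtsp://%s:%d latency=%d ! "
            "rtpjpegdepay ! jpegdec ! ffmpegcolorspace ! appsink"
            % (host, port, latency))
```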
<br />
--[[User:Polto|Alexandre.Poltorak]] 14:17, 29 July 2010 (CDT)<br />
<br />
= Links =<br />
== Our [[OpenCV]] wiki page ==<br />
== OpenCV blogs ==</div>Poltohttps://wiki.elphel.com/index.php?title=Talk:OpenCV&diff=8584Talk:OpenCV2010-07-29T19:29:27Z<p>Polto: </p>
<hr />
<div>= Intro =<br />
A few people recently expressed interest in using [[OpenCV]] with Elphel cameras.<br />
<br />
Let's concentrate our efforts using [[OpenCV]] wiki page and mailing list.<br />
<br />
= OpenCV with FFMPEG =<br />
<br />
I compiled both OpenCV 2.1 and the current git version with FFMPEG support. FFMPEG support seems to work, but FFMPEG itself breaks on the MJPEG stream.<br />
<br />
I took the facedetect example in OpenCV-2.1.0/samples/c/facedetect.cpp and just modified one line:<br />
// CvCapture* capture = 0;<br />
CvCapture* capture = cvCreateFileCapture("rtsp://192.168.0.9:554");<br />
<br />
Running ./facedetect I can see network load from the camera, but a few seconds later the program dies with:<br />
[rtsp @ 0xbefd80]Estimating duration from bitrate, this may be inaccurate <br />
picture size invalid (0x0)<br />
Last message repeated 1 times<br />
[rtsp @ 0xbf2480]Estimating duration from bitrate, this may be inaccurate<br />
it's an FFMPEG error.<br />
<br />
I tried the stable FFMPEG from Ubuntu 10.04, the PPA version from the MOTU Media Team and even a fresh git version. They all die the same way. Passing -x & -y to ffplay does not help. Here is what I get directly from ffplay rtsp://192.168.0.9:554 :<br />
FFplay version 0.6-4:0.6-2ubuntu2~lucid1~ppa1, Copyright (c) 2003-2010 the FFmpeg developers<br />
built on Jul 18 2010 16:06:55 with gcc 4.4.3<br />
configuration: --extra-version='4:0.6-2ubuntu2~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable- libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime- cpudetect --enable-gpl --enable-postproc --enable-x11grab --enable-libdc1394 --enable-shared --disable-static<br />
WARNING: library configuration mismatch<br />
libavutil configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavcodec configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavformat configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavdevice configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable- libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavutil 50.15. 1 / 50.15. 1<br />
libavcodec 52.72. 2 / 52.72. 2<br />
libavformat 52.64. 2 / 52.64. 2<br />
libavdevice 52. 2. 0 / 52. 2. 0<br />
libavfilter 1.19. 0 / 1.19. 0<br />
libswscale 0.11. 0 / 0.11. 0<br />
libpostproc 51. 2. 0 / 51. 2. 0<br />
[rtsp @ 0x1c51a10]Estimating duration from bitrate, this may be inaccurate<br />
Input #0, rtsp, from 'rtsp://192.168.0.9:554':<br />
Duration: N/A, start: 0.038089, bitrate: N/A<br />
Stream #0.0: Video: mjpeg, 90k tbr, 90k tbn, 90k tbc<br />
[ffplay_output @ 0x1c57060]auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'<br />
Impossible to convert between the formats supported by the filter 'src' and the filter 'auto-inserted scaler 0'<br />
<br />
Please advise if you have any ideas... --[[User:Polto|Alexandre.Poltorak]] 05:57, 27 July 2010 (CDT)<br />
<br />
= OpenCV with GStreamer =<br />
Ok, I put FFMPEG aside for some time and decided to give a try with GStreamer.<br />
<br />
So I compiled OpenCV with<br />
cmake -D WITH_FFMPEG=OFF -D WITH_GSTREAMER=ON .<br />
<br />
Still use the same facedetect example in C from samples/c directory and replace just one line:<br />
<br />
I took the facedetect example in OpenCV-2.1.0/samples/c/facedetect.cpp and just modified one line:<br />
// CvCapture* capture = 0;<br />
CvCapture* capture = cvCreateFileCapture("rtsp://192.168.0.9:554");<br />
<br />
I get:<br />
Trying to connect to stream <br />
restarting pipeline, going to ready<br />
ready, relinking<br />
filtering with video/x-raw-rgb, width=(int)1920<br />
relinked, pausing<br />
state now paused<br />
and facedetect dies.<br />
<br />
Looking in:<br />
./modules/highgui/src/cap_gstreamer.cpp<br />
I noticed that it only has support for "dv1394src", "v4lsrc", "v4l2src", "filesrc". We have to bring rtspsrc support now.<br />
<br />
--[[User:Polto|Alexandre.Poltorak]] 14:17, 29 July 2010 (CDT)<br />
<br />
= Links =<br />
== Our [[OpenCV]] wiki page ==<br />
== OpenCV blogs ==</div>Poltohttps://wiki.elphel.com/index.php?title=Talk:OpenCV&diff=8583Talk:OpenCV2010-07-29T19:17:35Z<p>Polto: </p>
<hr />
<div>= Intro =<br />
A few people recently expressed interest in using [[OpenCV]] with Elphel cameras.<br />
<br />
Let's concentrate our efforts using [[OpenCV]] wiki page and mailing list.<br />
<br />
= OpenCV with FFMPEG =<br />
<br />
I compiled both OpenCV 2.1 and the current git version with FFMPEG support. FFMPEG support seems to work, but FFMPEG itself breaks on the MJPEG stream.<br />
<br />
I took the facedetect example in OpenCV-2.1.0/samples/c/facedetect.cpp and just modified one line:<br />
// CvCapture* capture = 0;<br />
CvCapture* capture = cvCreateFileCapture("rtsp://192.168.0.9:554");<br />
<br />
Running ./facedetect I can see network load from the camera, but a few seconds later the program dies with:<br />
[rtsp @ 0xbefd80]Estimating duration from bitrate, this may be inaccurate <br />
picture size invalid (0x0)<br />
Last message repeated 1 times<br />
[rtsp @ 0xbf2480]Estimating duration from bitrate, this may be inaccurate<br />
It's an FFMPEG error.<br />
<br />
I tried the stable FFMPEG from Ubuntu 10.04, the PPA version from the MOTU Media Team, and even a fresh git version. They all die the same way. Passing -x & -y to ffplay does not help. Here is what I get directly from ffplay rtsp://192.168.0.9:554 :<br />
FFplay version 0.6-4:0.6-2ubuntu2~lucid1~ppa1, Copyright (c) 2003-2010 the FFmpeg developers<br />
built on Jul 18 2010 16:06:55 with gcc 4.4.3<br />
configuration: --extra-version='4:0.6-2ubuntu2~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-x11grab --enable-libdc1394 --enable-shared --disable-static<br />
WARNING: library configuration mismatch<br />
libavutil configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavcodec configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavformat configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavdevice configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavutil 50.15. 1 / 50.15. 1<br />
libavcodec 52.72. 2 / 52.72. 2<br />
libavformat 52.64. 2 / 52.64. 2<br />
libavdevice 52. 2. 0 / 52. 2. 0<br />
libavfilter 1.19. 0 / 1.19. 0<br />
libswscale 0.11. 0 / 0.11. 0<br />
libpostproc 51. 2. 0 / 51. 2. 0<br />
[rtsp @ 0x1c51a10]Estimating duration from bitrate, this may be inaccurate<br />
Input #0, rtsp, from 'rtsp://192.168.0.9:554':<br />
Duration: N/A, start: 0.038089, bitrate: N/A<br />
Stream #0.0: Video: mjpeg, 90k tbr, 90k tbn, 90k tbc<br />
[ffplay_output @ 0x1c57060]auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'<br />
Impossible to convert between the formats supported by the filter 'src' and the filter 'auto-inserted scaler 0'<br />
<br />
Please advise if you have any ideas... --[[User:Polto|Alexandre.Poltorak]] 05:57, 27 July 2010 (CDT)<br />
<br />
= OpenCV with GStreamer =<br />
Ok, I put FFMPEG aside for some time and decided to give GStreamer a try.<br />
<br />
So I compiled OpenCV with<br />
cmake -D WITH_FFMPEG=OFF -D WITH_GSTREAMER=ON .<br />
<br />
I still use the same facedetect example in OpenCV-2.1.0/samples/c/facedetect.cpp, with just one line modified:<br />
// CvCapture* capture = 0;<br />
CvCapture* capture = cvCreateFileCapture("rtsp://192.168.0.9:554");<br />
<br />
I get:<br />
Trying to connect to stream <br />
restarting pipeline, going to ready<br />
ready, relinking<br />
filtering with video/x-raw-rgb, width=(int)1920<br />
relinked, pausing<br />
state now paused<br />
and facedetect dies.<br />
<br />
Looking in:<br />
./modules/highgui/src/cap_gstreamer.cpp<br />
I noticed that it only has support for "dv1394src", "v4lsrc", "v4l2src" and "filesrc". We need to add rtspsrc support now.<br />
<br />
--[[User:Polto|Alexandre.Poltorak]] 14:17, 29 July 2010 (CDT)</div>Poltohttps://wiki.elphel.com/index.php?title=OpenCV&diff=8581OpenCV2010-07-28T23:26:13Z<p>Polto: </p>
<hr />
<div>=Single image= <br />
You can use [[imgsrv]] to download a single image from the [[circbuf]]. An MJPEG live stream is also available from [[imgsrv]].<br />
<br />
You can use [http://www.gnu.org/software/wget/ wget] to download the image to your computer or implement a small HTTP client in your software.<br />
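A minimal HTTP client of the kind mentioned above can be sketched in a few lines of Python. This is only an illustration: the port number (8081) and the command names ("img", "bimg") are assumptions about a typical [[imgsrv]] setup, so check your camera's actual configuration.<br />

```python
from urllib.parse import urlunsplit
from urllib.request import urlopen

# ASSUMPTION: imgsrv is reachable on port 8081 and serves a JPEG for the
# "img" command -- verify against your camera's imgsrv documentation.
def imgsrv_url(host, command="img", port=8081):
    """Build the URL for an imgsrv snapshot request."""
    return urlunsplit(("http", f"{host}:{port}", f"/{command}", "", ""))

def fetch_snapshot(host, path="snapshot.jpg"):
    """Download a single JPEG from the camera (the same thing wget does)."""
    with urlopen(imgsrv_url(host)) as resp, open(path, "wb") as out:
        out.write(resp.read())

print(imgsrv_url("192.168.0.9"))  # -> http://192.168.0.9:8081/img
```

Running wget on the printed URL is equivalent to calling fetch_snapshot().<br />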
<br />
=Live video=<br />
==Using AVLD==<br />
AVLD stands for [[AVLD - Another Video Loopback Device]]. It's a very CPU- and RAM-consuming way to present an Elphel network camera as a V4L device.<br />
<br />
While it is the only solution for proprietary V4L-compatible software products (such as Skype), it's a total waste of resources for a free software app that can be adapted to receive an RTP stream.<br />
<br />
==Using v4lsink GStreamer plugin and v4l2loopback==<br />
This is basically an AVLD fork that has since evolved into v4l2loopback, plus a special GStreamer sink that feeds the loopback device.<br />
<br />
http://code.google.com/p/v4lsink/ & http://code.google.com/p/v4l2loopback/<br />
<br />
==Using OpenCV with FFMPEG==<br />
<br />
<br />
==Using OpenCV with GStreamer==<br />
<br />
==Design your OpenCV code as GStreamer plugin==<br />
http://github.com/Elleo/gst-opencv<br />
<br />
=Portability=<br />
<br />
=Tutorials=<br />
==Tennis balls recognizing==<br />
[[OpenCV Tennis balls recognizing tutorial]]<br />
<br />
==Go game record (kifu) generator==<br />
[[Kifu:_Go_game_record_(kifu)_generator]]</div>Poltohttps://wiki.elphel.com/index.php?title=Talk:OpenCV&diff=8567Talk:OpenCV2010-07-27T11:23:44Z<p>Polto: </p>
<hr />
<div>A few people recently expressed interest in using [[OpenCV]] with Elphel cameras.<br />
<br />
Let's concentrate our efforts using the [[OpenCV]] wiki page and the mailing list.<br />
<br />
----<br />
<br />
I compiled both OpenCV 2.1 and the current git version with FFMPEG support. FFMPEG support seems to work, but FFMPEG itself breaks on the MJPEG stream.<br />
<br />
I took the facedetect example in OpenCV-2.1.0/samples/c/facedetect.cpp and just modified one line:<br />
// CvCapture* capture = 0;<br />
CvCapture* capture = cvCreateFileCapture("rtsp://192.168.0.9:554");<br />
<br />
Running ./facedetect I can see network load from the camera, but a few seconds later the program dies with:<br />
[rtsp @ 0xbefd80]Estimating duration from bitrate, this may be inaccurate <br />
picture size invalid (0x0)<br />
Last message repeated 1 times<br />
[rtsp @ 0xbf2480]Estimating duration from bitrate, this may be inaccurate<br />
It's an FFMPEG error.<br />
<br />
I tried the stable FFMPEG from Ubuntu 10.04, the PPA version from the MOTU Media Team, and even a fresh git version. They all die the same way. Passing -x & -y to ffplay does not help. Here is what I get directly from ffplay rtsp://192.168.0.9:554 :<br />
FFplay version 0.6-4:0.6-2ubuntu2~lucid1~ppa1, Copyright (c) 2003-2010 the FFmpeg developers<br />
built on Jul 18 2010 16:06:55 with gcc 4.4.3<br />
configuration: --extra-version='4:0.6-2ubuntu2~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-x11grab --enable-libdc1394 --enable-shared --disable-static<br />
WARNING: library configuration mismatch<br />
libavutil configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavcodec configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavformat configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavdevice configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavutil 50.15. 1 / 50.15. 1<br />
libavcodec 52.72. 2 / 52.72. 2<br />
libavformat 52.64. 2 / 52.64. 2<br />
libavdevice 52. 2. 0 / 52. 2. 0<br />
libavfilter 1.19. 0 / 1.19. 0<br />
libswscale 0.11. 0 / 0.11. 0<br />
libpostproc 51. 2. 0 / 51. 2. 0<br />
[rtsp @ 0x1c51a10]Estimating duration from bitrate, this may be inaccurate<br />
Input #0, rtsp, from 'rtsp://192.168.0.9:554':<br />
Duration: N/A, start: 0.038089, bitrate: N/A<br />
Stream #0.0: Video: mjpeg, 90k tbr, 90k tbn, 90k tbc<br />
[ffplay_output @ 0x1c57060]auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'<br />
Impossible to convert between the formats supported by the filter 'src' and the filter 'auto-inserted scaler 0'<br />
<br />
Please advise if you have any ideas... --[[User:Polto|Alexandre.Poltorak]] 05:57, 27 July 2010 (CDT)</div>Poltohttps://wiki.elphel.com/index.php?title=OpenCV&diff=8566OpenCV2010-07-27T11:02:36Z<p>Polto: </p>
<hr />
<div>=Single image= <br />
You can use [[imgsrv]] to download a single image from the [[circbuf]]. An MJPEG live stream is also available from [[imgsrv]].<br />
<br />
You can use [http://www.gnu.org/software/wget/ wget] to download the image to your computer or implement a small HTTP client in your software.<br />
<br />
=Live video=<br />
==Using AVLD==<br />
AVLD stands for [[AVLD - Another Video Loopback Device]]. It's a very CPU- and RAM-consuming way to present an Elphel network camera as a V4L device.<br />
<br />
While it is the only solution for proprietary V4L-compatible software products (such as Skype), it's a total waste of resources for a free software app that can be adapted to receive an RTP stream.<br />
<br />
==Using OpenCV with FFMPEG==<br />
<br />
<br />
==Using OpenCV with GStreamer==<br />
<br />
==Design your OpenCV code as GStreamer plugin==<br />
http://github.com/Elleo/gst-opencv<br />
<br />
=Portability=<br />
<br />
=Tutorials=<br />
==Tennis balls recognizing==<br />
[[OpenCV Tennis balls recognizing tutorial]]<br />
<br />
==Go game record (kifu) generator==<br />
[[Kifu:_Go_game_record_(kifu)_generator]]</div>Poltohttps://wiki.elphel.com/index.php?title=Talk:OpenCV&diff=8565Talk:OpenCV2010-07-27T10:57:36Z<p>Polto: </p>
<hr />
<div>A few people recently expressed interest in using [[OpenCV]] with Elphel cameras.<br />
<br />
Let's concentrate our efforts using our wiki and the mailing list.<br />
<br />
----<br />
<br />
I compiled both OpenCV 2.1 and the current git version with FFMPEG support. FFMPEG support seems to work, but FFMPEG itself breaks on the MJPEG stream.<br />
<br />
I took the facedetect example in OpenCV-2.1.0/samples/c/facedetect.cpp and just modified one line:<br />
// CvCapture* capture = 0;<br />
CvCapture* capture = cvCreateFileCapture("rtsp://192.168.0.9:554");<br />
<br />
Running ./facedetect I can see network load from the camera, but a few seconds later the program dies with:<br />
[rtsp @ 0xbefd80]Estimating duration from bitrate, this may be inaccurate <br />
picture size invalid (0x0)<br />
Last message repeated 1 times<br />
[rtsp @ 0xbf2480]Estimating duration from bitrate, this may be inaccurate<br />
It's an FFMPEG error.<br />
<br />
I tried the stable FFMPEG from Ubuntu 10.04, the PPA version from the MOTU Media Team, and even a fresh git version. They all die the same way. Passing -x & -y to ffplay does not help. Here is what I get directly from ffplay rtsp://192.168.0.9:554 :<br />
FFplay version 0.6-4:0.6-2ubuntu2~lucid1~ppa1, Copyright (c) 2003-2010 the FFmpeg developers<br />
built on Jul 18 2010 16:06:55 with gcc 4.4.3<br />
configuration: --extra-version='4:0.6-2ubuntu2~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-x11grab --enable-libdc1394 --enable-shared --disable-static<br />
WARNING: library configuration mismatch<br />
libavutil configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavcodec configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavformat configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavdevice configuration: --extra-version='4:0.6-2ubuntu1~lucid1~ppa1' --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libdirac --enable-libgsm --enable-libopenjpeg --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-vaapi --enable-pthreads --enable-zlib --enable-libvpx --disable-stripping --enable-runtime-cpudetect --enable-libmp3lame --enable-gpl --enable-postproc --enable-x11grab --enable-libfaad --enable-libxvid --enable-libx264 --enable-librtmp --enable-libdc1394 --enable-shared --disable-static<br />
libavutil 50.15. 1 / 50.15. 1<br />
libavcodec 52.72. 2 / 52.72. 2<br />
libavformat 52.64. 2 / 52.64. 2<br />
libavdevice 52. 2. 0 / 52. 2. 0<br />
libavfilter 1.19. 0 / 1.19. 0<br />
libswscale 0.11. 0 / 0.11. 0<br />
libpostproc 51. 2. 0 / 51. 2. 0<br />
[rtsp @ 0x1c51a10]Estimating duration from bitrate, this may be inaccurate<br />
Input #0, rtsp, from 'rtsp://192.168.0.9:554':<br />
Duration: N/A, start: 0.038089, bitrate: N/A<br />
Stream #0.0: Video: mjpeg, 90k tbr, 90k tbn, 90k tbc<br />
[ffplay_output @ 0x1c57060]auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'<br />
Impossible to convert between the formats supported by the filter 'src' and the filter 'auto-inserted scaler 0'<br />
<br />
Please advise if you have any ideas... --[[User:Polto|Alexandre.Poltorak]] 05:57, 27 July 2010 (CDT)</div>Poltohttps://wiki.elphel.com/index.php?title=OpenCV&diff=8564OpenCV2010-07-27T10:19:45Z<p>Polto: </p>
<hr />
<div>=Single image= <br />
You can use [[imgsrv]] to download a single image from the [[circbuf]]. An MJPEG live stream is also available from [[imgsrv]].<br />
<br />
You can use [http://www.gnu.org/software/wget/ wget] to download the image to your computer or implement a small HTTP client in your software.<br />
<br />
=Live video=<br />
==Using AVLD==<br />
AVLD stands for [[AVLD - Another Video Loopback Device]]. It's a very CPU- and RAM-consuming way to present an Elphel network camera as a V4L device.<br />
<br />
While it is the only solution for proprietary V4L-compatible software products (such as Skype), it's a total waste of resources for a free software app that can be adapted to receive an RTP stream.<br />
<br />
==Using OpenCV with FFMPEG==<br />
<br />
<br />
==Using OpenCV with GStreamer==<br />
<br />
<br />
=Portability=<br />
<br />
=Tutorials=<br />
==Tennis balls recognizing==<br />
[[OpenCV Tennis balls recognizing tutorial]]<br />
<br />
==Go game record (kifu) generator==<br />
[[Kifu:_Go_game_record_(kifu)_generator]]</div>Poltohttps://wiki.elphel.com/index.php?title=OpenCV_Tennis_balls_recognizing_tutorial&diff=8560OpenCV Tennis balls recognizing tutorial2010-07-27T09:56:03Z<p>Polto: OpenCV moved to OpenCV Tennis balls recognizing tutorial</p>
<hr />
<div>This tutorial demonstrates how to use an Elphel (or perhaps another) camera to perform some basic computer vision tasks, such as identifying objects. For this example, we will be recognizing tennis balls. A [[Image:Tennis.tar.gz|finished example]] is available for GNU/Linux systems.<br />
<br />
== Prerequisites ==<br />
;OpenCV<br />
:[http://opencv.willowgarage.com/wiki/ OpenCV] is a C library designed to help with computer vision programs. It provides quite a few useful functions that can save a lot of typing when performing operations on images.<br />
<br />
;V4L/AVLD<br />
:OpenCV can acquire images through the V4L API. For Elphel network cameras this is done using [[AVLD_-_Another_Video_Loopback_Device|AVLD]].<br />
<br />
<br />
== Image acquisition ==<br />
[[Image:Tennis-input.jpg|thumb|Example input image]]<br />
<br />
The first step is to get some images into OpenCV and display them. This assumes you have AVLD and V4L already set up. Below is a fairly minimal example that captures and displays images. It can be compiled with '''gcc -o main main.c $(pkg-config --libs --cflags opencv)'''.<br />
<br />
<pre><br />
#include <opencv/cv.h><br />
#include <opencv/highgui.h><br />
#include <X11/keysym.h><br />
<br />
int main(int argc, char **argv)<br />
{<br />
/* Start the CV system and get the first v4l camera */<br />
cvInitSystem(argc, argv);<br />
CvCapture *cam = cvCreateCameraCapture(0);<br />
<br />
/* Create a window to use for displaying the images */<br />
cvNamedWindow("img", 0);<br />
cvMoveWindow("img", 200, 200);<br />
<br />
/* Display images until the user presses q */<br />
while (1) {<br />
cvGrabFrame(cam);<br />
IplImage *img = cvRetrieveFrame(cam);<br />
cvShowImage("img", img);<br />
if (cvWaitKey(10) == XK_q)<br />
return 0;<br />
/* img points into the capture's internal buffer -- do not release it */<br />
}<br />
}<br />
</pre><br />
<br />
== Processing ==<br />
=== Color space ===<br />
[[Image:Tennis-hsv.jpg|thumb|Example HSV output]]<br />
<br />
A good first step in many CV algorithms is to convert the image to [http://en.wikipedia.org/wiki/HSL_and_HSV HSV] (or another similar [http://en.wikipedia.org/wiki/Color_space color space]). This makes picking out objects based on colors a bit simpler, as will be seen later. We'll convert into a separate image so that we can still display the original at the end. Note that OpenCV stores images in BGR format by default.<br />
<pre><br />
CvSize size = cvGetSize(img);<br />
IplImage *hsv = cvCreateImage(size, IPL_DEPTH_8U, 3);<br />
cvCvtColor(img, hsv, CV_BGR2HSV); <br />
</pre><br />
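The conversion can be sanity-checked against the hue band used in the next section with Python's standard colorsys module. This is an illustrative sketch: the RGB value below is just a plausible "optic yellow", not a measurement from a real ball, and note that OpenCV's 8-bit HSV stores hue as degrees/2 (0..179).<br />

```python
import colorsys

# A plausible "optic yellow" tennis-ball color as 0..1 RGB (illustrative values).
r, g, b = 0.9, 1.0, 0.3

h, s, v = colorsys.rgb_to_hsv(r, g, b)  # h, s, v are all in 0..1

# OpenCV's 8-bit hue channel is degrees/2 (0..179), so h*180 is comparable
# with the cvInRangeS band of roughly 0.11*256 .. 0.14*256 (about 28..36).
opencv_h = h * 180
print(round(opencv_h, 1), round(s, 2), round(v, 2))  # -> 34.3 0.7 1.0
```

The hue lands inside the band picked for the mask below, which is why a plain hue/saturation threshold works at all.<br />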
<br />
=== Masks ===<br />
[[Image:Tennis-mask.png|thumb|Example mask]]<br />
<br />
The next step is to select all pixels that we think might be part of a tennis ball. We'll do this based purely on their HSV values. OpenCV provides the cvInRangeS function, which can be used to pick out pixels based on their values. This generates a [http://en.wikipedia.org/wiki/Mask_(computing) mask]: a binary image whose foreground pixels (white) were within the specified range. We're done with the HSV image after this, so we can free its memory.<br />
<br />
Picking the ranges for creating masks is one of the more complicated parts of a CV algorithm. For now, manually tuning the ranges is easiest. You might also use one of the OpenCV machine learning algorithms to pick the ranges automatically.<br />
<br />
<pre><br />
CvMat *mask = cvCreateMat(size.height, size.width, CV_8UC1);<br />
cvInRangeS(hsv, cvScalar(0.11*256, 0.60*256, 0.20*256, 0),<br />
cvScalar(0.14*256, 1.00*256, 1.00*256, 0), mask);<br />
cvReleaseImage(&hsv);<br />
</pre><br />
<br />
=== Morphological operations ===<br />
[[Image:Tennis-morph.png|thumb|Mask after morphological operations]]<br />
<br />
No matter how good your ranges are when generating a mask, there will almost always be noise in the mask. In our example, the white lines on the tennis ball don't show up because they don't fit the hue range. Much of this noise can be eliminated by using a series of [http://en.wikipedia.org/wiki/Mathematical_morphology morphological operations]. Two commonly used operations are [http://en.wikipedia.org/wiki/Opening_(morphology) opening] and [http://en.wikipedia.org/wiki/Closing_(morphology) closing], which are in turn comprised of [http://en.wikipedia.org/wiki/Dilation_(morphology) dilate] and [http://en.wikipedia.org/wiki/Erosion_(morphology) erode] operations. The table below summarizes these operations.<br />
<br />
{| border=1<br />
! Operation !! Effect / Use<br />
|-<br />
| Dilate || Expand the foreground<br />
|-<br />
| Erode || Contract the foreground (~ expand background)<br />
|-<br />
| Close || Dilation followed by erosion, removes specks of background, fills in foreground areas.<br />
|-<br />
| Open || Erosion followed by dilation, removes specks of foreground, fills in background areas.<br />
|} <br />
<br />
Morphological operations are performed with a [http://en.wikipedia.org/wiki/Structuring_element Structuring Element]. In computer vision, this is typically an oval or a rectangle of some specific size. Note that using rectangles results in faster code but can also produce poorer results.<br />
<br />
Below, we use a large rectangular structuring element along with a close to remove the black lines that show up in the tennis balls. Afterwards, we perform an open with a smaller structuring element to eliminate some additional noise from the image.<br />
<br />
<pre><br />
IplConvKernel *se21 = cvCreateStructuringElementEx(21, 21, 10, 10, CV_SHAPE_RECT, NULL);<br />
IplConvKernel *se11 = cvCreateStructuringElementEx(11, 11, 5, 5, CV_SHAPE_RECT, NULL);<br />
cvClose(mask, mask, se21); // See completed example for cvClose definition<br />
cvOpen(mask, mask, se11); // See completed example for cvOpen definition<br />
cvReleaseStructuringElement(&se21);<br />
cvReleaseStructuringElement(&se11);<br />
</pre><br />
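To build intuition for the table above, the four operations can be sketched in pure Python on a toy binary mask with a 3x3 square structuring element. This is illustrative only; the actual code uses OpenCV's cvDilate/cvErode with the structuring elements created above.<br />

```python
def dilate(mask):
    """Foreground expands: a pixel is set if any 3x3 neighbour is set."""
    h, w = len(mask), len(mask[0])
    return [[int(any(mask[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

def erode(mask):
    """Foreground contracts: a pixel survives only if its 3x3 neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    return [[int(all(mask[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w))
             for x in range(w)] for y in range(h)]

def close_(mask):  # dilate then erode: fills small background holes
    return erode(dilate(mask))

def open_(mask):   # erode then dilate: removes small foreground specks
    return dilate(erode(mask))

# A blob with a one-pixel hole (like the lines on the ball) and a noise speck.
mask = [
    [1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 0, 0],
    [1, 1, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1],
]

closed = close_(mask)
opened = open_(mask)
print(closed[2][2], opened[6][6])  # -> 1 0 (hole filled, speck removed)
```
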
<br />
=== Hough transform ===<br />
The real work in finding tennis balls is done by a [http://en.wikipedia.org/wiki/Hough_transform Hough transform]. The specifics of this are beyond the scope of this tutorial. We'll just treat it as a black box function that finds circular objects in an input image.<br />
<br />
The OpenCV Hough function performs a [http://en.wikipedia.org/wiki/Canny_edge_detector Canny edge detection] on the input image before the actual Hough transform. Due to this and the way the Hough transform works, it is beneficial to do quite a bit of smoothing to get a nice gradient around the edge of the circles before passing the image to the Hough function. Many of the parameters to the Hough function can also be tuned to provide better results.<br />
<br />
<pre><br />
/* Copy mask into a grayscale image */<br />
IplImage *hough_in = cvCreateImage(size, 8, 1);<br />
cvCopy(mask, hough_in, NULL);<br />
cvSmooth(hough_in, hough_in, CV_GAUSSIAN, 15, 15, 0, 0);<br />
<br />
/* Run the Hough function */<br />
CvMemStorage *storage = cvCreateMemStorage(0);<br />
CvSeq *circles = cvHoughCircles(hough_in, storage,<br />
CV_HOUGH_GRADIENT, 4, size.height/10, 100, 40, 0, 0);<br />
cvReleaseMemStorage(&storage);<br />
</pre><br />
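The voting idea behind the transform can be demonstrated with a toy pure-Python sketch for a single, known radius: every edge point votes for all candidate centers at distance R from it, and the true center collects the most votes. cvHoughCircles additionally searches over radii and works on a real Canny edge map, so this is only a conceptual illustration.<br />

```python
import math
from collections import Counter

R = 5
true_center = (10, 10)

# Ideal edge points on a circle of radius R around true_center (36 angles).
offsets = [(round(R * math.cos(t * math.pi / 18)),
            round(R * math.sin(t * math.pi / 18))) for t in range(36)]
edge_points = [(true_center[0] + dx, true_center[1] + dy) for dx, dy in offsets]

# Each edge point votes for every candidate center at distance R from it.
votes = Counter()
for ex, ey in edge_points:
    for dx, dy in offsets:
        votes[(ex - dx, ey - dy)] += 1

best_center, _ = votes.most_common(1)[0]
print(best_center)  # -> (10, 10)
```
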
<br />
== Output ==<br />
[[Image:Tennis-output.jpg|thumb|Example output on a particularly nice image]]<br />
<br />
The output of the Hough function can then be used in a variety of ways. For now we'll just draw some circles and centers onto the original input image before displaying it.<br />
<br />
<pre><br />
int i;<br />
for (i = 0; i < circles->total; i++) {<br />
float *p = (float*)cvGetSeqElem(circles, i);<br />
CvPoint center = cvPoint(cvRound(p[0]),cvRound(p[1]));<br />
CvScalar val = cvGet2D(mask, center.y, center.x);<br />
if (val.val[0] < 1) continue;<br />
cvCircle(img, center, 3, CV_RGB(0,255,0), -1, CV_AA, 0);<br />
cvCircle(img, center, cvRound(p[2]), CV_RGB(255,0,0), 3, CV_AA, 0);<br />
cvCircle(mask, center, 3, CV_RGB(0,255,0), -1, CV_AA, 0);<br />
cvCircle(mask, center, cvRound(p[2]), CV_RGB(255,0,0), 3, CV_AA, 0);<br />
}<br />
</pre><br />
<br />
== Acknowledgments ==<br />
Some of the code provided as part of this example was developed by Jon Nibert and Andy Spencer as part of the Image Recognition course taught at Rose-Hulman Institute of Technology. All examples are provided under the GNU GPLv3.</div>Poltohttps://wiki.elphel.com/index.php?title=Talk:OpenCV_Tennis_balls_recognizing_tutorial&diff=8562Talk:OpenCV Tennis balls recognizing tutorial2010-07-27T09:56:03Z<p>Polto: Talk:OpenCV moved to Talk:OpenCV Tennis balls recognizing tutorial</p>
<hr />
<div>I don't have an Elphel camera at my disposal anymore, so I was unable to test to make sure the image acquisition code in the tarball actually works. --[[User:Andy753421|Andy753421]]</div>Polto
<hr />
<div>#redirect [[OpenCV Tennis balls recognizing tutorial]]</div>Poltohttps://wiki.elphel.com/index.php?title=Talk:OpenCV&diff=8563Talk:OpenCV2010-07-27T09:56:03Z<p>Polto: Talk:OpenCV moved to Talk:OpenCV Tennis balls recognizing tutorial: We will have a more general OpenCV page.</p>
<hr />
<div>#redirect [[Talk:OpenCV Tennis balls recognizing tutorial]]</div>Poltohttps://wiki.elphel.com/index.php?title=Known_bugs&diff=8524Known bugs2010-07-19T13:44:40Z<p>Polto: </p>
<hr />
<div>== elphel353-8.0.8.8.30 and earlier ==<br />
To set the camera to triggered mode you need to set the "Program ahead" parameter to 5 instead of its default value of 3. Otherwise the camera freezes after the first frame.<br />
<br />
<br />
== elphel353.7.1.4.5 and earlier ==<br />
A cache coherency problem revealed itself in imgsrv by (sometimes) returning empty images (actually 1x1 GIF ones) after waiting for a new image to be acquired. For the error to happen, mmap() (used while processing the "img" or "meta" command) had to be called after the kernel wrote the metadata to the circular buffer (which happens during the interrupt generated when the image is completely transferred to system memory), and the mmap-ed metadata had to be used before the cached data (for this memory segment the kernel writes to the cache first) was flushed out by other, unrelated cache memory usage.<br />
<br />
In rev. 7.1.4.6 the bug was fixed by explicitly flushing the used cache line.<br />
<br />
== ==</div>Poltohttps://wiki.elphel.com/index.php?title=2010.RMLL.info_recording&diff=85202010.RMLL.info recording2010-07-15T09:40:15Z<p>Polto: </p>
<hr />
<div>=Prefix=<br />
Elphel participated in http://2010.rmll.info/ with the support of our Swiss partner, Alsenet SA. We organized an Elphel [[Elphel_workshop_in_Bordeaux_during_RMLL_2010|workshop]] and took part in recording the conferences.<br />
<br />
=Goals=<br />
The goals are:<br />
<br />
* keep high-quality 1920x1080p MJPEG sources for all conferences, which will be post-processed.<br />
* encode into free and open data formats like WebM/VP8/Vorbis and Ogg/Theora/Vorbis, as well as proprietary codecs such as H.264.<br />
* do all of that exclusively with free software<br />
<br />
=Hardware=<br />
==Cameras==<br />
[[Image:Elphel NC353L-10369-HDD-microphone.jpeg|thumb|conference recording kit]]<br />
<br />
Two types of camera kits were used.<br />
<br />
* one is based on the NC353L-369 camera and was used with an external SATA HDD (with external power) and a [http://www.thomann.de/gb/the_tbone_micplug_usb.htm T.BONE MICPLUG USB]<br />
* the other is based on the NC353L camera; audio and video are not synced and are recorded not on the camera but on a networked PC.<br />
===camera software===<br />
The NC353L kit was running a standard Elphel firmware release.<br />
<br />
The NC353L-369 camera, combined with the USB-Audio converter and a SATA hard drive, was running an experimental firmware build by Andrey Latin with MKV (Matroska) support. [[Camogm]] was replaced with cammkv and camogmgui was adapted to work with cammkv. The cammkv software allowed recording synced PCM/MJPEG into an MKV container.<br />
<br />
==Lenses==<br />
We mainly used Computar's H3Z4518CS-MPIR and Fujinon's HF16SA-1.<br />
<br />
==Hard drive==<br />
We used a Plextor 2.5" external eSATA/USB hard drive. When used with the camera it is connected with a SATA-eSATA cable and powered by an external power supply. You can also use the disk over USB only, connecting it to your PC for video post-processing.<br />
<br />
==USB Audio==<br />
[http://www.thomann.de/gb/the_tbone_micplug_usb.htm T.BONE MICPLUG USB] is compatible with ALSA running on the camera and provides phantom power to the microphone from USB. It also has a mini-jack output and gain control.<br />
<br />
=PC=<br />
==hardware==<br />
==software==</div>Poltohttps://wiki.elphel.com/index.php?title=2010.RMLL.info_recording&diff=85192010.RMLL.info recording2010-07-15T09:32:05Z<p>Polto: /* Cameras */</p>
<hr />
<div>=Prefix=<br />
Elphel participated in http://2010.rmll.info/ with the support of our Swiss partner, Alsenet SA. We organized an Elphel [[Elphel_workshop_in_Bordeaux_during_RMLL_2010|workshop]] and took part in recording the conferences.<br />
<br />
=Goals=<br />
The goals were:<br />
<br />
* keep high-quality 1920x1080p MJPEG sources for all conferences, to be post-processed later;<br />
* encode in free software and open data formats like WebM/VP8/Vorbis and Ogg/Theora/Vorbis, as well as proprietary codecs such as H.264;<br />
* do all of that exclusively with Free Software.<br />
<br />
=Hardware=<br />
==Cameras==<br />
[[Image:Elphel NC353L-10369-HDD-microphone.jpeg|thumb|conference recording kit]]<br />
<br />
Two types of camera kits were used. <br />
<br />
* one is based on the NC353L-369 camera and was used with an external SATA HDD (with external power) and a [http://www.thomann.de/gb/the_tbone_micplug_usb.htm T.BONE MICPLUG USB]<br />
* the other is based on the NC353L camera; audio/video is not synced and is recorded not on the camera but on a networked PC.<br />
<br />
===Lenses===<br />
We mainly used Computar's H3Z4518CS-MPIR and Fujinon's HF16SA-1.<br />
<br />
===Hard drive===<br />
We used a Plextor 2.5" external eSATA/USB hard drive. When used with the camera it is connected with a SATA-eSATA cable and powered by an external power supply. You can also use the disk over USB only, connecting it to your PC for video post-processing.<br />
<br />
===USB Audio===<br />
[http://www.thomann.de/gb/the_tbone_micplug_usb.htm T.BONE MICPLUG USB] is compatible with ALSA running on the camera and provides phantom power to the microphone from USB. It also has a mini-jack output and gain control.<br />
<br />
==PC==<br />
<br />
===Requirement===<br />
<br />
=Microphones=<br />
<br />
=Tripods=<br />
<br />
=Network=<br />
<br />
=Software=<br />
<br />
==FreeSoftware==<br />
<br />
List of free software needed to record video/audio, mux and stream:<br />
<br />
<br />
* a standard GNU/Linux distribution<br />
* mencoder to record video<br />
* gstreamer-0.10 to record audio<br />
* ffmpeg2theora to encode to Theora<br />
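For the transcoding step, ffmpeg2theora can be run directly on a recorded file. A minimal sketch; the file names below are illustrative, not from an actual recording:<br />
<br />
```shell
#!/bin/sh
# Sketch: transcoding a recorded MJPEG file to Theora/Vorbis.
# The file names are illustrative (assumptions, not actual recordings).
INPUT=talk_01.avi
OUTPUT=talk_01.ogv
# Uncomment with a real recording:
# ffmpeg2theora "$INPUT" -o "$OUTPUT"
echo "transcode $INPUT to $OUTPUT"
```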
==Ubuntu Desktop==<br />
<br />
==Special scripts for recording==<br />
<br />
===Transcoding to Theora/Vorbis===</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8514Elphel workshop in Bordeaux during RMLL 20102010-07-14T22:00:02Z<p>Polto: </p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech) and non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France from 6th to 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop was held during RMLL 2010 and was organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware, software and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economical models of Free Software.<br />
<br />
Any question should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, a map, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under the GNU FDL v1.3 license, and all the source code, including FPGA Verilog code, Linux drivers and applications, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum requirements to assemble a camera:<br />
* [[10353]] - processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end to [[353|Elphel 353 series cameras]].<br />
<br />
Those two are optional and very flexible extension boards:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 processor board]] and a sensor board (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Some datasheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list - on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on the Eyesis high-resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
Soon we will return to our next-generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
<br />
Elphel provides a Free Software SDK for everything but the synthesis/place&route tools for the FPGA; there you will have to deal with the proprietary but free-of-charge Xilinx WebPack ISE. Simulation is still possible with free software (this is what we use ourselves at Elphel) - Icarus Verilog and GTKWave. Unfortunately there are a few Xilinx primitives used in the design (from the Xilinx unisims library) that are needed for the simulation. We hope that Xilinx will eventually release this code under a free license, or somebody will help us re-implement these simulation Verilog models.<br />
<br />
To install the SDK for Elphel cameras you need a fresh installation of the latest supported (10.04 at the moment) (K)Ubuntu GNU/Linux distribution; then follow the instructions available on this page: [[Elphel Software Kit for Ubuntu]]. The FPGA part is documented separately: [[FPGA Development in Elphel cameras]].<br />
<br />
Those manuals are written so that you should only have to copy & paste commands in a terminal in order to install the software needed to start developing on Elphel cameras.<br />
<br />
Elphel uses the [[KDevelop]] 3.5 IDE - there is a script that creates a KDevelop project from the Elphel source tree so you can easily navigate the files (for KDevelop 3.5.x only; 4.x is not supported yet). But of course you are free to use vi or emacs...<br />
<br />
If you are not able to install the SDK using those instructions, please report it on our [http://www3.elphel.com/list mailing-list] or here on the wiki in the discussion page.<br />
<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
== FPGA bitstream ==<br />
<br />
The FPGA is reprogrammed on every boot. The bitstream files are placed in the /etc folder.<br />
<br />
* /etc/x347.bit [[10347]] code<br />
* /etc/x353.bit Main board FPGA [[10353]]<br />
* /etc/x359.bit [[10359]] multiplexer board<br />
<br />
The bitstream is sent to /dev/fpgaconfjtag (or another device) from the /etc/init.d/fpga init script. Some configuration is done before and after that. So the easiest way is to upload your bitstream by FTP or SCP, run sync and reboot the camera.<br />
<br />
Please read [[FPGA Development in Elphel cameras]] for more info.<br />
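The upload-and-reboot procedure above can be sketched as a short shell script. This is a hypothetical sketch: the camera address 192.168.0.9 and an SCP-capable root login are assumptions, so adjust them to your setup.<br />
<br />
```shell
#!/bin/sh
# Hypothetical sketch of replacing the main-board FPGA bitstream.
# Assumptions: the camera answers at 192.168.0.9 and accepts SCP as root.
CAM=192.168.0.9
BITSTREAM=x353.bit          # main-board FPGA code (see the list above)
TARGET=/etc/$BITSTREAM      # bitstreams live in /etc and are loaded on boot

# Uncomment on a real setup:
# scp "$BITSTREAM" root@"$CAM":"$TARGET"   # upload the new bitstream
# ssh root@"$CAM" 'sync; reboot'           # flush, then reboot so the
#                                          # init script reloads the FPGA
echo "upload $BITSTREAM to $CAM:$TARGET, sync, then reboot"
```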
<br />
== Reflashing a working and accessible camera ==<br />
[[Image:Firmware update System Preferences.jpg|thumb|reflashing using web interface and NFS server]]<br />
If the camera runs a working version of the Elphel firmware and is accessible on the local network you should be able to flash it the easy way. <br />
<br />
* "System Preferences" on the camera index page has a firmware reflashing interface. You still have to [[Elphel_Software_Kit_for_Ubuntu#Configure_your_NFS_server|configure an NFS server ]]<br />
* [[Reflash.php]] is a PHP script on the camera used by the "System Preferences" web interface. You can also call it from your favorite browser, or with wget to automate the reflashing process.<br />
<br />
== Reflashing inaccessible camera ==<br />
<br />
Sometimes you may flash the camera with a non working experimental firmware, or simply corrupt your network configuration files.<br />
<br />
In this case you can quickly check if [[Network_configuration#ipsetd|IPsetd]] helps, or try to connect via minicom on the debugging RS232 port available on the [[10369]] IO board to repair your network config.<br />
<br />
If nothing works, you still have a chance to see your camera working again. The Axis CPU has an embedded network boot loader, so you can reflash the camera even if you have broken all the software.<br />
<br />
* The easy way to do that low level reflashing is to use our live DVD, the procedure is detailed on [[353 firmware upgrade procedure]]<br />
* [[Prod353]] was designed not only to reflash cameras but also to test the camera hardware<br />
* And finally http://elphel.cvs.sourceforge.net/viewvc/elphel/elphel353-8.0/README.flash?view=markup&pathrev=MAIN explains how to do the procedure manually. You will need the tools installed by [[Elphel Software Kit for Ubuntu]].<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping with mmap), reaching 9-10 MB/s - virtually the full bandwidth of the network. This server does not provide any control over the sensor or FPGA compressor operation; its only purpose is to serve data already acquired into the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired into the system buffer, but it is used when individual images are needed rather than a continuous video stream.<br />
<br />
The [[imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatting the output as XML files. <br />
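As a quick sketch, a frame and its metadata can be fetched from imgsrv with plain HTTP requests. The "img" and "meta" command names come from the imgsrv description above; the camera address 192.168.0.9 is an assumption.<br />
<br />
```shell
#!/bin/sh
# Sketch of grabbing a frame and its metadata from imgsrv (port 8081).
# Assumption: the camera is reachable at 192.168.0.9 - adjust to your network.
CAM=192.168.0.9
IMG_URL="http://$CAM:8081/img"    # current JPEG with Exif data attached
META_URL="http://$CAM:8081/meta"  # frame metadata formatted as XML

# Uncomment on a real setup:
# wget -q "$IMG_URL"  -O frame.jpg
# wget -q "$META_URL" -O frame.xml
echo "fetch $IMG_URL and $META_URL"
```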
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full-resolution snapshot once per minute. Then you can use [http://www.mplayerhq.hu/ mencoder] to assemble videos:<br />
<br />
On the PC edit your cron:<br />
<br />
crontab -e<br />
<br />
you should add a line like<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
 10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s` ;mkdir ~/timelapse<br />
<br />
This example gets one full-resolution image per minute from the camera from 6:00 to 21:59, and at 23:10 compresses the acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low latency displaying by avoiding caching and is therefore perfectly suited to display live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post processing JP46 quicktime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and currently (May 2010) cannot do the full conversion from *.mov to DNG, but uses ffmpeg to extract a JP46 sequence from the mov.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above two applications and does a batch conversion of all QuickTime movs found in a particular folder to DNG sequences.<br />
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, motion detector or any other external trigger to the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution. The PIR is bound to [[camogm]] to record on the internal CF card on motion detection, and the button stores a full-resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve precise locked FPS, to synchronize multiple cameras or to trigger them from an external device.<br />
<br />
=== About triggered mode ===<br />
Triggered mode is documented [[Trigger|here]]. During your experimentations please do not forget that our CMOS sensor is ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have hardware synchronization capability allowing 1μs jitter between images, as well as external triggering.<br />
<br />
The NC353L camera has FPGA code supporting camera synchronization, but an additional board is needed. The first board commercialized by Elphel that allows the use of hardware synchronization is the [[10369]] IO extension board. The board has internal synchronization connectors for cameras mounted in the same case and wired internally, as well as an external opto-isolated modular RJ-14 connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger, and one camera (or any other device) serving as master. To trigger image acquisition on all cameras, the "master" device needs to send a 3-5 V pulse on the synchronization cable. <br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/Os and a non-isolated high-current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs, so each sensor can be triggered with a specified delay from the trigger common to multiple cameras. There is also circuitry to drive the sensor trigger input.<br />
<br />
The same FPGA module can be used in a single camera configuration to provide precise control over the frame rate. The period of the free running sensor is defined as a product of the number of lines by the number of pixels in a line (including invisible margins) by a pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In a free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
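For a single camera, the triggered mode can be enabled over HTTP. A minimal sketch, assuming parameters can be set by name=value through parsedit.php (the TRIG and TRIG_PERIOD names are the ones used elsewhere on this page; the camera address and the period value are assumptions):<br />
<br />
```shell
#!/bin/sh
# Sketch: switching a camera to triggered mode through parsedit.php.
# Assumptions: the camera answers at 192.168.0.9 and parsedit.php accepts
# name=value pairs; the TRIG_PERIOD value here is only illustrative.
CAM=192.168.0.9
TRIG=4                  # 0x4 selects the triggered mode
TRIG_PERIOD=96000000    # internal trigger generator period (illustrative)
URL="http://$CAM/parsedit.php?TRIG=$TRIG&TRIG_PERIOD=$TRIG_PERIOD"
# Uncomment on a real setup:
# wget -q -O - "$URL" > /dev/null
echo "$URL"
```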
<br />
=== Playing with a LED (or a flash lamp) ===<br />
<br />
Here are some examples with a camera in triggered mode. The camera is filming at full resolution at 1 FPS.<br />
<br />
[[Image:Trigger blink.jpg|thumb|A LED is triggered by the camera and pointed directly at the sensor]]<br />
<br />
{|<br />
|[[Image:trig_1.jpeg|thumb|The LED is triggered just after the sensor is fully erased and the exposure is > 1/15s]]<br />
|[[Image:trig_2.jpeg|thumb|Here the end of the image was erased after the LED was triggered.]]<br />
|[[Image:trig_3.jpeg|thumb|Here the LED was triggered before the image was totally erased, and the exposure was not long enough, so the top of the image had already been read out at the time of the LED flash.]]<br />
|}<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor board conceived to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelery by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or of a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to exposure control on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP daemon to communicate with likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge,<br />
and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native CPU,<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
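The steps above can be collected into one shell function. This is a sketch under the assumption that it is run from a directory containing both the elphel353/ source tree and the php_likoboard/ folder:<br />
<br />
```shell
#!/bin/sh
# Sketch of the php_likoboard porting steps as a single function.
# Assumption: run from a directory holding elphel353/ and php_likoboard/.
port_php_likoboard() (
    set -e
    cp -r php_likoboard elphel353/apps/php/ext/php_likoboard
    cd elphel353
    . ./init_env                    # set up the cross-compilation environment
    cd apps/php/ext/php_likoboard
    ../../elphize                   # prepare the PHP build files
    make
    cp .libs/php_likoboard.so \
       /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/
)
# On a real setup: port_php_likoboard
```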
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application printing debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I restarted using libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (somewhere around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* The following [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
* There are many other parameters, selection of the sequence and switching between single/multi mode is included in camvc (controls are shown when the 10359 board is detected in the system)<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
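The table above maps directly to a parameter write. A hypothetical sketch, assuming parsedit.php accepts name=value pairs (the camera address matches the example link earlier in this section):<br />
<br />
```shell
#!/bin/sh
# Sketch: selecting the direct channel J3 by writing MULTI_SEQUENCE.
# Assumption: parsedit.php accepts name=value pairs over HTTP.
CAM=192.168.0.9
URL="http://$CAM/parsedit.php?MULTI_SEQUENCE=0x2"   # 0x2 = J3 (see table)
# Uncomment on a real setup:
# wget -q -O - "$URL" > /dev/null
echo "$URL"
```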
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
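The three steps above can be sketched as HTTP requests. Setting parameters by name=value through parsedit.php is an assumption here; the camera address matches the example link earlier in this section.<br />
<br />
```shell
#!/bin/sh
# Sketch: enabling the combined frame mode through parsedit.php.
# Assumption: parsedit.php accepts name=value pairs; set the other
# image parameters first (step 1).
CAM=192.168.0.9
STEP2="http://$CAM/parsedit.php?TRIG=0x4"         # step 2: triggered mode
STEP3="http://$CAM/parsedit.php?MULTI_MODE=0x1"   # step 3: combined frames
# Uncomment on a real setup:
# wget -q -O - "$STEP2" > /dev/null
# wget -q -O - "$STEP3" > /dev/null
echo "$STEP2"
echo "$STEP3"
```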
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8513Elphel workshop in Bordeaux during RMLL 20102010-07-14T21:54:06Z<p>Polto: Reverted edit of Polto, changed back to last version by Andrey.filippov</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech) and non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France from 6th to 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware, software and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economical models of Free Software.<br />
<br />
Some preparations on your end (e.g. pre-installing required packages) would allow us to focus on the main topics and cover any remaining questions. Any question prior or during the workshop should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participant’s list], and please add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, a map, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under the GNU FDL v1.3 license, and all the source code, including FPGA Verilog code, Linux drivers and applications, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum requirements to assemble a camera:<br />
* [[10353]] - processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end to [[353|Elphel 353 series cameras]].<br />
<br />
Those two are optional and very flexible extension boards:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 processor board]] and a sensor board (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Some datasheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list - on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on the Eyesis high-resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
Soon we will return to our next-generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
<br />
Elphel provides a Free Software SDK for everything but the synthesis/place&route tools for the FPGA; there you will have to deal with the proprietary but free-of-charge Xilinx WebPack ISE. Simulation is still possible with free software (this is what we use ourselves at Elphel) - Icarus Verilog and GTKWave. Unfortunately there are a few Xilinx primitives used in the design (from the Xilinx unisims library) that are needed for the simulation. We hope that Xilinx will eventually release this code under a free license, or somebody will help us re-implement these simulation Verilog models.<br />
<br />
To install the SDK for Elphel cameras you need a fresh installation of the latest supported (10.04 at the moment) (K)Ubuntu GNU/Linux distribution; then follow the instructions available on this page: [[Elphel Software Kit for Ubuntu]]. The FPGA part is documented separately: [[FPGA Development in Elphel cameras]].<br />
<br />
Those manuals are written so that you should only have to copy & paste commands in a terminal in order to install the software needed to start developing on Elphel cameras.<br />
<br />
Elphel uses the [[KDevelop]] 3.5 IDE - there is a script that creates a KDevelop project from the Elphel source tree so you can easily navigate the files (for KDevelop 3.5.x only; 4.x is not supported yet). But of course you are free to use vi or emacs...<br />
<br />
If you are not able to install the SDK using those instructions, please report it on our [http://www3.elphel.com/list mailing-list] or here on the wiki in the discussion page.<br />
<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping with mmap), reaching 9-10 MB/s - virtually the full bandwidth of the network. This server does not provide any control over the sensor or FPGA compressor operation; its only purpose is to serve data already acquired into the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired into the system buffer, but it is used when individual images are needed rather than a continuous video stream.<br />
<br />
The [[imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatting the output as XML files. <br />
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full resolution snapshot once per minute. Then we can use [http://www.mplayerhq.hu/ mencoder] to assemble videos:<br />
<br />
On the PC edit your cron:<br />
<br />
crontab -e<br />
<br />
you should add a line like<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s`; mkdir ~/timelapse<br />
<br />
This example grabs one full resolution image per minute from the camera from 6:00 to 21:59, and at 23:10 compresses the acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low latency displaying by avoiding caching and is therefore perfectly suited to display live video streams from Elphel cameras.<br />
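A minimal live-view invocation might look like this; the RTSP address and port are assumptions, so adjust them to your streamer configuration:<br />

```shell
# Display the camera's live RTSP stream with minimal latency (-nocache).
# rtsp://192.168.0.9:554 is an assumed address for the camera's streamer.
CAM=192.168.0.9
STREAM="rtsp://${CAM}:554"
# Run only if mplayer is installed; cap the attempt at 5 seconds.
command -v mplayer >/dev/null && timeout 5 mplayer -nocache "$STREAM" || true
```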
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post-processing JP46 QuickTime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and cannot currently (May 2010) perform the full conversion from *.mov to DNG; it uses ffmpeg to extract a JP46 sequence from the MOV file.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above 2 applications and batch-converts all QuickTime MOVs found in a particular folder to DNG sequences.<br />
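The batch workflow can be sketched as a small shell loop. The folder layout and the JP4toDNGconverter invocation below are assumptions for illustration - check each tool's own documentation for the real flags:<br />

```shell
# MOV -> JP46 frames -> DNG, for every QuickTime movie in a folder.
SRC="$HOME/jp46_movies"        # assumed input folder of *.mov files
DST="$HOME/dng_sequences"      # assumed output folder
mkdir -p "$DST"
for mov in "$SRC"/*.mov; do
  [ -e "$mov" ] || continue    # skip cleanly when no movies are found
  base=$(basename "$mov" .mov)
  # Step 1: extract the JP46 (JPEG) frame sequence with ffmpeg.
  command -v ffmpeg >/dev/null && ffmpeg -i "$mov" "$DST/${base}_%05d.jpg" || true
  # Step 2: convert the extracted frames to DNG (hypothetical CLI name).
  command -v JP4toDNGconverter >/dev/null && JP4toDNGconverter "$DST/${base}_"*.jpg || true
done
```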
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger to the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution. The PIR is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve precise locked FPS, to synchronize multiple cameras or to trigger them from an external device.<br />
<br />
=== About triggered mode ===<br />
Triggered mode is documented [[Trigger|here]]. During your experimentations please do not forget that our CMOS sensor is ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have a hardware synchronization capability allowing 1μs jitter between images, as well as external triggering.<br />
<br />
The NC353L camera has FPGA code allowing camera synchronization, but an additional board is needed. The first board commercialized by Elphel that allows hardware synchronization is the 10369 IO extension board. The board has internal synchronization connectors for cameras mounted in the same camera case and wired internally, and an external opto-isolated modular RJ-14 connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger, and one camera (or any other device) serving as master. To trigger image acquisition on all cameras, the "master" device needs to send a 3-5V pulse on the synchronization cable. <br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/Os and a non-isolated high-current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs, so each sensor can be triggered with a specified delay from the trigger common to multiple cameras. There is also circuitry to drive the sensor trigger input.<br />
<br />
The same FPGA module can be used in a single camera configuration to provide precise control over the frame rate. The period of the free running sensor is defined as a product of the number of lines by the number of pixels in a line (including invisible margins) by a pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In a free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
<br />
=== Playing with a LED (or a flash lamp) ===<br />
<br />
Here are some examples with a camera in triggered mode. The camera is filming at full resolution at 1 FPS.<br />
<br />
[[Image:Trigger blink.jpg|thumb|A LED is triggered by the camera and pointed directly to the sensor]]<br />
<br />
{|<br />
|[[Image:trig_1.jpeg|thumb|The LED is triggered just after the sensor is fully erased and the exposure is > 1/15s]]<br />
|[[Image:trig_2.jpeg|thumb|Here the end of the image was erased after the LED was triggered.]]<br />
|[[Image:trig_3.jpeg|thumb|Here the LED was triggered before the image was totally erased, and the exposure was not long enough, so the top of the image had already been read out at the time of the LED flash.]]<br />
|}<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor board conceived to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or of a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to control exposure on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP daemon that communicates with likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest libusb legacy (0.1.12) from sourceforge,<br />
and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native cpu,<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that prints debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I switched to libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (somewhere around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* Following this [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
* There are many other parameters; sequence selection and switching between single/multi mode are included in camvc (controls are shown when the 10359 board is detected in the system)<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
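Assuming parsedit.php accepts name=value pairs in its query string (as the parsedit.php link above suggests), these steps can be scripted from a PC, e.g.:<br />

```shell
# Set TRIG to 0x4 and MULTI_MODE to 0x1 via parsedit.php.
# The query syntax is an assumption - verify it against your firmware.
CAM=192.168.0.9
URL="http://${CAM}/parsedit.php?TRIG=4&MULTI_MODE=1"
# Fail fast (2 s timeout, one try) if the camera is not reachable.
wget -q -T 2 -t 1 -O /dev/null "$URL" || true
```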
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8512Elphel workshop in Bordeaux during RMLL 20102010-07-14T21:52:10Z<p>Polto: /* Publications */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech) and non-commercial series of conferences, workshops and round tables about Free Software and its applications. The LSM goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France from 6th to 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop was held during RMLL 2010 and was organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware and software, and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economic models of Free Software.<br />
<br />
Any question should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, maps, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent, network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under the GNU FDL v1.3 license, and all the source code, including the FPGA Verilog code, Linux drivers and software, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum requirements to assemble a camera:<br />
* [[10353]] - processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end to [[353|Elphel 353 series cameras]].<br />
<br />
Those two are optional and very flexible extension boards:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 Processor board]] and a sensor board (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each board page you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Data-sheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list , on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on Eyesis high resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we return to our next generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
<br />
Elphel provides a Free Software SDK for everything but the synthesis/place&route tools for the FPGA; there you will have to deal with the proprietary but free-of-charge Xilinx WebPack ISE. Simulation is still possible with free software (this is what we use ourselves at Elphel) - Icarus Verilog and GTKWave. Unfortunately there are a few Xilinx primitives used in the design (from the Xilinx unisims library) that are needed for simulation. We hope that Xilinx will eventually release this code under a free license, or that somebody will help us re-implement these simulation Verilog models.<br />
<br />
To install the SDK for Elphel cameras you need a fresh installation of the latest supported (10.04 at the moment) (K)Ubuntu GNU/Linux distribution; then follow the instructions available on this page: [[Elphel Software Kit for Ubuntu]]. The FPGA part is documented separately: [[FPGA Development in Elphel cameras]].<br />
<br />
Those manuals are written so that you only need to copy & paste commands into a terminal to install the software needed to start developing on Elphel cameras.<br />
<br />
Elphel uses the [[KDevelop]] 3.5 IDE - there is a script that creates a KDevelop project from the Elphel source tree so you can easily navigate the files (KDevelop 3.5.x only, 4.x is not supported yet). But of course you are free to use vi or emacs...<br />
<br />
If you are not able to install the SDK using these instructions, please report it on our [http://www3.elphel.com/list mailing-list] or here on the wiki on the discussion page.<br />
<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
== FPGA bitstream ==<br />
<br />
The FPGA is reflashed on every boot. The bitstreams are placed in the /etc folder.<br />
<br />
* /etc/x347.bit [[10347]] code<br />
* /etc/x353.bit Main board FPGA [[10353]]<br />
* /etc/x359.bit [[10359]] multiplexer board<br />
<br />
The bitstream is sent to /dev/fpgaconfjtag (or another device) from the /etc/init.d/fpga init script. Some configuration is done before and after that. So the easiest way is to upload your bitstream by FTP or SCP, sync, and reboot the camera.<br />
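The upload-and-reboot sequence can be sketched like this; the address, user and target file are assumptions (here the main [[10353]] bitstream on a camera at 192.168.0.9):<br />

```shell
# Upload a new FPGA bitstream over SCP, then sync and reboot the camera.
CAM=192.168.0.9
BITSTREAM=x353.bit
TARGET="root@${CAM}:/etc/x353.bit"
# BatchMode/ConnectTimeout keep the commands from prompting or hanging.
scp -o BatchMode=yes -o ConnectTimeout=2 "$BITSTREAM" "$TARGET" || true
ssh -o BatchMode=yes -o ConnectTimeout=2 "root@${CAM}" 'sync; reboot' || true
```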
<br />
Please read [[FPGA Development in Elphel cameras]] for more info.<br />
<br />
== Reflashing a working and accessible camera ==<br />
[[Image:Firmware update System Preferences.jpg|thumb|reflashing using web interface and NFS server]]<br />
If the camera runs a working version of the Elphel firmware and is accessible on the local network, you should be able to flash it the easy way. <br />
<br />
* "System Preferences" on the camera index have Firmware reflashing interface, You still have to [[Elphel_Software_Kit_for_Ubuntu#Configure_your_NFS_server|configure an NFS server ]]<br />
* [[Reflash.php]] is a PHP script on the camera used by the "System Preferences" web interface. You can also call it from your favorite browser, or with wget to automate the reflashing process.<br />
<br />
== Reflashing inaccessible camera ==<br />
<br />
Sometimes you may flash the camera with a non-working experimental firmware, or simply corrupt your network configuration files.<br />
<br />
In this case you can quickly check whether [[Network_configuration#ipsetd|IPsetd]] helps, or try to connect via minicom on the debugging RS232 port available on the [[10369]] IO board to repair your network config.<br />
<br />
If nothing works, you still have a chance to see your camera working again. The Axis CPU has an embedded network boot loader, so you can reflash the camera even if you have broken all the software.<br />
<br />
* The easy way to do that low-level reflashing is to use our live DVD; the procedure is detailed on [[353 firmware upgrade procedure]]<br />
* [[Prod353]] was designed not only to reflash cameras but also to test the camera hardware<br />
* And finally http://elphel.cvs.sourceforge.net/viewvc/elphel/elphel353-8.0/README.flash?view=markup&pathrev=MAIN explains how to do the procedure manually. You will need the tools installed by [[Elphel Software Kit for Ubuntu]].<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] with zero-copy memory mapping, mmap), reaching 9-10MB/sec - virtually the full bandwidth of the network. This server does not provide any control over the sensor or FPGA compressor operation; its only purpose is to serve data already acquired to the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired to the system buffer, but it is intended for when individual images are needed rather than a continuous video stream.<br />
<br />
[[Imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatting the output as XML files. <br />
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full resolution snapshot once per minute. Then we can use [http://www.mplayerhq.hu/ mencoder] to assemble videos:<br />
<br />
On the PC edit your cron:<br />
<br />
crontab -e<br />
<br />
you should add a line like<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s`; mkdir ~/timelapse<br />
<br />
This example grabs one full resolution image per minute from the camera from 6:00 to 21:59, and at 23:10 compresses the acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low latency displaying by avoiding caching and is therefore perfectly suited to display live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
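One possible GStreamer 0.10 pipeline for viewing the camera's MJPEG RTSP stream is sketched below; the element names and the RTSP address are assumptions, so see [[Using gstreamer]] for tested pipelines:<br />

```shell
# Depayload and decode an RTP/JPEG stream, then display it.
CAM=192.168.0.9
PIPELINE="rtspsrc location=rtsp://${CAM}:554 ! rtpjpegdepay ! jpegdec ! autovideosink"
# Run only if gst-launch-0.10 is installed; cap the attempt at 5 seconds.
command -v gst-launch-0.10 >/dev/null && timeout 5 gst-launch-0.10 $PIPELINE || true
```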
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post-processing JP46 QuickTime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and cannot currently (May 2010) perform the full conversion from *.mov to DNG; it uses ffmpeg to extract a JP46 sequence from the MOV file.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above 2 applications and batch-converts all QuickTime MOVs found in a particular folder to DNG sequences.<br />
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger to the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution. The PIR is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve precise locked FPS, to synchronize multiple cameras or to trigger them from an external device.<br />
<br />
=== About triggered mode ===<br />
Triggered mode is documented [[Trigger|here]]. During your experimentations please do not forget that our CMOS sensor is ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have a hardware synchronization capability allowing 1μs jitter between images, as well as external triggering.<br />
<br />
The NC353L camera has FPGA code allowing camera synchronization, but an additional board is needed. The first board commercialized by Elphel that allows hardware synchronization is the 10369 IO extension board. The board has internal synchronization connectors for cameras mounted in the same camera case and wired internally, and an external opto-isolated modular RJ-14 connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger, and one camera (or any other device) serving as master. To trigger image acquisition on all cameras, the "master" device needs to send a 3-5V pulse on the synchronization cable. <br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/Os and a non-isolated high-current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs, so each sensor can be triggered with a specified delay from the trigger common to multiple cameras. There is also circuitry to drive the sensor trigger input.<br />
<br />
The same FPGA module can be used in a single camera configuration to provide precise control over the frame rate. The period of the free running sensor is defined as a product of the number of lines by the number of pixels in a line (including invisible margins) by a pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In a free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
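As a worked example of the period formula above (the raster size, including invisible margins, and the pixel clock below are illustrative assumptions, not the sensor's real figures):<br />

```shell
# Free-running frame period = lines x pixels-per-line x pixel period.
WIDTH=2608      # assumed pixels per line, including margins
HEIGHT=1964     # assumed number of lines, including margins
CLOCK=96000000  # assumed pixel clock in Hz
PIXELS=$((WIDTH * HEIGHT))
PERIOD_US=$((PIXELS / (CLOCK / 1000000)))   # period in microseconds
echo "frame period: ${PERIOD_US} us (about 18.7 FPS)"
```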
<br />
=== Playing with a LED (or a flash lamp) ===<br />
<br />
Here are some examples with a camera in triggered mode. The camera is filming at full resolution at 1 FPS.<br />
<br />
[[Image:Trigger blink.jpg|thumb|A LED is triggered by the camera and pointed directly to the sensor]]<br />
<br />
{|<br />
|[[Image:trig_1.jpeg|thumb|The LED is triggered just after the sensor is fully erased and the exposure is > 1/15s]]<br />
|[[Image:trig_2.jpeg|thumb|Here the end of the image was erased after the LED was triggered.]]<br />
|[[Image:trig_3.jpeg|thumb|Here the LED was triggered before the image was totally erased, and the exposure was not long enough, so the top of the image had already been read out at the time of the LED flash.]]<br />
|}<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor board conceived to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or of a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to control exposure on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP daemon that communicates with likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest libusb legacy (0.1.12) from sourceforge,<br />
and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native cpu,<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that prints debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I switched to libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (somewhere around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* Following this [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
* There are many other parameters; sequence selection and switching between single/multi mode are included in camvc (controls are shown when the 10359 board is detected in the system)<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
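<br />
For example, the two settings above can be applied over HTTP through parsedit.php. The camera IP and the exact URL syntax for assigning values are assumptions based on the parsedit.php link above - verify against your camera:<br />
<br />

```shell
#!/bin/sh
# Compose the parsedit.php request that switches the camera into
# combined frame mode (TRIG=0x4, MULTI_MODE=0x1).
CAM=${CAM:-192.168.0.9}    # camera IP address (assumed)
URL="http://$CAM/parsedit.php?TRIG=0x4&MULTI_MODE=0x1"
echo "$URL"
# To apply the settings from the command line:
# wget -q -O - "$URL"
```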
<br />
= Publications = <br />
http://www3.elphel.com/articles<br />
<br />
[http://docs.elphel.com/linuxdevices/AT8926870933.html How many bits are really needed in the image pixels?]</div>Poltohttps://wiki.elphel.com/index.php?title=2010.RMLL.info_recording&diff=85112010.RMLL.info recording2010-07-14T18:04:13Z<p>Polto: </p>
<hr />
<div>=Prefix=<br />
Elphel participated in http://2010.rmll.info/ with the support of our Swiss partner - Alsenet SA. We organized an Elphel [[Elphel_workshop_in_Bordeaux_during_RMLL_2010|workshop]] and participated in conference recording.<br />
<br />
=Goals=<br />
The goals are:<br />
<br />
* keep high-quality 1920x1080p MJPEG sources for all conferences, which will be post-processed.<br />
* encode to free and open formats such as WebM/VP8/Vorbis and Ogg/Theora/Vorbis, as well as proprietary codecs such as H.264.<br />
* do all of that exclusively with Free Software<br />
<br />
=Hardware=<br />
==Cameras==<br />
[[Image:Elphel NC353L-10369-HDD-microphone.jpeg|thumb|conference recording kit]]<br />
===POE injectors===<br />
<br />
===Lenses===<br />
<br />
==PC==<br />
<br />
===Requirement===<br />
<br />
=Microphones=<br />
<br />
=Tripods=<br />
<br />
=Network=<br />
<br />
=Software=<br />
<br />
==FreeSoftware==<br />
<br />
List of free software needed to record video/audio, mux and stream.<br />
<br />
<br />
* standard GNU/Linux, of course<br />
* mencoder to record video<br />
* gstreamer-0.10 to record audio<br />
* ffmpeg2theora to encode to Theora<br />
==Ubuntu Desktop==<br />
<br />
==Special scripts for recording==<br />
<br />
===Transcoding to Theora/Vorbis===</div>Poltohttps://wiki.elphel.com/index.php?title=File:Elphel_NC353L-10369-HDD-microphone.jpeg&diff=8510File:Elphel NC353L-10369-HDD-microphone.jpeg2010-07-14T17:44:14Z<p>Polto: </p>
<hr />
<div></div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8509Elphel workshop in Bordeaux during RMLL 20102010-07-14T13:53:08Z<p>Polto: /* About the workshop */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech) and non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France, from the 6th to the 11th of July, and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop was held during RMLL 2010 and was organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel were invited to participate in the workshop. It was a perfect occasion to meet other Elphel customers, users & developers, to test the hardware and software, and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economical models of Free Software.<br />
<br />
Any questions should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, a map, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent, network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under the GNU FDL v1.3 license, and all the source code, including the FPGA Verilog code, Linux drivers and applications, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum required to assemble a camera:<br />
* [[10353]] - the processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end for the [[353|Elphel 353 series cameras]].<br />
<br />
These two are optional and very flexible extension boards:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 processor board]] and a sensor board (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Some datasheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list - on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on Eyesis high resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we return to our next generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
<br />
Elphel provides a Free Software SDK for everything but the synthesis/place&route tools for the FPGA; there you will have to deal with the proprietary but free-of-charge Xilinx WebPack ISE. Simulation is still possible with free software (this is what we use ourselves at Elphel) - Icarus Verilog and GTKWave. Unfortunately there are a few Xilinx primitives used in the design (from the Xilinx unisims library) that are needed for the simulation. We hope that Xilinx will eventually release this code under a free license, or that somebody will help us re-implement these simulation Verilog models.<br />
<br />
To install the SDK for Elphel cameras you need a fresh installation of the latest supported (10.04 at the moment) (K)Ubuntu GNU/Linux distribution; then follow the instructions available on this page: [[Elphel Software Kit for Ubuntu]]. The FPGA part is documented separately: [[FPGA Development in Elphel cameras]].<br />
<br />
Those manuals are written so that you only have to copy & paste commands in a terminal in order to install the software needed to start developing on Elphel cameras.<br />
<br />
Elphel uses the [[KDevelop]] 3.5 IDE - there is a script that creates a KDevelop project from the Elphel source tree so you can easily navigate the files (for KDevelop 3.5x only, 4.x is not supported yet). But of course you are free to use vi or emacs...<br />
<br />
If you are not able to install the SDK using these instructions, please report it on our [http://www3.elphel.com/list mailing-list] or here on the wiki in the discussion page.<br />
<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
== FPGA bitstream ==<br />
<br />
The FPGA is reloaded on every boot. The bitstreams are located in the /etc folder.<br />
<br />
* /etc/x347.bit [[10347]] code<br />
* /etc/x353.bit Main board FPGA [[10353]]<br />
* /etc/x359.bit [[10359]] multiplexer board<br />
<br />
The bitstream is sent to /dev/fpgaconfjtag (or another device) from the /etc/init.d/fpga init script. Some configuration is done before and after that. So the easiest way is to upload your bitstream by FTP or SCP, sync, and reboot the camera.<br />
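<br />
A minimal sketch of that upload-and-reboot cycle, assuming SSH/SCP access to the camera as root (the IP address is the usual 192.168.0.9 example used on this page); it only prints the commands unless DO_IT=1 is set:<br />
<br />

```shell
#!/bin/sh
# Upload a rebuilt main-board bitstream and reboot the camera so that
# the /etc/init.d/fpga init script loads the new /etc/x353.bit on boot.
CAM=${CAM:-192.168.0.9}   # camera IP address (assumed)
BIT=${BIT:-x353.bit}      # main board bitstream

run() { echo "+ $*"; if [ -n "$DO_IT" ]; then "$@"; fi; }

run scp "$BIT" "root@$CAM:/etc/$BIT"
run ssh "root@$CAM" "sync; reboot"
```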
<br />
Please read [[FPGA Development in Elphel cameras]] for more info.<br />
<br />
== Reflashing a working and accessible camera ==<br />
[[Image:Firmware update System Preferences.jpg|thumb|reflashing using web interface and NFS server]]<br />
If the camera runs a working version of the Elphel firmware and is accessible on the local network, you should be able to reflash it the easy way. <br />
<br />
* "System Preferences" on the camera index page has a firmware reflashing interface; you still have to [[Elphel_Software_Kit_for_Ubuntu#Configure_your_NFS_server|configure an NFS server]]<br />
* [[Reflash.php]] is a PHP script on the camera used by the "System Preferences" web interface. You can also call it from your favorite browser, or with wget to automate the reflashing process.<br />
<br />
== Reflashing an inaccessible camera ==<br />
<br />
Sometimes you may flash the camera with a non-working experimental firmware, or simply corrupt your network configuration files.<br />
<br />
In this case you can quickly check whether [[Network_configuration#ipsetd|IPsetd]] helps, or try to connect via minicom on the debugging RS232 port available on the [[10369]] IO board to repair your network config.<br />
<br />
If nothing works, you still have a chance to recover the camera. The Axis CPU has an embedded network boot loader, so you can reflash the camera even if you have broken all the software.<br />
<br />
* The easy way to do that low level reflashing is to use our live DVD, the procedure is detailed on [[353 firmware upgrade procedure]]<br />
* [[Prod353]] was designed not only to reflash cameras but also to test the camera hardware<br />
* And finally http://elphel.cvs.sourceforge.net/viewvc/elphel/elphel353-8.0/README.flash?view=markup&pathrev=MAIN explains how to do the procedure manually. You will need the tools installed by [[Elphel Software Kit for Ubuntu]].<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping with mmap), reaching 9-10MB/sec - virtually the full bandwidth of the network. This server does not provide any control over the sensor or FPGA compressor operation; its only purpose is to serve data already acquired to the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired to the system buffer, but it is meant for the cases when individual images are needed rather than a continuous video stream.<br />
<br />
[[Imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatting the output as XML files. <br />
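<br />
As a quick illustration, fetching the latest image and its metadata is just an HTTP GET on port 8081. The /img and /meta paths below are assumptions for illustration - check your camera's imgsrv help page for the exact URL set:<br />
<br />

```shell
#!/bin/sh
# Build the imgsrv URLs for the latest JPEG and its metadata.
CAM=${CAM:-192.168.0.9}           # camera IP address (assumed)
IMG_URL="http://$CAM:8081/img"    # latest JPEG from the circular buffer
META_URL="http://$CAM:8081/meta"  # frame metadata as XML
echo "$IMG_URL"
echo "$META_URL"
# To actually fetch them:
# wget -O snapshot.jpg "$IMG_URL"
# wget -O meta.xml "$META_URL"
```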
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full-resolution snapshot once per minute. Then we can use [http://www.mplayerhq.hu/ mencoder] to assemble videos:<br />
<br />
On the PC edit your cron:<br />
<br />
crontab -e<br />
<br />
you should add a line like<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s` ; mkdir ~/timelapse<br />
<br />
This example gets one full-resolution image per minute from the camera from 6:00 to 21:59, and at 23:10 compresses the acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low-latency display by avoiding caching and is therefore well suited to displaying live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post processing JP46 quicktime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and cannot currently (May 2010) do the full conversion from *.mov to DNG, but it uses ffmpeg to extract a JP46 sequence from the mov.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above 2 applications and does a batch conversion of all QuickTime movs found in a particular folder to DNG sequences.<br />
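<br />
The whole chain can be sketched as a shell script; the ffmpeg extraction step follows the description above, while the JP4toDNGconverter invocation is a placeholder (check the tool's own usage for its real arguments):<br />
<br />

```shell
#!/bin/sh
# Sketch of the JP46 workflow: .mov -> JP46 JPEG sequence -> DNG.
# Prints the commands instead of running them; remove the echos to execute.
MOV=${1:-clip.mov}
OUTDIR=frames

echo "mkdir -p $OUTDIR"
# Each frame of the .mov is already a JPEG, so stream-copy the frames out:
echo "ffmpeg -i $MOV -vcodec copy $OUTDIR/%05d.jpg"
# Placeholder invocation - consult JP4toDNGconverter's documentation:
echo "JP4toDNGconverter $OUTDIR"
```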
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger with the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution: the PIR sensor is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full-resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve precise locked FPS, to synchronize multiple cameras or to trigger them from an external device.<br />
<br />
=== About triggered mode ===<br />
Triggered mode is documented [[Trigger|here]]. During your experimentations please do not forget that our CMOS sensor is ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have hardware synchronization capability allowing 1μs jitter between images as well as external trigger.<br />
<br />
The NC353L camera has FPGA code allowing camera synchronization, but an additional board is needed. The first board commercialized by Elphel that allows the use of hardware synchronization is the 10369 IO extension board. The board has internal synchronization connectors for cameras mounted in the same camera case and wired internally, and an external modular RJ-14 opto-isolated connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger, and one camera (or any other device) serving as master. To trigger image acquisition on all cameras, the "master" device needs to send a 3-5V pulse on the synchronization cable. <br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/Os and a non-isolated high-current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs, so each sensor can be triggered with a specified delay from the trigger common to multiple cameras. There is also circuitry to drive the sensor trigger input.<br />
<br />
The same FPGA module can be used in a single camera configuration to provide precise control over the frame rate. The period of the free running sensor is defined as a product of the number of lines by the number of pixels in a line (including invisible margins) by a pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In a free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
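<br />
To make the period restriction concrete, here is the arithmetic with illustrative numbers (roughly a 5MPix sensor with blanking, at a 96MHz pixel clock - these are example values, not exact register settings):<br />
<br />

```shell
#!/bin/sh
# Minimal free-running frame period = lines * pixels_per_line * pixel_period.
LINES=1958       # lines per frame, including vertical blanking (example)
PIXELS=2608      # pixels per line, including horizontal blanking (example)
PCLK_MHZ=96      # pixel clock in MHz (example)

PERIOD_US=$(( LINES * PIXELS / PCLK_MHZ ))   # microseconds
FPS=$(( 1000000 / PERIOD_US ))
echo "minimal frame period: ${PERIOD_US} us (about ${FPS} fps)"
```

Any triggered period must be at least this long; programming a longer period simply lowers the frame rate.<br />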
<br />
=== Playing with a LED (or a flash lamp) ===<br />
<br />
Here are some examples with a camera in triggered mode. The camera is filming at full resolution at 1 FPS.<br />
<br />
[[Image:Trigger blink.jpg|thumb|A LED is triggered by the camera and pointed directly to the sensor]]<br />
<br />
{|<br />
|[[Image:trig_1.jpeg|thumb|The LED is triggered just after the sensor is fully erased and the exposure is > 1/15s]]<br />
|[[Image:trig_2.jpeg|thumb|Here the end of the image was erased after the LED was triggered.]]<br />
|[[Image:trig_3.jpeg|thumb|Here the LED was triggered before the image was totally erased, and the exposure was not long enough, so the top of the image had already been read out at the time of the LED flash.]]<br />
|}<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor board designed to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or of a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to control the exposure on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP daemon that communicates with the likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge,<br />
and libhid from SVN (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native cpu,<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that printed debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I fell back to libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* The following [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
* There are many other parameters; selection of the sequence and switching between single/multi mode are included in camvc (the controls are shown when the 10359 board is detected in the system)<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8508Elphel workshop in Bordeaux during RMLL 20102010-07-14T13:50:22Z<p>Polto: /* reflashing camera firmware & FPGA bitstream */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech) and non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France, from the 6th to the 11th of July, and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware, software and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economical models of Free Software.<br />
<br />
Some preparation on your end (e.g. pre-installing the required packages) would allow us to focus on the main topics and cover any remaining questions. Any questions prior to or during the workshop should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participant’s list] and add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, a map, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent, network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under the GNU FDL v1.3 license, and all the source code, including the FPGA Verilog code, Linux drivers and applications, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum required to assemble a camera:<br />
* [[10353]] - the processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end for the [[353|Elphel 353 series cameras]].<br />
<br />
These two are optional and very flexible extension boards:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 processor board]] and a sensor board (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Some datasheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list - on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on Eyesis high resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we return to our next generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
<br />
Elphel provides a Free Software SDK for everything but the synthesis/place&route tools for the FPGA; there you will have to deal with the proprietary but free-of-charge Xilinx WebPack ISE. Simulation is still possible with free software (this is what we use ourselves at Elphel) - Icarus Verilog and GTKWave. Unfortunately there are a few Xilinx primitives used in the design (from the Xilinx unisims library) that are needed for the simulation. We hope that Xilinx will eventually release this code under a free license, or that somebody will help us re-implement these simulation Verilog models.<br />
<br />
To install the SDK for Elphel cameras you need a fresh installation of the latest supported (10.04 at the moment) (K)Ubuntu GNU/Linux distribution; then follow the instructions available on this page: [[Elphel Software Kit for Ubuntu]]. The FPGA part is documented separately: [[FPGA Development in Elphel cameras]].<br />
<br />
Those manuals are written so that you only have to copy & paste commands in a terminal in order to install the software needed to start developing on Elphel cameras.<br />
<br />
Elphel uses the [[KDevelop]] 3.5 IDE - there is a script that creates a KDevelop project from the Elphel source tree so you can easily navigate the files (for KDevelop 3.5x only, 4.x is not supported yet). But of course you are free to use vi or emacs...<br />
<br />
If you are not able to install the SDK using these instructions, please report it on our [http://www3.elphel.com/list mailing-list] or here on the wiki in the discussion page.<br />
<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
== FPGA bitstream ==<br />
<br />
The FPGA is reloaded on every boot. The bitstreams are located in the /etc folder.<br />
<br />
* /etc/x347.bit [[10347]] code<br />
* /etc/x353.bit Main board FPGA [[10353]]<br />
* /etc/x359.bit [[10359]] multiplexer board<br />
<br />
The bitstream is sent to /dev/fpgaconfjtag (or another device) from the /etc/init.d/fpga init script. Some configuration is done before and after that. So the easiest way is to upload your bitstream by FTP or SCP, sync, and reboot the camera.<br />
<br />
Please read [[FPGA Development in Elphel cameras]] for more info.<br />
<br />
== Reflashing a working and accessible camera ==<br />
[[Image:Firmware update System Preferences.jpg|thumb|reflashing using web interface and NFS server]]<br />
If the camera runs a working version of the Elphel firmware and is accessible on the local network, you should be able to reflash it the easy way. <br />
<br />
* "System Preferences" on the camera index page has a firmware reflashing interface; you still have to [[Elphel_Software_Kit_for_Ubuntu#Configure_your_NFS_server|configure an NFS server]]<br />
* [[Reflash.php]] is a PHP script on the camera used by the "System Preferences" web interface. You can also call it from your favorite browser, or with wget to automate the reflashing process.<br />
<br />
== Reflashing an inaccessible camera ==<br />
<br />
Sometimes you may flash the camera with a non-working experimental firmware, or simply corrupt your network configuration files.<br />
<br />
In this case you can quickly check whether [[Network_configuration#ipsetd|IPsetd]] helps, or try to connect via minicom on the debugging RS232 port available on the [[10369]] IO board to repair your network config.<br />
<br />
If nothing works, you still have a chance to recover the camera. The Axis CPU has an embedded network boot loader, so you can reflash the camera even if you have broken all the software.<br />
<br />
* The easy way to do that low level reflashing is to use our live DVD, the procedure is detailed on [[353 firmware upgrade procedure]]<br />
* [[Prod353]] was designed not only to reflash cameras but also to test the camera hardware<br />
* And finally http://elphel.cvs.sourceforge.net/viewvc/elphel/elphel353-8.0/README.flash?view=markup&pathrev=MAIN explains how to do the procedure manually. You will need the tools installed by [[Elphel Software Kit for Ubuntu]].<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. It listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping, mmap), reaching 9-10 MB/s - virtually the full bandwidth of the network. This server does not provide any control over the sensor or the FPGA compressor; its only purpose is to serve the data already acquired to the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired to the system buffer, but it is meant for the cases when individual images are needed rather than a continuous video stream.<br />
<br />
The [[imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status output formatted as XML. <br />
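A one-liner is enough to grab the most recent frame from [[imgsrv]]. The /img URL path and the camera address are assumptions here - check your camera's imgsrv documentation; the leading "echo" makes this a dry run.<br />

```shell
CAM=192.168.0.9                       # assumed camera address
# Dry run: remove the leading "echo" to actually fetch the latest buffered frame.
echo wget -O latest.jpg "http://$CAM:8081/img"
```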
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full resolution snapshot once per minute. Then you can use [http://www.mplayerhq.hu/ mencoder] to assemble the snapshots into a video:<br />
<br />
On the PC edit your cron:<br />
<br />
crontab -e<br />
<br />
you should add a line like<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s`; mkdir ~/timelapse<br />
<br />
This example grabs one full-resolution image per minute from the camera between 6:00 and 21:59, and at 23:10 compresses the acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open-source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low-latency display by avoiding caching and is therefore perfectly suited to displaying live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post processing JP46 quicktime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and currently (May 2010) cannot do the full conversion from *.mov to DNG; it uses ffmpeg to extract a JP46 sequence from the .mov file.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above two applications and batch-converts all QuickTime .mov files found in a particular folder to DNG sequences.<br />
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger with the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution: the PIR is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full-resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve precise locked FPS, to synchronize multiple cameras or to trigger them from an external device.<br />
<br />
=== About triggered mode ===<br />
Triggered mode is documented [[Trigger|here]]. During your experimentations please do not forget that our CMOS sensor is ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have hardware synchronization capability allowing 1μs jitter between images as well as external trigger.<br />
<br />
The NC353L camera has FPGA code allowing camera synchronization, but an additional board is needed. The first board commercialized by Elphel that allows the use of hardware synchronization is the 10369 IO extension board. The board has internal synchronization connectors for cameras mounted in the same camera case and wired internally, and an external opto-isolated modular RJ-14 connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger, and one camera (or any other device) serving as master. To trigger image acquisition on all cameras, the "master" device needs to send a 3-5 V pulse on the synchronization cable. <br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/O's and a non-isolated high current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs so each sensor can be triggered with a specified delay from the common for-multiple-cameras trigger. There is also circuitry to drive sensor trigger input.<br />
<br />
The same FPGA module can be used in a single-camera configuration to provide precise control over the frame rate. The period of the free-running sensor is defined as the product of the number of lines, the number of pixels in a line (including invisible margins), and the pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In a free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
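The period formula above can be checked with quick arithmetic. The 2592x1944 resolution is from this page, but the margins and the pixel clock below are illustrative assumptions, not datasheet values:<br />

```shell
# Rough arithmetic behind "period = lines x pixels-per-line x pixel period".
awk 'BEGIN {
  lines  = 1944 + 8        # visible lines plus an assumed vertical margin
  pixels = 2592 + 256      # pixels per line plus an assumed horizontal margin
  pclk   = 96000000        # assumed pixel clock, Hz
  period = lines * pixels / pclk
  printf "frame period: %.4f s, max FPS: %.2f\n", period, 1/period
}'
```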
<br />
=== Playing with a LED (or a flash lamp) ===<br />
<br />
Here are some examples with a camera in triggered mode. The camera is filming at full resolution at 1 FPS.<br />
<br />
[[Image:Trigger blink.jpg|thumb|A LED is triggered by the camera and pointed directly to the sensor]]<br />
<br />
{|<br />
|[[Image:trig_1.jpeg|thumb|The LED is triggered just after the sensor is fully erased and the exposure is > 1/15s]]<br />
|[[Image:trig_2.jpeg|thumb|Here the end of the image was erased after the LED was triggered.]]<br />
|[[Image:trig_3.jpeg|thumb|Here the LED was triggered before the image was totally erased, and the exposure was not long enough, so the top of the image had already been read out at the time of the LED flash.]]<br />
|}<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor board conceived to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or of a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to control exposure on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP program as a daemon that communicates with the likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge,<br />
and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native cpu,<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that prints debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I fell back to libusb-0.1.11, which worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (somewhere at 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* Following this [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
* There are many other parameters, selection of the sequence and switching between single/multi mode is included in camvc (controls are shown when the 10359 board is detected in the system)<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
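The three steps above could be scripted with wget against parsedit.php. The link earlier on this page shows parsedit.php reading these parameters; that it also accepts NAME=value setters and the "immediate" keyword is an assumption, and the "echo" makes this a dry run.<br />

```shell
CAM=192.168.0.9                       # assumed camera address
# Dry run: remove the leading "echo" to actually apply TRIG=0x4, MULTI_MODE=0x1.
echo wget -q -O - "http://$CAM/parsedit.php?immediate&TRIG=4&MULTI_MODE=1"
```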
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=File:Firmware_update_System_Preferences.jpg&diff=8507File:Firmware update System Preferences.jpg2010-07-14T13:19:56Z<p>Polto: </p>
<hr />
<div></div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8506Elphel workshop in Bordeaux during RMLL 20102010-07-14T13:06:02Z<p>Polto: /* reflashing camera firmware & FPGA bitstream */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech), non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France from 6th to 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware and software, and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economic models of Free Software.<br />
<br />
Some preparations on your end (e.g. pre-installing required packages) will allow us to focus on the main topics and cover any remaining questions. Any questions prior to or during the workshop should be discussed on Elphel's IRC channel, wiki or public mailing list so we have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participant’s list], and please add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, maps, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent, network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under the GNU FDL v1.3 license, and all the source code, including the FPGA Verilog code, Linux drivers and software, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum required to assemble a camera:<br />
* [[10353]] - processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end to [[353|Elphel 353 series cameras]].<br />
<br />
Those two are optional and very flexible extension boards:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 Processor board]] and the sensor boards (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board: SATA, CF, USB 1.1, GPIO, I2C, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Some datasheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list - on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on Eyesis high resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we will return to our next-generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
<br />
Elphel provides a Free Software SDK for everything but the synthesis/place & route tools for the FPGA; there you will have to deal with the proprietary but free-of-charge Xilinx WebPack ISE. Simulation is still possible with free software (this is what we use ourselves at Elphel) - Icarus Verilog and GTKWave. Unfortunately there are a few Xilinx primitives used in the design (from the Xilinx unisims library) that are needed for simulation. We hope that Xilinx will eventually release this code under a free license, or that somebody will help us re-implement these simulation Verilog models.<br />
<br />
To install the SDK for Elphel cameras you need a fresh installation of the latest supported (10.04 at the moment) (K)Ubuntu GNU/Linux distribution; then follow the instructions available on this page: [[Elphel Software Kit for Ubuntu]]. The FPGA part is documented separately: [[FPGA Development in Elphel cameras]].<br />
<br />
Those manuals are written so that you only have to copy & paste commands into a terminal in order to install the software needed to start developing on Elphel cameras.<br />
<br />
Elphel uses the [[KDevelop]] 3.5 IDE - there is a script that creates a KDevelop project from the Elphel source tree so you can easily navigate the files (for KDevelop 3.5.x only; 4.x is not supported yet). But of course you are free to use vi or emacs...<br />
<br />
If you are not able to install the SDK using these instructions, please report it on our [http://www3.elphel.com/list mailing list] or here on the wiki on the discussion page.<br />
<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
== FPGA bitstream ==<br />
<br />
FPGA is reflashed on every boot. The bitstream is placed in /etc folder.<br />
<br />
* /etc/x347.bit [[10347]] code<br />
* /etc/x353.bit Main board FPGA [[10353]]<br />
* /etc/x359.bit [[10359]] multiplexer board<br />
<br />
The bitstream is sent to /dev/fpgaconfjtag (or another device) by the /etc/init.d/fpga init script, which also performs some configuration before and after that. So the easiest way to load a new bitstream is to upload it by FTP or SCP, run sync and reboot the camera.<br />
<br />
Please read [[FPGA Development in Elphel cameras]] for more info.<br />
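The upload-sync-reboot sequence above can be sketched in two commands. The camera address, root login and availability of scp/ssh on your firmware build are assumptions; the leading "echo" turns this into a dry run.<br />

```shell
CAM=192.168.0.9                       # assumed camera address - adjust to yours
# Dry run: each command is only printed; remove the leading "echo" to execute.
echo scp x353.bit root@$CAM:/etc/x353.bit
echo ssh root@$CAM "sync; reboot"
```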
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. It listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping, mmap), reaching 9-10 MB/s - virtually the full bandwidth of the network. This server does not provide any control over the sensor or the FPGA compressor; its only purpose is to serve the data already acquired to the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired to the system buffer, but it is meant for the cases when individual images are needed rather than a continuous video stream.<br />
<br />
The [[imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status output formatted as XML. <br />
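A one-liner is enough to grab the most recent frame from [[imgsrv]]. The /img URL path and the camera address are assumptions here - check your camera's imgsrv documentation; the leading "echo" makes this a dry run.<br />

```shell
CAM=192.168.0.9                       # assumed camera address
# Dry run: remove the leading "echo" to actually fetch the latest buffered frame.
echo wget -O latest.jpg "http://$CAM:8081/img"
```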
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full resolution snapshot once per minute. Then you can use [http://www.mplayerhq.hu/ mencoder] to assemble the snapshots into a video:<br />
<br />
On the PC edit your cron:<br />
<br />
crontab -e<br />
<br />
you should add a line like<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s`; mkdir ~/timelapse<br />
<br />
This example grabs one full-resolution image per minute from the camera between 6:00 and 21:59, and at 23:10 compresses the acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open-source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low-latency display by avoiding caching and is therefore perfectly suited to displaying live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post processing JP46 quicktime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and currently (May 2010) cannot do the full conversion from *.mov to DNG; it uses ffmpeg to extract a JP46 sequence from the .mov file.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above two applications and batch-converts all QuickTime .mov files found in a particular folder to DNG sequences.<br />
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger with the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution: the PIR is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full-resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve precise locked FPS, to synchronize multiple cameras or to trigger them from an external device.<br />
<br />
=== About triggered mode ===<br />
Triggered mode is documented [[Trigger|here]]. During your experimentations please do not forget that our CMOS sensor is ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have hardware synchronization capability allowing 1μs jitter between images as well as external trigger.<br />
<br />
The NC353L camera has FPGA code allowing camera synchronization, but an additional board is needed. The first board commercialized by Elphel that allows the use of hardware synchronization is the 10369 IO extension board. The board has internal synchronization connectors for cameras mounted in the same camera case and wired internally, and an external opto-isolated modular RJ-14 connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger, and one camera (or any other device) serving as master. To trigger image acquisition on all cameras, the "master" device needs to send a 3-5 V pulse on the synchronization cable. <br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/O's and a non-isolated high current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs so each sensor can be triggered with a specified delay from the common for-multiple-cameras trigger. There is also circuitry to drive sensor trigger input.<br />
<br />
The same FPGA module can be used in a single-camera configuration to provide precise control over the frame rate. The period of the free-running sensor is defined as the product of the number of lines, the number of pixels in a line (including invisible margins), and the pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In a free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
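The period formula above can be checked with quick arithmetic. The 2592x1944 resolution is from this page, but the margins and the pixel clock below are illustrative assumptions, not datasheet values:<br />

```shell
# Rough arithmetic behind "period = lines x pixels-per-line x pixel period".
awk 'BEGIN {
  lines  = 1944 + 8        # visible lines plus an assumed vertical margin
  pixels = 2592 + 256      # pixels per line plus an assumed horizontal margin
  pclk   = 96000000        # assumed pixel clock, Hz
  period = lines * pixels / pclk
  printf "frame period: %.4f s, max FPS: %.2f\n", period, 1/period
}'
```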
<br />
=== Playing with a LED (or a flash lamp) ===<br />
<br />
Here are some examples with a camera in triggered mode. The camera is filming at full resolution at 1 FPS.<br />
<br />
[[Image:Trigger blink.jpg|thumb|A LED is triggered by the camera and pointed directly to the sensor]]<br />
<br />
{|<br />
|[[Image:trig_1.jpeg|thumb|The LED is triggered just after the sensor is fully erased and the exposure is > 1/15s]]<br />
|[[Image:trig_2.jpeg|thumb|Here the end of the image was erased after the LED was triggered.]]<br />
|[[Image:trig_3.jpeg|thumb|Here the LED was triggered before the image was totally erased, and the exposure was not long enough, so the top of the image had already been read out at the time of the LED flash.]]<br />
|}<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor board conceived to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or of a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to control exposure on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP program as a daemon that communicates with the likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge,<br />
and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native cpu,<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that prints debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I fell back to libusb-0.1.11, which worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (somewhere at 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* Following this [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
* There are many other parameters, selection of the sequence and switching between single/multi mode is included in camvc (controls are shown when the 10359 board is detected in the system)<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
# set other image parameters<br />
# set TRIG parameter to 0x4<br />
# set MULTI_MODE parameter to 0x1<br />
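The two parameter writes above can also be scripted over HTTP. This is only a sketch: it assumes parsedit.php accepts assignments as URL query arguments (as suggested by the parsedit link earlier on this page) and that the camera is at 192.168.0.9; the helper just prints the URL so you can inspect it before use.

```shell
# Sketch: build parsedit.php URLs for the combined-frame-mode steps.
# The query form and the camera IP are assumptions - verify against your camera.
CAM=192.168.0.9
param_url() {
  echo "http://${CAM}/parsedit.php?${1}=${2}"
}
param_url TRIG 0x4          # step 2
param_url MULTI_MODE 0x1    # step 3
# e.g.: wget -q -O- "$(param_url TRIG 0x4)"
```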
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_Software_Kit_for_Ubuntu&diff=8434Elphel Software Kit for Ubuntu2010-07-05T08:09:08Z<p>Polto: /* Mplayer - installation for (K)Ubuntu older than 9.10 release */</p>
<hr />
<div>=About=<br />
<br />
This page is a simple howto for running Elphel software on (K)Ubuntu GNU/Linux.<br />
<br />
Instructions below are updated for [http://www.kubuntu.org Kubuntu 9.04] and should also work with Ubuntu 9.04 --[[User:Andrey.filippov|Andrey.filippov]] 02:52, 24 May 2009 (CDT)''<br />
<br />
You can download this GNU/Linux distribution freely from http://www.kubuntu.org/<br />
<br />
=If you are new to GNU / Linux=<br />
Many forums and wikis in many languages are available to help you install and use Ubuntu, e.g. http://www.google.com/search?q=forum+ubuntu (you can add "&hl=fr" or any language code to the URL)<br />
<br />
Most instructions below are commands that you need to enter in the terminal window. For the lines that do not end with the "\" sign, copy them one-by-one and paste them into the terminal window (in KDE it is Konsole in the "System" menu). For pasting you '''can not''' use <Ctrl-V> - you need to '''right-click in the terminal window and select "Paste"''' from the drop-down context menu. Alternatively you can use '''the middle mouse button''' both to copy (drag with the middle button pressed) and to paste (click it in the console window).<br />
<br />
The character "'''\'''" at the end of a line means continuation, so you can copy a whole block of text where each line but the very last ends with "\" and paste it as one.<br />
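A minimal illustration of the continuation character (using echo, so it is safe to try):

```shell
# "\" at the end of a line joins it with the next one,
# so both commands below are identical to the shell.
echo one two three
echo one \
     two three
```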
<br />
Many of the commands start with "'''sudo'''" - the first time, the system will ask for your user password, which you type without any stars or other feedback being shown (provided you have administrative privileges).<br />
<br />
If you run into problems it is very useful to copy the error message that the system outputs (omitting anything specific to your particular installation - like user directory names) and paste it into the search box of your browser.<br />
<br />
=User software=<br />
<br />
Some software needs to be patched and recompiled even if it exists in the Ubuntu repositories, and some software is not yet packaged in Ubuntu, so you have to compile it from sources as well. We try to push our patches upstream to the mainstream applications, but it takes time and is not always possible.<br />
<br />
<br />
==Mplayer==<br />
'''Kubuntu 9.10 includes MPlayer that is working with Elphel camera "out of the box"'''<br />
So for (K)Ubuntu 9.10 you just need to install<br />
sudo apt-get install mplayer-nogui mplayer gecko-mediaplayer<br />
<br />
===Mplayer - installation for (K)Ubuntu older than 9.10 release ===<br />
<br />
With previous versions MPlayer has to be patched and recompiled; the following instructions document how to do it on a (K)Ubuntu or Debian based workstation.<br />
<br />
Install gecko-mediaplayer before compiling/installing MPlayer; if you do it later it will install a non-patched version of MPlayer:<br />
<br />
 sudo apt-get install gecko-mediaplayer<br />
<br />
<br />
First install some compilation dependencies, mainly libraries...<br />
<br />
<!-- sudo apt-get install build-essential debhelper libncurses5-dev libesd0-dev liblircclient-dev libgtk2.0-dev \<br />
libvorbis-dev libsdl1.2-dev sharutils libasound2-dev liblzo-dev gawk libjpeg62-dev libaudiofile-dev \<br />
libsmbclient-dev libxv-dev libpng3-dev libgif-dev libcdparanoia0-dev libxvidcore4-dev libdv-dev \<br />
liblivemedia-dev libfreetype6-dev em8300-headers libgl1-mesa-dev libdvdread-dev libdts-dev libtheora-dev \<br />
libglu-dev libartsc0-dev libfontconfig-dev libxxf86dga-dev libxinerama-dev libxxf86vm-dev \<br />
libxvmc-dev libggi2-dev libmpcdec-dev libspeex-dev libfribidi-dev libfaac-dev libaa1-dev libcaca-dev \<br />
libx264-dev libpulse-dev libmad0-dev ladspa-sdk libdbus-glib-1-dev libaudio-dev liblzo2-dev libdvdnav-dev \<br />
libopenal-dev libjack-dev libtwolame-dev libsvga1-dev libenca-dev libmp3lame-dev<br />
<br />
'''If you are under Ubuntu 8.10 (Intrepid) replace liblame-dev at the end by libmp3lame-dev''' --><br />
<br />
sudo apt-get install build-essential debhelper libncurses5-dev libesd0-dev liblircclient-dev libgtk2.0-dev \<br />
libvorbis-dev libsdl1.2-dev sharutils libasound2-dev gawk libjpeg62-dev libaudiofile-dev \<br />
libsmbclient-dev libxv-dev libpng12-dev libgif-dev libcdparanoia-dev libdv4-dev \<br />
liblivemedia-dev libfreetype6-dev libgl1-mesa-dev libdvdread-dev libdts-dev libtheora-dev \<br />
libglu1-mesa-dev libfontconfig-dev libxxf86dga-dev libxinerama-dev libxxf86vm-dev \<br />
libxvmc-dev libggi2-dev libmpcdec-dev libspeex-dev libfribidi-dev libfaac-dev libaa1-dev libcaca-dev \<br />
libx264-dev libpulse-dev libmad0-dev ladspa-sdk libdbus-glib-1-dev libaudio-dev liblzo2-dev libdvdnav-dev \<br />
libopenal-dev libjack-dev libtwolame-dev libsvga1-dev libenca-dev libmp3lame-dev<br />
<br />
<br />
<!--<br />
Get the MPlayer ubuntu source package:<br />
apt-get source mplayer<br />
<br />
patch the sources and compile:<br />
cd mplayer-1.0~rc2/<br />
sed s/\#define\ MAX_RTP_FRAME_SIZE\ 50000/\#define\ MAX_RTP_FRAME_SIZE\ 5000000/g \<br />
libmpdemux/demux_rtp.cpp > libmpdemux/demux_rtp.cpp_<br />
mv libmpdemux/demux_rtp.cpp_ libmpdemux/demux_rtp.cpp<br />
sudo dpkg-buildpackage <br />
cd ..<br />
<br />
install mplayer package:<br />
sudo dpkg --install mplayer_1.0~rc2-0ubuntu*.deb<br />
--><br />
Current MPlayer code is capable of working with full-resolution video produced by Elphel cameras. That is not yet true for the MPlayer packages in the Ubuntu repositories, so you'll have to obtain the source code from MPlayer's recommended source - their Subversion (SVN) repository. First install Subversion itself:<br />
sudo apt-get install subversion<br />
Now create a download directory and get MPlayer source:<br />
mkdir download; cd download<br />
svn checkout svn://svn.mplayerhq.hu/mplayer/trunk mplayer<br />
Configure, compile and prepare Debian package from that source (will take some time):<br />
cd mplayer<br />
sudo dpkg-buildpackage <br />
Occasionally dpkg-buildpackage fails to build; try the common ./configure, make, make install way then.<br />
<br />
Now uninstall the default Ubuntu mplayer, mencoder and mplayer-nogui packages and install the newly created one:<br />
sudo apt-get remove mplayer mencoder mplayer-nogui<br />
cd ..<br />
sudo dpkg --install mplayer_1.0svn*.deb<br />
<br />
===MPlayer - testing with Elphel camera ===<br />
You should now be able to play video with frames of up to 5 MB (the highest quality 5 MPix images are around 1 MB) as a multicast or unicast video stream (the streamer in the camera should be ENABLED):<br />
mplayer rtsp://192.168.0.9:554 -vo x11 -fs -zoom<br />
<br />
''Update 10/09/2009: In (K)Ubuntu 9.10 (Karmic Koala) repository the '''50,000 bytes limit on the frame size''' is fixed, but unfortunately the [https://bugs.launchpad.net/ubuntu/+source/mplayer/+bug/296488 other one -'''frame width limit of 2048 pixels'''] (submitted to MPlayer SVN on May, 5, 2009) - is not''--[[User:Andrey.filippov|Andrey.filippov]] 19:03, 9 October 2009 (CDT)<br />
<br />
The first (50,000) makes your picture break after the first 50,000 bytes (only the top is shown); the second (current for 9.10) makes MPlayer report a fatal error. So you still have to use MPlayer SVN to get the full resolution from Elphel cameras.<br />
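For reference, the 50,000-byte limit lives in a single #define in the live555 demuxer, and the old packaged-source instructions patched it with sed. The sketch below shows the same edit against a checkout; the path is an assumption (adjust to your tree), and the guard keeps it from failing when the file is absent.

```shell
# Sketch: raise MAX_RTP_FRAME_SIZE from 50000 to 5000000 in the MPlayer source.
# mplayer/libmpdemux/demux_rtp.cpp is the path used by the old packaged build.
SRC=mplayer/libmpdemux/demux_rtp.cpp
if [ -f "$SRC" ]; then
  sed -i 's/MAX_RTP_FRAME_SIZE 50000\b/MAX_RTP_FRAME_SIZE 5000000/' "$SRC"
fi
```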
<br />
Additionally, to make MPlayer work inside a web page you need to specify the video output as "x11" in the [http://howto.wikia.com/wiki/Howto_configure_MPlayer MPlayer config file] - add the line<br />
vo="x11"<br />
to the ~/.mplayer/config file<br />
<br />
=For developers=<br />
<br />
==Adding universe and multiverse sources==<br />
Please follow this howto for adding universe and multiverse sources. <br />
<br />
https://help.ubuntu.com/community/Repositories/Ubuntu<br />
<br />
or <br />
<br />
https://help.ubuntu.com/community/Repositories/Kubuntu<br />
<br />
==Install needed packages==<br />
Minimal packages:<br />
sudo apt-get install cvs build-essential autoconf flex byacc bison libglib2.0-dev tcl gettext libncurses5-dev patch zlib1g-dev nfs-kernel-server bash xutils-dev<br />
Suggested packages:<br />
sudo apt-get install kinfocenter minicom firefox graphviz doxygen ctags cervisia php5 php5-cli xchat ssh kompare<br />
<br />
=== Installation of GCC Compiler kit for Axis ETRAX processor ===<br />
Download and install the CRIS GCC compiler. It is needed to compile C and C++ programs for the CPU used in Elphel cameras - the [http://en.wikipedia.org/wiki/ETRAX_CRIS Axis ETRAX FS]:<br />
mkdir -p ~/Downloads/axis ; cd ~/Downloads/axis<br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-linux-headers-1.64.tar.gz <br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-linux-headersv32-1.64.tar.gz <br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-glibc-1.64.tar.gz <br />
wget http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/cris-dist-1.64.tar.gz<br />
tar zxvf cris-dist-1.64.tar.gz<br />
cd cris-dist-1.64/<br />
tar zxvf ../cris-dist-linux-headers-1.64.tar.gz <br />
tar zxvf ../cris-dist-linux-headersv32-1.64.tar.gz <br />
tar zxvf ../cris-dist-glibc-1.64.tar.gz<br />
<br />
Unfortunately the CRIS compiler does not compile with GCC 4.3.x, and the only known working solution is to temporarily downgrade the compiler. You can upgrade it back to the current one after gcc-cris is built:<br />
<br />
sudo apt-get remove gcc<br />
<br />
See what other packages are removed as dependent on GCC. You may copy that list and re-install them later with "sudo apt-get install ''list of removed packages''".<br />
<br />
sudo apt-get install gcc-4.2 g++-4.2<br />
<br />
And link them as default version:<br />
sudo ln -sf /usr/bin/gcc-4.2 /usr/bin/gcc<br />
sudo ln -sf /usr/bin/g++-4.2 /usr/bin/g++<br />
sudo ln -sf /usr/bin/gcc-4.2 /usr/bin/cc<br />
<br />
You may verify the current GCC version (make sure it is 4.2.x) with the following command:<br />
gcc --version<br />
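If you script the downgrade, the version check can be automated. This is a sketch only - gcc_major_minor is a hypothetical helper, not part of the Elphel tools:

```shell
# Sketch: warn before the long cris-dist build if the active gcc
# is not the required 4.2 series. gcc_major_minor is a hypothetical helper.
gcc_major_minor() {
  # prints e.g. "4.2" from a line like "gcc (GCC) 4.2.4"
  echo "$1" | sed 's/.* \([0-9][0-9]*\.[0-9][0-9]*\)\.[0-9].*/\1/'
}
VLINE=$(gcc --version 2>/dev/null | head -n1)
V=$(gcc_major_minor "$VLINE")
[ "$V" = "4.2" ] || echo "warning: active gcc reports '$VLINE' - the gcc-cris build needs 4.2.x"
```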
<br />
Now you may proceed with compiling gcc-cris (takes some time):<br />
<br />
sudo ./install-cris-tools<br />
 accept the defaults (Enter, Enter, ...)<br />
<br />
If everything finished OK you may reinstall the GCC back (and add other packages you may have removed):<br />
sudo apt-get install build-essential g++ gcc<br />
<br />
Don't forget to export the path to the cris-compiler - the default location is /usr/local/cris, for example:<br />
<br />
tobias@MoonbaseAlphaOne:~$ export PATH=$PATH:/usr/local/cris/bin<br />
<br />
If everything worked out well, you can check the compiler version with<br />
<br />
gcc-cris --version<br />
<br />
which should result in an output like this (example - might vary with version):<br />
<br />
cris-axis-elf-gcc (GCC) 3.2.1 Axis release R64/1.64<br />
Copyright (C) 2002 Free Software Foundation, Inc.<br />
This is free software; see the source for copying conditions. There is NO<br />
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.<br />
<br />
==Configure your NFS server==<br />
<br />
Let's say you want to configure an NFS server on your machine and your IP address is '''192.168.0.15'''.<br />
<br />
<!--Edit /etc/exports file with your favorite editor. Here I use nano.<br />
sudo nano -w /etc/exports<br />
alternatively you may edit the same file with kate<br />
Those who know how to use nano will figure that out themselves --[[User:Andrey.filippov|Andrey.filippov]] 18:09, 23 May 2009 (CDT)<br />
--><br />
Modify the configuration file:<br />
kdesudo kate /etc/exports<br />
Add at the end of the file:<br />
/nfs 192.168.0.0/255.255.0.0(rw,sync,no_root_squash)<br />
save the file.<br />
<br />
If it does not yet exist, create the /nfs directory and make it world-writable so that the camera can write logs to it.<br />
sudo mkdir /nfs<br />
sudo chmod 777 -R /nfs<br />
<br />
And finally export the filesystem.<br />
sudo exportfs -a<br />
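A quick way to confirm the entry took effect (a sketch - check_export is a hypothetical helper):

```shell
# Sketch: check that a path appears as an export in an exports(5)-style file.
check_export() {
  # $1 = exports file, $2 = exported directory
  grep -q "^${2}[[:space:]]" "$1" 2>/dev/null
}
if check_export /etc/exports /nfs; then
  echo "/nfs is exported"
else
  echo "/nfs not found in /etc/exports"
fi
```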
<br />
== Installation of Kdevelop 3.5 ==<br />
''note: Newer KDevelop 4 does not work with Elphel software''<br />
==== pre (K)Ubuntu 9.10 ====<br />
sudo apt-get install kdevelop<br />
<br />
==== (K)Ubuntu 9.10 ====<br />
see [[KDevelop#Installation_of_KDevelop_3.5_on_.28K.29Ubuntu_9.10|Installation of KDevelop 3.5 on (K)Ubuntu 9.10]]<br />
<br />
== Install Icarus Verilog and GTKWave ==<br />
(If you plan to develop FPGA code or at least look how it works)<br />
<br />
GTKWave is OK from repository:<br />
sudo apt-get install gtkwave<br />
<br />
But unfortunately Icarus Verilog (package verilog) is compiled without the needed support for the compressed output format, so you'll have to compile it yourself:<br />
wget "ftp://ftp.icarus.com/pub/eda/verilog/v0.9/verilog-0.9.1.tar.gz"<br />
tar zxvf verilog-0.9.1.tar.gz<br />
cd verilog-0.9.1/<br />
./configure<br />
make<br />
sudo make install<br />
<br />
== Installing Xilinx WebPack (to be moved to a separate page, just a link here)==<br />
If you plan to compile the FPGA code from the source code you will need to install the appropriate software from the FPGA manufacturer's web site. This is proprietary (and non-free) software provided by [http://www.xilinx.com Xilinx] free of charge. It is called the [http://www.xilinx.com/tools/webpack.htm ISE WebPACK] and you may download it (currently some 2.2GB) after registering at the Xilinx web site. The currently tested version is 10.1.03; we'll try to update our code when Xilinx releases a new version of their software.<br />
<br />
You will also need that software if you would like to simulate the FPGA functionality. The Elphel camera FPGA code (written in Verilog HDL) is licensed under the GNU GPLv3, and the simulator and waveform viewer (see below) are also Free Software, but a few Verilog models of Xilinx FPGA primitives are not free and can only be obtained from Xilinx as part of their unisims library. For simulation we use a small subset of the unisims components, and only for functional simulation, so it would probably make sense to re-write those primitive models so that the 2.2 GB distribution is not needed just to extract a few kilobytes of required source code.<br />
<br />
Such an independent re-implementation would also help us solve another problem: we have to patch the Xilinx library components to make them work correctly with our code and the simulator we use, and currently each Xilinx library update breaks our patches.<br />
<br />
When you build the Elphel software (as described later) the installation script will try to locate the Xilinx software on your computer and patch a copy of the unisims library. If you install Xilinx WebPack after building the camera software, you'll need to navigate to the ''fpga'' subdirectory of the source tree and execute<br />
make clean ; make<br />
It is described in file ''README.simulation'' in that subdirectory.<br />
<br />
<br />
There are additional steps required for the Xilinx WebPACK installation if you have a 64-bit GNU/Linux operating system. The next command line detects whether you are running a 64-bit version of GNU/Linux and conditionally installs '''ia32-libs'''. That library is needed if you install the Xilinx WebPack software (Xilinx does not provide 64-bit programs in their free-for-download software, and the provided installation script does not install the 32-bit version on its own). There is [http://ubuntuforums.org/showthread.php?t=203459 another trick] you'll need to be aware of if you are using the 32-bit Xilinx WebPack on a 64-bit GNU/Linux system.<br />
if [ `uname -m` = "x86_64" ] ; then sudo apt-get install ia32-libs ; fi<br />
<br />
==Installation of the source code of Elphel camera software==<br />
<br />
You may install the Elphel source code by either of two methods: from the CVS (the most current code) or from the tarball files.<br />
===Installation from the CVS===<br />
<br />
Get [http://downloads.sourceforge.net/elphel/elphel353_80_install_from_cvs.sh.tar.gz elphel353_80_install_from_cvs.sh], unpack the archive that contains the shell script, and execute it. It is recommended that you create a subdirectory in your home directory, e.g. "elphel_projects", and move and execute the elphel353_80_install_from_cvs.sh script there. A directory "distfiles" will be created there and used as a cache for software archives downloaded during installation.<br />
<br />
mkdir -p ~/elphel_projects; cd ~/elphel_projects<br />
wget "http://downloads.sourceforge.net/elphel/elphel353_80_install_from_cvs.sh.tar.gz"<br />
tar zxvf elphel353_80_install_from_cvs.sh.tar.gz<br />
./elphel353_80_install_from_cvs.sh<br />
<br />
===Installation from the tarball (release file)===<br />
http://sourceforge.net/projects/elphel<br />
<br />
* get one of the elphel353-8.0.* releases<br />
* decompress the archive<br />
* execute the ./install_elphel script<br />
./install_elphel<br />
<br />
A file build.log will be created in the top installation directory. If you run into any installation problems you can compress that file and email it to Elphel support.<br />
<br />
At the end of the installation the script generates a list of all the files in the target (camera) file system that are to be installed and compares it against the contents of the file ''target.list'' that is included in the distribution. Any differences will be reported (there should be none); if some files are missing it is likely that something failed to install correctly.<br />
<br />
After the installation completes successfully you may want to execute the following command in the top installation directory (the one that has apps, configure-files, ... subdirectories):<br />
./prep_kdevelop.php<br />
<br />
That will create ''elphel353.kdevelop'' - a project file for the KDevelop IDE (version 3.5.x); you can use it as described here - [[KDevelop]]<br />
<br />
<br />
----<br />
<br />
<br />
= New SDK Environment =<br />
*About<br />
<br />
Considering that the existing/old Knoppix-based Live CD is outdated in terms of its software packages and currently receives no periodic maintenance, we're planning to build a new SDK environment for users/developers. <br />
<br />
* Architecture<br />
o x86 compatible, 32bit (i386) & 64bit (amd64)<br />
o Can be fit into USB key(4GB) and DVD(4.7GB) <br />
o Ubuntu 9.10 based for first prototype, release in Feb. 2010. Collect bug report/feedback to make improvement and target to have the first official release (Ubuntu 10.4 based ) in June 2010.<br />
o Default desktop environment: KDE (so Kubuntu actually)<br />
<br />
<br />
* Main Feature<br />
o USB key can be self-bootable<br />
o USB key can be used both as live system and installation media (same as Ubuntu Desktop DVD)<br />
o The live system should be able to drive and demo the Elphel camera.<br />
o Can be used as a normal Ubuntu system<br />
o Default installation of following packages: (TBD, refer to http://wiki.elphel.com/index.php?title=Elphel_Software_Kit_for_Ubuntu)<br />
+ mplayer-nogui mplayer gecko-mediaplayer (1.0 from sourceforge)<br />
+ libvorbis-dev libsdl1.2-dev sharutils libasound2-dev liblzo-dev gawk libjpeg62-dev libaudiofile-dev \<br />
libsmbclient-dev libxv-dev libpng3-dev libgif-dev libcdparanoia0-dev libxvidcore4-dev libdv-dev \<br />
liblivemedia-dev libfreetype6-dev em8300-headers libgl1-mesa-dev libdvdread-dev libdts-dev libtheora-dev \<br />
libglu-dev libartsc0-dev libfontconfig-dev libxxf86dga-dev libxinerama-dev libxxf86vm-dev \<br />
libxvmc-dev libggi2-dev libmpcdec-dev libspeex-dev libfribidi-dev libfaac-dev libaa1-dev libcaca-dev \<br />
libx264-dev libpulse-dev libmad0-dev ladspa-sdk libdbus-glib-1-dev libaudio-dev liblzo2-dev libdvdnav-dev \<br />
libopenal-dev libjack-dev libtwolame-dev libsvga1-dev libenca-dev libmp3lame-dev<br />
+ cvs build-essential autoconf flex byacc bison libglib2.0-dev tcl gettext libncurses5-dev patch zlib1g-dev nfs-kernel-server bash xutils-dev<br />
+ kinfocenter minicom firefox graphviz doxygen ctags cervisia php5 php5-cli xchat ssh kompare<br />
+ KDevelop <br />
o Key package list in exclude list so actions like apt-get upgrade would not break above packages that are already installed<br />
o Above packages must be included in the USB key image<br />
o Create meta-package such as apt-get install elphel-dev ?<br />
      o For packages that cannot be included in the installation media due to legal restrictions, create a meta package / installation script? (put on the desktop)<br />
o Elphel document link(such as user manual) on desktop (both in installation media live system and installed system)<br />
      o Provide a debian/ubuntu style repository for users so apt-get install can be a feasible option for people who have a regular Ubuntu system<br />
o Include zeroconf for device discovery (10373)<br />
o Include "camera discovery" script for device discovery<br />
o Desktop icons for starting mplayer & gstreamer live stream from camera<br />
<br />
<br />
* Release & Update<br />
o Release in a six months cycle in accordance with Ubuntu release cycle, focus on LTS version<br />
o Wiki page to include key information (installation, important notice etc)<br />
o Include both USB key in camera package.<br />
o ISO image downloadable from Elphel website, user may choose to burn into DVD or USB key.<br />
<br />
* Others<br />
o Elphel label sticker on USB key.<br />
      o Legal/compliance check to ensure no violation on Canonical/Ubuntu, Xilinx, Verilog, Axis, etc. such as logo, name use, etc. Vendor selection (USB key, label etc)<br />
<br />
<br />
<!--<br />
<br />
In Ubuntu 8.10 there can be an error like '''''(update - solved with installation xutils-dev for 'makedep' utility at 8.10)''''':<br />
*** No rule to make target `../../include/openssl/idea.h', needed by `hmac.o'.<br />
<br />
The solution is to create a link:<br />
cd /cvs_sync/elphel353-7.1.8/elphel353-7.1/apps/crypto/openssl-IR0_9_7f-3/openssl/include/openssl<br />
ln -s ../../crypto/idea/idea.h<br />
--><br />
<!--<br />
==Elphel SDK==<br />
<br />
We provide our clients with a complete SDK to develop software, FPGA code and even to redesign the hardware.<br />
<br />
===PHP===<br />
*[[PHP in Elphel cameras]]<br />
*[[Elphel PHP constants]]<br />
*[[PHP Examples]]<br />
<br />
===KDevelop IDE===<br />
*[[KDevelop]] IDE<br />
*work in progress [[KDevelop]] integration with GTKWave & Icarus.<br />
<br />
===FPGA Development in Elphel cameras===<br />
*[[FPGA Development in Elphel cameras]] is the page to read if you want to install Xilinx tools to do some FPGA development.<br />
*iverilog<br />
*GTKWave<br />
= EeeBox (K)Ubuntu 8.10 installation =<br />
<br />
=== ACPI configuration ===<br />
<br />
EeeBox will suspend and freeze after some time w/o any activity on it, so it's better to turn off suspend. Run this script at this new folder with ''root'' privileges (for example, put text to ''go'' file, make that file executable '''chmod a+x go''', and run with sudo '''sudo ./go'''):<br />
<br />
#!/bin/sh<br />
<br />
# patch ACPI config to turn off suspend<br />
<br />
patch -N /etc/default/acpi-support << 'EOP'<br />
--- a 2008-12-31 00:40:41.000000000 +0200<br />
+++ acpi-support 2008-12-31 00:40:08.000000000 +0200<br />
@@ -56,7 +56,7 @@<br />
# Please specify a space separated list of options. The recommended value is<br />
# "dbus pm-utils"<br />
#<br />
-SUSPEND_METHODS="dbus-pm dbus-hal pm-utils"<br />
+SUSPEND_METHODS=""<br />
<br />
<br />
<br />
@@ -69,10 +69,10 @@<br />
#<br />
<br />
# Comment the next line to disable ACPI suspend to RAM<br />
-ACPI_SLEEP=true<br />
+#ACPI_SLEEP=true<br />
<br />
# Comment the next line to disable suspend to disk<br />
-ACPI_HIBERNATE=true<br />
+#ACPI_HIBERNATE=true<br />
<br />
# Change the following to "standby" to use ACPI S1 sleep, rather than S3.<br />
# This will save less power, but may work on more machines<br />
EOP<br />
<br />
# and apply changes<br />
/etc/init.d/acpi-support restart<br />
/etc/init.d/acpid restart<br />
<br />
=== Network configuration ===<br />
<br />
''Please, someone, have a look at how to configure the network on Kubuntu 8.10 - I had bad luck with that. The main idea is: set up WiFi for internet access, and a local interface for the local network to work with the camera. Possible problem: Oleg told me that with a POE adapter (not a POE switch, as I use) the local network will not work - the network manager shuts down eth0 every time the link is lost, for example when the cable is disconnected from the camera for reflashing with the button (as used with the prod353 system).''<br />
<br />
=== GCC-CRIS cross-compiler installation ===<br />
<br />
Make a directory where you will build cross-compiler and store sources of it, and run this script at this new folder with ''root'' privileges (for example, put text to ''go'' file, make that file executable '''chmod a+x go''', and run with sudo '''sudo ./go'''):<br />
<br />
#!/bin/sh<br />
<br />
# install packages to build cross-compiler<br />
apt-get install build-essential gcc-4.2 g++-4.2 xutils-dev libncurses5-dev autoconf automake byacc bison zlib1g-dev patch cvs gettext<br />
<br />
# create links to GCC-4.2<br />
rm /usr/bin/gcc<br />
rm /usr/bin/g++<br />
ln -s /usr/bin/gcc-4.2 /usr/bin/gcc<br />
ln -s /usr/bin/g++-4.2 /usr/bin/g++<br />
<br />
# download sources - if links are broken, check developer.axis.com<br />
wget -c http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/old/cris-dist-1.64.tar.gz<br />
wget -c http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/old/cris-dist-glibc-1.64.tar.gz<br />
wget -c http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/old/cris-dist-linux-headers-1.64.tar.gz<br />
wget -c http://www.axis.com/ftp/pub/axis/tools/cris/compiler-kit/old/cris-dist-linux-headersv32-1.64.tar.gz<br />
<br />
# unarch that<br />
tar -xvf cris-dist-1.64.tar.gz<br />
cd cris-dist-1.64<br />
cp ../cris-dist-glibc-1.64.tar.gz ./<br />
tar -xvf cris-dist-glibc-1.64.tar.gz<br />
cp ../cris-dist-linux-* ./<br />
tar -xvf cris-dist-linux-headers-1.64.tar.gz<br />
tar -xvf cris-dist-linux-headersv32-1.64.tar.gz<br />
<br />
# build and install - agree with all questions, at the finish also<br />
./install-cris-tools<br />
<br />
# delete build directory - we don't need it anymore<br />
cd ..<br />
rm -R cris-dist-1.64<br />
<br />
# restore host GCC links - out of the box, Ubuntu 8.10 use GCC-4.3 as default<br />
rm /usr/bin/gcc<br />
rm /usr/bin/g++<br />
ln -s /usr/bin/gcc-4.3 /usr/bin/gcc<br />
ln -s /usr/bin/g++-4.3 /usr/bin/g++<br />
<br />
# it's all, cross-compiler was installed; but while we still root, install packages needed to build firmware here<br />
apt-get install libglib2.0-dev pkg-config flex gettext tcl8.5<br />
<br />
=== NFS installation and configuration ===<br />
<br />
Before the firmware build, we should install and configure NFS, because the firmware build process will put firmware images into it for remote (and local) camera reflashing.<br />
<br />
sudo apt-get install nfs-kernel-server<br />
sudo mkdir /nfs<br />
# mkdir /nfs/elphel353-2.10 # not used in 8.0<br />
sudo chmod -R a+r /nfs<br />
sudo chmod -R a+w /nfs<br />
 # note: a plain 'sudo echo ... >> /etc/exports' would fail - the redirection runs as the user, not root
 echo "/nfs 192.168.0.0/24(rw,async)" | sudo tee -a /etc/exports<br />
sudo /etc/init.d/nfs-kernel-server restart<br />
<br />
=== build firmware ===<br />
<br />
=== other steps ===<br />
...to be continued - network setup, Firefox,KDeveloper,MPlayer installation...<br />
<br />
--><br />
[[Category:Operating Systems]]<br />
[[Category:Software]]</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8432Elphel workshop in Bordeaux during RMLL 20102010-07-04T18:07:54Z<p>Polto: /* Elphel SDK */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech) and non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM's goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL takes place in Bordeaux, France from the 6th to the 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware and software, and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economic models of Free Software.<br />
<br />
Some preparation on your end (e.g. pre-installing the required packages) will allow us to focus on the main topics and cover any remaining questions. Any questions prior to or during the workshop should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participant’s list], and please add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, maps, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our [http://www3.elphel.com/list mailing list] under the GNU FDL v1.3 license, and all the source code, including FPGA Verilog code, Linux drivers and software, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum requirements to assemble a camera:<br />
* [[10353]] - processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end to [[353|Elphel 353 series cameras]].<br />
<br />
Those two are optional and very flexible extension boards:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 Processor board]] and a sensor board (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board: SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Data sheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list , on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on the Eyesis high-resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we will return to our next-generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
<br />
Elphel provides a Free Software SDK for everything except the FPGA; for the FPGA you will have to deal with the proprietary but free-of-charge Xilinx WebPack ISE.<br />
<br />
To install the SDK for Elphel cameras you need a fresh installation of the latest supported (10.04 at the moment) (K)Ubuntu GNU/Linux distribution; then follow the instructions available on this page: [[Elphel Software Kit for Ubuntu]]. The FPGA part is documented separately: [[FPGA Development in Elphel cameras]].<br />
<br />
Those manuals are written so that you only need to copy & paste commands into a terminal in order to install the software needed to start developing on Elphel cameras.<br />
<br />
Elphel uses the [[KDevelop]] IDE. But of course you are free to use vi or emacs...<br />
<br />
If you are not able to install the SDK using those instructions, please report it on our [http://www3.elphel.com/list mailing-list] or here on the wiki on the discussion page.<br />
<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping with mmap), reaching 9-10MB/sec - virtually the full bandwidth of the network. This server does not provide any control over the sensor or FPGA compressor operation; its only purpose is to serve the data already acquired to the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired to the system buffer, but it is intended for when individual images are needed rather than a continuous video stream.<br />
<br />
The [[imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatting the output as XML files. <br />
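As a quick sketch of how a client talks to [[imgsrv]]: the port 8081 is documented above, but the "img" command name and the default camera IP 192.168.0.9 are assumptions to verify against your setup.<br />

```shell
# Build the imgsrv URL for the latest acquired JPEG. The "img" command
# name and the default IP are assumptions - check them on your camera.
CAMERA_IP=${CAMERA_IP:-192.168.0.9}
IMGSRV="http://${CAMERA_IP}:8081"
echo "latest image: ${IMGSRV}/img"
# With a camera attached you would fetch it with:
#   wget -q "${IMGSRV}/img" -O latest.jpg
```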
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full resolution snapshot once per minute. Then we can use [http://www.mplayerhq.hu/ mencoder] to assemble videos:<br />
<br />
On the PC edit your cron:<br />
<br />
crontab -e<br />
<br />
you should add a line like<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s`; mkdir ~/timelapse<br />
<br />
This example grabs one full-resolution image per minute from the camera from 6:00 to 21:59, and at 23:10 compresses the acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
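For instance, under the assumption that the camera streams RTSP on the default port 554 at 192.168.0.9 (check your camera's streamer configuration), playing the live stream is a one-liner:<br />

```shell
# Hypothetical VLC invocation for the camera's RTSP stream; the IP and
# port are assumptions from the default Elphel network setup.
STREAM_URL="rtsp://192.168.0.9:554"
echo "vlc ${STREAM_URL}"
# Run the echoed command on a PC with VLC installed and the camera reachable.
```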
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low-latency display by avoiding caching and is therefore perfectly suited to displaying live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
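A minimal sketch tying the two points above together: disabling the cache keeps latency low, and -vo selects one of the listed output drivers. The RTSP address is an assumption based on the default Elphel network setup.<br />

```shell
# Hypothetical low-latency MPlayer invocation; the RTSP address is an
# assumption based on the default Elphel network setup.
MPLAYER_OPTS="-nocache -vo xv"   # -nocache minimizes display latency, xv is one of the supported outputs
echo "mplayer ${MPLAYER_OPTS} rtsp://192.168.0.9:554"
```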
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post-processing JP46 QuickTime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and currently (May 2010) cannot do the full conversion from *.mov to DNG, but it uses ffmpeg to extract a JP46 sequence from the MOV.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above 2 applications and does a batch conversion of all QuickTime MOVs found in a particular folder to DNG sequences.<br />
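The workflow above can be sketched as a small shell script. The ffmpeg stream-copy extraction is a standard technique; the JP4toDNGconverter command line is a hypothetical placeholder, so check the actual tool's usage:<br />

```shell
# Sketch of the JP46 -> DNG batch workflow described above.
# The JP4toDNGconverter invocation is a guess at its CLI.
MOV="clip.mov"
SEQ_DIR="${MOV%.mov}_jp46"
DNG_DIR="${MOV%.mov}_dng"
echo "mkdir -p ${SEQ_DIR} ${DNG_DIR}"
echo "ffmpeg -i ${MOV} -c:v copy ${SEQ_DIR}/frame_%05d.jpg"  # step 1: extract JP46 JPEGs without re-encoding
echo "JP4toDNGconverter ${SEQ_DIR} ${DNG_DIR}"               # step 2: hypothetical converter call
```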
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger to the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution. The PIR is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full resolution snapshot.<br />
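As a sketch, one shell command the Arduino could send for the "button pressed" case might look like the following. The snapfull.php script is the same one used in the cron example on this page; the on-camera storage path is an assumption, so adjust it to where your CF card is mounted.<br />

```shell
# Hypothetical on-camera command: store a full-resolution snapshot with a
# timestamped name. The /var/hdd path is an assumption.
STAMP=$(date +%s)
DEST="/var/hdd/snap_${STAMP}.jpg"
echo "wget -q -O ${DEST} http://127.0.0.1/snapfull.php"
```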
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve a precisely locked FPS, to synchronize multiple cameras, or to trigger them from an external device.<br />
<br />
=== About triggered mode ===<br />
Triggered mode is documented [[Trigger|here]]. During your experiments please do not forget that our CMOS sensor is ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have a hardware synchronization capability allowing 1μs jitter between images, as well as an external trigger.<br />
<br />
The NC353L camera has FPGA code allowing camera synchronization, but an additional board is needed. The first board commercialized by Elphel allowing the use of hardware synchronization is the 10369 IO extension board. The board has internal synchronization connectors for cameras mounted in the same camera case and wired internally, and an external opto-isolated modular RJ-14 connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger and one camera (or any other device) serving as master. To trigger the image from all cameras, the "master" device needs to send a 3-5V pulse on the synchronization cable. <br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/Os and a non-isolated high-current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs, so each sensor can be triggered with a specified delay from the trigger common to multiple cameras. There is also circuitry to drive the sensor trigger input.<br />
<br />
The same FPGA module can be used in a single-camera configuration to provide precise control over the frame rate. The period of the free-running sensor is defined as the product of the number of lines, the number of pixels in a line (including invisible margins), and the pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
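Since the period is programmed in pixel-clock cycles, the value for a target frame rate is simply clock/fps. The 96 MHz pixel clock and the parsedit.php URL layout below are assumptions to verify against your camera:<br />

```shell
# Trigger period in pixel-clock cycles for a target frame rate.
# The 96 MHz pixel clock is an assumption - check your sensor clock.
PCLK_HZ=96000000
FPS=4
TRIG_PERIOD=$((PCLK_HZ / FPS))   # 24000000 cycles for 4 FPS
echo "http://192.168.0.9/parsedit.php?immediate&TRIG=4&TRIG_PERIOD=${TRIG_PERIOD}"
```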
<br />
=== Playing with a LED (or a flash lamp) ===<br />
<br />
Here are some examples with a camera in triggered mode. The camera is filming at full resolution at 1 FPS.<br />
<br />
[[Image:Trigger blink.jpg|thumb|A LED is triggered by the camera and pointed directly to the sensor]]<br />
<br />
{|<br />
|[[Image:trig_1.jpeg|thumb|The LED is triggered just after the sensor is fully erased and the exposure is > 1/15s]]<br />
|[[Image:trig_2.jpeg|thumb|Here the end of the image was erased after the LED was triggered.]]<br />
|[[Image:trig_3.jpeg|thumb|Here the LED was triggered before the image was completely erased, and the exposure was not long enough, so the top of the image had already been read out at the time of the LED flash.]]<br />
|}<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor conceived to create a human-machine interface. Originally Likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or of a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map Likoboard's tactile interface to control exposure on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP program as a daemon to communicate with Likoboard over HID and set the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge,<br />
and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native CPU.<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that prints debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I went back to libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (somewhere around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* The following [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
* There are many other parameters; selection of the sequence and switching between single/multi mode is included in camvc (controls are shown when the 10359 board is detected in the system)<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. Set the other image parameters<br />
<br />
2. Set the TRIG parameter to 0x4<br />
<br />
3. Set the MULTI_MODE parameter to 0x1<br />
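The three steps above can be sketched as a single hypothetical parsedit.php request (the parameter names come from this page; the URL layout and the "immediate" keyword are assumptions):<br />

```shell
# Hypothetical request enabling combined frame mode on the 10359.
CAMERA_IP=192.168.0.9
URL="http://${CAMERA_IP}/parsedit.php?immediate&TRIG=0x4&MULTI_MODE=0x1"
echo "${URL}"
# With the camera reachable: wget -q -O /dev/null "${URL}"
```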
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8430Elphel workshop in Bordeaux during RMLL 20102010-07-02T10:17:32Z<p>Polto: /* Playing with a LED (or a flash lamp) */</p>
<hr />
<div></div>Poltohttps://wiki.elphel.com/index.php?title=File:Trigger_blink.jpg&diff=8426File:Trigger blink.jpg2010-07-01T17:52:31Z<p>Polto: </p>
<hr />
<div></div>Poltohttps://wiki.elphel.com/index.php?title=File:Trig_3.jpeg&diff=8425File:Trig 3.jpeg2010-07-01T13:34:28Z<p>Polto: </p>
<hr />
<div></div>Poltohttps://wiki.elphel.com/index.php?title=File:Trig_2.jpeg&diff=8424File:Trig 2.jpeg2010-07-01T13:34:12Z<p>Polto: </p>
<hr />
<div></div>Poltohttps://wiki.elphel.com/index.php?title=File:Trig_1.jpeg&diff=8423File:Trig 1.jpeg2010-07-01T13:33:55Z<p>Polto: </p>
<hr />
<div></div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8422Elphel workshop in Bordeaux during RMLL 20102010-07-01T13:16:05Z<p>Polto: </p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech), non-commercial set of conferences, workshops and round tables about Free Software and its applications. The goal of the LSM is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France, from the 6th to the 11th of July, and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware and software, and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economic models of Free Software.<br />
<br />
Some preparations on your end (e.g. pre-installing required packages) would allow us to focus on the main topics and cover any remaining questions. Any question prior or during the workshop should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participant’s list], and please add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on the 7th of July from 13:00 to 17:00 in room TD14 of ENSEIRB.<br />
<br />
For accommodation, a map, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent, network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under the GNU FDL v1.3 license, and all the source code, including FPGA Verilog code, Linux drivers and software, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum required to assemble a camera:<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end for the [[353|Elphel 353 series cameras]].<br />
* [[10353]] - processor board, the computer part of the [[353|Elphel 353 series cameras]].<br />
<br />
These two optional extension boards are very flexible:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 Processor board]] and a sensor board (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Some datasheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list . On the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on the Eyesis high-resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we will return to our next-generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping with mmap), reaching 9-10MB/sec - virtually the full bandwidth of the network. This server does not provide any control over the sensor or FPGA compressor operation; its only purpose is to serve data already acquired into the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired into the system buffer, but it is used when individual images are needed rather than a continuous video stream.<br />
<br />
[[Imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatting the output as XML files. <br />
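As a minimal sketch (the camera IP and the "img" request path are assumptions; see the [[Imgsrv]] page for the actual command set), fetching a frame boils down to a single HTTP GET on port 8081:<br />

```shell
# Compose an imgsrv request URL; imgsrv listens on port 8081.
# The "img" path is an example - check the Imgsrv page for real commands.
imgsrv_url() {
    echo "http://$1:8081/$2"
}
url=$(imgsrv_url 192.168.0.9 img)
echo "$url"
# wget -q "$url" -O frame.jpg   # uncomment when a camera is reachable
```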
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full-resolution snapshot once per minute. Then [http://www.mplayerhq.hu/ mencoder] can be used to assemble the snapshots into videos:<br />
<br />
On the PC, edit your crontab:<br />
<br />
crontab -e<br />
<br />
and add lines like:<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s` ; mkdir ~/timelapse<br />
<br />
This example grabs one full-resolution image per minute from the camera between 6:00 and 21:59, and at 23:10 compresses the acquired images into a time-lapse video.<br />
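The 23:10 cron line above moves the day's frames aside and recreates the folder. As a standalone sketch (the folder name is an example; run it from any writable directory), the rotation step looks like this:<br />

```shell
#!/bin/sh
# Rotate the timelapse folder: archive today's frames and start a fresh
# folder for tomorrow. "./timelapse" is an example path.
BASE="./timelapse"
TS=$(date +%s)
mkdir -p "$BASE"                      # also covers the very first run
mv "$BASE" "${BASE}_archived_$TS"     # put today's frames aside
mkdir -p "$BASE"                      # empty folder for the next cycle
echo "archived to ${BASE}_archived_$TS"
```

Using `mkdir -p` and quoted paths avoids the first-run failure and the unquoted-tilde pitfall of the raw cron line.<br />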
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low-latency display by avoiding caching and is therefore perfectly suited to displaying live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib and DirectFB, but you can also use GGI, SDL (and through it all of its drivers), VESA (on every VESA-compatible card, even without X11!) and some low-level card-specific drivers (for Matrox, 3Dfx and ATI). Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post-processing JP46 QuickTime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva; it currently (May 2010) cannot do the full conversion from *.mov to DNG, but uses ffmpeg to extract a JP46 sequence from the mov.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files, you can use the next application, "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above 2 applications and does a batch conversion of all QuickTime movs found in a particular folder into DNG sequences.<br />
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger to the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for an Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution: the PIR is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full-resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve a precisely locked FPS, to synchronize multiple cameras, or to trigger them from an external device.<br />
<br />
=== About triggered mode ===<br />
Triggered mode is documented [[Trigger|here]]. During your experiments, please do not forget that our CMOS sensor uses an ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have a hardware synchronization capability allowing 1μs jitter between images, as well as external triggering.<br />
<br />
The NC353L camera has FPGA code supporting camera synchronization, but an additional board is needed. The first board commercialized by Elphel that allows hardware synchronization is the [[10369]] IO extension board. It has internal synchronization connectors for cameras mounted in the same camera case and wired internally, plus an external opto-isolated modular RJ-14 connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger, and one camera (or any other device) serving as master. To trigger image acquisition on all cameras, the "master" device needs to send a 3-5V pulse on the synchronization cable.<br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/Os and a non-isolated high-current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs, so each sensor can be triggered with a specified delay from the trigger common to multiple cameras. There is also circuitry to drive the sensor trigger input.<br />
<br />
The same FPGA module can be used in a single-camera configuration to provide precise control over the frame rate. The period of a free-running sensor is defined as the product of the number of lines, the number of pixels in a line (including invisible margins) and the pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames: in free-running ERS mode, exposure overlaps between frames and cannot be controlled independently for each frame.<br />
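As a back-of-the-envelope check of the period formula above (all numbers are illustrative, not taken from a specific sensor datasheet), the free-running frame period is lines &times; pixels-per-line &times; pixel period:<br />

```shell
# Illustrative numbers only - substitute values from your sensor's datasheet.
LINES=1970          # total lines, including invisible margins
PIXELS=2610         # pixels per line, including invisible margins
PCLK_MHZ=96         # 96 MHz pixel clock
# frame period in microseconds = LINES * PIXELS / pixel clock (MHz)
PERIOD_US=$(( LINES * PIXELS / PCLK_MHZ ))
echo "frame period: ${PERIOD_US} us"   # ~53559 us, i.e. about 18.7 fps
```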
<br />
=== Playing with a LED (or a flash lamp) ===<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor conceived to create a human-machine interface. Originally, likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, or as a USB peripheral of an embedded system or a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to control exposure on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP program as a daemon that communicates with likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge,<br />
and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native CPU.<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that printed debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I went back to libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* The following [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Corresponding 10359 register<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| The 2 LSBs are the Direct Channel bits: 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
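The default value from the table can be decoded with plain shell arithmetic (a sketch; the channel mapping is taken from the Comments column above):<br />

```shell
# Decode the direct-channel bits (the 2 LSBs) of a MULTI_SEQUENCE value.
# Mapping per the table above: 0x1 -> J2, 0x2 -> J3, 0x3 -> J4.
val=$(( 0x39 ))        # default MULTI_SEQUENCE value from the table
ch=$(( val & 3 ))      # keep only the two least significant bits
echo "direct channel bits: $ch"   # 1 -> sensor on J2
```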
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
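Steps 2 and 3 could be scripted over HTTP. Note this is only a sketch: whether parsedit.php accepts parameter assignments directly in the query string depends on the firmware version, so verify against your camera first.<br />

```shell
# Compose a parsedit.php request applying steps 2 and 3 above.
# Query-string assignment of parameters is an assumption - check your firmware.
CAM=192.168.0.9
url="http://$CAM/parsedit.php?TRIG=0x4&MULTI_MODE=0x1"
echo "$url"
# wget -q -O /dev/null "$url"   # uncomment to send it to a real camera
```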
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8421Elphel workshop in Bordeaux during RMLL 20102010-07-01T13:13:16Z<p>Polto: /* Automation */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as a beer and as a speech) and non commercial conferences, workshops and round table about Free Software and its applications. The LSM goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France from 6th to 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware, software and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economical models of Free Software.<br />
<br />
Some preparations on your end (e.g. pre-installing required packages) would allow us to focus on the main topics and cover any remaining questions. Any question prior or during the workshop should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participant’s list], and please add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accomodation, map, and any other information please visit [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent, network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under GNU FDL v1.3 license and all the source code including FPGA Verilog code, Linux drivers and software are available under GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum requirements to assemble a camera:<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end to [[353|Elphel 353 series cameras]].<br />
* [[10338]] - processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
<br />
Those two are optional and very flexible extension boards:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 Processor board]] and a sensor one (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also have few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to circuit diagram, parts list, PCB layout & Gerber files. Some data-sheets and other necessary documentations are linked from.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented more in detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list , last day of RMLL 2010 few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on Eyesis high resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we return to our next generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
= Overview of th main software available on the camera =<br />
== Imgserv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] is connected through 8081 port and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping mmap), reaching 9-10MB/sec - virtually the full bandwidth of the network. This server does not provide any control over the sensor or FPGA compressor operation, its only purpose is to get data acquired to the (currently 19 MB) circular buffer in the system RAM. It is intended to have the functionality similar to the camera video streamers that also deal with the data already being acquired to the system buffer to be used when individual images are needed rather than the continuous video stream .<br />
<br />
The [[imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status formatting output as the xml files. <br />
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full resolution snapshot once per minute. When we can use [http://www.mplayerhq.hu/ mencoder] to assemble videos:<br />
<br />
On the PC edit your cron:<br />
<br />
crontab -e<br />
<br />
you should add a line like<br />
<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~timelapse ~/timelapse_`date +%s` ;mkdir /home/polto/timelapse<br />
<br />
This example get one image per second in full resolution from the camera from 6:00 to 21:59 and at 23:10 compress acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework, that plays most multimedias files, medias and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extendable. [[VLC | Vlc page]] provide usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low latency displaying by avoiding caching and is therefore perfectly suited to display live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post processing JP46 quicktime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and can currently (May 2010) not to the full transition from *.mov to dng but uses ffmpeg to extract a jp46 sequence from the mov.<br />
This is the first step in the process.<br />
<br />
Once there is a image sequence of JP46 (jpeg) files you can use the next application called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, or Adobe Aftereffects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above 2 applications and does a batch conversions of all quicktime movs found in a particular folder to a DNG sequence.<br />
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface button, motion detector or any other external trigger to the camera.<br />
<br />
In [[Arduino| this example]] I wrote a code for Arduino to handle a button and a PIR motion detector. The Arduino is attached to camera's USB and send shell commands to the camera for execution. So PIR is binded to [[camogm]] to record on the internal CF card on motion detection and the button store a full resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve precise locked FPS, to synchronize multiple cameras or to trigger them from an external device.<br />
<br />
== About triggered mode ==<br />
Triggered mode is documented [[Trigger|here]]. During your experimentations please do not forget that our CMOS sensor is ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have hardware synchronization capability allowing 1μs jitter between images as well as external trigger.<br />
<br />
The NC353L camera have FPGA code allowing the camera synchronization, but an additional board is needed. The first board allowing the use of hardware synchronization commercialized by Elphel is the 10369 IO extension board. The board have internal synchronization connectors for cameras mounted in the same camera case and wired internally and an external modular RJ-14 opto-isolated connector.<br />
<br />
You can have several cameras in a so called "slave" mode waiting to receive the trigger and one camera (or any other device) serving as master. To trigger the image from all cameras the "master" device need to send a 3-5v impulse on the synchronization cable. <br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/O's and a non-isolated high current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs so each sensor can be triggered with a specified delay from the common for-multiple-cameras trigger. There is also circuitry to drive sensor trigger input.<br />
<br />
The same FPGA module can be used in a single camera configuration to provide precise control over the frame rate. The period of the free running sensor is defined as a product of the number of lines by the number of pixels in a line (including invisible margins) by a pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In a free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor conceived to create a human – machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelery by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies Likobord can be integrated as an autonomous remote control to an application, as well as a peripheral USB of an embedded system or of a personal computer.<br />
<br />
Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. It's goal is to map likoboard's tactile interface to control exposure on the camera.<br />
<br />
Likoboard is connected to the camera via USB, the camera runs a PHP software as daemon to communicate with likoboard over HID and set the exposure time using [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I did compile the latest libusb legacy (0.1.12) from sourceforge,<br />
and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native CPU.<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that printed debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I started over with libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* Following this [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8420Elphel workshop in Bordeaux during RMLL 20102010-07-01T13:07:19Z<p>Polto: /* trigger internal & external, multiple cameras synchronization, trigger a camera from GPS */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech), non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM's goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France from 6th to 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware, software and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economical models of Free Software.<br />
<br />
Some preparations on your end (e.g. pre-installing required packages) would allow us to focus on the main topics and cover any remaining questions. Any question before or during the workshop should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participant’s list], and please add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, map, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent, network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under GNU FDL v1.3 license and all the source code including FPGA Verilog code, Linux drivers and software are available under GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum requirements to assemble a camera:<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end to [[353|Elphel 353 series cameras]].<br />
* [[10353]] - processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
<br />
These two optional extension boards are very flexible:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 Processor board]] and a sensor one (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Data sheets and other necessary documentation are linked from there as well.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list - on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on Eyesis high resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we will return to our next generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping with mmap), reaching 9-10 MB/s - virtually the full bandwidth of the network. This server does not provide any control over the sensor or the FPGA compressor; its only purpose is to serve data already acquired to the (currently 19 MB) circular buffer in system RAM. It is intended to provide functionality similar to the camera video streamers, which also deal with data already acquired to the system buffer, but it is used when individual images are needed rather than a continuous video stream.<br />
<br />
[[Imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatting the output as XML files.<br />
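As a minimal usage sketch, the most recent image in [[circbuf]] can be fetched with a single GET. The camera IP is assumed to be 192.168.0.9, and "img" is taken to be the basic [[imgsrv]] request for the latest frame - see the [[Imgsrv]] page for the full command set:<br />

```shell
# Build the imgsrv request for the latest JPEG (dry run; camera IP is assumed).
CAM=192.168.0.9
IMG_URL="http://$CAM:8081/img"
echo "$IMG_URL"
# wget -q "$IMG_URL" -O latest.jpg   # uncomment on a network with the camera
```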
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
HTTP/1.1<br />
<br />
wget, curl<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low-latency display by avoiding caching and is therefore perfectly suited to displaying live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post-processing JP46 QuickTime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and currently (May 2010) cannot do the full conversion from *.mov to DNG; it uses ffmpeg to extract a JP46 sequence from the mov.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above two applications and does a batch conversion of all QuickTime movs found in a particular folder to DNG sequences.<br />
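The three-step workflow above can be sketched as a dry-run batch loop. The command-line syntax of JP4toDNGconverter shown here is hypothetical - check the tool's own documentation before use:<br />

```shell
# Map an extracted JP46 frame name to its DNG name (step 2 of the workflow).
dng_name() { printf '%s' "${1%.jpg}.dng"; }

# Dry run over a couple of hypothetical frames extracted by ffmpeg in step 1:
for f in frame_0001.jpg frame_0002.jpg; do
    echo "JP4toDNGconverter $f $(dng_name "$f")"   # hypothetical CLI form
done
```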
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger to the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution: the PIR sensor is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full-resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
<br />
Triggered mode can be used to achieve precise locked FPS, to synchronize multiple cameras or to trigger them from an external device.<br />
<br />
== About triggered mode ==<br />
Triggered mode is documented [[Trigger|here]]. During your experiments please do not forget that our CMOS sensor uses an ERS ([[Electronic Rolling Shutter]]).<br />
<br />
We have a hardware synchronization capability allowing 1μs jitter between images, as well as external triggering.<br />
<br />
The NC353L camera has FPGA code allowing camera synchronization, but an additional board is needed. The first board commercialized by Elphel that allows the use of hardware synchronization is the 10369 IO extension board. It has internal synchronization connectors for cameras mounted in the same case and wired internally, and an external opto-isolated modular RJ-14 connector.<br />
<br />
You can have several cameras in a so-called "slave" mode waiting to receive the trigger, and one camera (or any other device) serving as master. To trigger image acquisition on all cameras, the "master" device needs to send a 3-5V pulse on the synchronization cable.<br />
<br />
The [[10369]] boards have two individual sets of I/Os for the synchronization of several cameras:<br />
<br />
1. Small 4-pin flex cable connectors to interconnect multiple camera boards in a common enclosure<br />
<br />
2. Modular RJ-14 4-pin connectors for synchronizing multiple individual cameras<br />
<br />
Each of the two channels has bi-directional opto-isolated I/Os and a non-isolated high-current driver that can trigger multiple cameras. The FPGA code includes a programmable generator that can control the synchronization output drivers, and a programmable input delay generator driven by the selected opto-isolated inputs, so each sensor can be triggered with a specified delay from the trigger that is common to multiple cameras. There is also circuitry to drive the sensor trigger input.<br />
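The programmable generator is driven by the TRIG_PERIOD parameter visible in the parsedit.php link used elsewhere on this page. A minimal sketch of the period arithmetic, assuming the generator counts a 96 MHz clock (an assumption - check the [[Trigger]] page for your camera before relying on it):<br />

```shell
# Assumption: TRIG_PERIOD counts cycles of a 96 MHz clock.
FPS=25
CLOCK_HZ=96000000
TRIG_PERIOD=$(( CLOCK_HZ / FPS ))
echo "TRIG_PERIOD for $FPS fps: $TRIG_PERIOD"
```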
<br />
The same FPGA module can be used in a single camera configuration to provide precise control over the frame rate. The period of the free running sensor is defined as a product of the number of lines by the number of pixels in a line (including invisible margins) by a pixel period, so there are some restrictions on the period that can be programmed. This triggered mode of sensor operation also simplifies alternating the exposure time between consecutive frames. In a free-running ERS mode, exposure overlaps between frames and it is not possible to control it independently for each frame.<br />
<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor designed to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to exposure control on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP daemon that communicates with likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge,<br />
and libhid from SVN (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native CPU.<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that printed debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I started over with libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* Following this [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8419Elphel workshop in Bordeaux during RMLL 20102010-07-01T12:19:02Z<p>Polto: /* boards */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech), non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM's goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France from 6th to 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware, software and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economical models of Free Software.<br />
<br />
Some preparations on your end (e.g. pre-installing required packages) would allow us to focus on the main topics and cover any remaining questions. Any question before or during the workshop should be discussed on Elphel’s IRC channel, wiki or public mailing list so we can have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participant’s list], and please add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on 7th July from 13:00 to 17:00 in the room TD14 of ENSEIRB.<br />
<br />
For accommodation, map, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent, network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our mailing [http://www3.elphel.com/list list] under GNU FDL v1.3 license and all the source code including FPGA Verilog code, Linux drivers and software are available under GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum requirements to assemble a camera:<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end to [[353|Elphel 353 series cameras]].<br />
* [[10353]] - processor board is the computer part of the [[353|Elphel 353 series cameras]].<br />
<br />
These two optional extension boards are very flexible:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 Processor board]] and a sensor one (up to three sensor boards can be connected)<br />
* [[10369]] - IO extension board. SATA, CF, USB 1.1, GPIO, i2c, ... This board also has a few [[10369#Adapters | adapters]]<br />
<br />
On each of the board pages you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Data sheets and other necessary documentation are linked from there as well.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
http://www3.elphel.com/price_list - on the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on Eyesis high resolution panoramic camera - http://blogs.elphel.com/category/panoramic/ <br />
<br />
And soon we will return to our next generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping with mmap), reaching 9-10 MB/s - virtually the full bandwidth of the network. This server does not provide any control over the sensor or the FPGA compressor; its only purpose is to serve data already acquired to the (currently 19 MB) circular buffer in system RAM. It is intended to provide functionality similar to the camera video streamers, which also deal with data already acquired to the system buffer, but it is used when individual images are needed rather than a continuous video stream.<br />
<br />
[[Imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatting the output as XML files.<br />
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full-resolution snapshot once per minute. Then we can use mencoder, for example, to assemble the images into videos:<br />
<br />
On the PC edit your cron:<br />
crontab -e<br />
<br />
you should add a line like <br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg 'mf://~/timelapse/*.jpg' -o ~/timelapse_videos/time_lapse_`date +%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +%s`; mkdir ~/timelapse<br />
<br />
This example gets one full-resolution image per minute from the camera between 6:00 and 22:00, and at 23:10 compresses the acquired images into a time-lapse video.<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extensible. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low-latency display by avoiding caching and is therefore perfectly suited to displaying live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post-processing JP46 QuickTime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and currently (May 2010) cannot do the full conversion from *.mov to DNG; it uses ffmpeg to extract a JP46 sequence from the mov.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above two applications and does a batch conversion of all QuickTime movs found in a particular folder to DNG sequences.<br />
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger to the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution: the PIR sensor is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full-resolution snapshot.<br />
<br />
== Trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor designed to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to exposure control on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP daemon that communicates with likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge,<br />
and libhid from SVN (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native CPU.<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that printed debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I started over with libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* Following this [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing the camera parameters.<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. set other image parameters<br />
<br />
2. set TRIG parameter to 0x4<br />
<br />
3. set MULTI_MODE parameter to 0x1<br />
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Elphel_workshop_in_Bordeaux_during_RMLL_2010&diff=8418Elphel workshop in Bordeaux during RMLL 20102010-07-01T12:15:45Z<p>Polto: /* Automation */</p>
<hr />
<div>= About RMLL =<br />
<br />
[http://2010.rmll.info/ RMLL 2010] (Libre Software Meeting) is a free (as in beer and as in speech), non-commercial set of conferences, workshops and round tables about Free Software and its applications. The LSM's goal is to provide a platform for exchange among Free Software developers, users and stakeholders.<br />
<br />
This year RMLL will take place in Bordeaux, France from 6th to 11th of July and is organised by a set of local Free Software user groups and organizations with the help of several public and private sponsors. The event will be hosted by ENSEIRB-MATMECA, U-Bordeaux 1, LaBRI & ENSCB.<br />
<br />
= About the workshop =<br />
<br />
The workshop will take place during RMLL 2010 and is organized by our Swiss partner [http://alsenet.com/ Alsenet SA].<br />
<br />
All our European customers, partners and friends of Elphel are invited to participate in the workshop. It will be a perfect occasion to meet other Elphel customers, users & developers, to test the hardware and software, and to discuss project ideas and future developments. RMLL is also an ideal place to get a better understanding of the economic models of Free Software.<br />
<br />
Some preparation on your end (e.g. pre-installing required packages) will allow us to focus on the main topics and cover any remaining questions. Any questions prior to or during the workshop should be discussed on Elphel’s IRC channel, wiki or public mailing list so that we have a public log that is potentially useful to other Elphel users as well.<br />
<br />
Please sign up on the [http://doodle.com/89tms34cippzw4kv participants’ list] and add your availability during RMLL so that we can find the optimal day & time for the workshop. All further information about this workshop will be available on this wiki page.<br />
<br />
We are looking forward to seeing you at the workshop!<br />
<br />
= Dates, time and place =<br />
<br />
The workshop will take place on the 7th of July from 13:00 to 17:00 in room TD14 at ENSEIRB.<br />
<br />
For accommodation, maps, and any other information please visit the [http://2010.rmll.info/-Hebergements-.html RMLL web site].<br />
<br />
= Overview of the camera building blocks =<br />
<br />
Elphel's goal is to provide high-quality, intelligent network cameras based on open hardware and free software. Elphel hopes its modular camera design will attract software and FPGA engineers interested in exploring high-definition videography, among other innovative applications.<br />
<br />
Both turnkey camera assemblies and OEM modules are available. All the documentation is published on this wiki and our [http://www3.elphel.com/list mailing list] under the GNU FDL v1.3 license, and all the source code, including the FPGA Verilog code, Linux drivers and software, is available under the GNU GPL v3 license.<br />
<br />
== boards ==<br />
All the separate camera components are listed on this [[353#Modules_for_the_353.2F363_series_cameras | page]].<br />
<br />
Here are the most commonly used:<br />
<br />
The 10338 & 10353 boards are the minimum required to assemble a camera:<br />
* [[10338]] - Aptina MT9P031/MT9P001 5MPix (2592x1944) sensor front end for [[353|Elphel 353 series cameras]].<br />
* [[10353]] - processor board, the computer part of the [[353|Elphel 353 series cameras]].<br />
<br />
These two boards are optional and very flexible extensions:<br />
* [[10359]] - multi-function board. It can be connected between the [[10353|10353 processor board]] and a sensor board (up to three sensor boards can be connected).<br />
* [[10369]] - IO extension board: SATA, CF, USB 1.1, GPIO, I2C, ... This board also has several [[10369#Adapters | adapters]].<br />
<br />
On each board page you will find links to the circuit diagram, parts list, PCB layout & Gerber files. Datasheets and other necessary documentation are linked from there.<br />
<br />
== assemblies == <br />
Our turnkey modules are listed [http://www3.elphel.com/353_turnkey here] and are documented in more detail on [[Elphel camera parts]].<br />
<br />
== price list ==<br />
Price list: http://www3.elphel.com/price_list<br />
<br />
On the last day of RMLL 2010 a few cameras will be sold with a special 10% GPL contribution discount.<br />
<br />
== under development ==<br />
Elphel is currently working on the Eyesis high-resolution panoramic camera - http://blogs.elphel.com/category/panoramic/<br />
<br />
Soon we will return to our next-generation camera development - http://blogs.elphel.com/category/andrey/10373/<br />
<br />
= Elphel SDK =<br />
= reflashing camera firmware & FPGA bitstream =<br />
<br />
= Overview of the main software available on the camera =<br />
== Imgsrv ==<br />
[[Imgsrv]] was developed to increase the transfer rate of individual images acquired by the Elphel 353 series cameras. [[Imgsrv]] listens on port 8081 and writes GET responses directly to the socket (reading image data from the [[circbuf]] using zero-copy memory mapping with mmap), reaching 9-10 MB/s - virtually the full bandwidth of the network. This server does not provide any control over the sensor or FPGA compressor operation; its only purpose is to serve data already acquired into the (currently 19 MB) circular buffer in system RAM. Its functionality is similar to that of the camera video streamers, which also deal with data already acquired into the system buffer, but it is intended for when individual images are needed rather than a continuous video stream.<br />
<br />
The [[imgsrv]] makes use of the new functionality of the [[Circbuf | /dev/circbuf]] driver, providing it with a convenient web front end. It serves JPEG images (with [[Exif_init | Exif]] data attached) as well as metadata and [[circbuf]] status, formatted as XML.<br />
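For example, an individual JPEG can be fetched with any HTTP client. The sketch below assumes the default camera IP and an imgsrv "/img" style path - check the [[Imgsrv]] page for the exact request syntax:<br />

```shell
# Fetch one JPEG (with Exif) from imgsrv and save it under a
# timestamped name. CAM and the /img path are assumptions --
# see the Imgsrv page for the actual URL syntax.
CAM=192.168.0.9
URL="http://${CAM}:8081/img"
NAME="snap_$(date +%s).jpg"
# '|| true' keeps the script going if the camera is unreachable
wget -q "$URL" -O "$NAME" || true
echo "requested $URL -> $NAME"
```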
<br />
== astreamer ==<br />
== camogm ==<br />
== daemons ==<br />
== lighttpd / FastCGI / PHP ==<br />
== different PHP scripts ==<br />
<br />
= client software compatible with Elphel cameras =<br />
<br />
== Browsers ==<br />
Firefox 3.6<br />
<br />
=== Automation ===<br />
You can use command line tools such as [http://en.wikipedia.org/wiki/Wget wget] or [http://en.wikipedia.org/wiki/CURL curl] to automate many things on the camera.<br />
<br />
For example, on your PC you can use [http://en.wikipedia.org/wiki/Cron cron] and [http://en.wikipedia.org/wiki/Wget wget] to automatically download a full-resolution snapshot once per minute. We can then use mencoder, for example, to assemble the snapshots into videos:<br />
<br />
On the PC edit your crontab:<br />
crontab -e<br />
<br />
you should add lines like:<br />
#.---------------- minute (0 - 59) <br />
#| .------------- hour (0 - 23)<br />
#| | .---------- day of month (1 - 31)<br />
#| | | .------- month (1 - 12) OR jan,feb,mar,apr ... <br />
#| | | | .----- day of week (0 - 7) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat <br />
#| | | | |<br />
#* * * * * command to be executed<br />
* 6-21 * * * wget http://192.168.0.9/snapfull.php -O ~/timelapse/`date +\%s`.jpg > /dev/null 2>&1<br />
10 23 * * * mencoder -ovc copy -mf fps=8:type=jpg "mf://$HOME/timelapse/*.jpg" -o ~/timelapse_videos/time_lapse_`date +\%s`.avi > /dev/null 2>&1; mv ~/timelapse ~/timelapse_`date +\%s`; mkdir ~/timelapse<br />
<br />
This example grabs one full-resolution image per minute from the camera between 6:00 and 21:59, and at 23:10 compresses the acquired images into a time-lapse video. (Note that in a crontab the % character must be escaped as \%, since an unescaped % is treated as a newline.)<br />
<br />
== Video frameworks ==<br />
=== Libraries ===<br />
==== FFMPEG ====<br />
==== lib livemedia (live555) ====<br />
<br />
=== VLC, libvlc ===<br />
VLC is a free and open-source cross-platform multimedia player and framework that plays most multimedia files, media and streaming protocols.<br />
<br />
It is simple to use, yet very powerful and extendable. The [[VLC | VLC page]] provides usage examples.<br />
<br />
=== MPlayer / Mencoder ===<br />
<br />
[[MPlayer]] focuses on low-latency display by avoiding caching and is therefore perfectly suited to displaying live video streams from Elphel cameras.<br />
<br />
Another great feature of [[MPlayer]] is the wide range of supported output drivers. It works with X11, Xv, DGA, OpenGL, SVGAlib, fbdev, AAlib, DirectFB, but you can use GGI, SDL (and this way all their drivers), VESA (on every VESA compatible card, even without X11!) and some low level card-specific drivers (for Matrox, 3Dfx and ATI), too! Most of them support software or hardware scaling.<br />
<br />
=== GStreamer ===<br />
<br />
[[Using gstreamer]]<br />
<br />
= few words about network configuration for unicast and multicast modes =<br />
<br />
= Post-processing =<br />
== imageJ plugins for Elphel ==<br />
== JP46 post-processing workflow ==<br />
There are currently 2 applications and a demo script for post processing JP46 quicktime movies: http://elphel.svn.sourceforge.net/viewvc/elphel/tools/<br />
<br />
Movie2DNG is developed by Paulo Henrique Silva and currently (May 2010) cannot do the full conversion from *.mov to DNG, but uses ffmpeg to extract a JP46 sequence from the mov.<br />
This is the first step in the process.<br />
<br />
Once there is an image sequence of JP46 (JPEG) files you can use the next application, called "JP4toDNGconverter". This tool uses a modified libtiff to write DNG files that can then be opened the same way as RAW files from other cameras in UFRaw, RawStudio, Adobe After Effects, etc.<br />
<br />
The BatchProcess PHP script basically combines the above 2 applications and does a batch conversion of all QuickTime movs found in a particular folder to DNG sequences.<br />
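The two steps can be sketched as a shell script. The ffmpeg flags and the JP4toDNGconverter command line below are illustrative assumptions - consult each tool's own documentation:<br />

```shell
# Step 1: extract the JP46 (JPEG-wrapped) frames from the QuickTime mov.
# Step 2: convert each frame to DNG. The ffmpeg flags and the
# JP4toDNGconverter invocation are assumptions, shown for illustration.
MOV=${1:-movie.mov}
if [ -f "$MOV" ]; then
    mkdir -p frames
    ffmpeg -i "$MOV" -vcodec copy "frames/%05d.jpg"
    for f in frames/*.jpg; do
        JP4toDNGconverter "$f" "${f%.jpg}.dng"
    done
else
    echo "input $MOV not found"
fi
```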
<br />
== Gstreamer plugins for Elphel ==<br />
== using Gstreamer and GLSL with Elphel cameras ==<br />
== working with OpenCV ==<br />
== using OpenCV and GpuCV with Elphel cameras ==<br />
<br />
= Interfacing with the camera, triggering, synchronization = <br />
== simple and stupid integration with Arduino ==<br />
<br />
http://arduino.cc can be used to easily interface a button, a motion detector or any other external trigger with the camera.<br />
<br />
In [[Arduino| this example]] I wrote code for the Arduino to handle a button and a PIR motion detector. The Arduino is attached to the camera's USB port and sends shell commands to the camera for execution: the PIR sensor is bound to [[camogm]] to record to the internal CF card on motion detection, and the button stores a full-resolution snapshot.<br />
<br />
== trigger internal & external, multiple cameras synchronization, trigger a camera from GPS ==<br />
== Deep hardware / software integration with Elphel: example on likoboard and likomapper software ==<br />
=== About Likoboard and Likomapper projects ===<br />
[http://likoboard.com Likoboard] is a re-programmable microprocessor board conceived to create a human-machine interface. Originally likoboard was co-developed by Alsenet SA for an exclusive piece of high-end jewelry by the [http://olfact.ch Maison Olfact]. Being entirely based on Libre technologies, Likoboard can be integrated as an autonomous remote control for an application, as well as a USB peripheral of an embedded system or of a personal computer.<br />
<br />
The Likomapper project was initiated by [[User_talk:Phil|Phil]] and implemented by Alsenet SA. Its goal is to map likoboard's tactile interface to exposure control on the camera.<br />
<br />
Likoboard is connected to the camera via USB; the camera runs a PHP program as a daemon that communicates with likoboard over HID and sets the exposure time using the [[PHP_in_Elphel_cameras|Elphel PHP extension]].<br />
<br />
=== Cross-compiling the libs ===<br />
<br />
liblikoboard depends on libhid, and libhid needs libusb legacy.<br />
<br />
I compiled the latest legacy libusb (0.1.12) from SourceForge, and libhid-svn (rev. 364) from http://libhid.alioth.debian.org/<br />
<br />
[[Cross_compiling_standalone_packages_for_the_camera|See here for the cross-compiling method]]<br />
<br />
=== Porting the PHP extension ===<br />
<br />
To port the PHP extension php_likoboard I had to:<br />
<br />
* Build it for my native CPU.<br />
* Source init_env from the elphel353/ folder.<br />
* Copy the php_likoboard folder to elphel353/apps/php/ext/php_likoboard.<br />
* Change to this folder and run ../../elphize.<br />
* Run make and copy .libs/php_likoboard.so to /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/.<br />
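Collected into a script, the steps above look roughly like this (a sketch: it assumes the php_likoboard sources are already copied into the SDK tree and that it is run from the elphel353/ root):<br />

```shell
# Sketch of the porting steps above, using the paths named in the text.
# Run from the elphel353/ SDK root.
if [ -f init_env ]; then
    . ./init_env
    cd apps/php/ext/php_likoboard
    ../../elphize
    make
    # install into the camera's PHP extension directory
    cp .libs/php_likoboard.so \
       /usr/local/crisv32/lib/php/extensions/no-debug-non-zts-20060613/
else
    echo "run this script from the elphel353/ SDK root"
fi
```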
<br />
=== Debugging ===<br />
<br />
php_likoboard was not working on the camera, so I wrote a test application that printed debugging messages to the console to see what was going wrong.<br />
<br />
libusb-0.1.12 appears to be broken when running on the camera, so I went back to libusb-0.1.11 and it worked.<br />
<br />
=== Adding a PHP ELPHEL_DAEMON to the firmware ===<br />
<br />
=== Integration to Elphel CVS ===<br />
<br />
= multisensor examples with 10359 board =<br />
* A while ago (somewhere around 8.0.8.xx) the command addresses were changed - so please check the [[Talk:10359|10359 discussion page]]<br />
* Andrey added extra parameters and now everything can be controlled from parsedit.php<br />
* The following [http://192.168.0.9/parsedit.php?embed=.1&test=0&showseq=0&title=Camera+WOI+Controls&TRIG&TRIG_PERIOD&MULTI_MODE&MULTI_SEQUENCE&refresh link] might be helpful for changing camera parameters.<br />
<br />
===Switching channels===<br />
{| border=1<br />
| Parameter<br />
| Default value<br />
| Equal to 10359 reg<br />
| Comments<br />
|-<br />
| MULTI_SEQUENCE<br />
| [[Talk:10359|0x39]]<br />
| 0x806<br />
| 2 LSBs - Direct Channel bits - 0x1 - J2, 0x2 - J3, 0x3 - J4<br />
|}<br />
===Combined frame mode===<br />
1. Set the other image parameters<br />
<br />
2. Set the TRIG parameter to 0x4<br />
<br />
3. Set the MULTI_MODE parameter to 0x1<br />
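From a PC, these steps can be scripted over HTTP. The camera IP and the parsedit.php parameter-setting syntax below are assumptions - verify them against the parsedit link above:<br />

```shell
# Set combined-frame mode over HTTP. The IP and the parameter-setting
# URL syntax are assumptions -- verify against parsedit.php on your camera.
CAM=192.168.0.9
URL="http://${CAM}/parsedit.php?TRIG=4&MULTI_MODE=1"
# '|| true' keeps the script going when no camera is attached
wget -q -O /dev/null "$URL" || true
echo "requested TRIG=0x4, MULTI_MODE=0x1 on $CAM"
```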
<br />
= Publications = <br />
http://www3.elphel.com/articles</div>Poltohttps://wiki.elphel.com/index.php?title=Electronic_Rolling_Shutter&diff=8417Electronic Rolling Shutter2010-07-01T00:27:48Z<p>Polto: /* Hardware "margins" */</p>
<hr />
<div>== Basics ==<br />
Most CMOS image sensors (to save one transistor per cell compared to a true "snapshot" shutter) use an Electronic Rolling Shutter. Basically, it implements two pointers to the sensor pixels, both proceeding in the same line-scan order across the sensor.<br />
<br />
One is the erase pointer, the other is the readout pointer. The erase pointer runs ahead, discharging each photosensitive cell; the readout pointer follows. Each pixel sees (and accumulates) light for the same exposure time (from the moment the erase pointer passes it until it is read out), but this happens at a different moment for each row.<br />
<br />
If you capture an image of a fast-passing car with ERS and a short exposure, it will all be sharp (no blurring), but the car will seem to lean backwards: the roof is captured earlier than the wheels, and this time difference across the frame can be as long as 1/15 of a second for the full frame of the MT9P001 5MPix sensor (it equals the frame readout time, which in most circumstances is 1 second divided by the frame rate).<br />
<br />
== Exposure control ==<br />
Exposure in ERS mode is defined by the time delay between the erase and readout pointers and is usually programmed into the sensor as a number of scan lines between the two. For obvious reasons this time cannot exceed the frame period, so the maximal exposure is achieved when each pixel is erased immediately after being read out; this exposure is equal to the frame period.<br />
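As a back-of-the-envelope illustration (all numbers below are examples, not exact sensor register values), converting a desired exposure time into a scan-line count looks like this:<br />

```shell
# Exposure is programmed as a number of scan lines. One scan line takes
# (active width + horizontal blanking) pixel clocks. The numbers here
# are illustrative, not exact MT9P001 defaults.
CLK_MHZ=96        # pixel clock, MHz
LINE_PIX=3492     # pixels per line, including horizontal blanking
EXP_US=10000      # desired exposure: 10 ms
# line time in microseconds is LINE_PIX / CLK_MHZ (~36.4 us here)
LINES=$(awk -v p=$LINE_PIX -v c=$CLK_MHZ -v e=$EXP_US \
        'BEGIN { printf "%d", e / (p / c) }')
echo "program ~$LINES scan lines for a ${EXP_US} us exposure"
```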
<br />
=== Virtual frame ===<br />
Sometimes, when lighting conditions are poor, even the full-frame exposure is not enough, so most sensors have a means to increase the frame time. Different manufacturers call it differently; it is sometimes called a "virtual frame" or horizontal and vertical blanking. With the virtual frame, the readout timing is as if the sensor were much bigger than it really is, with zero values fed to the output instead of the nonexistent pixels. But since "reading" the nonexistent pixels in each row (horizontal blanking) and the nonexistent rows of pixels (vertical blanking) takes the same time as real ones, this extends the frame readout and increases the maximal exposure time.<br />
<br />
But even those "imaginary" frames are not infinite - different sensors have different bit widths of the corresponding counters. With the current software you can see these counters overflow in Elphel cameras as a horizontal border in the image separating areas of different brightness when the exposure is too high (depending on the resolution/size, this can happen at around one second).<br />
<br />
=== Hardware "margins" ===<br />
<br />
The total frame readout time can never be as small as the pixel readout time multiplied by the total number of pixels; all sensors (that I know of) need some extra time in each line and some extra time for each frame. You may consider these a kind of hardware margins (like those for printers), and usually these margins are much bigger horizontally than vertically. Practically, that means that if you make the sensor capture a narrow horizontal band you can get a much higher frame rate than if you cut out a vertical column. You may find the particular calculations of the frame timing in the Aptina (Micron) MT9P001 sensor datasheet, linked [http://www.aptina.com/products/image_sensors/mt9p001/#overview here].<br />
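A rough frame-time estimate with such margins included (the blanking numbers are illustrative, not datasheet values):<br />

```shell
# Frame readout time = (width + horizontal margin) * (height + vertical
# margin) pixel clocks. The blanking numbers are illustrative only --
# consult the sensor datasheet for the real minimums.
CLK_MHZ=96           # pixel clock, MHz
W=2592; HB=900       # active width + horizontal blanking, pixels
H=1944; VB=8         # active height + vertical blanking, rows
FRAME_US=$(( (W + HB) * (H + VB) / CLK_MHZ ))
echo "frame readout ~${FRAME_US} us (~$(( 1000000 / FRAME_US )) fps)"
```

The same number is also the top-to-bottom ERS skew for a full frame.<br />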
<br />
== ERS-built CMOS sensors with "global" shutter ==<br />
Some CMOS sensors have a "global shutter" function mentioned in their specs, but it is usually far from the true snapshot shutters (as in modern CCD-based camcorders). Such a mode has a simultaneous erase of all pixels, but the end of exposure is still determined by the readout, so the exposure time of each pixel will be different: pixels at the bottom of the frame will be exposed longer, by up to the frame readout time. Such a mode is useful with a flash lamp triggered after all the pixels are erased but before the readout starts, but it is usually quite useless without such a lamp (or an external shutter of some kind, e.g. mechanical or LCD).</div>Polto