Camera Synchronization

From ElphelWiki
Revision as of 12:15, 27 October 2005 by Pfavr (talk | contribs) (summary of the discussion so far)

There are several parts to the camera synchronization task.

  1. The camera should receive a synchronizing event. This can be done either through special hardware inputs or over the network. In most cases, if you want to synchronize two or more networked cameras you do not need extra wires, so network synchronization is the most convenient. But sometimes you would like to be able to trigger the camera without the network, i.e. from some contact closure.
  2. The camera should be able to start the image acquisition process when required; this is generally not possible with most CMOS sensors. /is this used with "external trigger" in the FPGA API? - Spectr/
  3. And (in some cases) the camera should be able to keep time precisely, so that the in-sync state of two or more cameras lasts longer.
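The network-triggering idea in point 1 can be sketched in a few lines of Python. This is only an illustration: `send_trigger()` is a hypothetical stand-in for whatever HTTP request the camera firmware actually accepts, and the barrier simply releases all requests at (nearly) the same instant, so the remaining skew is just thread-scheduling jitter on the client side:

```python
import threading
import time

def send_trigger(camera, barrier, timestamps):
    # Hypothetical stand-in for the real network request to the camera;
    # the actual trigger URL depends on the camera firmware.
    barrier.wait()  # all threads are released together here
    timestamps[camera] = time.monotonic()
    # e.g. urllib.request.urlopen(f"http://{camera}/trigger")

def trigger_all(cameras):
    """Fire a trigger request at every camera at (nearly) the same time."""
    barrier = threading.Barrier(len(cameras))
    timestamps = {}
    threads = [threading.Thread(target=send_trigger,
                                args=(cam, barrier, timestamps))
               for cam in cameras]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timestamps

ts = trigger_all(["cam1", "cam2"])
skew = abs(ts["cam1"] - ts["cam2"])
```

Note that this only bounds the skew at the client; network latency to each camera adds its own jitter, which is exactly what hardware trigger inputs avoid.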

Here is a thread on SourceForge about synchronizing two 313 cameras using the sntpdate client: "Synchronizing the cameras ended up being incredibly simple, I didn't have to do anything special at all. I decided to try the easiest solution first, keeping the 2 cameras at the exact same settings. I had the client I wrote request image captures from both cameras at approx. the same time. I haven't benchmarked it to see the exact amount of jitter between matching frames, but I can set FPS to any value and not see any noticeable difference."
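The jitter the poster did not benchmark could be measured simply: collect frame timestamps from both cameras, pair up matching frames, and look at the spread of the pairwise offsets. A small sketch (the sample timestamps below are made up for illustration, with camera B lagging camera A by roughly 2 ms at 25 fps):

```python
def frame_jitter(ts_a, ts_b):
    """Mean offset between matching frames, and the spread (jitter).

    ts_a, ts_b: timestamps (seconds) of matching frames from two cameras.
    """
    offsets = [b - a for a, b in zip(ts_a, ts_b)]
    mean = sum(offsets) / len(offsets)
    jitter = max(offsets) - min(offsets)  # peak-to-peak spread
    return mean, jitter

# Illustrative data: camera B runs roughly 2 ms behind camera A.
a = [0.000, 0.040, 0.080, 0.120]   # 25 fps
b = [0.002, 0.042, 0.081, 0.122]
mean, jitter = frame_jitter(a, b)
```

A constant mean offset is easy to correct for afterwards; it is the jitter term that limits how well the two image streams line up.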

How about a digital phase-locked loop using the RTC timer in the FPGA? NTP is basically a digital phase-locked loop in software, but it also adds a lot of code for robustness against "malicious" NTP servers, something we probably don't need for the cameras --Pfavr 15:15, 27 October 2005 (CDT)
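The core of such a digital PLL is small: periodically measure the phase offset of the local clock against the reference, and steer the clock rate with a proportional-integral controller until the offset converges to zero. A toy simulation of that idea (all gains and the 50 ppm frequency error are arbitrary illustrative values, not anything measured on the camera hardware):

```python
def simulate_pll(freq_error_ppm=50.0, steps=200, dt=1.0,
                 kp=0.7, ki=0.3):
    """Simulate a software PLL disciplining a drifting local clock.

    The local clock runs fast by freq_error_ppm. Every dt seconds the
    loop measures the phase offset against the reference (as NTP would)
    and applies a proportional-integral correction to the clock rate.
    Returns the final phase offset in seconds.
    """
    offset = 0.0     # seconds ahead of the reference clock
    integral = 0.0   # accumulated offset (integral term)
    rate_adj = 0.0   # fractional rate correction currently applied
    for _ in range(steps):
        # The clock drifts by its residual frequency error each interval.
        offset += (freq_error_ppm * 1e-6 - rate_adj) * dt
        # PI controller steering the rate from the measured offset;
        # the integral term learns the constant frequency error.
        integral += offset * dt
        rate_adj = kp * offset + ki * integral
    return offset

locked = simulate_pll()                      # PLL engaged: offset -> 0
free_running = simulate_pll(kp=0.0, ki=0.0)  # no control: drift accumulates
```

With the loop disabled, a 50 ppm error accumulates 10 ms of offset over 200 seconds; with the PI loop engaged, the integral term absorbs the frequency error and the phase offset decays to zero. In the camera, the "rate adjustment" would translate into trimming the FPGA RTC timer, and the robustness machinery of full NTP could indeed be skipped.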