Ccam.cgi

From ElphelWiki

Overview

The interface described below and all the links are for the Model 333 camera; the interface for the Model 313 is approximately (but not completely) the same.

UPDATE: This page now describes the Model 353 camera; the current code is here

ccam.cgi (source - ccam.c) is currently the main interface to the camera functionality. It uses the GET method to pass parameters and receive the data back, so you may call it as

http://<camera-ip-address>/admin-bin/ccam.cgi?parameter1=value1&parameter2=value2&...

Most parameters are persistent, so if a value is not specified it is assumed to remain the same. These parameters roughly correspond to the pairs of parameters passed from user space to the main camera driver cc333.c (which uses a sensor-specific driver - mt9x001.c for Micron sensors) through IOCTL. The list of these 63 driver parameters is defined in c313a.h (names starting with "P_"); most of the values come in pairs - desired and actual:

      ioctl(devfd, _CCCMD(CCAM_WPARS ,  P_name), desired_value); //set new value of the parameter 
      current_actual_value=ioctl(devfd, _CCCMD(CCAM_RPARS , P_name ), 0); // read current actual value - driver modifies the set value if needed to match the valid range.

Writing these parameters does not cause immediate action; an additional write is needed to make the driver process the new values. Some parameters can be updated without interrupting the sensor operation and the video stream output if active (i.e. exposure time, panning without window resizing, analog gains, color saturation). Changes to other parameters (such as window size or decimation) will not be applied until the sensor is stopped.

      ioctl(devfd, _CCCMD(CCAM_WPARS , P_UPDATE ),   3); // "on the fly"
      ioctl(devfd, _CCCMD(CCAM_WPARS , P_UPDATE ),   1); // stop the sensor if needed, write new parameters, start sensor and wait sensor-dependent (usually 2) potentially "bad" frames before sending images through the FPGA compressor.

It is possible to read the current values of CCAM_RPARS using a special request to ccam.cgi - as an HTML table, a set of JavaScript assignments, or XML data.

There is only one copy of these kernel-space variables - they reflect the current state of a single sensor and a single compressor.

ccam.cgi parameters

Not all of the parameters are applicable to all sensors/cameras, and some are obsolete.

opt

opt value is an unordered string of characters:

character Description Working?
h Use hardware compression Y
c Consider sensor to be the color one, if not - skip Bayer color filters processing Y
j Special color mode (jp4) - Pixels in each 16x16 macroblock are rearranged to separate Bayer colors in individual 8x8 blocks, then encoded as monochrome. De-mosaic will be applied during post-processing on the host PC. If both "j" and "c" are present, "c" is ignored Y
x Flip (mirror) image horizontally (uses in-sensor capabilities) Y
y Flip (mirror) image vertically (uses in-sensor capabilities) Y
p test pattern (ramp) instead of an image (for Micron sensors - same as "f" below) Y
f test pattern (ramp) generated in FPGA Y
b buffer file N?
m restart exposure after sending N?
s software trigger (for image intensifiers) - trigger if sum of pixels in a line > threshold N?
t external trigger - wait for external trigger input N?
v video mode - currently only means that it is not a reload from memory Y
g use background image N?
q return a quicktime movie clip Y
u updates (some) parameters "on the fly", returns 1x1 pix dummy image Y
* ignore lock file, recover from "camera in use" Y

Frame size and resolution

Key Value range (3MPix sensor) Description Working? "on the fly"? Notes
ww 2..2048 Sensor active window width (before decimation) Y N 1
wh 2..1536 Sensor active window height (before decimation) Y N 1
wl 0..(2047-ww) Sensor active window left margin (before decimation) Y Y 2
wt 0..(1535-wh) Sensor active window top margin (before decimation) Y Y 2
dh 1..8 Horizontal decimation (resolution/image size reduction) Y N 3
dv 1..8 Vertical decimation (resolution/image size reduction) Y N 3

Notes:

  1. Has to be (or will be truncated to) multiple of a macroblock (16x16 pixels) after the decimation
  2. Even value
  3. Decimation for the MT9T001 3MPix sensor can be any integer from 1 to 8; for most other sensors - only 1/2/4/8. Because of the Bayer color filter mosaic, pixels are decimated in pairs, so decimation "4" means that for each pair of pixels used, six pixels are skipped.

Exposure controls

There are multiple factors that influence image pixel values for the same lighting conditions, one is exposure time.

Most CMOS image sensors (including the Micron sensors used in Elphel cameras) use an Electronic Rolling Shutter.


Key Value range (3MPix sensor) Description Working? "on the fly"? Notes
e 0..600000 exposure time (0.1 msec step) Y Y 1
vw  ? virtual frame width Y  ? 2
vh  ? virtual frame height Y  ? 3
fps= xx.xx desired frame rate Y  ? 4
sclk= 6..48 sensor clock (in MHz) Y N 5
fclk= 0..127 FPGA clock (in MHz) Y N 6
xtra= 0..?? extra frame time Y N 7

Notes:

  1. Sensor driver will calculate the number of lines of exposure, will increase virtual frame height (vertical blanking) if needed (but currently - not the virtual frame width - horizontal blanking). For longer exposures you may want to do that manually or decrease the sensor clock frequency. Update - for the MT9T001 sensor that might not be needed - I'll fix the driver --Andrey.filippov 12:39, 29 September 2005 (MDT). Done in version 6.4.9 - now the frame time (for MT9T001 only) can be as long as 0xfffff (approximately 1 million) scan lines - nearly a full minute with the full frame and 48MHz clock.--Andrey.filippov 11:27, 11 October 2005 (MDT)
  2. It is possible to extend the line readout time, but it is not normally needed/used.
  3. Explicitly specified virtual frame height - this parameter (if present) overrides the exposure setting. Not normally needed.
  4. Driver will try to reduce frame rate by adding vertical blanking - limited by the maximal blanking time
  5. Sensor clock; may be used with 1.3 and 2 MPix sensors to allow longer exposure times (not needed with MT9T001 with rev. 6.4.9 or later). It can also make sense to reduce the frequency when the maximal frame rate is not needed, to reduce sensor noise visible as horizontal lines in early revisions of the MT9T001 sensor. You may read the sensor chip ID (revision/type) from telnet as "hello -IR ba 0" ("hello -IR" will read all the sensor registers). The current FPGA code uses the sensor clock to synchronize the sensor power supply, so the sensor power can be lost if this clock is too low; 6MHz is safe to use. On the upper side, 48MHz is the maximal clock frequency for these sensors; the driver limits this value.
  6. FPGA clock frequency (drives the compressor and the frame buffer memory). For the Model 313 the practical limit was about 95MHz and you could easily change it "on the fly". The Model 333 camera uses DDR SDRAM, and the implemented FPGA interface to DDR SDRAM needs clock phase adjustment for the memory when you change the frequency. Currently it can be done manually through telnet as "fpcf -phase 0 200". Initial values for the sensor and FPGA clock frequencies may be set in the /etc/init.d/fpga initialization script of the camera.
  7. For debugging purposes (probably needed only for the Model 313 camera) the frame period may be increased by the specified number of pixel clock periods. It was intended to fine-tune the frame period (which depends on multiple sensor settings) and make sure it is not shorter than the compressor can handle (the 333 compressor is faster).

Binning

Binning effectively increases the sensor sensitivity when it is operating with reduced resolution (decimation). Decimation still determines the resolution; binning defines how many pixel pairs are added together.

Key Value range (3MPix sensor) Description Working? "on the fly"? Notes
bh 1..dh Horizontal binning (sensitivity for lower resolution) Y Y 1
bv 1..dv Vertical binning (sensitivity for lower resolution) Y Y 1

Notes:

  1. Currently for the MT9T001 sensor only; works for all vertical binning values, but not all of the horizontal ones (some have no effect, others produce vertical lines). I would expect these glitches to be fixed in newer sensors by Micron.

Here are two examples:

1. Full frame with decimation by 4 in each direction will result in an image of 512*384 pixels, pixel values the same as for the full resolution (only 2x2 pixels of each 8x8 are used, the others are discarded)

ww=2048&wh=1536&wl=0&wt=0&dv=4&dh=4&bh=1&bv=1

2. Full frame with decimation by 4 in each direction will result in an image of 512*384 pixels, pixel values are 16 times higher than for the full resolution (all 8x8 pixels are used, values are added together following the Bayer RG/GB mosaic - reds with reds, greens with greens, blues with blues)

ww=2048&wh=1536&wl=0&wt=0&dv=4&dh=4&bh=4&bv=4

Analog Gains

Most sensors have some controls for the analog signal gains before the pixel data is digitized. Some sensors (such as the now discontinued Kodak KAC-1310) have individual color gains and a separate global gain; others (such as the Micron ones) - only color gains. Usually there are two "green" gains, as with Bayer mosaic filters there are two green pixels in each 2x2 pixel cell (RG/GB). Gain values can be far from linear; too low a gain setting might not be enough to saturate the pixel value to 1023 (usually 255 after conversion) even with very bright light.

Key Value range (3MPix sensor) Description Working? "on the fly"? Notes
gr 0..63 analog gain RED (or mono) Y Y
gg 0..63 analog gain GREEN (or green in "red" line) Y Y
gb 0..63 analog gain BLUE Y Y
ggb 0..63 analog gain GREEN (in "blue" line) Y Y
kga 63 Kodak KAC1310 analog gain Y (all colors) Y Y 1
kgb  ? Kodak KAC1310 analog gain ? (all colors)  ? Y 1
kgm 6 Kodak KAC1310 mode Y Y 1
  1. Used in Kodak KAC-1310 (now obsolete) sensors. For MT9?001 sensors the driver just multiplies gr, gg, gb and ggb by kga/63. It is better to keep it at 63 (or not use it at all) for this family of sensors.

Image Quality, Gamma correction, Color Saturation

Key Value range (3MPix sensor) Description Working? "on the fly"? Notes
iq 1..99 JPEG Quality (%) Y  ? 1
gam 0.13 .. 10 Gamma correction value (%) Y Y 2
pxl 0..255 Black level Y Y 3
pxh 0..255 White level Y Y 3
csb 0..710 Color Saturation (%), Blue Y Y 4
csr 0..562 Color Saturation (%), Red Y Y 4
  1. Standard JPEG compression quality in (%). Earlier, negative values were used (in software compression mode only) to generate BMP images: "-1" meant BMP non-compressed and "-2" - BMP RLE compressed. The code is likely rotten by now.
  2. The camera implements virtually arbitrary table-based conversions from e.g. 10-bit sensor data to the 8-bit data used for compression. You may think of it as 256-entry tables of single-byte values that are used to convert 10 (or more) bit sensor data to 8-bit format using linear interpolation between entries. There are four such tables T, one for each color (including the 2 greens - RG/GB). The interpolation is done as follows:
Y= T[C][x]+ (((T[C][x+1]-T[C][x]) * (X & ((1<<D)-1))) >> D)

where

X - W-bit input (sensor) data,
Y - 8-bit output
x - X (input) truncated to 8 bits: x = X>>(W-8)
D = W-8 (number of bits to be truncated, i.e. 2 for a 10-bit sensor) 
C - color (0..3)
and T is a 4 x 256 byte table (one for each color).

for 10 bits it will be

Y= T[C][x]+ (((T[C][(x)+1]-T[C][x]) * (X & 3)) >> 2)
An implementation detail: the difference (T[C][(X>>2)+1]-T[C][X>>2]) is calculated by software in advance and stored in a separate table R. For practical reasons this table is combined with the main one as 16-bit values: the 8 MSBs store the difference T[C][x+1]-T[C][x] and the 8 LSBs store the T[C][x] value, so the value written to the table is (R[C][x]<<8)+T[C][x].
The 8-bit difference field imposes a restriction on the granularity of the table: the difference between consecutive entries needs to be in the range -128..127 (this is enough).
All this could change in the future (use the source Luke) --Andrey.filippov 17:18, 11 October 2005 (MDT)) 

(How about adding dither before the final truncate? Or maybe the signal has enough noise already to make dither unnecessary/harmful? --Pfavr 15:58, 27 October 2005 (CDT))

The Model 313 camera had a single table for all colors; the Model 333 FPGA has room for bigger tables, so each color has its own 256-entry table. Currently, for the simplicity of the web interface, ccam.cgi only calculates a single table from a gamma value (100 - linear, 47 - standard gamma setting for video cameras) and the 2 values (pxl and pxh) below. Together they make something like "levels" in image manipulation programs (such as GIMP), but the camera hardware (FPGA code) allows more flexible "curves" control.

  1. Values for the 8 MSBs of the sensor data that map to total black (0x00) and total white (0xff) of the output signal. Sensors have different modes of auto-zero; with default settings the MT9T001 sensor adjusts the black level so that in complete darkness each pixel would output 0x18 (8 MSBs are 0x0a or 10 decimal), other sensors have different values. It is also possible to reprogram sensors to change the "hardware" black value if needed.
  2. Color saturation values for the blue (B-G) and red (R-G) color components, in (%). In linear mode (gam=100) true colors will be produced with color saturations of 1.0 (100), but for lower gamma settings the color saturation should be increased to compensate for the lowered contrast of the image - with the mosaic color filter pattern, a lower relative difference between the pixels will be decoded as a less intense color.

Verilog code that implements such conversion is here.

Histograms

The Model 333 camera calculates histograms individually for each of the 4 colors (including 2 greens). Histograms are calculated inside a specified window - the following parameters are written directly to the FPGA with no shadows in kernel space yet (so there is no way to read back the current values). As the sensors use a 2x2 pixel mosaic, these 4 values are made even (by truncating the LSB).

Key Value range (3MPix sensor) Description Working? "on the fly"? Notes
hl 0..2046 Histogram window left margin Y Y
ht 0..1534 Histogram window top margin Y Y
hw 0..2048 Histogram window width Y Y
hh 0..1536 Histogram window height Y Y

hl= distance from the left active window border to the left of the histogram calculation window, default=0

ht= distance from the top active window border to the top of the histogram calculation window, default=0

hw= histogram calculation window width, default=0xffe (will extend to the bottom right corner)

hh= histogram calculation window height, default=0xffe (will extend to the bottom right corner)

Currently position and size of the histogram window is truncated to even values (LSB ignored).

Histogram calculation is always on when the sensor is running (normally it is, even if no stream is output). The FPGA uses two pages of internal memory and switches between them when ready. For each frame it first writes zero to each histogram value (4x256) and then adds pixels (after converting from 10-bit sensor data to 8 bits using the "curves" tables), limiting each value to 2^18-1 (a hardware limitation). If you read the histogram table asynchronously it is likely that the sum will differ from the total number of pixels, as the FPGA could switch pages while you were reading. But it switches only at the end of a frame, so no partial sums will be read out.

There are two ways to read the histogram table now:

  • manually (through telnet) using "fpcf -histogram" - it will print data as hex values or
  • read binary file (4*256*32bits=4KB) /dev/histogram or through a symlink at http://<camera_ip>/histogram

The four (as number of color filters in Bayer mosaic) tables will be read out in the following order:

R (256 values), Gr (green in the "red" row - 256 values), Gb (green in the "blue" row - 256 values), B (256 values)

It is the same order in which the "curves" tables are now written to the FPGA. Currently ccam.cgi can only fill these tables from the gamma value, all colors the same. But you can experiment with it by creating a text file with 1024 hex values (ccam.c shows how to build it), copying it to the camera file system and using "fpcf -table 400 <path_to_table>" to transfer it to the FPGA. If you then reacquire an image without changing the gamma value, ccam.cgi will not overwrite the table you've just downloaded.

It seems that colors work correctly with all image orientations and decimations for all Micron sensors. If not - let me know; you can temporarily compensate for wrong colors by adding "&byr=<0..3>" (Bayer phase shift) to the image URL - it reassigns the RG/GB mosaic in different ways, but the sequence of the colors R,Gr,Gb,B in the FPGA tables ("curves", histogram) will still correspond to the colors in the JPEG output.

HTML, XML or VRML

ccam.cgi can send HTML, XML or VRML (that code is broken and needs to be restored) files, not just images, if any of the html, htmlr, htmll or htmlj parameters are present in the URL.

Key Value range (3MPix sensor) Description Working? "on the fly"? Notes
html 0 no output Y Y
1 all sensor parameters as javaScript Y Y 1, 4
2 all sensor parameters as html Y Y 2, 4
3 beam data as javaScript Y Y 1, 8
4 beam data as html Y Y 2, 8
5 state (5 -picture ready) as javaScript Y Y 1,5
6 state (5 -picture ready) as html Y Y 2,5
7 start image acquisition (option "s" or "t" should be present)  ? Y 6
8 reset waiting for trigger  ? Y 7
10 all sensor parameters as XML Y Y 3
11 beam data as XML Y Y 3,8
12 state (5 -picture ready) as XML Y Y 3, 5
13 start image acquisition (option "s" or "t" should be present), return XML Y Y 3, 6
14 reset waiting for trigger, return XML Y Y 3,7
htmlr n Refresh each n seconds Y Y 9
htmll escaped string command to be executed onLoad in <body> tag Y Y 10
htmlj escaped string include javaScript file Y Y 11

Notes:

  1. The head section of the HTML output file will have JavaScript assignments "document.variable_name=value;" for each parameter. There are no visible elements in the file - it was intended to be used in a frame set before XMLHttpRequest was supported in most browsers.
  2. Parameters are output as a two-column html table (first column - name, second - value).
  3. Parameters and their values are output as XML file.
  4. Sensor-related parameters are output
  5. Only the sensor/compressor state is output. State 7 - sensor is running, constant compression is off (single frame mode); state 8 - compressor is in constant compression mode (such as during streaming), static images can not be acquired, and some acquisition parameters can not be changed without stopping the compression.
  6. This was designed for sensors with asynchronous reset (such as the now obsolete Zoran ones). I don't remember what it will do (or how to use it) with the Micron ones.
  7. Reset waiting for an external trigger (not sure if it still works)
  8. Output beam parameters (center of gravity, half width in x, y, etc.). This code is broken now, but might be repaired.
  9. Instruct the html page to refresh itself each specified number of seconds.
  10. Value is an "escaped" string that contains a JavaScript command to be executed when the page is loaded (body onLoad).
  11. Value is an "escaped" string that has the path of the external JavaScript file to be included inside the <head> tag of the page


Below is yet unedited text from the ccam.c comments

* vrmld - decimation to make a grid (>=8 for full frame) (default = 16)
* vrmlb - number of individual blocks in each x/y (default=2)
* vrmlm - maximal pixel. 1023 - full scale, less - increase contrast, 0 - automatic (default =1023)
* vrmli - indentation (default=1)
* vrmlf - format - 0 - integer, 1 - one digit after "." (default 0)
* vrmll - number of contours to build (default = 32)
* vrmlo - options for isolines - e - elevated, f - flat (default=ef)
* vrmlz - 0..9 output (gzip) compression level (0 - none, 1 - fastest, default - 6, best -9)
* hist=n - read frame from "history" applies only to rereading from memory after acquisition of a clip
       n<=0 - from the end of clip (0 - last), n>0 - from the start (1 - first)


* pfh - photofinish mode strip height (0 - normal mode, not photofinish). In this mode each frame will consist of multiple
        pfh-high horizontal (camera should be oriented 90 deg. to make vertical) strips, and no extra lines will be added to the frames
        for demosaic
        for now: +65536 - timestamp for normal frames, +131072 - timestamps for photo-finish mode
* ts  - time stamp mode: 0 - none, 1 - in upper-left corner, 2 - added to the right of the image (photo-finish mode)        
* fsd - frame sync delay (in lines) from the beginning of a frame (needed in photofinish mode - 3 lines?)


* _time=t (ms) will try to set the current system time (if it was not set already; _stime will always set it)




* fpns  - 0..3 fpga background subtraction:
*               0 - none,
*               1 (fine) - subtract 8-bit FPN from 10-bit pixel
*               2 - multiply FPN by 2 before subtracting
*               3 - multiply FPN by 4 before subtracting (full scale)
*       note:   negative result is replaced by 0, decrease FPN data before applying for "fat 0"
* fpnm  -       multiply by inverse sensitivity (sensitivity correction) mode:
*               0 - no correction
*               1 - fine (+/- 12.5%)
*               2 - medium (+/- 25%)
*               3 - maximal (+/- 50%)
* pc - pseudo color string. Applies to monochrome images and vrml
* any of vrml* specified - vrml instead of a picture/html
*
* background measurement/subtraction will (now) work only with 10-bit images
* gd = "digital gain" 0..5 (software)
* byr =0..3 Overwrite Bayer phase shift, =4 - use the value calculated by the driver.

* bit - pixel depth (10/4/8)
* shl - shift left (FPGA in 8 and 4-bit modes) - obsolete
* clk - MCLK divisor - 80MHz/(2..129) - obsolete?


* bg  = n - calculate background 1-2-4..16 times (does not need option s/t/v)
* parameters for "instant on" quicktime
* qfr = n - number of frames to send in a quicktime clip
* qpad  = % to leave for the frame size to grow (initial size = 1-st frame * (100 - 1.5*qpad)/100)
* qdur = frame duration in 1/600 of a second
* parameters for quicktime clips (send after shooting)
* qsz = n - clip size in KB (w/o headers) (<=0 will use "instant on") - will be obsolete
* qcmd= (mandatory for videoclip)
   1 - start constant compression of all acquired frames
   2 - stop constant compression.
   3 - acquire the whole buffer and stop
   4 - read movie from buffer
   6 (and 5?) - stop, then read
   7 - acquire buffer, then read
* qts = t - playback time/real time