Ccam.cgi
Revision as of 16:25, 11 October 2005
== overview ==
The interface described below and all the links are for the Model 333 camera, interface for the 313 is approximately (but not completely) the same.
ccam.cgi (source - ccam.c) is currently the main interface to the camera functionality. It uses the GET method to pass parameters and receive data back, so you may call it as
http://<camera-ip-address>/admin-bin/ccam.cgi?parameter1=value1&parameter2=value2&...
Most parameters are persistent, so if a value is not specified it is assumed to remain the same. These parameters roughly correspond to the pairs of parameters passed from user space with IOCTL to the main camera driver cc333.c (which uses a sensor-specific driver - for Micron sensors, mt9x001.c). The list of these 63 driver parameters is defined in c313a.h (names starting with "P_"); most of the values come in pairs - desired and actual:
 ioctl(devfd, _CCCMD(CCAM_WPARS, P_name), desired_value); // set new value of the parameter
 current_actual_value = ioctl(devfd, _CCCMD(CCAM_RPARS, P_name), 0); // read current actual value - driver modifies the set value if needed to match the valid range
Writing these parameters will not cause immediate action; an additional write needs to be performed to make the driver process the new values. Some parameters can be updated without interrupting the sensor operation and the video stream output if active (i.e. exposure time, panning without window resizing, analog gains, color saturation). Changes in other parameters (such as window size or decimation) will not be applied until the sensor is stopped.
 ioctl(devfd, _CCCMD(CCAM_WPARS, P_UPDATE), 3); // "on the fly"
 ioctl(devfd, _CCCMD(CCAM_WPARS, P_UPDATE), 1); // stop the sensor if needed, write new parameters, start the sensor and wait the sensor-dependent number (usually 2) of potentially "bad" frames before sending images through the FPGA compressor
It is possible to read the current values of these parameters (CCAM_RPARS) using a special request to ccam.cgi, as an HTML table, a set of JavaScript assignments, or XML data.
There is only one copy of these kernel-space variables - they reflect current state of a single sensor and single compressor.
== ccam.cgi parameters ==
Not all of the parameters are applicable to all sensors/cameras; some are obsolete.
=== opt ===
opt value is an unordered string of characters:
{| border="1"
! character !! Description !! Working?
|-
| h || Use hardware compression || Y
|-
| c || Consider sensor to be the color one, if not - skip Bayer color filters processing || Y
|-
| x || Flip (mirror) image horizontally (uses in-sensor capabilities) || Y
|-
| y || Flip (mirror) image vertically (uses in-sensor capabilities) || Y
|-
| p || test pattern (ramp) instead of an image (for Micron sensors - same as "f" below) || Y
|-
| f || test pattern (ramp) generated in FPGA || Y
|-
| b || buffer file || N?
|-
| m || restart exposure after sending || N?
|-
| s || software trigger (for image intensifiers) - trigger if sum of pixels in a line > threshold || N?
|-
| t || external trigger - wait for external trigger input || N?
|-
| v || video mode - currently only means that it is not a reload from memory || Y
|-
| g || use background image || N?
|-
| q || return a quicktime movie clip || Y
|-
| u || updates (some) parameters "on the fly", returns 1x1 pix dummy image || Y
|-
| * || ignore lock file, recover from "camera in use" || Y
|}
=== Frame size and resolution ===
{| border="1"
! Key !! Value range (3MPix sensor) !! Description !! Working? !! "on the fly"? !! Notes
|-
| ww || 2..2048 || Sensor active window width (before decimation) || Y || N || 1
|-
| wh || 2..1536 || Sensor active window height (before decimation) || Y || N || 1
|-
| wl || 0..(2047-ww) || Sensor active window left margin (before decimation) || Y || Y || 2
|-
| wt || 0..(1535-wh) || Sensor active window top margin (before decimation) || Y || Y || 2
|-
| dh || 1..8 || Horizontal decimation (resolution/image size reduction) || Y || N || 3
|-
| dv || 1..8 || Vertical decimation (resolution/image size reduction) || Y || N || 3
|}
Notes:
# Has to be (or will be truncated to) a multiple of a macroblock (16x16 pixels) after the decimation
# Has to be an even value
# Decimation for the MT9T001 3MPix sensor can be any integer from 1 to 8; for most other sensors - only 1/2/4/8. Because of the Bayer color filter mosaic, pixels are decimated in pairs, so decimation "4" means that for each pair of pixels used, 6 pixels are skipped.
=== Exposure controls ===
There are multiple factors that influence image pixel values for the same lighting conditions; one is exposure time.
Most CMOS image sensors (including the Micron sensors used in Elphel cameras) use an Electronic Rolling Shutter.
{| border="1"
! Key !! Value range (3MPix sensor) !! Description !! Working? !! "on the fly"? !! Notes
|-
| e || 0..600000 || exposure time (0.1 msec step) || Y || Y || 1
|-
| vw || ? || virtual frame width || Y || ? || 2
|-
| vh || ? || virtual frame height || Y || ? || 3
|-
| fps || xx.xx || desired frame rate || Y || ? || 4
|-
| sclk || 6..48 || sensor clock (in MHz) || Y || N || 5
|-
| fclk || 0..127 || FPGA clock (in MHz) || Y || N || 6
|-
| xtra || 0..?? || extra frame time || Y || N || 7
|}
Notes:
# Sensor driver will calculate the number of lines of exposure and will increase the virtual frame height (vertical blanking) if needed (but currently - not the virtual frame width - horizontal blanking). For longer exposures you may want to do that manually or decrease the sensor clock frequency. Update - for the MT9T001 sensor that might not be needed - I'll fix the driver --Andrey.filippov 12:39, 29 September 2005 (MDT). Done in version 6.4.9 - now the frame time (for MT9T001 only) can be as long as 0xfffff (approximately 1 million) scan lines - nearly a full minute with the full frame and 48MHz clock. --Andrey.filippov 11:27, 11 October 2005 (MDT)
# It is possible to extend the line readout time, but this is not normally needed/used.
# Explicitly specified virtual frame height - this parameter (if present) overrides the exposure setting. Not normally needed.
# Driver will try to reduce the frame rate by adding vertical blanking - limited by the maximal blanking time.
# Sensor clock. May be used with 1.3 and 2 MPix sensors to allow longer exposure times (not needed with MT9T001 with rev. 6.4.9 or later). It can also make sense to reduce the frequency when the maximal frame rate is not needed, to reduce sensor noise visible as horizontal lines in early revisions of the MT9T001 sensor. You may read the sensor chip ID (revision/type) from telnet as "hello -IR ba 0" ("hello -IR" will read all the sensor registers). Current FPGA code uses the sensor clock to synchronize the sensor power supply, so sensor power can be lost if this clock is too low; 6MHz is safe to use. On the upper side, 48MHz is the maximal clock frequency for these sensors; the driver limits this value.
# FPGA clock frequency (drives the compressor and the frame buffer memory). For the model 313 the practical limit was about 95MHz and you could easily change it "on the fly". The model 333 camera uses DDR SDRAM, and the implemented FPGA interface to the DDR SDRAM needs a clock phase adjustment for the memory when you change the frequency. Currently it can be done manually through telnet as "fpcf -phase 0 200". Initial values for the sensor and FPGA clock frequencies may be set in the /etc/init.d/fpga initialization script of the camera.
# For debugging purposes (probably needed only for the model 313 camera) the frame period may be increased by the specified number of pixel clock periods. It was intended to fine-tune the frame period (which depends on multiple sensor settings) and make sure it is not shorter than the compressor could handle (the 333 compressor is faster).
=== Binning ===
Binning effectively allows increasing the sensor sensitivity when it is operating at reduced resolution (decimation). Decimation still determines the resolution; binning defines how many pixel pairs are added together.
{| border="1"
! Key !! Value range (3MPix sensor) !! Description !! Working? !! "on the fly"? !! Notes
|-
| bh || 1..dh || Horizontal binning (sensitivity for lower resolution) || Y || Y || 1
|-
| bv || 1..dv || Vertical binning (sensitivity for lower resolution) || Y || Y || 1
|}
Notes:
# Currently for the MT9T001 sensor only. Works for all vertical binning values, but not all of the horizontal ones (some have no effect, others produce vertical lines). I would expect these glitches to be fixed in newer sensors by Micron.
Here are two examples:
1. Full frame with decimation by 4 in each direction will result in an image of 512*384 pixels, with pixel values the same as at full resolution (only 2x2 pixels of each 8x8 block are used, the others are discarded):
 ww=2048&wh=1536&wl=0&wt=0&dh=4&dv=4&bh=1&bv=1
2. Full frame with decimation by 4 in each direction will result in an image of 512*384 pixels, with pixel values 16 times higher than at full resolution (all 8x8 pixels are used; values are added together following the Bayer RG/GB mosaic - reds with reds, greens with greens, blues with blues):
 ww=2048&wh=1536&wl=0&wt=0&dh=4&dv=4&bh=4&bv=4
=== Analog Gains ===
Most sensors have some controls for the analog signal gains before the pixel data is digitized. Some sensors (such as the now discontinued Kodak KAC-1310) have individual color gains and a separate global gain; others (such as the Micron ones) - only color gains. Usually there are two "green" gains, as with Bayer mosaic filters there are two green pixels in each 2x2 pixel cell (RG/GB). Gain values can be far from linear, and too low a gain setting might not be enough to saturate the pixel value to 1023 (usually 255 after conversion) even with very bright light.
{| border="1"
! Key !! Value range (3MPix sensor) !! Description !! Working? !! "on the fly"? !! Notes
|-
| gr || 0..63 || analog gain RED (or mono) || Y || Y ||
|-
| gg || 0..63 || analog gain GREEN (green in "red" line) || Y || Y ||
|-
| gb || 0..63 || analog gain BLUE || Y || Y ||
|-
| ggb || 0..63 || analog gain GREEN (green in "blue" line) || Y || Y ||
|-
| kga || 63 || Kodak KAC-1310 analog gain Y (all colors) || Y || Y || 1
|-
| kgb || ? || Kodak KAC-1310 analog gain ? (all colors) || ? || Y || 1
|-
| kgm || 6 || Kodak KAC-1310 mode || Y || Y || 1
|}
Notes:
# Used in Kodak KAC-1310 (now obsolete) sensors. For MT9?001 sensors the driver just multiplies gr, gg, gb and ggb by kga/63. It is better to keep it at 63 (or not use it at all) for this family of sensors.
=== Image Quality, Gamma correction, Color Saturation ===
{| border="1"
! Key !! Value range (3MPix sensor) !! Description !! Working? !! "on the fly"? !! Notes
|-
| iq || 1..99 || JPEG Quality (%) || Y || ? || 1
|-
| gam || 0.13 .. 10 || Gamma correction value (%) || Y || Y || 2
|-
| pxl || 0..255 || Black level || Y || Y || 3
|-
| pxh || 0..255 || White level || Y || Y || 3
|-
| csb || 0..710 || Color Saturation (%), Blue || Y || Y || 4
|-
| csr || 0..562 || Color Saturation (%), Red || Y || Y || 4
|}
# Standard JPEG compression quality in (%). Earlier, negative values were used (in software compression mode only) to generate BMP images: "-1" meant BMP non-compressed and "-2" - BMP RLE compressed. The code is likely rotten by now.
# Camera implements virtually arbitrary table-based conversions from the 10-bit sensor data to the 8 bits used for compression. Each table has 256 entries, each consisting of 2 bytes; the tables are indexed by the 8 MSBs of the pixel data. One byte holds the base value for the linear interpolation - the output value if the remaining LSBs (2 for 10-bit sensors) are zeros. The other byte is the signed (-128..+127) increment between consecutive base values; in the case of 10-bit data this difference is multiplied by the pixel's 2 LSBs and divided by 4 (for 2 bits), and the result is added to the base value. Of course, these second bytes might be calculated from the base ones, but in the current FPGA code there are no extra cycles to retrieve 2 values and subtract them from each other (maybe will do that later --Andrey.filippov 17:18, 11 October 2005 (MDT)), so both bytes in each table entry have to be filled by software. The Model 313 camera had a single table for all colors; the Model 333 FPGA has bigger tables, so each of the 4 colors (including 2 greens) has an individual 256-entry table. Currently, for the simplicity of the interface, ccam.cgi allows calculating a single table using the gamma value (100 - linear, 47 - standard gamma setting for video cameras) and 2 of the values (pxl and pxh) below. Together they make something like "levels" in image manipulation programs (such as GIMP), but the camera hardware (FPGA code) allows more flexible "curves" control.
# Values for the 8 MSBs of the sensor data that map to total black (0x00) and total white (0xff) of the output signal. Sensors have different modes of auto-zero; with default settings the MT9T001 sensor adjusts its black level so that in complete darkness each pixel would output 0x18 (8 MSBs are 0x0a or 10 decimal). Other sensors have different values, and it is also possible to reprogram sensors to change the "hardware" black value if needed.
# Color saturation values for the blue (B-G) and red (R-G) color components, in (%). In linear mode (gam=100) true colors will be produced with color saturations of 1.0 (100), but for lower gamma settings the color saturation should be increased to compensate for the reduced contrast of the image - with the mosaic color filter pattern, a lower relative difference between the pixels will be decoded as less intense color.
== below is yet unedited text from ccam.c comments ==
* hist=n - read frame from "history"; applies only to rereading from memory after acquisition of a clip; n<=0 - from the end of the clip (0 - last), n>0 - from the start (1 - first)
* pfh - photofinish mode strip height (0 - normal mode, not photofinish). In this mode each frame will consist of multiple pfh-high horizontal strips (camera should be oriented 90 deg. to make them vertical), and no extra lines will be added to the frames for demosaic for now; +65536 - timestamp for normal frames, +131072 - timestamps for photo-finish mode
* ts - time stamp mode: 0 - none, 1 - in upper-left corner, 2 - added to the right of the image (photo-finish mode)
* fsd - frame sync delay (in lines) from the beginning of a frame (needed in photofinish mode - 3 lines?)
* _time=t (ms) will try to set the current system time (if it was not set already; _stime will always set it)
* if any of html, htmlr, htmll or htmlj are present, an html page will be generated instead of an image
* html= (not present) - picture as before (or vrml)
** 0 - nothing
** 1 - all sensor parameters as javaScript
** 2 - all sensor parameters as html
** 3 - beam data as javaScript
** 4 - beam data as html
** 5 - state (5 - picture ready) as javaScript
** 6 - state (5 - picture ready) as html
** 7 - start image acquisition (option "s" or "t" should be present)
** 8 - reset waiting for trigger
** 10 - all sensor parameters as XML
** 11 - beam data as XML
** 12 - state (5 - picture ready) as XML
** 13 - start image acquisition (option "s" or "t" should be present), return XML
** 14 - reset waiting for trigger, return XML
* htmlr=n - refresh every n seconds
* htmll=(escape) - command executed onLoad in <body> tag
* htmlj=(escape) - include *.js javaScript file
* vrmld - decimation to make a grid (>=8 for full frame) (default = 16)
* vrmlb - number of individual blocks in each x/y (default = 2)
* vrmlm - maximal pixel: 1023 - full scale, less - increase contrast, 0 - automatic (default = 1023)
* vrmli - indentation (default = 1)
* vrmlf - format: 0 - integer, 1 - one digit after "." (default 0)
* vrmll - number of contours to build (default = 32)
* vrmlo - options for isolines: e - elevated, f - flat (default = ef)
* vrmlz - 0..9 output (gzip) compression level (0 - none, 1 - fastest, default - 6, best - 9)
* fpns - 0..3 fpga background subtraction:
** 0 - none
** 1 (fine) - subtract 8-bit FPN from 10-bit pixel
** 2 - multiply FPN by 2 before subtracting
** 3 - multiply FPN by 4 before subtracting (full scale)
** note: negative result is replaced by 0; decrease FPN data before applying for "fat 0"
* fpnm - multiply by inverse sensitivity (sensitivity correction) mode:
** 0 - no correction
** 1 - fine (+/- 12.5%)
** 2 - medium (+/- 25%)
** 3 - maximal (+/- 50%)
* pc - pseudo color string. Applies to monochrome images and vrml
* any of vrml* specified - vrml instead of a picture/html
* background measurement/subtraction will (now) work only with 10-bit images
* gd = "digital gain" 0..5 (software)
* byr = 0..3 - overwrite Bayer phase shift, = 4 - use the one calculated by the driver
* bit - pixel depth (10/4/8)
* shl - shift left (FPGA in 8 and 4-bit modes) - obsolete
* clk - MCLK divisor - 80MHz/(2..129) - obsolete?
* bg = n - calculate background 1-2-4..16 times (does not need option s/t/v)
* parameters for "instant on" quicktime:
** qfr = n - number of frames to send in a quicktime clip
** qpad = % to leave for the frame size to grow (initial size = 1-st frame * (100 - 1.5*qpad)/100)
** qdur = frame duration in 1/600 of a second
* parameters for quicktime clips (sent after shooting):
** qsz = n - clip size in KB (w/o headers) (<=0 will use "instant on") - will be obsolete
** qcmd = (mandatory for videoclip) 1 - start constant compression of all acquired frames, 2 - stop constant compression, 3 - acquire the whole buffer and stop, 4 - read movie from buffer, 6 (and 5?) - stop, then read, 7 - acquire buffer, then read
** qts = t - playback time/real time
* hl - histogram top (all histogram parameters will be made even by truncating; all are written directly to FPGA - no shadows yet)
* ht - histogram left
* hw - histogram width
* hh - histogram height
 if ((vp=paramValue(gparams, "csb"))) ioctl(devfd, _CCCMD(CCAM_WPARS, P_COLOR_SATURATION_BLUE), strtol(vp, &cp, 10));
 if ((vp=paramValue(gparams, "csr"))) ioctl(devfd, _CCCMD(CCAM_WPARS, P_COLOR_SATURATION_RED), strtol(vp, &cp, 10));