= Pattern Recognition =

The project will develop an FPGA-based pattern recognition system using the Elphel 353 camera. I have created this page in order to document what I am doing and also to receive some feedback from the Elphel community.

I am only at the early stages. I am working on two issues:

* Understanding the processor structure (basically drivers and associated nodes)
* Understanding the FPGA code structure in order to remove the JPEG compressor, which we will not use in our recognition algorithm

''There is plenty of room left in the FPGA after the JPEG compressor, so I do not recommend removing JPEG - it will take a lot of work, really. At the same time you can use it to visualize your processing algorithms by feeding their results to the compressor. It took me very little time to add the focusing-helper code to the FPGA (it can also be used as an edge detector), mixing the results with the normal images, so I could immediately see the results and troubleshoot the program.'' --[[User:Andrey.filippov|Andrey.filippov]] 14:47, 1 March 2008 (CST)

Actually I can use and modify the processor's drivers. Now I'm developing a program that uses the /dev/fsdram node to read the raw image from the DDR and create an equivalent RGB file. After some problems I can now create an RGB file and convert it to a visible format such as PNG or BMP, but the image is wrong and I cannot understand why.

''Because there is a different driver that outputs the FPGA raw image data - you may ftp the image.raw file, which is designed specifically to get the raw image. The /dev/fsdram data will have gaps after each line, as each line is 512-byte aligned.'' --[[User:Andrey.filippov|Andrey.filippov]] 14:47, 1 March 2008 (CST)
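
If the goal is to get the raw frame from a program running on the camera itself, the same data that image.raw serves can presumably be obtained by reading /dev/ccam_img directly (image.raw is just a symbolic link to it, as noted further down). A minimal sketch, assuming that sequential read() calls return one complete frame and then EOF; that behaviour is an assumption, not verified against the cmoscam353 sources:

<pre>
/* Sketch: dump one raw frame from /dev/ccam_img to a file.
 * ASSUMPTION: the driver returns frame data on sequential read()
 * calls until end-of-frame; check the cmoscam353 sources. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int in = open("/dev/ccam_img", O_RDONLY);
    if (in < 0) { perror("open /dev/ccam_img"); return 1; }

    FILE *out = fopen("frame.raw", "wb");
    if (!out) { perror("fopen frame.raw"); close(in); return 1; }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, out);   /* copy whatever the driver returns */

    fclose(out);
    close(in);
    return 0;
}
</pre>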

I understand the raw image is stored at the first positions of the DDR memory, occupying byte 0 up to byte (width+4)*(height+4)-1. Is this correct? Does anybody have a memory map of the DDR? Some information about how the memory is occupied by the data (raw image, gamma data, corrections, JPEG)?

''No, that is not so - the image can start at a different address (we do it in some modes), and each line is 512-byte aligned. Each pixel can be 1 byte (when using LUTs) or 2 bytes (raw sensor data, MSB aligned).'' --[[User:Andrey.filippov|Andrey.filippov]] 14:47, 1 March 2008 (CST)
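
Taken together, these notes suggest that a direct read from /dev/fsdram has to step through the buffer with a line stride rounded up to the next multiple of 512 bytes: for example, a line of 2596 payload bytes at 1 byte per pixel would occupy 3072 bytes in memory, the remaining 476 bytes being padding. The sketch below illustrates such a gap-skipping read; the start offset, the 1-byte pixel depth and the width+4 / height+4 dimensions are all assumptions carried over from the question, not verified values.

<pre>
/* Sketch: gap-skipping read of one frame from /dev/fsdram.
 * Each line is assumed to be 512-byte aligned, so the stride is the
 * payload width rounded up to the next multiple of 512.
 * FRAME_START, LINE_BYTES and LINES are ASSUMPTIONS for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define LINE_BYTES  2596   /* payload bytes per line: (2592+4) pixels, 1 byte each */
#define LINES       1940   /* number of lines: 1936+4 */
#define FRAME_START 0      /* hypothetical start offset of the frame in DDR */

int main(void) {
    size_t stride = ((LINE_BYTES + 511) / 512) * 512;      /* 2596 -> 3072 */
    unsigned char *line  = malloc(stride);
    unsigned char *frame = malloc((size_t)LINE_BYTES * LINES);
    if (!line || !frame) { perror("malloc"); return 1; }

    int fd = open("/dev/fsdram", O_RDONLY);
    if (fd < 0) { perror("open /dev/fsdram"); return 1; }

    for (int y = 0; y < LINES; y++) {
        /* read one aligned line, keep the payload, drop the padding */
        off_t pos = FRAME_START + (off_t)y * (off_t)stride;
        if (pread(fd, line, stride, pos) != (ssize_t)stride) { perror("pread"); return 1; }
        memcpy(frame + (size_t)y * LINE_BYTES, line, LINE_BYTES);
    }
    close(fd);

    FILE *out = fopen("fsdram_frame.raw", "wb");
    if (!out) { perror("fopen fsdram_frame.raw"); return 1; }
    fwrite(frame, 1, (size_t)LINE_BYTES * LINES, out);
    fclose(out);
    return 0;
}
</pre>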

At maximum resolution (2592*1936) I have to read (2592+4)*(1936+4)=5036240 bytes from memory. I assume that the stored data starts like this:


 GreenRedGreenRed...     (2596 bytes)
 BlueGreenBlueGreen...   (2596 bytes)
 . . .


If I take this data and apply my conversion, I get a wrong image of what the camera is seeing. I know my conversion algorithm itself works correctly.
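
For comparison, if the layout really is the GRBG Bayer mosaic sketched above (GreenRed on even lines, BlueGreen on odd lines), a minimal nearest-neighbour reconstruction of a packed RGB image could look like the function below. It assumes 1 byte per pixel and a gap-free buffer; the 512-byte line alignment described earlier is one plausible reason a conversion that works on other data produces a wrong picture here.

<pre>
/* Nearest-neighbour demosaic sketch for a GRBG Bayer layout
 * (G R on even lines, B G on odd lines), matching the order above.
 * ASSUMPTIONS: 1 byte per pixel, no alignment gaps between lines,
 * output is packed 24-bit RGB.  Illustration only, not the camera's
 * own processing pipeline. */
#include <stddef.h>

void demosaic_grbg(const unsigned char *bayer, unsigned char *rgb,
                   int width, int height) {
    for (int y = 0; y + 1 < height; y += 2) {
        for (int x = 0; x + 1 < width; x += 2) {
            unsigned char g1 = bayer[(size_t)y * width + x];           /* G */
            unsigned char r  = bayer[(size_t)y * width + x + 1];       /* R */
            unsigned char b  = bayer[(size_t)(y + 1) * width + x];     /* B */
            unsigned char g2 = bayer[(size_t)(y + 1) * width + x + 1]; /* G */
            unsigned char g  = (unsigned char)((g1 + g2) / 2);

            /* write the same RGB triple to all four pixels of the 2x2 cell */
            for (int dy = 0; dy < 2; dy++)
                for (int dx = 0; dx < 2; dx++) {
                    size_t o = 3 * ((size_t)(y + dy) * width + (x + dx));
                    rgb[o] = r; rgb[o + 1] = g; rgb[o + 2] = b;
                }
        }
    }
}
</pre>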

Has anybody worked on something like this?

The image.raw application available on the camera is just a symbolic link to /dev/ccam_img, which uses the cmoscam353 driver. I'm following this driver to understand how the data is read from the DDR. Can anybody tell me how the raw image data is generated?

''Yes, /dev/ccam_img is the correct driver.'' --[[User:Andrey.filippov|Andrey.filippov]] 14:47, 1 March 2008 (CST)


What is the proper way to change the camera resolution?

''Using [[PHP_in_Elphel_cameras]], you may use the sample code from camera_demo.php.'' --[[User:Andrey.filippov|Andrey.filippov]] 14:47, 1 March 2008 (CST)

Thanks in advance for any help,

Diego Mendez