== Design Ideas ==
[[Design_Ideas|Design Ideas]]
=== FPGA Theora Encoder for Videoconferencing ===
 
In 2005 Elphel implemented a subset of the Theora video encoder in the Xilinx® FPGA of the Elphel model 333 camera, capable of compressing 1280x1024@30fps ([http://www.xilinx.com/publications/xcellonline/xcell_53/xc_pdf/xc_video53.pdf], [http://www.linuxdevices.com/articles/AT3888835064.html]), but the CPU in the camera was not fast enough for the job even with the hard part handled by the hardware. In the model 333 camera, software was responsible for generating frame headers and for the Ogg encapsulation of the Theora bitstream provided by the FPGA.
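
Below is a minimal sketch (not the camera's actual driver code) of what the software side of that Ogg encapsulation can look like with libogg: a compressed Theora packet already produced by the FPGA is submitted to an Ogg stream and any complete pages are written out. The function name, buffers and granulepos/packet numbering are assumptions for illustration; in a real Theora stream the beginning-of-stream packets are the Theora headers, not video data.

<pre>
#include <stdio.h>
#include <ogg/ogg.h>

/* Illustrative sketch only: wrap one FPGA-produced Theora packet into Ogg
   pages using libogg and append the pages to a file.  The ogg_stream_state
   is assumed to have been set up earlier with ogg_stream_init(); names and
   the granulepos/packetno bookkeeping are assumptions, not Elphel code. */
int write_theora_packet(FILE *out, ogg_stream_state *os,
                        unsigned char *data, long bytes,
                        ogg_int64_t granulepos, ogg_int64_t packetno)
{
    ogg_packet op;
    ogg_page   og;

    op.packet     = data;
    op.bytes      = bytes;
    op.b_o_s      = 0;   /* header packets, not video data, start the stream */
    op.e_o_s      = 0;
    op.granulepos = granulepos;
    op.packetno   = packetno;

    if (ogg_stream_packetin(os, &op) != 0)
        return -1;

    /* Emit any complete Ogg pages that are now available. */
    while (ogg_stream_pageout(os, &og) > 0) {
        fwrite(og.header, 1, og.header_len, out);
        fwrite(og.body,   1, og.body_len,   out);
    }
    return 0;
}
</pre>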
 
Knowing that [http://developer.axis.com Axis Communications AB] was going to release a new, faster processor, we decided to wait for it before proceeding with Theora in the camera and used plain old Motion JPEG for a while.
 
 
 
Now we have the new camera [[Roadmap#Update_on_353.2F363_cameras|353]] tested and released to production - a camera with a brand new [http://en.wikipedia.org/wiki/ETRAX_CRIS#ETRAX_FS ETRAX FS], more memory and a larger FPGA, already tested in JPEG mode. So now is the perfect time to resurrect the Theora code in the camera and move forward.
 
 
 
The current [http://elphel.cvs.sourceforge.net/elphel/camera333/fpga/x333 FPGA implementation] supports only INTRA and INTER NOMV frames - the goal was to provide efficient compression for scenes where the camera does not move (CCTV, videoconferencing) and a large part of the frame stays the same. To reduce the bandwidth further we need to use selective block encoding, so that if the camera is looking at an empty hallway there is no bitstream at all - just a header indicating that no blocks were coded.
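
The coded/not-coded decision itself is made inside the FPGA, but the idea is easy to show in plain C: compare each block of the current frame against the previous one and mark it "coded" only if it changed enough. Everything below (block size, threshold, function name) is a hypothetical illustration of the principle, not the camera's implementation.

<pre>
#include <stdint.h>

#define BLK 16   /* hypothetical block size for this illustration */

/* Fill 'map' with one flag per BLKxBLK block of an 8-bit luma frame:
   1 = block changed and must be coded, 0 = block can be skipped.
   Returns the number of coded blocks; 0 means only a header is needed. */
int build_block_map(const uint8_t *cur, const uint8_t *prev,
                    int width, int height, long threshold, uint8_t *map)
{
    int coded = 0;
    for (int by = 0; by < height / BLK; by++) {
        for (int bx = 0; bx < width / BLK; bx++) {
            long diff = 0;
            for (int y = 0; y < BLK; y++) {
                const uint8_t *c = cur  + (by * BLK + y) * width + bx * BLK;
                const uint8_t *p = prev + (by * BLK + y) * width + bx * BLK;
                for (int x = 0; x < BLK; x++)
                    diff += (c[x] > p[x]) ? c[x] - p[x] : p[x] - c[x];
            }
            int flag = diff > threshold;
            map[by * (width / BLK) + bx] = (uint8_t)flag;
            coded += flag;
        }
    }
    return coded;
}
</pre>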
 
 
 
The ability to selectively encode blocks is already in the FPGA code, but we never used it with the slow CPU - the coded-block map is part of the frame header, and the header is built by software, currently before the video starts. To move further we need to either add FPGA code that generates the frame headers or make use of the faster processor and build them in software.
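
If header generation moves to software, the driver would read the coded-block map produced by the FPGA and pack it into the frame header. The real Theora header codes these flags with the run-length scheme defined in the specification; the sketch below only shows the general flow with a naive one-bit-per-block packer, and all names are assumptions.

<pre>
#include <stdint.h>
#include <stddef.h>

/* Naive bit writer used only for this illustration (MSB first). */
typedef struct {
    uint8_t *buf;     /* header buffer, assumed zero-initialized */
    size_t   bitpos;  /* next bit position to write              */
} bitwriter;

static void put_bit(bitwriter *bw, int bit)
{
    if (bit)
        bw->buf[bw->bitpos >> 3] |= (uint8_t)(0x80 >> (bw->bitpos & 7));
    bw->bitpos++;
}

/* Append one coded/not-coded flag per block to the header being built.
   A spec-compliant Theora header would run-length code these flags instead. */
void pack_block_map(bitwriter *bw, const uint8_t *map, int nblocks)
{
    for (int i = 0; i < nblocks; i++)
        put_bit(bw, map[i] != 0);
}
</pre>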
 
 
 
Such a project requires both FPGA code development (we use Verilog HDL, as does the rest of the camera FPGA code) and driver/application code (usually in C). When I was writing (and debugging) the code for the original encoder of the 333 camera I had to do both, but it would be nice to do such development as a team.
 
