CORC

Browse/search results: 10 records in total, showing 1-10

Design of the JPEG2000 compression system based on ADV212 (Conference paper)
2nd International Conference on Precision Mechanical Instruments and Measurement Technology, ICPMIMT 2014, May 30, 2014 - May 31, 2014, Chongqing, China
Li G.-N.; Liu Y.-Y.; Han S.-L.; Jin L.-X.
Views/Downloads: 20/0  |  Submitted: 2015/04/27
A MPEG-4 encoder based on TMS320C6416 (Conference paper)
5th International Symposium on Photoelectronic Detection and Imaging, ISPDI 2013, June 25, 2013 - June 27, 2013, Beijing, China
Li G.-J.; Liu W.-N.
Views/Downloads: 9/0  |  Submitted: 2014/05/15
Design of high speed and parallel compression system used in the big area CCD of high frame frequency (EI CONFERENCE) (Conference paper)
2011 International Conference on Precision Engineering and Non-Traditional Machining, PENTM 2011, December 9, 2011 - December 11, 2011, Xi'an, China
Liu Y.-Y.; Gao Y.-H.; Li G.-N.; Wang W.-H.; Zhang R.-F.; Jin L.-X.
Views/Downloads: 47/0  |  Submitted: 2013/03/25
Addressing the characteristics of area-array CCD cameras, such as high resolution and high frame rate, this paper proposes a highly integrated, high-speed parallel image compression system. First, based on the operating principle of the area-array CCD, an FPGA implements the timing drive and multichannel parallel analog signal processing to raise the CCD's output frame rate. Second, an image compression scheme based on the FPGA's embedded MicroBlaze processor and the ADV212 compression chip achieves real-time compression of the high-speed CCD output. Finally, by sampling the analog output of the CCD, real-time compression of the large-area CCD image is carried out at different compression ratios and the compression performance is analyzed. Experimental results show that the scheme achieves real-time image compression at a maximum data rate of 520 Mbps. At a compression bit rate of 0.15 bpp, the peak signal-to-noise ratio reaches 36 dB. Image acquisition and image compression are integrated, which reduces data transfer between them and improves the level of system integration.
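The rate figures quoted in this abstract can be sanity-checked with a little arithmetic. The sketch below assumes 8-bit source pixels (a detail the abstract does not state) and derives the implied compression ratio and compressed stream rate.

```python
# Back-of-the-envelope rate arithmetic for the figures quoted in the abstract.
# Assumption (not stated in the abstract): 8-bit source pixels.
RAW_RATE_MBPS = 520      # maximum raw data rate handled in real time (from abstract)
BITS_PER_PIXEL_SRC = 8   # assumed source bit depth
BPP_COMPRESSED = 0.15    # compression bit rate quoted in the abstract

ratio = BITS_PER_PIXEL_SRC / BPP_COMPRESSED                              # ~53:1
compressed_rate = RAW_RATE_MBPS * BPP_COMPRESSED / BITS_PER_PIXEL_SRC    # ~9.75 Mbps

print(f"implied compression ratio ~ {ratio:.1f}:1")
print(f"compressed stream rate ~ {compressed_rate:.2f} Mbps at 520 Mbps input")
```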
Design of multispectral remote sensing image compression system (EI CONFERENCE) (Conference paper)
2010 International Conference on Computer Application and System Modeling, ICCASM 2010, October 22, 2010 - October 24, 2010, Shanxi, Taiyuan, China
Fan Z.-L.; Xu S.-Y.
Views/Downloads: 16/0  |  Submitted: 2013/03/25
In order to obtain higher compression quality for multispectral remote sensing images, a remote sensing image compression system based on the JPEG2000 standard and implemented with an FPGA and the dedicated compression chip ADV212 is proposed. The system uses the ADV212, a dedicated compression chip newly released by Analog Devices, to achieve fast, high-quality compression of the large volume of remote sensing image data. The FPGA handles image data input, compressed-stream output, image preprocessing, and control of the ADV212's operating mode. Experimental results show that the system has low power consumption, low cost, and easy adjustability, achieves a good compression effect, and meets the requirement for a high compression ratio on multispectral remote sensing images. © 2010 IEEE.
An improved fast parallel SPIHT algorithm and its FPGA implementation (EI CONFERENCE) (Conference paper)
2010 2nd International Conference on Future Computer and Communication, ICFCC 2010, May 21, 2010 - May 24, 2010, Wuhan, China
Wu Y.-H.; Jin L.-X.; Tao H.-J.
Views/Downloads: 17/0  |  Submitted: 2013/03/25
To meet the stringent demand for real-time compression of high-speed, high-resolution images, such as remote sensing and medical images, this paper improves the No List SPIHT (NLS) algorithm and proposes a fast parallel SPIHT algorithm suitable for FPGA implementation. It processes all bit planes simultaneously at a rate of 4 pixels per clock period, so the encoding time depends only on the image resolution. The experimental results show that the processing capacity reaches 200 MPixels/s with a 50 MHz input clock; the system needs 2.29 ms to losslessly compress a 512x512x8-bit image, and only 1.31 ms in the optimal case. The improved algorithm keeps the high SNR unchanged, greatly increases speed, and reduces the required storage. It can perform lossless or lossy compression with a controllable compression ratio, and can be widely used for high-speed, high-resolution image compression. © 2010 IEEE.
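The throughput and best-case timing figures in this abstract follow directly from the stated 4 pixels per clock period at 50 MHz; a small check of that arithmetic is sketched below (the 2.29 ms figure presumably includes overhead not broken down in the abstract, which is an assumption on our part).

```python
# Sanity check of the throughput figures quoted in the abstract:
# 4 pixels per clock period at a 50 MHz input clock, 512x512x8-bit image.
PIXELS_PER_CYCLE = 4
CLOCK_HZ = 50e6
WIDTH = HEIGHT = 512

throughput = PIXELS_PER_CYCLE * CLOCK_HZ             # 200 MPixels/s
best_case_ms = WIDTH * HEIGHT / throughput * 1e3     # ~1.31 ms (optimal case)

print(f"throughput: {throughput / 1e6:.0f} MPixels/s")
print(f"best-case encoding time for 512x512: {best_case_ms:.2f} ms")
```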
SoC test data compression technique based on RLE-G (EI CONFERENCE) (Conference paper)
2010 International Conference on Advanced Measurement and Test, AMT 2010, May 15, 2010 - May 16, 2010, Sanya, China
Deng C.; Liu W.; Zheng X.; Yang L.
Views/Downloads: 20/0  |  Submitted: 2013/03/25
Test data compression is an effective way to reduce test data volume and test time, and to overcome automatic test equipment (ATE) memory and bandwidth limitations. We analyze the limitations of current test data compression algorithms and, drawing on previous work, derive an optimal compression coding model suited to SoC test data. In addition, making full use of the correlation among test vectors and the advantages of statistical coding, we present an efficient test data compression method, RLE-G, based on this coding model, and give the boundary conditions for its optimal compression efficiency and the steps for its realization. Experimental results on the ISCAS 89 benchmark circuits demonstrate that RLE-G achieves a high compression ratio. © (2010) Trans Tech Publications.
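For readers unfamiliar with run-length-based test data compression, the sketch below shows plain run-length encoding of a test vector. It only illustrates the kind of redundancy such schemes exploit; it is not the RLE-G code construction itself, which the abstract does not detail.

```python
# Illustrative plain run-length encoding of a SoC test vector. Generic RLE,
# NOT the paper's RLE-G coding, whose details are not given in the abstract.
def rle_encode(bits: str) -> list[tuple[str, int]]:
    """Collapse a bit string into (symbol, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

# Test cubes tend to contain long constant runs once don't-care bits are
# filled, e.g. after mapping every 'X' position to '0':
vector = "0000001111000000000011"
print(rle_encode(vector))   # [('0', 6), ('1', 4), ('0', 10), ('1', 2)]
```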
Wavelet-based contourlet coding using SPECK algorithm (EI CONFERENCE) (Conference paper)
2008 9th International Conference on Signal Processing, ICSP 2008, October 26, 2008 - October 29, 2008, Beijing, China
Xiu-Wei T.; Xi-Feng Z.; Tie-Fu D.
Views/Downloads: 44/0  |  Submitted: 2013/03/25
The compression and storage method of the same kind of medical images-DPCM (EI CONFERENCE) (Conference paper)
4th International Conference on Photonics and Imaging in Biology and Medicine, September 3, 2005 - September 6, 2005, Tianjin, China
Zhao X.; Wei J.; Zhai L.; Liu H.
Views/Downloads: 9/0  |  Submitted: 2013/03/25
Medical imaging has begun to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) can take minutes, too slow for interactive teleradiology. This calls for image compression that significantly reduces the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, lossless techniques, which allow perfect reconstruction of the original images, yield modest compression ratios, while techniques that yield higher compression ratios are lossy; that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of finding compression algorithms that are lossless (for diagnostic and legal reasons) and yet achieve high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes that are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method based on differential pulse code modulation (DPCM) is presented.
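A minimal sketch of the general DPCM idea (predict each pixel from its left neighbour and code only the residual) is given below; the paper's actual predictor and entropy coder are not specified in the abstract, so this is purely illustrative.

```python
# Minimal one-dimensional DPCM along an image row: each pixel is predicted
# from its left neighbour and only the prediction residual is kept.
import numpy as np

def dpcm_encode(row: np.ndarray) -> np.ndarray:
    """Return residuals: first sample verbatim, then differences to the left neighbour."""
    residuals = np.empty_like(row, dtype=np.int32)
    residuals[0] = row[0]
    residuals[1:] = row[1:].astype(np.int32) - row[:-1].astype(np.int32)
    return residuals

def dpcm_decode(residuals: np.ndarray) -> np.ndarray:
    """Exactly invert the encoder (lossless), recovering the original row."""
    return np.cumsum(residuals).astype(np.uint8)

row = np.array([100, 102, 101, 105, 110], dtype=np.uint8)
res = dpcm_encode(row)
assert np.array_equal(dpcm_decode(res), row)
print(res)   # residuals cluster around zero, so they entropy-code well
```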
Lossless wavelet compression on medical image (EI CONFERENCE) (Conference paper)
4th International Conference on Photonics and Imaging in Biology and Medicine, September 3, 2005 - September 6, 2005, Tianjin, China
Zhao X.; Wei J.; Zhai L.; Liu H.
Views/Downloads: 28/0  |  Submitted: 2013/03/25
An increasing amount of medical imagery is created directly in digital form. Clinical Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data, so efficient compression of these data is crucial. Several lossless and lossy compression techniques have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim for high compression ratios by allowing some acceptable degradation of the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression are proposed in the literature. Recent advances in lossy compression include vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order of 50:1 or even more), they do not allow exact reconstruction of the original data. Lossless compression techniques permit perfect reconstruction of the original image, but the achievable compression ratios are only of the order of 2:1 up to 4:1. In our paper, we use a lifting scheme to generate truly lossless, nonlinear, integer-to-integer wavelet transforms. At the same time, we employ a coding algorithm that produces an embedded code, in which the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image. Therefore, a compression scheme generating an embedded code can send a coarse version of the image over the network first and continue with progressive transmission of the refinement details. Experimental results show that our method achieves excellent performance in compression ratio and reconstructed image quality.
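A minimal sketch of an integer-to-integer lifting step (the Haar / S-transform case) is given below to illustrate the "truly lossless" property the abstract relies on; the paper's actual filter bank may differ.

```python
# Integer-to-integer wavelet step via lifting (Haar / S-transform case).
import numpy as np

def haar_lift_forward(x: np.ndarray):
    """One level of integer Haar lifting on an even-length 1-D signal."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = b - a                      # predict step: detail coefficients
    s = a + (d >> 1)               # update step: integer approximation (floor mean)
    return s, d

def haar_lift_inverse(s: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Exactly invert the lifting step and re-interleave the samples."""
    a = s - (d >> 1)
    b = d + a
    x = np.empty(a.size + b.size, dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x

x = np.array([7, 9, 3, 4, 10, 10, 0, 255], dtype=np.int64)
s, d = haar_lift_forward(x)
assert np.array_equal(haar_lift_inverse(s, d), x)   # perfect (lossless) reconstruction
```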
Wavelet packet and neural network basis medical image compression (EI CONFERENCE) (Conference paper)
4th International Conference on Photonics and Imaging in Biology and Medicine, September 3, 2005 - September 6, 2005, Tianjin, China
Zhao X.; Wei J.; Zhai L.
Views/Downloads: 14/0  |  Submitted: 2013/03/25
It is difficult to obtain a high compression ratio and a good reconstructed image with conventional methods, so we present a new medical image compression method: the image is decomposed and reconstructed with the wavelet packet transform, and before reconstruction a neural network is used in place of other coding methods to code the coefficients in the wavelet packet domain. Kohonen's neural network algorithm is used not only for its vector quantization capability but also for its topological property, which allows an increase of about 80% in the compression rate. Compared with the JPEG standard, this compression scheme shows better performance (in terms of PSNR) for compression rates higher than 30. The method achieves a large compression ratio with good PSNR. Results show that the image can be compressed greatly and the original image can be recovered well. In addition, the approach can be realized easily in hardware.
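A small sketch of vector-quantizing coefficient blocks with a 1-D Kohonen self-organizing map is given below; the block size, codebook size, and learning schedule are illustrative assumptions, not parameters reported in the abstract.

```python
# Illustrative Kohonen SOM vector quantization of coefficient blocks.
# All sizes and the learning schedule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
blocks = rng.normal(size=(1000, 16))   # stand-in for 4x4 wavelet-packet coefficient blocks
codebook = rng.normal(size=(32, 16))   # 32 codewords -> 5 bits per coded block

for t in range(2000):                  # SOM training: winner and its neighbours move toward the sample
    x = blocks[rng.integers(len(blocks))]
    winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
    lr = 0.5 * (1 - t / 2000)                      # decaying learning rate
    radius = max(1, int(4 * (1 - t / 2000)))       # shrinking topological neighbourhood
    for j in range(max(0, winner - radius), min(len(codebook), winner + radius + 1)):
        h = np.exp(-((j - winner) ** 2) / (2 * radius ** 2))
        codebook[j] += lr * h * (x - codebook[j])

# Encoding a block is then just a nearest-codeword index (5 bits instead of 16 coefficients).
idx = np.argmin(np.linalg.norm(codebook - blocks[0], axis=1))
print("block 0 -> codeword", idx)
```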

