Authors: Tejas Sudharshan Mathai, John Galeotti, Samantha Horvath, George Stetten
DOI: 10.1007/978-3-319-10437-9_1
Keywords: Computer science, Coherence (signal processing), Millisecond, Graphics processing unit, Computer graphics (images), Filter (signal processing), Layer (object-oriented design), Noise reduction, Computer vision, Artificial intelligence, Resolution (electron density), Feature (computer vision)
Abstract: An image analysis algorithm is described that uses a Graphics Processing Unit (GPU) to detect, in real time, the shallowest subsurface tissue layer present in an OCT image obtained by a prototype SDOCT corneal imaging system. The system has a scanning depth range of 6 mm and can acquire 15 volumes per second, at the cost of lower resolution and signal-to-noise ratio (SNR) than diagnostic scanners. To the best of our knowledge, we are the first to experiment with non-median percentile filtering for simultaneous noise reduction and feature enhancement in images, and we believe the first to implement any form of percentile filtering on a GPU. The algorithm was applied to five different test images. On average, it took ~0.5 milliseconds to preprocess an image with the 20th-percentile filter, and ~1.7 milliseconds for the second stage to detect the faintly imaged transparent surface.
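The abstract's key preprocessing idea is a non-median percentile filter (here, the 20th percentile rather than the usual 50th/median). The sketch below is only a minimal CPU illustration of that concept using SciPy's percentile_filter; it is not the authors' GPU implementation, and the window size and synthetic input are assumptions made for demonstration.

```python
import numpy as np
from scipy.ndimage import percentile_filter

# Synthetic stand-in for a noisy OCT B-scan (real data would come from the
# scanner); the 512x1024 shape is an arbitrary assumption.
rng = np.random.default_rng(0)
bscan = rng.normal(loc=0.3, scale=0.1, size=(512, 1024)).astype(np.float32)

# 20th-percentile filter: within each local window, keep the value at the
# 20th percentile. Compared with a median filter, a low percentile more
# aggressively suppresses bright speckle while retaining darker structure.
# The 5x5 window is a hypothetical choice, not taken from the paper.
filtered = percentile_filter(bscan, percentile=20, size=5)
```

A percentile filter is a rank-order filter, so the median filter is just its 50th-percentile special case; choosing a non-median rank is what lets the same pass both denoise and bias the image toward (or away from) bright features.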