Research in the center covers a broad spectrum of multimedia signal processing and analysis. In recent years, researchers in the center have pioneered the development of feature extraction methods with applications to image registration, segmentation, steganography, and information retrieval from large multimedia databases. Researchers in the center participate in several interdisciplinary projects, including the Bio-Image Informatics project, whose goal is to develop, test, and deploy a unique, fully operational distributed digital library of bio-molecular image data accessible to researchers around the world, and the graduate training program in Interactive Digital Multimedia.

BisQue has been used to manage and analyze 10 hours of video of unexplored ocean habitats captured by underwater remotely operated vehicles, as well as 23.3 hours (884 GB) of high-definition video from dives in Bering Sea submarine canyons, to evaluate the density of fishes, structure-forming corals, and sponges, and to document and describe fishing damage.

We propose a deep learning model to identify the characteristic differences in Computed Tomography (CT) scans between COVID-19 and other similar types of viral pneumonia.
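As a rough sketch of what such a model could look like, the following Python snippet builds a slice-level binary CNN classifier. The backbone choice, single-channel input, and training loop are illustrative assumptions, not the published method.

# Minimal sketch of a slice-level CT classifier, assuming the task is
# binary (COVID-19 vs. other viral pneumonia). Backbone and training
# details are illustrative assumptions, not the center's method.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()                      # generic CNN backbone
model.conv1 = nn.Conv2d(1, 64, kernel_size=7,  # CT slices are single-channel
                        stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # COVID-19 vs. other pneumonia

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(slices, labels):
    """One optimization step on a batch of CT slices (N, 1, H, W)."""
    optimizer.zero_grad()
    loss = criterion(model(slices), labels)
    loss.backward()
    optimizer.step()
    return loss.item()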

We propose a novel and efficient algorithm to model high-level topological structures of neuronal fibers.

We propose a novel weakly supervised method to improve the boundaries of segmented 3D nuclei by utilizing an over-segmented image.
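A minimal sketch of the underlying idea follows, under the assumption that each over-segmented fragment ("supervoxel") is assigned wholly to a nucleus when most of its voxels overlap the coarse mask; the function and threshold are illustrative, not the exact method.

# Snap a coarse 3D nucleus mask to over-segmentation boundaries by a
# per-fragment majority vote. Threshold and background convention (0)
# are assumptions for illustration.
import numpy as np

def refine_with_oversegmentation(coarse_mask, overseg_labels, threshold=0.5):
    """coarse_mask    : (Z, Y, X) bool array, initial nucleus segmentation
    overseg_labels : (Z, Y, X) int array, over-segmented fragments
    """
    refined = np.zeros_like(coarse_mask)
    for label in np.unique(overseg_labels):
        if label == 0:            # assume label 0 is background
            continue
        fragment = overseg_labels == label
        if coarse_mask[fragment].mean() >= threshold:  # majority vote
            refined[fragment] = True
    return refined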

The Deep Eye-CU (DECU) project integrates temporal motion information with the multimodal multiview network to monitor patient sleep poses. It uses deep features, which slightly improved patient sleep pose classification accuracy compared to engineered features such as Hu moments and Histograms of Oriented Gradients (HOG). DECU also uses principles from Hidden Markov Models (HMMs), a popular technique in speech processing. It leverages pose time-series data and assumes that patient motion can be modeled using a "state-machine" approach. However, HMMs are limited in their ability to model state duration. At a high level, state duration is used to distinguish between poses and pseudo poses, which are transitory poses seen when patients move from one position to another. The DECU framework (system and algorithms) is currently deployed in a real medical ICU at Santa Barbara Cottage Hospital, where study volunteers have consented to the study.
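The duration-based distinction between poses and pseudo poses can be illustrated with a simple filter over the per-frame pose sequence; the threshold and label encoding below are illustrative assumptions, not DECU's actual implementation.

# Label runs of identical states as poses or pseudo poses, assuming
# (as described above) that brief state visits are pseudo poses
# produced while the patient moves between positions.
import itertools

def split_poses(state_sequence, min_duration=10):
    """state_sequence : iterable of per-frame pose labels
    min_duration   : minimum run length (frames) to count as a true pose
    """
    segments = []
    for state, run in itertools.groupby(state_sequence):
        length = sum(1 for _ in run)
        kind = "pose" if length >= min_duration else "pseudo pose"
        segments.append((state, length, kind))
    return segments

# e.g. split_poses("AAAAABAAACCCCCC", min_duration=4)
# -> [('A', 5, 'pose'), ('B', 1, 'pseudo pose'),
#     ('A', 3, 'pseudo pose'), ('C', 6, 'pose')]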

The Eye-CU project incorporates the multiview aspect of the network to remove the complex and prohibitively expensive pressure mat. This work uses purely visual RGB and depth sensors positioned at different locations (i.e., multiview). Eye-CU learns to weight the contribution of each sensor and view via a couple-constrained Least-Squares (cc-LS) modality trust estimation algorithm. The Eye-CU system in combination with cc-LS matches the performance of the MEYE network while reliably classifying patient poses in challenging scene conditions (variable illumination and various sensor occlusions).
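The core of least-squares trust estimation can be sketched as follows. Note that this simplified version enforces only nonnegativity on the weights; the published cc-LS algorithm couples constraints across modalities and views.

# Fit nonnegative per-modality weights so that combined classifier
# scores match the ground truth. A simplified stand-in for cc-LS.
import numpy as np
from scipy.optimize import nnls

def estimate_trust(scores, labels):
    """scores : (n_samples, n_modalities) classifier confidences
    labels : (n_samples,) ground-truth indicator in [0, 1]
    returns: nonnegative weight per modality, normalized to sum to 1
    """
    weights, _ = nnls(scores, labels)   # min ||scores @ w - labels||, w >= 0
    s = weights.sum()
    return weights / s if s > 0 else weights

# Fused prediction is then a weighted sum of per-modality scores:
# fused = scores_new @ estimate_trust(scores_train, labels_train)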

For instance, the MEYE (multimodal ICU) network focuses on the detection of patient sleep poses using data from a multimodal sensor network: cameras (RGB, depth, thermal), a pressure mat (flexible sensor array), and room environmental sensors (temperature, humidity, sound). The multimodal data allows the algorithms to deal with challenging scene conditions (partial sensor occlusions and illumination changes). The room sensors are used to trigger and tune modality weights.
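A hypothetical illustration of how room sensors could gate modality weights is shown below; the thresholds and rules are invented for illustration and are not taken from the MEYE system.

# Adjust per-modality weights from environmental readings. All rules
# and thresholds here are hypothetical examples.
def tune_modality_weights(lux, sound_db, base=None):
    """Return per-modality weights adjusted by room sensor readings."""
    weights = dict(base or {"rgb": 1.0, "depth": 1.0,
                            "thermal": 1.0, "pressure": 1.0})
    if lux < 5:                      # near-dark room: RGB unreliable
        weights["rgb"] *= 0.1
        weights["thermal"] *= 1.5
    if sound_db > 70:                # loud activity: staff likely present,
        weights["pressure"] *= 0.5   # mat readings may be disturbed
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}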

The Multimodal Multiview Network for Healthcare is a collaborative effort between researchers from the Electrical and Computer Engineering Department at the University of California, Santa Barbara and the intensivists and medical practitioners from the Medical Intensive Care Unit (MICU) at Santa Barbara Cottage Hospital. The objective of the research is to improve quality of care by monitoring patients and workflows in real ICU rooms. The network is non-disruptive and non-intrusive, and its methods and protocols protect and maintain the privacy of patients and staff.

In this project, we integrate the Dream.3D software package into BisQue. Dream.3D is an open and modular software package that allows users to reconstruct, instantiate, quantify, mesh, handle, and visualize microstructures digitally. Because BisQue is web-based, integrating Dream.3D with it means that runs can be managed from any computer with a web browser. Similarly, results and visualizations can be shared with collaborators directly from a web page, without any software installation. In addition, analyses can be scaled out to run on compute clusters for faster exploration of parameter spaces. Furthermore, BisQue adds provenance tracking, allowing the Dream.3D user to see exactly which inputs, outputs, and analysis settings produced a given result. This greatly improves reproducibility and the understanding of analysis history.
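A hypothetical sketch of launching a Dream.3D run against a BisQue server over plain HTTP follows; the server URL, credentials, endpoint path, module name, and XML payload are all illustrative assumptions rather than the exact BisQue API.

# Submit a module execution (MEX) request to a BisQue server. BisQue
# tracks the returned MEX record for provenance (inputs, outputs,
# settings, status). Endpoint and payload shapes are assumptions.
import requests

BISQUE = "https://bisque.example.org"        # assumed server URL
AUTH = ("user", "password")                  # assumed credentials

mex_request = """
<mex name="Dream3D">
  <tag name="inputs">
    <tag name="pipeline" value="reconstruction.json"/>
  </tag>
</mex>
"""

resp = requests.post(f"{BISQUE}/module_service/Dream3D",
                     data=mex_request,
                     headers={"Content-Type": "text/xml"},
                     auth=AUTH)
resp.raise_for_status()
print(resp.text)   # MEX document with run status and provenance links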

The BisQue image analysis platform was used to develop algorithms to assay phenotypes such as directional root-tip growth or differences in seed size.
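As an illustration of the seed-size side of such an assay, the following generic scikit-image sketch measures per-seed areas; it assumes a simple Otsu threshold separates seeds from the background and is not the center's actual BisQue module.

# Measure the area of each seed in a grayscale image using standard
# scikit-image calls. Thresholding choice is an assumption.
import numpy as np
from skimage import filters, measure

def seed_areas(image):
    """Return the area (in pixels) of each seed in a grayscale image."""
    mask = image > filters.threshold_otsu(image)  # segment seeds
    labels = measure.label(mask)                  # one label per seed
    return np.array([r.area for r in measure.regionprops(labels)])

# Comparing two genotypes then reduces to comparing area distributions,
# e.g. seed_areas(img_wildtype).mean() vs. seed_areas(img_mutant).mean()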