Augmented Reality and Intrinsic Functional Connectivity Visualization Application: ARiBraiN3T

G.M. Rojas1, J.A. Fuentes1, M. Gálvez2, D.S. Margulies3

 

1Advanced Medical Image Processing Lab, 2Department of Radiology, Clínica las Condes, Santiago, Chile, 3Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.

 

Background: Visualization of complex neuroimaging data such as functional connectivity, tractography, and functional imaging is challenging, and many different visualization solutions have been proposed. Some authors describe 2-D and 3-D advanced neuroimaging visualization methods for tractography and functional connectivity data; others describe stereoscopic methods for viewing neuroradiological 3-D images. An Augmented Reality (AR) system generates a composite view for the user: the combination of the real scene viewed by the user and a virtual scene generated by the computer, which augments (or supplements) the real scene with additional information (e.g. sound, video, graphics). Here we describe an Android AR application that shows the Default Mode, Dorsal Attention, and Ventral Attention Networks.

 

Methods: Using the MNI152 2 mm standard MPRAGE image, we created a brain mesh model with the Gray Scale Model Maker algorithm of the 3D Slicer software. The Yeo 7-network liberal mask (surfer.nmr.mgh.harvard.edu/fswiki/CorticalParcellation_Yeo2011) was used to create mesh models of the 7 standard Yeo networks (Model Maker algorithm, 3D Slicer software). The Laplacian HC algorithm (MeshLab) was used to smooth the mesh of each of the three Yeo networks used and of the brain. The Android application was written in C# and JavaScript with the following software tools: Unity 4.02f (graphics engine), Sublime Text 3, Blender (meshes and materials), GIMP (textures), and Qualcomm Vuforia for Unity Android & iOS (augmented reality).
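The mesh-smoothing step above can be sketched as follows. This is a minimal Python illustration of plain Laplacian smoothing on a triangle mesh, not the MeshLab implementation; the Laplacian HC variant used in the pipeline adds a back-projection correction to counter shrinkage, which this sketch omits.

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Simple Laplacian smoothing: each iteration moves every vertex
    a fraction `lam` of the way toward the average of its neighbors.
    (MeshLab's Laplacian HC algorithm adds a correction step on top
    of this to reduce volume shrinkage.)"""
    n = len(vertices)
    # Build vertex adjacency from the triangle faces.
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    v = np.asarray(vertices, dtype=float)
    for _ in range(iterations):
        avg = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                        for i, nb in enumerate(neighbors)])
        v = v + lam * (avg - v)
    return v

# Toy example: a tetrahedron (four vertices, four triangle faces).
verts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0.3, 0.3, 1.0]]
faces = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (0, 2, 3)]
smoothed = laplacian_smooth(verts, faces, iterations=5)
```

Each iteration pulls the vertices toward their neighborhood averages, so the mesh becomes smoother (and, without the HC correction, progressively smaller).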

 

Results: ARiBraiN3T (Augmented Reality intrinsic Brain Networks, 3 Targets): by focusing the device camera on the three target images (Fig. 1), three 3D transparent brains are displayed, fused with the default mode network in light blue, the dorsal attention network in blue, and the ventral attention network in green (Fig. 1). The user can rotate the brain using the onscreen controls, by rotating the viewing angle over the target image, or by rotating the target image itself. The size of the brain can be changed by modifying the distance between the Android-based device and the target image.
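The onscreen rotation control amounts to applying a rotation to the brain model each frame (in Unity this is typically done via the model's transform). A minimal Python sketch of the underlying math, assuming the drag gesture has already been mapped to an angle about the vertical axis:

```python
import numpy as np

def rotate_y(vertices, angle_deg):
    """Rotate mesh vertices about the vertical (y) axis by angle_deg
    degrees, as a per-frame drag handler might do. Unity applies the
    equivalent transform on the GPU rather than to the vertex data."""
    t = np.radians(angle_deg)
    # Standard right-handed rotation matrix about the y axis.
    r = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return np.asarray(vertices, dtype=float) @ r.T

# A point on the x axis, rotated a quarter turn.
p = rotate_y([[1, 0, 0]], 90)  # ≈ [[0, 0, -1]]
```

The distance-based scaling described above needs no extra code: the AR tracker estimates the camera pose relative to the target, so the projected model naturally grows as the device approaches the target image.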

 

Conclusions: ARiBraiN is an application that demonstrates the fusion of augmented reality techniques with functional connectivity data. It might be possible to further adapt this application to be used in scientific reporting and/or methodology.

 


The Android app is available on Google Play.

 
