We report a scanning-fiber-based method for imaging bioengineered tissue constructs such as synthetic carotid arteries. Our approach is based on directly embedding one or more hollow-core silica fibers within the tissue scaffold to function as micro-imaging channels (MICs). The imaging process is carried out by translating and rotating an angle-polished fiber micro-mirror within the MIC to scan excitation light across the tissue scaffold. The locally emitted fluorescent signals are captured using an electron-multiplying CCD (EMCCD) camera and then mapped into fluorophore distributions according to the fiber micro-mirror positions. Using an optical phantom composed of fluorescent microspheres, tissue scaffolds, and porcine skin, we demonstrated single-cell-level imaging resolution (20 to 30 μm) at an imaging depth that exceeds the photon transport mean free path by one order of magnitude. This result suggests that the imaging depth is no longer constrained by photon scattering, but rather by the requirement that the fluorophore signal overcome the background "noise" generated by processes such as scaffold autofluorescence. Finally, we demonstrated the compatibility of our imaging method with tissue engineering by visualizing endothelial cells labeled with green fluorescent protein through a ∼500 μm thick, highly scattering electrospun scaffold.
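As an illustration of the mapping step described above, the sketch below shows one way fluorescence readings collected at successive mirror poses could be binned into a 2D fluorophore map. This is a minimal sketch, not the authors' implementation: the pixel pitch, the MIC radius, and the unwrapping of the cylindrical scan (axial translation, rotation angle) into a flat image are illustrative assumptions.

```python
import numpy as np

def map_fluorophores(translations_um, rotations_rad, intensities,
                     mic_radius_um=125.0, pixel_um=25.0):
    """Bin EMCCD-integrated intensities into an image indexed by fiber micro-mirror pose."""
    x = np.asarray(translations_um, dtype=float)                        # axial mirror position
    y = mic_radius_um * np.unwrap(np.asarray(rotations_rad, dtype=float))  # arc length around the MIC
    nx = int(np.ptp(x) // pixel_um) + 1
    ny = int(np.ptp(y) // pixel_um) + 1
    image = np.zeros((ny, nx))
    counts = np.zeros((ny, nx))
    ix = ((x - x.min()) // pixel_um).astype(int)
    iy = ((y - y.min()) // pixel_um).astype(int)
    np.add.at(image, (iy, ix), intensities)                             # accumulate signal per pixel
    np.add.at(counts, (iy, ix), 1)
    return image / np.maximum(counts, 1)                                # mean signal per pixel

# Example with synthetic data: a helical scan of 1 mm translation over four rotations.
poses = np.linspace(0, 1000, 2000)
angles = np.linspace(0, 8 * np.pi, 2000)
signal = np.random.default_rng(0).poisson(50, 2000)                     # synthetic EMCCD counts
fluorophore_map = map_fluorophores(poses, angles, signal)
```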
As post-secondary education migrates online, developing and evaluating new avenues for assessment in anatomy is paramount. Three-dimensional (3D) visualization technology is one area with the potential to augment or even replace resource-intensive cadaver use in anatomical education. This manuscript details the development of a smartphone application, entitled "Virtual Reality Bell-Ringer (VRBR)," capable of displaying monoscopic two-dimensional (2D) or stereoscopic 3D images with an inexpensive cardboard headset for use in spot examinations. The processes for cadaveric image use, creation, and pinning are explained, and the source code is provided. To validate this tool, this paper compares traditional laboratory-based spot examination stations against those administered using the VRBR application to test anatomical knowledge. Participants (undergraduate, n = 38; graduate, n = 13) completed three spot examinations specific to their level of study, one in each modality (2D, 3D, laboratory), as well as a mental rotation test (MRT), the Stereo Fly stereotest, and a cybersickness survey. A repeated-measures ANCOVA indicated that participants performed significantly better on laboratory and 3D stations than on 2D stations. Moderate to severe cybersickness symptoms were reported by 63% of participants in at least one category while using the VRBR application; the highest-reported symptoms included eye strain, general discomfort, difficulty focusing, and difficulty concentrating. Overall, the VRBR application is a promising tool for its portability, affordability, and accessibility. However, because of the reported cybersickness and other technical limitations, using VRBR as an alternative to cadaveric specimens for testing anatomical knowledge presents several challenges that must be addressed before widespread adoption.
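For readers who want to reproduce this style of analysis, the following is a minimal sketch of a within-subject comparison of the three modalities with the MRT score as a covariate. The column names, the long-format score table, and the use of a linear mixed model as a stand-in for the repeated-measures ANCOVA reported above are all assumptions, not the study's actual analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per participant x modality (2D, 3D, laboratory),
# with the spot-examination score and the mental rotation test (MRT) covariate.
scores = pd.read_csv("spot_exam_scores.csv")

model = smf.mixedlm(
    "score ~ C(modality, Treatment(reference='2D')) + mrt_score",
    data=scores,
    groups=scores["participant_id"],     # repeated measures within each participant
)
result = model.fit()
print(result.summary())                  # modality effects relative to the 2D baseline
```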
Recent technological advancements in X-Reality (XR) have enabled remarkably realistic models for anatomical education. Despite a lack of evidence regarding the efficacy of XR in this context, several institutions have adopted these technologies as primary educational tools in anatomy, as an alternative to traditional cadaveric laboratories. In our earliest study, we evaluated a 3D, interactive projection on a 2D screen against a physical model of a female pelvis; those data demonstrated that participants who learned on the physical model performed significantly better during testing. Subsequently, we explored the efficacy of more intricate XR systems by comparing the Microsoft HoloLens, a mixed-reality (MR) device, to a physical model in anatomy education. We recruited 20 McMaster University students and ran a preliminary study to gather qualitative data regarding the optimal MR environment, recording participant preferences after they observed several virtual objects against different coloured backgrounds and various lighting combinations. We used these data to build the testing environment for the MR model, for example by adding black curtains and floor tiles to the room and using a single light over the projection. These conditions were also used for the physical model, thus placing it at a slight disadvantage. We then recruited 40 McMaster University students with no prior anatomical education and randomized them into two groups: one learning on a physical model of a female pelvis and one learning on the MR model of a female pelvis. We measured two possible covariates, spatial and stereoscopic ability, through two pretest assessments: a Mental Rotations Test (MRT) and a Titmus Fly Test, respectively. Participants were then given 10 minutes to learn 20 structures using their respective models and were tested on a female cadaveric pelvis using a 25-question test with no time limit. This test included 15 nominal questions, which asked participants to name the indicated structure, and 10 functional questions, which asked participants to determine the function of a structure based on its location and form. We hypothesized that, given the realism of the model created by the MR system, it would perform at least as well as the physical model in the context of anatomical education. Our assessments found that participants learning on the physical model performed significantly better than their MR counterparts on both nominal (65% vs. 41%; p = 0.0051) and functional measures (42% vs. 31%; p = 0.0134). These results remained consistent when controlling for the aforementioned covariates. Ultimately, our results indicate that the MR device is a less effective tool for anatomical education than traditional physical models. Our future directions involve exploring possible determinants of the physical model's superiority, such as stereoscopic vision, as well as assessing other XR systems, such as virtual reality headsets, in the context of anatomy education.

Support or Funding Information: Self-funded.

This abstract is from the Experimental Biology 2018 Meeting. There is no full-text article associated with this abstract published in The FASEB Journal.
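As a complement to the within-subject sketch above, the between-group comparison in the abstract (physical vs. MR, controlling for spatial and stereoscopic ability) could be expressed as an ANCOVA fit by ordinary least squares. This is a hypothetical illustration under assumed column names, not the authors' analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant table: learning group (physical vs. MR), nominal and
# functional test scores, and the two pretest covariates (MRT, Titmus Fly stereoacuity).
df = pd.read_csv("pelvis_study.csv")

# ANCOVA via OLS: score by learning modality, adjusted for the pretest covariates.
nominal = smf.ols("nominal_score ~ C(group) + mrt_score + stereo_score", data=df).fit()
functional = smf.ols("functional_score ~ C(group) + mrt_score + stereo_score", data=df).fit()
print(nominal.summary())
print(functional.summary())
```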
Three-dimensional (3D) visualization technology such as virtual reality (VR) can illustrate and replicate physical dissection, and its novelty has captured the interest of many educational institutions. Unfortunately, the testing of 3D technology lags behind its development, and most research is confined to case studies. This study's objectives are to (1) analyze the short-term and long-term efficacy of VR dissection technology compared to an interactive, physical dissection model, and (2) determine whether other factors, such as spatial ability, impact the effectiveness of learning anatomy from VR models. Previous research in our laboratory has shown static physical models to be superior to VR models for learning anatomy; the physical dissection model is therefore hypothesized to perform better in teaching anatomy. The interactive physical model consists of a 3D-printed bony pelvis and fabric perineal structures to effectively display the dissections. The physical model was scanned to produce an identical VR replica, which is displayed on an HTC Vive. This crossover study will use undergraduate McMaster University students (n = 52) with no formal anatomy education. Participants will be asked to learn anatomical structures from both the physical and VR models and will be tested on the knowledge from each model in two separate tests. After 48 hours, they will be tested again to determine whether either model yields better long-term retention. Tests will include nominal, functional, and spatial questions to assess recognition, critical thinking, and spatial awareness. Preliminary data (n = 13) suggest that there is no statistically significant difference between the two models during short-term testing (p = 0.24) or long-term testing (p = 0.054). On short-term retention tests, participants received average scores of 7(3) and 6(2) out of 15 when learning from the VR and physical models, respectively; on long-term testing, the corresponding averages were 7(3) and 5(2) out of 15. Data collection is underway and expected to yield complete results by January 2019. Data from additional participants will further elucidate the impact of VR and physical dissection models on student learning, and these results could help guide and improve the development of future anatomy education programs.

Support or Funding Information: This project was self-funded.

This abstract is from the Experimental Biology 2019 Meeting. There is no full-text article associated with this abstract published in The FASEB Journal.
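Because the design described in the abstract above is a crossover in which every participant learns from both models, the short-term and long-term contrasts can be tested as paired differences. The sketch below assumes a hypothetical wide-format table and column names; it is an illustration of that paired comparison, not the study's analysis script.

```python
import pandas as pd
from scipy import stats

# Hypothetical wide-format table: one row per participant, with VR and physical-model
# scores at the immediate (short-term) and 48-hour (long-term) tests.
df = pd.read_csv("crossover_scores.csv")

for interval in ("short_term", "long_term"):
    vr = df[f"vr_{interval}"]
    physical = df[f"physical_{interval}"]
    t_stat, p_value = stats.ttest_rel(vr, physical)   # paired t-test across participants
    print(f"{interval}: VR mean {vr.mean():.1f}, physical mean {physical.mean():.1f}, "
          f"t = {t_stat:.2f}, p = {p_value:.3f}")
```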