This study showed that drug-coated poly(L-lactide) (PLLA) microneedle arrays can induce rapid and painless local anesthesia. Microneedle arrays were fabricated using a micro-molding technique, and the needle tips were coated with 290.6 ± 45.9 μg of lidocaine, the most widely used local anesthetic worldwide. A dip-coating device was newly designed for the coating step, using an optimized coating formulation. Lidocaine coated on the arrays was released rapidly into PBS within 2 min and remained stable for 3 weeks of storage at 4, 25, and 37°C. Furthermore, the microneedle arrays showed consistent in vitro skin penetration and delivered 200.8 ± 43.9, 224.2 ± 39.3, and 244.1 ± 19.6 μg of lidocaine into the skin at 1, 2, and 5 min after application, with high delivery efficiencies of 69, 77, and 84%, respectively. Compared with the commercially available topical anesthetic EMLA® cream, 22.0-, 13.6-, and 14.0-fold more lidocaine was delivered into the skin at the corresponding time points. In vitro skin permeation of lidocaine was also markedly enhanced by a 2-min application of the lidocaine-coated microneedle arrays. Altogether, these results suggest that the biocompatible lidocaine-coated PLLA microneedle arrays could provide rapid local anesthesia in a painless manner, without the issues associated with topical application or hypodermic injection of local anesthetics.
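The reported delivery efficiencies can be read as the delivered dose divided by the mean coated dose; a minimal sketch of that arithmetic, assuming the percentages were derived from the mean coated amount of 290.6 μg per array:

```python
# Minimal sketch: delivery efficiency as delivered dose / mean coated dose.
# Values are taken from the abstract; treating the mean coated amount as the
# denominator is an assumption about how the percentages were computed.
coated_dose_ug = 290.6                          # mean lidocaine coated per array (μg)
delivered_ug = {1: 200.8, 2: 224.2, 5: 244.1}   # application time (min) -> μg delivered

for minutes, dose in delivered_ug.items():
    efficiency = 100.0 * dose / coated_dose_ug
    print(f"{minutes} min: {efficiency:.0f}% delivery efficiency")
# -> 69%, 77%, 84%, matching the reported figures
```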
To extend the dynamic range of video, it is common practice to capture multiple frames sequentially at different exposures and combine them into each high-dynamic-range frame. However, this approach suffers from ghosting artifacts when scenes contain fast and complex motion. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range, but the interlaced approach has been hindered by jaggy artifacts and sensor noise, raising concerns over image quality. In this paper, we propose a data-driven approach for jointly solving the two problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since interlacing preserves partial detail in the differently exposed rows, we make use of this information to reconstruct extended-dynamic-range details from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in the low- and high-exposure rows, and we additionally apply multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow concurrent capture of higher-dynamic-range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with state-of-the-art high-dynamic-range video methods.
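To make the joint-dictionary deinterlacing idea concrete, here is a minimal, illustrative sparse-coding sketch: a single dictionary is learned over paired patches from observed and missing rows, so that a sparse code computed on the observed half can reconstruct the missing half. The toy data, patch size, and the scikit-learn dictionary learner are assumptions for illustration, not the paper's implementation.

```python
# Illustrative joint-dictionary deinterlacing via sparse coding (toy example).
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Toy training data: paired patches from observed rows and the rows to be
# reconstructed, stacked into joint vectors so one sparse code explains both.
n_patches, patch_len = 500, 16
observed = rng.random((n_patches, patch_len))
missing = 0.8 * observed + 0.1 * rng.random((n_patches, patch_len))  # correlated target
joint = np.hstack([observed, missing])

# Learn one joint dictionary over the concatenated patch pairs.
dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20, random_state=0)
dl.fit(joint)
D = dl.components_                                   # (32, 2 * patch_len)
D_obs, D_mis = D[:, :patch_len], D[:, patch_len:]    # split into the two halves

# At test time: sparse-code a new observed patch against the observed half,
# then reconstruct the missing half with the shared sparse code.
test_obs = rng.random((1, patch_len))
code = sparse_encode(test_obs, D_obs, algorithm="lasso_lars", alpha=0.1)
reconstructed_missing = code @ D_mis
print(reconstructed_missing.shape)                   # (1, 16): estimated missing-row patch
```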
We present a novel, compact, single-shot hyperspectral imaging method. It enables capturing hyperspectral images with a conventional DSLR camera equipped with just an ordinary refractive prism in front of the camera lens. Our computational imaging method reconstructs the full spectral information of a scene from dispersion over edges. Our setup requires no coded aperture mask, no slit, and no collimating optics, which are necessary in traditional hyperspectral imaging systems; it is thus very cost-effective while remaining highly accurate. We tackle two main problems: first, since we do not rely on collimation, the sensor records a projection of the dispersion information that is distorted by perspective; second, the available spectral cues are sparse, present only around object edges. We formulate an image formation model that predicts the perspective projection of dispersion, and a reconstruction method that estimates the full spectral information of a scene from sparse dispersion information. Our results show that our method compares well with other state-of-the-art hyperspectral imaging systems in terms of both spectral accuracy and spatial resolution, while being orders of magnitude cheaper than commercial imaging systems.
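To illustrate the kind of forward model involved, here is a minimal Python sketch of a simplified dispersion image-formation model: each wavelength plane of a hyperspectral cube is shifted by a wavelength-dependent offset and integrated with an assumed camera spectral response to form the RGB observation. The linear per-band shift and the Gaussian response are assumptions for this sketch; the paper's model additionally accounts for the spatially varying perspective distortion of dispersion.

```python
# Simplified prism image-formation sketch: shift each spectral band by a
# wavelength-dependent dispersion offset, then integrate with an assumed
# RGB spectral response. Perspective-dependent distortion is ignored here.
import numpy as np
from scipy.ndimage import shift

H, W, L = 64, 64, 25                       # toy cube: 25 spectral bands
wavelengths = np.linspace(450, 650, L)     # nm
cube = np.random.rand(H, W, L)             # placeholder scene radiance

# Assumed linear dispersion: horizontal shift in pixels per band.
dispersion_px = 0.02 * (wavelengths - wavelengths[0])

# Assumed Gaussian RGB spectral response (stand-in for a calibrated response).
centers = np.array([600.0, 540.0, 470.0])  # R, G, B peak wavelengths
response = np.exp(-0.5 * ((wavelengths[None, :] - centers[:, None]) / 30.0) ** 2)

rgb = np.zeros((H, W, 3))
for i in range(L):
    shifted = shift(cube[:, :, i], (0.0, dispersion_px[i]), order=1)  # disperse band i
    rgb += shifted[:, :, None] * response[:, i][None, None, :]        # integrate over λ

print(rgb.shape)  # (64, 64, 3): dispersed RGB observation to be inverted for the cube
```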