We use principal component analysis (PCA) to reduce the dimensionality of video-frame features for content description. This low-dimensional description makes it practical to use every frame of a video sequence directly in later analysis. The PCA representation circumvents several of the stumbling blocks in current analysis methods and makes new analyses feasible. We demonstrate this with two applications. The first accomplishes high-level scene description without shot detection or key-frame selection. The second uses the time sequences of motion data from every frame to classify sports sequences.
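The following is a minimal sketch of the idea described above: per-frame feature vectors are projected onto a small number of principal components so that every frame of a sequence can be retained as a point in a low-dimensional trajectory. The feature dimensionality, number of components, and use of scikit-learn's PCA are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: low-dimensional per-frame description via PCA (assumed parameters).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.random((500, 4096))        # 500 frames, hypothetical 4096-dim features per frame

pca = PCA(n_components=20)              # low-dimensional description of the sequence
trajectory = pca.fit_transform(frames)  # shape (500, 20): one point per frame

print(trajectory.shape)
print(pca.explained_variance_ratio_[:5])  # how much variance the leading components capture
```

With every frame reduced to a short vector, later stages (scene description, motion-based classification) can operate on the full frame sequence rather than on a handful of selected key frames.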
The continuous drive of the semiconductor industry towards smaller feature sizes requires mask manufacturers to achieve ever tighter tolerances for the most critical dimensions on the mask. Critical dimension (CD) uniformity requires particularly tight control. Equipment manufacturers and process engineers target their development to support these requirements. As numerous publications indicate, however, increasingly sophisticated data correction methods are still employed to compensate for shortcomings in equipment and process, or to account for boundary conditions in some layouts that contribute to process deviations. Among the corrected effects are proximity and linearity effects, fogging and etch effects, and pattern fidelity. Different designs vary by pattern size distribution as well as by pattern density distribution. As the implementation of corrections for optical proximity effects in wafer lithography has shown, breaking up the original polygons in the design layout for selective, environment-aware correction increases data volumes and can affect the quality of the mask writing data. This paper investigates the effect of various correction algorithms deployed specifically for mask process effects on top of wafer-process-related corrections. The impact of MPC flows such as rule-based linearity and proximity correction and density-based long-range effect correction on the metrics for data preparation and mask making is analyzed. Experimental data on file size, shot count, and data quality indicators, including small figure counts, are presented for different correction approaches and a variety of correction parameters.
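As a rough illustration of the density-based long-range correction mentioned above (not the paper's implementation), one common scheme samples pattern density on a coarse grid, blurs it with a long-range kernel to model loading effects such as fogging or etch, and maps the result to a local dose or CD-bias adjustment. The grid size, kernel range, and linear correction model below are assumptions for the sketch.

```python
# Sketch: density-based long-range effect correction (assumed model and parameters).
import numpy as np
from scipy.ndimage import gaussian_filter

grid = np.zeros((256, 256))     # coarse pattern-density map of the mask layout
grid[64:192, 64:192] = 0.5      # hypothetical block of patterns at 50% density

# Model long-range influence (fogging / etch loading) as a wide Gaussian blur.
influence = gaussian_filter(grid, sigma=30)

# Simple linear model: higher long-range loading -> lower local dose.
base_dose = 1.0
sensitivity = 0.2               # assumed coupling between loading and dose
dose_map = base_dose * (1.0 - sensitivity * influence)

print(dose_map.min(), dose_map.max())
```

Because such corrections are applied per region (or per fractured figure), they interact with the metrics the paper tracks: finer correction granularity tends to increase file size, shot count, and the number of small figures.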