Study Design: A biomechanical study.
Purpose: To develop a predictive model for pullout strength.
Overview of Literature: Spine fusion surgeries are performed to correct joint deformities by restricting motion between two or more unstable vertebrae. The pedicle screw provides a corrective force to the unstable spinal segment and arrests motion at the unit being fused. To determine the hold of a screw, surgeons depend on a subjective perioperative feel of insertion torque. The objective of this paper was to develop a machine-learning-based model using foam density, insertion angle, insertion depth, and reinsertion to predict the pullout strength of a pedicle screw.
Methods: To predict the pullout strength of a pedicle screw, an experimental dataset of 48 data points was used as training data to construct models based on different machine learning algorithms. A total of five algorithms were tested in the Weka environment, and performance was evaluated based on the correlation coefficient and error metrics. A sensitivity study of the various parameters, to obtain the best combination for predicting pullout strength, was also performed using the L9 orthogonal array of the Taguchi Design of Experiments.
Results: Random forest performed best, with a correlation coefficient of 0.96, a relative absolute error of 0.28, and a root relative squared error of 0.29. The difference between the experimental and predicted values for the six test cases was not significant (p > 0.05).
Conclusions: This model can be used clinically for understanding pedicle screw pullout failure and for pre-surgical planning by spine surgeons.
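The abstract names random forest as the best-performing regressor over four screw-insertion features. As a minimal sketch of the idea, the following pure-Python code bags one-split regression stumps over bootstrap samples; the training rows, feature ranges, and pullout values are invented for illustration and are not the paper's experimental data.

```python
import random
from statistics import mean

# Hypothetical training rows: (foam density g/cm^3, insertion angle deg,
# insertion depth mm, reinsertion flag) -> pullout strength in N.
# The values are illustrative, NOT the paper's 48 experimental points.
DATA = [
    ((0.16, 0, 20, 0), 310.0), ((0.16, 10, 20, 1), 250.0),
    ((0.32, 0, 30, 0), 720.0), ((0.32, 10, 30, 1), 600.0),
    ((0.48, 0, 40, 0), 1150.0), ((0.48, 10, 40, 1), 980.0),
]

def fit_stump(rows):
    """Fit a one-split regression stump on one randomly chosen feature."""
    f = random.randrange(4)
    values = sorted({x[f] for x, _ in rows})
    if len(values) < 2:                       # degenerate bootstrap sample:
        c = mean(y for _, y in rows)          # fall back to a constant tree
        return (f, float("inf"), c, c)
    best = None
    for t in values[:-1]:
        left = [y for x, y in rows if x[f] <= t]
        right = [y for x, y in rows if x[f] > t]
        # Sum of squared errors around each side's mean.
        sse = (sum((y - mean(left)) ** 2 for y in left)
               + sum((y - mean(right)) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, f, t, mean(left), mean(right))
    return best[1:]

def fit_forest(rows, n_trees=50):
    """Bag stumps over bootstrap resamples of the training rows."""
    return [fit_stump([random.choice(rows) for _ in rows])
            for _ in range(n_trees)]

def predict(forest, x):
    """Average the per-tree predictions, as a random forest does."""
    return mean(lv if x[f] <= t else rv for f, t, lv, rv in forest)

random.seed(0)
forest = fit_forest(DATA)
strength = predict(forest, (0.32, 5, 30, 0))
```

A real study would use a full random forest implementation (e.g. Weka's, as the paper did) with deeper trees and per-node feature sampling; the stump ensemble above only shows the bootstrap-and-average structure.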
Weather conditions change continuously, and the entire world suffers from the changing climate and its side effects; patterns in changing weather conditions therefore need to be observed. With this aim, the proposed work investigates weather condition patterns and a forecasting model for them. Data mining techniques enable us to analyse data and extract valuable patterns from it. Therefore, to understand the fluctuating patterns of weather conditions, a data-mining-based predictive model is reported in this work. The proposed data model analyses historical weather data and identifies the significant patterns in the data. These patterns, identified from the historical data, make it possible to approximate upcoming weather conditions and their outcomes. To design and develop such an accurate data model, a number of techniques were reviewed and the most promising approaches collected. The proposed data model thus incorporates a Hidden Markov Model for prediction, and K-means clustering is used to extract the weather condition observations. To predict new or upcoming conditions, the system needs to accept the current weather scenario as input. The proposed technique is implemented in JAVA. Additionally, to justify the proposed model, a comparative study with the traditional ID3 algorithm is performed. To compare the two techniques, accuracy, error rate, and time and space complexity are estimated as performance parameters. According to the obtained results, the performance of the proposed technique is enhanced compared with the available ID3-based technique.
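The pipeline above clusters historical observations into discrete weather states and then predicts the next state. The sketch below shows that two-stage idea in pure Python: K-means turns (temperature, humidity) readings into state labels, and consecutive labels feed a first-order transition table. Treating the clustered states as directly observed is a simplification of the abstract's Hidden Markov Model, and the daily readings are invented for illustration.

```python
import random
from statistics import mean

# Hypothetical daily readings (temperature C, humidity %); illustrative only.
DAYS = [(31, 40), (33, 35), (22, 80), (21, 85), (30, 45),
        (32, 38), (20, 90), (23, 78), (31, 42), (22, 82)]

def dist2(a, b):
    """Squared Euclidean distance between two observations."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def assign(p, centers):
    """Index of the nearest cluster center (the weather 'state')."""
    return min(range(len(centers)), key=lambda i: dist2(p, centers[i]))

def kmeans(points, k=2, iters=20):
    """Cluster observations into k weather states via Lloyd's algorithm."""
    random.seed(1)
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[assign(p, centers)].append(p)
        centers = [tuple(mean(c) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

centers = kmeans(DAYS)
states = [assign(p, centers) for p in DAYS]

# Count first-order transitions between consecutive days' states,
# with Laplace smoothing so unseen transitions keep nonzero mass.
k = len(centers)
counts = [[1] * k for _ in range(k)]
for a, b in zip(states, states[1:]):
    counts[a][b] += 1

def next_state(cur):
    """Most likely successor state given the current one."""
    return max(range(k), key=lambda j: counts[cur][j])

forecast = next_state(states[-1])
```

A full HMM would additionally learn emission probabilities and infer the hidden state sequence (e.g. with Viterbi); the Markov chain over cluster labels is the minimal observable version of the same prediction step.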
With the advancement of technology, computational algorithms and computer-based data analysis are used to make large-scale and effective decisions, and a significant role in human life is observed. The main aim of developing such technology is to provide ease in various domains and to support future planning for managing risk. Among various issues, disasters are a critical risk in today's scenario in India. Thus, risk management and disaster management techniques are developed to keep track of losses in a controlled manner. In this work, a new model using data mining techniques for predicting disasters and their locations is proposed. Various data mining techniques and methods are included to develop an accurate and effective data model. The proposed work makes three main contributions to the development of the prediction technique. First, a preprocessing technique is developed by which unstructured data is processed and filtered to transform the information into a structured data format; in this phase, a Bayes classifier is used. Second, a learning technique is developed for accurate pattern learning of disasters and their locations; in this phase, K-means clustering and a Hidden Markov Model are employed to perform the training. Finally, prediction and performance evaluation: in this phase, the trained model accepts current scenarios and predicts the next event. The proposed technique is implemented using JAVA technology, and the Google search API is used for dataset generation. After implementation, the performance of the system in terms of accuracy, error rate, time complexity, and space complexity is evaluated. The experimental results demonstrate effective and accurate learning by the system. Thus, the proposed data model is adaptive and acceptable for various real-world data analysis and decision-making tasks.
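The first contribution above uses a Bayes classifier to turn unstructured text into structured records. As a minimal sketch of that preprocessing step, the following pure-Python multinomial Naive Bayes with Laplace smoothing labels a short report with a disaster type; the training snippets and labels are invented for illustration, not drawn from the paper's Google-search dataset.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled snippets (text -> disaster type); illustrative only.
TRAIN = [
    ("heavy rain floods river banks", "flood"),
    ("river overflow submerges villages", "flood"),
    ("tremors shake buildings in the city", "earthquake"),
    ("strong quake damages old structures", "earthquake"),
]

# Per-class word frequencies, class priors, and the shared vocabulary.
word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in TRAIN:
    class_counts[label] += 1
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""
    best, best_lp = None, float("-inf")
    total_docs = sum(class_counts.values())
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)   # log prior
        n = sum(word_counts[label].values())
        for w in text.split():
            # Smoothed log likelihood of each word under this class.
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(classify("rain causes river flood"))  # -> flood
```

The classified label (together with an extracted place name) would then form the structured record that the K-means/HMM learning phase consumes.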
Nowadays, use of the internet has increased tremendously, so providing information relevant to a user at a particular time is a very important task. Periodic web personalization is the process of recommending the most relevant information to users at the right time. In this paper we propose an improved personalized web recommender model which considers not only user-specific activities but also other factors related to websites, such as the total number of visitors, the number of unique visitors, the number of users downloading data, the amount of data downloaded, the amount of data uploaded, and the number of advertisements for a particular URL, to provide better results. This model considers a user's web access activities to extract usage behaviour and build a knowledge base; the knowledge base, along with the previously specified factors, is then used to predict user-specific content. This advance computation of resources will help users access the required information more efficiently and effectively.
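One plausible way to combine the per-URL site factors listed above with a user's own usage behaviour is a normalized weighted score. The sketch below is an assumption-laden illustration: the metric values, the weights, and the 60/40 blend between user affinity and site signals are all invented, not taken from the paper's model.

```python
# Hypothetical per-URL metrics covering the factors the abstract lists;
# values and weights are illustrative assumptions only.
SITE_STATS = {
    "example.com/a": {"visits": 900, "unique": 600, "downloads": 120,
                      "down_mb": 50, "up_mb": 5, "ads": 30},
    "example.com/b": {"visits": 400, "unique": 350, "downloads": 300,
                      "down_mb": 200, "up_mb": 40, "ads": 2},
}
# Per-user affinity mined from the user's web access activities (knowledge base).
USER_AFFINITY = {"example.com/a": 0.2, "example.com/b": 0.9}

# Ads are penalized; the other site factors contribute positively.
WEIGHTS = {"visits": 0.1, "unique": 0.2, "downloads": 0.2,
           "down_mb": 0.1, "up_mb": 0.1, "ads": -0.1}

def score(url):
    """Blend user behaviour with min-max-normalized site factors."""
    stats = SITE_STATS[url]
    max_vals = {k: max(s[k] for s in SITE_STATS.values()) or 1
                for k in stats}
    site = sum(WEIGHTS[k] * stats[k] / max_vals[k] for k in stats)
    return 0.6 * USER_AFFINITY[url] + 0.4 * site

ranked = sorted(SITE_STATS, key=score, reverse=True)
print(ranked[0])  # -> example.com/b
```

Here the download-heavy, low-ad URL that the user already favours ranks first; tuning the weights shifts the balance between popularity signals and personal history.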