Seizures are a disorder that can be caused by structural brain lesions, life-threatening metabolic derangements, or drug toxicity. The present study describes the proconvulsant behavior induced by thiocolchicoside (TCC) in rats, characterizes the associated electrocorticographic patterns, and evaluates the effectiveness of classic antiepileptic drugs in controlling these seizures. Forty-nine adult male Wistar rats were divided into two experimental phases: 1) evaluation of the seizure-related behavior and electrocorticographic patterns induced by TCC and 2) evaluation of the efficacy of classical antiepileptic drugs in controlling the proconvulsant activity caused by TCC. Our results showed that TCC induced tonic-clonic seizures accompanied by electrocorticographic changes characteristic of convulsive activity, with an average amplitude greater than that induced by pentylenetetrazole. Treatment with anticonvulsants, especially diazepam, reduced the electrocorticographic discharges induced by TCC. The results suggest that TCC causes seizures with increased power in brain oscillations up to 40 Hz and that diazepam may partially reverse these effects.
Classification experiments in machine learning (ML) comprise two essential components: the data and the algorithm. Because both are fundamental to the problem, both must be considered when evaluating a model's performance against a benchmark. The best classifiers need robust benchmarks to be properly evaluated; for this, gold-standard benchmarks such as OpenML-CC18 are used. However, data complexity is commonly not considered alongside the model during performance evaluation. Recent studies employ Item Response Theory (IRT) as a new approach for evaluating datasets and algorithms, capable of assessing both simultaneously. This work presents a new evaluation methodology based on IRT and Glicko-2, together with the decodIRT tool, developed to guide the estimation of IRT models in ML. It explores IRT as a tool to evaluate the OpenML-CC18 benchmark's capacity to assess algorithms and checks whether a subset of its datasets is more efficient than the original benchmark. Several classifiers, from classic to ensemble methods, are also evaluated using the IRT models. The Glicko-2 rating system was applied together with IRT to summarize the innate ability and performance of the classifiers. We found that not all OpenML-CC18 datasets are truly useful for evaluating algorithms: only 10% were rated as really difficult. Furthermore, we verified the existence of a more efficient subset containing only 50% of the original datasets, and Random Forest was singled out as the algorithm with the best innate ability.
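The abstract does not specify which IRT model decodIRT estimates, so as an illustrative sketch only, the widely used three-parameter logistic (3PL) IRT model shows the kind of quantity IRT provides: the probability that a respondent (here, a classifier) with ability θ answers an item (here, a dataset instance) correctly, given the item's discrimination, difficulty, and guessing parameters. The function name `irt_3pl` is hypothetical; the formula itself is the standard 3PL item characteristic function.

```python
import math

def irt_3pl(theta: float, a: float, b: float, c: float) -> float:
    """Standard 3PL IRT item characteristic function.

    theta -- respondent ability (e.g., a classifier's latent skill)
    a     -- item discrimination (how sharply P rises with ability)
    b     -- item difficulty (ability at which the logistic term is 0.5)
    c     -- pseudo-guessing parameter (lower asymptote of P)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty (theta == b), the logistic term is 0.5,
# so the probability of a correct response is c + (1 - c) / 2.
print(irt_3pl(0.0, 1.0, 0.0, 0.25))  # → 0.625
```

Under this reading, a dataset whose items have high difficulty `b` separates strong classifiers from weak ones, which is the property the benchmark analysis above is probing.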