2014
DOI: 10.3389/fnbeh.2014.00028

Parameter optimization for automated behavior assessment: plug-and-play or trial-and-error?

Abstract: Behavioral neuroscience relies increasingly on automated behavior assessment, which is often more time-efficient and objective than manual scoring by a human observer. However, parameter adjustment and calibration constitute a trial-and-error process that requires careful fine-tuning to obtain reliable software scores in each context configuration. In this paper, we will pinpoint some caveats regarding the choice of parameters, and give an overview of our own and other researchers' experience with widel…

Citations: Cited by 20 publications (22 citation statements)
References: 24 publications
“…In addition, counterbalancing of contexts A and B was not possible because of unequal acquisition when using different grid floors.39 The use of Long-Evans rats, which may perform better on discrimination tasks,42,43 instead of Wistar rats, did not improve our results. Finally, we succeeded in fine-tuning the contextual characteristics and training parameters, thereby developing a robust contextual generalization gradient (Experiments 1 and 2).…”
Section: Results (mentioning)
confidence: 55%
“…Freezing during test was measured manually by a trained observer (continuous measurement with a stopwatch from video recordings), as previous findings indicated that comparison of software-scored freezing in different contexts was not reliable.39 Percentage freezing was calculated as the percentage of time the rat was freezing during the 8-min test on Day 2. Data are from one observer in Experiment 1, and the average of two observers in all other studies.…”
Section: Methods (mentioning)
confidence: 99%
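The percentage-freezing measure quoted above is plain arithmetic, and a minimal sketch of it is given below. All names (TEST_DURATION_S, percent_freezing, averaged_percent_freezing) are hypothetical, and the code is an illustration of the described calculation, not the cited studies' actual analysis pipeline.

```python
# Minimal sketch of the percentage-freezing calculation described above.
# Assumptions: stopwatch-scored freezing durations (in seconds) from one or
# two observers, and a fixed 8-min (480-s) test window, as in the quote.

TEST_DURATION_S = 8 * 60  # 8-min test on Day 2, in seconds


def percent_freezing(freezing_s: float, total_s: float = TEST_DURATION_S) -> float:
    """Freezing time as a percentage of the total observation window."""
    return 100.0 * freezing_s / total_s


def averaged_percent_freezing(observer_scores_s: list[float],
                              total_s: float = TEST_DURATION_S) -> float:
    """Average the per-observer percentages (two observers in most studies)."""
    percents = [percent_freezing(s, total_s) for s in observer_scores_s]
    return sum(percents) / len(percents)


# Example: two observers scored 192 s and 208 s of freezing in the 480-s test.
print(averaged_percent_freezing([192.0, 208.0]))  # -> 41.67 (approximately)
```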
“…Freezing behavior was scored manually with a stopwatch by two observers blind to the experimental condition (Luyten et al. 2014). Freezing during each CS (i.e., the average of both raters) was expressed as percentage of the total CS duration (20 s).…”
Section: Methods (mentioning)
confidence: 99%
“…An advantage of utilizing fear extinction to address these questions is that it provides a continuous readout of learning via a mouse's freezing, allowing us to examine the precise temporal relationship between DA and the expression of learned behavior. However, one limitation of freezing as a readout of learning is that it traditionally requires hand scoring when mice are tethered to neural headgear, as existing software confuses tether movement with mouse movement (Luyten et al. 2014; Shoji et al. 2014). The need for human labeling has often restricted the analysis of freezing to specific epochs, such as the presentation of the conditioned stimuli.…”
Section: Introduction (mentioning)
confidence: 99%