2016 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS)
DOI: 10.1109/marss.2016.7561703
Automated detection of live cells and microspheres in low contrast bright field microscopy

Cited by 4 publications (2 citation statements)
References 22 publications
“…Random Sample Consensus (RANSAC) was used in combination with data filtering and an optimal convergence process in order to detect and track micro-grippers and micro-objects in micro-assembly tasks [19]. Finally, Bollavaram et al. presented an accurate and robust method to detect the 2D positions and orientations of micro-scale objects in low contrast bright field microscopy [20]. The main reason for estimating the 2D pixel positions of miniaturized agents is related to significant technical difficulties in tracking the agents in 3D.…”
Section: Introduction
confidence: 99%
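The RANSAC-based detection mentioned in the statement above can be illustrated with a minimal sketch: repeatedly fit a circle to three random edge points and keep the model with the most inliers, which is robust to the outlier points that plague low-contrast images. This is a generic RANSAC circle estimator, not the specific pipeline of [19]; the iteration count and inlier tolerance are illustrative.

```python
# Hedged sketch: RANSAC circle fitting on noisy 2D points, illustrating the
# kind of robust estimator cited for micro-object detection. All thresholds
# are illustrative, not taken from the cited work.
import numpy as np

def fit_circle(p1, p2, p3):
    """Circle through three points via the circumcenter (closed form)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # degenerate: points are (nearly) collinear
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

def ransac_circle(points, iters=200, tol=1.0, seed=0):
    """Return the (cx, cy, r) model with the most inliers, and its count."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        model = fit_circle(points[i], points[j], points[k])
        if model is None:
            continue
        cx, cy, r = model
        dist = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best, best_inliers

# Synthetic edge points on a circle of radius 10 centred at (5, 5), plus
# 15 uniform outliers standing in for clutter in a low-contrast image.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.c_[5 + 10 * np.cos(theta), 5 + 10 * np.sin(theta)]
pts = np.vstack([pts, np.random.default_rng(1).uniform(-20, 30, size=(15, 2))])
(cx, cy, r), n_in = ransac_circle(pts)
```

In practice one would refit the circle on the final inlier set (least squares) for a tighter estimate; the consensus step above is what gives the method its robustness to clutter.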
“…Moreover, some of the methods are not suitable for real-time implementation [37] or only work for specific cell shapes [38]. We have developed a method to address many of these limitations and perceive objects of varying types and shapes from a series of time-lapse images taken at the same cross-sectional plane [39]. Our method is based on a novel combination of many well-known image processing techniques.…”
Section: Introduction
confidence: 99%
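The statement above describes combining well-known image processing techniques to perceive objects of varying shapes in time-lapse frames. A minimal sketch of that idea, assuming nothing about the actual pipeline of [39], is to threshold deviations from the background intensity and then label connected components to get object centroids; all parameters here are illustrative.

```python
# Hedged sketch (not the cited authors' method): threshold-and-label object
# detection on a synthetic low-contrast frame, using only NumPy.
import numpy as np

def label_objects(mask):
    """4-connected component labeling via iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue  # already assigned to a component
        current += 1
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = current
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def detect(frame, k=2.0, min_area=5):
    """Threshold at k standard deviations from the mean, return centroids."""
    mask = np.abs(frame - frame.mean()) > k * frame.std()
    labels, n = label_objects(mask)
    centroids = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if len(ys) >= min_area:  # reject speckle noise
            centroids.append((ys.mean(), xs.mean()))
    return centroids

# Synthetic low-contrast frame: flat background with two faint blobs.
frame = np.full((40, 40), 100.0)
frame[8:12, 8:12] += 5.0
frame[25:30, 20:25] += 5.0
cents = detect(frame)
```

A real bright-field pipeline would add steps such as background correction and shape filtering, but the structure — a chain of simple, well-understood operations — is the point the citing authors make.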