Histological image analysis plays a key role in understanding the effects of disease and treatment responses at the cellular level. However, evaluating histology images by hand is time-consuming and subjective. While semi-automatic and automatic segmentation approaches give acceptable results in some branches of histological image analysis, this has not yet been the case for skeletal muscle histology images. We introduce Charisma, a new top-down cell segmentation framework for histology images that combines image processing techniques, a supervised classifier, and a novel robust clump-splitting algorithm. We evaluate our framework on real-world data from intensive care unit patients. Considering both segmentation quality and cell property distributions, the results obtained by our method correspond well to the ground truth and outperform the other methods examined.
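To make the shape of such a pipeline concrete, here is a minimal Python sketch of generic top-down cell segmentation, with a distance-transform watershed standing in for clump splitting. This is not the Charisma algorithm; the function name `segment_cells`, the Otsu threshold, and the `min_distance` value are illustrative assumptions.

```python
# A minimal sketch of top-down cell segmentation with a distance-transform
# watershed standing in for clump splitting. This is NOT the Charisma
# algorithm; the Otsu threshold and peak spacing are illustrative choices.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(gray):
    """Label cell regions in a 2-D grayscale histology image."""
    mask = gray > threshold_otsu(gray)        # foreground/background split
    dist = ndi.distance_transform_edt(mask)   # peaks approximate cell centers
    coords = peak_local_max(dist, min_distance=10,
                            labels=ndi.label(mask)[0])
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Watershed from the markers splits touching cells along ridges
    # of the inverted distance map.
    return watershed(-dist, markers, mask=mask)
```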
While any grasp must satisfy the grasping stability criteria, good grasps depend on the specific manipulation scenario: the object, its properties and functionalities, as well as the task and grasp constraints. We propose a probabilistic logic approach for robot grasping that improves grasping capabilities by leveraging semantic object parts. It provides the robot with semantic reasoning skills about the most likely object part to be grasped, given the task constraints and object properties, while also dealing with the uncertainty of visual perception and grasp planning. The probabilistic logic framework is task-dependent: it reasons semantically about pre-grasp configurations with respect to the intended task and employs object-task affordances and object/task ontologies to encode rules that generalize over similar object parts and object/task categories. The use of probabilistic logic for task-dependent grasping contrasts with current approaches, which usually learn direct mappings from visual perceptions to task-dependent grasping points. The logic-based module receives data from a low-level module that extracts semantic object parts, and sends information to the low-level grasp planner. These three modules define our probabilistic logic framework, which is able to perform robotic grasping in realistic kitchen-related scenarios.
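As a hedged illustration of this kind of reasoning, the toy model below scores object parts for a task using the ProbLog Python API. The facts, probabilities, and predicate names (`part`, `affords`, `grasp`) are invented for illustration and are not the paper's actual knowledge base.

```python
# A toy task-dependent grasping model in ProbLog (assumed available via
# `pip install problog`). Probabilistic facts encode uncertain part
# detections; a rule links parts to tasks through affordances.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
% Uncertain part detections from the vision module (illustrative values).
0.8::part(mug1, handle).
0.6::part(mug1, body).

% Object-task affordances: which part suits which task.
affords(handle, pour).
affords(body, handoff).

% Grasp a part of the object if it affords the current task.
grasp(Obj, Part) :- task(T), part(Obj, Part), affords(Part, T).

task(pour).
query(grasp(mug1, P)).
""")

# Maps each ground query atom to its success probability,
# e.g. {grasp(mug1,handle): 0.8}.
print(get_evaluatable().create_from(model).evaluate())
```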
While relational representations were popular in early work on syntactic and structural pattern recognition, they are rarely used in contemporary approaches to computer vision due to their purely symbolic nature. The recent progress and successes in combining statistical learning principles with relational representations motivate us to reinvestigate the use of such representations. More specifically, we show that statistical relational learning can be successfully used for hierarchical image understanding. We employ kLog, a new logical and relational language for learning with kernels, to detect objects at different levels of the hierarchy. The key advantage of kLog is that both appearance features and rich contextual dependencies between parts in a scene can be integrated in a principled and interpretable way to obtain a qualitative representation of the problem. At each layer, qualitative spatial structures of parts in images are detected, classified, and then employed one layer up the hierarchy to obtain higher-level semantic structures. We apply a four-layer hierarchy to street view images and successfully detect corners, windows, doors, and individual houses.
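The layer-up pattern described here can be sketched as follows. This is a plain-Python placeholder rather than kLog itself; the `Part` type, the `spatial_relations` helper, and the two relations (`left_of`, `above`) are illustrative assumptions.

```python
# A hedged sketch of the layer-by-layer pipeline: detections at layer k
# become relational input for layer k+1. The classifiers themselves are
# placeholders supplied by the caller, not kLog models.
from dataclasses import dataclass

@dataclass
class Part:
    label: str    # e.g. "corner", "window", "door", "house"
    bbox: tuple   # (x0, y0, x1, y1) in image coordinates, y growing downward

def spatial_relations(parts):
    """Derive simple qualitative relations between detected parts."""
    rels = []
    for a in parts:
        for b in parts:
            if a is b:
                continue
            if a.bbox[2] <= b.bbox[0]:   # a's right edge left of b's left edge
                rels.append(("left_of", a, b))
            if a.bbox[3] <= b.bbox[1]:   # a's bottom edge above b's top edge
                rels.append(("above", a, b))
    return rels

def run_hierarchy(primitives, layer_classifiers):
    """Each layer turns lower-level parts plus their qualitative spatial
    structure into higher-level parts (corners -> windows/doors -> houses)."""
    parts = primitives
    for classify in layer_classifiers:
        parts = classify(parts, spatial_relations(parts))
    return parts
```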
Object grasping is a key task in robot manipulation. Performing a grasp largely depends on the object properties and grasp constraints. This paper proposes a new statistical relational learning approach to recognizing graspable points in object point clouds. We characterize each point with numerical shape features and represent each cloud as a (hyper-)graph by considering qualitative spatial relations between neighboring points. Further, we use kernels on graphs to exploit extended contextual shape information and to compute discriminative features that improve upon local shape features. Our work on robot grasping highlights the importance of integrating relational representations with low-level descriptors for robot vision. We evaluate our relational kernel-based approach on a realistic dataset of 8 objects.
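One standard way to realize "kernels on graphs" is a Weisfeiler-Lehman subtree kernel over k-nearest-neighbor graphs of the cloud. The sketch below shows that idea under assumed choices (`k=8`, three refinement iterations); it is not necessarily the kernel the paper uses.

```python
# A minimal Weisfeiler-Lehman subtree kernel over k-NN graphs of point
# clouds, sketching the "kernels on graphs over local shape features" idea.
from collections import Counter
from scipy.spatial import cKDTree

def knn_graph(points, k=8):
    """Adjacency list connecting each point to its k nearest neighbors."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)   # first neighbor is the point itself
    return [list(row[1:]) for row in idx]

def wl_histogram(labels, adj, iters=3):
    """Iteratively relabel nodes by their neighborhood; count all labels."""
    hist = Counter(labels)
    for _ in range(iters):
        labels = [hash((labels[i], tuple(sorted(labels[j] for j in adj[i]))))
                  for i in range(len(labels))]
        hist.update(labels)
    return hist

def wl_kernel(h1, h2):
    """Kernel value = dot product of the two label histograms."""
    return sum(count * h2[label] for label, count in h1.items())
```

Initial node labels would come from discretizing the local shape features; the resulting kernel values could then feed a standard kernel classifier such as an SVM.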
Understanding images in terms of logical and hierarchical structures is crucial for many semantic tasks, including image retrieval, scene understanding and robotic vision. This paper combines robust feature extraction, qualitative spatial relations, relational instance-based learning and compositional hierarchies in one framework. For each layer in the hierarchy, qualitative spatial structures in images are detected, classified and then employed one layer up the hierarchy to obtain higher-level semantic structures. We apply a four-layer hierarchy to street view images and subsequently detect corners, windows, doors, and individual houses.
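A minimal sketch of the relational instance-based step, assuming a stored memory of labeled relation sets and a Jaccard similarity (both illustrative assumptions, not the paper's actual measure):

```python
# A toy sketch of relational instance-based learning: a candidate group of
# parts is labeled by its nearest stored example under a set-overlap
# similarity on its qualitative spatial relations.
def jaccard(a, b):
    """Set-overlap similarity between two sets of relation tuples."""
    return len(a & b) / len(a | b) if a | b else 1.0

def classify_structure(candidate_relations, memory):
    """memory: list of (relation_set, label) pairs from training images."""
    _, best_label = max(
        memory, key=lambda example: jaccard(candidate_relations, example[0]))
    return best_label

# Example: two stacked corners are closest to the stored 'window' example.
memory = [({("above", "corner", "corner"), ("left_of", "corner", "corner")},
           "window"),
          ({("above", "window", "door")}, "house")]
print(classify_structure({("above", "corner", "corner")}, memory))  # window
```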