Light detection and ranging (LiDAR) sensors have proven to be a valuable tool for gathering spatial information about the environment and are a crucial component in the perception of autonomous systems. In the agricultural domain, state‐of‐the‐art algorithms for detection, classification, and tracking often combine LiDAR and camera data, fusing semantic and spatial information. This is in part due to the availability of fast algorithms specifically optimized for the convenient 2D data structure of RGB images. Relying on cameras in agriculture nevertheless has limitations: they depend heavily on favorable lighting conditions and generally lack explicit geometric information, whereas LiDAR provides exactly that information and offers superior range. This makes LiDAR particularly valuable for perception tasks such as self‐localization, mapping, and object detection. The unstructured nature of 3D LiDAR point clouds, however, coupled with their potentially large size and density, presents a significant hurdle to real‐time capability for these kinds of tasks. Object detection on 3D LiDAR sensor data is therefore challenging, and solutions in agriculture are usually tied to very narrow use cases or rely on rigorous downsampling to ensure real‐time applicability. Here, we present an algorithm that uses 2.5D map representations to avoid these computational drawbacks and ensure real‐time capability. We utilize established algorithms to project the 3D LiDAR data into two distinct maps, apply object detection on each of those maps individually, and subsequently combine the information into a joint estimate. From the information stored in the 2.5D maps, axis‐aligned bounding boxes containing position and dimensions are computed for each object. We present a proof of concept and assess the real‐time capability for a domain‐specific use case.
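As a rough illustration of the general idea (not the specific maps or detection algorithms used in this work), the sketch below shows one common way to project a LiDAR point cloud into a 2.5D maximum-height grid and to derive an axis‐aligned bounding box from the grid cells assigned to a detected object. All function names, grid parameters, and the choice of a max-height projection are illustrative assumptions.

```python
import numpy as np

def project_to_height_map(points, cell_size=0.1, grid_extent=50.0):
    """Project a 3D point cloud (N x 3, x/y/z in metres) into a 2.5D
    maximum-height map on a regular x-y grid (illustrative sketch)."""
    n_cells = int(2 * grid_extent / cell_size)
    height_map = np.full((n_cells, n_cells), -np.inf)

    # Map x/y coordinates to grid indices and discard points outside the extent.
    ix = np.floor((points[:, 0] + grid_extent) / cell_size).astype(int)
    iy = np.floor((points[:, 1] + grid_extent) / cell_size).astype(int)
    valid = (ix >= 0) & (ix < n_cells) & (iy >= 0) & (iy < n_cells)

    # Keep the highest z value falling into each cell.
    np.maximum.at(height_map, (ix[valid], iy[valid]), points[valid, 2])
    return height_map

def aabb_from_cells(cell_indices, height_map, cell_size=0.1, grid_extent=50.0):
    """Compute an axis-aligned bounding box (centre, dimensions) from the
    grid cells belonging to one detected object (illustrative sketch)."""
    rows, cols = cell_indices[:, 0], cell_indices[:, 1]
    x_min = rows.min() * cell_size - grid_extent
    x_max = (rows.max() + 1) * cell_size - grid_extent
    y_min = cols.min() * cell_size - grid_extent
    y_max = (cols.max() + 1) * cell_size - grid_extent
    z_max = height_map[rows, cols].max()  # object height above ground, assuming z = 0 ground plane

    centre = np.array([(x_min + x_max) / 2, (y_min + y_max) / 2, z_max / 2])
    dims = np.array([x_max - x_min, y_max - y_min, z_max])
    return centre, dims
```

A per-cell maximum is only one possible 2.5D representation; the paper combines detections from two distinct map projections into a joint estimate, which the sketch does not attempt to reproduce.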