Lightweight ground robots have increasingly shown their potential to tackle agricultural tasks that require time-consuming human labor and are therefore limited in detail or area coverage. One task that benefits several modern agricultural practices is scouting (walking through the field). However, reliable autonomous navigation remains a challenge. A robot needs to deal with the dynamism of agricultural fields and with unpredictable obstacles, e.g. humans and machines. Corn and sorghum in particular present an additional issue due to the standard way they are cultivated: narrow sub-meter lanes that become even less visible at later growth stages due to dense canopy coverage and spreading leaves. This condition heavily affects the sensors, provoking frequent occlusions, misreadings, and other situations outside their working range. In this context, three questions arise: 1) Can the unexplored potential of a Light Detection and Ranging (LiDAR) sensor suffice to interpret a narrow-lane crop environment without artificial landmarks? 2) Does searching for the best line representation of the crop rows truly capture the problem? 3) How can lateral distance estimation be improved through sensor fusion? To answer these three questions, Perception LiDAR (PL) was developed to estimate the lateral distance to crop rows using a 2D LiDAR as the core sensor. An Extended Kalman Filter enables the use of embedded odometry and inertial measurements. Manual ground truth has shown that finding the best line representation from LiDAR data is challenging, with as few as 54% of line estimates falling within 0.05 m error. Nonetheless, the proposed method has enabled 72 km of autonomous operation by multiple robots. Because the RTK-GNSS signal is unreliable under the canopy, PL outputs significantly outperform GNSS-based positioning, which may not even place the robot in the current lane. Extensive field testing took place in multiple corn and sorghum fields between 2017 and 2020. For continuous lanes, i.e. where at least one of the side rows exists, the success rate for completing the desired segment of autonomous navigation is 89.76%. The higher performance when the LiDAR input contains significantly fewer objects other than stalks, together with the concentration of failures in sensor occlusions and gaps (both situations with few to no visible rows), strongly indicates that the problem is not one of best line fitting, but rather a classification problem whose end goal is to find stalks. In summary, although PL does not provide fully intervention-free navigation within crop rows, its current capabilities relieve operators of the tedious task of manually driving the robot the entire time and pave the way towards a fully autonomous agricultural robot.
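As a rough illustration of the sensor fusion mentioned above, the following Python sketch shows a minimal Extended Kalman Filter that fuses a LiDAR-derived lateral distance with wheel odometry and an IMU yaw rate. The state vector, motion model, and noise parameters here are assumptions chosen for illustration only; the internals of the actual PL pipeline are not specified in this summary.

```python
# Minimal sketch (assumed structure, not the actual PL implementation) of an EKF
# fusing a LiDAR-derived lateral distance with odometry and IMU yaw rate.
import numpy as np

class LateralEKF:
    def __init__(self, d0=0.0, psi0=0.0):
        # State: [d, psi] = lateral offset to the row (m), heading error (rad)
        self.x = np.array([d0, psi0], dtype=float)
        self.P = np.diag([0.05, 0.02])        # initial covariance (assumed)
        self.Q = np.diag([1e-4, 1e-4])        # process noise (assumed)
        self.R_lidar = np.array([[0.02**2]])  # LiDAR lateral-distance noise (assumed)

    def predict(self, v, yaw_rate, dt):
        """Propagate the state with forward speed v (odometry) and yaw rate (IMU)."""
        d, psi = self.x
        # Nonlinear motion model: the lateral offset grows with sin(heading error)
        self.x = np.array([d + v * dt * np.sin(psi),
                           psi + yaw_rate * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, v * dt * np.cos(psi)],
                      [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update_lidar(self, d_measured):
        """Correct the state with a lateral distance extracted from the 2D LiDAR."""
        H = np.array([[1.0, 0.0]])             # the lateral offset is observed directly
        z = np.array([d_measured])
        y = z - H @ self.x                     # innovation
        S = H @ self.P @ H.T + self.R_lidar
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

# Example: predict at each odometry/IMU step, correct whenever a row estimate exists
ekf = LateralEKF(d0=0.10)
ekf.predict(v=0.5, yaw_rate=0.01, dt=0.05)
ekf.update_lidar(d_measured=0.08)
print(ekf.x)  # fused lateral offset and heading error
```

In this simplified form the prediction step is nonlinear while the LiDAR measurement is a direct observation of the lateral offset; richer state vectors or measurement models would follow the same predict/update pattern.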