This paper contributes to research on digital twins generated from robot sensor data. We present the results of an online user study in which 240 participants were tasked with identifying real-world objects from robot point cloud data. In the study we manipulated the render style (point clouds vs. voxels), render resolution (i.e., the density of point clouds and the granularity of voxel grids), colour (monochrome vs. coloured points/voxels), and motion (no motion vs. rotational motion) of the shown objects to measure the impact of these attributes on object recognition performance. A statistical analysis of the study results suggests a three-way interaction between our independent variables. Further analysis suggests that: 1) objects are easier to recognise when rendered as point clouds than as voxels, particularly lower-resolution voxels; 2) the effect of colour and motion depends on how objects are rendered, e.g., the utility of colour decreases with resolution for point clouds; 3) increasing the resolution of point clouds improves object recognition only if points are coloured and static; 4) high-resolution voxels outperform medium- and low-resolution voxels in all conditions, but there is little difference between medium- and low-resolution voxels; 5) motion does not improve performance for voxels at low and medium resolutions, but does improve performance for medium- and low-resolution point clouds. Our results have implications for the design of robot sensor suites and of data gathering and transmission protocols when creating digital twins from robot-gathered point cloud data.