Bandwidth request-grant mechanisms are used in 802.16 networks to manage the uplink bandwidth needs of subscriber stations (SSs). SSs may send requests to the base station (BS) through several mechanisms defined in the standard. Based on the incoming requests, the BS (which handles most of the bandwidth scheduling in the system) schedules the transmission of uplink traffic by assigning transmission opportunities to the SSs in an implementation-dependent manner. In this paper we present a study of bandwidth allocation issues arising from how the base station maintains its perception of the subscriber stations' bandwidth needs. We illustrate how this perception varies depending on the policy used to handle requests and grants. By means of ns-2 simulations, we evaluate the potential impact of such policies on the system's aggregate throughput when the traffic is composed of Best-Effort TCP flows.
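To make the request-grant bookkeeping concrete, the following is a minimal illustrative sketch (not taken from the paper or the 802.16 standard text) of how a BS might track its perception of each SS's backlog, under the assumption that requests are handled either incrementally (added to the current perceived backlog) or as aggregate values (replacing it); the class and method names (`BSPerception`, `on_request`, `on_grant`) are hypothetical.

```python
class BSPerception:
    """Toy model of the BS's perceived per-SS uplink backlog (bytes)."""

    def __init__(self):
        # Bytes the BS currently believes each SS still needs to transmit.
        self.perceived_backlog = {}

    def on_request(self, ss_id, nbytes, incremental=True):
        """Update the perceived backlog when a bandwidth request arrives."""
        current = self.perceived_backlog.get(ss_id, 0)
        if incremental:
            # Incremental request: add to the existing perceived backlog.
            self.perceived_backlog[ss_id] = current + nbytes
        else:
            # Aggregate request: replace the perceived backlog outright.
            self.perceived_backlog[ss_id] = nbytes

    def on_grant(self, ss_id, nbytes):
        """Decrease the perceived backlog when uplink capacity is granted."""
        current = self.perceived_backlog.get(ss_id, 0)
        self.perceived_backlog[ss_id] = max(0, current - nbytes)


# Example: the same request sequence leaves a different perceived backlog
# depending on the policy, which in turn shapes future grant decisions.
bs = BSPerception()
bs.on_request("SS1", 1000, incremental=True)
bs.on_request("SS1", 1000, incremental=True)   # perceived: 2000 bytes
bs.on_grant("SS1", 500)                        # perceived: 1500 bytes
```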