The recent progress of RISC technology has led to the feeling that a significant percentage of image processing applications, which in the past required special purpose computer architectures or "ad hoc" hardware, can now be implemented in software on low cost general purpose platforms. We undertook the study described in this paper to understand the extent to which this feeling corresponds to reality. We selected a set of reference RISC based systems to represent RISC technology, and identified a set of basic image processing tasks to represent the image processing domain. We measured the performance and studied the behaviour of the reference systems in the execution of the basic image processing tasks by running a number of experiments based on different program organizations. The results of these experiments are summarized in a table, which image processing application designers can use to evaluate whether RISC based platforms are able to deliver the computing power required for a specific application.

The study of the reference systems' behaviour led us to draw the following conclusions. First, unless special programming solutions are adopted, image processing programs turn out to be extremely inefficient on RISC based systems, because present generation optimizing compilers are not able to compile image processing programs into efficient machine code. Second, while computer architecture has evolved from the original flat organization towards a more complex organization, based, for example, on memory hierarchy and instruction level parallelism, the programming model upon which high level languages (e.g., C, Pascal) are based has not evolved accordingly; as a consequence, programmers are forced to adopt special programming solutions and tricks to bridge the gap between architecture and programming model and improve efficiency. Third, although processing speed has grown much faster than memory access speed, image processing on current generation single processor RISC systems can still be considered compute-bound; consequently, improvements in processing speed (originating, for example, from a higher degree of parallelism) will yield improvements of an equal factor in applications.
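As an illustration of the kind of "special programming solution" the abstract alludes to, the C sketch below contrasts a naive 3x3 box filter, written with repeated 2D index arithmetic, with a variant that hoists row pointers out of the inner loop and unrolls the 3x3 window. This example is not taken from the paper; the image size, the filter, and the specific transformation are assumptions chosen only to make the architecture/programming-model gap concrete.

```c
/*
 * Hypothetical sketch (not from the paper): a naive 3x3 box filter vs. a
 * pointer-based, unrolled variant that exposes row locality to the compiler.
 */
#include <stdio.h>

#define W 512
#define H 512

/* Naive version: index arithmetic recomputed on every pixel access. */
void box3_naive(unsigned char src[H][W], unsigned char dst[H][W])
{
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++) {
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += src[y + dy][x + dx];
            dst[y][x] = (unsigned char)(sum / 9);
        }
}

/* Restructured version: row pointers hoisted, 3x3 window fully unrolled. */
void box3_ptr(unsigned char src[H][W], unsigned char dst[H][W])
{
    for (int y = 1; y < H - 1; y++) {
        const unsigned char *r0 = src[y - 1];
        const unsigned char *r1 = src[y];
        const unsigned char *r2 = src[y + 1];
        unsigned char *out = dst[y];
        for (int x = 1; x < W - 1; x++) {
            int sum = r0[x - 1] + r0[x] + r0[x + 1]
                    + r1[x - 1] + r1[x] + r1[x + 1]
                    + r2[x - 1] + r2[x] + r2[x + 1];
            out[x] = (unsigned char)(sum / 9);
        }
    }
}

int main(void)
{
    static unsigned char src[H][W], dst[H][W];
    for (int i = 0; i < H * W; i++)
        ((unsigned char *)src)[i] = (unsigned char)(i & 0xFF); /* test pattern */
    box3_ptr(src, dst);
    printf("dst[1][1] = %u\n", dst[1][1]);
    return 0;
}
```

Both functions compute the same result; the second form simply states the row-wise access pattern explicitly, which is the sort of manual bridging between the flat C programming model and a memory-hierarchy-aware architecture that the authors found necessary for efficiency.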
In this paper, we consider the evolution of telephone networks from time-division multiplexing circuit switching to packet switching and, in particular, to packet switching based on the Internet Protocol (IP-supported telephony).
The broad availability of GPRS (general packet radio service) is driving widespread development of mobile telemetry systems for fleet management, supply chain management, and dangerous goods monitoring applications. In this paper we present the results of extensive measurements of GPRS network-layer uplink latency, performed over a four-month period from about fifty road trucks using a telemetry service, providing an uplink latency characterization in a moving vehicle environment. The results show the relationship between vehicle speed and latency. Furthermore, the performance of the stop-and-wait protocol in a moving vehicle environment is evaluated in order to design a variant of that protocol based on a vehicle speed-aware retransmission timeout.
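To make the idea of a speed-aware retransmission timeout concrete, here is a minimal C sketch of how a stop-and-wait sender might scale its timeout with vehicle speed. The linear scaling, the baseline timeout, and the cap are all assumptions made for illustration; the paper's actual mapping from speed to timeout is not reproduced here.

```c
/*
 * Hypothetical sketch of a speed-aware retransmission timeout (RTO) for a
 * stop-and-wait sender over GPRS. Constants are illustrative assumptions.
 */
#include <stdio.h>

#define BASE_RTO_MS  1500.0   /* assumed baseline uplink RTO when stationary */
#define MS_PER_KMH     20.0   /* assumed extra latency margin per km/h       */
#define MAX_RTO_MS   8000.0   /* cap to keep retransmissions timely          */

/* Return the retransmission timeout (ms) for the current vehicle speed. */
static double speed_aware_rto(double speed_kmh)
{
    double rto = BASE_RTO_MS + MS_PER_KMH * speed_kmh;
    return rto > MAX_RTO_MS ? MAX_RTO_MS : rto;
}

int main(void)
{
    /* Example: how the timeout grows between a parked and a moving truck. */
    printf("RTO at   0 km/h: %.0f ms\n", speed_aware_rto(0.0));
    printf("RTO at  90 km/h: %.0f ms\n", speed_aware_rto(90.0));
    return 0;
}
```

The intent of such a scheme, as suggested by the measurements, is to avoid spurious retransmissions when uplink latency increases with vehicle speed, while keeping the timeout tight for stationary or slow-moving vehicles.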