“…Thus, a compliant implementation may choose to use separate threads that perform all the I/O, parse each request to identify its target POA and priority, and hand the request off to the appropriate thread in the POA thread pool, as shown in Figure 10. However, such an implementation can increase average- and worst-case latency and create opportunities for unbounded priority inversion [15]. For instance, even under light load, the server ORB incurs a dynamic memory allocation, multiple synchronization operations, and a context switch to pass each request from a network I/O thread to a POA thread.…”
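To make the per-request costs concrete, the following sketch models the I/O-thread-to-POA-thread hand-off described above as a mutex/condition-variable queue. This is a hypothetical illustration, not TAO's or any ORB's actual code: the `Request`, `RequestQueue`, and `run_demo` names are assumptions. Each hand-off exhibits exactly the three overheads the text names: a dynamic allocation for the queued request, synchronization operations on the shared queue, and a context switch when the waiting POA thread is woken.

```cpp
// Hypothetical sketch of a half-sync/half-async request hand-off
// between a network I/O thread and a POA worker thread.
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// One queued request. Allocating a Request per hand-off is the
// "dynamic memory allocation" the text refers to.
struct Request {
    int target_poa;  // parsed from the request header by the I/O thread
    int priority;    // likewise parsed before the hand-off
};

// Minimal thread-safe queue guarded by a mutex and condition variable:
// the "multiple synchronization operations" incurred per request.
class RequestQueue {
public:
    void push(std::unique_ptr<Request> r) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(r));
        }
        cv_.notify_one();  // wakes the POA thread: a context switch
    }

    // Blocks until a request is available; returns nullptr once
    // shutdown() has been called and the queue has drained.
    std::unique_ptr<Request> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty() || done_; });
        if (q_.empty()) return nullptr;
        auto r = std::move(q_.front());
        q_.pop();
        return r;
    }

    void shutdown() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::unique_ptr<Request>> q_;
    bool done_ = false;
};

// Runs one I/O thread handing n parsed requests to one POA worker
// thread; returns how many requests the worker dispatched.
int run_demo(int n) {
    RequestQueue queue;
    int dispatched = 0;

    std::thread poa_worker([&] {
        while (auto r = queue.pop())
            ++dispatched;  // stand-in for invoking the target servant
    });

    std::thread io_thread([&] {
        for (int i = 0; i < n; ++i)
            queue.push(std::make_unique<Request>(Request{i % 2, i}));
        queue.shutdown();
    });

    io_thread.join();
    poa_worker.join();
    return dispatched;
}
```

Even this minimal version shows why the hand-off is hard to bound: the POA thread's wakeup latency depends on the scheduler, and if a low-priority worker holds the queue lock while a high-priority request arrives, the inversion window the excerpt warns about opens up.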