Many embedded systems now benefit from hardware improvements such as large memories and multiple cores, and applications that demand very high operation rates per second have appeared on these platforms. Such applications often use dynamic memory allocation, but existing allocators do not scale well, so the allocator limits application performance. Moreover, because applications on embedded systems are rarely powered off, external fragmentation is a critical problem. This paper introduces a lock-free, scalable allocator that removes synchronization cost and keeps fragmentation low. Our allocator maintains a per-thread heap and allocates memory of a size close to, rather than exactly equal to, the requested size, reducing synchronization cost and allocation/deallocation time. Results on a test application that runs with 1 to 32 threads demonstrate that our allocator yields low average fragmentation and improves overall program performance over the standard Linux allocator by up to a factor of 60 on 32 threads, and by up to a factor of 10 over the next best allocator we tested.
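As a rough illustration of the per-thread, size-class design described in this abstract, the following C++ sketch keeps one heap per thread and rounds each request up to a nearby size class so the common path needs no locking. The names and parameters are illustrative assumptions, not the paper's implementation, and std::vector is used only to keep the sketch short.

#include <cstddef>
#include <cstdlib>
#include <vector>

namespace sketch {

// Illustrative parameters: power-of-two size classes from 16 B up to 512 KiB.
constexpr std::size_t kMinBlock   = 16;
constexpr std::size_t kNumClasses = 16;
constexpr std::size_t kMaxBlock   = kMinBlock << (kNumClasses - 1);

// Round a request up to the nearest size class (a "close" size, not exact).
inline std::size_t size_class(std::size_t n) {
    std::size_t cls = kMinBlock;
    while (cls < n) cls <<= 1;
    return cls;
}

inline std::size_t class_index(std::size_t cls) {
    std::size_t idx = 0;
    while ((kMinBlock << idx) < cls) ++idx;
    return idx;
}

// One heap per thread: free blocks are cached in per-class lists, so the
// common allocate/deallocate path touches only thread-local state (no locks).
struct ThreadHeap {
    std::vector<void*> free_lists[kNumClasses];

    void* allocate(std::size_t n) {
        if (n > kMaxBlock) return std::malloc(n);          // oversized: bypass classes
        std::size_t idx = class_index(size_class(n));
        if (!free_lists[idx].empty()) {                    // reuse a cached block
            void* p = free_lists[idx].back();
            free_lists[idx].pop_back();
            return p;
        }
        return std::malloc(size_class(n));                 // refill from the backing heap
    }

    void deallocate(void* p, std::size_t n) {
        if (n > kMaxBlock) { std::free(p); return; }
        free_lists[class_index(size_class(n))].push_back(p);  // return to the local list
    }
};

// Each thread gets its own heap instance, so no synchronization is needed.
inline ThreadHeap& local_heap() {
    thread_local ThreadHeap heap;
    return heap;
}

} // namespace sketch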
Recently, with the development of many training systems for the battlefield, demand has grown for interconnecting and internetworking Live, Virtual, and Constructive (LVC) training systems to support efficient data distribution and system control. However, interworking among them is difficult because existing research supports only L-L, V-V, and C-C interoperability. We therefore propose an L-V-C gateway that provides an interoperable simulation environment between them based on HLA and DDS. First, we describe FOM Management, which parses the RPR-FOM XML file to obtain the data to be shared and generates the common data structures and source code used by the L-V-C Gateway. The L-V-C Gateway created by FOM Management supports data conversion and quality of service between HLA and DDS. The HLA Federate and DDS DomainParticipant inside the L-V-C Gateway act as logical communication channels and relay data from the HLA Federation to the DDS Domain and vice versa.
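The C++ sketch below illustrates this relay role only in outline: the adapter types are hypothetical placeholders for the gateway's HLA Federate and DDS DomainParticipant (not the real RTI or DDS APIs), and the EntityState record stands in for a structure generated by FOM Management.

#include <functional>
#include <string>

namespace lvc_sketch {

// Common record standing in for a structure generated from the RPR-FOM.
struct EntityState {
    std::string entity_id;
    double position[3];
    double velocity[3];
};

// Hypothetical adapter for the gateway's HLA Federate.
struct HlaFederateAdapter {
    std::function<void(const EntityState&)> on_reflect;  // invoked on attribute reflection
    void update(const EntityState&) { /* send an attribute update to the federation */ }
};

// Hypothetical adapter for the gateway's DDS DomainParticipant.
struct DdsParticipantAdapter {
    std::function<void(const EntityState&)> on_sample;   // invoked when a sample arrives
    void write(const EntityState&) { /* publish the sample on the DDS topic */ }
};

// The gateway wires both sides together: HLA -> DDS and DDS -> HLA.
struct LvcGateway {
    LvcGateway(HlaFederateAdapter& hla, DdsParticipantAdapter& dds) {
        hla.on_reflect = [&dds](const EntityState& e) { dds.write(e); };  // HLA -> DDS
        dds.on_sample  = [&hla](const EntityState& e) { hla.update(e); }; // DDS -> HLA
    }
};

} // namespace lvc_sketch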
Since just-in-time (JIT) compilation incurs considerable overhead to detect hot spots and compile them at runtime, applying sophisticated optimization techniques on embedded devices yields only limited performance improvements. In this paper, we introduce a novel static Dalvik bytecode optimization framework, as a complementary compilation path for the Dalvik virtual machine, to improve the performance of Android applications. Our system generates optimized Dalvik bytecode using the Low Level Virtual Machine (LLVM). A major obstacle in using LLVM to optimize Dalvik bytecode is determining how to represent the high-level language features of Dalvik bytecode in LLVM IR and how to optimize LLVM IR while conforming to that language information. To this end, we annotate the high-level language features of Dalvik bytecode onto LLVM IR and successfully optimize Dalvik bytecode through the instruction selection process. Our experimental results show that our system combined with JIT improves the performance of Android applications by up to 6.08 times, and surpasses JIT alone by up to 4.34 times.
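One general way to carry Dalvik-level language information through LLVM, in the spirit of the annotation approach described above, is to attach custom metadata to IR instructions. The C++ sketch below assumes hypothetical metadata kind names ("dalvik.type", "dalvik.classes"); it illustrates the general LLVM metadata mechanism, not the paper's actual annotation scheme.

// Illustrative only: attach Dalvik-level type information to LLVM IR as
// custom metadata so later passes can consult it. The metadata kind names
// are hypothetical, not the paper's scheme.
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

int main() {
    llvm::LLVMContext Ctx;
    llvm::Module M("dex_module", Ctx);
    llvm::IRBuilder<> B(Ctx);

    // A small function standing in for a translated Dalvik method.
    llvm::FunctionType *FT =
        llvm::FunctionType::get(B.getInt32Ty(), {B.getInt32Ty()}, false);
    llvm::Function *F = llvm::Function::Create(
        FT, llvm::Function::ExternalLinkage, "LFoo_double", &M);
    llvm::BasicBlock *BB = llvm::BasicBlock::Create(Ctx, "entry", F);
    B.SetInsertPoint(BB);

    // An ordinary IR instruction produced from a translated Dalvik bytecode.
    llvm::Value *Dbl = B.CreateShl(F->getArg(0), B.getInt32(1), "dbl");

    // Record the originating Dalvik class as custom instruction metadata.
    llvm::MDNode *TypeMD =
        llvm::MDNode::get(Ctx, llvm::MDString::get(Ctx, "LFoo;"));
    llvm::cast<llvm::Instruction>(Dbl)->setMetadata("dalvik.type", TypeMD);
    B.CreateRet(Dbl);

    // Module-level note listing translated classes (also hypothetical).
    M.getOrInsertNamedMetadata("dalvik.classes")->addOperand(TypeMD);

    M.print(llvm::outs(), nullptr);  // dump the annotated IR
    return 0;
}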