Many hardware-based defense schemes against speculative execution attacks use special mechanisms to protect instructions while they are speculative, and lift the mechanisms when the instructions turn non-speculative. In this paper, we observe that speculative instructions can sometimes become Speculation Invariant before turning non-speculative. Speculation invariance means that (i) whether the instruction will execute and (ii) the instruction's operands do not depend on speculative state. Hence, we propose to lift the protection mechanisms on these instructions early, as soon as they become speculation invariant, and issue them without protection. As a result, we improve the performance of the defense schemes without changing their security properties. To exploit speculation invariance, we present the InvarSpec framework. InvarSpec includes a program analysis pass that identifies, for each relevant instruction i, the set of older instructions that are Safe for i, i.e., those that do not prevent i from becoming speculation invariant. At runtime, the InvarSpec microarchitecture loads this information and uses it to determine when speculative instructions can be issued without protection. InvarSpec is one of the first defense schemes for speculative execution that combines cooperative compiler and hardware mechanisms. Our evaluation shows that InvarSpec effectively reduces the execution overhead of hardware defense schemes. For example, on SPEC17, it reduces the average execution overhead of fence protection from 195.3% to 108.2%, of Delay-On-Miss from 39.5% to 24.4%, and of InvisiSpec from 15.4% to 10.9%.
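To make the notion of speculation invariance concrete, here is a minimal illustrative sketch (not drawn from the paper; the function and variable names are hypothetical). The second load's operands and control flow do not depend on the value produced by the older load, so an analysis in the spirit of InvarSpec could mark the older load as Safe for it; once every older instruction is either resolved or Safe, the second load is speculation invariant and could issue without protection even while the older load is still outstanding.

```cpp
// Hypothetical example, not taken from the paper.
int sum_two(const int* a, const int* b, int i, int j) {
    int x = a[i];   // older load, possibly still speculative (not yet retired)
    int y = b[j];   // operands (b, j) and control flow do not depend on x,
                    // so the older load is Safe for this one; when all older
                    // instructions are resolved or Safe, this load becomes
                    // speculation invariant and may issue unprotected
    return x + y;
}
```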
Semantic SLAM is an important technology for autonomous driving and intelligent agents: it enables robots to perform high-level navigation tasks, acquire simple cognition or reasoning ability, and support language-based human-robot interaction. In this paper, we build a system that creates a semantic 3D map of large-scale environments by combining the 3D point cloud from ORB-SLAM [1], [2] with semantic segmentation information from the convolutional neural network model PSPNet-101 [3]. In addition, we build a new dataset for the KITTI [4] sequences that contains GPS information and landmark labels from Google Maps for the streets covered by the sequences. Moreover, we present a method to associate real-world landmarks with the point cloud map and build a topological map on top of the semantic map.
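A minimal sketch of how per-pixel segmentation labels can be attached to SLAM map points (assumed data layout and function names; this is not the authors' code): each map point, expressed in the camera frame of a keyframe, is projected into the PSPNet-101 segmentation mask of that keyframe and takes the class id of the pixel it lands on.

```cpp
#include <cstdint>
#include <vector>

// Map point in camera coordinates plus a semantic class id (hypothetical struct).
struct MapPoint { float x, y, z; uint8_t label; };

// fx, fy, cx, cy: pinhole intrinsics of the keyframe camera.
// mask: row-major H x W array of class ids from the segmentation network.
void labelPoints(std::vector<MapPoint>& points,
                 const std::vector<uint8_t>& mask, int W, int H,
                 float fx, float fy, float cx, float cy) {
    for (auto& p : points) {
        if (p.z <= 0.0f) continue;                   // point behind the camera
        int u = static_cast<int>(fx * p.x / p.z + cx);
        int v = static_cast<int>(fy * p.y / p.z + cy);
        if (u < 0 || u >= W || v < 0 || v >= H) continue;
        p.label = mask[v * W + u];                   // fuse the semantic class
    }
}
```

In practice, a point observed in several keyframes would accumulate label votes rather than take a single pixel's class, but the projection step above is the core association.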
Symbolic execution has become an indispensable technique for software testing and program analysis. Several symbolic execution tools are available off the shelf, and a practical benchmark approach is needed to understand their capabilities. This paper therefore introduces a novel approach to benchmark symbolic execution tools in a fine-grained and efficient manner. In particular, our approach evaluates the performance of such tools against the known challenges faced by general symbolic execution techniques, such as floating-point numbers and symbolic memory. To this end, we first survey related papers and systematize the challenges of symbolic execution. We extract 12 distinct challenges from the literature and group them into two categories: symbolic-reasoning challenges and path-explosion challenges. We then develop a dataset of logic bombs and a framework to benchmark symbolic execution tools automatically. For each challenge, our dataset contains several logic bombs, each guarded by a specific challenging problem; if a symbolic execution tool can generate test cases that trigger a logic bomb, it can handle the corresponding problem. We have conducted real-world experiments with three popular symbolic execution tools: KLEE, Angr, and Triton. Experimental results show that our approach can accurately and efficiently reveal their capabilities and limitations in handling particular issues. Benchmarking a tool generally takes only dozens of minutes. We release our dataset as open source on GitHub to facilitate future community work on benchmarking symbolic execution tools.
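As an illustration of the idea (a sketch in the style described, not an item from the released dataset), a logic bomb guards an observable "explosion" behind a condition that exercises one specific challenge; here the guard is a floating-point comparison, so a tool only reaches bomb() if it can reason about floating-point constraints on the symbolic input.

```cpp
#include <cstdlib>

// Hypothetical logic bomb guarded by a floating-point challenge.
static int bomb() { return 1; }          // stands in for the observable "explosion"

int logic_bomb(const char* symvar) {
    float f = std::atof(symvar);         // symbolic input converted to float
    if (f > 0.1f && f < 0.2f) {          // floating-point constraint to solve
        return bomb();
    }
    return 0;
}

int main(int argc, char** argv) {
    return argc > 1 ? logic_bomb(argv[1]) : 0;
}
```

If a tool produces an input such as "0.15" that triggers the bomb, it is credited with handling the floating-point challenge; analogous bombs cover the other challenges, such as symbolic memory.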
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.