Abstract. We study the problem of integer factoring given implicit information of a special kind. The problem is as follows: let N1 = p1q1 and N2 = p2q2 be two RSA moduli of the same bit-size, where q1, q2 are α-bit primes. We are given the implicit information that p1 and p2 share t most significant bits. We present a novel and rigorous lattice-based method that leads to the factorization of N1 and N2 in polynomial time as soon as t ≥ 2α + 3. Subsequently, we heuristically generalize the method to k RSA moduli Ni = piqi where the pi's all share t most significant bits (MSBs), and obtain an improved bound on t that converges to t ≥ α + 3.55… as k tends to infinity. We also study the case where the k factors pi share t contiguous bits in the middle, and find a bound that converges to 2α + 3 as k tends to infinity. This paper extends the work of May and Ritzenhofen [9], where similar results were obtained when the pi's share least significant bits (LSBs). In [15], Sarkar and Maitra describe an alternative but heuristic method for only two RSA moduli, when the pi's share LSBs and/or MSBs, or bits in the middle. In the case of shared MSBs or middle bits and two RSA moduli, they obtain better experimental results in some cases, but we use much smaller lattice dimensions (at least 23 times smaller) and therefore obtain a significant speedup (at least 10^3 times faster). Our results rely on the following surprisingly simple algebraic relation, in which the shared MSBs of p1 and p2 cancel out: q1·N2 − q2·N1 = q1·q2·(p2 − p1). This relation allows us to build a lattice whose shortest vector yields the factorization of the Ni's.
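The identity q1·N2 − q2·N1 = q1·q2·(p2 − p1) can be sanity-checked numerically. The sketch below uses illustrative toy primes (not from the paper) to verify the identity and to show why shared MSBs matter: when p1 ≈ p2, the right-hand side is small, so (q1, q2) corresponds to a short lattice vector. The lattice reduction step itself is omitted.

```python
# Toy check of the key identity from the abstract. The primes here are
# hypothetical small examples chosen for illustration; the real attack
# recovers (q1, q2) via lattice reduction, which is not shown.

p1, p2 = 104723, 104729   # primes agreeing on their most significant bits
q1, q2 = 1009, 1013       # small "alpha-bit" primes (toy scale)
N1, N2 = p1 * q1, p2 * q2

lhs = q1 * N2 - q2 * N1
rhs = q1 * q2 * (p2 - p1)
assert lhs == rhs  # the shared MSBs of p1, p2 cancel out

# Because |p2 - p1| is small, |lhs| is tiny compared with the moduli,
# which is what makes the vector (q1, q2) short in the attack's lattice.
assert abs(lhs) < N1
print(lhs, N1.bit_length(), lhs.bit_length())
```

Note that the identity holds for any p1, p2, q1, q2; the cryptanalytic leverage comes entirely from |p2 − p1| being small when t MSBs are shared.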
The ability to transfer knowledge to novel environments and tasks is a sensible desideratum for general learning agents. Despite its apparent promise, transfer in RL remains an open and little-explored research area. In this paper, we take a new perspective on transfer: we suggest that the ability to assign credit reveals structural invariants in tasks that can be transferred to make RL more sample-efficient. Our main contribution is SECRET, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture. Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process, and it exclusively modifies the reward function. Consequently, it can be combined with transfer methods that do not modify the reward function, and it can be plugged on top of any RL algorithm.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the article provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.