Interval scheduling is a basic problem in the theory of algorithms and a classical task in combinatorial optimization. We develop a set of techniques for partitioning and grouping jobs based on their starting and ending times that enable us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in dynamic and local settings of computation leads to several new results.

For (1 + ε)-approximation of job scheduling of n jobs on a single machine, we obtain a fully dynamic algorithm with O(log n / ε) update and O(log n) query worst-case time. Further, we design a local computation algorithm that uses only O(log n / ε) queries. Our techniques are also applicable in a setting where jobs have rewards/weights. For this case we obtain a fully dynamic algorithm whose worst-case update and query time has only polynomial dependence on 1/ε, which is an exponential improvement over the result of Henzinger et al. [SoCG, 2020].

We extend our approaches for unweighted interval scheduling on a single machine to the setting with M machines, while achieving the same approximation factor and only an M times slower update time in the dynamic setting. In addition, we provide a general framework for reducing the task of interval scheduling on M machines to that of interval scheduling on a single machine; in the unweighted case this approach incurs a multiplicative approximation factor of 2 − 1/M.
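As background for the abstract above, here is a minimal sketch (not code from the paper) of the classical static greedy rule for unweighted interval scheduling on a single machine: sort jobs by finishing time and repeatedly take the first job compatible with the last selection, which yields a maximum set of non-overlapping jobs. The function name and the (start, end) representation are illustrative choices; the paper's dynamic and local-computation algorithms are not reproduced here.

```python
# Illustrative sketch: classical static earliest-finish-time greedy for
# unweighted interval scheduling on a single machine (the baseline problem).
def max_nonoverlapping_jobs(jobs):
    """jobs: iterable of (start, end) pairs with start < end.
    Returns a maximum-cardinality subset of pairwise non-overlapping jobs,
    found by the earliest-finish-time greedy rule. Jobs that merely share
    an endpoint are treated as compatible (an assumption of this sketch)."""
    selected = []
    last_end = float('-inf')
    for start, end in sorted(jobs, key=lambda j: j[1]):  # sort by finishing time
        if start >= last_end:        # job fits after the last selected one
            selected.append((start, end))
            last_end = end
    return selected

# Example: three jobs, two of which can be scheduled together.
print(max_nonoverlapping_jobs([(0, 3), (2, 5), (4, 7)]))  # [(0, 3), (4, 7)]
```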
Entropic causal inference is a framework for inferring the causal direction between two categorical variables from observational data. The central assumption is that the amount of unobserved randomness in the system is not too large. This unobserved randomness is measured by the entropy of the exogenous variable in the underlying structural causal model, which governs the causal relation between the observed variables. [15] conjectured that the causal direction is identifiable when the entropy of the exogenous variable is not too large. In this paper, we prove a variant of their conjecture: we show that for almost all causal models in which the exogenous variable has entropy that does not scale with the number of states of the observed variables, the causal direction is identifiable from observational data. We also consider the minimum entropy coupling-based algorithmic approach presented by [15], and for the first time demonstrate algorithmic identifiability guarantees using a finite number of samples. We conduct extensive experiments to evaluate the robustness of the method to relaxing some of the assumptions in our theory, and demonstrate that both the constant-entropy exogenous variable assumption and the no-latent-confounder assumption can be relaxed in practice. We also empirically characterize the number of observational samples needed for causal identification. Finally, we apply the algorithm to the Tübingen cause-effect pairs dataset.
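The abstract refers to the minimum entropy coupling-based approach of [15]. The sketch below is an assumed reconstruction of that style of test, not code from the paper: for each candidate direction, a greedy heuristic builds a low-entropy exogenous variable from the conditional distributions of the effect given the cause, and the direction with the smaller score is returned. All names, the bit-entropy convention, and the particular score H(cause) + H(E) are assumptions of this sketch.

```python
# Hypothetical sketch of an entropy-based causal direction test using a
# greedy minimum-entropy-coupling heuristic; names and scoring are assumptions.
import numpy as np

def greedy_min_entropy_coupling(conditionals):
    """Greedy upper bound on the entropy of an exogenous variable E that,
    together with the cause, determines the effect.
    conditionals: array of shape (k, n) whose rows are P(effect | cause = x)."""
    P = np.array(conditionals, dtype=float)
    masses = []
    while P.max() > 1e-12:
        row_max = P.max(axis=1)      # largest remaining mass in each row
        e = row_max.min()            # smallest of those maxima
        masses.append(e)
        for i in range(P.shape[0]):  # remove mass e from one maximal entry per row
            P[i, int(P[i].argmax())] -= e
    masses = np.array(masses)
    masses = masses / masses.sum()   # numerical safeguard; masses already sum to ~1
    return float(-(masses * np.log2(masses)).sum())

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def infer_direction(joint):
    """joint: empirical joint pmf of shape (|X|, |Y|) with positive marginals.
    Compares H(cause) + H(E_greedy) in both directions (an assumed score)."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    cond_y_given_x = joint / px[:, None]        # rows: P(Y | X = x)
    cond_x_given_y = (joint / py[None, :]).T    # rows: P(X | Y = y)
    score_xy = entropy(px) + greedy_min_entropy_coupling(cond_y_given_x)
    score_yx = entropy(py) + greedy_min_entropy_coupling(cond_x_given_y)
    return 'X->Y' if score_xy <= score_yx else 'Y->X'

# Usage: estimate a joint pmf from samples, then call infer_direction(joint).
```

In practice one would call infer_direction on a joint table estimated from observational samples; the actual algorithm of [15] and the paper's finite-sample guarantees involve details not reflected in this sketch.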
We analyze the computational complexity of several new variants of edge-matching puzzles. First we analyze inequality (instead of equality) constraints between adjacent tiles, proving the problem NP-complete for strict inequalities but polynomial-time solvable for nonstrict inequalities. Second we analyze three types of triangular edge matching, of which one is polynomial-time solvable and the other two are NP-complete; all three are #P-complete. Third we analyze the case where no target shape is specified and we merely want to place the (square) tiles so that edges match exactly; this problem is NP-complete. Fourth we consider four 2-player games based on 1 × n edge matching, all four of which are PSPACE-complete. Most of our NP-hardness reductions are parsimonious, newly proving #P- and ASP-completeness for, e.g., 1 × n edge matching. Along the way, we prove #P- and ASP-completeness of planar 3-regular directed Hamiltonicity; we provide linear-time algorithms to find antidirected and forbidden-transition Eulerian paths; and we characterize the complexity of new partizan variants of the Geography game on graphs.
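To make the object of study concrete, here is a small illustrative brute-force solver (not from the paper) for 1 × n edge matching with square tiles: each tile is a 4-tuple of edge colors (top, right, bottom, left), rotations are allowed, and adjacent tiles must agree on their shared edge. The tile encoding and the choice to allow rotations but not reflections are assumptions made for this illustration; the exponential search says nothing about the hardness results claimed above.

```python
# Illustrative brute force for 1 x n edge matching with square tiles.
# A tile is (top, right, bottom, left); rotations allowed, reflections not
# (an assumption for this sketch).

def rotations(tile):
    t = list(tile)
    for _ in range(4):
        yield tuple(t)
        t = [t[3]] + t[:3]          # rotate 90 degrees clockwise

def solve_1xn(tiles):
    """Return one row of oriented tiles whose adjacent edges match, or None."""
    def extend(placed, remaining):
        if not remaining:
            return placed
        for i, tile in enumerate(remaining):
            for r in rotations(tile):
                # right edge of the previous tile must equal this tile's left edge
                if not placed or placed[-1][1] == r[3]:
                    result = extend(placed + [r], remaining[:i] + remaining[i + 1:])
                    if result is not None:
                        return result
        return None
    return extend([], list(tiles))

print(solve_1xn([(0, 1, 0, 2), (0, 2, 0, 1), (0, 1, 0, 1)]))
# -> [(0, 1, 0, 2), (0, 1, 0, 1), (0, 2, 0, 1)]
```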