Motivated by applications such as sensitivity analysis and end-to-end learning, the demand for differentiable optimization algorithms has been increasing significantly. In this paper, we establish a theoretically guaranteed, versatile framework that makes the greedy algorithm for monotone submodular function maximization differentiable. We smooth the greedy algorithm via randomization and prove that the smoothed algorithm almost recovers the original approximation guarantees in expectation under cardinality and κ-extensible system constraints. We also show how to efficiently compute unbiased gradient estimators of any expected output-dependent quantities. We demonstrate the usefulness of our framework by instantiating it for various applications.
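
To fix intuition, the following is a minimal, illustrative sketch (not the paper's exact construction) of one way a greedy step can be smoothed via randomization: instead of selecting the element with the largest marginal gain, each step samples an element with probability proportional to a softmax of the marginal gains, so the distribution over outputs varies smoothly with the objective. The objective `coverage`, the temperature `tau`, and the function name `smoothed_greedy` are hypothetical choices for this sketch.

    import numpy as np

    def coverage(S, sets):
        """Monotone submodular objective: number of items covered by the chosen sets."""
        covered = set()
        for i in S:
            covered |= sets[i]
        return len(covered)

    def smoothed_greedy(sets, k, tau=0.5, rng=np.random.default_rng(0)):
        """Randomized greedy under a cardinality constraint of size k.

        Each step samples the next element with probability proportional to
        exp(marginal gain / tau) instead of taking a hard argmax.
        """
        S = []
        remaining = list(range(len(sets)))
        for _ in range(k):
            gains = np.array(
                [coverage(S + [i], sets) - coverage(S, sets) for i in remaining],
                dtype=float,
            )
            probs = np.exp(gains / tau)
            probs /= probs.sum()
            choice = rng.choice(len(remaining), p=probs)
            S.append(remaining.pop(choice))
        return S

    # Usage: four candidate sets over a small ground set, cardinality constraint k = 2.
    sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
    print(smoothed_greedy(sets, k=2))

Because the selection probabilities depend smoothly on the marginal gains, expectations of output-dependent quantities under this distribution admit score-function-style unbiased gradient estimators; the estimators developed in the paper are constructed for its own smoothed algorithm, which this sketch only approximates.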