The optimization of expensive-to-evaluate, black-box, mixed-variable functions, i.e., functions that have both continuous and discrete inputs, is a difficult yet pervasive problem in science and engineering. In Bayesian optimization (BO), special cases of this problem that consider fully continuous or fully discrete domains have been widely studied. However, few methods exist for mixed-variable domains, and none of them can handle the discrete constraints that arise in many real-world applications. In this paper, we introduce MiVaBo, a novel BO algorithm for the efficient optimization of mixed-variable functions that combines a linear surrogate model based on expressive feature representations with Thompson sampling. We propose an effective method to optimize its acquisition function, a challenging problem for mixed-variable domains, making MiVaBo the first BO method that can handle complex constraints over the discrete variables. Moreover, we provide the first convergence analysis of a mixed-variable BO algorithm. Finally, we show that MiVaBo is significantly more sample efficient than state-of-the-art mixed-variable BO algorithms on several hyperparameter tuning tasks, including the tuning of deep generative models.
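To make the algorithmic idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of Thompson sampling with a Bayesian linear surrogate over a fixed feature map of a mixed continuous/discrete input. The toy objective, the quadratic/pairwise feature map, and the brute-force grid search used as the acquisition optimizer are illustrative assumptions; the paper's actual feature representations, acquisition optimization, and discrete-constraint handling are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x_cont, x_disc):
    """Illustrative feature map: quadratic terms for the continuous part,
    pairwise products for the binary part (richer maps are possible)."""
    xc = np.atleast_1d(x_cont)
    xd = np.atleast_1d(x_disc).astype(float)
    cross = np.outer(xd, xd)[np.triu_indices(len(xd), k=1)]
    return np.concatenate(([1.0], xc, xc**2, xd, cross))

def objective(x_cont, x_disc):
    """Toy black-box function with one continuous and three binary inputs."""
    return -(x_cont[0] - 0.3) ** 2 + 0.5 * x_disc[0] - 0.2 * x_disc[1] * x_disc[2]

def posterior(Phi, y, noise=0.1, prior_var=1.0):
    """Posterior mean and covariance of the weights of a Bayesian linear model."""
    d = Phi.shape[1]
    cov = np.linalg.inv(Phi.T @ Phi / noise**2 + np.eye(d) / prior_var)
    mean = cov @ Phi.T @ y / noise**2
    return mean, cov

# Candidate set: a coarse grid here; a real mixed-variable method would use
# dedicated continuous and discrete (e.g. integer-programming) solvers.
cont_grid = np.linspace(0.0, 1.0, 21)
disc_grid = [np.array(b) for b in np.ndindex(2, 2, 2)]

# A few random evaluations to initialize the surrogate.
X, y = [], []
for _ in range(3):
    xc = rng.choice(cont_grid, size=1)
    xd = disc_grid[rng.integers(len(disc_grid))]
    X.append((xc, xd)); y.append(objective(xc, xd))

for _ in range(20):
    Phi = np.stack([features(xc, xd) for xc, xd in X])
    mean, cov = posterior(Phi, np.array(y))
    w = rng.multivariate_normal(mean, cov)  # Thompson sample of the weights
    # Acquisition step: maximize the sampled linear model over all candidates.
    best = max(((xc, xd) for xc in cont_grid for xd in disc_grid),
               key=lambda z: features(np.atleast_1d(z[0]), z[1]) @ w)
    xc_new = np.atleast_1d(best[0])
    X.append((xc_new, best[1])); y.append(objective(xc_new, best[1]))

print("best value found:", max(y))
```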
This article presents a thorough investigation of the five Russian deictic words that correspond to the English meanings 'here' and 'now': zdes', tut, sejčas, teper' and vot. We analyze data from the Russian National Corpus and data from Russian TV news broadcasts. On the basis of the corpus data, we propose a radial category network consisting of nine subcategories, which encompasses all five words, and show that although the deictic words have overlapping distributions, they all have distinct "radial category profiles" in the sense that they display different centers of gravity in the network. We advance the "Minimal Adaptation Hypothesis", according to which a language, when applied to a new setting such as the one created by TV, makes adaptations that are as small as possible.