This paper presents an algorithm for estimating the robustness of the number of hubs p using dedicated simulations in the single allocation hub location problem. The simulation models the service demand trends from each origin node to each destination node. The idea rests on the dependence of the hub network on service demand forecasts, which are modeled by random values drawn from distributions whose parameters reflect the demand changes. The algorithm includes a mixed integer programming model that describes the hub location-allocation problem with single allocation (each node is connected to exactly one hub). The model chooses the optimal locations for a fixed number of hubs p from a fixed set of candidate locations. The perturbed data simulate changes in service demand and represent prospective changes in the network, and the algorithm records these changes. The number of changes in the network is consolidated into variation frequencies that describe the variability across the set of simulations. The algorithm is implemented in Python 3.5 and the model is optimized with Gurobi Optimizer 7.0.1. The results on a real dataset are illustrated and discussed. Refs 18. Fig. 1. Tables 3.
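To make the model concrete, the following is a minimal gurobipy sketch of a linearized single-allocation p-hub location MIP of the kind the abstract describes; the synthetic data, the inter-hub discount factor alpha and the particular (classical) linearization are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of a single-allocation p-hub median MIP in gurobipy.
# Synthetic data and the classical 4-index linearization are assumptions for illustration.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(0)
n, p, alpha = 6, 2, 0.75                      # nodes, number of hubs, inter-hub discount
w = rng.integers(1, 20, (n, n)).astype(float) # O-D demand (perturbed per simulation run)
np.fill_diagonal(w, 0.0)
pts = rng.random((n, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)  # Euclidean distances

m = gp.Model("single_allocation_hub")
z = m.addVars(n, n, vtype=GRB.BINARY, name="z")   # z[i,k] = 1 if node i is allocated to hub k
x = m.addVars(n, n, n, n, lb=0.0, name="x")       # x[i,j,k,l]: flow i->j routed via hubs k, l

m.addConstr(gp.quicksum(z[k, k] for k in range(n)) == p)              # exactly p hubs
m.addConstrs(z.sum(i, "*") == 1 for i in range(n))                    # single allocation
m.addConstrs(z[i, k] <= z[k, k] for i in range(n) for k in range(n))  # only open hubs receive nodes
m.addConstrs(x.sum(i, j, k, "*") == z[i, k]
             for i in range(n) for j in range(n) for k in range(n))
m.addConstrs(x.sum(i, j, "*", l) == z[j, l]
             for i in range(n) for j in range(n) for l in range(n))

m.setObjective(gp.quicksum(float(w[i, j] * (d[i, k] + alpha * d[k, l] + d[l, j])) * x[i, j, k, l]
                           for i in range(n) for j in range(n)
                           for k in range(n) for l in range(n)), GRB.MINIMIZE)
m.optimize()
print("hubs:", [k for k in range(n) if z[k, k].X > 0.5])
```

Re-solving this model for each perturbed demand matrix is what the simulation loop in the abstract refers to.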
The hub location-allocation problem under uncertainty is a real-world problem arising in areas such as public and freight transportation and telecommunication systems. In many applications, the demand is considered inexact because of forecasting inaccuracies or the unpredictability of human behavior. This study addresses the robust uncapacitated multiple allocation hub location problem with a set of demand scenarios. The problem is formulated as a nonlinear stochastic optimization problem that minimizes the hub installation costs, the expected transportation costs and the expected absolute deviation of transportation costs. To eliminate the nonlinearity, an equivalent linear problem is introduced. The expected absolute deviation serves as the robustness measure used to derive a solution close to each scenario. The robust hub location is assumed to deliver the smallest cost difference across the scenarios. The number of scenarios increases the size and complexity of the problem; therefore, classical and improved Benders decomposition algorithms are applied to achieve the best computational performance. Numerical experiments on the CAB and AP datasets present the differences between the resulting hub networks in the stochastic and robust formulations. Furthermore, the performance of the two Benders decomposition strategies is assessed and discussed in comparison with the Gurobi solver.
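A plausible form of the robust objective described above, written for a scenario set S with probabilities p_s, scenario transportation costs C_s, hub installation costs f_k, hub opening variables h_k and a robustness weight lambda, is sketched below; the notation and the linearization of the absolute deviation via auxiliary variables theta_s are assumptions, not the paper's exact formulation.

```latex
% Sketch of the robust objective and a standard linearization of the absolute deviation.
\begin{aligned}
\min_{h,\,x,\,\theta}\quad & \sum_{k} f_k h_k
  + \sum_{s \in S} p_s\, C_s(x_s)
  + \lambda \sum_{s \in S} p_s\, \theta_s \\
\text{s.t.}\quad
  & \theta_s \ge C_s(x_s) - \sum_{s' \in S} p_{s'}\, C_{s'}(x_{s'}), \qquad s \in S, \\
  & \theta_s \ge -\Bigl( C_s(x_s) - \sum_{s' \in S} p_{s'}\, C_{s'}(x_{s'}) \Bigr), \qquad s \in S.
\end{aligned}
```

At the optimum each theta_s equals the absolute deviation of the scenario cost from its expectation, which is how a nonlinear expected-absolute-deviation term can be replaced by a linear one.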
The problem of constructing a network of distribution centres based on the statistical data analysis of an LTL transportation company is considered. The distribution centre network is built on the basis of the demand for terminal services. A statistical criterion is suggested for selecting the number of distribution centres in the network, based on applying the network robustness principle to disturbances in the demand for services at each terminal. Demand perturbations are proposed to be generated taking into account forecasts of future demand trends. A simulation study on real data is carried out. The considered problem consists of a terminal network in which cargo is generated for delivery to the other terminals. The goal is to estimate a robust number of hubs in the network that minimizes the total flow costs and is resistant to possible flow changes in the network. The results on the real dataset are illustrated and discussed.
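A minimal sketch of such a robustness criterion could look as follows, assuming a hypothetical solve_hub_location-style callable that returns the optimal hub set for a given demand matrix (for instance a wrapper around the MIP sketched earlier); the lognormal perturbation model and the parameter values are illustrative assumptions, not the paper's procedure.

```python
# Sketch of a robustness criterion: perturb the demand matrix over many simulation
# runs and record how often the optimal hub set changes for a candidate hub count p.
# `solve` is a hypothetical callable returning the optimal hub set for a demand matrix.
import numpy as np

def variation_frequency(w_base, p, solve, runs=50, sigma=0.15, seed=1):
    rng = np.random.default_rng(seed)
    base_hubs = frozenset(solve(w_base, p))
    changes = 0
    for _ in range(runs):
        # multiplicative lognormal noise as a stand-in for forecasted demand deviations
        w_pert = w_base * rng.lognormal(mean=0.0, sigma=sigma, size=w_base.shape)
        if frozenset(solve(w_pert, p)) != base_hubs:
            changes += 1
    return changes / runs  # lower frequency -> more robust choice of p

# The candidate number of hubs p with the lowest variation frequency would be selected.
```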
The literature describes Semantic Textual Similarity (STS) as a fundamental part of many Natural Language Processing (NLP) tasks. STS approaches depend on the availability of lexical-semantic resources. There have been several efforts to improve lexical-semantic resources for the English language, and the state of the art reports a large number of applications for this language. Brazilian Portuguese linguistic resources, when compared with English ones, do not have the same availability in terms of relations and content, generating a loss of precision in STS tasks. Therefore, the current work presents an approach that combines Brazilian Portuguese and English lexical-semantic ontology resources to exploit the full potential of the linguistic relations in both languages and to generate a language-mixture model for measuring STS. We evaluated the proposed approach on a well-known and respected Brazilian Portuguese STS dataset, which brought to light some considerations about mixture models and their relations with ontology language semantics.
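As a rough illustration of what a language-mixture STS measure might look like, the sketch below combines word-level similarities from the English WordNet and the Portuguese data in the Open Multilingual Wordnet via NLTK; the greedy token alignment, the mixture weight beta and the assumption that pre-translated English tokens are available are all illustrative choices, not the paper's method.

```python
# Sketch of a language-mixture similarity: combine sentence similarities computed over
# English WordNet and Portuguese Open Multilingual Wordnet synsets. Illustrative only.
from nltk.corpus import wordnet as wn
# Requires one-time downloads: nltk.download("wordnet"); nltk.download("omw-1.4")

def word_sim(w1, w2, lang):
    """Best path similarity over all synset pairs of two words in the given language."""
    best = 0.0
    for s1 in wn.synsets(w1, lang=lang):
        for s2 in wn.synsets(w2, lang=lang):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

def sentence_sim(tokens_a, tokens_b, lang):
    """Average, over tokens_a, of the best match in tokens_b (a simple greedy alignment)."""
    scores = [max((word_sim(a, b, lang) for b in tokens_b), default=0.0) for a in tokens_a]
    return sum(scores) / len(scores) if scores else 0.0

def mixture_sts(pt_a, pt_b, en_a, en_b, beta=0.5):
    """Convex combination of the Portuguese and English similarity scores."""
    return beta * sentence_sim(pt_a, pt_b, "por") + (1 - beta) * sentence_sim(en_a, en_b, "eng")
```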