In particle swarm optimization, a swarm of particles moves toward the global optimum according to each particle's own experience and the experience of the other particles. Parameters such as the particle's velocity, its best experience so far, the best experience of all the particles, and its current position are used to determine the next position of each particle. A set of update equations takes these parameters as input and determines the next position of each particle. In this article, these equations are examined in detail and the degree to which each input parameter influences the update is adjusted. To set the coefficients adaptively, inspiration is taken from the behavior of bees while collecting nectar. The method was implemented in software and evaluated on standard benchmark search environments. The results indicate that the method is effective in increasing the rate at which the particles converge to the global optimum.
Keywords: Adaptive

Particle swarm optimization (PSO) is composed of a set of particles. The aim of every particle is to approach the optimal solution and reduce its error. The error of each particle is its distance from the solution, and each particle is a potential solution. Each particle determines its future position by drawing on its own experience and consulting the experience of the other particles; its position is therefore a result of both. For example, consider a person as an intelligent particle whose goal is to buy a suitable automobile. The person pays attention to two factors: first, his own past experience of buying automobiles, and second, the opinions of other people about their experiences of buying automobiles. Based on his own experience and the experience of others, the person selects the optimal automobile.

Figure 1 indicates how a hypothetical particle behaves in the particle swarm optimization algorithm. The horizontal axis shows the extent of the search space and the vertical axis shows the error according to the fitness function. As shown in Figure 1, there is a search space in which a particle tries to reach the global optimum. x(t) is the position of the particle at time t, v(t) is the velocity of the particle at time t, p_best(t) is the best experience of the particle up to time t, and g_best(t) is the best experience of all the particles up to time t. In the PSO method, each particle tends to move toward its own best experience and the best experience of the other particles. p_best - x(t) is the distance of the particle from its own best experience and g_best - x(t) is its distance from the best experience of the other particles. The velocity v(t+1) is the resultant of the two
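For reference, the following is a minimal sketch of the standard PSO update described above, written in Python for a one-dimensional search space. The inertia weight w and the acceleration coefficients c1 and c2 are fixed illustrative values and are not taken from the article; the adaptive, bee-inspired tuning of these coefficients that the article proposes is not shown here.

```python
import random

def sphere(x):
    """Fitness (error) function with its global optimum at x = 0."""
    return x * x

def pso(fitness, n_particles=20, n_iters=100, lo=-10.0, hi=10.0,
        w=0.7, c1=1.5, c2=1.5):
    # Initialise positions randomly in the search space and velocities to zero.
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    p_best = xs[:]                        # each particle's best experience
    g_best = min(p_best, key=fitness)     # best experience of all particles

    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # v(t+1) is the resultant of the pull toward p_best and g_best.
            vs[i] = (w * vs[i]
                     + c1 * r1 * (p_best[i] - xs[i])
                     + c2 * r2 * (g_best - xs[i]))
            xs[i] += vs[i]
            # Update the personal and global best experiences.
            if fitness(xs[i]) < fitness(p_best[i]):
                p_best[i] = xs[i]
                if fitness(xs[i]) < fitness(g_best):
                    g_best = xs[i]
    return g_best

print(pso(sphere))  # converges toward the global optimum at x = 0
```

In this sketch the random factors r1 and r2 give each particle a stochastic mix of the pull toward its own best experience and the pull toward the swarm's best experience, which is exactly the behavior the figure illustrates for a single particle.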