2008 IEEE Conference on Soft Computing in Industrial Applications
DOI: 10.1109/smcia.2008.5045942
Modifying Ant Colony Optimization

Cited by 15 publications (6 citation statements)
References 8 publications
“…Artificial ants in ACO are stochastic solution construction procedures that build candidate solutions for the problem instance under consideration by exploiting artificial pheromone information, which is adapted based on the ants' search experience, and possibly available heuristic information [30]. ACO has been successfully applied to many problems, such as the traveling salesman problem [31][32][33][34] and the previously mentioned job shop scheduling [26][27][28][29].…”
Section: Literature Review
confidence: 99%
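The construction procedure summarized in this statement, in which ants extend partial solutions with probabilities biased by pheromone trails and heuristic information, can be sketched as follows. The dictionary-based pheromone and heuristic tables and the parameter names alpha and beta are illustrative assumptions, not details taken from the cited paper.

```python
import random

def choose_next(current, unvisited, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Random-proportional rule (illustrative sketch): pick the next node
    with probability proportional to pheromone^alpha * heuristic^beta."""
    # Weight each candidate edge (current, j) by trail strength and heuristic
    # desirability; alpha and beta balance the two sources of information.
    weights = [(pheromone[(current, j)] ** alpha) * (heuristic[(current, j)] ** beta)
               for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]
```

An ant would call this repeatedly, moving the chosen node from `unvisited` to its partial tour until the solution is complete.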
“…They also described underload and overload conditions in load-balancing methods. Sarayut Nonsiri and Siriporn Supratid [3] discussed how ACO allows fast, near-optimal solutions to be found, which is useful in industrial environments where computational resources and time are limited.…”
Section: Related Work
confidence: 99%
“…We have experimented with several values for the term Δ(r,s). A good choice was inspired by Q-learning [3], an algorithm developed to solve reinforcement learning problems, which allows an agent to learn an optimal policy by the recursive application of an update rule: Δ(r,s) = γ · max_{z ∈ J_k(s)} τ(s,z), where 0 < γ ≤ 1. Alternate choices are Δ(r,s) = τ₀ or Δ(r,s) = 0.…”
Section: ACS Local Updating Rule
confidence: 99%
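The Q-learning-inspired choice of Δ(r,s) quoted above can be sketched as below, assuming the standard ACS local update form τ(r,s) ← (1−ρ)·τ(r,s) + ρ·Δ(r,s); the value of ρ and the dictionary representation of τ are illustrative assumptions.

```python
def local_update(tau, r, s, candidates, rho=0.1, gamma=0.3):
    """ACS local pheromone update on edge (r, s), with the Ant-Q style
    delta = gamma * max over z in J_k(s) of tau(s, z)."""
    # Look one step ahead from s over the feasible successors (candidates),
    # mirroring the bootstrapped value estimate used in Q-learning.
    delta = gamma * max(tau[(s, z)] for z in candidates) if candidates else 0.0
    tau[(r, s)] = (1 - rho) * tau[(r, s)] + rho * delta
    return tau[(r, s)]
```

With the alternate choices mentioned in the quote, `delta` would simply be a constant τ₀ or 0 instead of the lookahead maximum.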
“…The increased amount of pheromone acts as positive feedback. This optimization technique does not rely on a mathematical description of the specific problem, but has strong global optimization capability [3], high performance [4], and flexibility. Three main aspects determine ACS:…”
Section: Introduction
confidence: 99%
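The positive-feedback mechanism this statement refers to can be sketched as an evaporate-then-deposit global pheromone update: shorter tours deposit more pheromone, so their edges become more attractive in later iterations. The constants ρ and Q and the best-tour-only deposit are illustrative assumptions, not specifics from the cited paper.

```python
def global_update(tau, best_tour, best_length, rho=0.1, Q=1.0):
    """Evaporate all trails, then reinforce the edges of the best tour.
    The deposit Q / best_length rewards shorter tours (positive feedback)."""
    # Evaporation keeps old trails from dominating forever.
    for edge in tau:
        tau[edge] *= (1 - rho)
    # Deposit along the closed tour, including the edge back to the start.
    for r, s in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[(r, s)] += rho * (Q / best_length)
    return tau
```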