2007
DOI: 10.1016/j.ejor.2005.12.028
ILP approaches to the blockmodel problem

Abstract: Blockmodelling is a method for identifying structural similarities or equivalences between elements which has applications in a variety of contexts, including multiattribute performance assessment. One criterion for forming blocks results in a difficult nonlinear integer programme. We give several integer linear programming formulations of this problem and provide comparative computational results. We show that methods of reducing symmetry proposed by Sherali and Smith are not effective in this case and propos…
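The label symmetry the abstract refers to arises because permuting block labels yields the same partition. As a minimal sketch (not one of the paper's formulations), a standard device is to restrict assignments to "restricted-growth" canonical form, where block labels must appear in order of first use. A brute-force count shows how much of the labelled search space this removes:

```python
from itertools import product

def is_canonical(a):
    """Restricted-growth check: block labels must appear in first-use order."""
    seen_max = -1
    for label in a:
        if label > seen_max + 1:
            return False
        seen_max = max(seen_max, label)
    return True

n_items, n_blocks = 4, 4
all_assignments = list(product(range(n_blocks), repeat=n_items))
canonical = [a for a in all_assignments if is_canonical(a)]

print(len(all_assignments))  # 256 labelled assignments
print(len(canonical))        # 15 = Bell(4): one per genuine partition
```

Here 256 label-symmetric assignments collapse to 15 canonical ones, one per set partition; in an ILP the same restriction is imposed with linear constraints on the assignment variables.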

Cited by 7 publications (5 citation statements)
References 23 publications
“…symmetry in the problem imposed a significant burden on the solver that could not be alleviated even by incorporating f ≤ f * at the root node of the B&B process. Furthermore, it is observed that enforcing such tight upper-bound constraints induced limited computational speed-ups and sometimes had adverse computational effects, which resonates with the findings of Ragsdale and Shapiro (1996), as further echoed by Proll (2007) in the context of the blockmodel problem. Likewise, Table 4 reports the computational performance of DTP ("sym = 5"), DTP-S, and DTP µ -S to which we have appended the constraint f ≤ f * .…”
Section: Models With Bounded Objective Functions (supporting)
confidence: 56%
“…For the noise dosage and the wagon load-balancing problems, the formulations COP µ -S and COP-S strongly dominated COP ("sym = 5", f ≤ f * ), which suggests that jointly employing the two symmetry-defeating paradigms (objective perturbations and hierarchical constraints) provides a far more decisive factor in enhancing problem solvability than relying on the CPLEX-enabled symmetry-defeating feature, even when the objective function is subjected to a very tight bound. In fact, imposing such strong upper bound constraints yielded limited computational improvements and often had adverse effects on the solver for the applications we investigated, which echoes the findings of Ragsdale and Shapiro (1996) and Proll (2007).…”
Section: Conclusion and Directions For Future Research (mentioning)
confidence: 55%
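The objective-perturbation paradigm mentioned in the statement above can be illustrated on a toy selection problem (hypothetical data, not drawn from the cited papers): when every feasible solution has the same base cost, all of them tie as optima, but adding a tiny index-dependent weight singles out one canonical representative.

```python
from itertools import combinations

n, k, eps = 6, 2, 1e-6

def cost(subset, perturbed):
    # Base cost is 1 per item, identical for all items, so every
    # k-subset ties; the perturbation eps*j breaks those ties.
    return sum(1 + (eps * j if perturbed else 0) for j in subset)

subsets = list(combinations(range(n), k))

base_opt = min(cost(s, False) for s in subsets)
ties = [s for s in subsets if cost(s, False) == base_opt]

pert_opt = min(cost(s, True) for s in subsets)
winners = [s for s in subsets if cost(s, True) == pert_opt]

print(len(ties))  # 15: every 2-subset of 6 identical items is optimal
print(winners)    # [(0, 1)]: the perturbed objective has a unique optimum
```

In a branch-and-bound solver, eliminating these symmetric optima means far fewer nodes carry equally good incumbents, which is the effect the cited formulations exploit.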
“…Variations of grouping on graphs consider different flexibilities in group density, membership, and separation. The theory, techniques, and algorithms used to create groups are not unique to SNA, and significant research on defining and creating cliques and clusters can be found in the mathematics, operations research, computer science, and IT domains for a wide variety of application areas (James et al., 2010; Proll, 2007). One grouping technique, originating from and popularized by the SNA area, is called blockmodeling.…”
Section: Methodology: Social Network Analysis (mentioning)
confidence: 99%
“…Following Jessop (2003) and Proll (2007), Jessop et al (2007) discuss approaches for this sort of optimization and describe two methods useful for moderate-sized problems. It was found that if the network density (the proportion of all inter-object relations classed as similar rather than dissimilar) was not too high then an approach based on the enumeration of groups is feasible.…”
Section: Optimal Groups (mentioning)
confidence: 99%
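The group-enumeration approach described above can be sketched in a few lines on a toy similarity relation (hypothetical data, not the cited authors' method): enumerate candidate subsets and keep those in which every internal pair is classed as similar. At low density, admissible groups are few and the enumeration stays tractable.

```python
from itertools import combinations

# Toy symmetric similarity relation on n = 6 objects (low density).
n = 6
similar = {(0, 1), (1, 2), (0, 2), (3, 4)}

def all_similar(group):
    """An admissible group: every internal pair is classed as similar."""
    return all((a, b) in similar or (b, a) in similar
               for a, b in combinations(group, 2))

groups = [g for size in range(2, n + 1)
          for g in combinations(range(n), size) if all_similar(g)]

print(groups)  # [(0, 1), (0, 2), (1, 2), (3, 4), (0, 1, 2)]
```

Only five admissible groups survive out of 57 candidate subsets of size two or more, which is why the quoted finding ties the feasibility of enumeration to network density.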