2005
DOI: 10.1002/dir.20028
Response models based on bagging neural networks

Abstract: Identifying customers who are likely to respond to a product offering is an important issue in direct marketing. Response models are typically built from historical purchase data. A popular method of choice, logistic regression, is easy to understand and build, but limited in that the model is linear in its parameters. Neural networks are nonlinear and have been found to improve predictive accuracy in a variety of business applications. Neural networks have not always demonstrated clear supremacy over traditional…

Cited by 84 publications (41 citation statements) | References 30 publications
“…A few of them have been applied to response modeling such as under-sampling (Ha, Cho, & MacLachlan, 2005), ensemble (Yu & Cho, 2006) and cost-modifying methods (Shin & Cho, 2006), among which the under-sampling method has been very popular. We apply the under-sampling method to LR.…”
Section: Balancing the Class Distribution
confidence: 99%
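The under-sampling remedy the excerpt describes (randomly discarding majority-class examples so responders and non-responders are balanced before fitting LR) can be sketched as follows. The `undersample` helper and the toy data are illustrative assumptions, not code from the cited papers:

```python
import random

def undersample(X, y, seed=0):
    """Randomly down-sample the majority class so both classes end up
    with as many examples as the minority class (hypothetical helper)."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    kept = minority + rng.sample(majority, len(minority))
    rng.shuffle(kept)
    return [X[i] for i in kept], [y[i] for i in kept]

# Toy imbalanced data: 2 responders among 10 customers
X = [[float(i)] for i in range(10)]
y = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
Xb, yb = undersample(X, y)
# After under-sampling: 4 examples, 2 of each class
```

The balanced subset would then be passed to any standard logistic-regression fit; only the training distribution changes, not the model.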
“…While selecting or extracting relevant variables is very important, it is not our main concern. Malthouse (2001) extracted 17 of the 91 input variables for this dataset, and Ha et al. (2005) used 15 of them, removing two variables whose variation is negligible. In this paper, these 15 variables were used as input variables; they are listed in Table 1.…”
Section: DMEF4 Dataset
confidence: 99%
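The screening step mentioned above (dropping variables whose variation is negligible) can be sketched with the standard library; `drop_low_variance`, the threshold, and the toy columns are illustrative assumptions:

```python
from statistics import pvariance

def drop_low_variance(columns, threshold=1e-6):
    """Keep only columns whose population variance exceeds the threshold,
    mirroring the removal of near-constant input variables."""
    return {name: vals for name, vals in columns.items()
            if pvariance(vals) > threshold}

cols = {"purchases": [1, 4, 2, 7],      # varies: kept
        "constant_flag": [1, 1, 1, 1]}  # near-constant: dropped
kept = drop_low_variance(cols)
# kept contains only 'purchases'
```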
“…However, they still need human annotation efforts for constructing the initial training corpus. To resolve this problem, we propose a semi-supervised NER system using active learning [16] based on bagging (bootstrap aggregating) [17] with distant supervision [18]. Unlike existing ML-systems, our system does not require a substantial amount of NE tagged training corpus and instead, only requires a NE dictionary that contains NEs and their classes.…”
Section: Previous Work
confidence: 99%
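Bagging (bootstrap aggregating), as cited in the excerpt, trains each base model on a bootstrap resample of the training set and aggregates their predictions by majority vote. A minimal sketch with a toy threshold learner standing in for a neural network; all names and data here are hypothetical:

```python
import random
from statistics import mean

def bootstrap(data, rng):
    """One bootstrap replicate: len(data) draws with replacement."""
    return [rng.choice(data) for _ in data]

def fit_stump(sample):
    """Toy base learner: threshold halfway between the class means."""
    pos = [x for x, label in sample if label == 1]
    neg = [x for x, label in sample if label == 0]
    if not pos or not neg:              # degenerate resample: fall back
        t = mean(x for x, _ in sample)
    else:
        t = (mean(pos) + mean(neg)) / 2
    return lambda x: 1 if x > t else 0

def bag(data, n_models=25, seed=0):
    """Bagging: one stump per bootstrap sample, majority-vote prediction."""
    rng = random.Random(seed)
    models = [fit_stump(bootstrap(data, rng)) for _ in range(n_models)]
    return lambda x: 1 if sum(m(x) for m in models) * 2 >= n_models else 0

data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
predict = bag(data)
# predict(0.05) -> 0 (non-responder), predict(0.95) -> 1 (responder)
```

In the paper that this page indexes, the base learners are neural networks rather than stumps; the resampling-and-voting skeleton is the same.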
“…The response rate was 9.4%. For practical reasons such as computational cost and imbalanced data distribution, only a subset of DMEF4 customers was selected and used for experiments based on 'weighted dollar amount spent' as was defined in (Ha, Cho, & MacLachlan, 2005):…”
Section: Data Set
confidence: 99%
“…However, the complexity of human behavior left these inherently linear models with much to be desired. Recently, nonlinear and more complex machine learning approaches were proposed, such as neural networks (Bentz & Merunkay, 2000; Bounds & Ross, 1997; Ha, Cho, & MacLachlan, 2005; Moutinho, Curry, Davies, & Rita, 1994; Kim & Street, 2004; Zahavi & Levin, 1997a), decision trees (Haughton & Oulabi, 1997), and support vector machines (Cheung, Kwok, Law, & Tsui, 2003). They are more flexible, so that almost any degree of complexity can be learned or handled.…”
Section: Introduction
confidence: 98%