2013
DOI: 10.19030/jabr.v15i4.8151

An Application Of An Artificial Neural Network Investment System To Predict Takeover Targets

Abstract: Artificial neural networks are a robust, effective complement to traditional statistical methods in financial applications. They can incorporate qualitative and quantitative information, and recognize underlying patterns and trends in large, complex data sets. This paper applies a neural network model to identify potential acquisition targets. The model incorporates various factors based on acquisition theories suggested in the literature. The resulting neural network model exhibits a highly successful predict…
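The kind of model the abstract describes can be illustrated with a minimal sketch: a one-hidden-layer feed-forward network trained by gradient descent to separate synthetic "acquired" firms from "non-acquired" firms. This is not the authors' model — the two input ratios, the data, and all parameter choices below are hypothetical.

```python
# Minimal sketch (not the paper's model): a 2-input, one-hidden-layer
# feed-forward network with sigmoid units, trained by plain gradient
# descent on squared error. All firm data is synthetic.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """2 inputs -> `hidden` sigmoid units -> 1 sigmoid output."""
    def __init__(self, hidden=4, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(self.w1, self.b1)]
        out = sigmoid(sum(w * hi for w, hi in zip(self.w2, h)) + self.b2)
        return h, out

    def train(self, data, lr=0.5, epochs=2000):
        for _ in range(epochs):
            for x, y in data:
                h, out = self.forward(x)
                # Backpropagate the squared-error loss.
                d_out = (out - y) * out * (1 - out)
                for j, hj in enumerate(h):
                    d_hj = d_out * self.w2[j] * hj * (1 - hj)
                    self.w2[j] -= lr * d_out * hj
                    for i, xi in enumerate(x):
                        self.w1[j][i] -= lr * d_hj * xi
                    self.b1[j] -= lr * d_hj
                self.b2 -= lr * d_out

# Synthetic firms: (undervaluation score, leverage), label 1 = acquired.
data = [((0.9, 0.2), 1), ((0.8, 0.3), 1), ((0.7, 0.1), 1),
        ((0.2, 0.8), 0), ((0.1, 0.9), 0), ((0.3, 0.7), 0)]
net = TinyNet()
net.train(data)
preds = [round(net.forward(x)[1]) for x, _ in data]
```

The point of such a model, as the citing papers below note, is that no functional form linking the inputs to the acquisition outcome has to be assumed in advance.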

Cited by 30 publications (7 citation statements)
References 0 publications
“…Barnes, 1990), logistic regression (e.g., Palepu, 1986), and probit (Pastena & Ruland, 1986). Some models have also been proposed using learning-based models like artificial neural networks (Cheh, Weinberg & Yook, 1999; Panigrahi, 2004) and expert systems (e.g., Lyons & Persek, 1991). Barnes (2000) investigated whether the choice of estimating technique (LDA vs. logit) and the choice of data form significantly affect the accuracy of takeover prediction models.…”
Section: Definition Of Targets And Non-targets Vary In The Literature
confidence: 99%
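The logit approach named in the statement above (Palepu, 1986; Barnes, 2000) can be sketched in a few lines: logistic regression fit by gradient descent on synthetic firm data. The feature names and the sample are illustrative assumptions, not data from any of the cited studies.

```python
# Hedged sketch of the logit (logistic regression) approach to takeover
# prediction. Fitted by gradient descent on log-loss; data is made up.
import math

def fit_logit(data, lr=0.1, epochs=5000):
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# (growth, liquidity) -> 1 if the firm was later acquired (synthetic)
sample = [((0.8, 0.6), 1), ((0.7, 0.9), 1), ((0.2, 0.3), 0), ((0.1, 0.2), 0)]
w, b = fit_logit(sample)

def prob(x):
    """Estimated probability that firm x is a takeover target."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Unlike LDA, the logit model makes no multivariate-normality assumption about the predictors, which is one axis of the comparison Barnes (2000) investigated.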
“…More recently, Computational Intelligence (CI) techniques such as the Artificial Neural Network (ANN) and its variations, such as Self-Organizing Maps (SOMs) or Hopfield Neural Networks (HNNs), have been used for predicting takeover targets. In arguing for the benefit of ANNs, Cheh, Weinberg and Yook [18] claim that the parametric nature of the statistical techniques requires one to make certain assumptions about the exact nature of the functional relationship between the multiple input variables. Such a problem can be avoided by using ANNs, which do not require any assumed functional relationship between the multiple input variables.…”
Section: Review Of Literature
confidence: 99%
“…Meador, Church, and Rayburn [9] use a sample of 100 acquired companies; the analysis of Tsagkanos, Georgopoulos, and Siriopoulos [10] consists of 35 acquired companies. Cheh, Weinberg and Yook [18] use a sample space of 1275 unacquired companies and 173 acquired companies. Hongjiu, Yanrong, and Shufen [20] use a set of ten enterprises to distinguish between target and non-target companies.…”
Section: Review Of Literature
confidence: 99%
“…To classify or predict the swimming times, we utilize twelve different machine learning methods that are based on variations of the following nine approaches:
1) the Support Vector Regression (SVR) method [24,28,3], which has been mostly used for pattern recognition;
2) the Artificial Neural Network (ANN) method, which has been widely used in classification [27,11,12,7];
3) the K-nearest Neighbor (KNN) algorithm, in which the K training samples closest to the test sample are used to select the label [10,13];
4) the Support Vector Machine (SVM) method, based on the structural risk minimization principle and statistical learning theory [23,4];
5) the Decision Tree (DT) algorithm, based on a greedy top-down recursive partitioning strategy for tree growth [9,2];
6) the Random Forest (RF) approach, an ensemble classifier that consists of many decision trees and outputs the class voted by the majority of its trees [15,16];
7) AdaBoost, which constructs a succession of weak learners by using different training sets derived from resampling the original data [20,6];
8) the Naive Bayes (NB) classification, which relaxes the restriction of dependency structures between attributes by simply assuming that attributes are conditionally independent given the class label [26,29];
9) Linear Discriminant Analysis (LDA), a supervised method that searches for projection axes on which the data points of different classes are far from each other while data points of the same class are close to each other [5].…”
confidence: 99%
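One of the simpler methods in the quoted list, K-nearest neighbors (item 3), can be sketched directly: label a test point by majority vote of the K closest training samples under Euclidean distance. The training points and labels below are invented for illustration.

```python
# Illustrative K-nearest-neighbor classifier (item 3 in the quoted
# list): majority vote among the K Euclidean-closest training samples.
# The swimmer feature vectors and labels are synthetic.
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """train: list of (features, label) pairs; x: feature tuple."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (stroke rate, drag score) features with speed labels.
train = [((1.0, 1.0), "slow"), ((1.2, 0.9), "slow"), ((0.9, 1.1), "slow"),
         ((3.0, 3.2), "fast"), ((3.1, 2.9), "fast"), ((2.8, 3.0), "fast")]
```

KNN makes no assumption about the functional form of the decision boundary, which is the same nonparametric argument the earlier citation statement attributes to ANNs.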