2012 45th Hawaii International Conference on System Sciences
DOI: 10.1109/hicss.2012.100
An Exploratory Examination of Antecedents to Software Piracy: A Cross-Cultural Comparison

Abstract: Software piracy continues to be a growing problem on a global scale for software developers. The purpose of this study was to conduct a cross-cultural comparison of a model predicting the intent of individuals to pirate software using two subsamples: Jordan and the US. Our results suggest that the Theory of Reasoned Action provides strong predictive ability for our US subsample, but not for our Jordanian sample. Additionally, public self-consciousness, ideology, and religiosity varied in their ability to moder…

Cited by 5 publications (2 citation statements). References 40 publications (50 reference statements).
“…This is important because the psychometric properties from the samples must be demonstrated to have the same structure to establish that the groups had similar interpretations of our instrument's items. Failure to establish measurement invariance would suggest that we measured different phenomena across the groups, therefore making comparison between groups meaningless [35]. To assess measurement invariance, we used the component-based CFA in SmartPLS 2 [32] to conduct factor analysis for each group of data and retained items that had factor loadings of at least .5 [18] in all the groups (and dropped for all groups items with loadings less than .5), thereby establishing configural invariance.…”
Section: Multi-group Comparison
Citation type: mentioning; confidence: 99%
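The retention rule described in the citation statement above (run a factor analysis per group, keep only items whose loadings reach .5 in every group, and drop below-cutoff items for all groups) can be illustrated with a short sketch. This is not the authors' SmartPLS 2 workflow: it approximates an item's outer loading as its correlation with the unweighted composite score of its construct, and the group names, item weights, and sample sizes below are hypothetical.

```python
# Illustrative sketch only; the paper used component-based CFA in SmartPLS 2 [32].
# Here an item's "loading" is approximated as its correlation with the unweighted
# composite score of its construct, and all data are simulated.
import numpy as np


def simulate_group(n, weights, rng):
    """Generate item scores driven by one latent factor plus unit-variance noise."""
    factor = rng.normal(size=(n, 1))
    noise = rng.normal(size=(n, len(weights)))
    return factor * np.asarray(weights) + noise


def outer_loadings(items):
    """Correlation of each item with the composite (mean) score of its construct."""
    score = items.mean(axis=1)
    return np.array([np.corrcoef(items[:, j], score)[0, 1]
                     for j in range(items.shape[1])])


def configural_invariant_items(groups, threshold=0.5):
    """Retain items whose loading is >= threshold in every group; items falling
    below the cutoff in any group are dropped for all groups."""
    loadings = {name: outer_loadings(x) for name, x in groups.items()}
    n_items = next(iter(groups.values())).shape[1]
    return [j for j in range(n_items)
            if all(loadings[name][j] >= threshold for name in groups)]


rng = np.random.default_rng(0)
groups = {
    "US": simulate_group(200, [0.9, 0.9, 0.9, 0.2], rng),      # hypothetical data
    "Jordan": simulate_group(150, [0.8, 0.9, 0.8, 0.2], rng),  # hypothetical data
}
# The weakly weighted fourth item should fall below .5 in both groups and be dropped.
print("Items retained for all groups:", configural_invariant_items(groups))
```

Retaining only the items that clear the cutoff in every group is what establishes a common factor structure (configural invariance) before any between-group comparison is made.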
“…Using the criteria from Chin (1998) and Fornell and Larcker (1981), indicator reliability can be assumed because Cronbach's α and the composite reliability, which reflect the strength of each indicator's correlation with its variable, are all higher than the threshold value of 0.7. Convergent and discriminant validity can be assumed as all constructs have an Average Variance Extracted (AVE) (which represents the variance extracted by a variable from its indicator items) above the recommended threshold of 0.5 and greater than the variance shared with other variables (Setterstrom et al., 2012). The measurement models yielded acceptable values on all indices for PLS model validity and reliability.…”
Section: The Measurement Model
Citation type: mentioning; confidence: 99%
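The thresholds cited in this statement (Cronbach's α and composite reliability of at least 0.7, AVE of at least 0.5, and AVE exceeding the variance shared with other constructs per Fornell and Larcker) lend themselves to a small worked sketch. The formulas below are the standard composite-reliability and AVE definitions; the construct names, loadings, and correlation value are invented for illustration and are not results from the paper.

```python
# Illustrative sketch with invented numbers; formulas follow the standard
# composite-reliability and AVE definitions used with PLS measurement models.
import numpy as np


def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of a standardized item is 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())


def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()


def fornell_larcker_ok(ave, corr):
    """Each construct's AVE must exceed its squared correlation with every other
    construct (equivalently, sqrt(AVE) exceeds the inter-construct correlation)."""
    return all(ave[a] > r ** 2 and ave[b] > r ** 2 for (a, b), r in corr.items())


# Hypothetical standardized outer loadings and one inter-construct correlation.
loadings = {"Attitude": [0.82, 0.79, 0.88], "SubjectiveNorm": [0.75, 0.81, 0.70]}
cr = {k: composite_reliability(v) for k, v in loadings.items()}
ave = {k: average_variance_extracted(v) for k, v in loadings.items()}
corr = {("Attitude", "SubjectiveNorm"): 0.42}

print({k: {"CR": round(cr[k], 3), "AVE": round(ave[k], 3)} for k in loadings})
print("Reliability OK (CR >= 0.7):", all(v >= 0.7 for v in cr.values()))
print("Convergent validity OK (AVE >= 0.5):", all(v >= 0.5 for v in ave.values()))
print("Discriminant validity OK (Fornell-Larcker):", fornell_larcker_ok(ave, corr))
```

With these invented loadings both constructs clear the 0.7 and 0.5 cutoffs, and a 0.42 inter-construct correlation (shared variance of about 0.18) stays below each AVE, which is the pattern the citing authors report when they invoke Setterstrom et al. (2012).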