Trust is vital for market development, but how can trust be enhanced in a marketplace? A common view is that more trusting helps to build trust, especially in less developed economies. In this paper, we argue that more trusting may lead to less trust. We set up a rational-expectations model in which a marketplace uses buyer protection to promote buyer trusting. Our results show that buyer protection may reduce trust in equilibrium and even hinder market expansion, because it triggers differential entry between honest and strategic sellers and may induce more cheating by strategic sellers. Using a large transaction-level data set from the early years of Eachnet.com (an eBay equivalent in China), we find evidence consistent with the model's predictions: stronger buyer protection leads to less favorable evaluations of seller behavior and is associated with slower market expansion. These findings suggest that a trust-promoting policy aimed at buyer trusting may not be effective unless it is accompanied by additional incentives to improve seller trustworthiness. JEL: D8, L15, L81.
We propose a token-based blockchain system that streamlines the system's abstractions into a universal token structure. In the proposed system, each token carries an identity, which enables specific rollbacks and governance mechanisms that make 51% attacks unprofitable. Because token-based bookkeeping verifies and updates only the ownership recorded within each transaction, the system supports unlimited parallel expansion and cross-chain transactions. Its flexible authority-management mechanism is regulatory-friendly: the intensity of supervision and governance can be adapted to different application scenarios. Index Terms: token-based, parallel processing, specific rollback, regulatory-friendly.
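To make the bookkeeping idea concrete, here is a minimal sketch of per-token ownership tracking. All names (Token, Ledger, transfer, rollback_token) are illustrative, not the paper's API, and the paper publishes no reference implementation; the sketch only shows why transactions over disjoint tokens are independent and why a single token's history can be rolled back in isolation.

```python
# Minimal sketch: each token has its own identity, owner, and history,
# so a transaction verifies and updates nothing but the tokens it names.
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class Token:
    token_id: str                                      # unique identity of this token
    owner: str                                         # current owner's address
    history: List[str] = field(default_factory=list)   # past owners, oldest first

class Ledger:
    def __init__(self) -> None:
        self.tokens: Dict[str, Token] = {}

    def mint(self, token_id: str, owner: str) -> None:
        self.tokens[token_id] = Token(token_id, owner)

    def transfer(self, token_id: str, sender: str, receiver: str) -> None:
        """Verify and update ownership of one token only; transfers of
        disjoint tokens never touch shared state and can run in parallel."""
        token = self.tokens[token_id]
        if token.owner != sender:
            raise ValueError(f"{sender} does not own {token_id}")
        token.history.append(token.owner)
        token.owner = receiver

    def rollback_token(self, token_id: str) -> None:
        """'Specific rollback': revert one token's last transfer
        without disturbing the rest of the ledger."""
        token = self.tokens[token_id]
        if token.history:
            token.owner = token.history.pop()

# Example: the two transfers touch different tokens, so neither blocks the other.
ledger = Ledger()
ledger.mint("t1", "alice")
ledger.mint("t2", "bob")
ledger.transfer("t1", "alice", "carol")
ledger.transfer("t2", "bob", "dave")
ledger.rollback_token("t1")            # undoes only t1's transfer
assert ledger.tokens["t1"].owner == "alice"
```

Because ownership state is partitioned by token identity rather than held in global account balances, sharding the token set across chains or validators is a natural extension, which is one plausible reading of the paper's parallel-expansion claim.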
Several deep architectures, such as hierarchical feature learning and variants of the Deep Belief Network (DBN), have been proposed to learn transformation-invariant features. Given the complexity of these variants, it is natural to ask whether the DBN itself is transformation-invariant. We first test an ordinarily trained DBN and find that nearly the same error rates can be achieved on transformed test data if the weights of the bottom layer are transformed to match the transformations applied to that data. This implies that the bottom-layer weights can store the knowledge needed to handle transformations such as rotation, shifting, and scaling. Exploiting the DBN's capacity for continual learning and weight storage, we present a Weight-Transformed Training Algorithm (WTTA) that adds no layers, units, or filters to the original DBN. WTTA builds on the original training procedure, transforms the weights after each training loop, and remains unsupervised. In MNIST handwritten-digit recognition experiments with a 784-100-100-100 DBN, we compared recognition ability across ranges of weight transformations: most error rates produced by WTTA were below 25%, while most produced by the original training algorithm exceeded 25%. On part of the MIT-CBCL face database, with varying illumination, the best test accuracy achieved was 87.5%. Similar results can be obtained by training on datasets that cover all of the transformations, but WTTA needs only the original training data, transforming the weights after each training loop. WTTA thus reveals the inherent transformation-invariance of the DBN, which can then recognize transformed data at satisfactory error rates without additional components.
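The core step is transforming bottom-layer weights in image space. Below is a minimal sketch of that step under stated assumptions: MNIST-sized 28x28 inputs (matching the 784-unit visible layer), each hidden unit's weight column treated as a flattened filter, and rotation/shift as the transformations. The function name and the choice of when to apply it are illustrative; the paper's exact WTTA schedule is not reproduced here.

```python
# Sketch: transform each bottom-layer filter of a DBN in image space,
# mirroring a rotation or shift of the input digits.
import numpy as np
from scipy.ndimage import rotate, shift

def transform_bottom_weights(W, kind="rotate", amount=15.0, side=28):
    """W: (side*side, n_hidden) weight matrix of the first RBM layer.

    Each column is reshaped to a side x side filter, transformed, and
    flattened back, so the layer responds to transformed inputs.
    """
    W_t = np.empty_like(W)
    for j in range(W.shape[1]):
        filt = W[:, j].reshape(side, side)
        if kind == "rotate":
            filt = rotate(filt, angle=amount, reshape=False, order=1)
        else:  # "shift": translate by `amount` pixels on both axes
            filt = shift(filt, (amount, amount), order=1)
        W_t[:, j] = filt.ravel()
    return W_t

# After each unsupervised training loop, a WTTA-style procedure would
# transform the weights instead of augmenting the training data:
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(784, 100))     # toy bottom-layer weights
W_rot = transform_bottom_weights(W, "rotate", 15.0)
W_shift = transform_bottom_weights(W, "shift", 2.0)
```

The design point the abstract makes is visible here: only the weights change, so no extra layers, units, or filters are added to the network, and no transformed training examples are needed.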
The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising in the practice of AI Ethics research. We offer this paper and its bibliography as a resource to the global community of AI Ethics researchers, and we argue for the protection and freedom of this research community. Corporate as well as academic research settings involve responsibilities, duties, dissent, and conflicts of interest. This article is meant to provide a reference point, at the beginning of this decade, on matters of consensus and disagreement about how to enact AI Ethics for the good of our institutions, society, and individuals. We identify issues that arise at the intersection of information technology, socially encoded behaviors and biases, and individual researchers' work and responsibilities. We revisit some of the most pressing problems with AI decision-making and examine the difficult relationship between corporate interests and the early years of AI Ethics research. We propose several actions we can take collectively to support researchers throughout the field of AI Ethics, especially those from marginalized groups who may face even greater barriers to speaking out and having their research amplified. We promote the global community of AI Ethics researchers and the evolution of professional standards to guide a technological future that makes life better for all.