Open Source Hardware (OSH) is an increasingly viable approach to intellectual property management that extends the principles of Open Source Software (OSS) to the domain of physical products. These principles support the development of products in transparent processes that allow any interested person to participate. While increasing numbers of products have been released as OSH, little is known about the prevalence of participative development practices in this emerging field. It remains unclear to what extent the transparent and participatory processes known from software have reached hardware product development. To fill this gap, this paper applies repository mining techniques to investigate the transparency and workload distribution of 105 OSH product development projects. The results highlight a heterogeneity of practices spanning a continuum between public and private development settings. They reveal different organizational patterns with different levels of centralization and distribution. Nonetheless, they clearly indicate the expansion of the open source development model from software into the realm of physical products and provide the first large-scale empirical evidence of this recent evolution. In doing so, this article gives substance to an emerging phenomenon and helps give it a place in the scientific debate. It delivers categories to delineate practices, techniques to investigate them in further detail, and a large dataset of exemplary OSH projects. The discussion of these first results signposts avenues for a stream of research aimed at understanding the stakeholder interactions at work in new product innovation practices, so that institutions and industry can provide appropriate responses.
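As a rough illustration of the kind of repository mining mentioned above, the following minimal Python sketch counts commits per author in a local Git clone and summarizes workload concentration with a Gini coefficient. It is a hypothetical example of the general technique, not the instrumentation used in the study.

```python
import subprocess
from collections import Counter

def commit_counts(repo_path: str) -> Counter:
    """Count commits per author e-mail in a local git repository."""
    emails = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%ae"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return Counter(emails)

def gini(counts: list[int]) -> float:
    """Gini coefficient of the commit distribution (0 = evenly spread, 1 = fully concentrated)."""
    values = sorted(counts)
    n, total = len(values), sum(values)
    if n == 0 or total == 0:
        return 0.0
    # cumulative weighted-sum formulation of the Gini coefficient
    cum = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * cum) / (n * total) - (n + 1) / n

if __name__ == "__main__":
    counts = commit_counts(".")
    print("Top contributors:", counts.most_common(5))
    print("Workload concentration (Gini):", round(gini(list(counts.values())), 3))
```

A low Gini value would indicate broadly distributed contributions, while a value close to 1 would indicate a project carried by one or two core developers.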
Intense collaboration within networks of stakeholders characterizes current engineering design processes. In these processes, engineers use IT systems to create artifacts that manifest their knowledge and allow it to circulate. Still, a research gap exists regarding the kinds of, relations between, and interdependencies among IT systems, artifacts, and knowledge types. This article addresses this gap by presenting the results of a systematic literature review. The results contribute to closing this gap, give insight into the focuses of current research, and identify the need for further investigation.
To operate successfully as components of smart service systems (SSS), smart services need data of sufficient quality and quantity. This is especially true when statistical methods from the field of artificial intelligence (AI) are used: training data quality directly determines the quality of the resulting AI models. However, AI model quality only becomes known once training can actually take place, and creating data sources that do not yet exist (e.g., sensors) takes time. Therefore, systematic data specification is needed alongside SSS development. Today, there is a lack of systematic support for specifying the data relevant to smart services. This gap can be closed by the systematic approach SemDaServ presented in this article. The research approach is based on Blessing's Design Research Methodology (literature study, derivation of key factors, success criteria, solution functions, solution development, applicability evaluation). SemDaServ provides a three-step process and five accompanying artifacts. Using domain knowledge for data specification is critical and creates additional challenges; therefore, the SemDaServ approach systematically captures and semantically formalizes domain knowledge in SysML-based models for information and data. The applicability evaluation in expert interviews and expert workshops confirmed the suitability of SemDaServ for data specification in the context of SSS development. SemDaServ thus offers a systematic approach to specifying the data requirements of smart services early on, supporting development through to continuous integration and continuous delivery scenarios.
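To make the idea of an early data specification more tangible, the following minimal Python sketch records the kind of information such a specification might capture per signal and flags data sources that still have to be created. Field names and example values are hypothetical and do not represent the SysML-based SemDaServ artifacts.

```python
from dataclasses import dataclass, field

@dataclass
class DataRequirement:
    """Illustrative data requirement for a smart service (hypothetical fields)."""
    signal: str                 # e.g. "spindle_vibration"
    source: str                 # existing or planned data source
    unit: str                   # physical unit of the signal
    sampling_rate_hz: float     # required sampling rate
    available: bool             # does the data source already exist?
    quality_criteria: list[str] = field(default_factory=list)

def missing_sources(requirements: list[DataRequirement]) -> list[DataRequirement]:
    """Return requirements whose data sources still have to be created,
    so that their lead time can be planned alongside SSS development."""
    return [r for r in requirements if not r.available]

requirements = [
    DataRequirement("spindle_vibration", "accelerometer A1", "m/s^2", 1000.0, True,
                    ["no gaps > 1 s", "calibrated"]),
    DataRequirement("coolant_temperature", "planned sensor T7", "degC", 1.0, False),
]
for req in missing_sources(requirements):
    print(f"Data source to be created: {req.source} ({req.signal})")
```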
Value creation in most business areas takes place in networks that involve a wide range of stakeholders from various disciplines within and beyond company borders. Collaboration in such networks requires the exchange of knowledge that is manifested in digital artefacts and consequently in data. As the utilization of this "hidden" knowledge has become increasingly important, the provision of relevant data in sufficient quality has become crucial as well. This article proposes a reference model for knowledge-driven data provision processes, developed within a research project at the Virtual Vehicle Research Center GmbH for a future networked engineering environment. The model describes a systematic process that operationalizes data provision from knowledge requirements through the identification, extraction, and provision of raw data to the application of the resulting data sets. Still, the model in its current state is only descriptive and needs further development and validation in practical use cases.
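Purely as an illustration of the described flow from knowledge requirements to data application, the Python sketch below stubs out the stages as functions chained into a pipeline. All names, catalogue entries, and return values are hypothetical and do not represent the reference model itself.

```python
from typing import Callable, Iterable

def identify_sources(knowledge_requirement: str) -> list[str]:
    """Map a knowledge requirement to candidate data sources (stubbed lookup)."""
    catalogue = {"fatigue behaviour of weld seams": ["test_bench_db", "simulation_results"]}
    return catalogue.get(knowledge_requirement, [])

def extract_raw_data(sources: Iterable[str]) -> list[dict]:
    """Extract raw records from each identified source (stubbed extraction)."""
    return [{"source": s, "records": f"raw data from {s}"} for s in sources]

def provide_data_set(raw: list[dict]) -> dict:
    """Bundle the extracted data into a provisioned data set with minimal metadata."""
    return {"data": raw, "quality_checked": True}

def apply_data_set(data_set: dict, consumer: Callable[[dict], None]) -> None:
    """Hand the provisioned data set over to its engineering application."""
    consumer(data_set)

apply_data_set(
    provide_data_set(extract_raw_data(identify_sources("fatigue behaviour of weld seams"))),
    consumer=lambda ds: print(f"Applying data set with {len(ds['data'])} source(s)"),
)
```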