Modern sensors deployed in most Industry 4.0 applications are intelligent: they exhibit sophisticated behavior, usually due to embedded software, and offer network connectivity. For that reason, calibrating an intelligent sensor currently involves more than measuring physical quantities. Because the behavior of modern sensors depends on embedded software, a comprehensive assessment of such sensors necessarily demands analysis of that software. Interlaboratory comparisons, in turn, are comparative analyses of a body of laboratories involved in such assessments. While interlaboratory comparison is a well-established practice in the physical, chemical, and biological sciences, it is a recent challenge for software assessment. Establishing quantitative metrics to compare the performance of accredited software analysis and testing laboratories is no trivial task: software is intangible, its requirements accommodate some ambiguity, inconsistency, or information loss, and software testing and analysis are highly human-dependent activities. In the present work, we investigate whether performing interlaboratory comparisons for software assessment through quantitative performance measurement is feasible. The proposal was to evaluate each laboratory's competence in software code analysis activities using two quantitative metrics: code coverage and mutation score. Our results demonstrate the feasibility of establishing quantitative comparisons among accredited software analysis and testing laboratories. One of the comparison rounds was registered as a formal proficiency testing in the database, the first registered proficiency testing focused on code analysis.
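As a point of reference for the two metrics named above, mutation score is conventionally computed as the fraction of non-equivalent mutants killed by a test suite. The sketch below illustrates that standard formula only; the function name and the sample numbers are illustrative and do not come from the study.

```python
def mutation_score(killed: int, total: int, equivalent: int = 0) -> float:
    """Fraction of non-equivalent mutants killed by the test suite.

    killed     -- mutants detected (killed) by at least one test
    total      -- all generated mutants
    equivalent -- mutants semantically identical to the original program,
                  excluded from the denominator by convention
    """
    candidates = total - equivalent
    if candidates <= 0:
        raise ValueError("no non-equivalent mutants to score against")
    return killed / candidates

# Illustrative values: 45 of 50 mutants killed, 2 judged equivalent.
print(mutation_score(killed=45, total=50, equivalent=2))  # 0.9375
```

Code coverage is analogous in shape (covered items divided by total items), which is part of what makes both metrics usable for cross-laboratory comparison.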
The occurrence of defects in software artifacts is practically inevitable. Relying solely on testing activities to identify these defects is extremely risky. Quality aspects must be addressed throughout the software development process, since they cannot be imposed once the product is finished. The costs of testing, isolating, correcting, and re-testing software are higher than the cost of identifying defects as soon as they are introduced into the artifacts produced along the development cycle. Software inspections aim to reduce the number of defects propagated from one development phase to the next. In an inspection process, defect identification can be performed ad hoc, with checklists, or by adopting a specific technique. Perspective-Based Reading (PBR) techniques were created to support defect identification in software requirements documents written in natural language. PBR has been the subject of several experimental studies, and the observations resulting from these studies motivated us to define a tool to support its application. Our hypothesis concerns the possibility of reducing the time required for inspection. A feasibility study of the tool, conducted with graduate students, showed evidence of this possibility and of the feasibility of using the tool.
One way for software organizations to remain competitive is to ensure their innovative capacity and the continuous increase of their software process productivity with quality. Indeed, the ability to increase software productivity relies, among other issues, on the organization's measurement and prediction capacity. Productivity refers to the rate at which a company produces goods, and its observation takes into account the number of people and the amount of other resources necessary to produce those goods. However, it is not clear how productivity can be observed when the product is software. Therefore, this work presents the results of an investigation into software productivity measurement and prediction methods. A previous systematic literature review was evolved and re-executed, limited to the year 2013. It identified 89 new primary studies, evidencing that: (1) ratio-based and weighted-factor analyses still represent most of the methods applied to measure, describe, and interpret software productivity; (2) 24 factors show evidence of influencing productivity; and (3) SLOC-based measures, despite the criticism and issues associated with this sort of measurement, are the most common measures used in the studies.