2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE) 2019
DOI: 10.1109/ase.2019.00096
Automated Trainability Evaluation for Smart Software Functions

Cited by 2 publications (1 citation statement). References 9 publications.
“…Finally, we concluded that understanding how Software Engineering development practices, and the adoption of a Machine Learning workflow in accordance with those practices (more specifically, with the Software Engineering life-cycle), is of vital importance for the evolution of Machine Learning/Artificial Intelligence and the continued development of its applications, especially at large scale, even if further research on the topic is needed.

- … Provenance tags for data models [1], [2], [13]
- Documentation and Versioning: extracting metadata from repositories is difficult; a catalog of ML models to support design and maintenance [15]
- Non-functional Requirements: security; unassured reliability and lacking transparency; identifying parts of ISO 26262 to be adapted to ML; an approach based on dependability assurances [30], [31]
- Design and Implementation: APIs look and feel like conventional APIs but abstract away data-driven behavior; a catalog of design patterns for ML development; information to support documentation and design of APIs [6], [32]
- Evaluation: testing interpretability, privacy, or efficiency of ML; proposal of new test semantics; tests based on a quality score [3], [33], [34]
- Deployment and Maintenance: lack of support for adaptation based on feedback; an approach to support adaptation based on quality gates [34]
- Software Capability Maturity Model (CMM)…”

Section: Discussion
Confidence: 99%