2020
DOI: 10.1007/s40319-020-00908-z

The Intersection Between AI and IP: Conflict or Complementarity?

Abstract: Artificial intelligence (AI) is everywhere. Although Alan Turing raised the question of providing machines with a form of intelligence as early as 1950, AI has only since revealed its potential thanks to big data and improved computing power. The term "artificial intelligence" was popularized by John McCarthy and Marvin Lee Minsky, organizers of the 1956 Dartmouth conference that made AI a field of research in its own right. AI refers to systems that demonstrate intelligent behavior by analyzing their envir…

Cited by 7 publications (6 citation statements)
References 0 publications
“…It appears that the AIA prioritizes economic, business, and innovation over moral concerns and that human rights are just an afterthought (Castets‐Renard & Besse, 2022). Noteworthy is that fundamental rights seldom appear in the main text; the primary focus of the AIA is clearly on markets and market access of AI systems.…”
Section: Discussion
confidence: 99%
“…As such, the proposed approach leaves the preliminary risk assessment, including identifying AI systems as high‐risk to the provider and developer. The latter possess consequently a significant amount of discretionary leeway (Smuha et al., 2021), for example, they can decide whether the used software is an AI system, whether the system may cause harm, and how to comply with the mandatory requirements of Title III, Chapter 2 AIA (Castets‐Renard & Besse, 2022); furthermore, they can (theoretically) classify high‐risk technologies as adhering to the rules using the self‐assessment procedure (EPRS, 2022a). Critics fear the rules on prohibited and high‐risk AI practices may prove ineffective because the risk assessment is left to the provider/developer (Kop, 2021).…”
Section: Discussion
confidence: 99%