2015
DOI: 10.1093/ijlit/eav009
Search engine liability for autocomplete suggestions: personality, privacy and the power of the algorithm

Abstract: This article is concerned with the liability of search engines for algorithmically produced search suggestions, such as through Google's 'autocomplete' function. Liability in this context may arise when automatically generated associations have an offensive or defamatory meaning, or may even induce infringement of intellectual property rights. The increasing number of cases that have been brought before courts all over the world puts forward questions on the conflict of fundamental freedoms of speech and access…


Cited by 23 publications (9 citation statements). References 0 publications.
“…The campaign highlighted examples like 'women shouldn't [have rights],' 'women cannot [be trusted],' 'women should [be slaves]' (where the part in square brackets represents a search suggestion that completes the partially typed query preceding it). 2 Similar issues have also been highlighted by legal complaints concerned with suggestions allegedly defaming an individual or an organization (e.g., a suggestion implying the plaintiff is a 'scam' or a 'fraud') or promoting harmful illicit activities (e.g., a suggestion pointing to pirated versions of a plaintiff's content) (Ghatnekar 2013; Cheung 2015; Karapapa and Borghi 2015).…”
Section: Introduction (mentioning)
confidence: 90%
“…Some researchers have examined how autocomplete can become more precise in its predictions, and better reflect a user's concerns at a point in time (Bar-Yossef and Kraus, 2011). Others have adopted more normative lines of questioning, interrogating the legality of (and liability for) information presented through predictions (Karapapa and Borghi, 2015; Olteanu et al., 2020) or how such predictions, across platforms and timescales, could steer patterns of user inquiry (Robertson et al., 2019). Baker and Potts (2013) demonstrated how autocomplete predictions can reproduce racist, biassed and stereotyped discourses in English-language searches.…”
Section: Autocomplete, Algorithmic Power and Global Linguistic Diversity (mentioning)
confidence: 99%
“…Re-identifying previously de-identified government data has even been proposed as a new criminal offense. In this era, many data privacy issues emerge, often unplanned; for instance, the autocomplete suggestions of search engines have prompted many requests, on the basis of EU data protection law, for the removal of suggestions that include private individuals' information (Karapapa & Borghi, 2015). Sensitive user data can be used to derive actionable insights, and it is usually accepted that individuals cannot be identified, that data will be aggregated, and so on.…”
Section: Data Privacy and Big Personal Data (mentioning)
confidence: 99%