Digital platforms are not just software-based media; they are governing systems that control, interact, and accumulate. They also solidify markets (social networks of exchange that do not necessarily leave data traces) into infrastructures (material arrangements of traceable activity). This article examines the forms of domination found in this digital platform model and corrects some existing, overly simplistic theoretical conclusions about digital platforms. It first provides a schematic overview of digital infrastructures of governance and the systemic mechanics they engender. It then argues that we need a more syncretic, interdisciplinary approach to the platform-based economy. The shifting emphases of different academic disciplines in relation to digital platforms are only partially grounded in their differing normative biases; they can also be attributed to the use of different disciplinary lenses. The field of information systems management and design studies is chiefly concerned with direct, technical interplatform affordances and connections, and with observing certain systemic attributes of digital platforms. Critical political economy, by contrast, mainly considers the emerging transnational, geopolitical formations of platform capitalism. The interplay between these different systemic mechanics is summarized and presented here in the concept of “platform logic.”
Intelligence on mass media audiences was long founded on representative statistical samples, analysed by statisticians in the market departments of media corporations. The techniques for aggregating user data in the age of pervasive and ubiquitous personal media (e.g. laptops, smartphones, credit cards/swipe cards, and radio-frequency identification) instead build on large aggregates of information (Big Data) analysed by algorithms that transform data into commodities. Whereas the former technologies were built on socioeconomic variables such as age, gender, ethnicity, education, and media preferences (i.e. categories recognisable to media users and industry representatives alike), Big Data technologies register consumer choice, geographical position, web movement, and behavioural information in technologically complex ways whose full consequences most lay people find too abstract to appreciate. The data mined for pattern recognition privilege relational rather than demographic qualities. We argue that the interpretive agency underlying market decisions within media companies nevertheless introduces a 'heuristics of the algorithm', whereby the data are inevitably translated into social categories. Although algorithmically generated data are typically deployed in automated systems in which human agency becomes increasingly distanced from the data collected (it is our technological gadgets that are being surveyed, rather than us as social beings), a felt need can be observed among media users and industry actors alike to 'translate back' the algorithmically produced relational statistics into 'traditional' social parameters. The tenacious social structures within the advertising industries thus work against the techno-economically driven tendencies of the Big Data economy.
Patients with symptoms suggestive of acute myocardial infarction (AMI) account for ≈10% of all emergency department (ED) presentations.1 The majority of patients are ultimately found to have diagnoses other than AMI.2 Thus, the expeditious evaluation of such patients is important because delays in ruling out AMI may interfere with the detection of other underlying diseases. The 0/1 hour (0/1h) algorithm and the 0/3 hour (0/3h) algorithm are both recommended by the European Society of Cardiology with a Class I recommendation for the early rule-out of AMI.1 The 0/1h algorithm and the 0/3h algorithm are completely different protocols. Whereas the 0/1h algorithm uses high-sensitivity cardiac troponin (hs-cTn) concentrations at presentation and absolute changes within the first hour, and hence takes optimal advantage of the increased diagnostic accuracy and precision of hs-cTn assays, the 0/3h algorithm uses a fixed-threshold protocol based on the 99th percentile at presentation and at 3 hours in conjunction with clinical criteria (GRACE [Global Registry of Acute Coronary Events] score <140 and the need to be pain free). It is currently unknown whether 1 algorithm is preferable to the other. The aim of this study was to directly compare safety, quantified by the negative predictive value (NPV) and the negative likelihood ratio (LR) for the presence of AMI, and efficacy, quantified by the proportion of patients triaged toward rule-out, in a large diagnostic multicenter study enrolling patients presenting with suspected AMI to the ED (URL: https://www.clinicaltrials.gov. Unique identifier: NCT00470587). The study was carried out according to the principles of the Declaration of Helsinki and approved by the local ethics committees. Written informed consent was obtained from all patients. Patients presenting with ST-segment-elevation MI were excluded. Triage toward rule-out by the 0/1h or the 0/3h algorithm was compared against the final adjudication performed by 2 independent cardiologists using all information, including cardiac imaging and serial hs-cTnT measurements. Analyses were performed with hs-cTnT and hs-cTnI. NPV and efficacy were compared by the McNemar test and the Pearson χ2 test, respectively. The 95% confidence intervals (CIs) were calculated with the Wilson score method without continuity correction. Among 2547 patients eligible for analysis with hs-cTnT, AMI was the final adjudicated diagnosis in 387 patients (15%). The 0/1h algorithm provided safety similar to that of the 0/3h algorithm (NPV, 99.8% [95% CI, 99.4-99.9] and negative LR, 0.01 [95% CI, 0.00-0.03] versus NPV, 99.7% [95% CI, 99.2-99.9] and negative LR, 0.02 [95% CI, 0.00-0.05]) but allowed the rule-out of significantly more patients compared with the 0/3h algorithm (60% versus 44%; P<0.001). Among 2197 patients eligible for analysis with hs-cTnI, AMI was the final diagnosis in 327 patients (15%). The 0/1h algorithm provided higher safety compared with the 0/3h algorithm (NPV, 99.6% [95% CI, 99.1-99.9%] and negative LR, 0.02 [95% CI, 0.01-0.05] versus NP...
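To make the reported safety metrics concrete, below is a minimal sketch (not the study's analysis code) of how the NPV, the negative likelihood ratio, and a 95% Wilson score confidence interval without continuity correction can be computed from a rule-out cross-tabulation. The counts are illustrative assumptions chosen to be roughly in line with the hs-cTnT headline figures, not the study's actual data.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n (no continuity correction)."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

def rule_out_safety(tn, fn, tp, fp):
    """NPV and negative likelihood ratio for a binary rule-out triage."""
    npv = tn / (tn + fn)        # proportion of ruled-out patients who truly have no AMI
    sens = tp / (tp + fn)       # sensitivity of the algorithm for AMI
    spec = tn / (tn + fp)       # specificity of the algorithm
    neg_lr = (1 - sens) / spec  # LR- = (1 - sensitivity) / specificity
    return npv, neg_lr

# Illustrative (assumed) counts: tn = ruled out without AMI, fn = ruled out with AMI,
# tp = not ruled out with AMI, fp = not ruled out without AMI.
tn, fn, tp, fp = 1525, 3, 384, 635
npv, neg_lr = rule_out_safety(tn, fn, tp, fp)
lo, hi = wilson_ci(tn, tn + fn)
print(f"NPV {npv:.1%} (95% CI {lo:.1%}-{hi:.1%}), negative LR {neg_lr:.2f}")
```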
Through an interview-based study of Swedish public service broadcasting (PSB) companies, I explore the ways in which these institutions react to and interact with a set of normative conceptions of a contemporary digital media ecology characterized by social networking and personalization of the media experience. The respondents were engaged in negotiations over how to realistically maintain public values in a commercially configured online milieu. The nature of organizational adaptation within PSB is found to be complex. Several elements of the Nordic PSB model appear to counteract acquiescence to algorithmically aided personalization: its majoritarian heritage; its institutional caution toward data positivism, which favors more interpretive, editorial forms of audience knowledge; and the high costs and structural consequences of making individual users uniquely identifiable. These organizational ambitions and obstacles are embodied in recent innovations that mimic personalized delivery, yet do so without algorithmically aided prediction, instead relying on manual editorial selection.
Advances in digital fabrication technologies and additive manufacturing allow for the fabrication of complex truss structure designs, but at the same time pose challenging structural optimization problems for capitalizing on this new design freedom. In response, this work proposes an iterative approach in which Sequential Linear Programming (SLP) is used to simultaneously solve a size and shape optimization sub-problem subject to local stress and Euler buckling constraints. To accomplish this, a first-order Taylor expansion of the nodal movement and the buckling constraint is derived to conform to the SLP problem formulation. At each iteration, a post-processing step maps the design vector onto the exact buckling constraint boundary in order to improve overall efficiency. The method is verified against an exact nonlinear optimization problem formulation on a range of benchmark examples from the literature. The results show that the proposed method produces optimized designs that are either close or identical to the solutions obtained by the nonlinear formulation while significantly reducing computational time. This enables more efficient size and shape optimization of truss structures under practical engineering constraints.
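As an illustration of the kind of linearization involved, the following is a sketch of how an Euler buckling constraint can be written and expanded to first order; it assumes solid circular members (so that the second moment of area scales with the square of the cross-sectional area) and uses notation not taken from the article. For a member i with area A_i, length L_i(x) depending on the nodal coordinates x, Young's modulus E, a cross-section constant s, and compressive stress taken as negative, the buckling constraint and its first-order Taylor expansion about the current design (A^0, x^0) read

\[
g_i(\mathbf{A},\mathbf{x}) \;=\; -\,\sigma_i(\mathbf{A},\mathbf{x}) \;-\; \frac{s\,E\,A_i}{L_i(\mathbf{x})^{2}} \;\le\; 0,
\]
\[
g_i(\mathbf{A},\mathbf{x}) \;\approx\; g_i(\mathbf{A}^{0},\mathbf{x}^{0})
\;+\; \sum_j \left.\frac{\partial g_i}{\partial A_j}\right|_{(\mathbf{A}^{0},\mathbf{x}^{0})}\big(A_j - A_j^{0}\big)
\;+\; \sum_k \left.\frac{\partial g_i}{\partial x_k}\right|_{(\mathbf{A}^{0},\mathbf{x}^{0})}\big(x_k - x_k^{0}\big) \;\le\; 0.
\]

Constraints of this linear form, together with linearized stress constraints and move limits on the changes in member areas and nodal coordinates, make up the linear programming sub-problem solved at each SLP iteration.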