The evolution of Web and service technologies has led to a wide landscape of standards and protocols for interaction between loosely coupled software components. Examples range from Web applications, mashups, apps, and mobile devices to enterprise-grade services. Cloud computing is the industrialization of service provision and delivery, where Web and enterprise services are converging on a technological level. The article discusses this technological landscape and, in particular, current trends with respect to cloud computing. The survey focuses on the communication aspect of interaction by reviewing languages, protocols, and architectures that drive today's standards and software implementations applicable in clouds. Technological advances will affect both the client side and the service side. There is a trend toward multiplexing, multihoming, and encryption in upcoming transport mechanisms, especially for architectures where a client simultaneously sends a large number of requests to some service. Furthermore, there are emerging client-to-client communication capabilities in Web clients that could establish a foundation for upcoming Web-based messaging architectures. Comment: Accepted Version 2015-02-20, 41 pages, 19 figures, 3 tables, Service Oriented Computing and Applications (2015).
Abstract. In recent years, much research has focused on entropy as a metric describing the "chaos" inherent to network traffic. In particular, network entropy time series have turned out to be a scalable technique for detecting unexpected behavior in network traffic. In this paper, we propose an algorithm capable of detecting abrupt changes in network entropy time series. Abrupt changes indicate that the underlying frequency distribution of network traffic has changed significantly. Empirical evidence suggests that abrupt changes are often caused by malicious activity such as (D)DoS attacks, network scans, and worm activity, just to name a few. Our experiments indicate that the proposed algorithm is able to reliably identify significant changes in network entropy time series. We believe that our approach helps operators of large-scale computer networks identify anomalies that are not visible in flow statistics.
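The core quantities in this abstract can be illustrated with a short sketch: the Shannon entropy of the empirical frequency distribution of a traffic feature (here, hypothetically, destination ports per time window) and a naive abrupt-change test that flags large jumps between consecutive entropy values. This is a minimal illustration of the general idea, not the detection algorithm proposed in the paper; the threshold and window contents are made up for the example.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of the empirical frequency distribution."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def abrupt_changes(series, threshold):
    """Indices where the entropy jumps by more than `threshold`
    relative to the previous window (a deliberately naive detector)."""
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > threshold]

# Hypothetical example: destination ports observed in three time windows;
# the last window looks scan-like (many distinct ports, high entropy).
windows = [[80, 80, 443, 53],
           [80, 443, 53, 22],
           [1, 2, 3, 4]]
series = [entropy(w) for w in windows]
flagged = abrupt_changes(series, threshold=0.4)
```

A real detector would have to account for natural diurnal variation in entropy, which is why the paper argues for change-point detection rather than a fixed threshold on the raw values.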
Abstract. False positives are a problem in anomaly-based intrusion detection systems. To counter this issue, we discuss anomaly detection for the eXtensible Markup Language (XML) from a language-theoretic view. We argue that many XML-based attacks target the syntactic level, i.e., the tree structure or element content, and that syntax validation of XML documents reduces the attack surface. XML offers so-called schemas for validation, but in the real world, schemas are often unavailable, ignored, or too general. In this work-in-progress paper we describe a grammatical inference approach to learn an automaton from example XML documents for detecting documents with anomalous syntax. We discuss properties and expressiveness of XML to understand the limits of learnability. Our contributions are an XML Schema compatible lexical datatype system to abstract content in XML and an algorithm to learn visibly pushdown automata (VPA) directly from a set of examples. The proposed algorithm does not require the tree representation of XML, so it can process large documents or streams. The resulting deterministic VPA then allows stream validation of documents to recognize deviations in the underlying tree structure or datatypes.
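The "visibly pushdown" intuition behind stream validation can be sketched with a plain stack: an opening tag pushes, a closing tag pops and must match, and a document is accepted only if the stack is empty at the end. This toy checker verifies only well-nestedness of a tag-event stream, not membership in a learned VPA, and the event format is an assumption made for illustration.

```python
def stream_validate(events):
    """Check well-nestedness of a stream of (kind, tag) events,
    where kind is 'open' or 'close'. The stack mirrors how a visibly
    pushdown automaton pushes on call symbols (opening tags) and
    pops on return symbols (closing tags)."""
    stack = []
    for kind, tag in events:
        if kind == 'open':
            stack.append(tag)
        elif kind == 'close':
            # Reject a close with no matching open, or a tag mismatch.
            if not stack or stack.pop() != tag:
                return False
    # Accept only if every opened element was closed.
    return not stack

# Well-nested: <a><b></b></a>
ok = stream_validate([('open', 'a'), ('open', 'b'),
                      ('close', 'b'), ('close', 'a')])
# Malformed: <a></b>
bad = stream_validate([('open', 'a'), ('close', 'b')])
```

Because the stack depth is bounded by the nesting depth rather than the document size, this style of validation works on streams, which is the property the abstract highlights for the learned deterministic VPA.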
Cryptanalysis of enciphered documents typically starts with identifying the cipher type. A large number of encrypted historical documents exist, whose decryption can potentially increase our knowledge of historical events. This paper investigates whether machine learning can support the cipher type classification task when only ciphertexts are given. A selection of engineered features for historical ciphertexts and various machine-learning classifiers were applied to 56 different cipher types specified by the American Cryptogram Association. Different neural network models were empirically evaluated. Our best-performing model achieved an accuracy of 80.24%, which improves the current state of the art by 37%. Accuracy is calculated by dividing the number of true positive predictions by the total number of samples. The software suite is published under the name "Neural Cipher Identifier (NCID)".
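The accuracy metric described above is simply the fraction of samples whose predicted cipher type matches the true label. A minimal sketch, with made-up labels rather than the paper's 56 ACA cipher types:

```python
def accuracy(predictions, labels):
    """Number of correct predictions divided by the total number of samples."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical example with three samples, two classified correctly.
acc = accuracy(['Vigenere', 'Playfair', 'Vigenere'],
               ['Vigenere', 'Playfair', 'Caesar'])
# acc is 2/3
```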