For meaningful information exchange or integration, providers and consumers need compatible semantics between source and target systems. It is widely recognized that achieving this semantic integration is very costly. Nearly all the published research concerns how system integrators can discover and exploit semantic knowledge in order to better share data among the systems they already have. This research is very important, but to make the greatest impact we must go beyond after-the-fact semantic integration among existing systems and actively guide semantic choices in new ontologies and systems, e.g., what concepts should be used as descriptive vocabularies for existing data, or as definitions for newly built systems. The goal is to ease data sharing for both new and old systems, to ensure that needed data is actually collected, and to maximize over time the business value of an enterprise's information systems.
This paper examines the implications of network-centric warfare for information system development: How should we build C2 information systems for net-centric operations? We begin with six highly probable predictions for the NCW future. From these we derive a number of present implications for system development: things we should do now, and problems we will have to solve along the way. Our answers touch on the information technology to be employed within the systems, the architectural principles that will guide and structure their development, and the acquisition process used to build and deploy the systems.
Information sharing is a key tenet of network-centric warfare (NCW). Information sharing succeeds when the right information is provided to the right people at the right time and place so that they can make the right decisions. This will not occur without an information management policy and process that is fitted to the needs of NCW: one that is flexible, seamless, and complete. In this paper we describe the essential architecture of a net-centric information management process, one based on the information and data management strategy of the US Air Force.
There are almost always differences between the behavior intended by a programmer and the behavior actually implemented by his code. These differences are logical errors, and the process of eliminating them is called the (logical) debugging process. The usual way of locating these errors is to use a "break-and-inspect" style debugging tool: the programmer uses the debugger to search for a small part of the program's execution that does not proceed as expected. Existing debuggers enable the programmer to make this search, but do not assist in it. This paper presents techniques for assisting the programmer in the error diagnosis process. A debugging tool incorporating these techniques will assist the programmer in directing the course of the diagnosis, in determining which variables need to be examined at any breakpoint, in deciding whether the variables examined have the correct values, and in detecting the use of pointers to storage locations which have previously been released.

INTRODUCTION

A computer program is a specification for some computation, expressed in a language that can be automatically translated into primitive instructions for some machine. Because the people who write programs are fallible, there are usually differences between the computational behavior actually represented by a program and the behavior intended by the programmer. These differences are called logical errors, and the process of eliminating them is the (logical) debugging process. This process can be divided into three steps. The first is error detection, in which the programmer discovers that a program does not work correctly for a particular input. The second is error diagnosis, in which the programmer isolates the section of the program code which is responsible for the incorrect behavior. The final step is error correction, in which the programmer modifies the faulty section of the program to eliminate the error. We will be concerned only with the error diagnosis step: our job will begin with a program known to be incorrect, and will end when the faulty statements are identified.

The traditional approach to error diagnosis is to study the execution behavior of the incorrect program. The programmer uses a debugging tool to examine the internal operation of the program, permitting him to compare what the program actually does with what he believes it should do. An alternative approach, explored in [3,14], ignores the runtime behavior and instead compares the program code to the programmer's intentions.

It is widely accepted that an execution-analysis debugger for high-level languages should be interactive and source-level [19]. Tools of this sort automatically translate between the source and machine levels, relieving the programmer of any need to understand the machine architecture or the relation between source and object code. The numerous source-level debuggers currently in use [1,2,5-7,10,15] permit the user to specify breakpoints at any line in the source code. When the user receives control at a breakpo...
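The last of these capabilities is easy to illustrate. The sketch below is hypothetical (it is not taken from the paper, and the names and values are invented); it shows in C the kind of fault meant by "use of a pointer to previously released storage": a record is freed and then read through the stale pointer. A conventional break-and-inspect debugger will display the variables at a breakpoint on the faulty line, but it is left to the programmer to notice that the storage was released earlier; a tool incorporating the proposed techniques would flag the dangling reference itself.

    /* Hypothetical example, not from the paper: use of storage that has
       already been released.  The program may still print plausible output,
       so the fault is easy to miss with ordinary break-and-inspect tools. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct record {
        char name[32];
        int  value;
    };

    int main(void)
    {
        struct record *node = malloc(sizeof *node);
        if (node == NULL)
            return 1;
        strcpy(node->name, "sample");
        node->value = 42;

        free(node);                    /* storage released here */

        /* Logical error: the pointer is dereferenced after free(). */
        printf("%s = %d\n", node->name, node->value);
        return 0;
    }

At a breakpoint on the printf, node still holds its old address and may even display the old field values, which is precisely why such errors are hard to diagnose by inspection alone.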