Blood sampling with different anticoagulants alters matrix metalloproteinase (MMP-) 9 expression, thereby influencing its measured concentration and diagnostic validity. Here, we aimed to evaluate the effects of different anticoagulants on MMP-9 regulation. MMP-9 expression was assessed in response to ethylenediaminetetraacetic acid, citrate, and high-/low-molecular-weight heparin (HMWH, LMWH) in co-culture experiments using THP-1, Jurkat, and HT cells (representing monocytes, T cells, and B cells). Triple and double cell line co-culture experiments revealed that HMWH treatment of THP-1 and Jurkat cells led to significant MMP-9 induction, whereas the other anticoagulants and cell type combinations had no effect. Supernatant of HMWH-treated Jurkat cells also induced MMP-9 in THP-1 cells, suggesting monocytes as the MMP-9 producers. HMWH-induced cytokine/chemokine secretion was assessed in co-culture supernatant, and the influence of these cytokines/chemokines on MMP-9 production was analyzed. These experiments revealed that Jurkat-derived IL-16 and soluble intercellular adhesion molecule (sICAM-) 1 are able to induce MMP-9 and IL-8 production by THP-1 cells. Consequently, the increased MMP-9 expression found in HMWH blood samples may be driven by HMWH-dependent secretion of IL-16 and sICAM-1 by T cells, resulting in increased production of MMP-9 and IL-8 by monocytes. IL-8, in turn, may support MMP-9 expression and its own expression in a positive autocrine feedback loop.
Adaptive networks can be easily trained to associate arbitrary input and output patterns. When subgroups of patterns (lists) are presented sequentially, however, a network tends to "unlearn" previously acquired associations while learning new associations. A second form of sequential learning problem is reported in this paper: learning of each successive list of pattern pairs becomes progressively more difficult. Evidence for this cumulative negative transfer was obtained from simulations using backpropagation of errors to train multilayer networks. The cause of the problem appears to be the development of extreme weights during learning of new lists. Unbounded weights may therefore be a liability for the backpropagation algorithm.

Neural network (NN) classifiers have been applied to numerous practical problems of interest. A very common type of NN classifier is the multi-layer perceptron, trained with back propagation. Although this learning procedure has been used successfully in many applications, it has several drawbacks, including susceptibility to local minima and excessive convergence times. This paper presents two alternatives to back propagation for synthesizing NN classifiers. Both procedures generate appropriate network structures and weights in a fast and efficient manner without any gradient descent. The resulting decision rules are optimal under certain conditions; the weights obtained via these procedures can be used "as is" or as a starting point for back propagation.
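To illustrate the "unlearning" effect described in the first of these abstracts, the sketch below trains a small multilayer network with plain backpropagation on one list of random input/output pattern pairs and then on a second list, and reports accuracy on the first list before and after. This is a minimal illustration, not the authors' original simulations; the network size, learning rate, pattern lists, and all names (e.g. the MLP class) are arbitrary choices for demonstration.

```python
# Minimal sketch of sequential-list interference with plain backpropagation.
# Assumption: architecture, hyperparameters, and random pattern lists are
# illustrative only and do not reproduce the paper's experiments.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """One-hidden-layer network trained with vanilla backpropagation (squared error)."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0.0, 0.3, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.3, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        self.y = sigmoid(self.h @ self.W2)
        return self.y

    def train_pair(self, x, t):
        y = self.forward(x)
        # Output and hidden deltas for squared-error loss with sigmoid units.
        d_out = (y - t) * y * (1.0 - y)
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d_out)
        self.W1 -= self.lr * np.outer(x, d_hid)

def accuracy(net, X, T):
    preds = np.array([net.forward(x) for x in X])
    return np.mean((preds > 0.5) == (T > 0.5))

# Two "lists" of random binary input/output pattern pairs.
n_in, n_out, n_pairs = 10, 5, 8
list_A = (rng.integers(0, 2, (n_pairs, n_in)).astype(float),
          rng.integers(0, 2, (n_pairs, n_out)).astype(float))
list_B = (rng.integers(0, 2, (n_pairs, n_in)).astype(float),
          rng.integers(0, 2, (n_pairs, n_out)).astype(float))

net = MLP(n_in, 20, n_out)

# Phase 1: learn list A only.
for _ in range(2000):
    for x, t in zip(*list_A):
        net.train_pair(x, t)
print("after list A:  acc(A) =", accuracy(net, *list_A))

# Phase 2: learn list B only; accuracy on list A typically collapses.
for _ in range(2000):
    for x, t in zip(*list_B):
        net.train_pair(x, t)
print("after list B:  acc(A) =", accuracy(net, *list_A),
      " acc(B) =", accuracy(net, *list_B))
```

Running the sketch typically shows list A learned to high accuracy in phase 1 and largely overwritten after phase 2, which is the interference that sequential training with unconstrained weight updates tends to produce.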