A certain State-recognized civilian hospital employed two kinds of doctors who appeared on its staff lists: full-time doctors, and part-time doctors engaged to reinforce its medical team during an armed conflict with a neighbouring country. Some weeks later, the hospital's country was occupied by the enemy. Two part-time doctors on their way home in a private car marked with a red cross for protective purposes were stopped by a police patrol, which seized the car and confiscated the doctors' identifying armlets. This was done on the grounds that improper use was being made of the Red Cross emblem as a protective device, contrary to Articles 24, 25, 26 and 44 of the First Geneva Convention of 12 August 1949 for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, and Articles 20 and 21 of the Fourth Geneva Convention of 12 August 1949 Relative to the Protection of Civilian Persons in Time of War.
Incremental learning enables artificial agents to learn from sequential data. While important progress has been made with deep neural networks, incremental learning remains very challenging. This is particularly the case when no memory of past data is allowed and catastrophic forgetting has a strong negative effect. We tackle class-incremental learning without memory by adapting prediction bias correction, a method which makes predictions of past and new classes more comparable. It was originally proposed for settings in which a memory is allowed, and it cannot be used directly without one, since samples of past classes are required. We introduce a two-step learning process which allows the transfer of bias correction parameters between reference and target datasets. Bias correction is first optimized offline on reference datasets which have an associated validation memory. The obtained correction parameters are then transferred to target datasets, for which no memory is available. Our second contribution is a finer modeling of bias correction: its parameters are learned per incremental state instead of following the usual past-vs.-new-class modeling. The proposed dataset knowledge transfer is applicable to any incremental method which works without memory. We test its effectiveness by applying it to four existing methods. Evaluation with four target datasets and different configurations shows consistent improvement, with practically no computational or memory overhead.
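To make the per-state correction concrete, here is a minimal sketch assuming the usual BiC-style affine form, in which each logit is rescaled and shifted by parameters tied to the incremental state in which its class was introduced. The names (rectify_logits, class_to_state, alphas, betas) are illustrative, not taken from the paper; in the transfer setting described above, the alphas and betas would come from reference datasets rather than being fitted on the memoryless target.

```python
import torch

def rectify_logits(logits: torch.Tensor,
                   class_to_state: torch.Tensor,
                   alphas: torch.Tensor,
                   betas: torch.Tensor) -> torch.Tensor:
    """Per-state affine bias correction of raw classification logits.

    logits:         (batch, num_classes) raw scores from the incremental model.
    class_to_state: (num_classes,) long tensor giving, for each class, the
                    incremental state in which that class was first learned.
    alphas, betas:  (num_states,) correction parameters, one pair per state.
    """
    a = alphas[class_to_state]   # (num_classes,) per-class slope, gathered by state
    b = betas[class_to_state]    # (num_classes,) per-class offset, gathered by state
    return logits * a + b        # broadcasts over the batch dimension

# Hypothetical usage: 3 states of 2 classes each; identity correction (1.0, 0.0)
# is kept for the newest state, whose logits need no rectification.
raw_logits = torch.randn(4, 6)
class_to_state = torch.tensor([0, 0, 1, 1, 2, 2])
alphas = torch.tensor([0.8, 0.9, 1.0])
betas = torch.tensor([0.4, 0.2, 0.0])
corrected = rectify_logits(raw_logits, class_to_state, alphas, betas)
```

Compared with the classical past-vs.-new split, which uses a single (alpha, beta) pair for all past classes, the per-state parameterization above lets older states receive a stronger correction than recent ones.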