Regulation of social exchanges refers to controlling social exchanges between agents so that the balance of exchange values involved in the exchanges is continuously kept, as far as possible, near equilibrium. Previous work modeled the social exchange regulation problem as a POMDP (Partially Observable Markov Decision Process) and defined the policyToBDIplans algorithm to extract BDI (Beliefs, Desires, Intentions) plans from POMDP models, so that the derived BDI plans can be applied to keep in equilibrium the social exchanges performed by BDI agents. The aim of the present paper is to extend that BDI-POMDP agent model for self-regulation of social exchanges with a module, based on HMMs (Hidden Markov Models), for recognizing and learning partner agents' social exchange strategies, thus extending its applicability to open societies, where new partner agents can freely appear at any time. For the recognition problem, patterns in the refusals of exchange proposals produced by the partner agents are analyzed. For the learning problem, HMMs are used to capture the probabilistic state transition and observation functions that model the social exchange strategy of a partner agent, in order to translate them into the POMDP's action-based state transition and observation functions. The paper formally addresses the problem of translating HMMs into POMDP models and vice versa, introducing the translation algorithms and some examples. A discussion of the results of simulations of strategy-based social exchanges is presented, together with an analysis of related work on social exchanges in multiagent systems.

Keywords Social exchange strategy · Recognition and learning of social exchange strategies · Self-regulation of social exchange strategies · Partially observable Markov decision process · Hidden Markov model
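The HMM-to-POMDP translation mentioned above can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's actual translation algorithms: it lifts an HMM (transition matrix `A`, observation matrix `B`) into action-indexed POMDP transition and observation functions by replicating the HMM dynamics for each action, and collapses a POMDP back into an HMM by fixing one action per state via a state-to-action policy. The function names `hmm_to_pomdp` and `pomdp_to_hmm` are hypothetical.

```python
def hmm_to_pomdp(A, B, actions):
    """Lift HMM dynamics into action-indexed POMDP functions.

    A[s][s'] is the HMM state transition probability, B[s][o] the
    observation probability. The POMDP functions T[a][s][s'] and
    O[a][s][o] here simply replicate the HMM dynamics for each action
    (a deliberately simple illustrative choice).
    """
    T = {a: [row[:] for row in A] for a in actions}
    O = {a: [row[:] for row in B] for a in actions}
    return T, O


def pomdp_to_hmm(T, O, policy):
    """Collapse a POMDP back into an HMM under a fixed policy.

    policy maps each state index to the action taken there, so each
    HMM row is the POMDP row for the action chosen in that state.
    """
    n = len(next(iter(T.values())))
    A = [T[policy[s]][s][:] for s in range(n)]
    B = [O[policy[s]][s][:] for s in range(n)]
    return A, B
```

For example, a two-state partner strategy (states "accept-prone" and "refuse-prone") with observations "accept"/"refuse" round-trips exactly under this construction, since every action shares the same dynamics.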