Artificial intelligence (AI) has achieved many breakthroughs in perfect-information games. Bridge, a multiplayer imperfect-information game, nevertheless remains quite challenging. A bridge deal consists of two phases: bidding and playing. Bidding accounts for about 75% of the game and playing for about 25%; since expert-level teams are generally indistinguishable in playing skill, bidding is the more decisive factor in winning or losing. During the bidding phase, the two partnerships may communicate using different bidding systems. However, existing bridge bidding models support at most one bidding system, which does not conform to real game rules. This paper proposes a deep reinforcement learning model that supports multiple bidding systems: it can compete with players who use different bidding systems and still exchange hand information normally. The model comprises two deep neural networks: a bid selection network and a state evaluation network. The bid selection network predicts the probabilities of all candidate bids, and the state evaluation network directly evaluates the optional bids, so that decisions can be made from the evaluation results. Experiments show that the bidding model is not limited to a single bidding system and achieves superior bidding performance.
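The two-network decision scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature encoding (52-card hand vector plus an auction summary), the hidden-layer sizes, the candidate-set size of 3, and the 38-call action space (35 contract bids plus pass, double, redouble) are all assumptions made here for concreteness.

```python
import numpy as np

N_ACTIONS = 38  # 35 contract bids + pass, double, redouble (standard bridge call set)

rng = np.random.default_rng(0)

def mlp_init(sizes, rng):
    """Random weights for a small MLP (untrained, for illustration only)."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Forward pass with ReLU on hidden layers, linear output."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Bid selection network: hand/auction features -> distribution over all calls.
# (Input size 52 + N_ACTIONS is a hypothetical encoding, not the paper's.)
policy = mlp_init([52 + N_ACTIONS, 64, N_ACTIONS], rng)
# State evaluation network: features + one-hot candidate bid -> scalar score.
value = mlp_init([52 + N_ACTIONS + N_ACTIONS, 64, 1], rng)

def choose_bid(features, legal_mask, k=3):
    """Shortlist the k most probable legal calls under the bid selection
    network, then pick the one the state evaluation network scores highest."""
    probs = softmax(mlp_forward(policy, features))
    legal = np.flatnonzero(legal_mask)
    shortlist = legal[np.argsort(probs[legal])[::-1][:k]]
    scores = []
    for a in shortlist:
        onehot = np.zeros(N_ACTIONS)
        onehot[a] = 1.0
        scores.append(mlp_forward(value, np.concatenate([features, onehot]))[0])
    return shortlist[int(np.argmax(scores))], probs

features = rng.normal(size=52 + N_ACTIONS)   # placeholder encoded state
legal_mask = np.ones(N_ACTIONS, dtype=bool)  # all calls legal in this toy state
bid, probs = choose_bid(features, legal_mask)
```

With trained weights, `probs` would encode the system-specific meaning of each call, while the evaluation network lets the agent rank only the calls that are actually legal in the current auction.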