SUMMARY In this paper we examine whether semantic information can be extracted automatically from the type of multichannel images currently broadcast in Japan, and whether such information can be used in practical applications as a forerunner of next-generation broadcasting systems. We propose a method for extracting objects from multiple-viewpoint broadcast television images that is based on analyzing a core wide-view image, and we apply this method to a real multiple-viewpoint television broadcast of a soccer match. Specifically, we estimate the location of each player from the relations between the wide-view image and the other local-view images, using both to extract player positions and perform tracking. We show that each player can be tracked over a long period of time with this method.
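To illustrate the kind of processing the summary describes, the following is a minimal sketch, not the authors' implementation: it assumes a known homography between the wide-view image plane and the field plane, maps hypothetical player detections into field coordinates, and associates them frame to frame with a simple nearest-neighbour tracker. The homography H, the detection coordinates, and the distance threshold are all assumptions for illustration only.

```python
import numpy as np

def to_field_coords(points_px, H):
    """Apply a 3x3 homography H to Nx2 pixel coordinates (wide-view image -> field plane)."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])   # homogeneous coordinates
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                        # de-homogenise

def track_players(tracks, detections_field, max_dist=2.0):
    """Greedy nearest-neighbour association on the field plane (hypothetical threshold in metres)."""
    unused = list(range(len(detections_field)))
    for tid, last_pos in tracks.items():
        if not unused:
            break
        dists = [np.linalg.norm(detections_field[i] - last_pos) for i in unused]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:                                  # accept the closest detection
            tracks[tid] = detections_field[unused.pop(j)]
    return tracks

# Example with made-up numbers: identity homography, two players drifting slightly between frames.
H = np.eye(3)
tracks = {0: np.array([10.0, 5.0]), 1: np.array([30.0, 20.0])}
detections = to_field_coords(np.array([[10.4, 5.2], [29.7, 20.1]]), H)
print(track_players(tracks, detections))
```

In practice the relations between the wide-view image and the local-view images would supply the geometric mapping and complementary evidence for each player; this sketch only shows the projection-and-association step in isolation.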