[Figure 1 image: planar views of CIELAB space (a* and b* axes) at L* = 50, with panels titled Balls (Δr = 1, Δr = 20), Sectors (combinations of Δr = 1 or 20 with Δh = 5° or 40°), and Categories]

Figure 1. We constructed models that estimate human color-concept associations using color distributions extracted from images of relevant concepts. We compared methods for extracting color distributions by defining different kinds of color tolerance regions (white outlines) around each target color (regularly spaced large dots) in CIELAB space. Subplots show a planar view of CIELAB space at L* = 50, with color tolerance regions defined as balls (left column; radius Δr), cylindrical sectors (middle columns; radius Δr and hue angle Δh), and category boundaries around each target color (right column; Red, Orange, Yellow, Green, Blue, Purple, Pink, Brown, Gray; white and black not shown). Each target color is counted as "present" in the image each time any color in its tolerance region is observed. This has a smoothing effect, which enables the inclusion of colors that are not present in the image but similar to colors that are. A model that includes two sector features and a category feature best approximated human color-concept associations for unseen concepts and images (see text for details).

Abstract—To interpret the meanings of colors in visualizations of categorical information, people must determine how distinct colors correspond to different concepts. This process is easier when assignments between colors and concepts in visualizations match people's expectations, making color palettes semantically interpretable. Efforts have been underway to optimize color palette design for semantic interpretability, but this requires having good estimates of human color-concept associations. Obtaining these data from humans is costly, which motivates the need for automated methods.
We developed and evaluated a new method for automatically estimating color-concept associations in a way that strongly correlates with human ratings. Building on prior studies using Google Images, our approach operates directly on Google Image search results without the need for humans in the loop. Specifically, we evaluated several methods for extracting color distributions from the raw pixel content of the images in order to best estimate color-concept associations obtained from human ratings. The most effective method extracted colors using a combination of cylindrical sectors and color categories in color space. We demonstrate that our approach can accurately estimate average human color-concept associations for different fruits using only a small set of images. The approach also generalizes moderately well to more complicated recycling-related concepts of objects that can appear in any color.
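The ball and sector tolerance regions described in Figure 1 can be sketched as simple membership tests. The sketch below is illustrative only, assuming CIELAB pixel values are given as (L*, a*, b*) tuples; the function names, and the restriction of the sector test to chroma and hue in the a*-b* plane, are our own simplifications, not the paper's implementation:

```python
import math

# Hypothetical membership tests for the tolerance regions in Figure 1.
# The sector test ignores the lightness (L*) dimension for simplicity.

def in_ball(pixel, target, dr):
    """Ball region: pixel lies within Euclidean distance dr of target in CIELAB."""
    return math.dist(pixel, target) <= dr

def in_sector(pixel, target, dr, dh_deg):
    """Cylindrical sector: pixel's chroma is within dr and its hue angle
    within dh_deg of the target, measured in the a*-b* plane."""
    _, a_p, b_p = pixel
    _, a_t, b_t = target
    chroma_p = math.hypot(a_p, b_p)
    chroma_t = math.hypot(a_t, b_t)
    hue_p = math.degrees(math.atan2(b_p, a_p))
    hue_t = math.degrees(math.atan2(b_t, a_t))
    dhue = abs((hue_p - hue_t + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    return abs(chroma_p - chroma_t) <= dr and dhue <= dh_deg
```

Counting, for each target color, how many image pixels pass such a test would yield the smoothed color distribution the caption describes.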