Weak consistency is a memory model frequently considered for shared memory systems. Its distinguishing feature is the classification of memory operations into two types: data operations and synchronization operations. For highly parallel shared memory systems, this model offers greater performance potential than strong models such as sequential consistency, since it permits unconstrained optimization of update propagation until a synchronization operation is invoked. It captures the intuition that delaying the propagation of updates produced by data operations until a synchronization operation is triggered typically does not affect program correctness. To formalize the connection between concrete executions and the corresponding specification, we propose in this work a new approach to defining weak consistency. This formalization, stated in terms of distributed histories abstracted from concrete executions, offers a new perspective on the concept and facilitates automatic analysis of system behaviors. We then investigate the problem of verifying whether implementations correctly realize weak consistency. Specifically, we consider two problems: (1) the testing problem, which checks whether a single execution is weakly consistent, a question central to designing efficient testing and bug-hunting algorithms, and (2) the model checking problem, which asks whether all executions of an implementation are weakly consistent. We show that the testing problem is NP-complete, even for a finite number of processes and short programs, and we prove the model checking problem to be undecidable.
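To make the data/synchronization split concrete, the following minimal sketch (an illustrative example, not taken from the paper) uses C11 atomics as a stand-in: the plain write to `data` is a data operation whose propagation may be delayed, while the release store and acquire load on `ready` play the role of synchronization operations that force pending updates to become visible.

```c
/* Minimal sketch of the data/synchronization distinction assumed by weak
 * consistency; hypothetical example using C11 atomics and pthreads. */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

int data = 0;          /* data operation target: updates may propagate lazily */
atomic_int ready = 0;  /* synchronization variable */

void *producer(void *arg) {
    data = 42;                         /* data write: propagation may be delayed */
    atomic_store_explicit(&ready, 1,
        memory_order_release);         /* sync op: prior updates must propagate */
    return NULL;
}

void *consumer(void *arg) {
    while (atomic_load_explicit(&ready,
           memory_order_acquire) == 0) /* sync op: waits for the release */
        ;
    printf("%d\n", data);              /* observes 42: the sync op ordered it */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

Between the data write and the release store, an implementation is free to buffer, reorder, or coalesce updates; correctness is only constrained at the synchronization points, which is the source of the performance headroom mentioned above.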