Web pages consist of different visual segments that serve different purposes. Typical structural segments are the header, the left or right columns, and the main content. Segments can also be nested, meaning that one segment may contain other segments. Understanding these segments is important for properly displaying web pages on small-screen devices and for presenting them in alternative forms, such as audio for screen reader users. Different techniques exist for identifying the visual segments in a web page. One successful approach is the Vision-based Page Segmentation (VIPS) algorithm, which uses both the underlying source code and the visual rendering of a web page. However, this approach has some limitations, and this paper explains how we have extended and improved VIPS and implemented it in Java. We have also conducted online user evaluations to investigate how people perceive the success of the segmentation approach and at which granularity they prefer to see a web page segmented. This paper presents the preliminary results, which show that people perceive segmentation with higher granularity as better segmentation regardless of web page complexity.
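The nested segment structure described above can be sketched as a simple tree. This is only an illustration of the idea, not the paper's actual implementation; the names `Segment`, `addChild`, and `depth` are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of a nested visual segment tree: each segment has a
// role (e.g. header, column, main content) and may contain child segments.
public class Segment {
    private final String role;
    private final List<Segment> children = new ArrayList<>();

    public Segment(String role) {
        this.role = role;
    }

    public Segment addChild(Segment child) {
        children.add(child);
        return this; // allow chained construction
    }

    // The depth of the tree gives one simple measure of how fine-grained
    // the segmentation is: deeper trees correspond to higher granularity.
    public int depth() {
        int max = 0;
        for (Segment c : children) {
            max = Math.max(max, c.depth());
        }
        return max + 1;
    }

    public String getRole() {
        return role;
    }

    public static void main(String[] args) {
        Segment page = new Segment("page")
            .addChild(new Segment("header"))
            .addChild(new Segment("main").addChild(new Segment("article")))
            .addChild(new Segment("right-column"));
        System.out.println(page.depth()); // page -> main -> article gives depth 3
    }
}
```

A segmentation algorithm such as VIPS would populate a tree like this from the rendered page; a coarser segmentation simply stops subdividing earlier, yielding a shallower tree.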
The equality of access – accessibility – is difficult to quantify, define, or agree upon. Our previous work analysed the responses of web accessibility specialists to a number of pre-defined definitions of accessibility. While uncovering much, this analysis did not allow us to quantify the community's understanding of the relationship accessibility has with other domains, or to assess how the community scopes accessibility. In this work, we asked over 300 people with an interest in accessibility to answer 33 questions on the relationships between accessibility, user experience (UX), and usability; inclusion and exclusion; and evaluation, in an attempt to harmonise our understanding of web accessibility. We found that respondents think that accessibility and usability are highly related, and that accessibility is applicable to everyone, not just people with disabilities. Respondents strongly agree that accessibility must be grounded in user-centred practices and that accessibility evaluation is more than just inspecting source code; however, they are divided as to whether training in the Web Content Accessibility Guidelines is necessary to assess accessibility. These perceptions are important for usability and UX professionals, developers of automated evaluation tools, and practitioners running website evaluations.
Anecdotal evidence suggests that people with autism may use different processing strategies when accessing the web. However, limited empirical evidence is available to support this. This paper presents an eye tracking study with 18 participants with high-functioning autism and 18 neurotypical participants that investigates the similarities and differences between these two groups in how they search for information within web pages. According to our analysis, people with autism are likely to be less successful in completing their search tasks. They also tend to look at more elements on web pages and make more transitions between elements than neurotypical people. In addition, they tend to make shorter but more frequent fixations on elements that are not directly related to a given search task. This paper therefore presents the first empirical study to investigate how people with autism differ from neurotypical people when searching for information within web pages, based on an in-depth statistical analysis of their gaze patterns.
Web Content Accessibility Guidelines 2.0 (WCAG 2.0) require that success criteria be tested by human inspection. Further, testability of a WCAG 2.0 criterion is achieved if 80% of knowledgeable inspectors agree on whether the criterion has been met. In this paper we investigate the very core of WCAG 2.0: its ability to determine web content accessibility conformance. We conducted an empirical study to ascertain the testability of the WCAG 2.0 success criteria when experts and non-experts evaluated four relatively complex web pages, and the differences between the two groups. Further, we discuss the validity of the evaluations generated by these inspectors and examine the differences in validity due to expertise. In summary, our study, comprising 22 experts and 27 non-experts, shows that approximately 50% of success criteria fail to meet the 80% agreement threshold; experts produce 20% false positives and miss 32% of the true problems. We also compared the performance of experts against that of non-experts and found that agreement for the non-experts dropped by 6%, false positives reached 42%, and false negatives reached 49%. This suggests that in many cases WCAG 2.0 conformance cannot be tested by human inspection to a level where it is believed that at least 80% of knowledgeable human evaluators would agree on the conclusion. Why experts fail to meet the 80% threshold, and what can be done to help achieve this level, are the subjects of further investigation.