The Web has evolved into a dominant digital medium for conducting many types of online transactions such as shopping, paying bills, making travel plans, etc. Such transactions typically involve a number of steps spanning several web pages. For sighted users these steps are relatively straightforward to perform with graphical web browsers, but they pose tremendous challenges for visually impaired individuals. This is because screen readers, the dominant assistive technology used by visually impaired users, function by speaking out the screen's content serially. Consequently, using them to conduct transactions can cause considerable information overload.

However, one usually needs to browse only a small fragment of a web page to complete a step of a transaction (e.g., choosing an item from a search results list). Based on this observation, this dissertation develops a model-directed transaction framework to identify, extract and aurally render only the "relevant" page fragments in each step of a transaction. The framework uses a process model to encode the state of the transaction and a concept model to identify the page fragments relevant to the transaction in that state. The two models are constructed from labeled transaction sequences using traditional classification and automata learning methods.
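To make the two-model idea concrete, the following is a minimal illustrative sketch in Python, not the dissertation's implementation: the process model is shown as a small deterministic automaton over transaction states, and the concept model as a per-state filter over page fragments. All state names, step labels, and keyword cues are hypothetical placeholders.

    # Minimal sketch of the two-model idea (hypothetical names and cues).
    from dataclasses import dataclass, field

    @dataclass
    class ProcessModel:
        """Deterministic automaton: (state, observed step label) -> next state."""
        start: str
        transitions: dict = field(default_factory=dict)  # {(state, label): next_state}

        def step(self, state, label):
            return self.transitions.get((state, label), state)

    @dataclass
    class ConceptModel:
        """Per-state keyword cues standing in for a learned fragment classifier."""
        cues: dict  # {state: [keywords]}

        def relevant_fragments(self, state, fragments):
            words = self.cues.get(state, [])
            return [f for f in fragments if any(w in f.lower() for w in words)]

    if __name__ == "__main__":
        # Hypothetical shopping transaction: search -> results -> cart -> checkout.
        pm = ProcessModel(
            start="search",
            transitions={("search", "submit_query"): "results",
                         ("results", "add_to_cart"): "cart",
                         ("cart", "checkout"): "checkout"},
        )
        cm = ConceptModel(cues={"results": ["price", "add to cart"],
                                "cart": ["subtotal", "checkout"]})

        state = pm.step(pm.start, "submit_query")  # now in the "results" state
        page = ["Site navigation links", "Widget X  $9.99  Add to cart", "Footer"]
        # Only the matching result fragment would be read out by the screen reader.
        print(state, cm.relevant_fragments(state, page))

In the framework itself these components are learned from labeled transaction sequences rather than hand-coded as above.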
Next, we relax the requirement of fully labeled training data. Specifically, we present a framework to mine transaction models from partially labeled click stream data generated by transactions, where some or all of the labels may be missing. Not having to rely exclusively on (manually) labeled click stream data has important benefits: Visually impaired users do not have to depend on third parties (e.g., sighted users) for constructing transaction models; this makes it possible to mine personalized models from the transaction click streams of sites that visually impaired users visit regularly. Since partially labeled data is relatively easy to generate, scaling up the construction of domain-specific transaction models (e.g., shopping, airline reservations, bill payments, etc.) becomes feasible. Lastly, the performance of deployed models can be adjusted over time with new training data.

In terms of the techniques used for mining, we expand our repertoire to include web content analysis to partition a web page into segments consisting of semantically related content elements, contextual analysis of the data surrounding clickable elements in a page, and clustering of page segments based on that contextual analysis (a sketch of the clustering idea follows below).

We provide qualitative and quantitative experimental evidence of the practical effectiveness of our models in improving user experience when conducting online transactions with n...
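As an illustration of the clustering step only, the following minimal Python sketch groups clickable elements whose surrounding text is similar. The word-set representation, the Jaccard threshold, and the example contexts are assumptions made for exposition; the dissertation's contextual analysis uses richer page-level features.

    # Minimal sketch: greedy clustering of clickables by surrounding-text overlap.
    def jaccard(a, b):
        """Overlap between two word sets (0.0 = disjoint, 1.0 = identical)."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def cluster_by_context(elements, threshold=0.3):
        """elements: list of (element_id, context_text); returns lists of grouped ids."""
        clusters = []  # each cluster: list of (element_id, word_set)
        for elem_id, context in elements:
            words = set(context.lower().split())
            for cluster in clusters:
                # Compare against the cluster's first member as its representative.
                if jaccard(words, cluster[0][1]) >= threshold:
                    cluster.append((elem_id, words))
                    break
            else:
                clusters.append([(elem_id, words)])
        return [[elem_id for elem_id, _ in c] for c in clusters]

    if __name__ == "__main__":
        # Hypothetical clickables from a search-results page and its navigation bar.
        clickables = [
            ("link1", "Widget X $9.99 add to cart in stock"),
            ("link2", "Widget Y $14.99 add to cart ships free"),
            ("link3", "home about us contact careers"),
        ]
        # Product links group together; the navigation link forms its own cluster.
        print(cluster_by_context(clickables))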