Digital Life

LiquidText Software Supports Active Reading with Fingertip Gestures

Atlanta (June 28, 2011) — Many reading tasks require individuals not only to read a document, but also to understand, learn from and retain the information in it. For this type of reading, experts recommend a process called active reading, which involves highlighting, outlining and taking notes on the text.

Researchers at the Georgia Institute of Technology have developed software that facilitates an innovative approach to active reading. Taking advantage of touchscreen tablet computers, the LiquidText software enables active readers to interact with documents using finger motions. LiquidText can significantly enhance the experiences of active readers, a group that includes students, lawyers, managers, corporate strategists and researchers.

"Most computer-based active reading software seeks to replicate the experience of paper, but paper has limitations, being in many ways inflexible," said Georgia Tech graduate student Craig Tashman. "LiquidText offers readers a fluid-like representation of text so that users can restructure, revisualize and rearrange content to suit their needs."

LiquidText was developed by Tashman and Keith Edwards, an associate professor in the Georgia Tech School of Interactive Computing. The software can run on any Windows 7 touchscreen computer.

Details on LiquidText were presented in May 2011 at the Association for Computing Machinery's annual Conference on Human Factors in Computing Systems (CHI) in Vancouver, Canada. Development of LiquidText was supported by the National Science Foundation, Steelcase, Samsung and Dell.

Active reading demands more of the reading medium than simply advancing pages, Edwards noted. Active readers may need to create and find a variety of highlights and comments, and move rapidly among multiple sections of a document.

"With paper, it can be difficult to view disconnected parts of a document in parallel, annotation can be constraining, and its linear nature gives readers little flexibility for creating their own navigational structures," said Edwards.

LiquidText provides flexible control of the visual arrangement of content, including both original text and annotations. To do this, the software uses a number of common fingertip gestures on the touchscreen and introduces several novel gestures. For example, to view two areas of a document at once, the user can pinch an area of text and collapse it.
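LiquidText's implementation is not public, but the pinch-to-collapse behavior described above can be illustrated with a small sketch (the function name and structure here are purely hypothetical): folding a pinched range of a document so the regions on either side can be read together.

```python
def collapsed_view(lines, start, end, marker="[...]"):
    """Return a view of the document with lines[start:end] folded
    into a single collapse marker, so the text before and after
    the pinched region can be read side by side."""
    return lines[:start] + [marker] + lines[end:]

# Pinching the middle of a five-section document:
sections = ["Intro", "Methods", "Data", "Results", "Discussion"]
view = collapsed_view(sections, 1, 4)  # ["Intro", "[...]", "Discussion"]
```

Because the collapsed region is only hidden in the view, not removed from the document, releasing the pinch can restore the original text.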

Active reading involves annotation, content extraction and fast, fluid navigation among multiple portions of a document. To accomplish these tasks, LiquidText integrates a traditional document reading space with a dedicated workspace area where the user can organize excerpts and annotations of a text -- without losing the links back to their sources. In these spaces, the user can perform many actions, including:

  • Highlight text
  • Comment about text
  • Extract text
  • Collapse text
  • Bookmark text
  • Magnify text

For commenting, LiquidText breaks away from the traditional one-to-one mapping between content and comments. Comment objects can refer to any number of pieces of content across a document, or even multiple documents. Comments can be pulled off, rearranged and grouped with other items while maintaining a persistent link back to the content they refer to. To add a comment, users simply select the desired text and begin typing.
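This many-to-many relationship between comments and content can be sketched in a few lines of Python (names such as `Anchor` and `Comment` are illustrative assumptions, not LiquidText's actual API):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Anchor:
    """A span of source content: a document id plus character offsets."""
    doc_id: str
    start: int
    end: int

@dataclass
class Comment:
    """A comment object that can refer to any number of content spans,
    across one or many documents."""
    text: str
    anchors: list = field(default_factory=list)

    def link(self, anchor: Anchor) -> None:
        # Record a persistent link back to the referenced content.
        self.anchors.append(anchor)

    def documents(self) -> set:
        # A single comment may span multiple documents.
        return {a.doc_id for a in self.anchors}

note = Comment("Compare these two passages")
note.link(Anchor("paper.pdf", 120, 180))
note.link(Anchor("notes.pdf", 40, 95))
```

Because each anchor stores a document id and offsets rather than a position on a page, the comment can be pulled off, rearranged and grouped freely in the workspace while its links remain resolvable.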

Content can also be copied and extracted using LiquidText. Once a section of text has been selected, the user creates an excerpt simply by dragging the selection into the workspace until it "snaps off" of the document. The original content remains in the document, although it is tinted slightly to indicate that an excerpt has been made from it. Excerpts can be freely laid out in the workspace area or be attached to one another or to documents to form groups, while each excerpt can also be traced back to its source.
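As a sketch of how an excerpt might carry its provenance (again hypothetical, since LiquidText's internals are not published), snapping off a selection could copy the text while recording the source range so the original can be rendered tinted:

```python
from dataclasses import dataclass

@dataclass
class Excerpt:
    """An extracted passage that remembers where it came from."""
    doc_id: str
    start: int
    end: int
    text: str

def snap_off(doc_id, doc_text, start, end, tinted_ranges):
    """Copy a selection into the workspace: the source document keeps
    its text, but the range is recorded so it can be shown tinted,
    and the excerpt retains a back-link to its origin."""
    tinted_ranges.append((start, end))
    return Excerpt(doc_id, start, end, doc_text[start:end])

tints = []
ex = snap_off("paper.pdf", "Active reading demands more of the medium.",
              0, 14, tints)
```

The excerpt and the tint record are kept separate here so that many excerpts can be taken from one passage, and each still traces back to its source.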

"The problem with paper and some software programs is that the comments must generally fit in the space of a small margin and can only be linked to a single page of text at a time," said Tashman. "LiquidText's more flexible notion of comments and large workspace area provide space for organizing and manipulating any comments or document excerpts the user may have created."

In addition to traditional zooming and panning, the user can create a magnifying glass in the workspace by tapping with three fingers. The magnifying glass zooms in on select areas while allowing the user to maintain an awareness of the workspace as a whole. Users can manipulate the magnifying glass with simple multi-touch gestures, such as pinching or stretching to resize the lens, or rotating to change the zoom level -- like the zoom lens of a camera. Users can position, resize and control the zoom level of the magnifying glasses in a continuous motion by movements of the hand alone.
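The camera-lens analogy suggests a simple mapping from rotation angle to zoom factor. One plausible scheme (an assumption for illustration, not LiquidText's documented behavior) doubles the zoom with each quarter turn:

```python
def zoom_from_rotation(angle_deg, base_zoom=1.0, degrees_per_doubling=90.0):
    """Map a lens-rotation gesture to a zoom factor: rotating the
    magnifier a quarter turn doubles the zoom, and rotating the
    other way halves it, like turning a camera's zoom ring."""
    return base_zoom * 2 ** (angle_deg / degrees_per_doubling)

zoom_from_rotation(90)    # 2.0 — one quarter turn doubles the zoom
zoom_from_rotation(-90)   # 0.5 — the reverse quarter turn halves it
```

An exponential mapping like this keeps the gesture feeling uniform: every equal increment of rotation multiplies the zoom by the same factor.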

The ability to move within a document, search for text, turn a page, or flip between locations to compare parts of a text is also important for active reading. To complete these actions, LiquidText allows users to collapse text, dog-ear text and create magnified views of text.

"In contrast to traditional document viewing software, in which users must create separate panes and scroll them individually, LiquidText's functionality lets a user view two or more document areas with just one action, parallelizing an otherwise serial task," explained Edwards.

Since developing their initial prototype, the researchers have refined the software based on feedback from designers, human factors professionals and active readers, including managers, lawyers, students and strategists.

Tashman is currently working with Georgia Tech's Enterprise Innovation Institute to form a startup company to commercialize the technology. The $15,000 Georgia Tech Edison Prize he won, along with $43,000 in grants from the Georgia Research Alliance, will help launch the new company, which plans to introduce LiquidText to the public later this year.

The Georgia Tech Edison Prize was established to encourage formation of startup companies based on technology developed at Georgia Tech, and was made possible by a multi-year grant from the Charles A. Edison Fund, named for the inventor's son. Presentation of the prize, the second to be awarded from the Fund, was part of the Georgia Tech Graduate Research and Innovation Conference held Feb. 8, 2011.

This project is supported in part by the National Science Foundation (Award No. IIS-0705569). The content is solely the responsibility of the principal investigator and does not necessarily represent the official views of the NSF.

Research News & Publications Office
Georgia Institute of Technology
75 Fifth Street, N.W., Suite 314
Atlanta, Georgia 30308 USA

Media Relations Contacts: Abby Robinson (abby@innovate.gatech.edu; 404-385-3364) or John Toon (jtoon@gatech.edu; 404-894-6986)

Writer: Abby Robinson
