Research


My research focuses on the intersection of human-computer interaction and multimedia information retrieval to explore experiential media systems: “systems that integrate computing and digital media with the physical and social experience.”* I am particularly interested in the use, application, and impact of these complex yet novel personal and social multimedia archives and applications.

* See: Rikakis, Kelliher, and Lehrer, “Experiential Media and Digital Culture,” IEEE Computer, 46(1), pp. 46–54.


Research Interests

Primary research domain and areas of focus

  1. Primary: Human-Computer Interaction (HCI); Experiential Multimedia
  2. Areas: Experience Capture, Composition and Reuse; Social and Reflective Multimedia Systems; Multimedia Retrieval; Digital Narrative and Storytelling; Interactivity and Media Art; Interdisciplinary Art-Science Integration; Process-oriented Design.

Research Areas

Current and past areas of research activity

Experience Capture, Composition & Reuse


How do we capture and represent personal experience through media collections and computational tools? How can these long-term personal archives, e.g. lifelogs and social media, enable new insights and understanding of personal, societal, or global significance?


Working with John Sadauskas, we are exploring the use of lifelogging approaches within K-12 education to inspire and support students in developing creative writing outcomes. Preliminary investigation included user-centered inquiry through interviews, focus groups, workshops, and participatory design sessions with both teachers and students, yielding design guidelines and requirements for a computational support tool. The resulting online environment aids students in generating personally relevant writing topic ideas. The tool is currently being evaluated with 90 K-12 students through writing-performance-based evaluation, embedded in real-world curricular contexts.


Working with Dr. Aisling Kelliher, we identified an opportunity to develop new capture and presentation tools suited to recording process within situated and shared events. This documentation framework was inspired by the lifelogging vision and deployed within the context of the Emerge event at ASU in March 2012. This highly dynamic event precipitated the need for novel documentation approaches. Our conversational framework was developed by co-opting techniques from Bill Gaver’s cultural probes, social networking strategies, and participatory documentation methodologies. It mixed elements of traditional recording apparatus (videography, photography, and audio recording), social media contribution, and novel capture technologies (wearable passive capture, time-lapse video, self-documentation, and experience capture installations) to more fully describe the event. This mixed-initiative approach to the documentation of shared experience crafted a highly detailed record and afforded a unique lens on the events, workshops, and experiences of attendees. This remains an active research area and has led to the development of an online participatory community for the dissemination of, and continued discourse around, this coordinated account of event content.


Noting that highly experiential 'life-sampling' methods often do not integrate first-person, qualitative, or reflective accounts, we developed a novel recording apparatus, the Probotron: an experience-recording installation that can be deployed during situated events. We use this situated installation as an explorative means to investigate the potential of integrating reflective, subjective accounts into the highly chronological records found in a lifelog collection. It has gathered 624 responses across 3 distinct deployments. It explores the experience of experience capture and asks what a first-person account can contribute to data-centered records. A reworked version for distributed and personal use is planned.


The potential for lifelog collections to move beyond the scope of the individual and offer opportunities for discovery to motivated third parties is an exciting avenue for research, and one I have investigated in collaboration with Dr. Aisling Kelliher. We were interested in discovering how, and if, third parties could infer meaning and purpose from a large-scale lifelog collection. We evaluated the examination of lifelog data and the construction of storied interpretations from this content in a provocative learning environment where the overall focus of study was precisely this type of considered, mediated activity. A single 9-month dataset was distributed to over 100 participants within a pedagogical context. The participants were equipped with the skills necessary to conduct the evaluation through a semester-long 'Media-Editing' module, in which the evaluation was situated as the final assessable component. This enabled exploration of the affordances, constraints, and considerations involved in presenting such voluminous, rich, and detailed personal archives to third parties. We identified a variety of reflective strategies and approaches that could benefit from additional computational support, including data discovery, ethical considerations, and creative opportunities.


An important challenge, especially in multimodal lifelog collections, is identifying the right content to present to the user in response to their information need. Beyond providing access to these large-scale collections, tools must enable and support the user in way-finding, interpreting, and constructing meaning from these archives so that they might gain value from them. In response, I created Orison, a custom storytelling tool for lifelog data. The tool supports the exploration of lifelog content and its arrangement into a story-based layout. The requirements for representation, sense-making, and storytelling were based on investigations into practices surrounding media use, management, and composition within three contexts: family practice, genealogical investigation, and hobbyist scrapbooking. Studies with the scrapbooking community most heavily influenced the tool, and the method of digital storytelling it enables closely resembles that observed in these studies: album-based two-dimensional layouts. Additionally, the tool’s workflow was intended to parallel observed patterns and creative practices, albeit in a digital environment. While the tool enables wholly manual and effortful construction of storied interpretations, given the size, scope, and complexity of the archives employed, computational generation of narrative accounts was also developed for the system. Fully automatic (computationally controlled), semi-automatic (computationally guided), and manual variations of the story-generation functionality were evaluated with three collection owners who had amassed lifelog collections over a 2.5-year period. The outcomes support and confirm prior work by Kevin Brooks suggesting that a semi-automatic approach is favored, as it maintains the authorial control of the user while reducing the effort to prepare a narrative outcome. Additionally, insights and reflections on the utility, use, and opportunities for these storied interpretations of past personal experience were uncovered through qualitative inquiry with the participants.
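
As an illustrative contrast between the three story-generation modes, a toy sketch in Python (the scoring model and user-interaction functions are hypothetical stand-ins, not Orison's actual logic):

    # Toy contrast of the three story-generation modes; `score` stands in
    # for whatever relevance/salience model ranks candidate events.

    def automatic_story(events, score, k):
        # Fully automatic: the system selects the top-k scored events.
        return sorted(events, key=score, reverse=True)[:k]

    def semi_automatic_story(events, score, k, user_accepts):
        # Semi-automatic: the system proposes ranked candidates and the
        # user accepts or rejects each, keeping authorial control while
        # reducing selection effort.
        candidates = sorted(events, key=score, reverse=True)[:2 * k]
        return [e for e in candidates if user_accepts(e)][:k]

    def manual_story(events, user_selects):
        # Manual: the user browses and selects everything themselves.
        return [e for e in events if user_selects(e)]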


Within this work, we explored the applicability of semantic concept detection, a method often used within video retrieval, to the domain of visual lifelogs. Our concept detector models the correspondence between low-level visual features and high-level semantic concepts (such as indoors, outdoors, people, buildings, etc.) using supervised machine learning, and in doing so determines the probability of a concept’s presence. We applied detection of 27 everyday semantic concepts to a lifelog collection composed of 257,518 SenseCam images from 5 users. The results were evaluated on a subset of 95,907 images to determine the detection accuracy for each semantic concept. We conducted further analysis of the temporal consistency, co-occurrence, and relationships within the detected concepts to more extensively investigate the robustness of the detectors within this domain.
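
As a rough sketch of this style of detector, using scikit-learn as a stand-in and assuming pre-extracted visual feature vectors with per-concept binary training labels (the actual features, models, and training pipeline may differ):

    # Minimal sketch: one probabilistic classifier per semantic concept,
    # trained on pre-extracted low-level visual features. Illustrative only.
    from sklearn.svm import SVC

    def train_concept_detectors(features, concept_labels):
        # features: array of shape (n_images, n_dims)
        # concept_labels: dict mapping a concept name (e.g. 'indoors',
        # 'people') to a binary label array of shape (n_images,)
        detectors = {}
        for concept, labels in concept_labels.items():
            clf = SVC(probability=True)  # enables probability estimates
            clf.fit(features, labels)
            detectors[concept] = clf
        return detectors

    def detect_concepts(detectors, features):
        # Returns, per concept, the probability of its presence per image.
        return {concept: clf.predict_proba(features)[:, 1]
                for concept, clf in detectors.items()}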


Beyond the challenge of employing lifelogging devices in research investigations, the collections themselves often have unique properties owing to the nature and affordances of the capture device. While they offer unique and engaging perspectives on past events, they present complexities when analyzing and utilizing the content. We conducted assessments of the nature and composition of long-term multimodal collections to explore the challenges, opportunities, and features of these novel collections, and to develop considerations for their use in longitudinal research.



Participatory Communities, Curation & Coordination


How can an understanding of complex heterogeneous communities be gathered through community contribution and action? What strategies for digital curation and coordination help identify shared value and meaning for diverse interdisciplinary communities?


XSEAD is an online platform supporting networks of creativity and innovation across science, engineering, art and design. The platform combines key characteristics of social media networking applications to help incentivize participation, strengthen engagement and support dynamic community organization. Research centers on community understanding and growth, models for digital participatory curation, and multimodal representation of interdisciplinary process and outcome.


This online community-driven platform enables the dissemination and discussion of process-centered representations of futurist methods in action. The goal is to provide a shared repository of content, with novel annotation and curation mechanisms to support the needs of diverse researchers, practitioners, educators and the general public in considering the future. Research explores the representation of process and the development of automatic event-centered algorithmic approaches; the utility of process-centered multimedia to the design fiction community; and the continuation of event-centered discourse through online forums.


Taskville is an engaging, interactive workplace game mediated by social media, in which participants play by completing both work-related and personal tasks using a city-building metaphor; completed tasks are rendered as new buildings in the graphical environment. During 2012-2013, we completed longitudinal evaluations of the tool's utility in workplace coordination and awareness.


The role of trust is critical in establishing collaborative relationships in the volunteer community. We have been exploring trust and the use of social networking applications through a series of interviews with organizations and volunteers. This work will yield design recommendations for creating collaborative, social tools for the volunteer community, working towards the implementation of an online community tool.



Interactivity in Media Arts Applications


What is the nature and contribution of computational interactivity in already richly interactive spaces, such as media-arts performance? How do media-arts perspectives on the development of interactive works enrich human-centered computing exploration?


Working in the context of the rich interactive space of echo::system, a hybrid work of media-arts installation and improvisational performance, we are inquiring into the nature of computer-mediated ‘interactivity’. Engaging interdisciplinary collaborators, this multifaceted performance develops active mediated spaces that explore socio-cultural and ecological aspects of natural/urban biomes. Across a series of design residencies in 2013, we introduced a mixed-method approach of human-centered exploration, active observation, and interview, in combination with documentary capture of design meetings, choreographic development, and outcomes. These methods were designed with the intention of generating new knowledge and design insight for interactive performance through the act of both documenting and making this hybrid interactive performance.


Treadmills are used in echo::system as a virtual-environment navigation system; the installation provides a dynamic, simulated walk through a desert landscape. Through situated evaluation, we are exploring the kinesthetic experience offered by these treadmills. Exploratory observational studies were conducted in 2012 and 2013 in the context of real performances, and a first round of formal evaluations of this novel embodied interface has now been completed with 20 individuals.


With Aisling Kelliher and colleagues from ASU’s School of Sustainability, we explored the potential of exhibition as a forum for translational research where complex academic concepts require representation in accessible formats, a platform for participatory involvement in continuing discourse, and an investigative framework for hybrid interactive experiences. Here, members of the public were involved as co-researchers in discussions about the future using speculative design methods (e.g. writing letters to their future selves, clay-modeling an archaeological object from the future). An emphasis was placed on the exhibition as an open contribution space where visitors could add elements for curation and display. Results from this experience provided insights and implications for exhibition interaction design and for the value of an exhibition-as-research approach.



Multimedia Search and Summarization


What are the respective affordances of computers and humans in satisfying information needs and in facilitating access to large-scale multimedia archives?


I developed a mobile framework for landmark image recognition and classification with three major architectural components. The landmark image recognition and classification engine resides on the server, along with a series of web services designed to expose its functionality to a mobile application running on a compatible mobile device (in this case, an iPhone). The efficacy of the mobile framework in end-to-end image classification and annotation was evaluated through automatic and user-centered evaluations.
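
As a hypothetical sketch of the client-to-service exchange (the endpoint URL and response fields below are illustrative placeholders, not the actual API):

    # Illustrative client-side sketch of the capture-upload-annotate flow.
    # The endpoint URL and response fields are hypothetical placeholders.
    import requests

    SERVER = "https://example.org/landmark-api"  # hypothetical endpoint

    def classify_landmark(image_path):
        # Upload a photo and return the server's recognition result,
        # e.g. {"landmark": "...", "confidence": 0.93}
        with open(image_path, "rb") as f:
            resp = requests.post(SERVER + "/classify", files={"image": f})
        resp.raise_for_status()
        return resp.json()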


As part of DCU's TRECVid Interactive Search activities, I developed an interactive search system that integrated with the K-Space video search engine to enable a user to query, retrieve, and select shots relevant to a topic of interest. The traditional approach to presenting video search results is to maximize recall by offering a user as many potentially relevant shots as possible within a limited amount of time. 'Context'-oriented systems instead allocate a portion of the results presentation space to providing additional contextual cues about the returned results; in video retrieval these cues often include temporal information such as a shot's location within the overall video broadcast and/or its neighboring shots. We developed two interfaces with identical retrieval functionality in order to measure the effects of such context on user performance. The first system had a 'recall-oriented' interface, where results from a query were presented as a ranked list of shots. The second was 'context-oriented', with results presented as a ranked list of broadcasts. In the 2007 TRECVid evaluation, 10 users participated in the experiments: 8 novices and 2 experts. Participants completed a number of retrieval topics using both the recall-oriented and context-oriented systems. In 2008, we completed a multi-site, multi-interface experiment with three participating institutes and 36 users: 12 each from Dublin City University (DCU, Ireland), the University of Glasgow (GU, Scotland), and Centrum Wiskunde & Informatica (CWI, the Netherlands).
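
As a simplified sketch of the difference between the two presentation styles, assuming a ranked shot list in which each shot records its parent broadcast (the data structures are illustrative):

    # Sketch: same ranked results, two presentations. Shots are dicts
    # with at least a 'broadcast_id' key; best-ranked shots come first.
    from collections import OrderedDict

    def recall_oriented(ranked_shots):
        # Recall-oriented view: the ranked shot list, as-is.
        return ranked_shots

    def context_oriented(ranked_shots):
        # Context-oriented view: group shots under their parent broadcast,
        # with broadcasts ordered by their best-ranked shot.
        broadcasts = OrderedDict()
        for shot in ranked_shots:
            broadcasts.setdefault(shot["broadcast_id"], []).append(shot)
        return list(broadcasts.items())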


As part of the TRECVid Summarization effort, I developed two solutions for summarizing BBC Rushes content. Rushes are the raw material (extra video, B-roll footage) used to produce a video; 20 to 40 times as much material may be shot as actually becomes part of the finished product. Within the TRECVid summarization task, given a video from the rushes test collection, the goal is to automatically create a summary clip less than or equal to 2% of the original video's duration. The research goal was to balance the brevity of the summary with overall coherence and salience; both the summarization approach and the visual composition and arrangement of the on-screen elements were informed by human-centered concepts.
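
A toy sketch of the core selection problem, greedily picking salient shots under the 2% duration budget (the salience scores and shot structure are placeholders, not the submitted systems):

    # Greedy selection of shots under a duration budget. Each shot is a
    # dict with 'start', 'duration', and a precomputed 'salience' score;
    # redundancy handling and scoring are omitted for brevity.

    def summarize(shots, total_duration, budget_ratio=0.02):
        budget = total_duration * budget_ratio
        selected, used = [], 0.0
        for shot in sorted(shots, key=lambda s: s["salience"], reverse=True):
            if used + shot["duration"] <= budget:
                selected.append(shot)
                used += shot["duration"]
        # Restore temporal order to preserve narrative coherence.
        selected.sort(key=lambda s: s["start"])
        return selected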


iBingo is a collaborative video search system for mobile devices. It supports division of labour among users by distributing search results across co-located iPod Touch devices.
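
A minimal sketch of one plausible division-of-labour strategy, round-robin partitioning of a ranked result list across co-located devices (the actual iBingo allocation logic may differ):

    def partition_results(ranked_results, num_devices):
        # Round-robin assignment so each device reviews a disjoint,
        # comparably ranked slice of the results.
        assignments = [[] for _ in range(num_devices)]
        for rank, item in enumerate(ranked_results):
            assignments[rank % num_devices].append(item)
        return assignments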


Research Methods and Approaches

A short overview of the methods typically used in my research and explorations

  1. Understanding: Contextual Inquiry, Interview, Ethnography & Observation;
  2. Design: Participatory Design, Focus Groups, Conceptual Design, Prototyping;
  3. Process: Development of research demonstrator systems as a mode of process-oriented critical inquiry into experiential multimedia;
  4. Multimedia: System and Algorithm Development;
  5. Evaluation: Applied exploration in situated, real-world contexts; mixed-methods qualitative exploration combined with quantitative multimedia measures (precision and recall, defined below).
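
For reference, the standard definitions of the two retrieval measures named above, in LaTeX notation:

    \[
    \text{Precision} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{retrieved}\,|}
    \qquad
    \text{Recall} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{relevant}\,|}
    \]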